Planet Thunderbird

September 01, 2017

Mark Banner

New Thunderbird Conversations released (with support for 52)!

We’ve just released a new Thunderbird Conversations (previously known as Gmail Conversation View) with full support for Thunderbird 52. We’re sorry for the delay, but the good news is that it should now work fine.

I’d like to thank Jonathan for letting me help out with the release process, and all those who contributed to the release or filed issues.

If you find an issue, please submit it at our support site.

The add-on should work with the current Thunderbird Beta versions (56), but won’t currently work in Daily (57) due to some compatibility issues. We’re hoping to get those resolved in the next week or so.

If you want to help out with future releases, then find the source code here and come and help us with supporting users or fixing issues.

September 01, 2017 06:35 AM

August 24, 2017

Mike Conley

Photon Engineering Newsletter #14

Just like jaws did last week, I’m taking over for dolske this week to talk about stuff going on with Photon Engineering. So sit back, strap in, and absorb Photon Engineering Newsletter #14!

If you’ve got the release calendar at hand, you’ll note that Nightly 57 merges to Beta on September 20th. Given that there’s usually a soft-freeze before the merge, this means that there are fewer than 4 weeks remaining for Photon development. That’s right – in less than a month’s time, folks on the Beta channel who might not be following Nightly development are going to get their first Photon experience. That’ll be pretty exciting!

So with the clock winding down, the Photon team has started to shift more towards polish and bug-fixing. At this point, all of the major changes should have landed, and now we need to buff the code to a sparkling sheen.

The first thing you may have noticed is that, after a solid run of dogefox, the icon has shifted again:

The new Nightly icon

We now return you to your regularly scheduled programming

The second big change is our new 60fps1 loading throbbers in the tabs, coming straight to you from the Photon Animations team!

The new loading throbber in Nightly

I think it’s fair to say that Photon Animations are giving Firefox a turbo boost!

Other recent changes

Menus and structure

Animations

Preferences

Visual redesign

Onboarding

Performance


  1. The screen capturing software I used here is only capturing at 30fps, so it’s really not doing it justice. This tweet might capture it better. 

August 24, 2017 05:04 PM

August 22, 2017

Joshua Cranmer

A review of the solar eclipse

On Monday, I, along with several million other people, decided to view the Great American Eclipse. Since I presently live in Urbana, IL, that meant getting in my car and driving down I-57 towards Carbondale. This route is also what people from Chicago or Milwaukee would have taken, which means traffic was heavy. I ended up leaving around 5:45 AM, which put me among the last clutch of people leaving.

Our original destination was Goreville, IL (specifically, Ferne Clyffe State Park), but some people who arrived earlier got dissatisfied with the predicted cloudy forecast, so we moved the destination out to Cerulean, KY, which meant I ended up arriving around 11:00 AM, not much time before the partial eclipse started.

Partial eclipses are neat, but they're very much a see-them-once affair. When the moon first started across the sun, you get a flurry of activity as everyone puts on the glasses, sees it, and then retreats back into the shade (it was 90°F, not at all comfortable in the sun). Then the temperature starts to drop—is that the eclipse, or this breeze that started up? As more and more gets covered, it starts to dim: I had the impression that a cloud had just passed in front of the sun, and I wanted to turn and look at that non-existent cloud. And as the sun really gets covered, the trees start acting as pinhole cameras and the shadows take on a distinctive scalloped pattern.

A total eclipse though? Completely different. The immediate reaction of everyone in the group was to start planning to see the 2024 eclipse. For those of us who spent 10, 15, 20 hours trying to see 2-3 minutes of glory, the sentiment was not only that it was time well spent, but that it was worth doing again. If you missed the 2017 eclipse and are able to see the 2024 eclipse, I urge you to do so. Words and pictures simply do not do it justice.

What is the eclipse like? In the last seconds of partiality, everyone has their eyes, eclipse glasses on of course, staring at the sun. The thin crescent looks first like a side picture of an eyeball. As the time ticks by, the tendrils of orange slowly diminish until nothing can be seen—totality. Cries come out that it's safe to take the glasses off, but everyone is ripping them off anyways. Out come the camera phones, trying to capture that captivating image. That not-quite-perfect disk of black, floating in a sea of bright white wisps of the corona, not so much a circle as a stretched oval. For those who were quick enough, the Baily's beads can be seen. The photos, of course, are crap: the corona is still bright enough to blot out the dark disk of the moon.

Then, our attention is drawn away from the sun. It's cold. It's suddenly cold; the last moment of totality makes a huge difference. Probably something like 20°F off the normal high in that moment? Of course, it's dark. Not midnight, all-you-see-are-stars dark; it's more like a dusk dark. But unlike normal dusk, you can see the fringes of daylight in all directions. You can see some stars (or maybe that's just Venus; astronomy is not my strong suit), and of course a few planes are in the sky. One of them is just a moving, blinking light in the distance; another (chasing the eclipse?) is clearly visible with its contrail. And the silence. You don't notice the usual cacophony of sounds most of the time, but when everyone shushes for a moment, you hear the deafening silence of insects, of birds, of everything.

Naturally, we all point back to the total eclipse and stare at it for most of the short time. Everything else is just a distraction, after all. How long do we have? A minute. Still more time for staring. A running commentary on everything I've mentioned, all while that neck is craned skyward and away from the people you're talking to. When is it no longer safe to keep looking? Is it still safe—no orange in the eclipse glasses, should still be fine. How long do we need to look at the sun to damage our eyes? Have we done that already? Are the glasses themselves safe? As the moon moves off the sun, hold that stare until that last possible moment, catch the return of the Baily's beads. A bright spark of sun, the photosphere is made visible again, and then clamp the eyes shut as hard as possible while you fumble the glasses back on to confirm that orange is once again visible.

Finally, the rush out of town. There's a reason why everyone leaves after totality is over. Partial eclipses really aren't worth seeing twice, and we just saw one not five minutes ago. It's just the same thing in reverse. (And it's nice to get back in the car before the temperature gets warm again; my dark grey car was quite cool to the touch despite sitting in the sun for 2½ hours). Forget trying to beat the traffic; you've got a 5-hour drive ahead of you anyways, and the traffic is going to keep pouring onto the roads over the next several hours (10 hours later, as I write this, the traffic is still bad on the eclipse exit routes). If you want to avoid it, you have to plan your route away from it instead.

I ended up using this route to get back, taking 5 hours 41 minutes and 51 seconds including a refueling stop and a bathroom break. So I don't know how bad I-57 was (I did hear there was a crash on I-57 pretty much just before I got on the road, but I didn't know that at the time), although I did see that I-69 was completely stopped when I crossed it. There were small slowdowns on the major Illinois state roads every time there was a stop sign, which could have been mitigated by stationing police cars at those intersections and effectively signalizing them temporarily, but other than that, my trip home was free-flowing at the speed limit the entire route.

Some things I've learned:

August 22, 2017 04:59 AM

August 19, 2017

Robert Kaiser

Celebrating LCARS With One Last Theme Release

30 years ago, a lot of people were wondering what the new Star Trek: The Next Generation series would bring when it debuted in September 1987. The principal cast had been announced, as had a new Enterprise, and even the pilot's title was known, but - as always with a new production - a lot of questions were open, just like today in 2017 with Star Trek: Discovery, which is set to debut in September, almost to the day on the 30th anniversary of The Next Generation.

Given that the story was set 100 years after the original and what was considered "futuristic" had changed significantly between the late 1960s and 1980s, the design language had to be significantly updated, including the labels and screens on the new Enterprise. Scenic art supervisor and technical consultant Michael Okuda, who had done starship computer displays for The Voyage Home, was hired to do those for the new series, and was instructed by series creator and show runner Gene Roddenberry that this futuristic ship should have "simple and clean" screens and not much animation (the latter probably also due to budget and technology constraints - the "screens" were built out of colored plexiglass with lights behind them).



With that, Okuda created a look that became known as "LCARS" (for Library Computer Access and Retrieval System, which actually was the computer system's name). Instead of the huge gray panels with big brightly-colored physical buttons in the original series, The Next Generation had touch-screen panels with a dark background and flat-style buttons in pastel color tones. The flat design, including the fonts and frames, is very similar to quite a few designs we see on touch-friendly mobile apps 30 years later. Touch screens (and even cell phones and tablets) were pretty much unheard of and "future talk" when Mike Okuda created those designs, but he came to pretty similar design conclusions as those who design UIs for modern touch-screen devices (which is pretty awesome when you think of it).

I was always fascinated with that style of UI design even on non-touch displays (and am even more so now that I'm using touch screens daily). So 18 years ago, when I did my first experiments with Mozilla's new browser-mail all-in-one package and realized that the UI was displayed with the same rendering engine and the same or very similar technologies as websites, I immediately did some CSS changes to see if I could apply LCARS-like styling to this software - and awesomeness ensued when I found out that it worked!

Image No. 23114

Over the years, I created a full LCARStrek theme from those experiments (the first release, 0.1, was for Mozilla suite nightlies in late 2000), adapted it to Firefox (starting with LCARStrek 2.1 for Firefox 4), refined it, and even made it work with large Firefox redesigns. But as you may have heard, huge changes are coming to Firefox add-ons, and full-blown themes in the manner of LCARStrek cannot be done in the new world as it stands right now, so I'm forced to stop developing this theme.

Image No. 23308

Given that LCARS has a huge anniversary this year, I want to end my work on this theme on a high note instead of a sad one, so right alongside the very awesome Star Trek Las Vegas convention, which of course just celebrated 30 years of The Next Generation, I'm doing one last LCARStrek release this weekend, with special thanks to Mike Okuda, whose great designs made this theme possible in the first place (picture taken by myself at that convention just two weeks ago, where he was talking about the backlit LCARS panels that were dubbed "Okudagrams" by other crew members):
Image No. 23314

Live long and prosper!

August 19, 2017 10:21 PM

Lantea Maps: GPS Track Upload to OpenStreetMap Broken

During my holidays, when I was using Lantea Maps daily to record my GPS tracks, I suddenly found out one day that upload of the tracks to OpenStreetMap was broken.

I had added that functionality so that people (including myself) could get their GPS tracks out of their mobile devices and into a place from which they can download them anywhere. A bonus was that the tracks were available to the OpenStreetMap project as guides to improve the maps.

After I had wasted about EUR 50 in data roaming costs to verify that it was not only broken on hotel networks but also on my usual mobile network, I tried on a desktop Nightly and used the Firefox devtools to find out the actual error message, which turned out to be a CORS issue. I filed a GitHub issue, but apparently it was an intentional change: OpenStreetMap no longer supports GPS track uploads in a way that is simple for pure web apps, and doesn't want to re-add support for that. Find more details in the GitHub issue.

Because of that, I think that this will mark the end of uploading tracks from Lantea Maps to OpenStreetMap. When I have time, I will probably add a GPS track store on my server instead, where third-party changes can't break stuff while I'm on vacation. If any Lantea Maps user wants their tracks on OpenStreetMap in the future, they'll need to manually upload the tracks themselves.

August 19, 2017 02:49 PM

July 20, 2017

Calendar

There is a lot to see — Convert XUL to HTML

This is a repost from medium, where Arshad originally wrote the blog post.

 

In the previous blog post, I talked mostly about the development environment setup; this one will be about the React dialog development.

Since then I have been working on converting some more dialogs into React. So far I have converted three dialogs — the calendar properties dialog, the calendar alarm dialog and the print dialog — into their React equivalents. The calendar alarm dialog and the print dialog still need some work on state logic, but that won’t take much time. Here are some screenshots of these dialogs.

calendar-properties-dialog

print-dialog

calendar-alarm-dialog

 

While making the React equivalents, I found out that XUL depends heavily upon attributes and their values. HTML doesn’t work with attributes and their values in the same way XUL does: HTML allows attribute minimization, and with React there are some further difficulties related to attributes. React ignores all non-standard HTML attributes, so to add those attributes I have to set them explicitly using the setAttribute method on the element once it has mounted. Here is a short snippet of code which shows how I am adding custom HTML attributes and updating them in React.

class CalendarAlarmWidget extends React.Component {
  componentDidMount() {
    this.addAttributes(this.props);
  }

  componentWillReceiveProps(nextProps) {
    // need to call removeAttributes first
    // so that previous render attributes are removed

    this.removeAttributes();
    this.addAttributes(nextProps);
  }

  addAttributes(props) {
    // add attributes here
  }

  removeAttributes() {
    // remove attributes here
  }
}
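
To make the pattern above concrete, here is a rough sketch with the placeholder methods filled in. The wrapper ref, the xulAttributes prop and the component name are invented for this example (they are not from the actual dialog code), and React.createElement is used so the snippet runs without a JSX build step:

// Sketch only: assumes React is available (e.g. via a script tag) and that
// props.xulAttributes is a plain object mapping attribute names to values.
class AttributeBridgeWidget extends React.Component {
  addAttributes(props) {
    for (let [name, value] of Object.entries(props.xulAttributes || {})) {
      this.wrapper.setAttribute(name, value);
    }
  }

  removeAttributes() {
    // Called before new props are applied, so this.props still holds the
    // attributes from the previous render.
    for (let name of Object.keys(this.props.xulAttributes || {})) {
      this.wrapper.removeAttribute(name);
    }
  }

  componentDidMount() {
    this.addAttributes(this.props);
  }

  componentWillReceiveProps(nextProps) {
    this.removeAttributes();
    this.addAttributes(nextProps);
  }

  render() {
    // Capture the underlying DOM node so setAttribute can be called on it.
    return React.createElement("div", { ref: node => { this.wrapper = node; } });
  }
}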

XUL also has a dialog element, which is used instead of window for dialog boxes. I have made its React equivalent too, and it has nearly all the attributes and functionality that the XUL dialog element has. Since XUL positions elements with a slightly different layout technique than HTML, I have dropped some of the layout-specific attributes. With the power of modern CSS it is quite easy to create the layout, so instead of controlling layout using attributes I am depending more upon CSS to do these things. Some methods like centerWindowOnScreen and moveToAlertPosition depend on the parent XUL wrapper, so I have also dropped them from the React equivalent.

There are some XUL elements whose HTML equivalents are not available, and for some XUL elements the HTML equivalents don’t have the same structure, so their appearance differs considerably. One perfect example is menulist, whose HTML equivalent is select. Unlike menulist, whose direct child is a menupopup that wraps all the menuitem elements, the select element wraps its options directly, so the UI of select can’t be made exactly like that of menulist. option elements are also not customizable the way menuitem is, and they don’t support much styling. While it is helpful to have React components that behave similarly to their XUL counterparts, in the end only HTML will remain, so it is unavoidable that some features not useful for the new components will be dropped.

I have made some custom React components to provide all the features that the existing dialogs provide, although in some places I am still using the HTML select element instead of the custom menulist component, because using JavaScript and extra CSS just to make the element look like its XUL equivalent is not worth it.

As each platform has its own specific look, there are naturally differences in CSS rules. I have organized the files in a way that makes it easy to write rules common to all platforms but also to add per-OS differences. A lot of the UI differences are handled automatically through -moz-appearance rules, which instruct the Mozilla platform to use OS styling to render the elements. The web app will automatically detect your OS, so you can see how the dialog will look on different platforms.

I thought it would be great to get quick suggestions and feedback on the UI of the dialogs from the community, so I have added a comment section on each dialog page. I will be adding more cool features to the web app that can possibly help in making progress on this project.

Thanks to BrowserStack for providing free OSS plans; now I can quickly check how my dialogs look on Windows and Mac.

Thanks to yulia [IRC nickname] for finding time to discuss the React implementation of the dialog; I hope to have more React discussions in the future :)

Feel free to check out the dialogs on the web app and comment if you have any questions.


July 20, 2017 09:18 AM

June 13, 2017

Calendar

First Steps  —  Convert XUL to HTML

This is a repost from medium, where Arshad originally wrote the blog post.

 

This summer I am working on a Thunderbird project, Convert XUL to HTML, as a Google Summer of Code 2017 student. I am really excited and thrilled to start my journey at Mozilla. I will be working on the Mozilla Calendar add-on for Thunderbird, aka Lightning. The goal of this project is to convert XUL dialog boxes into their React versions.

Project Abstract:

Lightning has traditionally been using XUL for its user interface. To modernize, we would like to convert dialogs, tab content and other parts of the user interface to HTML. The new components should use web standards as much as possible, avoiding extensive use of third party libraries.

The second week of the coding period is about to end, and there is a lot to tell about the progress of the Convert XUL to HTML project. I was able to set up a balanced development environment and convert a dialog into React. Things are going well so far, as the time invested in setting up the development environment is bringing results.

I will start by telling a bit about the challenges I faced and later a bit about the solutions I worked out. Since Thunderbird doesn’t have any extra build step, it was very clear from the start that anything needing an extra build/compile step is a no for this project. That means I have to compromise on awesome features like hot-reloading, JSX etc. that are often paired with React. Another minor issue I faced was styling the components of the dialog boxes so that they look exactly like their XUL versions.

At first, I thought of importing react and react-dom via script tags and writing code without JSX in vanilla JS, but later I thought: why not automate this away? I set up Babel with the React preset and wrote a few lines of code to make a clean npm environment that does all these things. Since running Babel on the source directory only outputs the JS files, I wrote a few Gulp tasks to copy the HTML and CSS files to the compiled JS directory.

It is kind of annoying to copy each file manually, so I opted for Gulp. I also wrote a bash script that removes the Babel scripts and edits the type of the main JavaScript files in the compiled directory’s HTML files. Now there is no extraneous code in the files of the compiled directory (dist).

Using Gulp, I can also live-reload the browser automatically whenever I make changes to the source files. This is not as good as hot-reloading, but it is better than manually hitting the refresh button.
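
As a rough sketch of that kind of setup (the task names, globs, and the choice of gulp-babel and browser-sync are assumptions made for this example, not the project’s actual gulpfile):

// gulpfile.js - illustrative sketch only
const gulp = require("gulp");
const babel = require("gulp-babel");
const browserSync = require("browser-sync").create();

// Run Babel with the React preset over the JS sources.
gulp.task("scripts", function () {
  return gulp.src("src/**/*.js")
    .pipe(babel({ presets: ["react"] }))
    .pipe(gulp.dest("dist"));
});

// Babel only emits JS, so HTML and CSS are copied over separately.
gulp.task("static", function () {
  return gulp.src(["src/**/*.html", "src/**/*.css"])
    .pipe(gulp.dest("dist"));
});

// Serve the compiled directory and reload the browser whenever it changes.
gulp.task("serve", ["scripts", "static"], function () {
  browserSync.init({ server: "dist", files: "dist/**/*" });
  gulp.watch("src/**/*", ["scripts", "static"]);
});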

As a web developer, I never worried about the default styling of the browser, but for this project I have to depend entirely on the Firefox toolkit themes and Thunderbird CSS skins. It started to make sense after a few hours of work, and now I can create exactly the same layout and appearance of elements in React as in the XUL dialog boxes. All thanks go to the Thunderbird developer tools and DXR.

The dialog that my mentor Philipp and I decided to do first was the calendar properties dialog, as it was simple and would help me get a comfortable start. This dialog is now completely done except for a few OS-specific CSS rules, which can be added later after testing the dialog in Thunderbird. Working on this dialog was fun and easy, and I hope that continues.

Anyone can check the progress of the project by either checking out this repository or visiting https://gsoc17-convert-xul-to-html.herokuapp.com. I have also created an iframe testing ground where a user can send and modify the state object of a dialog and open the dialog in an iframe. This page uses the same HTML5 postMessage API for communication between the iframe and the parent as will be used in the Thunderbird dialog boxes, similar to how it already works for the event dialog from a past GSoC project. I am sure the testing ground will save a lot of time in debugging, and it clearly shows what is going on internally within a dialog box. It is like a mini control dashboard for our dialog boxes.
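
In rough terms, the parent page and the dialog iframe exchange messages like in the sketch below. The message shape, element id and helper functions are invented for illustration; the real testing ground defines its own format:

// Parent page: push a state object into the dialog iframe.
const frame = document.getElementById("dialog-frame"); // hypothetical id
frame.contentWindow.postMessage({ command: "load", state: { name: "Home" } }, "*");

// Inside the dialog iframe: listen for state updates and report results back.
// (renderDialog and collectState are hypothetical helpers; real code should
// also check event.origin instead of using the "*" wildcard.)
window.addEventListener("message", function (event) {
  if (event.data && event.data.command === "load") {
    renderDialog(event.data.state);
  }
});
window.parent.postMessage({ command: "accepted", state: collectState() }, "*");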

We haven’t tested the current React dialog box in Thunderbird yet, but after integrating the React versions of the dialog boxes into Thunderbird we will most likely not be using all these tools to generate the code, focusing instead on the minimal tools available in the Mozilla build system. We would like to hear suggestions from the Mozilla devtools folks to see if they have plans for improving tooling support and possibly using JSX, as it is much easier to read than the JavaScript it is converted to.

I am very excited about the next weeks and hope things keep going as well as they have been. Many thanks to my mentor Philipp for his continuous support and to the Mozilla community for answering my questions on IRC. Any advice, suggestions and perhaps encouraging words are always welcome :)

June 13, 2017 05:42 PM

May 09, 2017

Thunderbird Blog

Thunderbird’s Future Home

Summary

The investigations on Thunderbird’s future home have concluded. The Mozilla Foundation has agreed to serve as the legal and fiscal home for the Thunderbird project, but Thunderbird will migrate off Mozilla Corporation infrastructure, separating the operational aspects of the project.

Background

In late 2015 Mitchell Baker started a discussion on the future of Thunderbird, and later blogged about the outcome of that, including this quote:

I’ve seen some characterize this as Mozilla “dropping” Thunderbird. This is not accurate. We are going to disentangle the technical infrastructure. We are going to assist the Thunderbird community. This includes working with organizations that want to invest in Thunderbird, several of which have stepped forward already. Mozilla Foundation will serve as a fiscal sponsor for Thunderbird donations during this time.

To investigate potential new homes for Thunderbird, Mozilla commissioned a report from Simon Phipps, former president of the OSI.

The Last Year’s Investigations

The Phipps report saw three viable choices for the Thunderbird Project’s future home: the Software Freedom Conservancy (SFC), The Document Foundation (TDF) and a new deal at the Mozilla Foundation. An independent “Thunderbird Foundation” alternative was not recommended as a first step but the report said it “may become appropriate in the future for Thunderbird to separate from its new host and become a full independent entity”.

Since then the Thunderbird Council, the governing body for the Thunderbird project, has worked to determine the most appropriate long term financial and organizational home, using the Phipps report as a starting point. Over the past year, the Council has thoroughly discussed the needs of a future Thunderbird team, and focused on investigating the non-Mozilla organizations as a potential future home. Many meetings and conversations were held with organizations such as TDF and SFC to determine their suitability as potential homes, or models to build on.

In parallel, Thunderbird worked to develop a revenue stream, which would be needed regardless of an eventual home. So the Thunderbird Council arranged to collect donations from our users, with the Mozilla Foundation as fiscal sponsor. Many months of donations have developed a strong revenue stream that has given us the confidence to begin moving away from Mozilla-hosted infrastructure, and to hire a staff to support this process. Our infrastructure is moving to thunderbird.net and we’re already running some Thunderbird-only services, like the ISPDB (used for auto configuring users’ email accounts), on our own.

Legally our existence is still under the Mozilla Foundation through their ownership of the trademark, and their control of the update path and websites that we use. This arrangement has been working well from Thunderbird’s point of view. But there are still pain points – build/release, localization, and divergent plans with respect to add-ons, to name a few. These are pain points for both Thunderbird and Firefox, and we obviously want them resolved. However, the Council feels these pain points would not be addressed by moving to TDF or SFC.

Thus, much has changed since 2015 – we were able to establish a financial home at the Mozilla Foundation, we are successfully collecting donations from our users, and the first steps of migrating infrastructure have been taken. We started questioning the usefulness of moving elsewhere, organizationally. While Mozilla wants to be laser-focused on the success of Firefox, in recent discussions it was clear that they continue to have a strong desire to see Thunderbird succeed. In many ways, there is more need for independent and secure email than ever. As long as Thunderbird doesn’t slow down the progress of Firefox, there seem to be no significant obstacles to continued co-existence.

We have come to the conclusion that a move to a non-Mozilla organization would be a major distraction from addressing technical issues and building a strong Thunderbird team. Also, while we hope to be independent from Gecko in the long term, it is in Thunderbird’s interest to remain as close to Mozilla as possible, in the hope that it gives us better access to people who can help us plan for and sort through Gecko-driven incompatibilities.

We’d like to emphasize that all organizations we were in contact with were extremely welcoming and great to work with. The decision we have made should not reflect negatively on these organizations and we would like to thank them for their support during our orientation phase.

What’s Next

The Mozilla Foundation has agreed to continue as Thunderbird’s legal, fiscal and cultural home, with the following provisos:

  1. The Thunderbird Council and the Mozilla Foundation executive team maintain a good working relationship and make decisions in a timely manner.
  2. The Thunderbird Council and the team make meaningful progress in short order on operational and technical independence from Mozilla Corporation.
  3. Either side may give the other six months notice if they wish to discontinue the Mozilla Foundation’s role as the legal and fiscal host of the Thunderbird project.

Mozilla would invoke the third proviso if the first two don’t happen. If that occurred, Thunderbird would be expected to move to another organization over the course of six months.

From an operational perspective, Thunderbird needs to act independently. The Council will be managing all operations and infrastructure required to serve over 25 million users and the community surrounding it. This will require a certain amount of working capital and the ability to make strong decisions. The Mozilla Foundation will work with the Thunderbird Council to ensure that operational decisions can be made without substantial barriers.

If it becomes necessary for operational success, the Thunderbird Council will register a separate legal organization. The new organization would run certain aspects of Thunderbird’s operations, gradually increasing in capacity. Donor funds would be allocated to support the new organization. The relationship with Mozilla would be contractual, for example permission to use the trademark.

A Bright Future

The Thunderbird Council is optimistic about the future. With the organizational question settled, we can focus on the technical challenges ahead. Thunderbird will remain a Gecko-based application at least in the midterm, but many of the technologies Thunderbird relies upon in that platform will one day no longer be supported. The long term plan is to migrate our code to web technologies, but this will take time, staff, and planning. We are looking for highly skilled volunteer developers who can help us with this endeavor, to make sure the world continues to have a high-performance open-source secure email client it can rely upon.

May 09, 2017 10:49 AM

May 07, 2017

Robert Kaiser

Representing Mozilla at Linuxwochen Wien 2017

Linuxwochen ("Linux weeks") is a yearly series of Free & Open Source Software events/conferences in Austrian cities, organized by the respective local FLOSS communities but marketed via a common name and website. They commonly take place spread out over several weekends in April and May, with the largest one, Linuxwochen Wien, in Austria's capital of Vienna, on a Thursday through Saturday in early May. In this year's edition, from May 4-6, the Mozilla community was present there once again (like two years ago) with a booth, talks and a workshop.

Image No. 23309
While in 2015 the main topic at the Mozilla booth and workshop was Firefox OS, with a large 4K TV from Panasonic to show it off and get people involved, things have changed a lot after sitting out a year (which happened because I was moving to a new condo at the time, and as the sole Rep in the area I'm the one who needs to organize events like this presence).
This year, I was focusing on A-Frame (and therefore WebVR), both at the booth and in the workshop. In addition, we could provide a talk by Dragana from Mozilla's network platform team about HTTP/2 and QUIC, and I reprised my FOSDEM talk on web logins, this time in German. While the whole conference probably has a few hundred to a thousand visitors (hard to estimate when entrance is free and there are several parallel tracks), I probably got to talk to between several dozen and a hundred people at the booth; my workshop and talk each had 10-15 attendees, and Dragana's talk about 20-30. The conference overall has a bit of a family feel to it, attracting a decent number of people, but it's definitely not really large either. A lot of the attendees are pretty technical and already in the FLOSS scene in one way or another, but as it's happening at a technical college, we also get some of their students who may not be involved with that larger community - and then there are some casual visitors, but they're probably rare.

Image No. 23310


At our booth, next to the takeaway collection of Firefox stickers and tattoos as well as Mozilla wristbands, I put up some printouts of the new logo and related artwork as decoration, and on the glass wall behind the booth, a big poster with a German variant of "doing good is part of our code" and the Firefox logo, as well as printouts of website screenshots depicting the variety of what's going on at Mozilla nowadays - from mozilla.org, Campus Clubs, Internet Health Report, and changecopyright.org via Rust, Servo, WebAssembly, CSS Grid, and A-Frame to Pocket and Let's Encrypt - of course all with big and visible URLs. On top of that, I had my laptop at the booth, running the Snowglobe example of A-Frame, as well as a few Cardboards and Z3C phones with the Museum example and a 360° image loaded and ready to show. On the laptop, I had the source code of the Hello WebVR example on Glitch and a live view of it ready in additional tabs for explanations.

That setup ended up working very well - the always-moving snowglobe and the cardboards proved to be good eye-catchers and starting points for talking to people coming by. I had them look at the museum with the cardboard (nice because it's quite detailed and you can even "walk" around by staring at the yellow dots on the floor that you get on mobile) and told people how that was all running in the browser, how Mozilla pioneered WebVR, which is now an open standard, and created the A-Frame library that those demos are written in, which makes it really easy to write VR scenes yourself - which led to showing them the Hello WebVR scene and its source code, often changing a color to show that it's really that easy. I later also added an <a-text> saying "Linuxwochen Wien" to that scene, when someone asked about text. A lot of "wow"s were heard, and many people noted down the aframe.io URL (which I should have made more visible somewhere) and/or had more questions, e.g. on using objects from 3D modeling software (you can, there are components for Collada, GLTF, and other formats), use cases outside of demos and games, device support (which I had often mentioned when talking about WebVR itself) and prices, which phones work with Cardboard, how to get cardboards (I could have sold a few there), and more. All in all, WebVR and A-Frame piqued a great deal of interest.

Image No. 23311
Of course, questions outside of WebVR came up: "Mozilla has been killing so many things lately, what is the project actually working on now?" (leading to talk about a lot of the websites I had stuck on the wall, the whole Quantum effort to make Firefox better, and of course WebVR), questions on the status and future of Thunderbird (I'm on its planning mailing list so could answer most questions there), some Rust-related ones including "can I trust that Rust will be around in a few years when Mozilla tends to kill its own projects all the time?" (I hope I could calm the worries there), the usual Firefox support questions and some one-off specialty items - as well as multiple discussions on the demise of Firefox OS and how that increased the shortage of alternatives next to the proprietary iOS and Android choices on mobile. I was surprised that nobody was hugely disturbed by us killing plugins or by the upcoming huge changes in the add-ons ecosystem; there was more concern about how many old computers we leave in the cold by dropping support for Windows XP and pre-SSE2 CPUs - and about how we seem to have more graphics-related crashes than Chrome.
One conversation with an IoT hacker once again showed me how much potential FlyWeb could have if it was pushed forward somewhat more.

The conversations definitely showed that there is interest in both more A-Frame/WebVR workshops and also potentially in Rust meetups in Vienna, so I will probably look into that.

This leads me to the A-Frame workshop I did on Friday, which went really well - starting with the introductory Presentation Kit, handing around the cardboards with the museum and 360° image as demos, an introduction round (which I forgot at the beginning, but it fit well there too), and then going hands-on on the attendees' laptops. For that, I put up some steps from the A-Frame School - though I pointed people to awesome-aframe and where they can find the school, so they can also do some things at their own pace. I encouraged people to play around with the Hello WebVR example (most didn't want to use Glitch but instead used local files and their editor of choice) and went around the room, engaging with the attendees individually as they tried, struggled with, and solved different things. Adding image textures and tag-based animations were the big hits; unlike in my first workshop, very little JS was used this time. One person had a big stone ball rolling towards the viewer in a narrow street, which can get scary... ;-)
The resounding feedback was that everyone (and we had a nicely diverse group, including an older man, multiple women, from web developers to an artist, people with or without previous experience with 3D or VR stuff) could take something with them, and most of them were interested in joining future workshops on the topic.

Image No. 23313
Our talks also got good feedback from the people we talked to and drew pretty interesting and interested questions (I tend to take the kind and number of questions I get at talks as a major piece of feedback). I think that, all in all, we could spread the word on a number of Open Web and Mozilla topics and get people interested in things we are doing in this community. I also hope that this will result in growing our community somewhat in the mid to long term, as this time I had to man the booth alone most of the time. Thanks to Dragana and Arpad from the existing community though, who each joined the booth for a few hours on different days (and Dragana of course also for her talk).

For me, this was a pretty successful event, I hope we can do even better in the future - and if you are doing similar events, maybe my experiences can help you as well (feel free to ask me for more details)!

May 07, 2017 09:08 PM

May 01, 2017

Mike Conley

Making tabs close faster in multi-process Firefox

TL;DR: In bug 1336763, I have landed a series of patches that should hopefully make tab closing faster for the majority of cases for users that are using multi-process Firefox.

The rest of this blog post tries to explain why.

The beforeunload event handler

Perhaps you’ve seen this dialog before:

The beforeunload dialog in Firefox

Are you sure?

This dialog shows up when a website that you’ve interacted with (or one of its subframes) has set an event handler for the beforeunload event, and you attempt to close the tab or browse away from the website.

Here’s the documentation for how beforeunload works, but long story short, you can do a thing in the event handler that will cause the browser UI to show that dialog, which means giving the user the opportunity to cancel their request to close or navigate away from the website1.
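
For reference, a minimal handler of that kind looks something like the sketch below (as footnote 1 explains, it is setting returnValue on the event that makes the dialog appear; the message text is just an example, and modern browsers show their own generic wording anyway):

window.addEventListener("beforeunload", function (event) {
  // Assigning a string here asks the browser to show the confirmation dialog.
  event.returnValue = "You have unsaved changes. Leave anyway?";
});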

In any event2, if a page is going to be unloaded, and that page (or one of its subframes) has set one or more beforeunload event handlers, then it is necessary to run those event handlers to see if we’re going to show the dialog, or go ahead and unload the page straight away.

How multi-process Firefox used to handle beforeunload

When closing a tab in multi-process Firefox, what we’ve been doing is sending a message to the content process for that tab to check for (and run any) beforeunload event handlers. The parent sends that message, and then just kinda waits for the content process to respond with whether or not the close should occur. If the content process doesn’t respond within 5 seconds (I know), then we consider it a wash, and just close the tab.

The content process is sometimes doing stuff on the main thread, and sometimes it’s just waiting for messages from the parent. In the latter case, tab closes happen pretty smoothly – the message comes in, beforeunload events are fired (and hopefully those don’t take too long, but you never know), and then hopefully a result goes up to the parent, and it can move on.

That’s the best case scenario – but lots of things can prevent the best case scenario; for one thing, the main thread might be busy doing other stuff when the message is sent from the parent. Perhaps it’s doing a garbage collection, or a cycle collection, or it’s blocked on some busy JavaScript that some silly advertisement company is running in the background of one of your tabs. In that case, the message from the parent won’t be processed until the main thread is ready.

Once the message is received, we’re still not out of the woods – the beforeunload event handlers can run any kind of JavaScript inside them, more or less. For example, one anti-pattern I’ve seen in the wild is to use the beforeunload event as an opportunity to send a sync XMLHttpRequest in order to get some data to a server before the page goes away3. So the script on the page has an opportunity to delay you, even if it’s not going to cause the dialog to appear.
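
Footnote 3 below mentions the friendlier alternative: instead of blocking unload on a synchronous XHR, hand the data off with navigator.sendBeacon and let the browser finish the send in the background. Roughly (the URL and payload here are made up for the example):

window.addEventListener("beforeunload", function () {
  // Queues the request without blocking; the browser completes it even after
  // the page has gone away.
  navigator.sendBeacon("/analytics", JSON.stringify({ event: "page-closed" }));
});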

This problem seems to plague all browsers. beforeunload is a real pain, and our current implementation can cause slow tab closing even if the tab doesn’t have beforeunload event handlers set4. The patches in bug 1336763 offer what I think is a decent, simple solution for that common case in Firefox.

Don’t ask, just remember

In bug 1336763, I’ve made it so that for any given tab running in a content process, if a beforeunload event is ever added in that tab (or in any of its subframes), the content process tells the parent process so it can mark that tab as having listeners we need to fire. If the beforeunload events are removed, we unmark the tab. If no beforeunload events are ever added, there’s no mark at all.

The parent process remembers these markings, so that if the user decides to close the tab, the parent can know immediately whether or not it needs to message to tell the content process to run beforeunload event handlers. In the cases where no beforeunload event handlers have been set, we can close immediately without asking for permission from the content process at all.

Details, details

Using some Gecko terminology here, we start by storing a count on something called the TabChild. It might simplify things a bit if you try to imagine the TabChild as the representative of everything in a particular browser tab, and that underneath that TabChild are a bunch of nodes, forming a tree-like structure.

Let’s call these nodes “inner windows”.

The inner windows under the TabChild contain the documents that are loaded in a tab. For simple web pages, that might just be a single document. In that case, we have a TabChild with just a single inner window node under it.

More complicated pages might contain iframes (which themselves contain iframes, etc). In those cases, we have a TabChild with a single inner window node under it, and that node has any number of inner window children (and those children have any number of inner window children, etc)5.

When any of those subframes have a beforeunload event listener added to them via script, the inner window node tells the TabChild to increment its internal count. If a beforeunload event listener is removed via script, the TabChild is told to decrement its internal count.

If the TabChild count ever goes above 0, then we need to tell the parent “Hey, you have at least one beforeunload event listener here”. If that count continues to go up, the TabChild doesn’t need to tell the parent anything – it just needs to record the increase. If the count ever drops back to 0, then the TabChild needs to tell the parent again, “All beforeunload event listeners are clear”.
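
The real bookkeeping lives in Gecko’s C++ code, but the logic boils down to something like this JavaScript sketch (the class, method and message names are invented for illustration, not the actual Gecko implementation):

// Illustrative model of the TabChild counting logic described above.
class TabChildModel {
  constructor(messageManager) {
    this.beforeUnloadCount = 0;
    this.messageManager = messageManager; // stand-in for the channel to the parent
  }

  addBeforeUnloadListener() {
    this.beforeUnloadCount++;
    if (this.beforeUnloadCount === 1) {
      // First listener: the parent must now ask before closing this tab.
      this.messageManager.send("BeforeUnload:Set");
    }
  }

  removeBeforeUnloadListeners(count = 1) {
    this.beforeUnloadCount -= count;
    if (this.beforeUnloadCount === 0) {
      // Back to zero: the parent may close the tab without asking.
      this.messageManager.send("BeforeUnload:Cleared");
    }
  }
}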

Pretty straight-forward so far, but there are a few other cases we also have to consider.

Other cases

There are a couple of ways for a set of beforeunload event handlers to go away. We’ve already mentioned one – script on the page might remove them via removeEventListener.

One way is if the inner window gets navigated away from. If we’re on a page, and that page set a beforeunload event handler, and the user clicks on a link, the user might end up navigating away (assuming the dialog wasn’t shown and they didn’t cancel), which essentially replaces the inner window with one for a different page. In that case, script didn’t remove the beforeunload event handlers – the page went away, and so the beforeunload event handlers on the page we’ve unloaded are no longer relevant.

Another way is if an <iframe> which has set a beforeunload event handler is removed from the DOM. Instead of replacing the inner window, we’re snipping the inner window out of the tree structure entirely.

In both of these cases, if there are beforeunload event handlers in the subframe, it’s necessary to tell the TabChild so that the right number can be decremented from the TabChild count.

So what this means is that we need the inner windows to keep a track of how many beforeunload event handlers have been set as well. That way, when they start to tear themselves down, they can tell the TabChild, “Hey, I’m going away now – decrement X number of beforeunload event handlers”.

It might seem redundant to have these two counts – counts in the inner windows, and a total count in the TabChild. It would seem like one can be easily inferred from the other; just sum the beforeunload counts for the inner windows, and you should have your TabChild count.

Having the TabChild keep a count is an optimization that prevents us from having to walk the inner window tree to collect a sum every time the count changes. It’s a classic space / time tradeoff, and I think it’s worth the extra integer member on the TabChild.

Comparison to other browsers

Here’s one way to compare the behaviour across different browsers:

  1. In a browser window with more than one tab open, open the developer tools, and make your way to the JavaScript console.
  2. Drop this tasty little snippet in and press enter:
    var then = Date.now(); while (Date.now() - then < 15000) {}

This is going to hang the main thread in that tab for 15 seconds, but is otherwise inert.

Now try to close the tab. In Firefox, the tab closes right away. In Safari and Chrome (the two other browsers I have on this machine), the tab hangs out for a while. In Chrome, it appears to wait the full 15 seconds. In Safari, it seems to hit some kind of shorter timeout6.

Wrapping up

This was a neat set of patches to work on, precisely because it had me tour the depths of Gecko (dealing with things like inner / outer windows, the stuff that manages events, etc), which ended up resulting in a simple property that the front-end can ultimately access to optimize closing tabs. So it nicely spanned the gap between low-level Gecko and higher-level Firefox, all for an event that was added back in 2004, for better or worse.

If all goes well, this change should ship in Firefox 55, and apply to multi-process tabs.


  1. It’s not always necessary for the beforeunload event handler to show the dialog. The event handler needs to set the returnValue property on the event to a string in order for the dialog to show, but plenty of other stuff can happen in that event handler. 

  2. Ooooh pun intended 

  3. A better way would be to use navigator.sendBeacon, which allows the browser to send one last XHR in the background for a page even after it’s gone away. 

  4. since we have to check to see if any of those event handlers exist. 

  5. For my fellow Gecko Hackers – yes, this is not quite right. I’m missing other key structures in my description (specifically, the outer window). Forgive me – this is a very low-resolution mental model to make this post easier to write. 🙂  

  6. Note that it appears that the script running in the console is treated differently from the script running on the page. If you set up a page with that script, I notice that the tabs close immediately on both Chrome and Safari. I might have goofed in my experiment though – it is rather late at night.  

May 01, 2017 05:05 PM

March 23, 2017

Kent James

Caspia Projects and Thunderbird – Open Source In Absentia

People of Thunderbird - Chinook Nation

Clallam Bay is located in a region home to various Native American tribes for whom the Thunderbird is an important cultural symbol.

I’m recycling an old trademark of mine, Caspia, to describe my projects that involve Washington State prisoners in open-source work. After an afternoon of brainstorming, Caspia is now an acronym: "Creating Accomplished Software Professionals In Absentia".
What does this have to do with Thunderbird? A few weeks ago I sat in a room with 10 guys at Clallam Bay, all of whom have been in a full-time, intensive software training program for about a year, and who are really interested in trying to do real-world projects rather than hidden internal projects that are classroom assignments, or personal projects with no public outlet. In April I start spending two days per week with these guys. There are another 10 or so guys at WSR in Monroe who started last month, though the situation there is more complex. The situation is similar to other groups of students that might be able to work on Thunderbird or Mozilla projects, with these differences:

1) Student or GSoC projects tend to have a duration of a few months, while the expected commitment time for this group is much longer.

2) Communication is extremely difficult. There is no internet access. Any communication of code or comments is accomplished through sneakernet options. It is easier to get things like software artifacts in than to bring them out. The internal arrangements allowing this to proceed at all are tenuous at both facilities, though we are further along at Clallam Bay.

3) Given the men’s situation, they are very sensitive to their ability to accumulate both publicly accessible records of their work, and personal recommendations of their skill. Similarly, they want marketable skills.

4) They have a mentor (me) that is heavily engaged in the Thunderbird/Mozilla world.

Because they are for the most part not hobbyists trying to scratch an itch, but rather people desperate to find a pathway to success in the future, I feel a very large responsibility to steer them toward projects that would demonstrate skills likely to be marketable, and provide visibility that would be easily accessible to possible future employers. Fixing obscure regressions in legacy Thunderbird code, with contributions tracked only in hg.mozilla.org and BMO, does not really fit that very well. For those reasons, I have a strong bias in favor of projects that 1) involve skills usable outside the narrow range of the Mozilla platform, and 2) can be tracked on GitHub.

I’ve already mentioned one project that we are looking at, which is the broad category of Contact manager. This is the primary focus of the group at WSR in Monroe. For the group at Clallam Bay, I am leaning toward focusing on the XUL->HTML conversion issue. Again I would look at this more broadly than just the issues in Thunderbird, perhaps developing a library of Web Components that emulate XUL functionality and can be used both to easily migrate existing XUL to HTML and as a separate library for desktop-focused web applications. This is one of the triad of platform conversions that Thunderbird needs to do (the others being C++->JavaScript, and XPCOM->SomethingElse).
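
Purely as an illustration of what one entry in such a library might look like (the element name and the flexbox details are invented here, not part of any concrete plan), a Web Component standing in for XUL’s hbox could be as small as:

// A hypothetical <caspia-hbox> that mimics XUL's horizontal box layout.
class CaspiaHbox extends HTMLElement {
  connectedCallback() {
    // XUL boxes lay out their children in a row; flexbox provides the same.
    this.style.display = "flex";
    this.style.flexDirection = "row";
  }
}
customElements.define("caspia-hbox", CaspiaHbox);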

I can see that if the technical directions I am looking at turn out to work with Thunderbird, it will mean some big changes. These projects will mostly be done using GitHub repos, so we would need to improve our ability to work with external libraries. (We already do that with JsMime but poorly). The momentum in the JS world these days, unfortunately, is with Node and Chrome V8. That is going to cause a lot of grief as we try to co-exist with Node/V8 and Gecko. I could also see large parts of our existing core functionality (such as the IMAP backend) migrated to a third-party library.

Our progress will be very slow at first as we undergo internal training, but I think these groups could start having a major impact on Thunderbird in about a year.

:rkent

March 23, 2017 07:20 PM

March 14, 2017

Robert Kaiser

Final Round for My LCARStrek and EarlyBlue Themes

As you may have noted, Mozilla published a plan for a new themes system that doesn't fully cover my thoughts on the matter and ends up making themes as far-reaching as my LCARStrek theme impossible.

The only way I could still keep up this extent of theming would be to spread it guerilla-style as userChrome.css mods, i.e. a long CSS sheet to be copied into people's userChrome.css manually. That would still allow the same extent of theming, but would be extremely inconvenient to distribute.

Because of that, I will stop development of my themes as soon as Firefox 57 hits Nightly and I can't use the LCARStrek theme myself any more (EarlyBlue, which is SeaMonkey-only, is something I just dragged along anyhow). Given the uncertainty of even having releases and the small "market", I will not continue them for SeaMonkey only either; Firefox has been the only thing that really mattered there any more.

Also, explicit theming support for Firefox devtools is being removed from LCARStrek with the 2.49 release that I just submitted to AMO, as it's extremely complicated to maintain, and with the looming removal of full themes from Firefox that amount of work is not worth my time any more. Because of this, there is a bit of a mixture of styles in some areas of devtools, especially in Firefox 52 (improving in newer versions), but that is outside the control of a theme author. I tested that devtools are usable this way; the contrast of icons in toolbars isn't optimal at times, but they are visible enough that developers can work with them. To any LCARStrek users: sorry for the inconvenience, I would have put more work into this if theming of this extent were not being removed.

Image No. 23308

This is a hard step for me, as the first thing I experimented with when I downloaded my first Mozilla M5 build in 1999 was actually the theming files, and LCARStrek came out of that as a demonstration of how awesome this system of customization was and how far it could go. It achieved a look that really was out of this world, but I guess the new direction of Firefox is not compatible with a 24th century look. ;-)

It will also be hard for me to move back to the bland look of the default theme, especially as it looks even more boring on Linux than on other platforms, but I have a few months to get used to the idea before I actually have to do it, and I will keep the themes going for that little while.

Somehow this fits well with the overall theme that MoCo and I are at odds right now on a number of things, but you can be assured that I'm not gone from the community; as a matter of fact I have planned a few activities in Vienna in the coming months, from WebVR workshops to conference appearances, and I'm just about to finish the Tech Speakers training and hope to be more active in that area in the future.

LLAP!

March 14, 2017 05:33 PM

February 21, 2017

Robert Kaiser

The PHP Authserver

As mentioned previously here on my blog, my FOSDEM 2017 talk ended up opening up the code for my own OAuth2 login server which I created following my earlier post on the login systems question.

The video from the talk is now online at the details page of the talk (including downloadable versions if you want them), my slides are available as well.

The gist of it is that I found out that using a standard authentication protocol in my website/CMS systems instead of storing passwords with the websites is a good idea, but I also didn't want to report who is logging into which website at what point to a third party that I don't completely trust privacy-wise (like Facebook or Google). My way to deal with that was to operate my own OAuth2 login server, preferably with open code that I can understand myself.

As the language I know best is PHP (and I can write pretty clean and good quality code in that language), I looked for existing solutions there but couldn't find a finished one that I could just install, adapt branding-wise and operate.
I found a good library for OAuth2 (and by extension OpenID Connect) in oauth2-server-php, but the management of actual login processes and the various end points that call the library still had to be added, and so I set out to do just that. For storing passwords, I investigated what solutions would be good and in the end settled on using PHP's builtin password_hash function, including its auto-upgrade-on-login functionality. Right now that means using bcrypt (which is decent but not fully ideal); with PHP 7.2, it will move to Argon2 (which is probably the best available option right now). On top of that, I wrote some code to add an on-disk random value to the passwords so that hacking the database alone will be insufficient for an offline brute-force attack on the hashes. In general, I tried to follow a lot of the advice from Mozilla's secure coding guidelines for websites, and also made sure my server passes with an A+ score on Mozilla Observatory as well as SSL Labs, putting the changes for that into the code as much as possible, or example server configurations into the repository otherwise, so that other installations can profit from this as well.
For sending emails and building up HTML as DOM documents, I'm using helper classes from my own php-utility-classes, and for some of the database access, especially schema upgrades, I ended up including Doctrine DBAL. Optionally, the code is there to monitor traffic via Piwik.
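To make the password-storage approach above more concrete, here is a small conceptual sketch in JavaScript (Node.js). It is not the authserver's PHP code, which relies on password_hash with bcrypt/Argon2; the scrypt call, file path and parameters below are stand-ins chosen purely to illustrate how an on-disk pepper keeps a stolen database from being enough for offline cracking.

    // Conceptual sketch only: an on-disk secret ("pepper") is mixed into the
    // password before hashing, so the database contents alone are not enough
    // for an offline brute-force attack. Paths and parameters are made up.
    const crypto = require("crypto");
    const fs = require("fs");

    // Secret value stored on disk, outside the database.
    const pepper = fs.readFileSync("/etc/authserver/pepper.key");

    function hashPassword(password) {
      const salt = crypto.randomBytes(16);
      const hash = crypto.scryptSync(
        Buffer.concat([pepper, Buffer.from(password)]), salt, 64);
      // Store the per-user salt next to the hash; the pepper stays on disk.
      return salt.toString("hex") + ":" + hash.toString("hex");
    }

    function verifyPassword(password, stored) {
      const [saltHex, hashHex] = stored.split(":");
      const expected = Buffer.from(hashHex, "hex");
      const actual = crypto.scryptSync(
        Buffer.concat([pepper, Buffer.from(password)]),
        Buffer.from(saltHex, "hex"), 64);
      return crypto.timingSafeEqual(actual, expected);
    }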

The code for all this is now available at https://github.com/KaiRo-at/authserver.

It should be relatively easy to install on a Linux system with Apache and MySQL; other web servers and databases should not be hard to add but are untested so far. The main README has some rudimentary documentation, but help is needed to improve on that. Also, all testing is currently done by trying logins with the two OAuth2 implementations I have done in my own projects; I need help in getting a real test suite set up for the system.
Right now, all the system supports is the OAuth2 "Authorization Code" flow; it would be great to extend it to support OpenID Connect as well, which oauth2-server-php can handle, but the support code for it still needs to be written.
Branding can easily be adapted for the operator running the service via the skin support (my own branding on my installation builds on that as well), and right now US English and German are supported by the service but more can easily be added if someone contributes them.

And last but not least, it's all under the MPL2 license, which I hope makes it easy for people to contribute - hopefully including yourself!

February 21, 2017 03:05 PM

February 06, 2017

Rumbling Edge - Thunderbird

On hiatus …

I’ve run The Rumbling Edge for over a decade now. It’s time to take a break – that stage in your life where you would like to try other stuff.

February 06, 2017 10:33 PM

December 23, 2016

Robert Kaiser

Looking for Review on PHP Code (Login/Auth System)

Yay! My talk about "Web Logins after Persona - How I solved logins on my small websites" has been accepted for the Mozilla DevRoom at FOSDEM 2017!

That talk is a followup on my earlier post on the login systems question, which I ended up solving by writing my own OAuth2 login server based on oauth2-server-php. While that library provides the actual functionality for OAuth2, I had to build a system around it that handles the actual registration and login and the API for retrieving an email address for the logged in user.

I would like to open up the code for that server to the public at FOSDEM!

For that, I need someone (hopefully multiple people) to review the code for security sanity (an in-depth audit is probably not needed yet, but a sanity review for sure), as I have it deployed myself and don't want the open code to be a risk for me. I also want it to be fine for people to deploy it and to depend on this system for logins on their own (small) websites.

It's basically all PHP code, but it's not too much: the PHP files of the project itself are only about 900 lines long altogether. It uses the document and email classes from my php-utility-classes as well as oauth2-server-php and a bit of Doctrine DBAL, though I don't think the latter two need any review for sanity. The JS is minimal and the CSS is no issue for security sanity. ;-)

I have one Mozillian who has volunteered and should look into the code soon, but I'd like to have two or three people to take a look, if possible.

If you can help, please let me know with a reply on this post (leave your email, as I'll contact you there), Telegram, Diaspora*, or email and tell me why/how you are qualified to review this code.

Thanks and Happy Holidays!

December 23, 2016 02:08 AM

December 03, 2016

Robert Kaiser

I Want an Internet of Humans

I'm going through some difficult times right now, for various reasons I'm not going into here. It's harder than usual to hold onto my hopes and dreams and the optimism for what's to come that fuels my life and powers me with energy. Unfortunately, there's also not a lot of support for those things in the world around me right now. Be it projects that I shared a vision with being shut down, be it hateful statements coming from and being thrown at a president-elect in the US, politicians in many other countries, including e.g. the presidential candidates right here in Austria, or even organizations and members of communities I'm part of. It looks like the world is going through difficult times, and having an issue with holding on to hopes, dreams, and optimism. And it feels like even those who usually are beacons of light and preach hope are falling into the trap of preaching the fear of darkness - and as soon as fear enters our minds, it starts a vicious cycle.

Some awesome person or group of people wrote a great dialog into Star Wars Episode I, peaking in Yoda's "Fear is the path to the dark side. Fear leads to anger. Anger leads to hate. Hate leads to suffering." - And so true this is. Think about it.

People fear for their well-being, for being able to live a life they find worth living (including their jobs), and for knowing what to expect of today and tomorrow. When this fear is nurtured, it grows and leads to anger about anything that seems to threaten those things. People react hatefully to anyone who even seems to support those perceived threats. And those targeted by that hate hurt and suffer, start to fear the "haters", and go through the cycle from the other side. In that climate, the basic human uneasy feeling of "life for me is mostly OK, so any change and anything different is something to fear" falls onto fertile ground and grows into massive metathesiophobia (fear of change), and things like racism, homophobia, xenophobia, hatred of other religions, and all kinds of other demons rise up.

Those are all deeply rooted in sincere, common human emotions (maybe even instincts). We can live with them, overcome them, and even turn them around into e.g. embracing infinite diversity in infinite combinations like Star Trek does, or we can shove them away into a corner of our existence, never decomposing them at their basic stage, and let them grow until they are large enough that they drive our thinking and our personality - and make us easy to influence by people who speak to those fears. That works just as well for the fears that some politicians spread and play with as for the fears of their opponents. Even the fear of hate and fear taking over is not excluded from this - on the contrary, it can fire up otherwise loving humans into going fully against what they actually want to be.

That said, when a human stands across from another human and looks into his or her face, into their eyes, as long as we still realize there is a feeling, caring person on the receiving end of whatever we communicate, it's often harder to enter this cycle. If we are already deep into the fear and hate, or in some other circumstances, this may not always be true, but in a lot of cases it is.

On the Internet, not so much. We interact with and through a machine, see an "account" on the other end, remove all the context of what was said before and after, of the tone of voice and body language, of what surroundings others are in; we reduce communication to a few words that are convenient to type or that the communication system limits us to - and we go for whatever gives us the most attention. Because we don't actually feel like we interact with other real humans, it's mostly about what we get out of it: a lot of likes, reshares, replies, interactions. This is reinforced by the services we use maximizing whatever their KPIs are rather than optimizing for what people actually want - after all, they want to earn money, and that means having a lot of activity; making people happy is not an actual goal, at best a wishful side effect.

We need to change that. We need to make social media actually social again (this talk by Chris Heilmann is really worth watching). We need to spread love ("make Trek, not Wars" in a tongue-in-cheek kind of way, not meaning any negativity towards either franchise, but thinking about meanings and how we can make things better for our neighbors, our community, our world), and not even hate the fear or fear the hate (which leads back into the cycle), but analyze it, take it seriously, and break it down. If we understand it and know how to deal with it, but don't let it overcome us, fear can even be healthy - as another great screenwriter put it, "Fear only exists for one purpose: To be conquered". That is where we need to get ourselves, and where we need to help those other humans end up who spread hate and unreflected fear, or act out of it. Not by hating them back, but by trying to understand and help them.

We need to see the people, the humans, behind what we read on the Internet (I deeply recommend you watch this very recent talk by Erika Baker as well). I don't see it as a "Crusade against Internet hate" as mentioned at the end of that talk, but more as a "Rally for Internet love" (unfortunately, some people would ridicule that wording, but I see it as the love of humanity, the love for the human being inside each and every one of us). I always find it mind-blowing that every single person I see around me, that reads this, that uses some software I helped with, and every single other person on this planet (or in its orbit; there are none out further at this time, as far as I know), is a fully thinking, feeling, caring human being. Every one of them is different, every one of them has their own thoughts and fears that need to be addressed and that we need to address. And every one of them wants to be loved. And they should be. No matter who they voted for. No matter if they are a president-elect or a losing candidate. We don't need to agree with everything they are saying. But their fears should be addressed and conquered. And yes, they should be loved. Their differences should be celebrated and their commonalities embraced at the same time. Yes, that's possible, think about it. Again, see the philosophy of infinite diversity in infinite combinations.

I want an Internet that connects those humans, brings them closer together, makes them understand each other more, makes them love each other's humanity. I don't care how many "things" we connect to the Internet; I care that the needs and feelings of humans and their individual and shared lives improve. I care that their devices and gadgets are their own, help their individuality, and help them embrace other humans (not treat them as accounts and heaps of data to be analyzed and sold stuff to). I want everyone to see that everyone else is (just) human, and to spread love to them, or at least embrace them as humans. Then the world, the humans in it, and myself, can make it out of the difficult times and live long and prosper in the future.

I want an Internet of humans.
We all, me, you can start creating that in how we interact with each other on social networks and other places on the web and even in the real world, and we can build it into whatever work we are doing.

I want an Internet of humans.
Can, will, you help?

December 03, 2016 03:13 AM

October 03, 2016

Robert Kaiser

The Neverending Question of Login Systems

I put a lot of work into my content management system in the last week(s), first because I had the time to work on some ongoing backend rework/improvements (after some design improvements on this blog site and my main site) but then to tackle an issue that has been lingering for a while: the handling of logins for users.

When I first created the system (about 13 years ago), I put simple user and password input fields into place and, yes, I didn't know better (just like many people designing small sites probably did and maybe still do) and made a few mistakes there, like storing passwords without enough security precautions or sending them in plaintext to people via email (I know, that causes a WTF moment even in myself nowadays, but back then I didn't know better).

And I was very happy when the seemingly right solution for this came along: Have really trustworthy people who know how to deal with it store the passwords and delegate the login process to them - ideally in a decentralized way. In other words, I cheered for Mozilla Persona (or the BrowserID protocol) and integrated my code with that system (about 5 years ago), switching most of my small sites in this content management system over to it fully.

Yay, no need to make my system store and handle passwords in a really safe and secure way, as it didn't need to store passwords at all any more! Everything is awesome, let's tackle other issues. Or so I thought. But, in case you haven't heard, Persona is being shut down on November 30, 2016. Bummer.

So what were the alternatives for my small websites?

Well, I could go back to handling passwords myself, with a lot of research into actually secure practices, a lot of coding to get things right, probably quite a bit of bugfixing afterwards, and ongoing maintenance to keep up with ever-growing security challenges. Not really something I wanted to go with, also because it might make my server's database more attractive to break into (though there aren't many different people with actual logins).

Another alternative is delegated login via Facebook, Google, GitHub or others (the big question is who), using the OAuth2 protocol. Now there are two issues there: First, OAuth2 isn't really made for delegated login but for authorizing access to some resource (via an API), so it doesn't return a login identifier (e.g. an email address) but rather an access token for resources, and it needs another potentially failure-prone roundtrip to actually get such an identifier - so it's more complicated than e.g. Persona (because using it just for login is basically misusing it). Second, the OAuth2 providers I know of are entities I don't want to tell about every login on my content management system, both because their Terms of Service allow them to sell that information to anyone and because I don't trust them enough to know about each and every one of those logins.
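To make the "extra roundtrip" point concrete, here is a rough JavaScript sketch of what logging in via a generic OAuth2 provider looks like. The URLs, field names and client credentials are hypothetical and provider-specific; the point is simply that the token exchange alone does not tell you who logged in.

    // Rough sketch, not any specific provider's API: after the user is sent
    // back with an authorization code, two requests are needed before the
    // site learns a usable login identifier such as an email address.
    async function loginViaOAuth2(authorizationCode) {
      // Roundtrip 1: exchange the authorization code for an access token.
      const tokenResponse = await fetch("https://provider.example/oauth/token", {
        method: "POST",
        headers: { "Content-Type": "application/x-www-form-urlencoded" },
        body: new URLSearchParams({
          grant_type: "authorization_code",
          code: authorizationCode,
          client_id: "my-client-id",
          client_secret: "my-client-secret",
          redirect_uri: "https://mysite.example/callback",
        }),
      });
      const { access_token } = await tokenResponse.json();

      // Roundtrip 2: use the token to ask who the user actually is. This is
      // the extra, potentially failure-prone step that Persona did not need.
      const profileResponse = await fetch("https://provider.example/userinfo", {
        headers: { Authorization: "Bearer " + access_token },
      });
      const profile = await profileResponse.json();
      return profile.email; // the identifier the CMS actually wants
    }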

Firefox Accounts would be an interesting option, given that Mozilla is trustworthy when it comes to dealing with password data and wouldn't sell login data or anything like that, and it may support the same BrowserID assertion/verification flow as Persona (which I have implemented already), but it doesn't (yet) allow non-Mozilla sites to use it (and given that it's a CMS, I'd have multiple non-Mozilla sites I'd need to use it for). It also seems to support an OAuth2 flow, so it may be an option with that as well if it were open for use at this point - and I need something before Persona goes away, obviously.

Other options, like "passwordless" logins that usually require a roundtrip to your email account or mobile phone on every login sounded too inconvenient for me to use.

That said, I didn't find anything "better" than OAuth2 as a Persona replacement, so I took an online course on it, then put a lot of time into implementing it, and I now have a prototype working against GitHub's implementation (while I don't trust them with all those logins, I felt they are OK enough to test against). That took quite some work as well, but some of the abstraction I did for the Persona implementation can be partly or completely reused (in the latter case, I just abstracted things to a level that works for both) - and there's potential in, for example, getting more info than just an email address from the OAuth2 provider and prefilling some profile fields on user creation. That said, I'm still wondering about an OAuth2 provider that's trustworthy enough privacy-wise - ideally it would just be a login service, so I don't have to require people to register for a whole different web service to use my content management system. Even with the fallback alone and without the federation to IdPs, Mozilla Persona was nicely in that category, and Firefox Accounts would be as well if it were open for public use. (Even better would be if the browser itself acted as an identity/login agent and I could just get a verified email from it, as some of the ideas behind BrowserID and Firefox Accounts implied as a vision.)

I was also wondering about potentially hosting my own OAuth2 provider, but then I'd need to devise secure password handling on my server yet again and I originally wanted to avoid that. And I'd need to write all that code - unless I find out how to easily run existing code for an OAuth2 or BrowserID provider on my server.

So, I'm not really happy yet but I have something that can go into production fast if I don't find a better variant before Persona shuts down for good. Do you, dear reader, face similar issues and/or know of good solutions that can help?

October 03, 2016 07:21 PM

September 13, 2016

Calendar

GSoC 2016: Some Thoughts on React

As discussed in the previous post, the HTML-based UI for editing events and tasks in a tab is still a work in progress that is in a fairly early stage and not something you could use yet.  (However, for any curious folks living on the bleeding edge who might still want to check it out, the previous post also describes how to activate it.)  This post relates to its implementation, namely the use of React, “a Javascript library for building user interfaces.”

For the HTML UI we decided to use React (but not JSX which is often paired with it).  React basically provides a nice declarative way to define composable, reusable UI components (like a tab strip, a text box, or a drop down menu) that you use to create a UI.  These are some of its main advantages over “raw” HTML.  It’s also quite efficient / fast and is a library that does one thing well and can be combined with other technologies (as compared with more monolithic frameworks).  I enjoyed using and learning about React.  Once you understand its basic model of state management and how the components work it is not very difficult or complicated to use.  I found its documentation to be quite good, and I liked how it lets you do everything in Javascript, since it generates the HTML for the UI dynamically.

One of the biggest differences when using React is that instead of storing state in DOM elements and querying them for their state (as we currently do), the app state is centralized in a top-level React component and from there it gets automatically distributed to various child components.  When the state changes (on user input) React automatically updates the UI to reflect those changes.  To do this it uses an internal “virtual DOM” which is basically a representation of the state of the DOM in Javascript.  When there are changes it compares the previous version of that virtual DOM with the new version to decide what changes need to be made to the actual DOM.  (Because the actual DOM is quite slow compared to Javascript, this approach gives React an advantage in terms of performance.)  Centralizing the app state in this way simplifies things considerably.  Direct interaction with DOM elements is not needed, and is actually an anti-pattern.
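As a small illustration of that centralized-state model (not the actual Calendar code; component and field names here are made up), a top-level React component can own the event data and pass it down, with children reporting changes back up instead of anything reading state out of the DOM. Like the project, this sketch avoids JSX and uses React.createElement directly.

    // Illustrative sketch of centralized state in React (no JSX), not the
    // real event-in-a-tab code. Names are hypothetical.
    const e = React.createElement;

    class TitleField extends React.Component {
      render() {
        // The child never stores the title itself; it just renders the prop
        // and reports edits back to its parent.
        return e("input", {
          type: "text",
          value: this.props.title,
          onChange: (event) => this.props.onTitleChange(event.target.value),
        });
      }
    }

    class EventEditor extends React.Component {
      constructor(props) {
        super(props);
        // All app state lives here, in the top-level component.
        this.state = { title: props.initialTitle || "" };
      }
      render() {
        // On setState, React diffs its virtual DOM and patches only the
        // parts of the real DOM that actually changed.
        return e("div", null,
          e("h2", null, this.state.title || "New Event"),
          e(TitleField, {
            title: this.state.title,
            onTitleChange: (title) => this.setState({ title: title }),
          }));
      }
    }

    ReactDOM.render(e(EventEditor, { initialTitle: "" }),
                    document.getElementById("app"));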

One example of the power and flexibility that React offers is that I actually did the “responsive design” part of the HTML UI with React rather than CSS.  The reason was that some of the UI components had to move to different positions in the UI when transitioning between the narrow and wide layouts for different window sizes.  This was not really possible with CSS, at least not without overly complex workarounds.  However, it was simple to do it with React because React can easily re-render the UI in any configuration you define, in this case in response to resizing the window past a certain threshold.  (Once CSS grid layout is available this kind of repositioning will be straightforward to do with CSS.)
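A hedged sketch of that resize-driven approach (again with made-up names and an arbitrary breakpoint, not the project's actual layout code): the component listens for window resizes and simply re-renders a wide or narrow arrangement when the width crosses a threshold.

    // Sketch only: switch between a one-column and two-column layout by
    // re-rendering in React instead of relying on CSS. Breakpoint and class
    // names are hypothetical.
    const e = React.createElement;
    const WIDE_BREAKPOINT = 700; // pixels, chosen arbitrarily for the example

    class ResponsiveLayout extends React.Component {
      constructor(props) {
        super(props);
        this.state = { isWide: window.innerWidth >= WIDE_BREAKPOINT };
        this.handleResize = () => {
          const isWide = window.innerWidth >= WIDE_BREAKPOINT;
          if (isWide !== this.state.isWide) {
            this.setState({ isWide: isWide });
          }
        };
      }
      componentDidMount() {
        window.addEventListener("resize", this.handleResize);
      }
      componentWillUnmount() {
        window.removeEventListener("resize", this.handleResize);
      }
      render() {
        // The same child components are placed into whichever arrangement
        // matches the current window width.
        return this.state.isWide
          ? e("div", { className: "two-columns" },
              e("div", { className: "column" }, this.props.mainFields),
              e("div", { className: "column" }, this.props.detailFields))
          : e("div", { className: "one-column" },
              this.props.mainFields, this.props.detailFields);
      }
    }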

React’s different approach to state does present some challenges for using it with existing code.  For this project at least it is not simply a matter of dropping it in and having it work, rather using it will entail some non-trivial code refactoring.  Basically, the code will need to be separated out into different jobs.  First there’s (1) interacting with the outside of the iframe (e.g. toolbar, menubar, statusbar) and (2) modifying and/or formatting the event or task data.  These are needed for both the XUL and HTML UIs.  Next there’s (3) updating and interacting with the XUL UI inside the iframe.  Currently these things (1, 2, and 3) are usually closely intertwined, for example in a single function.  Then there is (4) using React to define components and how they respond to changes to the app state, and (5) updating and interacting with the HTML UI inside the iframe (i.e. read from or write to the app state in the top-level React component).  So there is some significant refactoring work to do, but after it is done the code should be more robust and maintainable.

Despite the refactoring work that may be involved, I think that React has a lot to offer for future UI work for Calendar or Thunderbird as an alternative to XUL.  Especially for code that involves managing a lot of state (like the current project) using React and its approach should reduce complexity and make the code more maintainable.  Also, because it mostly involves using Javascript this simplifies things for developers.  When CSS grid layout is available that will also strengthen the case for HTML UI work since it will offer greater control over the layout and appearance of the UI.

I’ll close with links to two blog posts and a video about React that I found helpful:

— Paul Morris

September 13, 2016 03:17 AM

September 08, 2016

Robert Kaiser

IDIC: Embrace Differences

The philosophy of IDIC or Infinite Diversity in Infinite Combinations has kept my mind going around quite much recently.

Well, if we want to go by the book, IDIC is actually seen as the basis of a philosophy, specifically that of Star Trek's Vulcan species; its "native language" name is Kol-Ut-Shan, and it's symbolized by that really nice-looking jewel that has a triangle/pyramid with a marked point/ball on top and a circle around it (see image). That said, it ends up encapsulating Gene Roddenberry's philosophy behind a lot of what Star Trek depicts, a philosophy that even 50 years (to this exact day) after the show first aired is still largely shared by the fans of the franchise (including myself).

What IDIC centers around is to increase and heavily embrace diversity in all things - and that can be applied to, and inspire thoughts about, many things.
Everything of course starts with Gene's vision of a lead crew as diverse as the mid-1960s would allow, a United Federation of Planets that is a utopian in-between of the UN and the US on a galactic scale, figures other than white men being in leadership positions in various incarnations of the franchise, and preserving diversity of life forms beyond the two-legged variety in various stories as well (if you like deeply digging into the messages and philosophy of Star Trek episodes, the Mission Log Podcast may be something for you).
I like looking beyond Star Trek when it comes to this philosophy, though. Take for example the genomes of life forms we know (in reality, on this planet) - no two life forms have the exact same genes, not even twins. Nature shows that "infinite" diversity (created from seemingly infinite combinations of very few elements) not because it's fun, or because our design sucks, or because it's Vulcan, of course. It gives life an ingenious robustness by making it hard for attacks to affect large numbers of different individuals and species, it makes life forms complement each other to cover different environments, and it makes them adaptive enough to react to different circumstances.
And from all I hear from studies and see in practice, when we put together diverse groups of people, they usually excel in creativity and putting up different ideas, they are harder to control by a single bad influence, they develop more respect for other humans, higher sensitivity towards the needs of other people, deeper understanding of and respect for different persons - at least in comparison to many groups of people very similar to each other. Fun fact on the side, the crowd I see at Star Trek conventions is probably one of the most diverse group of "geeks" you can find (across gender, race, age, profession, and other criteria) - thanks to the role models and the philosophy put front and center in that franchise. That kind of diversity is something I want to see in many more areas of my life and around me. The more we get different people to sit down or stand together, the more we create and show role models of diversity enriching life, the more we get people to respect other people, no matter who they are, and the more we create a better world - and universe.

Now, what about things other than life forms? What for example about computer systems? About software?
There's a lot of people advocating for hardware, operating systems, software packages that are exactly the same for everyone, so it's easy to verify that they haven't been modified unduly, and that software updates are easier to apply. And that surely has merit in a number of dimensions, and reproducible builds, Flatpak and Snap, even reducing "fingerprintability" on the Web and quite a few other mechanisms exist to reduce differences between our systems.
But then, we as users of those computers and that software are all different. We want our systems to be personalized and therefore to be different from anyone else's system. We install different add-ons into our Firefox, different apps or applications on our computers and smartphones, log into different accounts on different websites; we want our system to be uniquely ours, or at least to feel like it is. So at some level, we as users want "infinite" diversity of computers. Different people may even want different numbers and sizes of screens, have a different focus on what is important for their computer to do, and desire different hardware setups on their home and/or work desks. And there are security reasons to put randomization (like ASLR and other ROP defense mechanisms) into our computer (runtime) setups in some cases. Would a higher degree of diversity in software make it harder for attacks to break a large number of systems? Maybe; I don't know which benefits outweigh the others there.

It's clear that this is a principle which works pretty decently in nature at a low level and for groups of people at a high level, and we definitely should embrace it there. At which layers of our software and hardware it's useful or detrimental is not always entirely clear, but it has to apply to personalizing our computer systems to our requirements, desires and wishes: we are all different, and that diversity needs to be reflected so we can use its strength to work together and improve this world.

Thanks to Gene Roddenberry and Star Trek in general for giving me something interesting to think about - and Happy 50th Birthday Star Trek!

September 08, 2016 09:52 PM

August 30, 2016

Calendar

GSoC 2016: Wrapping Up

It’s hard to believe it is already late August and this year’s Google Summer of Code is all wrapped up.  The past couple of months have really flown by.  In the previous post I summarized the feedback we received on the new UI design and discussed the work I’ve been doing to port the current UI (for editing events and tasks) to a tab.  In this post I’ll describe how to try out this new feature in a development version of Thunderbird, and give an update on the HTML implementation of the new UI design. In my next post I’ll share some thoughts on using React for the HTML UI.

To try out editing events and tasks in a tab instead of in a dialog window you’ll need a development version of Thunderbird (aka: “Daily”).  Since it is a development version you will want to use a separate profile and/or make sure your data is backed up.  Once you have that all set up, you can turn on the “event in a tab” feature with a hidden preference.  To access hidden preferences, go to Preferences > Advanced > Config Editor, and then search for “calendar.item.editInTab” and toggle it to true by double-clicking on it.
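For the curious, chrome code can check a hidden boolean preference like this one roughly as in the sketch below; this only illustrates the preferences API and is not necessarily how the patch itself is wired up.

    // Illustrative only: read the hidden pref and fall back to the dialog
    // window when it has not been set.
    Components.utils.import("resource://gre/modules/Services.jsm");

    let editInTab = false;
    try {
      editInTab = Services.prefs.getBoolPref("calendar.item.editInTab");
    } catch (e) {
      // Preference not set; keep the default dialog-window behavior.
    }

    if (editInTab) {
      // open the event or task editor in a tab
    } else {
      // open the classic dialog window
    }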

Or if that’s too much trouble you can just wait until it arrives in the next stable release of Thunderbird/Lightning.  In the meantime, here’s what it looks like (click to enlarge):

xul-ui-in-tab

The screenshot above shows the current XUL-based UI ported to a tab.  I ended up not having much time to work on the new HTML-based UI (actually only a week or so) and did not get as far on it as I’d hoped — only as far as a basic and preliminary implementation, a starting point for further development rather than something that can be used today.  For example, it does not yet support saving changes and not all of the data is loaded into the UI for a given event or task.

Some aspects do already work, like the responsive design where the UI changes to adapt to the width of the window, taking more advantage of the greater space available in a tab.  Here are two screen shots that show the wide and narrow views (click to enlarge).

html-ui-in-tab

html-ui-in-window

Even though the HTML UI is not ready for use yet, we decided to go ahead and land it in the code base as a work-in-progress for further development.  So if you are curious to see where it stands, it can also be turned on with a hidden preference (“calendar.item.useNewItemUI”) in a current development version of Thunderbird, as described above.  Again, be sure to use a separate profile and/or make sure your data is backed up.

For more technical details about the project, including some high-level documentation I wrote for this part of the code, see the meta bug, especially my comment #2 which summarizes the state of things as of the end of the Summer of Code period.

It was a great summer working on this project.  I learned a lot and enjoyed contributing.  As my time permits, I hope to continue to contribute and finish the implementation of the new UI.  Many thanks to Google, Mozilla, and especially to my mentors Philipp Kewisch (Fallen) and MakeMyDay for their guidance and tireless willingness to answer my questions and review code.  Also thanks to Richard Marti (Paenglab) for his help and feedback on the UI design work.

I wish there was another month of the official coding period to get the HTML implementation further along, but alas, so far we’ve only been able to help people manage their time, not actually generate more of it.

— Paul Morris

August 30, 2016 03:14 PM

August 25, 2016

Calendar

GSoC 2016: Where Things Stand

The clock has run out on Google Summer of Code 2016.  In this post I’ll summarize the feedback we received on the new UI design and the work I’ve been doing since my last post.

Feedback on the New UI Design

A number of people shared their feedback on the new UI design by posting comments on the previous blog post.  The response was generally positive.  Here’s a brief summary:

Thanks to everyone who took the time to share their thoughts.  It is helpful to hear different views and get user input.  If you have not weighed in yet, feel free to do so, as more feedback is always welcome.  See the previous blog post for more details.

Coding the Summer Away

A lot has happened over the last couple months.  The big news is that I finished porting the current UI from the window dialog to a tab.  Here’s a screenshot of this XUL-based implementation of the UI in a tab (click to enlarge):

xul-ui-in-tab

Getting this working in a really polished way took more time than I anticipated, largely because the code had to be refactored so that the majority of the UI lives inside an iframe.  This entailed using asynchronous message passing for communication between the iframe’s contents and its outer parent context (e.g. toolbars, menus, statusbar, etc.), whether that context is a tab or a dialog window.  While this is not a visible change, it was necessary to prepare the way for the new HTML-based design, where an HTML file will be loaded in the iframe instead of a XUL file.
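The message passing itself is ordinary DOM messaging between the iframe contents and whichever outer context hosts it. A minimal sketch with hypothetical message names (not the actual protocol used in the patch):

    // Inside the iframe: ask the outer context (tab or dialog window) to
    // update chrome that lives outside the frame, e.g. the title/toolbar.
    window.parent.postMessage({ command: "updateTitle",
                                title: "Team meeting" }, "*");

    // In the outer context: handle requests coming from the iframe.
    window.addEventListener("message", (event) => {
      // Real code would also check the message's source and origin first.
      if (event.data && event.data.command === "updateTitle") {
        document.title = event.data.title;
      }
    });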

Along with the iframe refactoring, there are also just a lot of details that go into providing an ideal user experience, all the little things we tend to take for granted when using software.  Here’s a list of some of these things that I worked on over the last months for the XUL implementation:

In the next two posts I’ll describe how to try out this new feature in a development version of Thunderbird, discuss the HTML implementation of the new UI design, and share some thoughts on using React for the HTML implementation.

— Paul Morris

August 25, 2016 08:22 PM

June 13, 2016

Calendar

GSoC 2016: Seeking Feedback on UI Design

As you can see on the Event in a Tab wiki page, I have created a number of mockups, labeled A through N, for the new UI for creating, viewing, and editing calendar events and tasks.  (This has given me a lot of practice using Inkscape!)  The final design will be implemented in the second phase of the project.  So far the revisions have been based on valuable feedback from Paenglab and MakeMyDay (thanks!), and we are now seeking broader feedback from users on the latest and greatest mockup “N” (click to view full size):

Event in a Tab

Event in a Tab, UI Design, Mockup “N”

Please take a look and send any feedback, comments, suggestions, questions, etc. to the calendar mailing list / newsgroup where we will be discussing the design, or you can leave a comment on this blog post, send a private email to mozilla@kewis.ch, or reach us via IRC (in Mozilla’s #calendar channel).

Here are some notes and details about the behavior of the proposed UI that are not apparent from a static image.

The mockup is intended as a relatively rough “wire frame” to show layout and it only approximates spacing, sizing, and aesthetic details. Unless otherwise noted, functionality is the same as in the current Lightning add-on.

A responsive design approach will be used to implement this UI in HTML. As the window expands horizontally, the elements will expand with it up to a breakpoint where the two-column “tab” layout goes into effect. Then the elements will continue to expand in both of the columns, up to a certain maximum limit at which they would expand no further. (Having this limit will keep things more focused on very wide monitors/windows.)

For vertical scrolling in a tab… Categories, Reminders, Attachments, Attendees, and Description can expand to take up as much vertical space as necessary to show all of their content. In most cases, where there are only a small number of these items, there will be enough room on the page to show them all without any scrolling. In less common cases where there are many items, the content of the tab will grow taller until it no longer fits vertically, and then the whole tab will become scrollable. (The toolbar at the top, with the buttons like “Save and Close,” will not scroll, remaining in place, still easily accessible.) This approach makes it possible to view all of the items at once when there are many of them (instead of having smaller boxes around each of these elements that are each independently scrollable).  This “whole tab scrolling” approach is how it works in Google Calendar.

For vertical scrolling in a dialog window…  When the contents of the tabbed box (Reminders, Attachments, Attendees, and Description) becomes too big to fit vertically, the tabbed box becomes scrollable.  (Suggestions are welcome for the name of the “More” tab in the window dialog.)

The mockup shows the new date/time picker that is being developed by Mozilla.  It remains to be seen whether it will be available in time for use in this project.  Another possibility is the date/time picker developed by Fastmail.

Progress Report on Coding

Besides working on the design for the UI, I have continued to work on porting the current event dialog UI to a tab.  I created a bug for this part of the first phase of the project, posted my first work-in-progress patch there, and am now working on the next iteration based on the feedback.

This work includes refactoring the current event dialog’s XUL file into more than one file to separate the main part of the UI from its menu bar, tool bar, and status bar items.  This more modular arrangement will make it possible to make the menu bar, tool bar, and status bar items appear in the correct places in the main Thunderbird window when displaying the UI in a tab.  This will solve the problem of the doubled status bar and menu bar in my first patch.

The next patch will also have a hidden preference (accessible via “about:config” but eventually to be added to Lightning’s preferences UI) that determines whether event and task dialogs are opened in a window or a tab by default.

So overall, things are progressing well, which is a good thing since there is only about a week or so left before the GSoC midterm milestone, and the goal is to have phase one of the project completed by that point.  After I have finished this initial “phase one” patch, and any follow-up work that needs to be done for it, we will reach a decision about whether to use XUL, Web Components, React.js, or “plain vanilla” HTML for the implementation of the new UI design, and then start working on implementing it.

— Paul Morris

June 13, 2016 08:17 PM

June 02, 2016

Calendar

GSoC 2016: First Steps

Time for a progress report after my first week or so working on the Event in a Tab GSoC project. Things are going well so far. In short, I have the current event and task dialogs opening in a tab rather than a window, and I can create and edit tasks and events in a tab. While not everything is working yet, most things already are.

The trickiest part has been working with XUL, since I am not as familiar with it as I am with Javascript. With some help from Fallen on IRC I figured out how to register a new XUL document that contains an iframe and how to load another XUL file into this iframe. For an event or task that is editable one XUL file is loaded (calendar-event-dialog.xul), but if it is read-only then a different XUL file is loaded (calendar-summary-dialog.xul).

Initially I used the tabmail interface’s “shared tab” option — where a single XUL file is loaded and then its appearance and content is modified to create the appearance of completely different tabs. (This is how Thunderbird’s “3-pane” and “single message” tabs work, and also Lightning’s “Calendar” and “Tasks” tab.) However, this did not work when you opened multiple events/tasks in separate tabs. So I figured out the tabmail interface’s other option which loads each tab separately as you would expect and everything is now working fine.

The next step was to figure out how to access the data for an event (or task) from the tab. I actually figured out two ways to do this. The first was via the tabmail interface in the way that it is set up to work (i.e. “tabmail.currentTabInfo”). That meant that the current event dialog code (that referenced the data as a property of the “window” object) had to be changed to access it from this new location.  But that is not so good since we will be supporting both window and tab options and it would be nice if the same code could “just work” for both cases as much as possible.

So I figured out a second way to provide access to the data by just putting it in the right place relative to the iframe, so that the current code could reach it without having to be modified (i.e. still as a property of the “window” object, but with the “window” being relative to the iframe). This is a better approach since the same code will work for both cases (events/tasks in a dialog window or in a tab).

One small thing I implemented via the tabmail interface is that the title of the tab indicates whether you are creating a new item or modifying an existing one and whether the item is an event or a task. However, I will probably end up re-working this because the current dialog window code updates the title of the window as you change the title of the event/task, and that code can probably also be used to generate the initial title of the tab. This is something I will be looking into as I start to really work with the event dialog code.

On the UI design side of things, I created three new mockups based on some more feedback from Richard Marti and MakeMyDay. Part of the challenge is that there are a number of elements that vary in size depending on how many items they contain (e.g. reminders, categories, attachments, attendees). Mockups K and L were my attempt at a slightly different approach for handling this, although we will be following the design of mockup J going forward. You can take a look at these mockups and read notes about them on the wiki page.

The next steps will be to push toward a more finalized design and seek broader feedback on it.  On the coding side I will be identifying where things are not working yet and getting them to work. For example, the code for closing a window does not work from a tab and the status bar items are appearing just above the status bar (at the bottom of the window) because of the iframe.

So far I think things are going well. It is really encouraging that I am already able to create and modify events and tasks from a tab and that most of the basic functionality appears to be working fine.

— Paul Morris

June 02, 2016 07:26 PM

May 23, 2016

Calendar

GSoC 2016: Getting Oriented

Today is the first day of the “coding period” for Google Summer of Code 2016 and I’m excited to be working on the “Event in a Tab” project for Mozilla Calendar. The past month of the “community bonding period” has flown by as I made various preparations for the summer ahead. This post covers what I’ve been up to and my experience so far.

After the exciting news of my acceptance for GSoC, I knew it was time to retire my venerable 2008 Apple laptop, which had gotten somewhat slow and “long in the tooth.” Soon, with a newly refurbished 2014 laptop from Ebay in hand, I made the switch to GNU/Linux, dual-booting the latest Ubuntu 16.04. Having contributed to LilyPond before, it felt familiar to fire up a terminal, follow the instructions for setting up my development environment, and build Thunderbird/Lightning. (I was even able to make a few improvements to the documentation – removed some obsolete info, fixed a typo, etc.) One difference from what I’m used to is using Mercurial instead of git, although the two seem fairly similar. When I was preparing my application for GSoC, my build succeeded but I only got a blank white window when opening Thunderbird. This time, thanks to some guidance from my mentor Philipp about selecting the revision to build, everything worked without any problems.

One of the highlights of the bonding period was meeting my mentors Philipp Kewisch (primary mentor) and MakeMyDay (secondary mentor). We had a video chat meeting to discuss the project and get me up to speed. They have been really supportive and helpful and I feel confident about the months ahead knowing that they “have my back.” That same day I also listened in on the Thunderbird meeting with Simon Phipps answering questions about his report on potential future legal homes for Thunderbird, which was an interesting discussion.

At this point I am feeling pretty well integrated into the Mozilla infrastructure after setting up a number of accounts – for Bugzilla, MDN, the Mozilla wiki, an LDAP account for making blog posts and later for commit access, etc. I got my feet wet with IRC (nick: pmorris), introduced myself on the Calendar dev team’s mailing list, and created a tracker bug and a wiki page for the project.

Following the Mozilla way of working in the open, the wiki page provides a public place to document the high-level details related to design, implementation, and the overall project plan. If you want to learn more about this “Event in a Tab” project, check out the wiki page.  It contains the mockup design that I made when applying for GSoC and my notes on the thinking behind it. I shared these with Richard Marti who is the resident expert on UI/UX for Thunderbird/Calendar and he gave me some good feedback and suggestions. I made a number of additional mockups for another round of feedback as we iterate towards the final design. One thing I have learned is that this kind of UI/UX design work is harder than it looks!

Additionally, I have been getting oriented with the code base and figuring out the first steps for the coding period, reading through XUL documentation and learning about Web Components and React, which are two options for an HTML implementation. It turns out there is a student team working on a new version of Thunderbird’s address book and they are also interested in using React, so there will be a larger conversation with the Thunderbird and Calendar dev teams about this. (Apparently React is already being used by the Developer Tools team and the Firefox Hello team.)

I think that about covers it for now. I’m excited for the coding period to get underway and grateful for the opportunity to work on this project. I’ll be posting updates to this blog under the “gsoc” tag, so you can follow my progress here.

— Paul Morris

May 23, 2016 08:07 PM

May 17, 2016

Calendar

Google Summer of Code 2016

It is about time for a new blog post. I know it has been a while and there are certainly some notable events I could have blogged about, but in today’s fast paced world I have preferred quick twitter messages.

The exciting news I would like to spread today is that we have a new Google Summer of Code student for this summer! May I introduce to you Paul Morris, who I believe is an awesome candidate. Here is a little information about Paul:

I am currently finishing my graduate degree and in my spare time I like to play music and work on alternative music notation systems (see Clairnote). I have written a few Firefox add-ons and I was interested in the “Event in a Tab” project because I wanted to contribute to Mozilla and to Thunderbird/Calendar which is used by millions of people and fills an important niche. It was also a good fit for my skills and an opportunity to learn more about using html/css/javascript for user interfaces.

Paul will be working on the Event in a Tab project, which aims to allow opening a calendar event or task in a tab, instead of in the current event dialog. Just imagine the endless possibilities we’d have with so much space! In the end you will be able to view events and tasks both in the traditional dialog and in a tab, depending on your preference and the situation you are in.

The project will have two phases, the first taking the current event dialog code and UI as is and making it possible to open it in a tab. The textboxes will inevitably be fairly wide, but I believe this is an important first step and gives users a workable result early on.

Once this is done, the second step is to re-implement the dialog using HTML instead of XUL, with a new layout that is made for the extra space we have in a tab. The layout should be adaptable, so that when the window is resized or the event is opened in a narrow dialog, the elements fall into place, just like you’d experience on a responsively designed website. You can read more about the project on the wiki.

Paul has already made some great UI mock-ups in his proposal, we will be going through these with the Thunderbird UI experts to make sure we can provide you with the best experience possible. I am sure we will share some screenshots on the blog once the re-implementation phase comes closer.

Paul will be using this blog to give updates about his progress. The coding phase is about to start on May 22nd after which posts will become more frequent. Please join me in welcoming Paul and wishing him all the best for the summer!


May 17, 2016 09:58 PM

May 16, 2016

Robert Kaiser

Tools I Wrote for Crash (Stats) Analysis

Now that I'm off the job that dominated my life (and almost burned me out) for the last years, I finally have some time again to blog. And I'll start with stuff I actually did for that job, as I still am happy to help others to continue from where I left.

The more fun part of the stability management job was actually creating new analyses - and tools. And those tools are still helpful to people working on crash analysis or crash stats analysis now - so as my last task on the job, I wrote some documentation for the tools I had created.

One of the first things I created (and which was part of the original job description when I started) was a prototype for detecting crash "explosiveness", i.e. a detector for crashes that are rising significantly in volume. This turned out to be quite helpful for me and others to use, and the newest reports of it are listed in my Report Overview. I probably should talk about it in more detail at some point, but I did write up a plan on the wiki for the tool, and the (PHP) code is on hg.m.o (that was the language I knew best and gave me the fastest result for a prototype). I had plans to port/rewrite it in python, but didn't get to it. Calixte, who is looking after most of "my" tools now, is working on that though, and I have already promised to review his work as a volunteer so we can make sure we have this helpful capability in better code (and hopefully better UI in the end) for future use.
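The actual algorithm is described in the wiki plan linked above; purely as an illustration of the general idea, a detector of this kind flags a crash signature whose current volume rises well above its recent baseline, for example:

    // Simplified illustration, not the real explosiveness algorithm: flag a
    // signature when today's count is several standard deviations above the
    // mean of the preceding days (with a small floor to ignore noise).
    function isExplosive(dailyCounts, threshold) {
      const today = dailyCounts[dailyCounts.length - 1];
      const baseline = dailyCounts.slice(0, -1);
      const mean = baseline.reduce((a, b) => a + b, 0) / baseline.length;
      const variance = baseline.reduce(
        (sum, c) => sum + (c - mean) * (c - mean), 0) / baseline.length;
      const stddev = Math.sqrt(variance);
      return today > mean + threshold * Math.max(stddev, 1);
    }

    // e.g. isExplosive([120, 115, 130, 118, 410], 3) === true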

In general, I have created one-line docs for all the PHP scripts I had in the Mercurial repository, and put them into the run-reports script that is called by a daily cron job. Outside of the explosiveness script, most of those have been obsoleted by Socorro Super Search (yay for Adrian's work and for the ElasticSearch backend!) nowadays.

Also, the scripts that generate the summed-up data for Are We Stable Yet dashboard and graphs (also see an older blog post discussing the graphs) have been ported to python (thanks Peter for helping me to get started there) - and those are available in the Magdalena repository on GitHub. You'll see that this repository doesn't just have more modern code, using python instead of PHP and the public Socorro API instead of private PostgreSQL access, it also has a decent README documenting what it and every script in it does. :)

The most important tools for people analyzing crash stats are in the Datil repository on GitHub (and its deployment on crash-analysis), though. I used all those 4 dashboards/tools daily in the last months to determine what to report to Release Managers and other parties, find out what we need to file as bugs and/or push to get fixed. Datil, like Magdalena, has good docs right in the repository now, readable directly on GitHub.

So, what's there?
Well, the aforementioned "Are We Stable Yet" dashboard and graphs, for sure (see the longtermgraph docs for what graphs you can get and a legend of what the lines mean).
There's also a tool/prototype for "what's important" weighed top crash lists that I called "Top Crash Score", see the score docs for what it does and examples on how to use that tool.
And finally, I created a search query comparison tool that let me answer questions like "which crashes happen more with or without multi-process support (e10s) being active?" or "which crashes have vanished with the new beta and which have appeared (instead)?" - which was incredibly helpful, to me at least. Read the searchcompare docs for more details and examples.

I probably won't spend a lot of time on those tools any more, neither in usage nor in development, but I'm still happy about people using them and giving me feedback, and I'm also happy to review and merge pull requests that make sense to me!

May 16, 2016 08:33 PM

April 06, 2016

Meeting Notes

Thunderbird: 2016-04-05

Thunderbird notes 2016-04-05. NOON PT (Pacific). For meeting time, previous notes and call-in details see https://wiki.mozilla.org/Thunderbird/StatusMeetings

Attendees

marcoagpinto, aceman, ba, Jorg K, Paenglab, wsmwk, rkent, mkmelin, makemyday

MAIN FOCUS OF MEETING

  • release 45

Action items from last meetings

Current status / Announcements

Current Release Critical Issues

Leave critical bugs here until confirmed fixed. If confirmed, then remove.

blocking

  • bug 1224846 (esr45) – TEST-UNEXPECTED-FAIL | toolkit/components/osfile/tests/xpcshell/test_read_write.js
    • Fallen?
  • bug 1250723 (esr45) – startup CRASH: C-C TB ASSERTION: can’t be your own parent aka morkTable::AddRow (v45 topcrash)
    • rkent is taking this, reproducible result of exceeding limit of key size – FIXED, LANDED
  • bug 1251120 (esr45)- crash in nsMsgI18NConvertFromUnicode (v45 topcrash) – FIXED, LANDED
  • Bug 1162148 – Warning: Identity file /builds/.ssh/{ffxbld_rsa,tbirdbld_dsa} not accessible: No such file or directory (won’t be needed after esr38, fixed in 42) – status?
  • status TBD – bug 1182629 – update to 38.1.0 from 38.0.1 re-enables disabled Lightning
  • status TBD – bug 1176399 – Multiple master password when GMail OAuth2 is enabled
  • status TBD – bug 1176748 – fix main thread proxies to the migration code (jorgk and m_kato helped in past)
  • proxy (eg causing foxyproxy problems)
  • topcrash bug 1149287 is **31% of our crashes** – see below
  • bug 1211291 (esr38, unknown in esr45) – Folders are visible, but messages are not. (?related to bug 1211358 lightning chrome.manifest not updated in 38.3.0 ?)
    • nobody has a clue about what to do with this, leaning toward shipping 38.7.0 with no fix. (wsmwk) agree, ship without fixing it
  • Problems updating to 38.6 to do with Lightning bug 1249894 and its five duplicates. Two users attached their extensions folder.
    • Isn’t this the same as the one above, bug 1211291?

Version 45

  • tracking-tb45 flags: unfixed ?/+ – http://mzl.la/1PMzNiK
  • Items that may need to be checked and tested: gtk3, windows 10

Releases

  • Past
    • 45.0b1 2016-02-04 with GTK3 built but updates not enabled
    • 45.0b2 with GTK2 2016-02-19
    • 45.0b3 (shelved)
    • 38.6.0 2016-02-12
    • 38.7.0
    • 38.7.1 (shelved)
    • 38.7.2

Lightning

Past releases:

  • 4.0.7.2 (bundled)
  • 4.0.5.2 (AMO)

Upcoming releases:

  • 4.0.8 (bundled) (TB 38.6)
  • 4.7.0 – bug 1225778 (TB 45 tracking)

Round Table

Jorg K

  • Landed in last four weeks:
  • Awaiting review:

wsmwk

  • triage Neil review and NI requests
  • releases

rkent

  • donation website is “live” but procedures to report and spend income are not established. Meeting on Thursday 2016-04-07 with MoFo admin on this.
  • beginning to investigate bug 1260724 IMAP failing with godaddy servers in TB 38
  • investigated and landed bug 1250723 (esr45) – startup CRASH: C-C TB ASSERTION: can’t be your own parent aka morkTable::AddRow (v45 topcrash)
  • releases
  • I have a huge project to convert ExQuilla to JS (prototype for a possible similar conversion for Thunderbird) that will take the majority of my time for the next six months.

Question Time

rkent, there is no 64-bit version of TB 45 for Windows. It seems it will only come in 46+, but the next public release will be 52. 🙁 Is there a possibility of compiling 45 in 64-bit?

  • Someone needs to work on the release engineering to figure out how to enable those builds.

Help Wanted

  • Accessibility lead
  • People comfortable (not necessarily technical experts) with Core-type issues. Example: graphics, to guide bug 1195947 Thunderbird hardware acceleration (HWA) issues to resolution
  • Lead to run the donation campaign after TB 45 is released.

April 06, 2016 03:00 AM

March 21, 2016

Andrew Sutherland

Web Worker-assisted Email Visualizations using Vega

Faceted and overview visualizations

tl;dr: glodastrophe, the experimental entirely-client-side JS desktop-ish email app now supports Vega-based visualizations in addition to new support infrastructure for extension-y things and creating derived views based on the search/filter infrastructure.

Two of the dreams of Mozilla Messaging were:

  1. Shareable email workflows (credit to :davida).  If you could figure out how to set up your email client in a way that worked for you, you should be able to share that with others in a way that doesn’t require them to manually duplicate your efforts and ideally without you having to write code.  (And ideally without anyone having to review code/anything in order to ensure there are no privacy or security problems in the workflow.)
  2. Useful email visualizations.  While in the end, the only visualization ever shipped with Thunderbird was the simple timeline view of the faceted global search, various experiments happened along the way, some abandoned.  For example, the following screenshot shows one of the earlier stages of faceted search development where each facet attempted to visualize the relative proportion of messages sharing that facet.

faceted search UI prototype

At the time, the protovis JS visualization library was the state of the art.  Its successor, the amazing, continually evolving d3, has eclipsed it.  d3, being a JS library, requires someone to write JS code.  A visualization written directly in JS runs into the whole code review issue.  What would be ideal is a means of specifying visualizations that is substantially more inert and easy to sandbox.

Enter, Vega, a visualization grammar that can be expressed in JSON that can not only define “simple” static visualizations, but also mind-blowing gapminder-style interactive visualizations.  Also, it has some very clever dataflow stuff under the hood and builds on d3 and its well-proven magic.  I performed a fairly extensive survey of the current visualization, faceting, and data processing options to help bring visualizations and faceted filtered search to glodastrophe and other potential gaia mail consumers like the Firefox OS Gaia Mail App.

Digression: Two relevant significant changes in how the gaia mail backend was designed compared to its predecessor Thunderbird (and its global database) are:

  1. As much as possible is done in a DOM/Web Worker(s).  This greatly assists in UI responsiveness.  Thunderbird has to do most things on the main thread because of hard-to-unwind implementation choices that permeate the codebase.
  2. It’s assumed that the local mail client may only have a subset of the messages known to the server, that the server may be smart, and that it’s possible to convince servers to support new functionality.  In many ways, this is still aspirational (the backend has not yet implemented search on server), but the architecture has always kept this in mind.

In terms of visualizations, what this means is that we pre-chew as much of the data in the worker as we can, drastically reducing both the amount of computation that needs to happen on the main (page) thread and the amount of data we have to send to it.  It also means that we could potentially farm all of this out to the server if its search capabilities are sufficiently advanced.  And/or the backend could cache previous results.

For example, in the faceted visualizations on the sidebar (placed side-by-side here):

faceted-histograms

In the “Prolific Authors” visualization definition, the backend in the worker constructs a Vega dataflow (only!).  The search/filter mechanism is spun up and the visualization’s data gathering needs specify that we will load the messages that belong to each conversation in consideration.  Then for each message we extract the author and age of the message and feed that to the dataflow graph.  The data transforms bin the messages by date, facet the messages by author, and aggregate the message bins within each author.  We then sort the authors by the number of messages they authored, and limit it to the top 5 authors which we then alphabetically sort.  If we were doing this on the front-end, we’d have to send all N messages from the back-end.  Instead, we send over just 5 histograms with a maximum of 60 data-points in each histogram, one per bin.

Same deal with “Prolific domains”, but we extract the author’s mail domain and aggregate based on that.
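
To make the shape of that pipeline concrete, here is a rough TypeScript restatement of the same bin/facet/aggregate/sort/limit steps as plain worker-side code. To be clear, this is not the actual Vega dataflow or glodastrophe’s visualization definition; the message fields, the one-day bin size, and the function name are invented for the sketch.

// Illustrative sketch only: the bin/facet/aggregate/sort/limit pipeline described
// above, written as plain code rather than a Vega dataflow. Field names, bin size,
// and the function name are assumptions, not glodastrophe's real schema.
interface Msg { author: string; date: Date; }

function prolificAuthorHistograms(msgs: Msg[], binMs = 24 * 3600 * 1000, topN = 5) {
  // Bin each message by date and facet by author.
  const byAuthor = new Map<string, Map<number, number>>();
  for (const m of msgs) {
    const bin = Math.floor(m.date.getTime() / binMs);
    const bins = byAuthor.get(m.author) ?? new Map<number, number>();
    bins.set(bin, (bins.get(bin) ?? 0) + 1);
    byAuthor.set(m.author, bins);
  }
  const total = (bins: Map<number, number>) =>
    [...bins.values()].reduce((sum, n) => sum + n, 0);
  // Keep the top N authors by message count, then sort those alphabetically.
  const top = [...byAuthor.entries()]
    .sort((a, b) => total(b[1]) - total(a[1]))
    .slice(0, topN)
    .sort((a, b) => a[0].localeCompare(b[0]));
  // Only these few small histograms cross the worker/page boundary, not the N messages.
  return top.map(([author, bins]) => ({ author, bins: [...bins.entries()] }));
}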

Authored content size overview heatmap

Similarly, the overview Authored content size over time heatmap visualization sends only the aggregated heatmap bins over the wire, not all the messages.  Elaborating, for each message body part, we (now) compute an estimate of the number of actual “fresh” content bytes in the message.  Anything we can detect as a quote or a mailing list footer or multiple paragraphs of legal disclaimers doesn’t count.  The x-axis bins by time; now is on the right, the oldest considered message is on the left.  The y-axis bins by the log of the authored content size.  Messages with zero new bytes are at the bottom, massive essays are at the top.  The current visualization is useless, but I think the ingredients can and will be used to create something more informative.
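
Again purely as an illustration of the worker-side aggregation (the real code, bin counts, and field names differ), the heatmap reduces to something like the following, where only the cells travel over the wire.

// Illustrative sketch, not the actual implementation: aggregate messages into
// (time bin, log-size bin) cells in the worker so only the cells are sent to the
// page. Bin counts, bin sizes, and the freshBytes field are assumptions.
interface AuthoredMsg { date: Date; freshBytes: number; }

function heatmapCells(msgs: AuthoredMsg[], newest: Date,
                      xBins = 60, xBinMs = 24 * 3600 * 1000, yBins = 10) {
  // cells[x][y]: x bins by time (newest on the right), y bins by log of authored size.
  const cells = Array.from({ length: xBins }, () => new Array<number>(yBins).fill(0));
  for (const m of msgs) {
    const age = Math.max(0, newest.getTime() - m.date.getTime());
    const x = xBins - 1 - Math.min(xBins - 1, Math.floor(age / xBinMs));
    const y = Math.min(yBins - 1, Math.floor(Math.log10(1 + m.freshBytes))); // 0 bytes => bottom row
    cells[x][y]++;
  }
  return cells; // xBins * yBins numbers instead of every message
}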

Other notable glodastrophe changes since the last blog post:

More to come!

March 21, 2016 09:28 AM

March 09, 2016

Meeting Notes

Thunderbird: 2016-03-08

Thunderbird notes 2016-03-08. NOON PT (Pacific). For meeting time, previous notes and call-in details see https://wiki.mozilla.org/Thunderbird/StatusMeetings

Attendees

marcoagpinto, rkent, Paenglab, ba, Jorg K, aceman, mkmelin, wsmwk, MakeMyDay, Fallen

Action items from last meetings

Current status / Announcements

Current Release Critical Issues

Leave critical bugs here until confirmed fixed. If confirmed, then remove.

blocking

  • bug 1224846 (esr45) – TEST-UNEXPECTED-FAIL | toolkit/components/osfile/tests/xpcshell/test_read_write.js
    • Fallen?
  • bug 1250723 (esr45) – startup CRASH: C-C TB ASSERTION: can’t be your own parent aka morkTable::AddRow (v45 topcrash)
    • rkent is taking this, reproducible result of exceeding limit of key size
  • bug 1251120 (esr45)- crash in nsMsgI18NConvertFromUnicode (v45 topcrash)
    • We really need a reproducible case, individual messages don’t do the trick. (wsmwk reproduces 100%)
  • bug 1211291 (esr38, unknown in esr45) – Folders are visible, but messages are not. (?related to bug 1211358 lightning chrome.manifest not updated in 38.3.0 ?)
    • nobody has a clue about what to do with this; leaning toward shipping 38.7.0 with no fix. (wsmwk) agree, ship without fixing it
  • Bug 1162148 – Warning: Identity file /builds/.ssh/{ffxbld_rsa,tbirdbld_dsa} not accessible: No such file or directory (won’t be needed after esr38, fixed in 42) – status?
  • Problems updating to 38.6 to do with Lightning bug 1249894 and its five duplicates. Two users attached their extensions folder.
    • Isn’t this the same as the one above, bug 1211291?

important (but not top critical)

  • status TBD – bug 1182629 – update to 38.1.0 from 38.0.1 re-enables disabled Lightning
  • status TBD – bug 1176399 – Multiple master password when GMail OAuth2 is enabled
  • status TBD – bug 1176748 – fix main thread proxies to the migration code (jorgk and m_kato helped in past)
  • proxy (eg causing foxyproxy problems)
  • topcrash bug 1149287 is 31% of our crashes – see below

Version 45

  • tracking-tb45 flags: unfixed ?/+ – http://mzl.la/1PMzNiK
  • Items that may need to be checked and tested: gtk3, windows 10

Releases

  • Past
    • 44.0b1 2016-01-18
    • 45.0b1 2016-02-04 with GTK3 built but updates not enabled
    • 45.0b2 with GTK2 2016-02-19
    • 38.6.0 2016-02-12

Lightning

Past releases:

  • 4.0.5.1 (bundled)
  • 4.0.5.2 (AMO)

Upcoming releases:

  • 4.0.6 (bundled) (TB 38.6)
  • 4.7.0 – bug 1225778 (TB 45 tracking)

Round Table

wsmwk

  • filed new crashes
    • bug 1253855 – crash in nsContentIterator::NextNode (v45 potential topcrash)
    • bug 1251120 – crash in nsMsgI18NConvertFromUnicode (v45 topcrash)
  • sorting related support issues:
    • bug 1246269 – Old Thunderbird versions are not updating.
    • bug 1249216 – SUMO landing page for “Help” could be better
    • bug 1239386 – Thunderbird start page is outdated

Jorg K

  • Landed:
    • bug 1233935 – Bustage fix (nsAutoPtr – take 2)
    • bug 1250010 – M-C: Type-in style not applied to empty paragraphs
    • bug 1251408 – External images not displayed in reply/forward
    • bug 1251783 – Forwarding multipart message with bad structure doesn’t detect charset
    • bug 1254596 – Reply with selection doesn’t work any more (regression from the DOM windows stuff)
  • Awaiting review:

clokep [not in meeting]

rkent

  • Making our case to Mozilla
  • Leadership

Help Wanted

  • Accessibility lead
  • Person comfortable (not necessarily a technical expert) with Core-type issues. Example: graphics, to guide bug 1195947 Thunderbird hardware acceleration (HWA) issues to resolution

March 09, 2016 04:00 AM

February 01, 2016

Andrew Sutherland

An email conversation summary visualization

We’ve been overhauling the Firefox OS Gaia Email app and its back-end to understand email conversations.  I also created a react.js-based desktop-ish development UI, glodastrophe, that consumes the same back-end.

My first attempt at summaries for glodastrophe was the following:

old summaries; 3 message tidbits

The back-end derives a conversation summary object from all of the messages that make up the conversation whenever any message in the conversation changes.  While there are some things that are always computed (the number of messages in the conversation, whether there are any unread messages, any starred/flagged messages, etc.), the back-end also provides hooks for the front-end to provide application logic to do its own processing to meet its UI needs.

In the case of this conversation summary, the application logic finds the first 3 unread messages in the conversation and stashes their date, author, and extracted snippet (if any) in a list of “tidbits”.  This also is used to determine the height of the conversation summary in the conversation list.  (The virtual list is aware of a quantized coordinate space where each conversation summary object is between 1 and 4 units high in this case.)
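
A minimal sketch of that kind of application-logic hook might look like the following; the actual gaia mail back-end API, the field names, and the exact 1–4 unit height rule are assumptions made for illustration, consistent with the description above.

// Minimal sketch of a front-end-supplied summarizer; not the real back-end hook.
interface SummaryMsg { author: string; date: Date; snippet?: string; read: boolean; starred: boolean; }

function summarizeConversation(msgs: SummaryMsg[]) {
  // The first 3 unread messages become "tidbits" (date, author, snippet).
  const tidbits = msgs
    .filter(m => !m.read)
    .slice(0, 3)
    .map(m => ({ date: m.date, author: m.author, snippet: m.snippet ?? "" }));
  return {
    messageCount: msgs.length,            // always computed
    hasUnread: msgs.some(m => !m.read),
    hasStarred: msgs.some(m => m.starred),
    tidbits,
    // Quantized height in the virtual list: 1 base unit plus 1 per tidbit, capped at 4.
    heightUnits: Math.min(4, 1 + tidbits.length),
  };
}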

While this is interesting because it’s something Thunderbird’s thread pane could not do, it’s not clear that the tidbits are an efficient use of screen real-estate.  At least not when the number of unread messages in the conversation exceeds the 3 we cap the tidbits at.

time-based thread summary visualization

But our app logic can actually do anything it wants.  It could, say, establish the threading relationship of the messages in the conversation to enable us to make a highly dubious visualization of the thread structure in the conversation as well as show the activity in the conversation over time.  Much like the visualization you already saw before you read this sentence.  We can see the rhythm of the conversation.  We can know whether this is a highly active conversation that’s still ongoing, or one that someone has just brought back to life.

Here’s the same visualization where we still use the d3 cluster layout but don’t clobber the x-position with our manual-quasi-logarithmic time-based scale:

the visualization without time-based x-positioning
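
The post only describes the x-scale as “manual quasi-logarithmic”, but the general idea of compressing older messages toward one edge can be sketched like this (purely illustrative; the function name and constants are invented, not the actual scale used above):

// Purely illustrative: give recent activity most of the horizontal space and let
// older messages bunch up toward the left. Not the actual glodastrophe scale.
function quasiLogX(msgDate: Date, newest: Date, width: number, maxAgeMs: number): number {
  const age = Math.max(0, newest.getTime() - msgDate.getTime());
  const t = Math.log1p(age) / Math.log1p(maxAgeMs); // 0 = newest, 1 = oldest
  return width * (1 - t); // newest messages land on the right edge
}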

Disclaimer: This visualization is ridiculously impractical in cases where a conversation has only a small number of messages.  But a neat thing is that the application logic could decide to use textual tidbits for small numbers of unread and a cool graph for larger numbers.  The graph’s vertical height could even vary based on the number of messages in the conversation.  Or the visualization could use thread-arcs if you like visualizations but want them based on actual research.

If you’re interested in the moving pieces in the implementation, they’re here:

February 01, 2016 11:29 AM

January 27, 2016

Meeting Notes

Thunderbird: 2016-01-26

Thunderbird notes 2016-01-26. NOON PT (Pacific). For meeting time, previous notes and call-in details see https://wiki.mozilla.org/Thunderbird/StatusMeetings

Attendees

  • rkent, ba, jcranmer, wsmwk, Jorg K, marcoagpinto, mkmelin, aceman, sshagarwal (audio only), Paenglab, MakeMyDay

MAIN FOCUS OF MEETING

  • version 45
  • governance, futures
  • builds
  • bug 1211160 – Why is updates with calendar add-on messing with Thunderbird?
    • I (rkent) hope to respond to review comments today.

Action items from last meetings

  • Thunderbird Council reorganization: we’ll discuss this in the Council directly and do some reorganization. This is not a forever governance plan, but we need to be practical given the many demands of the moment.
    • I (rkent) sent a proposal to the Council two weeks ago, now I need to badger a few people to give responses and decisions. Gerv has a process of voting to approve the slate using tb-planning that he will be running.
  • we should really clean up the module owners and peers at https://wiki.mozilla.org/Modules/MailNews_Core and https://wiki.mozilla.org/Modules/Thunderbird. Candidates for removal: Bienvenu, standard8, sid0, JosiahOne.
    • jcranmer??

Current status / Announcements

Current Release Critical Issues

Leave critical bugs here until confirmed fixed. If confirmed, then remove.

blocking

  • 38.3.1 bug 1211160 calendar 4.0.3.1 is a workaround for 38.3.0 — Thunderbird Version 38.3 Buttons not working menu items // bug 1211291 – Folders are visible, but messages are not. (?related to bug 1211358 lightning chrome.manifest not updated in 38.3.0 ?)
    • rkent has patch, see above

important (but not top critical)

  • status TBD – bug 1182629 – update to 38.1.0 from 38.0.1 re-enables disabled Lightning
  • status TBD – bug 1176399 – Multiple master password when GMail OAuth2 is enabled
  • status TBD – bug 1176748 – fix main thread proxies to the migration code (jorgk and m_kato have done such fixes in the past)
  • filelink, proxy,
  • topcrash bug 1149287 is 31% of our crashes – see below

Version 45

  • tracking-tb45 flags: unfixed ?/+ – http://mzl.la/1PMzNiK
  • Items that may need to be checked and tested: gtk3, windows 10, windows 64bit?
  • hardware acceleration – not looking promising (no changes / no testing in recent months)

Releases

  • Past
    • 38.5.1
    • 43.0b1 2015-12-31
    • 44.0b1 2016-01-18

Lightning

Past releases:

  • 4.0.5.1 (bundled)
  • 4.0.5.2 (AMO)

Upcoming releases:

  • 4.0.6 (bundled) (TB 38.6)
  • 4.7.0 – bug 1225778 (TB 45)

Round Table

Jorg K

  • Landed:
    • bug 1175839 – JSMime regression, square brackets not quoted correctly.
    • bug 1231917 – funny artefacts when replying to a saved message.
    • bug 1235205 – dictionary selection problem.
    • bug 1239658 – Override charset carried over to the next message processed (part 1).
    • bug 1240903 – Links broken in compose (regression)
    • bug 301712 – M-C – More en-US dictionary clean-up, removed 4000 uncommon names.
    • bug 1241480 – Bustage fix.
  • Awaiting review:
    • bug 597369 – Override charset carried over to the next message processed (part 2).
  • Ongoing:
    • Investigating why drafts folder gets corrupted every few months bug 1216914
  • Other work: Aurora Landings

jcranmer

  • Review queue is still backlogged
    • Most complicated review at this point is the JSAccount stuff
  • Looking at JS-ification concerns, see mdat/tb-planning post
    • Current status of JS protocol libraries:
      • SASL: Complete, except for NTLM, GSSAPI, and EXTERNAL (not that it’s easy to pull in)
      • Email-socket: prototyping along with NNTP client
      • NNTP client: I want to test replacing our C++ implementation with this once JSAccount is finished
      • JSMime: I have untested prototypes for message composition (see below)
  • Working on improving JS send implementation
    • Current idea is to try to use an internal interface to break the patches into smaller sets
  • PSA: if you’re touching anything in nsMsgSend or nsMsgCompUtils (or anything else in mailnews/compose except for nsMsgCompose), PLEASE ADD AN XPCSHELL TEST (or a mozmill test if not possible via xpcshell)… if you don’t, it will probably break within a year!

sshagarwal

  • Apologies for being invisible for quite some time
  • Completed the feature request for persisting the delivery format status in drafts: bug 1202165
  • Will be working on Address book visibility issues in AB window and composition sidebar

aceman

  • We need a decision on bug 584313 from a mailnews peer on whether it can go in (it was looked at by Neil and the comments were resolved)
    • Magnus make the call if needed.
  • Bug 1202165 is ready in code (needs review from jcranmer, who gave r- due to a test failure). I’m just working on some additional mozmill tests.

Question Time

When will we get 64 bit Thunderbird builds on Windows. They are built right now for Daily, for example:
http://ftp.mozilla.org/pub/thunderbird/nightly/2015/12/2015-12-15-03-02-04-comm-central/thunderbird-46.0a1.en-US.win64.installer.exe
See bug 634233

Other

European meet-up dates:
TB is looking for events/dates in the March to May timeframe that would allow the team to meet with possible European partners in Europe. BA to send an e-mail to the TB team with upcoming European events.

Help Wanted

January 27, 2016 04:00 AM

January 26, 2016

Instantbird

Facebook Chat Issues

Recently we’ve received many questions from users whose Facebook Chat accounts will no longer connect in Instantbird.

Facebook officially deprecated its XMPP gateway on April 30th, 2015, but it continued to function until recently. Unfortunately it appears this is no longer the case.

After investigating the issue, we have been unable to find a workaround to keep it working. However, since Facebook deprecated the gateway, it’s surprising it even worked for this long!

Meanwhile, libpurple has a new protocol plugin that uses Facebook’s proprietary chat API, which we are considering offering as part of a future release of Instantbird.

Stay tuned!

January 26, 2016 09:37 PM

January 23, 2016

Matt Harris

Outlook.com / Office365 Imap subscribed folders disappear, difficulty in subscribing

Over the past couple of days it has become apparent that there has been an issue with IMAP accounts hosted on office365 and outlook.com.  Support has received a number of complaints of subscribed folders disappearing from Thunderbird and attempts to re-subscribe failing.

A workaround has been identified: turn off the Thunderbird option "Show only subscribed folders".

To turn off this option:
  1. Right click the account in the folder pane.
  2. Select the menu entry Settings.
  3. In the server settings for the account, select the Advanced button.
  4. In the advanced account settings dialog, un-check the option "Show only subscribed folders".


----o0O0o----


Microsoft have now acknowledged the issue as EX41924 and are posting updates here. At the time of writing this post, the latest update (update 4) suggests that a code solution has been developed and is currently being deployed across the office365 and outlook.com web sites to remediate the failure their previous patch caused. Affected users that wish to follow the Microsoft support thread on community.office365.com can find it here.

While it is unfortunate, there is nothing the Thunderbird community can do here other than offer the workaround until Microsoft resumes normal service.




January 23, 2016 04:48 AM

December 16, 2015

Meeting Notes

Thunderbird: 2015-12-15

Thunderbird notes 2015-12-15. NOON PT (Pacific). For meeting time, previous notes and call-in details see https://wiki.mozilla.org/Thunderbird/StatusMeetings

Attendees

  • jorgk, marcoagpinto, ba, mkmelin, makemyday, aceman

MAIN FOCUS OF MEETING

  • version 45
  • governance, futures (rkent: nothing I want to volunteer, but I can answer questions).
  • builds

Action items from last meetings

Current status / Announcements

Current Release Critical Issues

Leave critical bugs here until confirmed fixed. If confirmed, then remove.

blocking

  • 38.3.1 bug 1211160 calendar 4.0.3.1 is a workaround for 38.3.0 — Thunderbird Version 38.3 Buttons not working menu items // bug 1211291 – Folders are visible, but messages are not. (?related to bug 1211358 lightning chrome.manifest not updated in 38.3.0 ?)
  • nightly (3 bugs blocking builds, all are assigned) – bug 1195442 – Mac builds broken ~1 month on nightly
  • Fixed – bug 1183490 – (dataloss) New emails do not adhere to sort by order received
    • Plan is to understand issues in converting completely to nextKey = ++lastKey for non-IMAP, then plan how much to implement for 38.3.0. A simple fix of the dataloss is probable. I have a try patch that does the nextKey = ++lastKey and works; now I need to think about a patch for 38 (bug 1202105 needs to go to trunk and 38)
    • update 2015-10-20, patches have been posted but waiting for reviews for 2 weeks (aceman will look at those)
    • update 2015-11-17, review done 11-14, need to update and land.

important (but not top critical)

  • status TBD – bug 1182629 – update to 38.1.0 from 38.0.1 re-enables disabled Lightning
  • status TBD – bug 1176399 – Multiple master password when GMail OAuth2 is enabled
  • status TBD – bug 1176748 – fix main thread proxies to the migration code (jorgk and m_kato have done such fixes in the past)
  • filelink, proxy,
  • topcrash bug 1149287 is 31% of our crashes – see below

Version 45

  • tracking-tb45 flags: unfixed ?/+ – http://mzl.la/1PMzNiK
  • Items that may need to be checked and tested: gtk3, windows 10, windows 64bit?
  • hardware acceleration – not looking promising (no changes / no testing in recent months)
  • jsaccount should come in aurora

Releases

Lightning

Past releases:

Upcoming releases:

Round Table

Jorg K

  • Landed eight bugs:
    • bug 769604 – font size toolbar button and change increase/decrease size function.
    • bug 1225904 – CJK e-mail with long lines gets corrupted
    • bug 653342 – CJK fix serializer flags, support delsp=yes, tie all CJK issues together. Automatically fixes bug 26734
    • bug 1230968 – CJK Missing bit for bug 26734
    • bug 1230970 – CJK Regression from CJK stuff, the only one so far
    • bug 330891 – allow

      to be entered in composition

    • bug 232021 – Copy full name and e-mail from message pane (trivial)
    • bug 1231017 – Flowing lost in inline plain text replies
  • Hunted down two regressions: bug 1231654 and bug 1231887. Looked at Korean auto-complete bug 1058491
  • Awaiting review:
    • bug 1175839 – JSMime regression, square brackets not quoted correctly.
    • bug 1231917 – funny artefacts when replying to a saved message.

rkent

  • I really have got to focus on JsAccount for the next month, as it is absolutely critical for me to get this landed in TB38.
  • Many thanks to those who got tests working in TB 45, and have been uplifting test fixes to aurora and beta.

aceman

  • practicing check ins 🙂
  • fixing some bugs in tests so that the tree is greener, like 1231036, 998710, 1232485

Question Time

When will we get 64 bit Thunderbird builds on Windows. They are built right now for Daily, for example:
http://ftp.mozilla.org/pub/thunderbird/nightly/2015/12/2015-12-15-03-02-04-comm-central/thunderbird-46.0a1.en-US.win64.installer.exe

Other

ba

Regarding the question on Enigmail/p≡p, the installation will be a single executable file that will include everything if it is installed as an add-on. In Microsoft Outlook, a similar p≡p plug-in installation takes 5 clicks and 10 seconds without asking a single question. Patrick/Volker will provide more input on this via the wiki once it is set up.

Help Wanted

December 16, 2015 04:00 AM

December 08, 2015

Thunderbird Blog

Thunderbird Active Daily Inquiries Surpass 10 Million!

We are pleased to report that Thunderbird usage, as reported through the standard Mozilla metric of Active Daily Inquiries (ADI), surpassed 10 million users per day on Monday, November 30, 2015, for the first time ever.

Thunderbird Active Daily Inquiries graph, showing new record of 10,000,000

ADI is a raw measurement of active users, and is taken by counting the daily requests from Thunderbird users for updates to the plugin blocklist. This measure under-counts active users for a variety of reasons (such as firewalls, or users that do not use Thunderbird every day). Based on more detailed studies with other applications, a typical multiplier applied to ADI to estimate total active users is 2.5. So the best estimate of current active users is 25,000,000.

Thunderbird Celebrates its 11th Birthday

Eleven years ago, on December 7 2004, Mozilla announced in a blog post the birth of Thunderbird. Happy Birthday, Thunderbird!

String-freeze for Thunderbird 45 on December 14

The Thunderbird development team is working hard on the next major release of Thunderbird, version 45, which is due for release in March of 2016. String freeze for new features is this weekend. Over 1000 code commits have been made to the main Thunderbird code repository in preparation for this release (in addition to the tens of thousands of commits to the Mozilla platform repository that Thunderbird uses as its base).

Mozilla Foundation as (Temporary) Thunderbird Home

Coincidentally on the same date as the new ADI record, in a post to a public Mozilla discussion forum, Mozilla Chairperson Mitchell Baker outlined some upcoming changes in the relationship of Mozilla to Thunderbird.

In the administrative part of that post, Mitchell announced that the Mozilla Foundation under Mark Surman has been working with Thunderbird to provide at least a temporary legal and financial home for the Thunderbird project (which we have been sorely lacking for several years). At the same time, a formal process will be undertaken to determine what is the best long-term home for Thunderbird, which might be Mozilla or might be some other entity.

Practically what this means is that in 2016, Thunderbird will finally be able to accept donations from users directed toward the update and maintenance of Thunderbird. In the long run, Thunderbird needs to rely on our users for support, and not expect to be subsidized by revenue from Firefox. We welcome this help from the Mozilla Foundation in moving toward our goal of developing independent sources of income for Thunderbird.

In the technical part of that post, Mitchell reiterated that Mozilla needs to be laser-focused on Firefox, and that the burden this places on Thunderbird (as well as the burden that Thunderbird places on Firefox) is leading to unacceptable outcomes for both projects. The most immediate need is for the Thunderbird release infrastructure to be independent of that used by Firefox, and Mozilla has offered to help. In the long-term, there will be additional technical separation between Firefox and Thunderbird as a continuation of a process that has been ongoing for the last three years.

 

December 08, 2015 11:49 PM

December 02, 2015

Meeting Notes

Thunderbird: 2015-12-01

Thunderbird notes 2015-12-01. NOON PT (Pacific). For meeting time, previous notes and call-in details see https://wiki.mozilla.org/Thunderbird/StatusMeetings

Attendees

  • jorgk, fallen, ba, wsmwk, mkmelin, MakeMyDay, rkent, marcoagpinto, roker

MAIN FOCUS OF MEETING

  • version 45
  • governance, futures
  • builds

Action items from last meetings

  • Thunderbird Council reorganization: we’ll discuss this in the Council directly and do some reorganization. This is not a forever governance plan, but we need to be practical given the many demands of the moment.

Current status / Announcements

Current Release Critical Issues

Leave critical bugs here until confirmed fixed. If confirmed, then remove.

blocking

  • 38.3.1 bug 1211160 calendar 4.0.3.1 is a workaround for 38.3.0 — Thunderbird Version 38.3 Buttons not working menu items // bug 1211291 – Folders are visible, but messages are not. (?related to bug 1211358 lightning chrome.manifest not updated in 38.3.0 ?)
  • nightly (3 bugs blocking builds, all are assigned) – bug 1195442 – Mac builds broken ~1 month on nightly
  • rkent is working on it – bug 1183490 – (dataloss) New emails do not adhere to sort by order received
    • Plan is to understand issues in converting completely to nextKey = ++lastKey for non-IMAP, then plan how much to implement for 38.3.0. A simple fix of the dataloss is probable. I have a try patch that does the nextKey = ++lastKey and works; now I need to think about a patch for 38 (bug 1202105 needs to go to trunk and 38)
    • update 2015-10-20, patches have been posted but waiting for reviews for 2 weeks (aceman will look at those)
    • update 2015-11-17, review done 11-14, need to update and land.

important (but not top critical)

  • status TBD – bug 1182629 – update to 38.1.0 from 38.0.1 re-enables disabled Lightning
  • status TBD – bug 1176399 – Multiple master password when GMail OAuth2 is enabled
  • status TBD – bug 1176748 – fix main thread proxies to the migration code (jorgk and m_kato have done such fixes in the past)
  • filelink, proxy,
  • topcrash bug 1149287 is 31% of our crashes – see below

Version 45

  • tracking-tb45 flags: unfixed ?/+ – http://mzl.la/1PMzNiK
  • Items that may need to be checked and tested: gtk3, windows 10, windows 64bit?
  • hardware acceleration – not looking promising (no changes / no testing in recent months)
  • jsaccount should come in aurora

Releases

  • Past
    • 38.0.1 nominally shipped 2015-06-12, 38.1.0 shipped 2015-07-10, 31.8.0 shipped 2015-07-17, 40.0beta shipped 2015-07-27 (skipping 39.0b), 38.2.0 shipped 2015-08-14, 41.0b1 2015-09-08 build2 (missed ~2015-08-10)
    • 41.0b2 (with uplifts for 38.3.0)
    • 38.3.0 ~2015-09-25 (throttled on 2015-10-06)
    • 42.0b2 2015-10-13 (42.0b1 abandoned)
    • 42.0b3 ~2015-09-23 (skipped)
    • 38.4.0 2015-11-25 (quite late) – many release notes issues
  • Upcoming – https://wiki.mozilla.org/RapidRelease/Calendar#Future_branch_dates

Lightning

Past releases:

Upcoming releases:

Round Table

Jorg K

  • Landed six patches
    • bug 1219928 – M-C spell checker problem
    • bug 586587 + one more – M-C drag and drop onto editor window
    • bug 1226537 – C-C spell check in subject problem
    • bug 1214377 – M-C editor problem, copy/paste – “white-space: pre;”
    • bug 1225864 – M-C CJK root cause of problem in the serialiser
  • Awaiting review:
    • bug 769604 – C-C font size toolbar button and change increase/decrease size function.
    • bug 1225904 – C-C CJK e-mail with long lines gets corrupted
    • bug 653342 – C-C CJK fix serializer flags, support delsp=yes, tie all CJK issues together. Automatically fixes bug 26734
  • (Attended Mozilla Berlin “Tech Meeting”: Firefox OS, Telemetry, Servo/Rust)

aceman

Assisting Thomas D. to drive the patches to fix determining the send format (plain/HTML) of outgoing messages:

wsmwk

rkent

  • I was away for a full week for Thanksgiving, just back today, and have an ear infection as well, so it has not been a good week. Responding to the pEp proposals and Mitchell’s post are my main priorities at the moment.

Friends of the tree

Action Items

Help Wanted

December 02, 2015 04:00 AM

November 18, 2015

Meeting Notes

Thunderbird: 2015-11-17

Thunderbird notes 2015-11-17. NOON PT (Pacific). For meeting time, previous notes and call-in details see https://wiki.mozilla.org/Thunderbird/StatusMeetings

Attendees

  • jorgk, Pegasus, ba, wsmwk, mkmelin, MakeMyDay, rkent

FOCUS OF MEETING

(to be covered in sections below)

  • version 45
  • governance, futures
  • builds

Action items from last meetings

  • Thunderbird Council reorganization: we’ll discuss this in the Council directly and do some reorganization. This is not a forever governance plan, but we need to be practical given the many demands of the moment.

Current status / Announcements

  • Need to release 38.4 and 43.0b1
    • Please help green up the tree – including not only c-c but also esr38, beta, and aurora
      • We must be aggressive with checkins when patches become available. Thoughts: if you file a bug, please consider fixing it, or pass it off to someone rather than rely on “the fates”. If it breaks the build or tests, please set severity=blocker
  • Thunderbird 45: string freeze per schedule, interface freeze later (that’s me demanding a concession like last year).

Critical Issues

Leave critical bugs here until confirmed fixed. If confirmed, then remove.

blocking

  • 38.3.1 bug 1211160 calendar 4.0.3.1 is a workaround for 38.3.0 — Thunderbird Version 38.3 Buttons not working menu items // bug 1211291 – Folders are visible, but messages are not. (?related to bug 1211358 lightning chrome.manifest not updated in 38.3.0 ?)
  • nightly (3 bugs blocking builds, all are assigned) – bug 1195442 – Mac builds broken ~1 month on nightly
  • rkent is working on it – bug 1183490 – (dataloss) New emails do not adhere to sort by order received
    • Plan is to understand issues in converting completely to nextKey = ++lastKey for non-IMAP, then plan how much to implement for 38.3.0. A simple fix of the dataloss is probable. I have a try patch that does the nextKey = ++lastKey and works; now I need to think about a patch for 38 (bug 1202105 needs to go to trunk and 38)
    • update 2015-10-20, patches have been posted but waiting for reviews for 2 weeks (aceman will look at those)
    • update 2015-11-17, review done 11-14, need to update and land.

important (but not top critical)

  • status TBD – bug 1182629 – update to 38.1.0 from 38.0.1 re-enables disabled Lightning
  • status TBD – bug 1176399 – Multiple master password when GMail OAuth2 is enabled
  • status TBD – bug 1176748 – fix main thread proxies to the migration code (jorgk and m_kato have done such fixes in the past)
  • filelink, proxy,
  • topcrash bug 1149287 is 31% of our crashes – see below

Releases

  • Past
    • 38.0.1 nominally shipped 2015-06-12, 38.1.0 shipped 2015-07-10, 31.8.0 shipped 2015-07-17, 40.0beta shipped 2015-07-27 (skipping 39.0b), 38.2.0 shipped 2015-08-14, 41.0b1 2015-09-08 build2 (missed ~2015-08-10)
    • 41.0b2 (with uplifts for 38.3.0)
    • 38.3.0 ~2015-09-25 (throttled on 2015-10-06)
    • 42.0b2 2015-10-13 (42.0b1 abandoned)
  • Upcoming

Lightning

Past releases:

Upcoming releases:

Round Table

Jorg K

  • bug 1174452 – M-C editor problem, copy/paste – “white-space: pre;” – patch proposed in bug 1214377; landed and sadly got reopened, awaiting review again.
  • bug 586587 – M-C drag and drop onto editor window – awaiting review.
  • bug 769604 – C-C font size toolbar button and change increase/decrease size function – awaiting review.
  • bug 1219928 – M-C spell checker problem – awaiting review.
  • Started looking at CJK problems in flowed plaintext messages: bug 26734, bug 653342.
  • Got hit by bug 1224840 – Assertion failure: IsOuterWindow(), fixed one occurrence.

rkent

  • Thunderbird’s future plans
    • Pending letter from Mitchell on Thunderbird and MoCo
    • Big issues for Thunderbird future: 1) cost of transition 2) sources of income
      • 1) Volker Birk & another senior architect will work with you to estimate the cost of transition in man-days and $, as they have lots of experience in setting up and migrating complex systems.
      • 2) We have been talking to DigitalCourage about setting up a donation based system and we as the p≡p Foundation will fund Thunderbird until the donation levels are sufficient for a sustained future.
    • Even bigger issues for the future: articulating a vision, collecting partners: p≡p, Postbox, Chinese fork, Gaia email, TDF/Collabora, MoFo, ???
      • We see these points as elements that need to be worked out in the six-month period that we have agreed with MoFo. As for collecting partners, we consider this an outcome of the Business Strategy that will be defined in the next 6 months. We, the p≡p Foundation, will start discussions with TDF. In my opinion, ‘Collecting Partners’ could require quite a large time commitment and it should hence be very well planned.
    • Status of MoFo-Thunderbird and Thunderbird-p≡p discussions
      • p≡p Foundation is waiting for MoFo to come back with final comments. In the meantime, I’ll be waiting for Kent’s input to align.
    • My concern for developing capacity of Thunderbird to be self-managed and focused.
    • Planning a Europe meetup in January, focused on a business plan for January – June 2016
      • We are happy to set it up here in Europe.

Question Time

  • new account provisioner: Are there currently any providers other than Gandi? Would it be worth trying to find some more?
  • We need a plan for the build system fairly urgently, given the apparent “no” to the c-c/m-c merge

Help Wanted

November 18, 2015 04:00 AM

October 30, 2015

Mike Conley

A printing story and a PSA: outparams over the IPC layer might not behave like you’d expect

Here’s a story about a printing bug on OS X, and a lesson about how our IPC layer works.

Last week, :ehsan came up to my desk and said “Mike… printing is broken on OS X with e10s enabled. Did you know this?”

It was sad to hear. Nobody, as far as I knew, had been touching printing code (besides bobowen, but he hadn’t landed his patches yet), which meant that some mystery change had landed and it was my responsibility (as the one who had implemented printing on OS X for e10s) to fix it.

The first step was to confirm it. Yep, it looked like I couldn’t print on my Nightly. Crap1. So we get the bug filed, and then I fired up my local build and lldb, and tried to trace out where the problem was occurring.

Drilling down to the regressing changeset

The problem was that my local build printed just fine. Eyebrow raised, I tried using my default profile with my local build to see if something about my profile was causing printing to break. Printing continued to work with my local build2.

Scratching my head, I used mozregression to bisect the problem down to a single changeset.

Here’s the bug it spat out:

Bug 1209930 Update Mac clang to match the version in use everywhere else

The hair stood up on the back of my neck. Printing got broken by a compiler upgrade? This had bad news written all over it.

Many try builds

So that, I guess, explained why I couldn’t reproduce the problem locally. I needed to build with a particular compiler in order to experience the bug.

The bad news was that I found out that the version of clang that we’d upgraded to didn’t work on my version of OS X (10.10.5). It was supposed to work on machines running OS X 10.7, but I didn’t have access to any3.

I was able to get debug builds off of try, but I had no luck convincing lldb to let me step through it, despite having the symbols and the right revision checked out4.

So this meant a lot of try builds. The good news is that I was able to add a bunch of logging to try, and just kinda walk away while it built. A few hours later, I’d have my build, and I’d get my results. I wasn’t blocked waiting on it to build, so I could work on other things.

What did the logging reveal?

Our printing code sometimes shows this progress dialog when printing starts. It’s put up so that while we’re laying out the page for printing, the user knows that stuff is happening. We show this progress dialog on Linux and Windows, but not on OS X (I guess OS X users don’t expect such a progress window… this was a decision made way back in the day).

The printing engine in the content process sends up a message to the parent saying, “I’m all set to print, let’s show the progress dialog!”. On OS X, the parent responds, “Uh, nope” (which is a return code of NS_ERROR_NOT_IMPLEMENTED), and the child is supposed to go, “Oh okay, I’ll just print right away”.

But here’s the kicker – along with the message to show the progress dialog, the child sends an outparam5. That outparam is sent because for some platforms that show the progress dialog, we want to wait until the dialog closes before starting the actual print. On other platforms, we may just want to start printing right away when laying out the page finishes without waiting for the dialog to close.

That outparam is a bool initialized to a value of false in the print engine, and in the non-e10s case where we return “Sorry, no progress dialog” with NS_ERROR_NOT_IMPLEMENTED, that bool stays false, and when it stays false, it means that we start printing right away.

My logging showed that for some reason, that value was getting flipped to true despite not being touched by (seemingly) anything in the IPC sender / receiver nor the print progress dialog backend (which still just returns NS_ERROR_NOT_IMPLEMENTED when asked to open the printing dialog).

What the hell?

Here’s the PSA

Ehsan helped me dig into what was going on, and we figured it out. Here’s the big lesson / PSA here:

outparams over IPC, when untouched on the receiver side, get filled with values from uninitialized memory when returned to the sender.

I’ll give you an example. Here’s some sorta-pseudocode:

#include <stdio.h>

void someFunc(bool* aOutParam) {
}

int main(int argc, char** argv) {
  bool myBool = false;
  someFunc(&myBool);
  if (myBool) {
    printf("Wait, WHAT?");
    return 1;
  }
  return 0;
}

We should never enter that “Wait, WHAT?” conditional block because myBool remains false throughout. It’s initialized to false, and even though we pass a pointer to it to someFunc, someFunc never touches it, so it stays false, and so we skip the conditional and return 0 as expected.

If, however, you did the same thing, but over IPC… well, now you’re in for a fun treat.

On the receiver side of the IPC message, here’s a chunk of the generated code that calls into RecvShowPrintProgress on the parent side:

            bool notifyOnOpen;
            bool success;
            int32_t id__ = mId;
            if ((!(RecvShowProgress(mozilla::Move(browser), mozilla::Move(printProgressDialog), mozilla::Move(isForPrinting), (&(notifyOnOpen)), (&(success)))))) {
                mozilla::ipc::ProtocolErrorBreakpoint("Handler for ShowProgress returned error code");
                return MsgProcessingError;
            }

            reply__ = new PPrinting::Reply_ShowProgress(id__);

            Write(notifyOnOpen, reply__);
            Write(success, reply__);
            (reply__)->set_sync();
            (reply__)->set_reply();

That notifyOnOpen bool is the one that is the outparam in the child. Notice that it’s not being initialized to any value! That means its current value could be, well, anything. When we call into RecvShowProgress, we pass a pointer to that anything value. If RecvShowProgress doesn’t do anything with that pointer (like in the OS X case), well… that random value is what gets written to the IPC message that gets sent back to the child.

And somehow a clang upgrade made it far more likely that this value would evaluate to TRUE as a boolean instead of FALSE.

And so the child assumed that it’d have to wait for a dialog to close before starting to print – a dialog that would never open, because we don’t show it on OS X.

So the patch I eventually wrote initializes the outparam to false before calling into the OS X widget code that just returns NS_ERROR_NOT_IMPLEMENTED, and that fixed the bug.

It’s a small patch, but in my mind, it’s a big lesson, at least for me. I had assumed that outparams would work the same over IPC as they do in the same process, and that’s simply not the case.

I’ve filed bug 1220168 to try to make this sort of thing easier to spot in the future.


  1. Our automated testing story for printing is pretty bad. We test some of the built-in UI for printing, but as for actually sending stuff to printer drivers… zero automated tests, which explains why this had gone uncaught for so long. 

  2. If you’re wondering, I wasn’t printing out reams of paper to test this. XCode comes with a Printer Simulator, which is like a PDF printer, except that I can get a better sense of the communications being sent to the printer 

  3. Okay, there’s one in the office, but apparently trying to build on it is a slow nightmare 

  4. Happy to hear and share how to do this if someone will show me 

  5. Read this about passing multiple values back from a method call if you’re curious about what outparams are 

October 30, 2015 03:37 PM

October 04, 2015

Mike Conley

The Joy of Coding – Ep’s 23 – 29

Wow! I’ve been away from this blog for too long. I also haven’t posted any new episodes for The Joy of Coding. I also haven’t been keeping up with my Things I’ve Learned posts.

Time to get back in the saddle. First things first, here are 6 episodes of The Joy of Coding that have aired. Unfortunately, I haven’t put together summaries for any of them, but I’ve put their agendas near the videos so that might give some clues.

Here we go!

Episode 23

Agenda

Episode 24

Agenda

Episode 25

Agenda

Episode 26

Agenda

Episode 27

Agenda

Episode 28

Agenda

Episode 29

Agenda

October 04, 2015 04:43 PM

September 27, 2015

Rumbling Edge - Thunderbird

2015-09-27 Calendar builds

Common (excluding Website bugs)-specific: (41)

Sunbird will no longer be actively developed by the Calendar team.

Windows builds Official Windows

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac (2015-08-12 build only, due to bug 1195442)

September 27, 2015 08:19 PM

2015-09-27 Thunderbird comm-central builds

Thunderbird-specific: (89)

MailNews Core-specific: (35)

Windows builds Official Windows, Official Windows installer

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac (2015-08-12 build only, due to bug 1195442)

September 27, 2015 08:18 PM

August 26, 2015

Meeting Notes

Thunderbird: 2015-08-25

Thunderbird meeting notes 2015-08-25. NOON PT (Pacific). For meeting time, previous notes and call-in details see https://wiki.mozilla.org/Thunderbird/StatusMeetings

Attendees

  • wsmwk, aceman, fallen, makemyday, jorgk, rkent, mkmelin, clokep

Action items from last meetings

  • Get release documentation into wiki

Current status and discussions

Critical Issues

Leave critical bugs here until confirmed fixed. If confirmed, then remove.

  • needs assignee after solution identified – bug 1196662 not checking mails after hibernation, caused by core bug 1178890 (and we may be seeing other issues from 1178890, like hangs)
  • aceman is assignee – bug 1183490 – (dataloss) New emails do not adhere to sort by order received
  • status TBD – bug 1182629 – update to 38.1.0 from 38.0.1 re-enables disabled Lightning
  • status TBD – bug 1176399 – Multiple master password when GMail OAuth2 is enabled
  • status ??? – bug 1176748 – Someone needs to add additional main thread proxies to the migration code (jorgk and m_kato have in previous years been the major drivers of fixes)
  • topcrash bug 1149287 is 31% of our crashes – see below

removing from critical list/fixed:

Critical for TB42 and beyond:
bug 1193200 – Blocker: Permissions manager/Remote content exceptions

Releases

  • Past
    • 38.0.1 nominally shipped 2015-06-12
    • 38.1.0 shipped 2015-07-10
    • 31.8.0 shipped 2015-07-17
    • 40.0beta shipped 2015-07-27 (skipping 39.0b)
    • 38.2.0 shipped 2015-08-14
  • Upcoming
    • 38.3.0 ~2015-09-23
    • 41.0beta ~2015-08-10
    • 42.0beta ~2015-09-26

Lightning

Past releases:

Upcoming releases:

Updates:

  • Still need help rewriting SUMO help articles on Lightning
  • Need to fix bug 1194997 for calendars disabling themselves

Round Table

Jorg K

  • bug 772796, bug 1174452 – M-C Editor problems to do with pre and “white-space: pre;”, currently stalled.
  • bug 368915 – change language in subject (language button), ready to land.
  • bug 1197687 – Font menu again, waiting for Neil.
  • bug 1020181 – Set spell check language for recipient (published add-on as quick fix).

wsmwk

  • developers need to be thinking about landing stuff now, and in the next 2 months, because version 45 moves to aurora Dec 14
  • HWA – collected more data and filed bug 1195947 Track HWA issues
  • crashes
    • 38.2.0 crash rate ~0.35, dramatically reduced from ~.5 for 38.1.0 https://crash-stats.mozilla.com/daily?p=Thunderbird likely mostly due to disabling HWA. Only ~6 GPU crash signatures in top 200 and the highest rank is 80 (vs several in top 20 for 38.2.0), so most GPU driver crashes seem to be gone (but not all of the crash rate reduction will have been just from disabling HWA)
    • topcrash bug 1149287 is 31% of crashes (1) and needs much work because it’s got multiple causes. Some reports implicate McAfee, some are not “shutdown” because user is hung or waiting on some prompt or protocol issue (only then does user try to close and crashes). (speculating) Are some caused by timer issues from bug 1178890? See the comments http://tinyurl.com/nzwkgaq (1) https://crash-stats.mozilla.com/topcrasher/products/Thunderbird/versions/38.2.0
    • lacking time to follow up on other signatures in the “new” topcrash rankings (but that’s OK because we’re very well off compared to 38.1.0)
  • following progress of Thunderbird web pages moving to bedrock – they are close to being done
  • beware – ishikawa in the next several weeks will be landing big changes from many (>12?) bugs, both core and Thunderbird, for IO performance and error checking, eg bug 1116055

Fallen

  • Patched a lot of build issues, together with aleth

rkent

Code:

  • I think we have made progress on maildir issues, I hope to get two critical issues fixed before the next release.

I’m finally starting to turn more of my attention to management issues rather than urgent TB 38 problems. In the long run, those are very important, but the urgent easily crowds out the important. Kudos to those trying to get the tree usable again!

We could talk about:

  • Our thoughts on the announced changes to addons (Including not only
    • require signed vs unsigned?
      • no upside? i.e. we have no identified exploits
      • need to wait for enterprise impacts to be resolved
      • concerned that because the “unsigned process” will no longer be used by AMO, it may impact the ability to push out unsigned Thunderbird addons using the script they used to mass-sign Firefox’s unsigned addons
    • XPCOM/XUL deprecation
      • need to make sure we are involved in the conversation and decisions so that developers have what they need in the “new environment”
  • Patrick is happy to do Enigmail integration into Thunderbird, I think we should agree. comments?
  • Joshua, Suyash, and I are working on the SkinkGlue (now JsAccount) integration, which will be the basis of both JMAP and CARDDAV support. -> consider using https://github.com/gaye/dav for carddav support
  • Volker Birk of Pretty Easy Privacy is *very* serious about heavily engaging his Foundation with Thunderbird. Concerns? http://pep-project.org/
  • Then there is the legal and financial home issue …
    • Expect a full-court press on MoFo by me soon; if you have contacts there, please advise. One thing that is clear to me after talking to people is how critical Thunderbird is in the greater software world. We DO have a place at the table! How to get MoFo to see that?
    • We have submitted a formal application to Software Freedom Conservancy to affiliate with them, now we wait for their evaluation queue.
    • Very interesting talks with The Document Foundation (TDF)/LibreOffice. #1 concern: All of our energy goes into keeping up with Mozilla to allow us to optimize some future code development that we never actually have time for. Their funding model is very interesting, would be worth discussing.
    • hoping to meet with postbox in a few weeks
  • It would be really great if someone could lead efforts to prototype a fund-raising drive web page in-product. (hoping to target Oct/Nov)

aceman

  • looking at bug 1183490 (the order received regression), 2 options:
    1. back out bug 854798. Should be safe. I recommend this.
    2. finally drop msgkey=offset in mbox and just increment by 1. May be risky but there is already a precedent that msgs moved by filters already get key+1 so code expecting key==offset should have already been uncovered. However, there are still some unsure places, fixing which means basically bug 793865 (the 64bit envelope and msgkey!=offset cleanup). I would not recommend this for ESR.
  • finally looking to finish some feature patches (compose font selector, saved files tab, show map widget)

Question Time

Other

  • PLEASE PUT THE NEXT MEETING IN YOUR (LIGHTNING) CALENDAR
  • Note – meeting notes must be copied from etherpad to wiki before 5AM CET next day so that they will go public in the meeting notes blog.

Help Wanted

August 26, 2015 03:00 AM

August 25, 2015

Thunderbird Blog

Thunderbird and end-to-end email encryption – should this be a priority?

In the last few weeks, I’ve had several interesting conversations concerning email encryption. I’m also trying to develop some concept of what areas Thunderbird should view as our special emphases as we look forward. The question is, with our limited resources, should we strive to make better support of end-to-end email encryption a vital Thunderbird priority? I’d appreciate comments on that question, either on this Thunderbird blog posting or the email list tb-planning@mozilla.org.

"I took an oath to defend the constitution, and I felt the Constitution was being violated on a massive scale" SnowdenIn one conversation, at the “Open Messaging Day” at OSCON 2015, I brought up the issue of whether, in a post-Snowden world, support for end-to-end encryption was important for emerging open messaging protocols such as JMAP. The overwhelming consensus was that this is a non-issue. “Anyone who can access your files using interception technology can more easily just grab your computer from your house. The loss of functionality in encryption (such as online search of your webmail, or loss of email content if certificates are lost) will give an unacceptable user experience to the vast majority of users” was the sense of the majority.

Woman In Handcuffs

In a second conversation, I was having dinner with a friend who works as a lawyer for a state agency involved in white-collar crime prosecution. This friend also thought the whole Snowden/NSA/metadata thing had been blown out of proportion, but for a very different reason. Paraphrasing my friend’s comments, “Our agency has enormous powers to subpoena all kinds of records – bank statements, emails – and most organizations will silently hand them over to me without you ever knowing about it. We can always get metadata from email accounts and phones, e.g. e-mail addresses of people corresponded with, calls made, dates and times, etc. There is a lot that other government employees (non NSA) have access to just by asking for it, so some of the outrage about the NSA’s power and specifically the lack of judicial oversight is misplaced and out of proportion precisely because the public is mostly ignorant about the scope of what is already available to the government.”

So in summary, the problem is much bigger than the average person realizes, and other email vendors don’t care about it.

There are several projects out there trying to make encryption a more realistic option. In order to change internet communications to make end-to-end encryption ubiquitous, any protocol proposal needs wide adoption by key players in the email world, particularly by client apps (as opposed to webmail solutions where the encryption problem is virtually intractable.) As Thunderbird is currently the dominant multi-platform open-source email client, we are sometimes approached by people in the privacy movement to cooperate with them in making email encryption simple and ubiquitous. Most recently, I’ve had some interesting conversations with Volker Birk of Pretty Easy Privacy about working with them.

Should this be a focus for Thunderbird development?

August 25, 2015 09:27 AM

August 17, 2015

Rumbling Edge - Thunderbird

Use funfuzz to find new, unique security bugs in Mozilla for bounty rewards

Do you have spare computer cycles, and would like to help find security bugs in Mozilla products? If you discover new and unique security issues, you may be able to earn bounties within guidelines!

Recently, the Mozilla Platform Fuzzing team released funfuzz (fuzzers & fuzzing harness) and lithium (an updated version of a line-based reducer) on GitHub:

  1. funfuzz
    1. https://github.com/MozillaSecurity/funfuzz
    2. Repository of fuzzers and harness scripts to run them
    3. Jesse Ruderman wrote an excellent blogpost
    4. Components of funfuzz:
      1. jsfunfuzz – js fuzzer
      2. domfuzz – DOM fuzzer
      3. compareJIT – runs with different runtime flags and compares output
      4. randorderfuzz – adds in random tests from repository to jsfunfuzz
      5. compileShell – compiles js shells
      6. autoBisect – bisects Mercurial repositories to find regressors
  2. lithium

We have in-tree documentation to help you get started on your way to find new, unique security bugs.

Quick-start guide:

  1. Ensure you have build prerequisites installed
  2. Clone both repositories side-by-side (adjacent to each other)
    • e.g. into ~/lithium and ~/funfuzz
  3. Clone the Mercurial mozilla-central repository.
    • e.g. into ~/trees/mozilla-central
  4. Start the loopBot script!
    1. Example command:
      • python -u funfuzz/loopBot.py -b "--random" -t "js" --target-time 28800 | tee ~/log-loopBotPy.txt
    2. Use `-t “js”` to test SpiderMonkey shells only, `-t “dom”` for only Firefox DOM
    3. More documentation here.

Notes:

  1. The harness should work on most common platforms, e.g. Windows, Linux and Mac, as well as on EC2.
  2. Fuzzing uses a large amount of system resources, so it is recommended not to use the computer heavily while it is fuzzing.
  3. Until FuzzManager integration arrives, the lists of known bugs are in:
    1. assertion failures
    2. crashes
  4. For SpiderMonkey `-t "js"` mode, if you find an unknown crash or assertion failure, there are several files to look for in the wtmp1 subfolder:
    1. *-reduced.js files usually contain a partially-reduced testcase
    2. *-orig.js files are the original unreduced testcase
    3. *-summary.txt shows the runtime flags needed to trigger the bug
    4. *-crash.txt files contain the crash stacktrace
    5. *-err.txt files contain stderr output
    6. *-out.txt files contain stdout output
    7. *-autobisect.txt files contain bisection information
    8. build-source.txt files contain the information on shell build type
    9. Follow the guidelines as listed in the “Claiming a bug bounty” section of the bug bounty document
  5. In case things go wrong, kill all the relevant Python processes.
    • Example command that kills all running Pythons on machine:
      1. $ killall python # Linux
      2. $ killall Python # Mac

Where you can help:

  1. Run funfuzz with dynamic analysis tools
    • ASan
      • Works on Mac
      • May have issues on Linux, especially EC2 VMs
    • Valgrind
    • TSan, LSan, UBSan not integrated yet
      • Volunteers welcome!
  2. Add to our fuzzers
  3. Improve our fuzzing harness
    • File an issue if something does not work
    • Send us a pull request for improvements!
  4. Help out in other Mozilla Security projects

Note that the final bounty reward amounts are at the discretion of the bounty committee. Help us fuzz our way to a safer Gecko for everyone!

(This is part of a new category of posts related to fuzzing. Fuzzing is used extensively to find bugs, regressions and security issues in Gecko, which Firefox, Firefox OS and Thunderbird are based on.)

Edit: Tweaked wordings throughout.

August 17, 2015 09:27 PM

July 29, 2015

Matt Harris

LogJam and Thunderbird.

Recently the Firefox core developers patched the LogJam vulnerability in Firefox. As Thunderbird shares code with Firefox at a low level, Thunderbird inherited the patch, and it made its largely unannounced debut in Thunderbird 38.1.

Are you affected? The easiest way to check is to look in the Error Console (Ctrl+Shift+J).

The error message is quite distinctive: it complains that the server is offering a weak, ephemeral Diffie-Hellman key.

What this means is that any server using SSL/TLS with 512-bit encryption keys is not going to work with the updated Thunderbird. These keys have a long history: they were introduced in the 1990s to meet US export restrictions on ciphers. By the year 2000 those restrictions had been lifted, but by then the use of the so-called export cipher suites was well entrenched.

Now, 15 years on, when it was assumed basically no one would be using these obsolete suites, up pops a security vulnerability in them and we find that the "if it ain't broke don't fix it" approach to things is alive and well on the Internet. Here on the bleeding edge of technology, many large organisations, such as the NSA, are still using these obsolete and inherently insecure cipher suites.

There is a short-term workaround for those using Thunderbird: installing the add-on Disable DHE. This add-on is listed on the add-ons site as being for Firefox, but it will install in Thunderbird if you download it and drag it over the add-ons entries in the Add-ons tab. This is not a long-term solution; you are still at risk of a man-in-the-middle attack while using it. But it gives breathing time to make arrangements for new key pairs to be generated for the server. You should contact the server administrator or your mail provider to make these arrangements.

I cannot say it better than the team that found the vulnerability, so the following is extracted from their website:
____________________________________________ 
What should I do? 
If you run a server… 
If you have a web or mail server, you should disable support for export cipher suites and generate a unique 2048-bit Diffie-Hellman group. We have published a Guide to Deploying Diffie-Hellman for TLS with step-by-step instructions. If you use SSH, you should upgrade both your server and client installations to the most recent version of OpenSSH, which prefers Elliptic-Curve Diffie-Hellman Key Exchange.
If you use a browser… 
Make sure you have the most recent version of your browser installed, and check for updates frequently. Google Chrome (including Android Browser), Mozilla Firefox, Microsoft Internet Explorer, and Apple Safari are all deploying fixes for the Logjam attack.
If you’re a sysadmin or developer …
Make sure any TLS libraries you use are up-to-date, that servers you maintain use 2048-bit or larger primes, and that clients you maintain reject Diffie-Hellman primes smaller than 1024-bit.
____________________________________________ 

July 29, 2015 10:02 AM

Meeting Notes

Thunderbird: 2015-07-28

Thunderbird meeting notes 2015-07-28. NOON PT (Pacific). Check https://wiki.mozilla.org/Thunderbird/StatusMeetings for meeting time conversion, previous meeting notes and call-in details

Attendees

wsmwk, fallen, paenglab, sshagarwal, mkmelin, florian, makemyday

Action items from last meetings

  • asked releng to enable updates to 38.1.0 for 31.* users

Current status and discussions

Critical Issues

Leave critical bugs here until confirmed fixed. If confirmed, then remove.

  • status TBD bug 1176399 Multiple requests for master password when GMail OAuth2 is enabled
  • Someone needs to add additional main thread proxies to the migration code.
  • bug 1131879 – Disable hardware acceleration (HWA) is back on the table

removing from critical list/fixed:

  • status TBD bug 1174797 Add a cookie exception for GMail OAuth

Releases

  • Past
    • 38.0.1 nominally shipped 2015-06-12
    • 38.1.0 shipped 2015-07-10
    • 31.8.0 shipped 2015-07-17
  • Upcoming
    • 40.0beta almost done (skipping 39.0b)
    • 38.2.0 (~ 2015-08-10)
    • 41.0beta (~ 2015-08-10)

Lightning

Past releases:

Upcoming releases:

Updates:

  • (do we still need this??) As underpass has pointed out repeatedly (thanks for your patience!) , we need to rewrite / heavily modify the lightning articles on support.mozilla.org. let me know irc: rolandtanglao on #tb-support-crew or rtanglao AT mozilla.com OR simply start editing the articles
  • We’ve doubled our active daily users

Round Table

wsmwk

  • focus on 38.* updates and stability
    • HWA issues bug 1131879
    • new topcrash bug 1187764 mozilla::layers::SyncObjectD3D11::FinalizeFrame()
    • new topcrashes (win7) DrawingContext::FillRectangle(D2D_RECT_F const*, ID2D1Brush*), win8/8.1 D2DDeviceContextBase<T>::FillRectangle(D2D_RECT_F const*, ID2D1Brush*)
    • proxy crashes – need to reblocklist?
    • emailed/IRCed trackerbird addon author abustany about version 38 topcrash on opensuse
  • everything in SVN/php needs to move to Bedrock before end of Q3 deadline bug 836461

clokep

  • Met with Florian and aleth last week in France:
    • Instantbird will now use Gecko version numbering
      • “Chat Core” component now has Instantbird 42 (and 43 and 44) as target versions. These match Thunderbird 42 / 43 / 44.
      • We will not go back and fix things from previous periods, but I did fix anything committed since the 41 merge.
  • Skype landed (preffed off! flip chat.prpls.prpl-skype.disable) with text-chat only
    • Test it if you use Skype, it’s missing a lot of features still: bug 953999
  • The next part of nhnt11’s GSoC project from *last year* landed! (Logs are now split across days in a saner fashion: bug 1025522)
  • Lots of XMPP fixes (including fixing issues when connecting to Slack)

Jorg K (won’t attend)

  • bug 209189 – C-C: shift delete, undo -> corruption, finished with Kent’s help
  • bug 368915 – C-C: change language in subject, waiting for Magnus.
  • bug 345852 – M-C: persdict stuff for Firefox, landed.
  • bug 772796 – M-C: Editor destroys preformatting, TB papercut, under investigation.

sshagarwal

  • On the last stage of completing demo protocol.
    • Stuck at the Folder pane issue for quite a long time now.

Question Time

When is council elections? –> In a couple months. 1 year after initial team.

Other

  • PLEASE PUT THE NEXT MEETING IN YOUR (LIGHTNING) CALENDAR
  • Note – meeting notes must be copied from etherpad to wiki before 5AM CET next day so that they will go public in the meeting notes blog.

Action Items

  • Get release documentation into wiki

Help Wanted

July 29, 2015 03:00 AM

July 25, 2015

Rumbling Edge - Thunderbird

2015-07-24 Calendar builds

Common (excluding Website bugs)-specific: (16)

Sunbird will no longer be actively developed by the Calendar team.

Windows builds Official Windows

Linux builds Official Linux (i686), Official Linux (x86_64) (2015-07-22 builds)

Mac builds Official Mac

July 25, 2015 07:08 AM

2015-07-24 Thunderbird comm-central builds

Thunderbird-specific: (24)

MailNews Core-specific: (21)

Windows builds Official Windows, Official Windows installer

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

July 25, 2015 07:05 AM

July 21, 2015

Kent James

Is Mozilla an Open Source Project?

At the 2015 Community Leadership Summit, keynote speaker Henrik Ingo asked what he intended to be a trick question:

Everybody knows that Redhat is the largest open source company by revenue, with 1.5 billion dollars per year in revenue. What is the second largest open source company?

Community Leadership Summit 2015

It took a while before someone came up with the correct answer – Mozilla! Why is this a trick question? Because people don’t view Mozilla as an open source software company! Even in an open-source friendly crowd, people need to be reminded that Mozilla is open source, and not another Google or Apple. The “open source” brand is getting ever more powerful, with hot new technologies like OpenStack, Docker, and node.js adopting the foundation-owned open source model, while Mozilla seems to be drifting away from that image.

The main point of Henrik’s talk was that projects that are “open-source” while dominated by a single company show limited growth potential when compared to projects where there is an independent foundation without any single dominating company. Mozilla is an odd model, with a company that is dominated by a foundation (at least in theory). It seems though that these days, what has emerged is a foundation that is dominated by a company, exactly the model that Henrik claims limits growth. As that company gets more and more “professional” (acting like a company), it gets harder to perceive Mozilla to be anything other than another big tech company.

Something has changed at Mozilla, that I don’t really understand. Not that I have any inside knowledge (Thunderbird folks like me don’t get invited to large Mozilla gatherings any more), but is this really the brand image that Mozilla wants? I doubt it. Hopefully people smarter than me can figure out how to fix it, as there is still something about Mozilla that many of us love.

July 21, 2015 05:13 AM

July 15, 2015

Kent James

Fixing QuickText addon for Thunderbird

The popular QuickText addon has not been updated for Thunderbird 38 and no longer works. As a user of that addon, I wanted to make it work again. This post provides instructions on how to do that.

The addon has no license mentioned, and unfortunately that defaults to “all rights reserved”. That means that I cannot provide the modified source to download, but I can, under “Fair Use”, describe the needed changes, which you can make yourself. They are trivial (at least for my use case). I will describe the changes for the non-Pro version, but presumably they are the same for the Pro version. The only problem is that the template file is written in one format but is read in a different format, so that it does not work.

To edit the source, first you need to uncompress it. The QuickText .xpi file is just a renamed .zip file, so extract it with your favorite zip utility (I use 7-zip). Find the file named components/wzQuicktext.js. Then find the line (near line 570) that looks like this:

if (bomheader == "\xFF\xFE" || bomheader == "\xFE\xFF")

Modify that line by adding an additional condition to be this:

if (bomheader == "\xFF\xFE" || bomheader == "\xFE\xFF" || bomheader.length == 1)

That’s it! Now you just have to select all of the files in the addon and put them back into a ZIP archive renamed to .xpi.

I changed a few other meta details such as the version number and compatibility, so here is a link to the actual patch that I use:

http://mesquilla.net/releases/quicktext.patch

I’ll keep contacting the author trying to either get the official version modified, or a release that allows me to modify it. But you should be able to do this yourself and run a modified version.

July 15, 2015 05:01 PM

July 11, 2015

Mike Conley

The Joy of Coding (Ep. 20): Reviewin’ and Mystery Solvin’

After a two week hiatus, we’re back with Episode 20!

In this episode, I start off by demonstrating my new green screen [1], and then dive right into reviewing some code to make the Lightweight Theme web installer work with e10s.

After that, I start investigating a mystery that my intern ran into a few days back, where for some reason, preloaded about:newtab pages were behaving really strangely when they were loaded in the content process. Strangely, as in, the pages wouldn’t do simple things, like reload when the user pressed the Reload button.

Something strange was afoot.

Do we solve the mystery? Do we figure out what’s going on? Do we find a solution? Tune in and find out!

Episode agenda.

References

Bug 653065 – Make the lightweight theme web installer ready for e10s
Bug 1181601 – Unable to receive messages from preloaded, remote newtab page
@mrrrgn hacks together a WebSocket server implementation in Go. To techno!


  1. Although throughout the video, the lag between the audio and the video gets worse and worse – sorry about that. I’ll see what I can do to fix that for next time. 

July 11, 2015 04:29 PM

The Joy of Coding (Ep. 19): Cleaning up a patch

In this episode, I picked up a patch that another developer had been working on to try to drive it over the line. This was an interesting exercise in trying to take ownership and responsibility of something rather complex, in order to close a bug.

I also do some merging and conflict resolution with Mercurial in this episode.

Something else really cool happens during the latter half of this episode – I ask the audience for advice on how to clean up some state-machine transition logic in some code I was looking at. I was humming and hawing about different approaches, and put the question out to the folks watching: What would you do? And I got responses! 

More than one person contacted me either in IRC or over email and gave me suggestions on how to clean things up. I thought this was awesome, and I integrated a number of their solutions into the patch that I eventually put up for review.

Thanks so much to those folks for watching and contributing!

Episode agenda.

References

Bug 1096550 – Dragging tab from one window to another on different displays zooms in
Bug 863514 – Electrolysis: Make gesture support work

July 11, 2015 04:19 PM

June 17, 2015

Meeting Notes

Thunderbird: 2015-06-16

Thunderbird meeting notes 2015-06-16. NOON PT (Pacific). Check https://wiki.mozilla.org/Thunderbird/StatusMeetings for meeting time conversion, previous meeting notes and call-in details

Attendees

rkent, wsmwk, sshagarwal, jorgk, joes1, jcranmer, mkmelin, rolandtanglao, makemyday

Current status and discussions

Critical Issues

Leave critical bugs here until confirmed fixed. If confirmed, then remove.

  • see tb38 etherpad
  • tracking-tb38 flags – approved(+): http://tinyurl.com/nk3vbhl nominated(?): http://tinyurl.com/pgwflyg
  • Do we unthrottle updates for TB38?
    • suggest 10% for a couple days so we get more data, if there are no objections (wsmwk). we are roughly at 200k users or 1% of user population
    • JoeS1: FWIW Firefox unthrottles immediately to 25%, I think we should do the same
      • But I guess they do that because they are ready to do a point release if necessary, and we are not that ready
    • What is state of addons? (generally we don’t decide based on this, except maybe calendar)
    • Are there tracked bugs we want fixed before full unthrottle? (means waiting for 38.1.0)
  • rkent list of critical issues blocking even partial unthrottling:
    • broken Simplified Chinese bug 1174580 – Not display GB2312 encoded texts correctly
      • It might be possible to simply change the encoding mapping to fix this?
    • proxy not working bug 1175051 and probably related crashes
  • rkent list of other critical issues
    • Outlook/Eudora import completely busted: bug 1175055.
      • I suggest that for the next dot release, if we cannot get the crash fixed, we should disable the menu item that promotes this. Does not block unthrottling since mostly affects new users.
      • Someone needs to add additional main thread proxies to the migration code.
    • bug 1174797 Add a cookie exception for GMail OAuth
      • Does not affect unthrottling since no effect unless someone tries to use OAuth. Lightning has an example of how to fix this, so should not be difficult.

Releases

  • Past
    • 31.6.0 shipped
    • 38.0b3 shipped 2015-04-26 Sunday
    • 38.0b4 shipped 2015-05-03 Sunday
    • 31.7.0 2015-05-18 Monday (lots of issues – Tues 5-12 was target)
    • 38.0b5 shipped 2015-05-19 Tuesday
    • 38.0b6 shipped 2015-05-23
    • 38.0.1 nominally shipped 2015-06-12
  • Upcoming
    • 31.8.0 ~2015-06-20 GTB
    • 38.1.0 FF GTB on June 22 release June 30.
      • Would it be possible to continue to use the beta channel for TB 38 until post-38.2.0? So maybe we could do a 38.1.0b1?

Lightning to Thunderbird Integration

See https://calendar.etherpad.mozilla.org/thunderbird-integration

  • As underpass has pointed out repeatedly (thanks for your patience!) , we need to rewrite / heavily modify the lightning articles on support.mozilla.org. let me know irc: rolandtanglao on #tb-support-crew or rtanglao AT mozilla.com OR simply start editing the articles
  • We need to fill the “Learn More” page with content, possibly point it to something more specific bug 1159682
  • Opt-out dialog: change “disable” to “remove” bug 1159698
  • tracking bug for lightning 4.0 bug 1153752

Round Table

wsmwk

jorgk

  • bug 345852 – Personal dictionary, waiting for Ehsan
  • bug 209189 – delete, delete, undo -> corruption, waiting for Neil/Kent
  • bug 368915 – spell check in subject, ongoing.
  • bug 1175055 – Import (Eudora/Outlook) busted.

rkent

My availability will be limited from June 20 – June 29. If there is going to be progress on fixing TB 38 releases in that time, someone else would have to drive it.

  • Monitoring issues appearing in TB 38.0.1
  • Previously there were issues getting TB 38 to build with current mozilla-esr38 (which resulted in THUNDERBIRD_38_0_20150603_RELBRANCH) and I suspect there may be some fixes needed to get the merge of mozilla-esr38 into THUNDERBIRD_38_VERBRANCH to work.

jcranmer

  • Huge backlog, being more or less processed in stack order unfortunately
    • Ping me on IRC if you have anything really important to look at something
  • Following up on some releng future issues
  • I have some discussions to start after 38 is finally out the door

sshagarwal

  • At step one of the project: Setting up a dummy protocol to understand how a protocol is added to TB using the addon approach (using Skinkglue).

  • I have taken too much time to do this mostly due to health issues continuing for over a month now. Will pick up soon.

Question Time

Jorg K:
When will JSMime take over all RFC2047 decoding? Looking at bug 1146099 I found RFC2047 code used in comm-central/mozilla/netwerk/mime.
Also, while Joshua is here: Will we fix whatever we did wrong in bug 1154521?

Support team

  • Roland’s last day is June 30, 2015 – please needinfo :rolandtanglao or email rtanglao AT mozilla.com if you need my help starting July 1, 2015

Other

  • PLEASE PUT THE NEXT MEETING IN YOUR (LIGHTNING) CALENDAR
  • Note – meeting notes must be copied from etherpad to wiki before 5AM CET next day so that they will go public in the meeting notes blog.

June 17, 2015 03:00 AM

June 16, 2015

Rumbling Edge - Thunderbird

2015-06-15 Calendar builds

Common (excluding Website bugs)-specific: (6)

Sunbird will no longer be actively developed by the Calendar team.

Windows builds Official Windows

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

June 16, 2015 07:22 AM

2015-06-15 Thunderbird comm-central builds

Thunderbird-specific: (18)

MailNews Core-specific: (6)

Windows builds Official Windows, Official Windows installer

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

June 16, 2015 07:21 AM

June 14, 2015

Mike Conley

The Joy of Coding (Ep. 18): New Theme Song!

In this episode, I debuted The Joy of Coding’s new theme song, lovingly crafted by my good friend Barn Costello! [1]

Then I dove into fixing a new bug that allows e10s to continue running if the user is in safe mode. After that, we dove into an investigation on why click-to-play wasn’t working for a particular site.

Episode agenda.

References


  1. Shameless plug – Barn and I are in a band together. Here are some of our music videos

June 14, 2015 08:07 PM

June 13, 2015

Calendar

There is no Lightning 4.0

…but of course there is a release for Thunderbird 38! Since the release date for Thunderbird has been postponed and in the meantime Firefox has released 38.0.1, Thunderbird will also be released as Thunderbird 38.0.1. Since the Lightning version is automatically generated at build time, we have just released Lightning 4.0.0.1. If you are still using Thunderbird 31 and Lightning 3.3.3, you will be getting an update in the next few days.

The exciting thing about this release is that Lightning has been integrated into Thunderbird. I expect there will be next to no issues during upgrade this time, because Thunderbird includes the Lightning addon already.

If you can’t wait, you can get Thunderbird in your language directly from mozilla.org. If you do happen to have issues with upgrading, you can also get Lightning from addons.mozilla.org. The latest SeaMonkey version is 2.33.1 at the time of writing; with it you need to use Lightning 3.8b2. For more information on compatibility, check out the calendar versions page.

As mentioned in a previous blog post, most fixed issues are backend fixes that won’t be very visible. We do however have a great new feature to save copies of invitations to your calendar. This helps in case you don’t care about replying to the invitation but would still like to see it in your calendar. We also have more general improvements in invitation compatibility, performance and stability and some slight visual enhancements. The full list of changes can be found on bugzilla.

If you are upgrading manually, you might want to make a backup. Although I don’t anticipate any major issues, you never know.

If you have questions, would like support, or have found a bug, feel free to leave a comment here and I’ll get back to you as soon as possible.

June 13, 2015 12:10 AM

June 12, 2015

Thunderbird Blog

Thunderbird 38 Released

Thunderbird 38 is now released (actual initial version is 38.0.1 to maintain compatibility with equivalent Firefox releases). This release has some significant new features, as well as many, many bug fixes. Some of the new features include:

This is a significant milestone for the Thunderbird team, as it is the first release that has been fully managed by our volunteer team rather than by Mozilla staff.

Mozilla is still heavily involved with this release, as we still use Mozilla infrastructure for the build and release process. Thanks to the many Mozilla staff who helped out to fix issues!

Thanks to all of the volunteers who have contributed to make this release possible!

(Note that while general comments on Thunderbird 38 are welcome, please do not use the comment section of this blog as a place to make bug reports, or to request support for specific issues).

June 12, 2015 08:35 PM

June 09, 2015

Mike Conley

Things I’ve Learned This Week (June 1 – June 5, 2015)

How to get an nsIGlobalObject* from a JSContext*

I’m working on a patch for bug 1116188 to make gathering profiles from subprocesses asynchronous. In order to do that, I’m exposing a new method on nsIProfiler called getProfileDataAsync that is returning a DOM Promise. What’s interesting about this is that I’m returning a DOM Promise from C++! [1]

In order to construct a DOM Promise in C++, I need to hand it something that implements nsIGlobalObject. I suspect that this helps the Promise determine which memory region it belongs to.

My new method gets a JSContext* because I’ve got the [implicit_jscontext] bit on the method definition in the nsIProfiler.idl file… so how do I go about turning that into an nsIGlobalObject?

Here’s the maneuver:

// Where aCX is your JSContext*:
nsIGlobalObject* go = xpc::NativeGlobal(JS::CurrentGlobalOrNull(aCx));

That will, as the name suggests, return either an nsIGlobalObject*, or a nullptr.

Resolving a DOM Promise from C++ with a JS Object

For my patch for bug 1116188, it’s all well and good to create a DOM Promise, but you have to resolve or reject that Promise for it to have any real value.

In my case, I wanted to take a string, parse it into a JS Object, and resolve with that.

Resolving or rejecting a DOM Promise in Javascript is pretty straight-forward – you’re given back resolve / reject functions, and you just need to call those with your results and you’re done.

In C++, things get a little hairier. As I discovered in my most recent episode of The Joy of Coding, conditions need to be right in order for this to work out.

Here’s what I ended up doing (I’ve simplified the method somewhat to remove noise):

void
ProfileGatherer::Finish()
{
  AutoJSAPI jsapi;
  jsapi.Init();
  JSContext* cx = jsapi.cx();
  JSAutoCompartment ac(cx, mPromise->GlobalJSObject());

  // Now parse the JSON so that we resolve with a JS Object.
  JS::RootedValue val(cx);
  {
    UniquePtr<char[]> buf = mWriter.WriteFunc()->CopyData();
    NS_ConvertUTF8toUTF16 js_string(nsDependentCString(buf.get()));
    MOZ_ALWAYS_TRUE(JS_ParseJSON(cx, static_cast<const char16_t*>(js_string.get()),
                                 js_string.Length(), &val));
  }
  mPromise->MaybeResolve(val);
}

The key parts here are getting the AutoJSAPI on the stack, initting it, getting its JSContext, and then putting the JSAutoCompartment on the stack. Note that I had to pass not only the JSContext, but the global JS Object for the Promise as well – I suspect that’s, again, to ensure that the right compartment is being entered. Otherwise, I start failing assertions like crazy.

Note that the code above is by no means perfect – I’m missing error handling functions for when the JSON parsing goes wrong. In that case, I should probably reject the Promise instead. bz pointed me to a good example of that going on here in Fetch.cpp:

      if (!JS_ParseJSON(cx, decoded.get(), decoded.Length(), &json)) {
        if (!JS_IsExceptionPending(cx)) {
          localPromise->MaybeReject(NS_ERROR_DOM_UNKNOWN_ERR);
          return;
        }

        JS::Rooted<JS::Value> exn(cx);
        DebugOnly<bool> gotException = JS_GetPendingException(cx, &exn);
        MOZ_ASSERT(gotException);

        JS_ClearPendingException(cx);
        localPromise->MaybeReject(cx, exn);
        return;
      }

      localPromise->MaybeResolve(cx, json);
      return;

I’ll probably end up doing something similar in the next iteration of my patch.


  1. I learned how to do that a few weeks back

June 09, 2015 01:55 AM

June 06, 2015

Mike Conley

The Joy of Coding (Ep. 17): Frustrations in the Key of C++

In this episode, I gave a quick update on the OS X printing bug we’d been working on for a few weeks (Spoiler alert – the patch got reviewed and landed!), and then dove into my new problem: getting performance profiles from subprocesses asynchronously.

And, I won’t lie to you, this is probably the most frustrating episode in the series so far. I really didn’t make much headway.

The way I want to solve this problem involves passing a DOM Promise back to the Javascript caller that resolves when all of the profiles have been gathered asynchronously.

If I were writing this in Javascript, it’d be a cinch. Creating, passing around, and resolving Promises is pretty straight-forward in that world.

But the Gecko profiler backend is written entirely in C++, and so that’s where I’d have to create the Promise.

A few weeks back, I posted a “Things I’ve learned this week” about how to create DOM Promises in C++. That’s all well and good, but creating the Promise is only half of the job. You have to resolve (or reject) the Promise in order for it to be useful at all.

The way I wanted to resolve the Promise involved parsing a JSON string and resolving with the resulting object.

That turned out to be a lot harder than I thought it’d be. Watch the video to see why. Suffice it to say, I spent a lot of it asking for help in IRC. It’s a 100% accurate demonstration of what I do when I’m lost, or can’t figure something out, and I need help.

Since I recorded this episode, I’ve figured out what I needed to do – I’ve posted a “Things I’ve learned this week” containing that information. Hopefully that’ll help somebody else in the future!

Oh – also, this episode has sound effects, courtesy of Wacky Morning DJ (which I demonstrated in last week’s episode).

Episode agenda.

References

Bug 1116188 – [e10s] Stop using sync messages for Gecko profiler

June 06, 2015 04:14 PM

June 03, 2015

Meeting Notes

Thunderbird: 2015-06-02

Thunderbird meeting notes 2015-06-02. NOON PT (Pacific). Check https://wiki.mozilla.org/Thunderbird/StatusMeetings for meeting time conversion, previous meeting notes and call-in details

Attendees

rkent, Fallen, jorgk, margoagpinto, mkmelin, paenglab, wsmwk, aceman, sshagarwal, joes1

Current status and discussions

How can we keep Thunderbird relevant with competitors like Geary? Discourage new email clients with small teams? (Mailpile, Postbox also come to mind).

And speaking of Postbox … we plan to have discussion soon about more active cooperation.

Critical Issues

  • Last call for issues prior to 38.0. I am not aware of any blockers; plan to build Wednesday.

Critical bugs. Leave these here until they’re confirmed fixed. If confirmed, then remove.

  • AMO compatibility bump! (is not going to happen)
  • In general, the tracking-tb38 flag http://mzl.la/1EOx9Tm shows critical issues.
  • maildir UI: nothing more to do for UI, still want to land a patch for letting IMAP set this.

removing from critical list/fixed:

Releases

  • Past
    • 31.6.0 shipped
    • 38.0b3 shipped 2015-04-26 Sunday
    • 38.0b4 shipped 2015-05-03 Sunday
    • 31.7.0 2015-05-18 Monday (lots of issues – Tues 5-12 was target)
    • 38.0b5 shipped 2015-05-19 Tuesday
    • 38.0b6 shipped 2015-05-23
  • Upcoming
    • 38.0 final 2015-06-03?
    • 31.8.0 2015-06-20~ GTB

Lightning to Thunderbird Integration

See https://calendar.etherpad.mozilla.org/thunderbird-integration

  • As underpass has pointed out repeatedly (thanks for your patience!) , we need to rewrite / heavily modify the lightning articles on support.mozilla.org. let me know irc: rolandtanglao on #tb-support-crew or rtanglao AT mozilla.com OR simply start editing the articles
  • We need to fill the “Learn More” page with content, possibly point it to something more specific bug 1159682
  • Opt-out dialog: change “disable” to “remove” bug 1159698
  • tracking bug for lightning 4.0 bug 1153752

Round Table

wsmwk

  • creating moztrap tests for new features

Jorg K

sshagarwal

  • Getting started on GSoC project.
  • Trying to understand xpcom components and figure out how to write a dummy protocol to use with Skinkglue.

Question Time

  • Jorg K: Release notes? I think more bugs should go on the list than we currently have. We will consider those with more than 3 votes 😉
  • rkent: Should we consider enabling maildir as default on trunk? Use ifdef to only compile on trunk.
    • I think maildir still too untested to set as default, even on trunk (JoeS1)
    • Can we wait with that until after getting feedback from testers of it on TB38? (aceman)
  • aceman: still no luck in getting Extra folder columns addon disabled (= limited compatibility to <=37.x) on AMO?
  • aceman: also somebody should at least comment on the “Send filter” addon on AMO that the feature is in base TB. It may conflict.

Other

  • PLEASE PUT THE NEXT MEETING IN YOUR (LIGHTNING) CALENDAR

June 03, 2015 03:00 AM

June 01, 2015

Mike Conley

Things I’ve Learned This Week (May 25 – May 29, 2015)

MozReview will now create individual attachments for child commits

Up until recently, anytime you pushed a patch series to MozReview, a single attachment would be created on the bug associated with the push.

That single attachment would link to the “parent” or “root” review request, which contains the folded diff of all commits.

We noticed a lot of MozReview users were (rightfully) confused about this mapping from Bugzilla to MozReview. It was not at all obvious that Ship It on the parent review request would cause the attachment on Bugzilla to be r+’d. Consequently, reviewers used a number of workarounds, including, but not limited to:

  1. Manually setting the r+ or r- flags in Bugzilla for the MozReview attachments
  2. Marking Ship It on the child review requests, and letting the reviewee take care of setting the reviewer flags in the commit message
  3. Just writing “r+” in a MozReview comment

Anyhow, this model wasn’t great, and caused a lot of confusion.

So it’s changed! Now, when you push to MozReview, there’s one attachment created for every commit in the push. That means that when different reviewers are set for different commits, that’s reflected in the Bugzilla attachments, and when those reviewers mark “Ship It” on a child commit, that’s also reflected in an r+ on the associated Bugzilla attachment!

I think this makes quite a bit more sense. Hopefully you do too!

See gps’s blog post for the nitty gritty details, and some other cool MozReview announcements!

June 01, 2015 05:49 AM

May 27, 2015

Rumbling Edge - Thunderbird

2015-05-26 Calendar builds

Common (excluding Website bugs)-specific: (23)

Sunbird will no longer be actively developed by the Calendar team.

Windows builds Official Windows

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

May 27, 2015 08:26 AM

2015-05-26 Thunderbird comm-central builds

Thunderbird-specific: (54)

MailNews Core-specific: (30)

Windows builds Official Windows, Official Windows installer

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

May 27, 2015 08:25 AM

April 30, 2015

Andrew Sutherland

Talk Script: Firefox OS Email Performance Strategies

Last week I gave a talk at the Philly Tech Week 2015 Dev Day organized by the delightful people at technical.ly on some of the tricks/strategies we use in the Firefox OS Gaia Email app.  Note that the credit for implementing most of these techniques goes to the owner of the Email app’s front-end, James Burke.  Also, a special shout-out to Vivien for the initial DOM Worker patches for the email app.

I tried to avoid having slides that I would be reading aloud as the audience read silently, so instead of slides to share, I have the talk script.  Well, I also have the slides here, but there’s not much to them.  The headings below are the content of the slides, except for the one time I inline some code.  Note that the live presentation must have differed slightly, because I’m sure I’m much more witty and clever in person than this script would make it seem…

Cover Slide: Who!

Hi, my name is Andrew Sutherland.  I work at Mozilla on the Firefox OS Email Application.  I’m here to share some strategies we used to make our HTML5 app Seem faster and sometimes actually Be faster.

What’s A Firefox OS (Screenshot Slide)

But first: What is a Firefox OS?  It’s a multiprocess Firefox Gecko engine on an Android Linux kernel where all the apps including the system UI are implemented using HTML5, CSS, and JavaScript.  All the apps use some combination of standard web APIs and APIs that we hope to standardize in some form.

Firefox OS homescreen screenshot Firefox OS clock app screenshot Firefox OS email app screenshot

Here are some screenshots.  We’ve got the default home screen app, the clock app, and of course, the email app.

It’s an entirely client-side offline email application, supporting IMAP4, POP3, and ActiveSync.  The goal, like all Firefox OS apps shipped with the phone, is to give native apps on other platforms a run for their money.

And that begins with starting up fast.

Fast Startup: The Problems

But that’s frequently easier said than done.  Slow-loading websites are still very much a thing.

The good news for the email application is that a slow network isn’t one of its problems.  It’s pre-loaded on the phone.  And even if it wasn’t, because of the security implications of the TCP Web API and the difficulty of explaining this risk to users in a way they won’t just click through, any TCP-using app needs to be a cryptographically signed zip file approved by a marketplace.  So we do load directly from flash.

However, it’s not like flash on cellphones is equivalent to an infinitely fast, zero-latency network connection.  And even if it was, in a naive app you’d still try and load all of your HTML, CSS, and JavaScript at the same time because the HTML file would reference them all.  And that adds up.

It adds up in the form of event loop activity and competition with other threads and processes.  With the exception of Promises which get their own micro-task queue fast-lane, the web execution model is the same as all other UI event loops; events get scheduled and then executed in the same order they are scheduled.  Loading data from an asynchronous API like IndexedDB means that your read result gets in line behind everything else that’s scheduled.  And in the case of the bulk of shipped Firefox OS devices, we only have a single processor core so the thread and process contention do come into play.

So we try not to be naive.

Seeming Fast at Startup: The HTML Cache

If we’re going to optimize startup, it’s good to start with what the user sees.  Once an account exists for the email app, at startup we display the default account’s inbox folder.

What is the least amount of work that we can do to show that?  Cache a screenshot of the Inbox.  The problem with that, of course, is that a static screenshot is indistinguishable from an unresponsive application.

So we did the next best thing, (which is) we cache the actual HTML we display.  At startup we load a minimal HTML file, our concatenated CSS, and just enough Javascript to figure out if we should use the HTML cache and then actually use it if appropriate.  It’s not always appropriate, like if our application is being triggered to display a compose UI or from a new mail notification that wants to show a specific message or a different folder.  But this is a decision we can make synchronously so it doesn’t slow us down.

Local Storage: Okay in small doses

We implement this by storing the HTML in localStorage.

Important Disclaimer!  LocalStorage is a bad API.  It’s a bad API because it’s synchronous.  You can read any value stored in it at any time, without waiting for a callback.  Which means if the data is not in memory the browser needs to block its event loop or spin a nested event loop until the data has been read from disk.  Browsers avoid this now by trying to preload the Entire contents of local storage for your origin into memory as soon as they know your page is being loaded.  And then they keep that information, ALL of it, in memory until your page is gone.

So if you store a megabyte of data in local storage, that’s a megabyte of data that needs to be loaded in its entirety before you can use any of it, and that hangs around in scarce phone memory.

To really make the point: do not use local storage, at least not directly.  Use a library like localForage that will use IndexedDB when available, and then fails over to WebSQLDatabase and local storage in that order.

Now, having sufficiently warned you of the terrible evils of local storage, I can say with a sorta-clear conscience… there are upsides in this very specific case.

The synchronous nature of the API means that once we get our turn in the event loop we can act immediately.  There’s no waiting around for an IndexedDB read result to get its turn on the event loop.
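
To make that concrete, here is a minimal JavaScript sketch of the idea. The 'htmlCache' key, the #cards container, and the startedForComposeOrNotification() helper are all illustrative names I made up for this sketch, not the email app's actual identifiers.

var cardsNode = document.getElementById('cards');
var cachedHtml = null;
try {
  // Synchronous read: no callback, no waiting for another turn of the event loop.
  cachedHtml = localStorage.getItem('htmlCache');
} catch (e) {
  // Private-browsing or quota errors just mean we start without a cache.
}
if (cachedHtml && !startedForComposeOrNotification()) {
  cardsNode.innerHTML = cachedHtml;
}

// Later, once the real inbox has rendered, refresh the cache so the next
// cold start paints something that looks current.
function refreshHtmlCache() {
  try {
    localStorage.setItem('htmlCache', cardsNode.innerHTML);
  } catch (e) {
    // The cache is purely an optimization; ignore failures.
  }
}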

This matters because although the concept of loading is simple from a User Experience perspective, there’s no standard to back it up right now.  Firefox OS’s UX desires are very straightforward.  When you tap on an app, we zoom it in.  Until the app is loaded we display the app’s icon in the center of the screen.  Unfortunately the standards are still assuming that the content is right there in the HTML.  This works well for document-based web pages or server-powered web apps where the contents of the page are baked in.  They work less well for client-only web apps where the content lives in a database and has to be dynamically retrieved.

The two events that exist are:

“DOMContentLoaded” fires when the document has been fully parsed and all scripts not tagged as “async” have run.  If there were stylesheets referenced prior to the script tags, the script tags will wait for the stylesheet loads.

“load” fires when the document has been fully loaded; stylesheets, images, everything.

But none of these have anything to do with the content in the page saying it’s actually done.  This matters because these standards also say nothing about IndexedDB reads or the like.  We tried to create a standards consensus around this, but it’s not there yet.  So Firefox OS just uses the “load” event to decide an app or page has finished loading and it can stop showing your app icon.  This largely avoids the dreaded “flash of unstyled content” problem, but it also means that your webpage or app needs to deal with this period of time by displaying a loading UI or just accepting a potentially awkward transient UI state.

(Trivial HTML slide)

<link rel="stylesheet" ...>
<script ...></script>
DOMContentLoaded!

This is the important summary of our index.html.

We reference our stylesheet first.  It includes all of our styles.  We never dynamically load stylesheets because that compels a style recalculation for all nodes and potentially a reflow.  We would have to have an awful lot of style declarations before considering that.

Then we have our single script file.  Because the stylesheet precedes the script, our script will not execute until the stylesheet has been loaded.  Then our script runs and we synchronously insert our HTML from local storage.  Then DOMContentLoaded can fire.  At this point the layout engine has enough information to perform a style recalculation and determine what CSS-referenced image resources need to be loaded for buttons and icons, then those load, and then we’re good to be displayed as the “load” event can fire.

After that, we’re displaying an interactive-ish HTML document.  You can scroll, you can press on buttons and the :active state will apply.  So things seem real.

Being Fast: Lazy Loading and Optimized Layers

But now we need to try and get some logic in place as quickly as possible that will actually cash the checks that real-looking HTML UI is writing.  And the key to that is only loading what you need when you need it, and trying to get it to load as quickly as possible.

There are many module loading and build optimizing tools out there, and most frameworks have a preferred or required way of handling this.  We used the RequireJS family of Asynchronous Module Definition loaders, specifically the alameda loader and the r-dot-js optimizer.

One of the niceties of the loader plugin model is that we are able to express resource dependencies as well as code dependencies.

RequireJS Loader Plugins

var fooModule = require('./foo');
var htmlString = require('text!./foo.html');
var localizedDomNode = require('tmpl!./foo.html');

The standard Common JS loader semantics used by node.js and io.js are the first one you see here.  Load the module, return its exports.

But RequireJS loader plugins also allow us to do things like the second line where the exclamation point indicates that the load should occur using a loader plugin, which is itself a module that conforms to the loader plugin contract.  In this case it’s saying load the file foo.html as raw text and return it as a string.

But, wait, there’s more!  loader plugins can do more than that.  The third example uses a loader that loads the HTML file using the ‘text’ plugin under the hood, creates an HTML document fragment, and pre-localizes it using our localization library.  And this works un-optimized in a browser, no compilation step needed, but it can also be optimized.

So when our optimizer runs, it bundles up the core modules we use, plus, the modules for our “message list” card that displays the inbox.  And the message list card loads its HTML snippets using the template loader plugin.  The r-dot-js optimizer then locates these dependencies and the loader plugins also have optimizer logic that results in the HTML strings being inlined in the resulting optimized file.  So there’s just one single javascript file to load with no extra HTML file dependencies or other loads.

We then also run the optimizer against our other important cards like the “compose” card and the “message reader” card.  We don’t do this for all cards because it can be hard to carve up the module dependency graph for optimization without starting to run into cases of overlap where many optimized files redundantly include files loaded by other optimized files.
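
As an illustration of what that kind of per-card bundling looks like, here is a generic r.js single-file build profile; the module ids, paths, and output name below are made up for the sketch and are not the Gaia email app's actual build configuration.

// build.js – run with: node r.js -o build.js
({
  baseUrl: 'js',
  // The core startup module...
  name: 'main',
  // ...plus the message list card. The 'text!' / 'tmpl!' loader plugins it
  // uses have build-time logic, so the HTML snippets get inlined into the
  // single optimized file.
  include: ['cards/message_list'],
  out: 'main-built.js',
  optimize: 'uglify2'
})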

Plus, we have another trick up our sleeve:

Seeming Fast: Preloading

Preloading.  Our cards optionally know the other cards they can load.  So once we display a card, we can kick off a preload of the cards that might potentially be displayed.  For example, the message list card can trigger the compose card and the message reader card, so we can trigger a preload of both of those.
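
A minimal sketch of that kind of preloading with an AMD-style loader might look like the following; the card module ids and the PRELOAD_MAP table are illustrative, not the app's real names.

var PRELOAD_MAP = {
  'cards/message_list': ['cards/compose', 'cards/message_reader'],
  'cards/message_reader': ['cards/compose']
};

function preloadFor(cardId) {
  var next = PRELOAD_MAP[cardId] || [];
  if (next.length) {
    // Kick off the loads now; the empty callback means we don't need the
    // modules yet, but the loader caches them for when the user navigates.
    require(next, function () {});
  }
}

// Called once the current card is on screen and the UI is otherwise idle.
preloadFor('cards/message_list');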

But we don’t go overboard with preloading in the frontend because we still haven’t actually loaded the back-end that actually does all the emaily email stuff.  The back-end is also chopped up into optimized layers along account type lines and online/offline needs, but the main optimized JS file still weighs in at something like 17 thousand lines of code with newlines retained.

So once our UI logic is loaded, it’s time to kick off loading the back-end.  And in order to avoid impacting the responsiveness of the UI both while it loads and when we’re doing steady-state processing, we run it in a DOM Worker.

Being Responsive: Workers and SharedWorkers

DOM Workers are background JS threads that lack access to the page’s DOM, communicating with their owning page via message passing with postMessage.  Normal workers are owned by a single page.  SharedWorkers can be accessed via multiple pages from the same document origin.
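
Here is a minimal sketch of that split, assuming a hypothetical js/backend-main.js worker script and made-up message shapes; the real app's worker protocol is considerably richer.

var backend = new Worker('js/backend-main.js');

// Ask the back-end to sync a folder; the main thread stays free to paint.
backend.postMessage({ type: 'syncFolder', accountId: 0, folder: 'INBOX' });

backend.onmessage = function (event) {
  var msg = event.data;
  if (msg.type === 'syncComplete') {
    updateMessageList(msg.headers);  // hypothetical UI update helper
  }
};

// Inside js/backend-main.js the worker side mirrors this:
//   onmessage = function (event) {
//     if (event.data.type === 'syncFolder') {
//       doSync(event.data).then(function (headers) {
//         postMessage({ type: 'syncComplete', headers: headers });
//       });
//     }
//   };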

By doing this, we stay out of the way of the main thread.  This is getting less important as browser engines support Asynchronous Panning & Zooming or “APZ” with hardware-accelerated composition, tile-based rendering, and all that good stuff.  (Some might even call it magic.)

When Firefox OS started, we didn’t have APZ, so any main-thread logic had the serious potential to result in janky scrolling and the impossibility of rendering at 60 frames per second.  It’s a lot easier to get 60 frames-per-second now, but even asynchronous pan and zoom potentially has to wait on dispatching an event to the main thread to figure out if the user’s tap is going to be consumed by app logic and preventDefault called on it.  APZ does this because it needs to know whether it should start scrolling or not.

And speaking of 60 frames-per-second…

Being Fast: Virtual List Widgets

…the heart of a mail application is the message list.  The expected UX is to be able to fling your way through the entire list of what the email app knows about and see the messages there, just like you would on a native app.

This is admittedly one of the areas where native apps have it easier.  There are usually list widgets that explicitly have a contract that says they request data on an as-needed basis.  They potentially even include data bindings so you can just point them at a data-store.

But HTML doesn’t yet have a concept of instantiate-on-demand for the DOM, although it’s being discussed by Firefox layout engine developers.  For app purposes, the DOM is a scene graph.  An extremely capable scene graph that can handle huge documents, but there are footguns and it’s arguably better to err on the side of fewer DOM nodes.

So what the email app does is we create a scroll-region div and explicitly size it based on the number of messages in the mail folder we’re displaying.  We create and render enough message summary nodes to cover the current screen, 3 screens worth of messages in the direction we’re scrolling, and then we also retain up to 3 screens worth in the direction we scrolled from.  We also pre-fetch 2 more screens worth of messages from the database.  These constants were arrived at experimentally on prototype devices.

We listen to “scroll” events and issue database requests and move DOM nodes around and update them as the user scrolls.  For any potentially jarring or expensive transitions such as coordinate space changes from new messages being added above the current scroll position, we wait for scrolling to stop.

Nodes are absolutely positioned within the scroll area using their ‘top’ style but translation transforms also work.  We remove nodes from the DOM, then update their position and their state before re-appending them.  We do this because the browser APZ logic tries to be clever and figure out how to create an efficient series of layers so that it can pre-paint as much of the DOM as possible in graphic buffers, AKA layers, that can be efficiently composited by the GPU.  Its goal is that when the user is scrolling, or something is being animated, that it can just move the layers around the screen or adjust their opacity or other transforms without having to ask the layout engine to re-render portions of the DOM.

When our message elements are added to the DOM with an already-initialized absolute position, the APZ logic lumps them together as something it can paint in a single layer along with the other elements in the scrolling region.  But if we start moving them around while they’re still in the DOM, the layerization logic decides that they might want to independently move around more in the future and so each message item ends up in its own layer.  This slows things down.  But by removing them and re-adding them it sees them as new with static positions and decides that it can lump them all together in a single layer.  Really, we could just create new DOM nodes, but we produce slightly less garbage this way and in the event there’s a bug, it’s nicer to mess up with 30 DOM nodes displayed incorrectly rather than 3 million.
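
A stripped-down sketch of that remove/update/re-append dance follows; ITEM_HEIGHT, scrollRegion, and renderSummary() are illustrative names, not the email app's actual code.

var ITEM_HEIGHT = 60;  // px per message row, arrived at experimentally

function recycleNode(node, newIndex, message) {
  // Detach first: re-adding the node with a static position lets the
  // layerization logic keep the whole list in one layer instead of
  // promoting each moved node to its own layer.
  scrollRegion.removeChild(node);
  node.style.top = (newIndex * ITEM_HEIGHT) + 'px';
  renderSummary(node, message);  // update sender, subject, snippet…
  scrollRegion.appendChild(node);
}

scrollRegion.addEventListener('scroll', function () {
  var firstVisible = Math.floor(scrollRegion.scrollTop / ITEM_HEIGHT);
  // Work out which indices need nodes (firstVisible plus a few screens of
  // buffer in each direction), request any missing headers from the
  // database, and move off-screen nodes into place via recycleNode().
});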

But as neat as the layerization stuff is to know about on its own, I really mention it to underscore 2 suggestions:

1, Use a library when possible.  Getting on and staying on APZ fast-paths is not trivial, especially across browser engines.  So it’s a very good idea to use a library rather than rolling your own.

2, Use developer tools.  APZ is tricky to reason about and even the developers who write the Async pan & zoom logic can be surprised by what happens in complex real-world situations.  And there ARE developer tools available that help you avoid needing to reason about this.  Firefox OS has easy on-device developer tools that can help diagnose what’s going on or at least help tell you whether you’re making things faster or slower:

– it’s got a frames-per-second overlay; you do need to scroll like mad to get the system to want to render 60 frames-per-second, but it makes it clear what the net result is

– it has paint flashing that overlays random colors every time it paints the DOM into a layer.  If the screen is flashing like a discotheque or has a lot of smeared rainbows, you know something’s wrong because the APZ logic is not able to just reuse its layers.

– devtools can enable drawing cool colored borders around the layers APZ has created so you can see if layerization is doing something crazy

There’s also fancier and more complicated tools in Firefox and other browsers like Google Chrome to let you see what got painted, what the layer tree looks like, et cetera.

And that’s my spiel.

Links

The source code to Gaia can be found at https://github.com/mozilla-b2g/gaia

The email app in particular can be found at https://github.com/mozilla-b2g/gaia/tree/master/apps/email

(I also asked for questions here.)

April 30, 2015 08:11 PM

April 03, 2015

Thunderbird Blog

Thunderbird 38 goes to beta!

The next major release of Thunderbird, version 38, is now in beta and available for testing. You may download Thunderbird 38.0b1 here.

This version of Thunderbird is the first that is mostly managed by volunteer community members rather than by Mozilla staff. We have many new features, including:

Release notes are available here.

There are still a couple of features missing from this beta that we hope to ship in the final version of Thunderbird 38. Those are:

 

April 03, 2015 09:13 AM

April 01, 2015

Joshua Cranmer

Breaking news

It was brought to my attention recently by reputable sources that the recent announcement of increased usage in recent years produced an internal firestorm within Mozilla. Key figures raised alarm that some of the tech press had interpreted the blog post as a sign that Thunderbird was not, in fact, dead. As a result, they asked Thunderbird community members to make corrections to emphasize that Mozilla was trying to kill Thunderbird.

The primary fear, it seems, is that knowledge that the largest open-source email client was still receiving regular updates would impel its userbase to agitate for increased funding and maintenance of the client to help forestall potential threats to the open nature of email as well as to innovate in the space of providing usable and private communication channels. Such funding, however, would be an unaffordable luxury and would only distract Mozilla from its central goal of building developer productivity tooling. Persistent rumors that Mozilla would be willing to fund Thunderbird were it renamed Firefox Email were finally addressed with the comment, "such a renaming would violate our current policy that all projects be named Persona."

April 01, 2015 07:00 AM

February 27, 2015

Thunderbird Blog

Thunderbird Usage Continues to Grow

We’re happy to report that Thunderbird usage continues to expand.

Mozilla measures program usage by Active Daily Installations (ADI), which is the number of pings that Mozilla servers receive as installations do their daily plugin block-list update. This is not the same as the number of active users, since some users don’t access their program each day, and some installations are behind firewalls. An estimate of active monthly users is typically done by multiplying the ADI by a factor of 3.

To plot changes in Thunderbird usage over time, I’ve picked the peak ADI for each month for the last few years. Here’s the result:

Thunderbird Active Daily Installations, peak value per month.

Germany has long been our #1 country for usage, but in the 4th quarter of 2014, Japan exceeded the US as the #2 country. Here are the top 10 countries, taken from the ADI count of February 24, 2015:

Rank  Country             ADI (2015-02-24)
   1  Germany                    1,711,834
   2  Japan                      1,002,877
   3  United States                927,477
   4  France                       777,478
   5  Italy                        514,771
   6  Russian Federation           494,645
   7  Poland                       480,496
   8  Spain                        282,008
   9  Brazil                       265,820
  10  United Kingdom               254,381
      All Others                 2,543,493
      Total                      9,255,280

Country Rankings for Thunderbird Usage, February 24, 2015

The Thunderbird team is now working hard preparing our next major release, which will be Thunderbird 38 in May 2015. We’ll be blogging more about that release in the next few weeks, including reporting on the many new features that we have added.

February 27, 2015 10:44 PM

January 13, 2015

Joshua Cranmer

Why email is hard, part 8: why email security failed

This post is part 8 of an intermittent series exploring the difficulties of writing an email client. Part 1 describes a brief history of the infrastructure. Part 2 discusses internationalization. Part 3 discusses MIME. Part 4 discusses email addresses. Part 5 discusses the more general problem of email headers. Part 6 discusses how email security works in practice. Part 7 discusses the problem of trust. This part discusses why email security has largely failed.

At the end of the last part in this series, I posed the question, "Which email security protocol is most popular?" The answer to the question is actually neither S/MIME nor PGP, but a third protocol, DKIM. I haven't brought up DKIM until now because DKIM doesn't try to secure email in the same vein as S/MIME or PGP, but I still consider it relevant to discussing email security.

Unquestionably, DKIM is the only security protocol for email that can be considered successful. There are perhaps 4 billion active email addresses [1]. Of these, about 1-2 billion use DKIM. In contrast, S/MIME can count a few million users, and PGP at best a few hundred thousand. No other security protocols have really caught on past these three. Why did DKIM succeed where the others failed?

DKIM's success stems from its relatively narrow focus. It is nothing more than a cryptographic signature of the message body and a smattering of headers, and is itself stuck in the DKIM-Signature header. It is meant to be applied to messages only on outgoing servers and read and processed at the recipient mail server—it completely bypasses clients. That it bypasses clients allows it to solve the problem of key discovery and key management very easily (public keys are stored in DNS, which is already a key part of mail delivery), and its role in spam filtering is strong motivation to get it implemented quickly (it is 7 years old as of this writing). It's also simple: this one paragraph description is basically all you need to know [2].
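
To give a concrete sense of how lightweight DKIM's key discovery is, here is a minimal sketch of the DNS lookup a verifier performs; the selector and domain are placeholders, not taken from any real deployment:

# A verifier fetches the signer's public key from DNS at
# <selector>._domainkey.<domain>, where the selector comes from the s= tag
# of the DKIM-Signature header. Both names here are hypothetical.
dig +short TXT s1._domainkey.example.com
# Typical output: "v=DKIM1; k=rsa; p=MIGfMA0GCSq..." (the base64-encoded public key)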

The failure of S/MIME and PGP to see large deployment is certainly a large topic of discussion on myriads of cryptography enthusiast mailing lists, which often like to entertain proposals for new end-to-end email encryption paradigms, such as the recent DIME proposal. Quite frankly, all of these solutions suffer broadly from at least the same 5 fundamental weaknesses, and I consider it unlikely that a protocol will come about that can fix these weaknesses well enough to become successful.

The first weakness, and one I've harped on many times already, is UI. Most email security UI is abysmal and generally at best usable only by enthusiasts. At least some of this is endemic to security: while it may seem obvious how to convey what an email signature or an encrypted email signifies, how do you convey the distinctions between sign-and-encrypt, encrypt-and-sign, or an S/MIME triple wrap? The Web of Trust model used by PGP (and many other proposals) is even worse, in that it inherently requires users to perform actions out-of-band of email to work properly.

Trust is the second weakness. Consider that, for all intents and purposes, the email address is the unique identifier on the Internet. By extension, that implies that a lot of services are ultimately predicated on the notion that the ability to receive and respond to an email is a sufficient means to identify an individual. However, the entire purpose of secure email, or at least of end-to-end encryption, is subtly based on the fact that other people in fact have access to your mailbox, thus destroying the most natural ways to build trust models on the Internet. The quest for anonymity or privacy also renders untenable many other plausible ways to establish trust (e.g., phone verification or government-issued ID cards).

Key discovery is another weakness, although it's arguably the easiest one to solve. If you try to keep discovery independent of trust, the problem of key discovery is merely picking a protocol to publish and another one to find keys. Some of these already exist: PGP key servers, for example, or using DANE to publish S/MIME or PGP keys.
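
As a sketch of what "picking a protocol to find keys" can look like in practice, here are two hypothetical lookups against a PGP keyserver over HKP; the keyserver hostname and email address are placeholders:

# Ask GnuPG to search a keyserver for keys bound to an address.
gpg --keyserver hkps://keyserver.example.org --search-keys user@example.com

# The same query via the HKP web interface.
curl 'https://keyserver.example.org/pks/lookup?op=index&search=user@example.com'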

Key management, on the other hand, is a more troubling weakness. S/MIME, for example, basically works without issue if you have a certificate, but managing to get an S/MIME certificate is a daunting task (necessitated, in part, by its trust model—see how these issues all intertwine?). This is also where it's easy to say that webmail is an unsolvable problem, but on further reflection, I'm not sure I agree with that statement anymore. One solution is just storing the private key with the webmail provider (you're trusting them as an email client, after all), but it's also not impossible to imagine using phones or flash drives as keystores. Other key management factors are more difficult to solve: people who lose their private keys or key rollover create thorny issues. There is also the difficulty of managing user expectations: if I forget my password to most sites (even my email provider), I can usually get it reset somehow, but when a private key is lost, the user is totally and completely out of luck.

Of course, there is one glaring and almost completely insurmountable problem. Encrypted email fundamentally precludes certain features that we have come to take for granted. The lesser-known one is server-side search and filtration. While there exist some mechanisms to do search on encrypted text, those mechanisms rely on the fact that you can manipulate the text to change the message, destroying the integrity feature of secure email. They also tend to be fairly expensive. It's easy to just say "who needs server-side stuff?", but the contingent of people who do email on smartphones would not be happy to have to pay the transfer rates to download all the messages in their folder just to find one little email, nor the energy costs of doing it on the phone. And those who have really large folders—Fastmail has a design point of 1,000,000 messages in a single folder—would still prefer not to have to transfer all their mail even on desktops.

The more well-known feature that would disappear is spam filtration. Consider that 90% of all email is spam, and if you think your spam folder is too slim for that to be true, it's because your spam folder only contains messages that your email provider wasn't sure were spam. The loss of server-side spam filtering would dramatically increase the cost of spam (a 10% reduction in efficiency would double the amount of server storage, per my calculations), and client-side spam filtering is quite literally too slow [3] and too costly (remember smartphones? Imagine having your email take 10 times as much energy and bandwidth) to be a tenable option. And privacy or anonymity tends to be an invitation to abuse (cf. Tor and Wikipedia). Proposed solutions to the spam problem are so common that there is a checklist containing most of the objections.
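
One plausible way to reconstruct that storage calculation, assuming the 90% spam figure above and that filtered spam is never stored at all:

# Out of 100 incoming messages, 90 are spam. A perfect filter stores only the
# 10 legitimate ones; a filter that is 10% less effective lets 9 spam messages
# through, so storage goes from 10 to 19 messages, i.e. it roughly doubles.
echo $((10 + 90 / 10))   # => 19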

When you consider all of those weaknesses, it is easy to be pessimistic about the possibility of wide deployment of powerful email security solutions. The strongest future—all email is encrypted, including metadata—is probably impossible or at least woefully impractical. That said, if you weaken some of the assumptions (say, don't desire all or most traffic to be encrypted), then solutions seem possible if difficult.

This concludes my discussion of email security, at least until things change for the better. I don't have a topic for the next part in this series picked out (this part actually concludes the set I knew I wanted to discuss when I started), although OAuth and DMARC are two topics that have been bugging me enough recently to consider writing about. They also have the unfortunate side effect of being things likely to see changes in the near future, unlike most of the topics I've discussed so far. But rest assured that I will find more difficulties in the email infrastructure to write about before long!

[1] All of these numbers are crude estimates and are accurate to only an order of magnitude. To justify my choices: I assume 1 email address per Internet user (this overestimates the developing world and underestimates the developed world). The largest webmail providers have given numbers that claim to be 1 billion active accounts between them, and all of them use DKIM. S/MIME is guessed by assuming that any smartcard deployment supports S/MIME, and noting that the US Department of Defense and Estonia's digital ID project are both heavy users of such smartcards. PGP is estimated from the size of the strong set and old numbers on the reachable set from the core Web of Trust.
[2] Ever since last April, it's become impossible to mention DKIM without referring to DMARC, as a result of Yahoo's controversial DMARC policy. A proper discussion of DMARC (and why what Yahoo did was controversial) requires explaining the mail transmission architecture and spam, however, so I'll defer that to a later post. It's also possible that changes in this space could happen within the next year.
[3] According to a former GMail spam employee, if it takes you as long as three minutes to calculate reputation, the spammer wins.

January 13, 2015 04:38 AM

January 10, 2015

Joshua Cranmer

A unified history for comm-central

Several years back, Ehsan and Jeff Muizelaar attempted to build a unified history of mozilla-central across the Mercurial era and the CVS era. Their result is now used in the gecko-dev repository. While being distracted on yet another side project, I thought that I might want to do the same for comm-central. It turns out that building a unified history for comm-central makes mozilla-central look easy: mozilla-central merely had one import from CVS. In contrast, comm-central imported twice from CVS (the calendar code came later), four times from mozilla-central (once with converted history), and imported twice from Instantbird's repository (once with converted history). Three of those conversions also involved moving paths. But I've worked through all of those issues to provide a nice snapshot of the repository [1]. And since I've been frustrated by failing to find good documentation on how this sort of process went for mozilla-central, I'll provide details on the process for comm-central.

The first step and probably the hardest is getting the CVS history in DVCS form (I use hg because I'm more comfortable with it, but there's effectively no difference between hg, git, or bzr here). There is a git version of mozilla's CVS tree available, but I've noticed after doing research that its last revision is about a month before the revision I need for Calendar's import. The documentation for how that repo was built is no longer on the web, although we eventually found a copy on git.mozilla.org after I wrote this post. I tried doing another conversion using hg convert to get CVS tags, but that rudely blew up in my face. For now, I've filed a bug on getting an official, branchy-and-tag-filled version of this repository, while using the current lack of history as a base. Calendar people will have to suffer missing a month of history.

CVS is famously hard to convert to more modern repositories, and, as I've done my research, Mozilla's CVS looks like it uses those features which make it difficult. In particular, both the calendar CVS import and the comm-central initial CVS import used a CVS tag HG_COMM_INITIAL_IMPORT. That tagging was done, on only a small portion of the tree, twice, about two months apart. Fortunately, mailnews code was never touched on CVS trunk after the import (there appears to be one commit on calendar after the tagging), so it is probably possible to salvage a repository-wide consistent tag.

The start of my script for conversion looks like this:

#!/bin/bash

set -e

WORKDIR=/tmp
HGCVS=$WORKDIR/mozilla-cvs-history
MC=/src/trunk/mozilla-central
CC=/src/trunk/comm-central
OUTPUT=$WORKDIR/full-c-c

# Bug 445146: m-c/editor/ui -> c-c/editor/ui
MC_EDITOR_IMPORT=d8064eff0a17372c50014ee305271af8e577a204

# Bug 669040: m-c/db/mork -> c-c/db/mork
MC_MORK_IMPORT=f2a50910befcf29eaa1a29dc088a8a33e64a609a

# Bug 1027241, bug 611752 m-c/security/manager/ssl/** -> c-c/mailnews/mime/src/*
MC_SMIME_IMPORT=e74c19c18f01a5340e00ecfbc44c774c9a71d11d

# Step 0: Grab the mozilla CVS history.
if [ ! -e $HGCVS ]; then
  hg clone git+https://github.com/jrmuizel/mozilla-cvs-history.git $HGCVS
fi

Since I don't want to include the changesets that are useless to comm-central history, I trimmed the history by using hg convert to eliminate changesets that don't change the necessary files. Most of the files are simple directory-wide changes, but S/MIME only moved a few files over, so it requires a more complex way to grab the file list. In addition, I also replaced the % in the usernames with the @ they usually appear with in hg. The relevant code is here:

# Step 1: Trim mozilla CVS history to include only the files we are ultimately
# interested in.
cat >$WORKDIR/convert-filemap.txt <<EOF
# Revision e4f4569d451a
include directory/xpcom
include mail
include mailnews
include other-licenses/branding/thunderbird
include suite
# Revision 7c0bfdcda673
include calendar
include other-licenses/branding/sunbird
# Revision ee719a0502491fc663bda942dcfc52c0825938d3
include editor/ui
# Revision 52efa9789800829c6f0ee6a005f83ed45a250396
include db/mork/
include db/mdb/
EOF

# Add the S/MIME import files
hg -R $MC log -r "children($MC_SMIME_IMPORT)" \
  --template "{file_dels % 'include {file}\n'}" >>$WORKDIR/convert-filemap.txt

if [ ! -e $WORKDIR/convert-authormap.txt ]; then
hg -R $HGCVS log --template "{email(author)}={sub('%', '@', email(author))}\n" \
  | sort -u > $WORKDIR/convert-authormap.txt
fi

cd $WORKDIR
hg convert $HGCVS $OUTPUT --filemap convert-filemap.txt -A convert-authormap.txt

That last command provides us the subset of the CVS history that we need for unified history. Strictly speaking, I should be pulling a specific revision, but I happen to know that there's no need to (we're cloning the only head) in this case. At this point, we now need to pull in the mozilla-central changes before we pull in comm-central. Order is key; hg convert will only apply the graft points when converting the child changeset (which it does but once), and it needs the parents to exist before it can do that. We also need to ensure that the mozilla-central graft point is included before continuing, so we do that, and then pull mozilla-central:

CC_CVS_BASE=$(hg log -R $HGCVS -r 'tip' --template '{node}')
CC_CVS_BASE=$(grep $CC_CVS_BASE $OUTPUT/.hg/shamap | cut -d' ' -f2)
MC_CVS_BASE=$(hg log -R $HGCVS -r 'gitnode(215f52d06f4260fdcca797eebd78266524ea3d2c)' --template '{node}')
MC_CVS_BASE=$(grep $MC_CVS_BASE $OUTPUT/.hg/shamap | cut -d' ' -f2)

# Okay, now we need to build the map of revisions.
cat >$WORKDIR/convert-revmap.txt <<EOF
e4f4569d451a5e0d12a6aa33ebd916f979dd8faa $CC_CVS_BASE # Thunderbird / Suite
7c0bfdcda6731e77303f3c47b01736aaa93d5534 d4b728dc9da418f8d5601ed6735e9a00ac963c4e, $CC_CVS_BASE # Calendar
9b2a99adc05e53cd4010de512f50118594756650 $MC_CVS_BASE # Mozilla graft point
ee719a0502491fc663bda942dcfc52c0825938d3 78b3d6c649f71eff41fe3f486c6cc4f4b899fd35, $MC_EDITOR_IMPORT # Editor
8cdfed92867f885fda98664395236b7829947a1d 4b5da7e5d0680c6617ec743109e6efc88ca413da, e4e612fcae9d0e5181a5543ed17f705a83a3de71 # Chat
EOF

# Next, import mozilla-central revisions
for rev in $MC_MORK_IMPORT $MC_EDITOR_IMPORT $MC_SMIME_IMPORT; do
  hg convert $MC $OUTPUT -r $rev --splicemap $WORKDIR/convert-revmap.txt \
    --filemap $WORKDIR/convert-filemap.txt
done

Some notes about all of the revision ids in the script. The splicemap requires the full 40-character SHA ids; anything less and the thing complains. I also need to specify the parents of the revisions that deleted the code for the mozilla-central import, so if you go hunting for those revisions and are surprised that they don't remove the code in question, that's why.

I mentioned complications about the merges earlier. The Mork and S/MIME import codes here moved files, so that what was db/mdb in mozilla-central became db/mork. There's no support for causing the generated splice to record these as a move, so I have to manually construct those renamings:

# We need to execute a few hg move commands due to renamings.
pushd $OUTPUT
hg update -r $(grep $MC_MORK_IMPORT .hg/shamap | cut -d' ' -f2)
(hg -R $MC log -r "children($MC_MORK_IMPORT)" \
  --template "{file_dels % 'hg mv {file} {sub(\"db/mdb\", \"db/mork\", file)}\n'}") | bash
hg commit -m 'Pseudo-changeset to move Mork files' -d '2011-08-06 17:25:21 +0200'
MC_MORK_IMPORT=$(hg log -r tip --template '{node}')

hg update -r $(grep $MC_SMIME_IMPORT .hg/shamap | cut -d' ' -f2)
(hg -R $MC log -r "children($MC_SMIME_IMPORT)" \
  --template "{file_dels % 'hg mv {file} {sub(\"security/manager/ssl\", \"mailnews/mime\", file)}\n'}") | bash
hg commit -m 'Pseudo-changeset to move S/MIME files' -d '2014-06-15 20:51:51 -0700'
MC_SMIME_IMPORT=$(hg log -r tip --template '{node}')
popd

# Echo the new move commands to the changeset conversion map.
cat >>$WORKDIR/convert-revmap.txt <<EOF
52efa9789800829c6f0ee6a005f83ed45a250396 abfd23d7c5042bc87502506c9f34c965fb9a09d1, $MC_MORK_IMPORT # Mork
50f5b5fc3f53c680dba4f237856e530e2097adfd 97253b3cca68f1c287eb5729647ba6f9a5dab08a, $MC_SMIME_IMPORT # S/MIME
EOF

Now that we have all of the graft points defined, and all of the external code ready, we can pull comm-central and do the conversion. That's not quite it, though—when we graft the S/MIME history to the original mozilla-central history, we have a small segment of abandoned converted history. A call to hg strip removes that.

# Now, import comm-central revisions that we need
hg convert $CC $OUTPUT --splicemap $WORKDIR/convert-revmap.txt
hg strip 2f69e0a3a05a

[1] I left out one of the graft points because I just didn't want to deal with it. I'll leave it as an exercise to the reader to figure out which one it was. Hint: it's the only one I didn't know about before I searched for the archive points [2].
[2] Since I wasn't sure I knew all of the graft points, I decided to try to comb through all of the changesets to figure out who imported code. It turns out that hg log -r 'adds("**")' narrows it down nicely (1667 changesets to look at instead of 17547), and using the {file_adds} template helps winnow it down more easily.
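
For reference, a concrete form of that winnowing command might look like the following; the template here is illustrative, not necessarily the exact one used:

# List every changeset that adds files anywhere in the tree, along with the
# files it adds; per the footnote, this narrows 17547 changesets down to 1667.
hg log -r 'adds("**")' --template '{node|short}: {file_adds % "{file} "}\n'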

January 10, 2015 05:55 PM

November 25, 2014

Thunderbird Blog

Thunderbird Reorganizes at 2014 Toronto Summit

In October 2014, 22 active contributors to Thunderbird gathered at the Mozilla office in Toronto to discuss the status of Thunderbird, and plan for the future.

Toronto Contributors at 2014 Toronto Summit

Thunderbird contributors gather in Toronto to plan the future.

As background, Mitchell Baker, Chair of the Mozilla Foundation, posted in July 2012 that Mozilla would significantly reduce paid staff dedicated to Thunderbird, and asked community volunteers to move Thunderbird forward. Mozilla at that time committed several paid staff to maintain Thunderbird, each working part-time on Thunderbird but with a main commitment to other Mozilla projects. The staff commitment in total was approximately one full-time equivalent.

Over the last two years, those individuals had slowly reduced their commitment to Thunderbird, yet the formal leadership of Thunderbird remained with these staff. By 2014 Thunderbird had reached the point where nobody was effectively in charge, and it was difficult to make important decisions. By gathering the key active contributors in one place, we were able to make real decisions, plan our future governance, and move to complete the transition from being staff-led to community-led.

At the Summit, we made a number of key decisions.

There is a lot of new energy in Thunderbird since the Summit, a number of people are stepping forward to take on some critical roles, and we are looking forward to a great next release. More help is always welcome though!

November 25, 2014 06:15 PM

October 15, 2014

Kent James

Thunderbird Summit in Toronto to Plan a Viable Future

On Wednesday, October 15 through Saturday, October 19, 2014, the Thunderbird core contributors (about 20 people in total) are gathering at the Mozilla offices in Toronto, Ontario for a key summit to plan a viable future for Thunderbird. The first two days are project work days, but on Friday, October 18 we will be meeting all day as a group to discuss how we can overcome various obstacles that threaten the continuing viability of Thunderbird as a project. This is an open Summit for all interested parties. Remote participation or viewing of Friday group sessions is possible, beginning at 9:30 AM EDT (6:30 AM Pacific Daylight Time)  using the same channels as the regular weekly Thunderbird status meetings.

Video Instructions: See https://wiki.mozilla.org/Thunderbird/StatusMeetings for details.

Overall Summit Description and Agenda: See https://wiki.mozilla.org/Thunderbird:Summit_2014

Feel free to join in if you are interested in the future of Thunderbird.

October 15, 2014 04:17 AM

October 02, 2014

Philipp Kewisch

Monitor all http(s) network requests using the Mozilla Platform

In an xpcshell test, I recently needed a way to monitor all network requests and access both request and response data so I can save them for later use. This required a little bit of digging in Mozilla’s devtools code so I thought I’d write a short blog post about it.

This code will be used in a testcase that ensures that calendar providers in Lightning function properly. In the case of the CalDAV provider, we would need to access a real server for testing. We can’t just set up a few servers and use them for testing; it would end in an unreasonable amount of server maintenance. Given that non-local connections are not allowed when running the tests on the Mozilla build infrastructure, it wouldn’t work anyway. The solution is to create a fakeserver that is able to replay the requests in the same way. Instead of manually making the requests and figuring out how the server replies, we can use this code to quickly collect all the requests we need.

Without further delay, here is the code you have been waiting for:


Tagged: Mozilla, network, xpcshell

October 02, 2014 02:38 PM

August 06, 2014

Joshua Cranmer

Why email is hard, part 7: email security and trust

This post is part 7 of an intermittent series exploring the difficulties of writing an email client. Part 1 describes a brief history of the infrastructure. Part 2 discusses internationalization. Part 3 discusses MIME. Part 4 discusses email addresses. Part 5 discusses the more general problem of email headers. Part 6 discusses how email security works in practice. This part discusses the problem of trust.

At a technical level, S/MIME and PGP (or at least PGP/MIME) use cryptography essentially identically. Yet the two are treated as radically different models of email security because they diverge on the most important question of public key cryptography: how do you trust the identity of a public key? Trust is critical, as it is the only way to stop an active, man-in-the-middle (MITM) attack. MITM attacks are actually easier to pull off in email, since all email messages effectively have to pass through both the sender's and the recipients' email servers [1], allowing attackers to pull off permanent, long-lasting MITM attacks [2].

S/MIME uses the same trust model that SSL uses, based on X.509 certificates and certificate authorities. X.509 certificates effectively work by providing a certificate that says who you are which is signed by another authority. In the original concept (as you might guess from the name "X.509"), the trusted authority was your telecom provider, and the certificates were furthermore intended to be a part of the global X.500 directory—a natural extension of the OSI internet model. The OSI model of the internet never gained traction, and the trusted telecom providers were replaced with trusted root CAs.

PGP, by contrast, uses a trust model that's generally known as the Web of Trust. Every user has a PGP key (containing their identity and their public key), and users can sign others' public keys. Trust generally flows from these signatures: if you trust a user, you know the keys that they sign are correct. The name "Web of Trust" comes from the vision that trust flows along the paths of signatures, building a tight web of trust.

And now for the controversial part of the post, the comparisons and critiques of these trust models. A disclaimer: I am not a security expert, although I am a programmer who revels in dreaming up arcane edge cases. I also don't use PGP at all, and use S/MIME to a very limited extent for some Mozilla work [3], although I did try a few abortive attempts to dogfood it in the past. I've attempted to replace personal experience with comprehensive research [4], but most existing critiques and comparisons of these two trust models are about 10-15 years old and predate several changes to CA certificate practices.

A basic tenet of development that I have found is that the average user is fairly ignorant. At the same time, a lot of the defense of trust models, both CAs and Web of Trust, tends to hinge on configurability. How many people, for example, know how to add or remove a CA root from Firefox, Windows, or Android? Even among the subgroup of Mozilla developers, I suspect the number of people who know how to do so are rather few. Or in the case of PGP, how many people know how to change the maximum path length? Or even understand the security implications of doing so?
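
To make the point concrete, here is roughly what managing roots in a Firefox profile's NSS certificate database looks like with NSS's certutil; the profile directory, nickname, and file name are hypothetical, and the incantation itself is a fair illustration of why so few users ever do this:

# List the certificates (and their trust flags) in a profile's NSS database.
certutil -d sql:$HOME/.mozilla/firefox/abcd1234.default -L

# Remove a root by its nickname. (Built-in roots ship in a read-only module
# and can only have their trust flags changed, e.g. with -M, not be deleted.)
certutil -d sql:$HOME/.mozilla/firefox/abcd1234.default -D -n "Example Root CA"

# Add a root from a PEM file and trust it as a CA for SSL.
certutil -d sql:$HOME/.mozilla/firefox/abcd1234.default -A -n "Example Root CA" -t "C,," -i example-root.pem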

Seen in the light of ignorant users, the Web of Trust is a UX disaster. Its entire security model is predicated on having users precisely specify how much they trust other people to trust others (ultimate, full, marginal, none, unknown) and also on having them continually do out-of-band verification procedures and publicly reporting those steps. In 1998, a seminal paper on the usability of a GUI for PGP encryption came to the conclusion that the UI was effectively unusable for users, to the point that only a third of the users were able to send an encrypted email (and even then, only with significant help from the test administrators), and a quarter managed to publicly announce their private keys at some point, which is pretty much the worst thing you can do. They also noted that the complex trust UI was never used by participants, although the failure of many users to get that far makes generalization dangerous [5]. While newer versions of security UI have undoubtedly fixed many of the original issues found (in no small part due to the paper, one of the first to argue that usability is integral, not orthogonal, to security), I have yet to find an actual study on the usability of the trust model itself.

The Web of Trust has other faults. The notion of "marginal" trust, it turns out, is rather broken: if you marginally trust a user who has two keys that both signed another person's key, that's the same as fully trusting a user with one key who signed that key. There are several proposals for different trust formulas [6], but to my knowledge none of them have caught on in practice.

A hidden fault is associated with its manner of presentation: in sharp contrast to CAs, the Web of Trust appears to not delegate trust, but any practical widespread deployment needs to solve the problem of contacting people who have had no prior contact. Combined with the need to bootstrap new users, this implies that there needs to be some keys that have signed a lot of other keys that are essentially default-trusted—in other words, a CA, a fact sometimes lost on advocates of the Web of Trust.

That said, a valid point in favor of the Web of Trust is that it more easily allows people to distrust CAs if they wish to. While I'm skeptical of its utility to a broader audience, the ability to do so is crucial for a not-insignificant portion of the population, and it's important enough to be explicitly called out.

X.509 certificates are most commonly discussed in the context of SSL/TLS connections, so I'll discuss them in that context as well, as the implications for S/MIME are mostly the same. Almost all criticism of this trust model essentially boils down to a single complaint: certificate authorities aren't trustworthy. A historical criticism is that the addition of CAs to the main root trust stores was ad hoc. Since then, however, the main oligopoly of these root stores (Microsoft, Apple, Google, and Mozilla) has made its policies public and clear [7]. The introduction of the CA/Browser Forum in 2005, with a collection of major CAs and the major browsers as members [8], also helps in articulating common policies. These policies, simplified immensely, boil down to:

  1. You must verify information (depending on certificate type). This information must be relatively recent.
  2. You must not use weak algorithms in your certificates (e.g., no MD5).
  3. You must not make certificates that are valid for too long.
  4. You must maintain revocation checking services.
  5. You must have fairly stringent physical and digital security practices and intrusion detection mechanisms.
  6. You must be [externally] audited every year to verify that you follow the above rules.
  7. If you screw up, we can kick you out.

I'm not going to claim that this is necessarily the best policy or even that any policy can feasibly stop intrusions from happening. But it's a policy, so CAs must abide by some set of rules.

Another CA criticism is the fear that they may be suborned by national government spy agencies. I find this claim underwhelming, considering that the number of certificates acquired by intrusions that were used in the wild is larger than the number of certificates acquired by national governments that were used in the wild: 1 and 0, respectively. Yet no one complains about the untrustworthiness of CAs due to their ability to be hacked by outsiders. Another attack is that CAs are controlled by profit-seeking corporations, which misses the point because the business of CAs is not selling certificates but selling their access to the root databases. As we will see shortly, jeopardizing that access is a great way for a CA to go out of business.

To understand issues involving CAs in greater detail, there are two CAs that are particularly useful to look at. The first is CACert. CACert is favored by many for its attempt to handle X.509 certificates in a Web of Trust model, so invariably every public discussion about CACert ends up devolving into an attack on other CAs for their perceived capture by national governments or corporate interests. Yet what many of the proponents for inclusion of CACert miss (or dismiss) is the fact that CACert actually failed the required audit, and it is unlikely to ever pass one. This shows a central failure of both CAs and the Web of Trust: different people have different definitions of "trust," and in the case of CACert, some people are favoring a subjective definition (I trust their owners because they're not evil) when an objective definition fails (in this case, that the root signing key is securely kept).

The other CA of note here is DigiNotar. In July 2011, some hackers managed to acquire a few fraudulent certificates by hacking into DigiNotar's systems. By late August, people had become aware of these certificates being used in practice [9] to intercept communications, mostly in Iran. The use appears to have been caught after Chromium updates failed due to invalid certificate fingerprints. After it became clear that the fraudulent certificates were not limited to a single fake Google certificate, and that DigiNotar had failed to notify potentially affected companies of its breach, DigiNotar was swiftly removed from all of the trust databases. It ended up declaring bankruptcy within two weeks.

DigiNotar indicates several things. One, SSL MITM attacks are not theoretical (I have seen at least two or three security experts advising pre-DigiNotar that SSL MITM attacks are "theoretical" and therefore the wrong target for security mechanisms). Two, keeping the trust of browsers is necessary for commercial operation of CAs. Three, the notion that a CA is "too big to fail" is false: DigiNotar played an important role in the Dutch community as a major CA and the operator of Staat der Nederlanden. Yet when DigiNotar screwed up and lost its trust, it was swiftly kicked out despite this role. I suspect that even Verisign could be kicked out if it manages to screw up badly enough.

This isn't to say that the CA model isn't problematic. But the source of its problems is that delegating trust isn't a feasible model in the first place, a problem that it shares with the Web of Trust as well. Different notions of what "trust" actually means and the uncertainty that gets introduced as chains of trust get longer both make delegating trust weak to both social engineering and technical engineering attacks. There appears to be an increasing consensus that the best way forward is some variant of key pinning, much akin to how SSH works: once you know someone's public key, you complain if that public key appears to change, even if it appears to be "trusted." This does leave people open to attacks on first use, and the question of what to do when you need to legitimately re-key is not easy to solve.
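
For comparison, the SSH-style pinning mentioned above is visible in everyday OpenSSH usage; a few illustrative commands, with the hostname as a placeholder:

# ~/.ssh/known_hosts is the pin store: the first connection records the host
# key, and any later mismatch triggers the loud "HOST IDENTIFICATION HAS
# CHANGED" warning rather than silently trusting the new key.
ssh-keygen -F server.example.com               # show the pinned key, if any
ssh-keygen -R server.example.com               # drop the pin after a legitimate re-key
ssh-keygen -lf /etc/ssh/ssh_host_rsa_key.pub   # fingerprint to compare out of band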

In short, both CAs and the Web of Trust have issues. Whether or not you should prefer S/MIME or PGP ultimately comes down to the very conscious question of how you want to deal with trust—a question without a clear, obvious answer. If I appear to be painting CAs and S/MIME in a positive light and the Web of Trust and PGP in a negative one in this post, it is more because I am trying to focus on the positions less commonly taken to balance perspective on the internet. In my next post, I'll round out the discussion on email security by explaining why email security has seen poor uptake and answering the question as to which email security protocol is most popular. The answer may surprise you!

[1] Strictly speaking, you can bypass the sender's SMTP server. In practice, this is considered a hole in the SMTP system that email providers are trying to plug.
[2] I've had 13 different connections to the internet in the same time as I've had my main email address, not counting all the public wifis that I have used. Whereas an attacker would find it extraordinarily difficult to intercept all of my SSH sessions for a MITM attack, intercepting all of my email sessions is clearly far easier if the attacker were my email provider.
[3] Before you read too much into this personal choice of S/MIME over PGP, it's entirely motivated by a simple concern: S/MIME is built into Thunderbird; PGP is not. As someone who does a lot of Thunderbird development work that could easily break the Enigmail extension locally, needing to use an extension would be disruptive to workflow.
[4] This is not to say that I don't heavily research many of my other posts, but I did go so far for this one as to actually start going through a lot of published journals in an attempt to find information.
[5] It's questionable how well the usability of a trust model UI can be measured in a lab setting, since the observer effect is particularly strong for all metrics of trust.
[6] The web of trust makes a nice graph, and graphs invite lots of interesting mathematical metrics. I've always been partial to eigenvectors of the graph, myself.
[7] Mozilla's policy for addition to NSS is basically the standard policy adopted by all open-source Linux or BSD distributions, seeing as OpenSSL never attempted to produce a root database.
[8] It looks to me that it's the browsers who are more in charge in this forum than the CAs.
[9] To my knowledge, this is the first—and so far only—attempt to actively MITM an SSL connection.

August 06, 2014 03:39 AM

July 31, 2014

Kent James

Thunderbird’s Future: the TL;DR Version

In the next few months I hope to do a series of blog posts that talk about Mozilla’s Thunderbird email client and its future. Here’s the TL;DR version (though still pretty long). These are my personal views; I have no authority to speak for Mozilla or for the Thunderbird project.

Current Status

Where We Need to Go

How We Get There

The Thunderbird team is currently planning to get together in Toronto in October 2014, and Mozilla staff are trying to plan an all-hands meeting sometime soon. Let’s discuss the future in conjunction with those events, to make sure that in 2015 we have a sustainable plan for the future.

 

July 31, 2014 08:16 PM