Daniel Stenberg: curl user poll 2015

Now is the time. If you use curl or libcurl from time to time, please consider helping us out with providing your feedback and opinions on a few things:


It’ll take you a couple of minutes and it’ll help us a lot when making decisions going forward.

The poll is hosted by Google and that short link above will take you to:


Luis Villa: Come work with me – developer edition!

It has been a long time since I was able to say to developer friends “come work with me” in anything but the most abstract “come work under the same roof” kind of sense. But today I can say to developers “come work with me” and really mean it. Which is fun :)

Details: Wikimedia’s new community tech team is hiring for a community tech developer and a team lead. This will be extremely community-intensive work, so if you enjoy and get energy from working with a community and helping them achieve their goals, this could be a great role for you. This team will work intensely with my department to ensure that we’re correctly identifying and prioritizing the needs of our most active editors. If that sounds like fun, get in touch :)

[And I realize that I’ve been bad and not posted here, so here’s my new job announcement: “my department” is the Foundation’s new Community Engagement department, where we work to support healthy contributor communities and help WMF-community collaboration. It is a detour from law, but I’ve always said law was just a way to help people do their thing — so in that sense it is the same thing I’ve always been doing. It has been an intense roller coaster of a first two months, and I look forward to much more of the same.]

Mozilla Open Policy & Advocacy Blog: Mozilla View on Zero Rating

Our support of net neutrality is grounded in our belief that we all must fight to maintain an open, global, and growing Internet. Because of the scale and potential of the Internet, it must be an international effort. We see a growing focus on net neutrality around the world and believe that this focus is positive and necessary for the continued health of this valuable global asset.

In India, for example, the focus on net neutrality and the impacts of zero-rating have reached an important inflection point. This week, we sent a letter to the Prime Minister of India supporting net neutrality, in response to an open consultation by the Telecom Regulatory Authority of India on Internet services. The Indian Internet community, including many Mozillians, has spoken out expressing concerns with zero-rating and its impacts on an open Internet. Not surprisingly, we too are concerned, and Mozilla’s Executive Chairwoman Mitchell Baker posted to her blog to identify what those concerns are. The bottom line is that zero-rating may actually NOT connect the world’s unconnected billions to the Internet, in India or elsewhere.

Zero-rating does not at first pass invoke the prototypical net neutrality harms of throttling, blocking, or paid prioritization, all of which involve technical differentiation in traffic management. Instead, zero-rating makes some Internet content and services “free” by excluding them from the data caps that apply to other traffic (which can result in “blocking” of sorts once a user has no data left in a billing period).

The impact of zero-rating may result in the same harms as throttling, blocking, or paid prioritization. By giving one company (or a handful) the ability to reach users at no cost to them, zero-rating could limit rather than expand a user’s access to the Internet and ultimately chill competition and innovation. The promise of the Internet as a driver of innovation is that anyone can make anything and share it with anyone. Without a level playing field, the world won’t benefit from the next Facebook, Google or Twitter.

There are many things we still don’t know about zero-rating. It’s a relatively new business model and there is not a lot of data about its benefits or harms, so we don’t know with certainty what the long-term effects will be. We don’t have data on substitutability – how many users will reduce or even stop their open Internet use because they have to pay for it, while walled-garden offerings are free to them. But we do have data indicating that a significant percentage of people confuse “the Internet” with “Facebook” – in part because of Facebook’s Internet.org initiative – notably including a global survey by Quartz in which over half of respondents agreed with a statement equating Facebook with the entire Internet.

There’s also missing data on the other side of the equation. There may be markets where affordability hurdles to access remain so significant that mobile networks can’t reach economies of scale to keep prices down. It may be possible that access to zero-rated services will help give previously unconnected users a “taste” of the Internet, leading them to demand access to the open Internet itself. The truth is we don’t know.

Still, prohibition through legislation or regulation, a path some governments have taken or are considering, may not be the right answer. Taken to an extreme, regulation could chill some innovation and could result in industry not taking collective action. Even worse, regulation could allow governments to determine which content could or should be zero-rated – and the benefit of net neutrality is that no entity should get to decide which content a user has access to. Different markets and political environments require individual analysis. In some contexts, such as Netflix’s abandoned zero-rating plans in Australia, resolution may occur as a result of public pressure, without formal action.

We understand the temptation to say “some content is better than no content,” choosing a lesser degree of inclusion over openness and equality of opportunity. But it shouldn’t be a binary choice; technology and innovation can create a better way, even though these new models may take some time to develop. Furthermore, choosing limited inclusion today, even though it offers short-term benefits, poses a significant risk to the emergence of an open, competitive platform and may ultimately stifle inclusion and economic development.

There are alternative approaches that could serve as solutions to the challenges that zero-rating seeks to address. For example, Mozilla has sought to create such an alternative within the Firefox OS ecosystem. Our partnership with Grameenphone (owned by Telenor Group) in Bangladesh allows users to receive 20 MB of data usage for free each day, in exchange for viewing an advertisement. Our partnership with Orange will allow residents of multiple African countries to purchase $40 Firefox OS smartphones that come packaged with 6 free months of voice, text, and up to 500 MB per month of data. Scaling up arrangements like these could represent a long-term solution to the key underlying problems of digital inclusion and equality.

Likely, the solution will be found in some combination of: new approaches and business models; potential increases in philanthropic engagement as Mitchell’s post suggests; and technology and business innovations to reduce the costs of connectivity. But whatever the mix is, preserving the level playing field that drives innovation and competition on the Internet must be the baseline.

We’ve tried to outline here some of the positive and negative issues associated with zero-rating. More education about these issues, and about affordability and accessibility challenges, will be part of working out the right solutions. Multi-stakeholder roundtables and incubation challenges around alternative solutions to affordability problems are also likely fruitful pathways. Or maybe solutions will come from academia and think tanks, through research-driven white papers. Mozilla will be exploring these options further in the months to come.

We look forward to working with the Mozilla community, others in industry, civil society, governments and other actors to think through how best to provide everyone with access to the full diversity of the open Web. We hope you’ll join us in these conversations.


Mitchell Baker: Zero Rating and the Open Internet

One of the challenges of our time is how to make Internet access and use a realistic possibility for the billions of people who cannot afford the data charges.  An attitude of “just wait, eventually this will work out” is not acceptable.  Such an approach would reinforce the global digital divide; it would keep a […]

Jan de Mooij: Using Rust to generate Mercurial short-hash collisions

At Mozilla, we use Mercurial for the main Firefox repository. Mercurial, like Git, uses SHA1 hashes to identify a commit.

Short hashes

SHA1 hashes are fairly long, a string of 40 hex characters (160 bits), so Mercurial and Git allow using a prefix of that, as long as the prefix is unambiguous. Mercurial also typically only shows the first 12 characters (let’s call them short hashes), for instance:

$ hg id
$ hg log -r tip
changeset:   242221:312707328997
tag:         tip

And those are the hashes most Mercurial users use; for instance, they are posted in Bugzilla whenever we land a patch, etc.

Collisions with short hashes are much more likely than full SHA1 collisions, because the short hashes are only 48 bits long. As the Mercurial FAQ states, such collisions don’t really matter, because Mercurial will check if the hash is unambiguous and if it’s not it will require more than 12 characters.

So, short hash collisions are not the end of the world, but they are inconvenient because the standard 12-char hg commit ids become ambiguous and unusable. Fortunately, the mozilla-central repository at this point does not contain any short hash collisions (it has about 242,000 commits).
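That is in line with a quick birthday-bound estimate (my own back-of-the-envelope arithmetic, not from the post): with about 242,000 commits drawn from a 48-bit space, the chance of any short-hash collision existing is only around 1 in 10,000:

```python
import math

def collision_probability(n_commits: int, bits: int = 48) -> float:
    """Birthday approximation: P(collision) ≈ 1 - exp(-n(n-1) / 2^(bits+1))."""
    return 1.0 - math.exp(-n_commits * (n_commits - 1) / (2 * 2 ** bits))

p = collision_probability(242_000)
print(f"P(collision among 12-char hashes) ≈ {p:.2%}")  # ≈ 0.01%
```

So a repository of this size is expected to stay collision-free; only around a few million commits does the probability become substantial.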

Finding short-hash collisions

I’ve wondered for a while: can we create a commit that has the same short hash as another commit in the repository?

A brute-force attack that commits and then reverts changes to the repository should work, but it’d be super slow. I haven’t tried it, but it’d probably take years to find a collision. Fortunately, there’s a much faster way to brute-force this. Mercurial computes the commit id/hash like this:

hash = sha1(min(p1, p2) + max(p1, p2) + contents)

Here p1 and p2 are the hashes of the parent commits, or a null hash (all zeroes) if there’s only one parent. To see what contents is, we can use the hg debugdata command:

$ hg debugdata -c 34828fed1639
Carsten "Tomcat" Book <cbook@mozilla.com>
1430739274 -7200
...list of changed files...

merge mozilla-inbound to mozilla-central a=merge

Perfect! This contains the commit message, so all we have to do is append some random data to the commit message, compute the (short) hash, check if there’s a collision and repeat until we find a match.
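The post’s actual tool is written in Rust, but the idea can be sketched in a few lines of Python using `hashlib` (this is my own illustrative simplification: `commit_hash` and `find_collision` are hypothetical names, and the real changelog contents have more structure than a bare message). To make the demo finish instantly it collides on a 4-hex-character prefix instead of the full 12:

```python
import hashlib

NULL_ID = b"\x00" * 20  # Mercurial's null parent hash (all zeroes)

def commit_hash(p1: bytes, p2: bytes, contents: bytes) -> str:
    """hash = sha1(min(p1, p2) + max(p1, p2) + contents), as in the post."""
    return hashlib.sha1(min(p1, p2) + max(p1, p2) + contents).hexdigest()

def find_collision(prefixes: set, p1: bytes, message: bytes, prefix_len: int):
    """Append an incrementing counter to the commit message until the
    short hash matches one of the existing prefixes."""
    i = 0
    while True:
        h = commit_hash(p1, NULL_ID, message + str(i).encode())
        if h[:prefix_len] in prefixes:
            return str(i), h
        i += 1

# Demo: collide on a 16-bit (4-hex-char) prefix, found in ~65k tries.
parent = hashlib.sha1(b"parent commit").digest()
target = commit_hash(parent, NULL_ID, b"an existing commit")[:4]
suffix, h = find_collision({target}, parent, b"Some message", prefix_len=4)
print(f"append {suffix!r} to the message to collide: {h[:4]} == {target}")
```

The same loop with `prefix_len=12` and the full set of repository prefixes is exactly the brute-force search described below, just much slower in Python than in Rust.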

I wrote a small Rust program to brute-force this. You can use it like this (I used the popular mq extension; there are other ways to do it):

$ cd mozilla-central
$ echo "Foo" >> CLOBBER # make a random change
$ hg qnew patch -m "Some message"
$ hgcollision
Got 242223 prefixes
Generated random prefix: 1631965792_
Tried 242483200 hashes
Found collision! Prefix: b991f0726738, hash: b991f072673876a64c7a36f920b2ad2885a84fac
Add this to the end of your commit message: 1631965792_24262171

After about 2 minutes it’s done and tells us we have to append “1631965792_24262171” to our commit message to get a collision! Let’s try it (we have to be careful to preserve the original date/time, or we’ll get a different hash):

$ hg log -r tip --template "{date|isodatesec}"
2015-05-05 20:21:59 +0200
$ hg qref -m "Some message1631965792_24262171" -d "2015-05-05 20:21:59 +0200"
$ hg id
b991f0726738 patch/qbase/qtip/tip
$ hg log -r b991f0726738
abort: 00changelog.i@b991f0726738: ambiguous identifier!

Voilà! We successfully created a Mercurial short hash collision!

And no, I didn’t use this on any patches I pushed to mozilla-central… ;)


The Rust source code is available here. It was my first, quick-and-dirty Rust program, but writing it was a nice way to get more familiar with the language. I used the rust-crypto crate to calculate SHA1 hashes; installing and using it was much easier than I expected. Pretty nice experience.

The program can check about 100 million hashes per minute on my laptop. It usually takes about 1-5 minutes to find a collision; this also depends on the size of the repository (mozilla-central has about 242,000 commits). It’d be easy to use multiple threads (you could also just run multiple processes), and there are probably a lot of other ways to improve it. For this experiment it was good and fast enough to get the job done :)
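For reference, a rough order-of-magnitude estimate (my own arithmetic, assuming uniformly distributed hashes): each attempt matches one of the N existing 48-bit prefixes with probability N / 2^48, so the search is geometric with a mean of 2^48 / N attempts. Individual runs scatter widely around that mean, which is why single runs can finish much faster:

```python
n_prefixes = 242_000           # commits (target prefixes) in mozilla-central
rate_per_min = 100_000_000     # hashes checked per minute, per the post

expected_tries = 2 ** 48 / n_prefixes
expected_minutes = expected_tries / rate_per_min
print(f"expected attempts: {expected_tries:.3g}")   # ~1.16e+09
print(f"expected minutes:  {expected_minutes:.1f}") # ~11.6
```

The estimate also shows why the repository size matters: doubling the number of commits halves the expected search time.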

Michael Kaply: Firefox ESR 38 Overview

This post will provide a high level overview of changes coming up in the next Firefox ESR. This list is primarily focused on changes that will impact enterprise users. It is not intended to be an exhaustive list. For a list of all the changes, see the release notes links.

Note: Firefox Hello and Encrypted Media Extensions will NOT be part of the ESR.

Firefox 32

Firefox 33

Firefox 34

Firefox 35

  • Firefox Marketplace Menu and Button
  • New Search UI in more locales
  • Release Notes (35.0, 35.0.1)

Firefox 36

Firefox 37

Firefox 38

  • Preferences in tabs
  • Release Notes (38.0)

My plan is to have a new CCK2 beta that coincides with the Firefox 38 release that will allow for disabling some of these new features. It's a beta because it also has the new code for no longer using the distribution directory.

If I missed something, please post it in the comments.

Air Mozilla: Webdev Extravaganza: May 2015

Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on.

Mihai Sucan: Touched

This article is also available in Romanian here.

Hello everyone!

This is most likely my last article here, and I apologize for its length and for it not being a fun read; it is not going to be about my usual technical subjects. I am also going to tag Planet Mozilla, for reasons that will become obvious here.

This is hard to write, but here we go: I have recessive dystrophic epidermolysis bullosa (RDEB), which means I have had fragile skin since birth. This has had an important impact that I will not detail here; you can read everything about it on dedicated websites.

As I mentioned in my previous article the cancer metastasis is going to most likely end my journey here - unless I am going to live through yet another miracle in my life.

I have never written publicly about my situation, which is also shared by Marius, because we are both proud of what we do, we do not want people to feel pity or anything like that. We want you and others to know us by our work, our achievements, and such. People easily get emotional and too supportive once they know the situation. Their actions never made us feel comfortable.

What changed? Having been given a prognosis of several weeks or months, I feel that trying to maintain this pride is not going to help me achieve one of the goals I still have.

One of my current goals is to raise awareness of EB and in particular the ongoing EB research, which is very promising. I am fully convinced that children born with EB will no longer have to go through the same hardships as Marius and I did.

The running theme of this article is how I was touched by the people I met. How you have all made my life much better, and ultimately how you can also make a positive difference for others.

I will dedicate the rest of this article to making a case for donations, why they are important for you and others. I am going to put things in my context, otherwise I feel like leaving matters unexplained will not make a strong enough case for my goal, my wish for others.

This is where you, the Mozilla community, and anyone who listens come in. You can help others get treatments sooner than later by supporting EB research.

In the 27 years of my life, I have met a few people who made a huge impact on my life, in different ways. I will start with the charities that have donated a lot of dressings to us. For EB patients, wound care is an integral part of daily life, and the special dressings we received were essential to making wound care much easier and safer. Among these charities I have to name Debra UK and an amazing person, Agnes Beveridge. Thank you Agnes. Since its beginnings, Mini Debra from Romania has also been very helpful.

The people I made friends with, and their impact on me, are very important. A chemist and engineer with the spark of a geek; as an older male figure, he was influential. We played lots of chess games and we had fun. Thank you Dan for supporting our geeky nature in the early days and for staying a friend forever.

Around 1999, when Marius and I first went online, one of the goals we had was e-learning. In the tech age of those days e-learning was a lofty concept. Little did we care or know, so we contacted a professor from a Romanian university, as easily as writing an email to a friend, even more mundane. :-) From that silly email we got to what it is today: we made a friend and found a mentor of online conduct and education. Thank you Prof. Mihai Jalobeanu.

The third person who I want to mention here is a cousin who grew up to become a Catholic priest. As a student, he spent many summer vacations with us, returning from Rome, Italy every year. We learned so many things from him, about culture, philosophy, history, religions, faith and many more. We played a lot of chess as well. His influence cannot be quantified. Thank you Simon.

Much respect and appreciation to Professor Ioan Dzițac, who coordinated my bachelor's degree and master's thesis. He is one of the professors who make Romanian academia better every day, together with a few others like Prof. Jalobeanu.

Before I became a Mozillian I was an Opera browser user, and as an aspiring Web developer I became involved with beta testing pre-releases of their browser. This was during my high school years and slightly after, starting in 2006. Those years, and the influence of the Opera team and the other enthusiasts I met online, shaped my skills and interests. I became more interested in the open Web and standards. Thanks to everyone at Opera; you were really great people. Your support for the open Web was important to me.

Proud to be a Mozillian

No, not because of the company, but the people who make Mozilla. It is an honor to have such colleagues.

My latest experiences are those with the Mozilla team. I was lucky to join the Firefox developer tools team in 2010, when the team had just formed. I was quite proud, hehe. :-)

In 2012, after working with the team remotely, I finally got the courage to be part of the regular team meetings in various Mozilla offices around the world. The first meeting for me was in London.

When I arrived in London, March 2012, I felt like dreams can and do come true. It was seemingly impossible for me to do that. To work for a great company like Mozilla, to meet some of the makers of the Web, to travel to fancy places, not hospitals, not tourism, but a work meeting, and no pity. I felt proud, but I also felt the burden of the amount of work and help others have put into making this happen. Mozilla on one side and my family with their tireless support. I did things to get to this "achievement", but they could not have happened without the endless help from others, their love. With all that greatness, there was also the disappointment with the amount of work I was giving others, just to "get my way".

It feels weird and uncomfortable that my fight for life, for living through the experiences I wanted, to work and travel, really means that others get more work to do for me. My push forward gives others work, and I need a lot of help. The concept of burden quickly comes to mind, but then should I give up? I almost always chose not to give up. For me, saying no to things I wanted meant giving up on life bit by bit. Whenever I had to decide whether to go to a developer tools team meeting I faced this dilemma. Go and get what I want, or let it all be?

I remember how nice it was to meet everyone, the first dinner, with my colleagues and Johnathan Nightingale. People who you remember forever. I was impressed with how accepting people were. I recall meeting Chris Lord (gecko graphics layer work for Android at that time, iirc) - we had a natural conversation without any awkwardness. Back home I was used to people asking what is wrong with me. I appreciated the respect and professionalism. My disabilities did not matter in these contexts.

I remember with pleasure how mom learned word by word things to ask for in English, in these meetings, at dinners or various places where she was with me and my colleagues. Mom was asking for butter, spoons and other things in Mountain View offices, and so on. :-)

One evening in the first week with the team: we went to a nice restaurant for dinner. Once done, I went with mom and Rob (my manager at the time) in a cab. As I went up the ramp, the wheelchair tipped backwards; in a second I could have been seriously injured. Rob and mom grabbed the chair, but the driver grabbed my arm from the hand down to the elbow. That, obviously, caused a big wound underneath the clothing. Nobody saw it; mom assessed it later; I did not scream or anything like that. Still, everyone saw, including other colleagues who were around, how fragile these simple moments are for me. It gave everyone a "good scare". I felt relieved nothing worse had occurred. That silly wound did not matter to me. I was more than happy to be there; things like that happen at home as well. It is all about enjoying life, irrespective of such nuisances.

Another story is again with Rob as my manager at the time, during the second meeting in London, autumn 2012. At dinner I failed in epic ways to eat, due to dysphagia. After half an hour of nonstop coughing at the table, I gave up and left with mom for the hotel room. My face was all red, I was sweating, etc. This was quite embarrassing but "normal" back home, yet it was disturbing for others to see. Rob was touched by the situation and he also went back to his room. Mom was in tears, obviously. Back in my room, I had an online chat with Rob.

In London again, I will not forget eating milk with some kind of dough, mixed by a colleague, Heather. I am sure it was not much of a big deal for her to help me with that. Yet, I appreciated her help and kindness.

Another big decision was for me to go to a Mozilla team meeting in Sunnyvale, California, in spring 2013. That was quite a task. As usual, Mozilla was very helpful and supportive. At the destination, Alex, my older brother, was also ready with his support.

Once I was in Sunnyvale I felt again that dreams do come true. Me in Silicon Valley, in California, seeing all the tech companies there. It felt epic. Mom was happy as well. I felt that from my room back home, where I went through so many hard times, so much work and study, I was able to go beyond that - there was a really good outcome, finally.

In Sunnyvale my current manager, Joe, helped me get inside a restaurant by carrying my chair, together with Anton - another colleague. I held on tightly to one of their arms, not because of the fear of wounds, but because of the potential embarrassment that a small wound would cause. As previous experiences tell me, in a split second any simple thing can turn out quite bad. And... when I am with others, I can see they are not so much aware of the situation they are in control of. I just did my part: hold on strong! :-)

Another story of Mozillians being awesome: we went to a dinner and I returned only to find that some silly parts of the power wheelchair had fallen off. This was still in Sunnyvale, and this time Dave (one of my previous managers), together with Anton, helped me again. They spent half an hour or so mounting that silly wheelchair part back on. Lots of sweat went into it. They did not give up, and I appreciated it. I was speechless.

I will never forget how pleasant it was to have the technical discussions, to watch the talks and demos of my colleagues, and have the informal chats during dinners and such. Talking to Jim, Paul, Mik, Eddy and more of them.

Much respect and appreciation for the whole Firefox developer tools team in Paris, August 2013, where they all applauded Cecilia during a dinner in a fancy restaurant. As a nurse and cousin she was there to be my assistant at that time (she went with me instead of mom). Thank you Mozilla for your beautiful recognition of her efforts and help. That was a very touching moment for me. Of course, thank you Cecilia as well.

And I did not forget the birthday cake I got in Paris, end of August 2013. :-) Mozilla ftw!

In 2013 Marius had a cancerous tumor and his foot amputated. Several months of problems and distress for the whole family. Around that time one of our German friends, from Marius's circle of friends, was fighting cancer as well. Philipp Althoff passed away that year, and it felt quite sad to see how one person can live another day while someone else does not. Why? I mention Philipp here to remember him, his work and his spirit. Thank you Philipp for being a great friend.

Lots of thanks to the German friends that Marius met online many years ago, Michael Auerbach, Dennis Schubert, Nina Markiewicz, Jan Frischmuth, Boris Eissrich and the rest of the bunch. Their support is not forgotten.

Around the same time I was reading the blog articles written by Eric Meyer, known in the Web technology community, about his daughter Rebecca's fight with cancer. Very touching; they made me tearful. I just want to publicly thank Eric for his strength and courage to write about such a hard topic. Inspiring. This article I am writing here would not have happened without his touching words. (I do not know Eric personally.)

I would also like to mention fellows from the Web technology community who are fighting very hard for their lives and they do it in their best possible ways: Molly E. Holzschlag and Gervase Markham. Their stories and strength are inspiring as well. Best of luck to you both and lots of courage. Thanks for your great work. Please support Molly with a donation.

Last year (2014) I went to the St Thomas’ hospital in London. I did not know what to expect from the team of doctors. I did not expect miracles or perfection. On the contrary, the main doctor I was in contact with has specifically been honest with me about my prognosis, ever since August. There was no cure from the start, but I went there to try the best possible ways to fight the cancer and dysphagia.

Overall, the months I spent there do not feel like a hospital stay. They were months of experiences with people. I met people who are different, special and loving. The team of doctors and nurses was so much better than my previous experiences. There I met new people from Mozilla as well, and made really good friends. Catherine and Jess, you are angels, for lack of better words. Many of my colleagues visited me in the hospital, and my manager visited regularly. Thank you all very much.

Mozilla was amazing in the given situation. There is no way for me to thank them for their support. I had hoped to do so with more work, to get back into the devtools projects and help as much as I could.

All these stories are about people who make a difference, with a small or big gesture. Every meeting with my colleagues felt humbling in a good way. There are times when others go out of their way to help you, and you do not know how to thank them and cannot give back. It makes you think.

All the medical procedures that I went through in London are insignificant compared to the experience of getting to know those people.

I mentioned people who I thank for various reasons, but there are more. Friends and relatives back home who are helpful, supportive and kind. People who have hosted my early websites and others who I worked with. I do not know if trying to list their names has any point here, and I will most likely forget someone important. Over the past weeks I have been working on sending them my gratitude, individually, face to face when possible, or online.

Some conclusions

All of the experiences I had bring me to some conclusions:

  • What matters is not my work, skills, money or education. I do not feel too much about these. It is nice what I did but... Meh.
    Work is great and having a great job is awesome. I was lucky enough to get to work with Mozilla and to have managers that I cannot complain a single word about. It is not often people have such good jobs. Nonetheless, work is just an activity that is part of keeping your mind sane, to enjoy life. Your team makes work more enjoyable or less so, it is not only about the projects.
  • It is the experiences and the people who touched my life and helped me; it is the events and the accumulated life that made these 27 years "better", worth it.
  • It is you who makes life better for others around you. Yes, people are annoying and frustrating many times, but in the end they also give life meaning and worth, a purpose.
  • In the end I feel like I did not do much for others. I received a lot of love and help from others, gave little back. I lived a life of fighting for myself. I always hoped that someday I will be able to do more. I wanted to have a family, like anyone else, to do things for my family, not just for myself. Love.
  • Writing this blog post is a minimal effort in this short time. I hope others who read this silly, lengthy article will ponder what they do, and maybe someday they will be better prepared to make a more positive impact on others than I did. That would be a huge win for everyone involved.
  • I still cannot point to the purpose of my life, which is kind of sad. What was the sense of it all? Maybe having the answer does not really matter as much as you would expect. I am sure that the impact and importance of anyone's life cannot be estimated. There is not much point in being stressed about things you cannot ever know. Nonetheless, I wish I knew what the point of all the hardship was…
  • Thank others for their help and support. Do not take it for granted, and never forget that you cannot truly thank those who have helped you most. Think about the person who helped you most, and what you can do for them. I cannot match what Mozilla did for me, my family, the doctors and others. I cannot even match what the nurses did for me.

It is ironic for me to make such recommendations, because if you had known me all of my life, you would think it's not truly me. I did fail to do the things I am suggesting here. I was not this "nice"; as explained already, having a lot of health problems, I did not get enough peace to pay attention to such details. I was focused on living, like an animal. It is only in the past years that I started learning the importance of such aspects. I am not a "new person" or anything like that, but it is good to be aware of these matters. They slowly change you.

All of these things I am writing about here lead me to suggest ways you can make a change in the lives of others outside your circles. Do not wait for anything. You can help friends, your family and relatives, but I see a lot of value in changing things far away from you. Do not focus only on "egoistic" ways of helping.

There is some kind of egoism in helping others. It makes you feel better, but it is a good kind, where everyone involved wins.

Choosing to help people you know is really difficult sometimes. You know that the money you give may not be used as you believe it should. You worry that the help you offer does not always reach the intended outcome. You worry about what others think of your gesture, others who probably feel they would need similar help. You get into the politics of family, relatives and friends. In the end it is easy to give up on making a meaningful and consistent difference for others. These are the kinds of things I have been thinking about lately. This is why I choose targeted donations to medical research. You do not need recognition or fame; just help make the lives of others better. If you give all your energy to someone, you cannot ever expect them to be able to thank you properly. It is impossible. Do this only if you truly never expect or care what happens after you help that person.

Think about making a donation for medical research.

I would be very happy if you donated money for EB research, but I would be equally happy if you picked any other medical research center to donate to. It is more important that you are happy to have made a positive impact; do not mind me.

Why EB research? Because EB research is going quite well and there are clinical trials that will help future patients with EB avoid a lot of the hardships associated with the condition. The condition is sufficiently well understood nowadays, and researchers are at a point where they are working on several approaches for treating it or greatly reducing its impact. There are cell therapy, protein therapy, genetic patching and other approaches, each under testing at various stages and with various levels of success. It will still take years before patients get such treatments, and they will not be magical. They will not fix everything, but making them available is essential to improving the quality of patients' lives. For us, a reduction of skin fragility by any percentage would mean the difference between life and death.

I recommend you read a paper that summarizes the current state of EB research and where it is headed: Advances in understanding and treating dystrophic epidermolysis bullosa by Michael J Vanden Oever and Jakub Tolar, published in May 2014. EB is a rare condition in a fortunate position: it is much easier to study and understand than many other conditions.

You should donate to help others get the treatments sooner.

This is the list of funds I trust for supporting EB research:

The Sohana Research Fund is in the UK, and I really appreciate how much Sohana does to raise awareness about EB. Her impact on the world is already quite important, having raised a lot of funds for research. Thank you Sohana, keep it up!

I would suggest you think of making a donation once per year, instead of buying a new phone or a new laptop, depending on how often you replace things. See if you regret the choice. Can you skip an upgrade every couple of cycles? Pick any other device that is similarly acceptable to continue using. That is the amount you should donate. Think of the luxury you have. If you buy cars like others buy smartphones, then skip buying a new car and donate that money.

Medical research campaigns do not really go viral. We do not get a ton of people donating $5. You should consider donating as much as you are comfortable with.

Thank you very much for taking the time to read this article up to here, and even more so if you go ahead with making a difference.

The above concludes the first part of the article. I wrote more about the topics I touched upon in the previous sections, and I feel they belong on a single page, even if the whole document is quite lengthy. It has been suggested that I split this part away; however, I believe everything here flows together and is related to how others have touched my life, for the better.

Please feel free to stop here, or go ahead if you want to know more about my perspectives on the topics below.

This whole article is not trying to give you some brilliant advice or ideas that were never heard before. It is mainly intended to send some of my thoughts to the circle of people who I can reach online. Thoughts that I hope will be positive.

Thank you.

On trust and choices

I did not much trust the idea of making donations to anyone, to any charity or organization. Here in Romania, at least, there is unfortunately a generalized mistrust of giving your money to anyone. You do not know what the money is used for.

My answer to the problem of trust is that you should, indeed, never give your money to anything you do not trust or care about. No problem. However, you are the one responsible for finding the organizations or causes you support and trust, if you ever want to do something like this. I did this last year. My experience in London taught me to appreciate the importance of medical research and of donating to support it. St Thomas’ hospital in London relies on funds from various charities for a lot of its EB research. They even had a dermatologist from Sydney working with the team for a year to learn more about EB, paid from the budget of a charity, from donations. That's epic for me. I'm glad to have met doctor Susan Robertson and to be her patient. She went back to Sydney and I am hoping her additional experience will benefit more EB patients.

I saw how much more support charities like Macmillan get in London, compared to what I was used to. It is a different culture of giving back, unlike what I see back home. I am not saying everyone in the UK is so giving, but I think it is the result of a better quality of life sustained over more years than in Eastern Europe.

Mistrust blocks any chance to do or experience something good, something beautiful. You get stuck in inaction. If we do not trust anyone, how do we ever get any research done? How do we find love? Make great friends?

A lot of medical research happens only through donations, because states and universities do not allocate much funding for rare conditions. Cancer surely gets a lot of funds, but we can prevent EB patients from getting cancer by working on the main problem, which is more tractable.

I am not saying you should not make a donation for cancer research, far from it. Just make your choice. Do not let your current lack of trust and indecision prevent you from making donations. Find a charity or organization you trust to donate to - there are trustworthy ones.

I would like to point out that Europeans and Americans, at least, live in very good conditions that we take for granted too easily. The majority of us can afford basic health care, a modest job, more than enough food (even if we complain about the quality), access to information, transportation, travel, technology and entertainment of many kinds. When has the human race ever been so capable of providing such high standards of living for so many people? We have so many gadgets and we keep buying new ones as they come out.

On charities I would like to say one thing that bothers me: far too often you see their websites and presentations filled with pity and emotional content. Sad photos of EB patients and wounds, dressings, etc. You are given the impression that EB patients only know suffering and a life of hell, with no hope of ever doing anything with their lives, except that you should give them money to help them live some more. Ironic and harsh. Why should someone donate money for that? This kind of messaging drives people away, and it even makes it too intense for any interested person to learn more. Even I sometimes disable images in the browser just to be able to focus on the content. If anyone wants photos and videos of EB patients, they should have a dedicated section. I am not saying that showing the details of the condition is wrong. We need photos and videos, but not straight in your face. This is one reason why I do not show myself in public photos. Some feel too much when they see me.

My previous comment applies to other conditions as well, and it does not apply to all EB charities either. Actually some EB-related websites are very well done.

Some families personally present their cases. In such situations that is a lot more acceptable, because it is a personal choice. It also takes great courage to go public and campaign for what you want. I am weaker than that. :-)

My only gripe is with some charities that should encourage us to donate to research and show us the potential of patients. We are not limited to a world of pain.

I do not want this article to be a sad one; it may very well be, but for different reasons. Making a donation is about supporting a better life for future generations. It is about helping today's children reach their potential sooner and in better health. They all have great potential in their lives. It is also about giving purpose and more meaning to your life, touching others further away from yourself.

On love

This is probably the hardest part to write down, because it feels like every girl I loved would deserve a whole section. :-) Silly me. I also do not want any of them to feel like X was "better" than Y for some silly reason. Each love is unique and it never truly disappears. Each person is unique and special.

Like almost anyone, I wanted a family, starting with a girlfriend and all the normal things in life. Given the situation I am in, this is obviously quite a task. Marius wrote an eBook on being a person with disabilities, which includes ample sections about the problems we face in these kinds of situations, in relationships.

Over the years I met some special girls, both offline and online. Each relationship failed for various reasons which I usually blame on my condition, the typical scapegoat. Special thanks go to Cristiana (lily), Corina, Livia, Alina and Claire.

These relationships failed before they even turned into anything like a proper girlfriend-and-boyfriend thing. With Cristiana it was just my first online-only thing; I wanted more but nothing happened. With Corina things were offline, we were neighbors, and the thing ended when I wanted more than a friend, but obviously the situation was more complicated. With Livia, again, we had a good friendship which ended with a lot of suffering when I wanted more. Things were even more complicated, with too many mistakes. With Alina I kept an online friendship for almost 8 years before I had the courage to tell her my feelings, lol. We only met face to face a couple of years ago, when she came to Arad with her job for a short while.

Even with these unsuccessful relationships I feel it is much better than nothing at all. There was something, with each person. I know I will always be in their hearts. I know this sounds silly and optimistic, but there is more than that. Surely their feelings do not match mine, they cannot, because everyone feels things differently. Having even this (small?) amount of love and these experiences is really valuable.

I am going to focus a bit on just the latest special girl I met, and there is a good reason for that: Claire. I met Claire in November 2014, less than a week before I left St Thomas’ hospital. She was there in the same ward as myself, as a patient, for several weeks. When I saw her a couple of times walking downstairs I was pleasantly surprised to see another patient there able to smile, to be gracious. She was obviously going through hard times, but nothing mattered. It is rare to see something like that. Someone else who is that strong. I told a friend she was like an angel. It is silly, but so many patients in hospitals are disgruntled and sad. She was different. I know myself: I smile going into surgeries and I come out smiling, except once when I was not feeling well enough. Seeing Claire reminded me of my own way of being.

Like in silly movies, I asked one of the nurses for Claire's phone number, so we met online and in the ward, and started to talk almost daily. I met her again in January this year, at the same hospital. Unfortunately, she continues to have health issues.

Claire is a smart girl; she is pursuing a PhD in law, and you can have a very lively discussion with her on politics, the economy, faith and other topics. Did I mention she is kind? :-)

I do not want anyone to feel any kind of pity here. The point I am trying to make is that there are special people out there with different medical conditions who need your support, and they make the best of their lives. Claire does as well. She has Lupus, and if you want, please go ahead and make a donation to the St Thomas' Lupus Trust at St Thomas’ hospital in London, as per her wish. She fully appreciates and supports the team working there on Lupus.

Now I know you might wonder why I am suggesting people donate to both EB and Lupus research. I could avoid mentioning Claire and Lupus, and just make a call to action here for EB research. But I am not trying to convince you to pick one particular charity. Just go ahead with making donations and supporting the causes you choose.

I write here about Claire because it is the least I can do in her honor. In these past weeks I have been thinking a lot about how I can give back or show my love to the people I care about. This is one way, for Claire. It is difficult when you want to help or do something important, meaningful, for a special person and you cannot find anything to do. I see this with others who would do almost anything for me to get over this cancer. They feel powerless. I feel powerless as well. I cannot help Claire and others.

I feel that it would be egoistic to ask everyone here to simply donate to the cause I care about. I did that too much in 27 years. Now you choose if and how you touch others.

I also like the idea of having an impact in a completely unexpected direction in this world. I never knew about Lupus until I met Claire.

On faith

A topic closely related to love is faith and God. I want to mention that we, Marius and myself, were typically the target of various religious fanatics, as we called them. From early on they wanted to show us the light and love of God and even, recently, that of Allah (which is the same, but not quite). What they initially achieved was to cause rejection from both of us.

My cousin Simon helped by explaining things on demand, not like a spammer. To me, belief in God starts with experiences; I cannot believe just what I hear or read.

From my experience I would say that there might be something out there beyond human grasp that we cannot define, like the afterlife and deities. Religions are only attempts to explain these matters. To claim there is only one truth is a big mistake. Humanity should not take that much pride in what it does. I am closest to God and Catholic views only because of social context, but this is not necessarily the "best" or "worst" deity out there, whatever that would mean.

On miracles, I want to point out that too often we want things to just happen, like in movies. We do not notice true miracles. The things Marius and I achieved were not something one would rationally bet on happening. Small achievements compared to what others have done, yet better than just a life of EB. Less than 10 years ago you could have asked any doctor to make an educated guess about us, and they would not have predicted anything like how things really turned out. Seemingly impossible things can happen, even if the chances are very small. You only need to try and to have courage, to persevere.

I have an amazing family, got to have a great job, met great people, travelled to cool places, etc. Others do not get these, even if they have similar or the same condition. Is it all a coincidence or "little miracles"? I do not know, but I could have had it much worse, and throughout the years I "dodged" death perhaps more times than I can remember.

Even now, facing the prognosis I have, I cannot be sure about it. Until the very end there is always a way. I am entirely convinced that this cancer can be cured by today's medicine, that is, if we include alternative medicine as well. The only problem I face is finding the needle in the haystack. Very few other patients in late stages of cancer seem to have successfully overcome their illness, and none that I know of had EB. Were those miracles or not? It is all within the realm of humanity, but the actual finding is what makes it a miracle, nailing that very small chance. The question is how many of these "miracles" one can have in a lifetime. I cannot expect as many as I wish.

When you hear that man is made in the image of God, you probably do not understand why. I see this in others, like my mom. She gave and continues to give her life for her children, slowly, every day. Her sacrifices are like the symbolic sacrifice of Jesus for the sins of humanity. We have that desire to sacrifice ourselves for the ones we love. I would be so happy if my life were so meaningful to at least one person out there, if I could have made a sacrifice for a higher purpose.

True love is about being there for the people you love, through joy or unhappiness, through sacrifices. When I hear that some lover committed suicide because his partner decided to leave him, it does not really point to how much he loved her. Unfortunately, having serious lifelong health problems, such thoughts did cross my mind in different circumstances. Ultimately, I believe we have only one life to live, and we must make the best of it. If you abandon the project, then you deny any chance of improvement. Once you stop you cannot go back. I always hoped things would not be as bad as I was told, or as I expected them to be. I was generally right. :-)

I mentioned in this article all the love and support I got from many people, which I really appreciate. I believe that this cumulated love is actually God's love. Ultimately, I feel like thanking him for all of it, simply because it is all so much.

I could be very bitter about my untimely demise, but I am not. I wonder why? My answer is that probably all the love, and God, have given me this peace. What more can I ask for when the end comes? Peace is the most important thing at the end. You could say it's my smartness and education, or whatever, that have brought me to this peace, but I do not believe that. You can have smart and educated people going ballistic as well. It is something more than education. Maybe the Christian brainwashing almost got me, lol. :-)

I want to recommend a movie on the topic of bodily and spiritual healing that is very well made, seemingly boring, but full of meaning: Lourdes (2009). Enjoy it.

A common question is: why do we suffer so much if there is a God who loves us so much? It is the wrong question to ask, I would say. If there is a God, he is not going to do things the way we imagine them in this reality. That is silly and limited. If God is love, then he must also be freedom. You cannot love someone without giving them the freedom of choice. Freedom means anything can happen, good or bad. Even if you know something bad can happen, you must allow your loved one the freedom to pick. You want the freedom from your parents to choose what is right or not. If you do not get this, then it becomes a different kind of relationship. There is no love in a dictatorship, even in a "good" dictatorship - where you pick whatever happens to your loved ones and they do not get any choice. It is the same with God's love: his love does not prevent us from suffering, from sickness, or from making bad choices for ourselves. We have that freedom.

It is good to pray or to meditate, to take your mind off the problems you have. You do not have to fully trust the deity you pray to. That comes in time. Also do not expect answers to your prayers as you want them. Things happen differently. I wanted the end of the suffering we go through. I am getting to that end now, but not exactly how I wanted.

As I wrote on Facebook in Romanian, in autumn 2014: you do not live until you get to "die". If you do not get to miss life for a while, you cannot really appreciate it.

I find the fear of death almost illogical. If you like your life, or life in general, death is part of the process of life. You must be prepared for it, and it will never be when you expect it.

I do not fear death at all. I always expected mine would be an early one. I am not happy to leave the things I like here: people, events, work, etc. I am also enthusiastic about technical progress. There is so much going on. I would like to see where we end up in 30 or more years. Silly, I know. :-)

The afterlife is another concept many are bothered by. There is no point in worrying about something that human language, psychology, intellect, etc. cannot even begin to grasp. As such, I am waiting to see the afterlife peacefully, if there is anything like that. The various religions and cultures try to define this concept, deities and more, each with their own qualities, but I feel they are just exercises in imagination, within human limitations.

I believe that faith starts with the courage to keep going in spite of all the disappointments, fears and failures you had or still have. How can you trust God if you fear death or you worry about tomorrow's big exam? You must face hard times with all the courage you can muster. That is faith.

It is silly how much time and energy people waste on anger and other problems, myself included. We cheat, lie, play games with each other, etc. We are mainly driven by fear. We think about what is next and we choose to avoid admitting our feelings about the problems bothering us. We tell half-truths, we hide. We just do not fully admit what bothers us to the people around us. Friendships, romantic relationships and families break up because of poor communication.

Being too honest also makes it easier for people to dislike and hurt you. It is a hard balance to keep between being yourself and being nice. I know I was mostly too direct and easily annoyed people. :-)

In the end, I believe you do not really regret being honest. Mainly you get disappointed by the things you do wrong, the inexplicable complications that stem from miscommunication.

We fear too much and we trust too little. When someone tells you their own feelings, the "best" option is to doubt them and make up your own version. That is really a recipe for disaster.

There is a song that captures this idea really nicely: Jem - Down to earth.

I am OK with all the people who did wrong by me, who are far fewer than those who did good. Those who do wrong only do it as a reflection or result of their fears, lack of trust and their own problems.

I think having serious problems of any kind makes us more like animals. I have seen this with myself. It is much harder to be nice, educated, and considerate under the stress of pains, failures and frustrations. Survival mode kicks in quite subtly actually.

We compete with and hurt others for little gains so often that we do not notice. We hear people say things and make assumptions. It is ridiculous how many times people assume I always like what Marius does, or how often people think "you can't do that, right?". I see this in the technical world as well: just send an email to a technical mailing list and you will see replies from people who have not entirely read your message, or who misinterpret what you wrote. Too many assumptions.

On technology

I will be abrupt here: no, the Internet did not fail. I read this article a long time ago and I still remember it. While I agree with the main points of the article, I consider my life an example of the amazingly positive impact of technology and the Internet. All of the miserable problems with the tech industry are minor compared to the improvements technology brings to the human quality of life. Without the Internet I could not have done what I did.

The latest example I have is with smartphones. I was a naysayer. I always used a PC, never a phone for more than 5 minutes. I also was not much of a mobile person, staying at home most of the time anyway. Last year in March the bank I used started requiring a token device for authentication, or a phone. I could not use the token, but they were kind enough to offer me a pretty good smartphone (thank you very much Anda Dărăban). That was the first phone I actually used. I used it in the hospital in Hungary, then in London and so on. My left arm was not usable for many months, and it is not usable now either due to the tumors and surgeries. I never expected that the technology in these otherwise addictive gadgets would be so accessible to me. I can type this article easily on my phone, and I do not mind not being able to use a keyboard anymore. How is this not a great achievement of all the work the tech industry has put in? It is all for commercial interests, I know, but we easily forget what these things enable us to do. Countless patients in hospitals and other people in difficult times have come to rely on technology that was not available a few years ago, and it makes all the difference in the world for them. I would have suffered a lot more in hospitals, back home now, and last year after the surgery, without being able to stay in touch with the people I know from home, from work and elsewhere.

This is also an example of how a simple gesture of help can make a huge impact. Neither I nor Anda expected that this phone would be so useful to me in such hard times.

Proud to be a Șucan

I cannot end this article without saying a few words about the people who did everything they could for me, my family.

Mom's sacrifices are endless and tireless. Marius and I, in silly attempts at being funny, call mom RoboCop for her tireless energy; she is unstoppable. We call her many things, including things she has to forgive us for, during stressful and angry times. She forgives, loves and moves on. She was and will always be our guardian angel.

I specifically want to recognize mom's ability to overcome her limitations. Born in a remote village that got electricity not very long ago, she learned to accept new things, traveled to places she never expected, and did things she believed were nearly impossible. She did and continues to do everything for us, out of love.

Almost everybody loves their mom, but if there were a kind of contest, I am certain my mom would be among the winners, simply because not everyone would be able to do what she did, objectively. Not every mom is equal to every other mom.

If there are saints and angels, mom would be one of them, or she is the closest to being one. She always does things for the benefit of others, never for herself. She takes the lesser half of a plate, for example, even when splitting with strangers. She even gives it all away. I wish I could be a quarter as kind as her. Then today I would not feel like I had not helped others much.

Dad, similarly, is a strong character who never ever gives up and fights for us, with his own qualities and personality.

My twin, Marius, has always made images representing his feelings, which is much more valuable than trying to be nice and fit into whatever contemporary art and style people like right now. His work and its impact have a value that will outlive mine, and that makes me happy.

Alex, my older brother, did everything he could to help us as well, more than he realizes. His education and level of technical expertise are epic, and his impact in robotics is going to be larger than he expects. I wish I could see where robotics will get to, and that will include a Șucan. :-) His success was always a model for me and Marius.

Proud of my parents and brothers. Thank you all.

I will end this by thanking everyone for their love and support. Thank you God.

PS. Now go touch the world a bit by making a donation to medical research, EB and/or Lupus research. Cheer yourself up! :-)

This article is also available in Romanian here.

Mozilla Release Management Team: Firefox 38 beta9 to rc1

For this 38 RC release, as usual, we only took fixes for the latest top crashes. We also uplifted some last-minute improvements to one of the new features (in this case, EME).

  • 20 changesets
  • 45 files changed
  • 463 insertions
  • 499 deletions



List of changesets:

Jacek Caban: Bug 1156131 - mingw cross compilation fixup. a=NPOTB - b9f3bdfbf395
Jean-Yves Avenard: Bug 1158568 - Fix potential size overflow. r=kentuckyfriedtakahe, a=abillings - 8a61f534f496
Edwin Flores: Bug 1159300 - Don't use decrypting Gecko Media Plugins for non-encrypted playback. r=cpearce, a=sledru - 28521384c589
Randell Jesup: Bug 1159300 - Add a clone of gmp-fake that doesn't do decryption. r=glandium, r=cpearce, a=sledru - d262c6789549
Jean-Yves Avenard: Bug 1148224 - Disable invalid tests. r=karlt, a=test-only - 03d9efe3dd1e
Ryan VanderMeulen: Bug 1146061 - Re-enable test_peerConnection_basicH264Video.html on Windows. a=test-only - a2843f37ba38
Boris Zbarsky: Bug 1154505 - Speed up test_bug346659.html by dropping the extra gcs, since the test harness now does a better job of disabling the popup blocker. r=smaug, a=test-only - 31452d32ba4d
Mark Hammond: Bug 1090633 - Fix some focus related oranges with chats. r=mixedpuppy, a=test-only - dda1fe153565
James Willcox: Bug 1159262 - Don't do EGL preloading hack on ICS and higher. r=jchen, a=sledru - e31ad7262160
Masatoshi Kimura: Bug 1145844 - Update fallback whitelist. r=keeler, a=sledru - a61af55e410d
Bob Owen: Bug 1158849 - Only enable Windows content sandbox on Nightly because of thumbnail process. r=glandium, a=sledru - 742d81505cd3
Robert Strong: Bug 1159826 - ensure_copy_recursive() leaks directory streams. r=spohl, a=sledru - 9edf93465d0d
Justin Dolske: Bug 1159814 - Change the Adobe CDM's homepage URL. r=gavin, a=sledru - db6a2986c24d
Chris Pearce: Bug 1159495 - Only report Adobe EME supported if required WMF codecs are installed. r=edwin, a=sledru - 60555feb4888
Chris Pearce: Bug 1159495 - Only report that Adobe EME is available if we have a plugin-container voucher. r=edwin, a=sledru - 6e95db92c8d4
Matt Woodrow: Bug 1155608 - Blacklist Intel G45 hardware decoding. r=k17e, a=sledru - 5f1ca8bf7e94
Bas Schouten: Bug 1116812 - Consider DXGI_ERROR_INVALID_CALL a recoverable error for IDXGISwapChain::GetBuffer. r=jrmuizel, a=sledru - a1efc72ea226
Robert Strong: Bug 1127481 - Run the updater from the install directory instead of copying it. r=spohl, a=abillings - dd9d5b512e0e
Steve Singer: Bug 1141642 - Fix disable-skia builds. r=jmuizelaar, a=sledru - 538fd67bb637
Ben Turner: Bug 1159967 - Handle logging after threads have shut down. r=janv, a=sylvestre - 257a4e9e8236

Niko Matsakis: Virtual Structs Part 1: Where Rust's enum shines

One priority for Rust after 1.0 is going to be incorporating some kind of support for “efficient inheritance” or “virtual structs”. In order to motivate and explain this design, I am writing a series of blog posts examining how Rust’s current abstractions compare with those found in other languages.

The way I see it, the topic of “virtual structs” has always had two somewhat orthogonal components to it. The first component is a question of how we can generalize and extend Rust enums to cover more scenarios. The second component is integrating virtual dispatch into this picture.

I am going to start the series by focusing on the question of extending enums. This first post will cover some of the strengths of the current Rust enum design; the next post, which I’ll publish later this week, will describe some of the advantages of a more “class-based” approach. Then I’ll discuss how we can bring those two worlds together. After that, I will turn to virtual dispatch, impls, and matching, and show how they interact.

The Rust enum

I don’t know about you, but when I work with C++, I find that the first thing that I miss is the Rust enum. Usually what happens is that I start out with some innocent-looking C++ enum, like ErrorCode:

enum ErrorCode {
    FileNotFound,
    UnexpectedChar
};

ErrorCode parse_file(String file_name);

As I evolve the code, I find that, in some error cases, I want to return some additional information. For example, when I return UnexpectedChar, maybe I want to indicate what character I saw, and what characters I expected. Because this data isn’t the same for all errors, now I’m kind of stuck. I can make a struct, but it has these extra fields that are only sometimes relevant, which is awkward:

struct Error {
    ErrorCode code;

    // only relevant if UnexpectedChar:
    Vector<char> expected; // possible expected characters
    char found;
};
This solution is annoying since I have to come up with values for all these fields, even when they’re not relevant. In this case, for example, I have to create an empty vector and so forth. And of course I have to make sure not to read those fields without checking what kind of error I have first. And it’s wasteful of memory to boot. (I could use a union, but that is kind of a mess of its own.) All in all, not very good.

One more structured solution is to go to a full-blown class hierarchy:

enum ErrorCode {
    FileNotFound,
    UnexpectedChar,
};

class Error {
public:
    Error(ErrorCode ec) : errorCode(ec) { }
    const ErrorCode errorCode;
};

class FileNotFoundError : public Error {
public:
    FileNotFoundError() : Error(FileNotFound) { }
};

class UnexpectedCharError : public Error {
public:
    UnexpectedCharError(char expected, char found)
      : Error(UnexpectedChar), expected(expected), found(found)
    { }

    const char expected;
    const char found;
};

In many ways, this is pretty nice, but there is a problem (besides the verbosity, I mean). I can’t just pass around Error instances by value, because the size of the Error will vary depending on what kind of error it is. So I need dynamic allocation. So I can change my parse_file routine to something like:

unique_ptr<Error> parse_file(...);

Of course, now I’ve wound up with a lot more code, and mandatory memory allocation, for something that doesn’t really seem all that complicated.

Rust to the rescue

Of course, Rust enums make this sort of thing easy. I can start out with a simple enum as before:

enum ErrorCode {
    FileNotFound,
    UnexpectedChar,
}

fn parse_file(file_name: String) -> ErrorCode { /* ... */ }

Then I can simply modify it so that the variants carry data:

enum ErrorCode {
    FileNotFound,
    UnexpectedChar { expected: Vec<char>, found: char },
}

fn parse_file(file_name: String) -> ErrorCode { /* ... */ }

And nothing really has to change. I only have to supply values for those fields when I construct an instance of UnexpectedChar, and I only read the values when I match a given error. But most importantly, I don’t have to do dummy allocations: the size of ErrorCode is automatically the size of the largest variant, so I get the benefits of a union in C but without the mess and risk.
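To make this concrete, here is a minimal sketch of constructing and matching a variant that carries data (using `Vec<char>` for the expected characters, to mirror the C++ `Vector<char>`; the `describe` helper is made up for illustration):

```rust
// Sketch of the error enum from the post, with one variant carrying data.
enum ErrorCode {
    FileNotFound,
    UnexpectedChar { expected: Vec<char>, found: char },
}

// The extra fields are only visible in the match arm that carries them,
// so there is no way to read them without checking the variant first.
fn describe(err: &ErrorCode) -> String {
    match *err {
        ErrorCode::FileNotFound => "file not found".to_string(),
        ErrorCode::UnexpectedChar { ref expected, found } => {
            format!("found {:?}, expected one of {:?}", found, expected)
        }
    }
}

fn main() {
    let err = ErrorCode::UnexpectedChar { expected: vec!['{', '['], found: 'x' };
    println!("{}", describe(&err));
}
```

Note that no allocation or pointer indirection is needed to pass `err` around; it is an ordinary by-value type.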

What makes Rust and C++ behave differently?

So why does this example work so much more smoothly with a Rust enum than a C++ class hierarchy? The most obvious difference is that Rust’s enum syntax allows us to compactly declare all the variants in one place, and of course we enjoy the benefits of match syntax. Such “creature comforts” are very nice, but that is not what I’m really talking about in this post. (Scala, for instance, offers great syntactic support for using “classes as variants”, but that doesn’t change the fundamental tradeoffs involved.)

To me, the key difference between Rust and C++ is the size of the ErrorCode types. In Rust, the size of an ErrorCode instance is equal to the maximum size of its variants, which means that we can pass errors around by value and know that we have enough space to store any kind of error. In contrast, when using classes in C++, the size of an ErrorCode instance will vary, depending on what specific variant it is. This is why I must pass around errors using a pointer, since I don’t know how much space I need up front. (Well, actually, C++ doesn’t require you to pass around values by pointer: but if you don’t, you wind up with object slicing, which can be a particularly surprising sort of error. In Rust, we have the notion of DST to address this problem.)
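That size relationship is easy to check with `std::mem::size_of` (a quick sketch; the exact numbers are implementation details, but every value of the enum has the same, statically known size):

```rust
use std::mem::size_of;

// Same sketch of the enum as above.
enum ErrorCode {
    FileNotFound,
    UnexpectedChar { expected: Vec<char>, found: char },
}

fn main() {
    // An ErrorCode is at least as big as its largest variant's payload
    // (plus possibly a discriminant), so any error fits in one value.
    assert!(size_of::<ErrorCode>() >= size_of::<Vec<char>>());
    println!("size of ErrorCode: {} bytes", size_of::<ErrorCode>());
}
```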

Rust really relies deeply on the flat, uniform layout for enums. For example, every time you make a nullable pointer like Option<&T>, you are taking advantage of the fact that options are laid out flat in memory, whether they are None or Some. (In Scala, for example, creating a Some variant requires allocating an object.)
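The nullable-pointer case can be observed directly; Rust represents `None` as the null pointer, so the Option adds no space over the bare reference (a minimal sketch):

```rust
use std::mem::size_of;

fn main() {
    // None is encoded as the null pointer, so Option<&T> is
    // exactly the size of the reference itself: no allocation,
    // no extra tag word.
    assert_eq!(size_of::<Option<&u32>>(), size_of::<&u32>());
    println!("both are {} bytes", size_of::<&u32>());
}
```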

Preview of the next few posts

OK, now that I’ve spent a lot of time telling you why enums are great and subclassing is terrible, my next post is going to tell you why I think subclassing is sometimes fantastic and enums kind of annoying.


I’m well aware I’m picking on C++ a bit unfairly. For example, perhaps instead of writing up my own little class hierarchy, I should be using boost::any or something like that. Because C++ is such an extensible language, you can definitely construct a class hierarchy that gives you similar advantages to what Rust enums offer. Heck, you could just write a carefully constructed wrapper around a C union to get what you want. But I’m really focused here on contrasting the kind of “core abstractions” that the language offers for handling variants with data, which in Rust’s case is (currently) enums, and in C++’s case is subtyping and classes.

Matjaž HorvatTerminology Search in Pontoon

New release of Pontoon is out the door. It’s mostly a bugfix release eliminating annoying glitches like broken contributor profile links. Thank you for your first contribution to Pontoon, Benoit! :-)

Some new features are also available, e.g. displaying warnings on unsaved translations as suggested by flod. And — Terminology Search is now also available as a standalone feature, making it easier to access. It works similarly to the Search tab in the out-of-context translation panel.

Translations are taken from:

Pascal FinetteEntrepreneurship for Executives

Earlier this week I had the great pleasure and honor to present a modified/expanded version of my "10 Lessons for Entrepreneurs" talk at Singularity University's Executive Program.

As our guests are a mixture of entrepreneurs and corporate executives, I tweaked the deck to talk more about how these lessons apply in a corporate context:

One of these days I will turn this into a book... :)

Byron Joneshappy bmo push day!

the following changes have been pushed to bugzilla.mozilla.org:

  • [1159166] When I ask for review from someone who is not accepting reviews, my red request count in the top left becomes their request count
  • [1159307] Can we add the Rank field to Product “Toolkit”
  • [1151745] add ui to minimise steps required to move bugs between products
  • [1157124] don’t report sql search errors to sentry
  • [1153100] add mozreview’s table to bug-modal
  • [1159282] form.dev-engagement-event: stop creating a second “discussion” bug

discuss these changes on mozilla.tools.bmo.

Filed under: bmo, mozilla

QMOQA Discourse category

We now have a category for QA in the mozilla-community discourse! Please take a moment to introduce yourself in the forum.

The following discussion categories are available:

  • Announcements
  • Introductions
  • Automation
  • Desktop Firefox
  • Mobile Firefox
  • Cloud Services
  • Web QA
  • FX OS

If you want to follow what is going on in QA, we also currently use these communication channels for announcements and discussion:


Mozilla Addons BlogDropping support for binary components in extensions

Starting with Firefox 40, scheduled to be released in August this year, binary XPCOM support for extensions will be dropped.

Binary XPCOM is an old and fairly unstable technology that a small number of add-on developers have used to integrate binary libraries into their add-ons, sometimes to tap into Firefox internals (hence the unstable part). Better technologies have become available to replace binary XPCOM and we have encouraged developers to switch to them. From the original post:

Extension authors that need to use native binaries are encouraged to do
so using the addon SDK “system/child_process” pipe mechanism:

If this is not sufficient, JS-ctypes may be an alternative mechanism to
use shared libraries, but this API is much more fragile and it’s easy to
write unsafe code.

Developers who rely on binary XPCOM should update their code as soon as possible to prevent compatibility issues. If you have any questions or comments about this move, please post them in the mozilla.dev.extensions newsgroup.

Air MozillaMozilla Weekly Project Meeting

Mozilla Weekly Project Meeting The Monday Project Meeting

Mozilla Science LabMozilla Science Lab Week in Review, April 27 – May 3

The Week in Review is our weekly roundup of what’s new in open science from the past week. If you have news or announcements you’d like passed on to the community, be sure to share on Twitter with @mozillascience and @billdoesphysics, or join our mailing list and get in touch there.

Awards & Grants

  • Applications for the PLOS Early Career Travel Award Program are now open; ten $500 awards are available to help early career researchers publishing in PLOS attend meetings and conferences to present their work.

Tools & Resources

Blogs & Papers

  • A study led by the Center for Open Science that attempted to replicate the findings of 100 journal articles in psychology has concluded, with data posted online; 39 of the articles investigated were reproduced, with substantial similarities found in several dozen more.
  • The Joint Research Centre of the European Commission has released an interim report on their ongoing work in ‘Analysis of emerging reputation mechanisms for scholars'; the report maps an ontology of research-related activities onto the reputation-building activities that attempt to capture them, and reviews the social networks that attempt to facilitate this construction of reputation on the web.
  • Alyssa Goodman et al published Ten Simple Rules for the Care and Feeding of Scientific Data in PLOS Computational Biology. In it, the authors touch not only on raw data, but the importance of permanent identifiers by which to identify it, and the context provided by publishing workflows in addition to code.
  • David Takeuchi wrote about his concerns that the American federal government’s proposed FIRST act will curtail funding for the social sciences, and place too much emphasis on perceived relevance at the expense of reproducibility.
  • The Georgia Tech Computational Linguistics Lab blogged about the results of a recent graduate seminar where students were set to reproducing the results of several papers in computational social science. The author makes several observations on the challenges faced, including the difficulties in reproducing results based on social network or other proprietary information, and on the surprising robustness of machine-learning driven analyses.
  • Cobi Smith examined both the current state and future importance of open government data in Australia.

Meetings & Conferences

Mike ConleyElectrolysis and the Big Tab Spinner of Doom

Have you been using Firefox Nightly and seen this big annoying spinner?

Big Tab Spinner of Doom in an e10s tab

Aw, crap. You again.

I hate that thing. I hate it.

Me, internally, when I see the spinner.

And while we’re working on making the spinner itself less ugly, I’d like to eliminate it, or at least reduce its presence to the absolute minimum.

How do I do that? Well, first, know your enemy.

What does it even mean?

That big spinner means that the graphics part of Gecko hasn’t given us a frame yet to paint for this browser tab. That means we have nothing yet to show for the tab you’ve selected.

In the single-process Firefox that we ship today, this graphics operation of preparing a frame is something that Firefox will block on, so the tab will just not switch until the frame is ready. In fact, I’m pretty sure the whole browser will become unresponsive until the frame is ready.

With Electrolysis / multi-process Firefox, things are a bit different. The main browser process tells the content process, “Hey, I want to show the content associated with the tab that the user just selected”, and the content process computes what should be shown, and when the frame is ready, the parent process hears about it and the switch is complete. During that waiting time, the rest of the browser is still responsive – we do not block on it.

So there’s this window of time where the tab switch has been requested, and when the frame is ready.

During that window of time, we keep showing the currently selected tab. If, however, 300ms passes, and we still haven’t gotten a frame to paint, that’s when we show the big spinner.

So that’s what the big spinner means – we waited 300ms, and we still have no frame to draw to the screen.
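The decision logic above — keep showing the old tab while waiting, but only for 300ms — is essentially a receive-with-timeout. Here is a toy Rust sketch of the idea (all names here are made up for illustration; Firefox’s actual implementation lives in its C++/JS front-end):

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Stand-in for "the compositor delivered a frame for the new tab".
struct Frame;

// Wait for the content process to hand us a frame, but only for 300ms;
// after that, give up and show the spinner instead.
fn switch_tab(frames: mpsc::Receiver<Frame>) -> &'static str {
    match frames.recv_timeout(Duration::from_millis(300)) {
        Ok(_frame) => "paint new tab",
        Err(_) => "show spinner",
    }
}

fn main() {
    let (tx, rx) = mpsc::channel();
    // Simulate a content process that responds quickly.
    thread::spawn(move || {
        thread::sleep(Duration::from_millis(10));
        let _ = tx.send(Frame);
    });
    println!("{}", switch_tab(rx));
}
```

The important property is that the browser’s main loop keeps running while waiting; only the tab content is stale during that window.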

How bad is it?

I suspect it varies. I see the spinner a lot less on my Windows machine than on my MacBook, so I suspect that performance is somehow worse on OS X than on Windows. But that’s purely subjective. We’ve recently landed some Telemetry probes to try to get a better sense of how often the spinner is showing up, and how laggy our tab switching really is. Hopefully we’ll get some useful data out of that, and as we work to improve tab switch times, we’ll see improvement in our Telemetry numbers as well.

Where is the badness coming from?

This is still unclear. And I don’t think it’s a single thing – many things might be causing this problem. Anything that blocks up the main thread of the content process, like slow JavaScript running on a web-site, can cause the spinner.

I also seem to see the spinner when I have “many” tabs open (~30), and have a build going on in the background (so my machine is under heavy load).

Maybe we’re just doing things inefficiently in the multi-process case. I recently landed profile markers for the Gecko Profiler for async tab switching, to help figure out what’s going on when I experience slow tab switch. Maybe there are optimizations we can make there.

One thing I’ve noticed is that there’s this function in the graphics layer, “ClientTiledLayerBuffer::ValidateTile”, that takes much, much longer in the content process than in the single-process case. I’ve filed a bug on that, and I’ll ask folks from the Graphics Team this week.

How you can help

If you’d like to help me find more potential causes, Profiles are very useful! Grab the Gecko Profiler add-on, make sure it’s enabled, and then dump a profile when you see the big spinner of doom. The interesting part will be between two markers, “AsyncTabSwitch:Start” and “AsyncTabSwitch:Finish”. There are also markers for when the parent process displays the spinner – “AsyncTabSwitch:SpinnerShown” and “AsyncTabSwitch:SpinnerHidden”. The interesting stuff, I believe, will be in the “Content” section of the profile between those markers. Here are more comprehensive instructions on using the Gecko Profiler add-on.

And here’s a video of me demonstrating how to use the profiler, and how to attach a profile to the bug where we’re working on improving tab switch times:

And here’s the link I refer you to in the video for getting the add-on.

So hopefully we’ll get some useful data, and we can drive instances of this spinner into the ground.

I’d really like that.

The Mozilla BlogPlease welcome Jascha Kaykas-Wolff, Chief Marketing Officer and Nick Nguyen, Vice President of Product Strategy

Today we’re excited to announce two new additions to the leadership team at Mozilla, one joining us for the first time today, and the other returning.

Jascha Kaykas-Wolff joins us this week as Mozilla’s new Chief Marketing Officer with responsibility for leading our global marketing strategy and organization.

Jascha’s background and deep experience in product positioning, marketing strategy, and brand building make him ideally positioned to lead a strategic marketing organization that continues to build Mozilla’s global brands and influence in a highly competitive marketplace.

Jascha was most recently at BitTorrent where he served as Chief Marketing Officer. Prior to joining BitTorrent, Jascha was Chief Marketing Officer for Mindjet, Senior Vice President of Marketing and Customer Success at Involver and led Global Marketing for Webtrends.

He will be based in the Bay Area, working out of our Mozilla Space in San Francisco and our headquarters in Mountain View.

Jascha’s bio & Mozillians profile
LinkedIn profile
High-res photo

Nick Nguyen returns to Mozilla today as a Vice President of Product Strategy after 4 years away. As a product strategist, he will be responsible for leading strategic product initiatives to advance our mission.

Prior to his return to Mozilla today, Nick served as Sr. Director of Mobile Products at Walmart Labs following the acquisition of Tasty Labs, a mobile and social startup he co-founded and where he served as Chief Operating Officer and Vice President of Products. At Walmart Labs, he was responsible for launching award-winning Android and iOS apps for Walmart, Sam’s Club and Asda. Prior to this he served as Director of Addons for Mozilla where he was responsible for Firefox ecosystem development and customization features. He has also held a variety of product leadership and software development roles at Yahoo!, Trilogy and Ford Motor Company.

He will be based in the Bay Area and will work primarily out of our headquarters in Mountain View, California.

Nick’s bio & Mozillians profile
LinkedIn profile
High-res photo

We’re thrilled that both Jascha and Nick are joining us in our relentless pursuit of the Mozilla mission, our strategy of building great products and empowering people, and the impact we aim to have on the world.

Welcome Jascha!  Welcome back Nick!


Gregory SzorcReporting Mercurial Issues

I semi-frequently stumble upon conversations in hallways and on irc.mozilla.org about issues people are having with Mercurial. These conversations periodically involve a legitimate bug with Mercurial. Unfortunately, they frequently end without an actionable result. Unless someone files a bug, pings me, etc, the complaints disappear into the ether. That's not good for anyone and only results in bugs living longer than they should.

There are posters around Mozilla offices that say if you see something, file something. This advice does not just apply to Mozilla projects!

If you encounter an issue in Mercurial, please take the time to report it somewhere meaningful. The Reporting Issues with Mercurial page from the Mercurial for Mozillians guide tells you how to do this.

It is OK to complain about something. But if you don't inform someone empowered to do something about it, you are part of the problem without being part of the solution. Please make the incremental effort to be part of the solution.

Laura HilligerOpen Web Leadership

Over the last couple of weeks, we’ve been talking about an organizing structure for future (and current) Teach Like Mozilla content and curriculum. This stream of curriculum is aimed at helping leaders gain the competencies and skills needed for teaching, organizing and sustaining learning for the web. We’ve been short-handing this work “Open Fluency” after I wrote a post about the initial thinking.

Last week, in our biweekly community call, we talked about the vision for our call. In brief, we want to:

“Work together to define leadership competencies and skills, as well as provide ideas and support to our various research initiatives.”

We decided to change the naming of this work to “Open Web Leadership”, with a caveat that we might find a better name sometime in the future. We discussed leadership in the Mozilla context and took some notes on what we view as “leadership” in our community. We talked about the types of leadership we’ve seen within the community, noted that we’ve seen all sorts, and, in particular, had a lengthy conversation about people confusing management with leadership.

We decided that as leaders in the Mozilla Community, we want to be collaborative, effective, and supported, and to be compassionate about people’s real-life situations. We want to inspire inquiry and exploration and ensure that our community can make independent decisions and take ownership. We want to be welcoming and encouraging, and we are especially interested in making sure that as leaders, we encourage new leaders to come forward, grow and participate.

I believe it was Greg who wrote in the call etherpad:

“Open Web Leaders engage in collaborative design while serving as a resource to others as we create supportive learning spaces that merge multiple networks, communities, and goals.”

Next, we discussed what people need to feel ownership and agency here in the Mozilla community. People expressed some love for the type of group work we’re doing with Open Web Leadership, pointing out that working in groups that make decisions together fuels their participation. It was pointed out that the chaos of the Mozilla universe should be a forcing function for creating on-boarding materials for getting involved, and that a good leader:

“Makes sure everyone “owns” the project”

There’s a lot in that statement. Giving ownership and agency to your fellow community members requires open and honest communication, not one time but constantly. No matter how much we SAY it, our actions (or lack of action) color how people view the work (as well as each other).

After talking about leadership, we added the progressive “ing” form to the verbs we’re using to designate each Open Web Leadership strand. I think this was a good approach as to me it signifies that understanding, modeling and uniting to TeachTheWeb are ongoing and participatory practices. Or, said another way, lifelong learning FTW! Our current strands are:

  • Understanding Participatory Learning (what you need to know)
  • Modeling Processes and Content (how you wield what you know)
  • Uniting Locally and Globally (why you wield what you know)

We established a need for short, one line descriptors on each strand, and decided that the competency “Open Thinking” is actually a part of “Open Practices”. We’ll refine and further develop this in future calls!

As always, you’re invited to participate. There are tons of thought provoking Github issues you can dive into (coding skills NOT required), and your feedback, advice, ideas and criticisms are all welcome.

Gregory SzorcMercurial 3.4 Released

Mercurial 3.4 was released on May 1 (following Mercurial's time-based schedule of releasing a new version every 3 months).

3.4 is a significant release for a few reasons.

First, the next version of the wire protocol (bundle2) has been marked as non-experimental on servers. This version of the protocol paves over a number of deficiencies in the classic protocol. I won't go into low-level details. But I will say that the protocol enables some rich end-user experiences, such as having the server hand out URLs for pre-generated bundles (e.g. offload clones to S3), atomic push operations, and advanced workflows, such as having the server rebase automatically on push. Of course, you'll need a server running 3.4 to realize the benefits of the new protocol. hg.mozilla.org won't be updated until at least June 1.

Second, Mercurial 3.4 contains improvements to the tags cache to make performance concerns a thing of the past. Due to the structure of the Firefox repositories, the previous implementation of the tags cache could result in pauses of dozens of seconds during certain workflows. The problem should go away with Mercurial 3.4. Please note that on first use of Mercurial 3.4, your repository may perform a one-time upgrade of the tags cache. This will spin a full CPU core and will take up to a few minutes to complete on Firefox repos. Let it run to completion and performance should not be an issue again. I wrote the patches to change the tags cache (with lots of help from Pierre-Yves David, a Mercurial core contributor). So if you find anything wrong, I'm the one to complain to.

Third, the HTTP interface to Mercurial (hgweb) now has JSON output for nearly every endpoint. The implementation isn't yet complete, but it is better than nothing. But, it should be good enough for services to start consuming it. Again, this won't be available on hg.mozilla.org until the server is upgraded on June 1 at the earliest. This is a feature I added to core Mercurial. If you have feature requests, send them my way.

Fourth, a number of performance regressions introduced in Mercurial 3.3 were addressed. These performance issues frequently manifested during hg blame operations. Many Mozillians noticed them on hg.mozilla.org when looking at blame through the web interface.

For a more comprehensive list of changes, see my post about the 3.4 RC and the official release notes.

3.4 was a significant release. There are compelling reasons to upgrade. That being said, there were a lot of changes in 3.4. If you want to wait until 3.4.1 is released (scheduled for June 1) so you don't run into any regressions, nobody can fault you for that.

If you want to upgrade, I recommend reading the Mercurial for Mozillians Installation Page.

Daniel StenbergHTTP/2 in curl, status update

I’m right now working on adding proper multiplexing to libcurl’s HTTP/2 code. So far we’ve only done a single stream per connection and while that works fine and is HTTP/2, applications will still want more when switching to HTTP/2, as the multiplexing part is one of the key components and selling features of the new protocol version.

Pipelining means multiplexed

As a starting point, I’m using the “enable HTTP pipelining” switch to tell libcurl it should consider multiplexing. It makes libcurl work as before by default. If you use the multi interface and enable pipelining, libcurl will try to re-use established connections and just add streams over them rather than creating new connections. Yes, this means that A) you need to use the multi interface to get the full HTTP/2 stuff and B) the curl tool won’t be able to take advantage of it since it doesn’t use the multi interface! (An old outstanding idea is to move the tool to use the multi interface, and this would be yet another reason why that could be a good idea.)

We still have some decisions to make about how we want libcurl to act by default – especially when we can expect applications to use both HTTP/1.1 and HTTP/2 at the same time. Since we don’t know whether the server supports HTTP/2 until a certain point in the negotiation, we need to decide what to do when we issue N transfers at once to the same server that might speak HTTP/2… Right now, we get the best HTTP/2 behavior by telling libcurl we only want one connection per host, but that is probably not ideal for an application that might use a mix of HTTP/1.1 and HTTP/2 servers.

Downsides with abusing pipelining

There are some drawbacks with using that pipelining switch to allow multiplexing since users may very well want HTTP/2 multiplexing but not HTTP/1.1 pipelining since the latter is just riddled with interop problems.

Also, re-using the same options for limited connections to host names etc for both HTTP/1.1 and HTTP/2 may not at all be what real-world applications want or need.

One easy handle, one stream

libcurl API wise, each HTTP/2 stream is its own easy handle. It keeps things simple and keeps the API paradigm very much in line with the way it works for all the other protocols. It comes very naturally to the libcurl application author. If you set up three easy handles, all identifying a resource on the same server, and you tell libcurl to use HTTP/2, it makes perfect sense that all three transfers are made using a single connection.

Multiplexing means that when reading from the socket, data arrives that belongs to other streams than just a single one, so we need to feed the received data into the different “data buckets” for the involved streams. It gives us a little internal challenge: we get easy handles with no socket activity to trigger a read, but there is data to take care of in the incoming buffer. I’ve solved this so far with a special trigger that says there is data to take care of, so that a read is made anyway, which then picks the data up from the buffer.
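The “data buckets” idea — routing whatever arrives on the shared socket to the buffer of the stream it belongs to, so each handle can drain its own data later — can be sketched like this (in Rust purely for brevity; libcurl itself is C, and every name here is made up):

```rust
use std::collections::HashMap;

// One buffered "bucket" per HTTP/2 stream id.
struct Demux {
    buckets: HashMap<u32, Vec<u8>>,
}

impl Demux {
    fn new() -> Demux {
        Demux { buckets: HashMap::new() }
    }

    // A read on the shared socket may deliver data for any stream;
    // append it to that stream's bucket.
    fn deliver(&mut self, stream_id: u32, data: &[u8]) {
        self.buckets
            .entry(stream_id)
            .or_insert_with(Vec::new)
            .extend_from_slice(data);
    }

    // An easy handle drains only its own stream's bucket, even if
    // the socket itself shows no new activity at that moment.
    fn drain(&mut self, stream_id: u32) -> Vec<u8> {
        self.buckets.remove(&stream_id).unwrap_or_default()
    }
}

fn main() {
    let mut demux = Demux::new();
    demux.deliver(1, b"hello ");
    demux.deliver(3, b"other stream");
    demux.deliver(1, b"world");
    assert_eq!(demux.drain(1), b"hello world".to_vec());
}
```

The “special trigger” in libcurl corresponds to noticing that a bucket is non-empty and forcing a read even without socket activity.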

Server push

HTTP/2 supports server push. That’s a stream that gets initiated from the server side without the client specifically asking for it: a resource the server deems the client is likely to want, since it asked for a related one. My idea is to support server push by having the application set up a transfer with an easy handle and associated options, but the URL would only identify the server, so that libcurl knows on which connection it would accept a push, and we will introduce a new option telling libcurl that this is an easy handle that should be used for the next server-pushed stream on this connection.

Of course there are a few outstanding issues with this idea. Possibly we should allow an easy handle to be created when a new stream shows up, so that we can better deal with a dynamic number of new streams being pushed.

It’d be great to hear from users who have ideas on how to use server push in a real-world application and how you’d imagine it could be used with libcurl.

Work in progress code

My work in progress code for this drive can be found in two places.

First, I do the libcurl multiplexing development in the separate http2-multiplex branch in the regular curl repo:


Then, I put all my test setup and test client work in a separate repository just in case you want to keep up and reproduce my testing and experiments:



All comments, questions, praise or complaints you may have on this are best sent to the curl-library mailing list. If you are planning on writing an HTTP/2-capable application, or otherwise have thoughts or ideas about the API for this, please join in and tell me what you think. It is much better to get the discussions going early and work on different design ideas now, before anything is set in stone, than to wait for us to ship something semi-stable: the closer we get to an actual release, the harder it’ll be to change the API.

Not quite working yet

As I write this, I’m repeatedly doing 99 parallel HTTP/2 streams with no data corruption… But there’s a lot more to be done before I’ll call it a victory.

This Week In RustThis Week in Rust 80

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Send me an email! Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors or omissions in this week's issue, please submit a PR.

What's cooking on master?

115 pull requests were merged in the last week, and 2 RFC PRs.

Now you can follow breaking changes as they happen!

Breaking Changes

Other Changes

New Contributors

  • Brendan Graetz
  • Carol (Nichols || Goulding)
  • critiqjo
  • Dominic van Berkel
  • Hech
  • Jan Bujak
  • J Bailey
  • jooert
  • Jordan Humphreys
  • Poga Po
  • sinkuu
  • Xuefeng Wu

Approved RFCs

New RFCs


The current beta is rustc 1.0.0-beta.4 (850151a75 2015-04-30).

There were 2 PRs this week landing backports to beta.

Notable Links

Project Updates

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

Quote of the Week

"Ultimately, I think this all boils down to the fact that borrowck only cares about reachable values. A leaked value isn't reachable, therefore it doesn't matter that it had a lifetime associated with it and technically outlives that lifetime, since it's not reachable no undefined behavior can be invoked."

Insight from kballard on the safety of linking.

Thanks to Gankro for the tip. Submit your quotes for next week!

Andy McKayTFSA

In the budget the Conservatives increased the TFSA in allowance from $5,500 to $10,000. This was claimed to be:

11 million Cdns use Tax-Free Savings Accounts. #Budget2015 will increase limit from $5500 to $10000. Help for low and middle income Cdns.

Wai Young MP

"Low income" really? According to Revenue Canada, we can see that most people are not maxing out their TFSA room. In fact, since 2009, the amount of unused contribution room has been growing each year.

Year   Average unused contribution   Change
2009   $1,156.29
2010   $3,817.25                     +$2,660.96
2011   $6,692.37                     +$2,875.12
2012   $9,969.19                     +$3,276.83

People are having trouble keeping up with TFSA contributions as it is. But what counts as low income? That depends on how you define it; there are a few ways.

LICO is an income threshold below which a family will likely devote a larger share of its income to the necessities of food, shelter and clothing than an average family would


And "Thus for 2011, the 1992 based after-tax LICO for a family of four living in a community with a population between 30,000 and 99,999 is $30,487, expressed in current dollars.". That is after tax.

Of that income, by definition, over 50% goes to food, shelter and clothing, leaving at most $15,243. And these are all averages; many people will be way, way worse off.

Is a $10,000 TFSA limit reasonable for families with less than $15,243 a year left over? No. It benefits people with more money and the ability to save. Further, the actual amount of money going into TFSAs has been dropping every year since their creation, while the unused contribution room has been growing.

There isn't really a good justification for increasing the TFSA except as a way of helping the rich just before the election.

Mike Conley: Things I’ve Learned This Week (April 27 – May 1, 2015)

Another short one this week.

You can pass DOM Promises back through XPIDL

XPIDL is what we use to define XPCOM interfaces in Gecko. I think we’re trying to avoid XPCOM where we can, but sometimes you have to work with pre-existing XPCOM interfaces, and, well, you’re just stuck using it unless you want to rewrite what you’re working on.

What I’m working on lately is nsIProfiler, which is the interface to “SPS”, AKA the Gecko Profiler. nsIProfiler allows me to turn profiling on and off with various features, and then retrieve those profiles to send to a file, or to Cleopatra1.

What I’ve been working on recently is Bug 1116188 – [e10s] Stop using sync messages for Gecko profiler, which will probably have me adding new methods to nsIProfiler for async retrieval of profiles.

In the past, doing async stuff through XPCOM / XPIDL has meant using (or defining a new) callback interface which can be passed as an argument to the async method.

I was just about to go down that road, when ehsan (or was it jrmuizel? One of them, anyhow) suggested that I just pass a DOM Promise back.

I find that Promises are excellent. I really like them, and if I could pass a Promise back, that’d be incredible. But I had no idea how to do it.

It turns out that if I can ensure that the async methods are called such that there is a JS context on the stack, I can generate a DOM Promise, and pass it back to the caller as an “nsISupports”. According to ehsan, XPConnect will do the necessary magic so that the caller, upon receiving the return value, doesn’t just get this opaque nsISupports thing, but an actual DOM Promise. This is because, I believe, that DOM Promise is something that is defined via WebIDL. I think. I can’t say I fully understand the mechanics of XPConnect2, but this all sounded wonderful.

I even found an example in our new Service Worker code:

From dom/workers/ServiceWorkerManager.cpp (I’ve edited the method to highlight the Promise stuff):

// If we return an error code here, the ServiceWorkerContainer will
// automatically reject the Promise.
NS_IMETHODIMP
ServiceWorkerManager::Register(nsIDOMWindow* aWindow,
                               nsIURI* aScopeURI,
                               nsIURI* aScriptURI,
                               nsISupports** aPromise)
{
  // XXXnsm Don't allow chrome callers for now, we don't support chrome
  // ServiceWorkers.

  nsCOMPtr<nsPIDOMWindow> window = do_QueryInterface(aWindow);

  // ...

  nsCOMPtr<nsIGlobalObject> sgo = do_QueryInterface(window);
  ErrorResult result;
  nsRefPtr<Promise> promise = Promise::Create(sgo, result);
  if (result.Failed()) {
    return result.StealNSResult();
  }

  // ...

  nsRefPtr<ServiceWorkerResolveWindowPromiseOnUpdateCallback> cb =
    new ServiceWorkerResolveWindowPromiseOnUpdateCallback(window, promise);

  nsRefPtr<ServiceWorkerRegisterJob> job =
    new ServiceWorkerRegisterJob(queue, cleanedScope, spec, cb, documentPrincipal);

  // Hand the Promise back to the caller through the nsISupports outparam.
  promise.forget(aPromise);
  return NS_OK;
}

Notice that the outparam aPromise is an nsISupports**, and yet, I do believe the caller will end up handling a DOM Promise. Wicked!

  1. Cleopatra is the web application that can be used to browse a profile retrieved via nsIProfiler 

  2. Like being able to read the black speech of Mordor, there are few who can. 

Mike Conley: The Joy of Coding (Ep. 12): Making “Save Page As” Work

After giving some updates on the last bug we were working on together, I started a new bug: Bug 1128050 – [e10s] Save page as… doesn’t always load from cache. The problem here is that if the user were to reach a page via a POST request, attempting to save that page from the Save Page item in the menu would result in silent failure1.

Luckily, the last bug we were working on was related to this – we had a lot of context about cache keys swapped in already.

The other important thing to realize is that fixing this bug is a bandage fix, or a wallpaper fix. I don’t think those are official terms, but it’s what I use. Basically, we’re fixing a thing with the minimum required effort because something else is going to fix it properly down the line. So we just need to do what we can to get the feature to limp along until such time as the proper fix lands.

My proposed solution was to serialize an nsISHEntry on the content process side, deserialize it on the parent side, and pass it off to nsIWebBrowserPersist.

So did it work? Watch the episode and find out!

I also want to briefly apologize for some construction noise during the video – I think it occurs somewhere halfway through minute 20 of the video. It doesn’t last long, I promise!

Episode Agenda


Bug 1128050 – [e10s] Save page as… doesn’t always load from cache – Notes

  1. Well, it’d show something in the Browser Console, but for a typical user, I think that’s still a silent failure. 

Mozilla Release Management Team: Firefox 38 beta8 to beta9

In this beta, 16 changesets are test-only or NPOTB (not part of the build) changes. Besides those patches, we took graphics fixes, stability improvements and polish fixes.

  • 38 changesets
  • 87 files changed
  • 713 insertions
  • 287 deletions



List of changesets:

Ryan VanderMeulen: Bug 1062496 - Disable browser_aboutHome.js on OSX 10.6 debug. a=test-only - 657cfe2d4078
Ryan VanderMeulen: Bug 1148224 - Skip timeout-prone subtests in mediasource-duration.html on Windows. a=test-only - 82de02ddde1b
Ehsan Akhgari: Bug 1095517 - Increase the timeout of browser_identity_UI.js. a=test-only - 611ca5bd91d4
Ehsan Akhgari: Bug 1079617 - Increase the timeout of browser_test_new_window_from_content.js. a=test-only - 1783df5849c7
Eric Rahm: Bug 1140537 - Sanity check size calculations. r=peterv, a=abillings - a7d6b32a504c
Hiroyuki Ikezoe: Bug 1157985 - Use getEntriesByName to search by name attribute. r=qdot, a=test-only - 55b58d5184ce
Morris Tseng: Bug 1120592 - Create iframe directly instead of using setTimeout. r=kanru, a=test-only - a4f506639153
Gregory Szorc: Bug 1128586 - Properly look for Mercurial version. r=RyanVM, a=NPOTB - 49abfe1a8ef8
Gregory Szorc: Bug 1128586 - Prefer hg.exe over hg. r=RyanVM, a=NPOTB - a0b48af4bb54
Shane Tomlinson: Bug 1146724 - Use a SendingContext for WebChannels. r=MattN, r=markh, a=abillings - 56d740d0769f
Brian Hackett: Bug 1138740 - Notify Ion when changing a typed array's data pointer due to making a lazy buffer for it. r=sfink, a=sledru - e1fb2a5ab48d
Seth Fowler: Bug 1151309 - Part 1: Block until the previous multipart frame is decoded before processing another. r=tn, a=sledru - 046c97d2eb23
Seth Fowler: Bug 1151309 - Part 2: Hide errors in multipart image parts both visually and internally. r=tn, a=sledru - 0fcbbecc843d
Alessio Placitelli: Bug 1154518 - Make sure extended data gathering (Telemetry) is disabled when FHR is disabled. r=Gijs, a=sledru - cb2725c612b2
Bas Schouten: Bug 1151821 - Make globalCompositeOperator work correctly when a complex clip is pushed. r=jrmuizel, a=sledru - 987c18b686eb
Bas Schouten: Bug 1151821 - Test whether simple canvas globalCompositeOperators work when a clip is set. r=jrmuizel, a=sledru - 1bbb50c6a494
Bob Owen: Bug 1087565 - Verify the child process with a secret hello on Windows. r=dvander, a=sledru - c1f04200ed98
Randell Jesup: Bug 1157766 - Mismatched DataChannel initial channel size in JSEP database breaks adding channels. r=bwc, a=sledru - a8fb9422ff13
David Major: Bug 1130061 - Block version 1.5 of vwcsource.ax. r=bsmedberg, a=sledru - 053da808c6d9
Martin Thomson: Bug 1158343 - Temporarily enable TLS_RSA_WITH_AES_128_CBC_SHA for WebRTC. r=ekr, a=sledru - d10817faa571
Margaret Leibovic: Bug 1155083 - Properly hide reader view tablet on landscape tablets. r=bnicholson, a=sledru - f7170ad49667
Steve Fink: Bug 1136309 - Rename the spidermonkey build variants. r=terrence, a=test-only - 604326355be0
Mike Hommey: Bug 1142908 - Avoid arm simulator builds being considered cross-compiled. r=sfink, a=test-only - 517741a918b0
Jan de Mooij: Bug 1146520 - Fix some minor autospider issues on OS X. r=sfink, a=test-only - 620cae899342
Steve Fink: Bug 1146520 - Do not treat osx arm-sim as a cross-compile. a=test-only - a5013ed3d1f0
Steve Fink: Bug 1135399 - Timeout shell builds. r=catlee, a=test-only - b6bf89c748b7
Steve Fink: Bug 1150347 - Fix autospider.sh --dep flag name. r=philor, a=test-only - b8f7eabd31b9
Steve Fink: Bug 1149476 - Lengthen timeout because we are hitting it with SM(cgc). r=me (also jonco for a more complex version), a=test-only - 16c98999de0b
Chris Pearce: Bug 1136360 - Backout 3920b67e97a3 to fix A/V sync regressions (Bug 1148299 & Bug 1157886). r=backout a=sledru - 4ea8cdc621e8
Patrick Brosset: Bug 1153463 - Intermittent browser_animation_setting_currentTime_works_and_pauses.js. r=miker, a=test-only - c31c2a198a71
Andrew McCreight: Bug 1062479 - Use static strings for WeakReference type names. r=ehsan, a=sledru - 5d903629f9bd
Michael Comella: Bug 1152314 - Duplicate action bar configuration in code. r=liuche, a=sledru - cdfd06d73d17
Ethan Hugg: Bug 1158627 - WebRTC return error if GetEmptyFrame returns null. r=jesup, a=sledru - f1cd36f7e0e1
Jeff Muizelaar: Bug 1154703 - Avoid using WARP if nvdxgiwrapper.dll is around. a=sledru - 348c2ae68d50
Shu-yu Guo: Bug 1155474 - Consider the input to MThrowUninitializedLexical implicitly used. r=Waldo, a=sledru - daaa2c27b89f
Jean-Yves Avenard: Bug 1149605 - Avoid potential integers overflow. r=kentuckyfriedtakahe, a=abillings - fcfec0caa7be
Ryan VanderMeulen: Backed out changeset daaa2c27b89f (Bug 1155474) for bustage. - 0a1accb16d39
Shu-yu Guo: Bug 1155474 - Consider the input to MThrowUninitializedLexical implicitly used. r=Waldo, a=sledru - ff65ba4cd38a

Christian Heilmann: Start of my very busy May speaking tour and lots of //build videos to watch

I am currently in the Heathrow airport lounge on the first leg of my May presenting tour. Here is what lies ahead for me (with various interchanges in other countries in between to get from one to the other):

  • 02-07/05/2015 – Mountain View, California for Spartan Summit (Microsoft Edge now)
  • 09/05/2015 – Tirana, Albania for Oscal (opening keynote)
  • 11/05/2015 – Düsseldorf, Germany for Beyond Tellerand
  • 13-14/05/2015 – Verona, Italy – JSDay (opening keynote)
  • 15/05/2015 – Thessaloniki, Greece – DevIt (opening keynote)
  • 18/05/2015 – Amsterdam, The Netherlands – PhoneGap Day (MC)
  • 27/05/2015 – Copenhagen, Denmark – At The Frontend
  • 29/05/2015 – Prague, Czech Republic – J and Beyond

All packed and ready to go

I will very likely be too busy to answer a lot of requests this month, and if you meet me, I might be disheveled and unkempt – I never have more than a day in a hotel. The good news is that I have written 3 of these talks so far.

To while away the time on planes with my laptop offline, I just downloaded lots of videos from //build to watch (you can do that on each of these pages, just do the save-as), so I am up to speed with that. Here’s my list, in case you want to do the same:

Morgan Phillips: To Serve Developers

The neatest thing about release engineering, is the fact that our pipeline forms the primary bridge between users and developers. On one end, we maintain the CI infrastructure that engineers rely on for thorough testing of their code, and, on the other end, we build stable releases and expose them for the public to download. Being in this position means that we have the opportunity to impact the experiences of both contributors and users by improving our systems (it also makes working on them a lot of fun).

Lately, I've become very interested in improving the developer experience by bringing our CI infrastructure closer to contributors. In short, I would like developers to have access to the same environments that we use to test/build their code. This will make it:
  • easier to run tests locally
  • easier to set up a dev environment
  • easier to reproduce bugs (especially environment dependent bugs)

[The release pipeline from 50,000ft]


The first part of my plan revolves around integrating release engineering's CI system with a tool that developers are already using: mach. I'm starting with a utility called mozbootstrap: a tool that detects its host operating system and invokes a package manager to install all of the libraries needed to build Firefox desktop or Firefox for Android.

The first step here was to make it possible to automate the bootstrapping process (see bug 1151834, "allow users to bootstrap without any interactive prompts"), and then integrate it into the standing up of our own systems. Luckily, at the moment I'm also porting some of our Linux builds from buildbot to TaskCluster (see bug 1135206), which necessitates scrapping our old chroot-based build environments in favor of docker containers. This fresh start has given me the opportunity to begin this transition painlessly.

This simple change alone strengthens the interface between RelEng and developers, because now we'll be using the same packages (on a given platform). It also means that our team will be actively maintaining a tool used by contributors. I think it's a huge step in the right direction!

What platforms/distributions are you supporting?

Right now, I'm only focusing on Linux, though in the future I expect to support OSX as well. The bootstrap utility supports several distributions (Debian/Ubuntu/CentOS/Arch), though, I've been trying to base all of release engineering's new docker containers on Ubuntu 14.04 -- as such, I'd consider this our canonical distribution. Our old builders were based on CentOS, so it would have been slightly easier to go with that platform, but I'd rather support the platform that the majority of our contributors are using.

What about developers who don't use Ubuntu 14.04, and/or have a bizarre environment?

One fabulous side effect of using TaskCluster is that we're forced to create docker containers for running our jobs, in fact, they even live in mozilla-central. That being the case, I've started a conversation around integrating our docker containers into mozbootstrap, giving it the option to pull down a releng docker container in lieu of bootstrapping a host system.

On my own machine, I've been mounting my src directory inside of a builder and running ./mach build, then ./mach run within it. All of the source, object files, and executables live on my host machine, but the actual building takes place in a black box. This is a very tidy development workflow that's easy to replicate and automate with a few bash functions [which releng should also write/support].

[A simulation of how I'd like to see developers interacting with our docker containers.]

Lastly, as the final nail in the coffin of hard to reproduce CI bugs, I'd like to make it possible for developers to run our TaskCluster based test/build jobs on their local machines. Either from mach, or a new utility that lives in /testing.

If you'd like to follow my progress toward creating this brave new world -- or heckle me in bugzilla comments -- check out these tickets:

Christopher Arnold: Calling Android users: Help Mozilla Map the World!

Many iPhone users may have wondered why Apple prompts them with a message saying “Location accuracy is improved when Wi-Fi is turned on” each time they choose to turn Wi-Fi off.  Why does a phone that has GPS (Global Positioning System) capability need to use Wi-Fi to determine its location?

The reason is fairly simple.  There are of course thousands of radio signals traveling through the walls of buildings all around us.  What makes the Wi-Fi frequency (or even Bluetooth) particularly useful for location mapping is that the signal travels a relatively short distance before it decays, because Wi-Fi transmissions are so low energy.  A combination of three or more Wi-Fi signals can be used by a phone in a very small area to triangulate locations on a map, in the same manner that earthquake shockwave strengths can be used to triangulate epicenters.  Wi-Fi hubs don't need to transmit their locations to be useful; most are oblivious to their own location.  It is the phone's interpretation of their signal strengths and inferred locations that creates the value for the phone's internal mapping capabilities.  No data that travels over Wi-Fi is relevant to using the radio for triangulation; it is merely the signal strength or weakness that makes it useful.  (Most Wi-Fi hubs are password protected, and the data sent over them is encrypted.)
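The geometric idea can be sketched in a few lines. This is a toy illustration, not the actual Mozilla Location Service algorithm: the access point positions and distances are made up, and real systems would first estimate distances from signal strength via a path-loss model.

```python
import math

def trilaterate(p1, d1, p2, d2, p3, d3):
    """Estimate (x, y) from three anchor points and distances to them.

    Subtracting the first circle equation from the other two turns the
    quadratic system into a 2x2 linear one, solved here by Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1  # zero when the three anchors are collinear
    if det == 0:
        raise ValueError("anchors must not be collinear")
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x, y

# Three hypothetical access points and distances to a device at (3, 4).
pos = trilaterate((0, 0), 5.0,
                  (10, 0), math.sqrt(65),
                  (0, 10), math.sqrt(45))
# pos ≈ (3.0, 4.0)
```

With more than three signals, a least-squares fit over all pairs would smooth out noisy distance estimates.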

Being able to let phone users determine their own location is of keen interest to developers, who can’t make location-based services work without fairly precise location determinations.  The developers don't want to track the users per se; they want the users to be able to self-determine location when they request a service at a precise point in space (say, requesting a Lyft ride or checking in at a local eatery).  There is a broad range of businesses that try to help phones accurately orient themselves on maps.  The data that each application developer uses may differ across a range of phones: Android, Windows and iPhones all have different data sources for this, which can make consistency of app experience frustrating for many users, even when they’re all using the same basic application.

At Mozilla, we think the best way to solve this problem is to create an open source solution.  We are app developers ourselves, and we want our users to have a consistent quality of experience, along with all the websites that our users access using our browsers and phones.  If we make location data accessible to developers, we should be able to help Internet users navigate their world more consistently.  By doing it in an open source way, dozens of phone vendors and app developers can use this open data source without the cumbersome and expensive contracts that are sometimes imposed by location service vendors.  And, as Mozilla, we do this in a way that empowers users to make a personal choice as to whether they wish to contribute data or not.

How can I help?  There are two ways Firefox users can get involved.  (Several ways that developers can help.)  We have two applications for Android that have the capability to “stumble” Wi-Fi locations.

The first app is called “Mozilla Stumbler” and is available for free download in the Google Play store. (https://play.google.com/store/apps/details?id=org.mozilla.mozstumbler)  By opening MozStumbler and letting it collect radio frequencies around you, you are able to help the location database register those frequencies so that future users can determine their location.  None of the data your Android phone contributes can be specifically tied to you.  It’s collecting the ambient radio signals just for the purpose of determining map accuracy.  To make it fun to use MozStumbler, we have also created a leaderboard for users to keep track of their contributions to the database. 

The second app is our Firefox mobile browser, which runs on the Android operating system.  (If it becomes possible to stumble on other operating systems, I’ll post an update to this blog.)  You need to take a couple of steps to enable background stumbling in your Firefox browser.  Specifically, you have to opt in to sharing location data with Mozilla.  To do this, first download Firefox on your Android device.  On first run you should get a prompt about what data you want to share with Mozilla.  If you bypassed that step, or installed Firefox a long time ago, here’s how to find the setting:

1) Click on the three dots at the right side of the Firefox browser chrome then select "Settings" (Above image)

2) Select Mozilla (Right image)

3) Check the box that says “Help Mozilla map the world! Share approximate Wi-Fi and cellular location of your device to improve our geolocation services.” (Below image)

If you ever want to change your settings, you can return to the settings of Firefox, or you can view your Android device's main settings menu on this path: Settings>Personal>Location which is the same place where you can see all the applications you've previously granted access to look up your physical location.

The benefit of the data contributed is manifold:
1) Firefox users on PCs (which do not have GPS sensors) will be able to determine their positions based on the Wi-Fi hotspots around them rather than having to continually type in specific location requests. 
2) Apps on Firefox Operating System and websites that load in Firefox that use location services will perform more accurately and rapidly over time.
3) Other developers who want to build mobile applications and browsers will be able to have affordable access to location service tools.  So your contribution will foster the open source developer community.

And in addition to the benefits above, my colleague Robert Kaiser points out that even devices with GPS chips can benefit from getting Wi-Fi validation in the following way:
"1) When trying to get a location via GPS, it takes some time until the chip actually has seen signals from enough satellites to determine a location ("get a fix"). Scanning the visible wi-fi signals is faster than that, so getting an initial location is faster that way (and who wants to wait even half a minute until the phone can even start the search for the nearest restaurant or cafe?).
2) The location from this wifi triangulation can be fed into the GPS system, which enables it to know which satellites it roughly should expect to see and therefore get a fix on those sooner (Firefox OS at least is doing that).
3) In cities or buildings, signals from GPS satellites get reflected or absorbed by walls, often making the GPS position inaccurate or not being able to get a fix at all - while you might still see enough wi-fi signals to determine a position."

Thank you for helping improve Mozilla Location Services.

If you'd like to read more about Mozilla Location Services please visit:
To see how well our map currently covers your region, visit:
If you are a developer, you can also integrate our open source code directly into your own app to enable your users to stumble for fun as well.  Code is available here: https://github.com/mozilla/DemoStumbler
For an in-depth write-up on the launch of the Mozilla Location Service please read Hanno's blog here: http://blog.hannosch.eu/2013/12/mozilla-location-service-what-why-and.html
For a discussion of the issues on privacy management view Gervase's blog:

Robert Accettura: On Deprecating HTTP

Mozilla announced:

There’s pretty broad agreement that HTTPS is the way forward for the web. In recent months, there have been statements from IETF, IAB (even the other IAB), W3C, and the US Government calling for universal use of encryption by Internet applications, which in the case of the web means HTTPS.

I’m on board with this development 100%. I say this as a web developer who has faced, and will face, some uphill battles bringing everything into HTTPS land. It won’t happen immediately, but the long-term plan is 100% HTTPS. It’s not the easiest move for the internet, but it’s undoubtedly the right one.

A brief history

The lack of encryption on the internet is not too different from the weaknesses in email and SMTP that make spam so prolific. Once upon a time the internet was mainly a tool of academics; trust was implicit and ethics were paramount. Nobody thought security was of major importance. Everything was done in plain text for performance and easy debugging. That’s why you can use telnet to debug most older popular protocols.

In 2015 the landscape has changed. Academic use of the internet is a small fraction of its traffic. Malicious traffic is a growing concern. Free sharing of information, the norm in the academic world, is the exception in some of the places the internet reaches.

Protecting the user

Users deserve to be protected as much as technology will allow. Some folks claim “non-sensitive” data exists. I disagree: sensitivity is subjective, a matter of personal perspective. What’s sensitive to someone in a certain situation is not sensitive to others. Certain topics that are normal and safe to discuss in most of the world are not safe in others. Certain search queries are more sensitive than others (medical questions, sensitive business research). A web developer doesn’t have a good grasp of what is sensitive or not; it’s specific to the individual user. It’s not every network admin’s right to know if someone on their network browsed for and/or purchased pregnancy tests, or bought a book on parenting children with disabilities on Amazon. The former may not go over well at a “free” conservative school in the United States, for example. More than just credit card information is considered “sensitive data” in this case. Nobody should be so arrogant as to think they understand how every person on earth might come across their website.

Google and Yahoo took the first step to move search to HTTPS (Bing still seems to be using HTTP oddly enough). This is the obvious second step to protecting the world’s internet users.

Protecting the website’s integrity

[Image: Michelangelo's David, censored]

Unfortunately, as a web developer you can no longer be certain a user sees a website as you intended it. Sorry, but it doesn’t work that way. For years ISPs have been testing the ability to do things like insert ads into webpages. As far as I’m aware, in the U.S. there’s nothing explicitly prohibiting replacing ads. Even net neutrality rules seem limited to degrading or discriminating against certain traffic, not modifying payloads.

I’m convinced the next iteration of the great firewall will not explicitly block content, but censor it. It will be harder to detect than just being denied access to a website. The ability to do large-scale processing like this is becoming more practical. Just remove the offending block of text or image. Citizens of oppressed nations will possibly not notice a thing.

There have also been attempts to “optimize” images and video. Again, even net neutrality is not entirely clear on this, assuming the practice isn’t targeted at competitors, for example.

But TLS isn’t perfect!

True, but let’s be honest: it’s 8,675,309 times better than using nothing. CAs are a vulnerability, a bottleneck, and a potential target for governments looking to control information. But browsers and OSes allow you to manage certificates. The ability to stop trusting CAs exists. Technology will improve over time; I don’t expect us to still be using TLS 1.1 and 1.2 in 2025. Hopefully substantial improvements get made along the way. This argument is akin to not buying a computer because there will be a faster one next year. It’s the best option today, and we can replace it with better methods when available.
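The trust store really is inspectable and manageable on most platforms. As a small sketch, Python's standard ssl module can report what a default client context trusts (exactly what gets loaded varies by OS and Python build):

```python
import ssl

# Build a client context with the platform's default trust store loaded.
ctx = ssl.create_default_context()

# cert_store_stats() reports how many certificates the context holds,
# including how many are CA certificates ('x509_ca').
stats = ctx.cert_store_stats()
print("trusted CA certificates:", stats["x509_ca"])
```

Removing a CA from the OS or browser store then removes it from everything that builds its context this way.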

SSL Certificates are expensive!

First of all, domain validation certificates can be found for as little as $10. Secondly, I fully expect these prices to drop as demand increases. Domain verification certificates have virtually no cost as it’s all automated. The cheaper options will experience substantial growth as demand grows. There’s no limit in “supply” except computing power to generate them. A pricing war is inevitable. It would happen even faster if someone like Google bought a large CA and dropped prices to rock bottom. Certificates will get way cheaper before it’s essential. $10 is the early adopter fee.

But XYZ doesn’t support HTTPS!

True, not everyone supports it yet. That will change. It’s also true that some (like CDNs) are still charging insane prices for HTTPS. It’s not practical for everyone to switch today, or this year. But that too will change as demand increases. Encryption overhead is nominal. Once again, pricing wars will happen once someone wants more than their shopping cart served over SSL. The problem today is that demand is minimal, but those who need it must have it; therefore price gouging is the norm.

Seriously, we need to do this?

Yes, seriously. HTTPS is the right direction for the Internet. There’s valid arguments for not switching your site over today, but those roadblocks will disappear and you should be re-evaluating where you stand periodically. I’ve moved a few sites including this blog (SPDY for now, HTTP/2 soon) to experience what would happen. It was largely a smooth transition. I’ve got some sites still on HTTP. Some will be on HTTP for the foreseeable future due to other circumstances, others will switch sooner. This doesn’t mean HTTP is dead tomorrow, or next year. It just means the future of the internet is HTTPS, and you should be part of it.


Kim Moir: Mozilla pushes - April 2015

Here's April 2015's  monthly analysis of the pushes to our Mozilla development trees. You can load the data as an HTML page or as a json file.  

The number of pushes decreased from the previous month, with a total of 8894. This is because gaia-try is now managed by TaskCluster, so those jobs no longer appear in the buildbot scheduling databases that this report tracks.


  • 8894 pushes
  • 296 pushes/day (average)
  • Highest number of pushes/day: 528 pushes on Apr 1, 2015
  • 17.87 pushes/hour (highest average)

General Remarks

  • Try has around 58% of all the pushes now that we no longer track gaia-try
  • The three integration repositories (fx-team, mozilla-inbound and b2g-inbound) account for around 28% of all the pushes.


  • August 2014 was the month with most pushes (13090  pushes)
  • August 2014 had the highest pushes/day average with 422 pushes/day
  • July 2014 had the highest average of "pushes-per-hour" with 23.51 pushes/hour
  • October 8, 2014 had the highest number of pushes in one day with 715 pushes  

I've changed the graphs to only track 2015 data.  Last month they were tracking 2014 data as well but it looked crowded so I updated them.  Here's a graph showing the number of pushes over the last few years for comparison.

Mozilla Reps Community: Reps Weekly Call – April 30th 2015

Last Thursday we had our weekly call about the Reps program, where we talk about what’s going on in the program and what Reps have been doing during the last week.



  • Reps metrics.
  • Balkans Meetup – updates.
  • Council elections – Campaign phase.
  • Webmaker for Android.
  • Creating a Community Communications list.
  • Firefox Friends.

AirMozilla video

Detailed notes

Reps metrics

The first iteration of a metrics dashboard has been implemented in the portal, where you can get some initial info about Reps, events and activities.

You can know more about it on this blog post or give your feedback on discourse.

Thanks to everyone involved in this, especially our dev team: @pierros, @comzeradd, Tasos and Nemo.

Council elections

We are in the campaign phase and the eight candidates will be posting some answers to relevant questions on this topic. Make sure you follow it to get updates.

This is a really good opportunity for the candidates to show why they want to be part of the Council, and for Reps to get to know them and their vision better.

Next week, we’ll have a special Elections Weekly Call where most of the candidates will attend to talk with Reps. Don’t miss it!

Webmaker for Android

The team is pushing out an alpha release really soon, and they are developing content and localization for the launch markets: US, UK, Canada, Brazil, Indonesia, Bangladesh and Kenya.

If you are from one of these countries and want to help or learn more, contact them.

More info:

Balkans Meetup updates

The event will take place in Bucharest (Romania) on May 22–24. They want to reboot the community and work on SUMO, l10n, participation and QA.

You can follow #mozbalkans on social media and keep an eye on the wiki for more details.

Creating a Community Communications list

Lucy, @franc, @flore and @Christos are trying to put together a list of all the community communication channels to improve communication for important Mozilla announcements.

There isn’t currently a full, up-to-date list of all of the communication groups and channels, so they need people to help populate it with the active communities’ channels, creating a single source of truth.

You can contact Lucy (lharris at mozilla.com) if you have any questions or if you want to get involved.

Firefox Friends

@josorio presented the Firefox Friends site, where people will be able to share relevant content related to Mozilla and Firefox. It will replace the former Firefox Affiliates program.

The site has different feeds and social options that will help you schedule your publications and also track the impact they had.

Please indicate when logging in whether you are a Mozilla Rep or a Mozilla contributor; your shares could earn some surprises!

More public announcements will be made next week.

Non-verbal updates

Full raw notes.

Don’t forget to comment about this call on Discourse and we hope to see you next week!

Daniel Stenbergtalking curl on the changelog

The changelog is the name of a weekly podcast on which the hosts discuss open source and stuff.

Last Friday I was invited to participate, and I joined hosts Adam and Jerod for an hour-long episode about curl. It all started as a response to my post on curl 17 years, so we really got into how things started out, how curl has developed through the years, how much time I’ve spent on it, and whether I could point to a really great moment that stood out over the years.

The day before, they released the little separate teaser we made about the little-known --remote-name-all command line option, which basically makes curl default to doing -O on all given URLs.
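For the curious, the option is easy to try out. Here is a quick sketch using file:// URLs so nothing touches the network (paths are made up for the demo):

```shell
# Set up two "remote" files to fetch.
mkdir -p /tmp/rna-demo/src
cd /tmp/rna-demo
echo one > src/a.txt
echo two > src/b.txt
# A plain -O only applies to the single URL it precedes; here b.txt
# would be written to stdout instead of a file:
#   curl -s -O "file:///tmp/rna-demo/src/a.txt" "file:///tmp/rna-demo/src/b.txt"
# With --remote-name-all, every URL is saved under its remote name:
curl -s --remote-name-all "file:///tmp/rna-demo/src/a.txt" \
                          "file:///tmp/rna-demo/src/b.txt"
ls a.txt b.txt
```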

The full length episode can be experienced in all its glory here: https://changelog.com/153/

Mozilla Addons BlogMay 2015 Featured Add-ons

Pick of the Month: Save Text To File

by Robert Byrne

Save highlighted text to a file in the directory of your choice.

“One of a kind, save snippets as you surf with a single click including URL. Most important — excellent support.”

Featured: Adblock Plus Pop-up Addon

by Jesse Hakanen
Adblock Plus Pop-up Addon extends the blocking functionality of Adblock Plus to those annoying pop-up windows that open on mouse clicks and other user actions.

Featured: gTranslate

by Pau Tomàs, Pierre Bertet, Éric Lemoine
With gTranslate, you can translate any text in a webpage just by selecting and right-clicking over it. The extension uses the Google translation services to translate the text.

Nominate your favorite add-ons

Featured add-ons are selected by a community board made up of add-on developers, users, and fans. Board members change every six months, so there’s always an opportunity to participate. The deadline to apply for the next community board is May 10, 2015!

Each quarter, the board also selects a featured complete theme and featured mobile add-on.

If you’d like to nominate an add-on for featuring, please send it to amo-featured@mozilla.org for the board’s consideration. We welcome you to submit your own add-on.

Brian BirtlesWhat do we do with SMIL?

Earlier this week, Blink announced their intention to deprecate SMIL. I thought they were going to replace their native implementation with a Javascript one so this was a surprise to me.

Prompted by this, the SVG WG decided it would be better to split the animation features in SVG2 out into a separate spec. (This was something I started doing a while ago, calling it Animation Elements, but I haven’t had time to follow up on it recently.)

I’ve spent quite a lot of time working on SMIL in Gecko (Firefox) so I’m probably more attached to it than most. I also started work on Web Animations specifically to address Microsoft’s concern that we needed a unified model for animations on the Web and I was under the impression they were finally open to the idea of a Javascript implementation of SMIL in Edge.

I’m not sure what will happen next, but it’s interesting to think about what we would lose without SMIL and what we could do to fix that. Back in 2011 I wrote up a gap analysis of features missing in CSS that exist in SVG animation. One example, is that even with CSS Animations, CSS Transitions, Web Animations and the Motion Path module, we still couldn’t create a font using SVG-in-OpenType where the outlines of the glyphs wiggle. That’s because even though Web Animations lets you animate attributes (and not just CSS properties), that feature is only available via the script API and you can’t run script in some contexts like font glyph documents.

So what would we need? I think some of the following might be interesting specs:

  • Path animation module – We need some means of animating path data such as the ‘d’ attribute on an SVG path element. With SMIL this is actually really hard—you need to have exactly the same number and type of segments in order to interpolate between two paths. Tools could help with this but there aren’t any yet.

    It would be neat to be able to interpolate between, say, two shapes with entirely different path structures. Once you allow different numbers of segments you probably need a means of annotating anchor points so you can describe how the different paths are supposed to line up.
    (If, while we’re at it, we could define a way of warping paths that would be great for doing cartoons!)

  • Animation group module – SMIL lets you sequence and group animations so they play perfectly together. That’s not easy with CSS at the moment. Web Animations level 2 actually defines grouping and synchronization primitives for this but there’s no proposed CSS syntax for it.

    I think it would be useful if CSS Animations Level 2 added a single level of grouping, something like an animation-group property where all animations with a matching group name were kept in lock-step (with animation-group-reset to create new groups of the same name). A subsequent level could extend that to the more advanced hierarchies of groups described in Web Animations level 2.

  • Property addition – SMIL lets you have independent animations target the same element and add together. For example, you can have a ‘spin’ animation and a ‘swell’ animation defined completely independently and then applied to the same element, and they combine together without conflict. Allowing CSS properties to add together sounds like a big change but you can actually narrow down the problem space in three ways:
    1. Most commonly you’re adding together lists: e.g. transform lists or filter lists. A solution that only lets you add lists together would probably solve a lot of use cases.
    2. Amongst lists, transform lists are the most common. For this the FXTF already resolved to add translate, rotate and scale properties in CSS transforms level 2 so we should be able to address some of those use cases in the near future.
    3. While it would be nice to add properties together in static contexts like below, if it simplifies the solution, we could just limit the issue to animations at first.

.blur {  filter: blur(10px); }
.sepia { filter: sepia(50%); }
<img class="blur sepia">
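Coming back to the path-animation point above, a minimal SMIL sketch (path data invented) shows today’s constraint: both keyframe values below use the same segment list (M, L, L, Z), which is exactly what makes the interpolation possible.

```xml
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 20 20">
  <!-- Each 'values' entry has the same number and type of segments;
       change one Z to a C and SMIL interpolation breaks. -->
  <path d="M0 0 L16 0 L16 16 Z">
    <animate attributeName="d" dur="2s" repeatCount="indefinite"
             values="M0 0 L16 0 L16 16 Z; M2 2 L14 4 L12 18 Z"/>
  </path>
</svg>
```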

There are other things that SMIL lets you do, such as changing the source URL of an image in response to an arbitrary event like a click without writing any programming code, but I think the above cover some of the bigger gaps. What else would we miss?

Mike HommeyUsing a git clone of gecko-dev to push to mercurial

The next branch of git-cinnabar now has minimal support for grafting, enough to let you graft a clone of gecko-dev to mozilla-central and other Mozilla branches. This will be available in version 0.3 (which I don’t expect to release before June), but if you are interested, you can already try it.

There are two ways you can work with gecko-dev and git-cinnabar.

Switching to git-cinnabar

This is the recommended setup.

There are several reasons one would want to start from gecko-dev instead of a fresh clone of mozilla-central. One is to get the full history from before Mozilla switched to Mercurial, which gecko-dev contains. Another is if you already have a gecko-dev clone with local branches, where rebasing against a fresh clone would not be very convenient (but not impossible).

The idea here is to use gecko-dev as a start point, and from there on, use cinnabar to pull and push from/to Mercurial. The main caveat is that new commits pulled from Mercurial after this will not have the same SHA-1s as those on gecko-dev. Only the commits until the switch will. This also means different people grafting gecko-dev at different moments will have different SHA-1s for new commits. Eventually, I hope we can switch gecko-dev itself to use git-cinnabar instead of hg-git, which would solve this issue.

Assuming you already have a gecko-dev clone, and git-cinnabar next installed:

  • Change the remote URL:
    $ git remote set-url origin hg::https://hg.mozilla.org/mozilla-central

    (replace origin with the remote name for gecko-dev if it’s not origin)

  • Add other mercurial repositories you want to track as well:
    $ git remote add inbound hg::https://hg.mozilla.org/integration/mozilla-inbound
    $ git remote add aurora hg::https://hg.mozilla.org/releases/mozilla-aurora
  • Pull with grafting enabled:
    $ git -c cinnabar.graft=true -c cinnabar.graft-refs=refs/remotes/origin/* remote update

    (replace origin with the remote name for gecko-dev if it’s not origin)

  • Finish the setup by setting push urls for all those remotes:
    $ git remote set-url --push origin hg::ssh://hg.mozilla.org/mozilla-central
    $ git remote set-url --push inbound hg::ssh://hg.mozilla.org/integration/mozilla-inbound
    $ git remote set-url --push aurora hg::ssh://hg.mozilla.org/releases/mozilla-aurora

Now, you’re mostly done setting things up. Check out my git workflow for Gecko development for how to work from there.

Following gecko-dev

This setup lets you keep pulling from gecko-dev and thus keep the same SHA-1s. It relies on everything pulled from Mercurial existing in gecko-dev, which makes it cumbersome; that is why I don’t recommend using it. For instance, you may end up pulling from Mercurial before the server-side mirroring has made things available in gecko-dev, and that will fail. Some commands such as git pull --rebase will require you to ensure gecko-dev is up-to-date first (which makes winning push races essentially impossible). And more importantly, in some cases, what you push to Mercurial won’t have the same commit SHA-1 in gecko-dev, so you’ll have to deal with that manually (fortunately, in most cases, this shouldn’t happen).

Assuming you already have a gecko-dev clone, and git-cinnabar next installed:

  • Add a remote for all mercurial repositories you want to track:
    $ git remote add central hg::https://hg.mozilla.org/mozilla-central
    $ git remote add inbound hg::https://hg.mozilla.org/integration/mozilla-inbound
    $ git remote add aurora hg::https://hg.mozilla.org/releases/mozilla-aurora
  • For each of those remotes, set a push url:
    $ git remote set-url --push central hg::ssh://hg.mozilla.org/mozilla-central
    $ git remote set-url --push inbound hg::ssh://hg.mozilla.org/integration/mozilla-inbound
    $ git remote set-url --push aurora hg::ssh://hg.mozilla.org/releases/mozilla-aurora
  • For each of those remotes, set a refspec that limits to pulling the tip of the default branch:
    $ git config remote.central.fetch +refs/heads/branches/default/tip:refs/remotes/central/default
    $ git config remote.inbound.fetch +refs/heads/branches/default/tip:refs/remotes/inbound/default
    $ git config remote.aurora.fetch +refs/heads/branches/default/tip:refs/remotes/aurora/default

    Other branches can be added, but that must be done with care, because not all branches are exposed on gecko-dev.

  • Set git-cinnabar grafting mode permanently:
    $ git config cinnabar.graft only
  • Make git-cinnabar never store metadata when pushing:
    $ git config cinnabar.data never
  • Make git-cinnabar only graft with gecko-dev commits:
    $ git config cinnabar.graft-refs refs/remotes/origin/*

    (replace origin with the remote name for gecko-dev if it’s not origin)

  • Then pull from all remotes:
    $ git remote update

    Retry as long as you see errors about “Not allowing non-graft import”. This will keep happening until all Mercurial changesets you’re trying to pull make it to gecko-dev.
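As an aside, the remote.<name>.fetch refspecs above are ordinary git config values, so you can sanity-check the syntax in a throwaway repository before touching your real clone (the path below is invented for the demo):

```shell
# Sketch: set and read back a cinnabar-style fetch refspec in a scratch repo.
rm -rf /tmp/cinnabar-demo
git init -q /tmp/cinnabar-demo
cd /tmp/cinnabar-demo
# The hg:: URL is only stored in .git/config; nothing is contacted here.
git remote add central hg::https://hg.mozilla.org/mozilla-central
git config remote.central.fetch \
    "+refs/heads/branches/default/tip:refs/remotes/central/default"
git config --get remote.central.fetch
```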

With this setup, you can now happily push to Mercurial and see your commits appear in gecko-dev after a while. As long as you don’t copy or move files, their SHA-1 should be the same as what you pushed.

Mike HommeyAnnouncing git-cinnabar 0.2.2

Git-cinnabar is a git remote helper to interact with mercurial repositories. It lets you clone, pull and push from/to mercurial remote repositories, using git.

Get it on github.

What’s new since 0.2.1?

  • Don’t require core.ignorecase to be set to false on the repository when using a case-insensitive file system. If you did set core.ignorecase to false because git-cinnabar told you to, you can now set it back to true.
  • Raise an exception when git update-ref or git fast-import return an error. Silently ignoring those errors could lead to bad repositories after an upgrade from pre-0.1.0 versions on OS X, where the default maximum number of open files is low (256), and where git update-ref uses a lot of lock files for large transactions.
  • Updated git to 2.4.0, when building with the native helper.
  • When doing git cinnabar reclone, skip remotes with remote.$remote.skipDefaultUpdate set to true.

Niko MatsakisA few more remarks on reference-counting and leaks

So there has been a lot of really interesting discussion in response to my blog post. I wanted to highlight some of the comments I’ve seen, because I think they raise good points that I failed to address in the blog post itself. My comments here are lightly edited versions of what I wrote elsewhere.

Isn’t the problem with objects and leak-safe types more general?

Reem writes:

I posit that this is in fact a problem with trait objects, not a problem with Leak; the exact same flaw pointed about in the blog post already applies to the existing OIBITs, Send, Sync, and Reflect. The decision of which OIBITs to include on any trait object is already a difficult one, and is a large reason why std strives to avoid trait objects as part of public types.

I agree with him that the problems I described around Leak and objects apply equally to Send (and, in fact, I said so in my post), but I don’t think this is something we’ll be able to solve later on, as he suggests. I think we are working with something of a fundamental tension. Specifically, objects are all about encapsulation. That is, they completely hide the type you are working with, even from the compiler. This is what makes them useful: without them, Rust just plain wouldn’t work, since you couldn’t (e.g.) have a vector of closures. But, in order to gain that flexibility, you have to state your requirements up front. The compiler can’t figure them out automatically, because it doesn’t (and shouldn’t) know the types involved.
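To make the vector-of-closures point concrete, here is a small sketch in current Rust syntax (which spells trait objects as dyn Trait): each closure has its own anonymous type, and boxing them as trait objects is what erases those types so they can share one vector.

```rust
fn main() {
    // Each closure below has a distinct, unnameable type; boxing them as
    // trait objects erases that type so they can live in one Vec.
    let callbacks: Vec<Box<dyn Fn() -> i32>> = vec![
        Box::new(|| 1),
        Box::new(|| 2 + 3),
    ];
    let total: i32 = callbacks.iter().map(|f| f()).sum();
    println!("{}", total); // prints 6
}
```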

So, given that objects are here to stay, the question is whether adding a marker trait like Leak is a problem, given that we already have Send. I think the answer is yes; basically, because we can’t expect object types to be analyzed statically, we should do our best to minimize the number of fundamental splits people have to work with. Thread safety is pretty fundamental. I don’t think Leak makes the cut. (I said some of the reasons in conclusion of my previous blog post, but I have a few more in the questions below.)

Could we just remove Rc and only have RcScoped? Would that solve the problem?

Original question.

Certainly you could remove Rc in favor of RcScoped. Similarly, you could have only Arc and not Rc. But you don’t want to because you are basically failing to take advantage of extra constraints. If we only had RcScoped, for example, then creating an Rc always requires taking some scoped as argument – you can have a global constant for 'static data, but it’s still the case that generic abstractions have to take in this scope as argument. Moreover, there is a runtime cost to maintaining the extra linked list that will thread through all Rc abstractions (and the Rc structs get bigger, as well). So, yes, this avoids the “split” I talked about, but it does it by pushing the worst case on all users.

Still, I admit to feeling torn on this point. What pushes me over the edge, I think, is that simple reference counting of the kind we are doing now is a pretty fundamental thing. You find it in all kinds of systems (Objective C, COM, etc). This means that if we require that safe Rust cannot leak, then you cannot safely integrate borrowed data with those systems. I think it’s better to just use closures in Rust code – particularly since, as annodomini points out on Reddit, there are other kinds of cases where RAII is a poor fit for cleanup.

Could a proper GC solve this? Is reference counting really worth it?

Original question.

It’ll depend on the precise design, but tracing GC most definitely is not a magic bullet. If anything, the problem around leaks is somewhat worse: GCs don’t give any kind of guarantee about when the destructor runs. So we either have to ban GC’d data from having destructors or ban it from having borrowed pointers; either of those implies a bound very similar to Leak or 'static. Hence I think that GC will never be a “fundamental building block” for abstractions in the way that Rc/Arc can be. This is sad, but perhaps inevitable: GC inherently requires a runtime as well, which already limits its reusability.

The Servo BlogServo Continues Pushing Forward

Servo is a new prototype web browser layout engine written in Rust that was launched by Mozilla in 2012 with a new architecture to achieve high parallelism on components like layout and painting.

It has been progressing at an amazing pace, with over 120 CSS properties currently supported, and work is ongoing to implement the remaining properties. For a full list of the current set of CSS properties with initial support in Servo, check out the Google Docs spreadsheet the Servo team is using to track development.

The current supported properties allow Servo to be mostly operational on static sites like Wikipedia and GitHub, with a surprisingly small code footprint. It has only about 126K lines of Rust code, and the Rust compiler and libraries are about 360K lines. For comparison, in 2014 Blink had about 700K lines of C++ code, and WebKit had around 1.3M lines, including platform specific code.

Another exciting development is servo-shell, which allows the implementation and customization of a web browser using only JavaScript and CSS. It’s essentially a browser chrome that uses mozbrowser APIs (i.e., iframe extensions) running on top of Servo, and it provides the ability to separate the browser chrome from the loaded pages, which has led to fairly good performance so far.

Finally, Rust (the programming language used to implement Servo) is approaching the 1.0 launch and a big group of people are ready to celebrate the occasion in San Francisco.

Improving Debugging and Testing

One of the most challenging parts of developing a browser engine from scratch is re-implementing all of the CSS features, because they often have complicated interactions. For a developer to solve any layout rendering bugs they run into, they must first inspect the graphical representation of the DOM tree to see if it is correct. In the case of Servo, the DOM tree generates a Flow Tree and Display Lists while performing layout and rendering; WebKit and Blink, by comparison, use a RenderTree as the graphical representation (and provide the DumpRenderTree tool for accessing it). Debugging support was improved remarkably with the addition of the ability to dump optimized display lists, the flow tree, and the display list, as well as the implementation of reflow event debugging, which can be used to inform developers when and why a layout was recalculated.

Integration of the Firefox timeline has recently been started on Servo. This is a tool that allows tracking of operations performed by the web engine and is useful for debugging and profiling a site. Additionally, the W3C has created a test suite to help verify CSS features across browsers, which enhances interoperability. Servo now has support for running these W3C CSS tests.

Additional Servo Highlights

General Developments

  • Servo was ported to Gonk (the low level layer of Firefox OS) last February.
  • Servo has some state-of-the-art components (e.g. HTML5 parser, CSS parser) implemented in Rust as independent libraries, which may be beneficial to integrate with Firefox. Work has started on this integration, but whether the image decoder or the URL parser will be integrated first has not been decided yet.
  • WebGL implementation has begun.
  • Another cool feature is the visualization of parallel painting: Servo can execute in a mode in which the tiles rendered by each distinct thread get an overlay drawn on top of them, making it possible to see Servo’s parallel painting in action.
  • Support for displaying a placeholder when an image link is broken.
  • Cookies are now supported, as well as SSL certificate verification. These allow users to log in to most websites that have user accounts.
  • Providing the ability to embed Servo on applications in the future is important, and work on this subject is progressing. Instead of creating a new API for developers, the community decided to use the Chromium Embedded Framework (CEF): an API that is quite successful and stable. Servo has a CEF-like API that provides the ability to embed a Servo-powered webview on native apps, as demonstrated by Miniservo on Mac and Linux. Work on supporting the API has been progressing well.

Improved HTML/CSS Support

The Road Ahead

As you can see, Servo has advanced remarkably in the last few months by implementing many new features that benefit both Servo developers as well as future users. It is moving at a fast pace, implementing support for several of the web features needed by any modern browser engine while proving that Rust, as a systems-level programming language, is up to the task of writing a web engine from scratch.

Are you interested in contributing to Servo or just curious to give it a try? Visit the project site, or feel free to chat with the developers on #servo on the mozilla IRC server.

This post was originally published on the Samsung OSG Blog, which is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Mozilla Addons BlogJoin the Featured Add-ons Community Board

Want to have a voice in which add-ons are featured on addons.mozilla.org (AMO)? If so, we invite you to apply for the featured add-ons board. Board members are responsible for deciding which add-ons are featured on AMO in the next six months. Featured add-ons help users discover what’s new and useful, and downloads increase drastically in the months they are featured, so your participation really makes an impact!

Anyone from the add-ons community is welcome to apply: power users, theme designers, developers, and evangelists. Priority will be given to applicants who have not served on the board before, followed by those from previous boards, and finally from the outgoing board. This page provides more information on the duties of a board member.

To be considered, please email us at amo-featured@mozilla.org with your name, and tell us how you’re involved with AMO. The deadline is Sunday, May 10, 2015 at 23:59 PDT. The new board will be announced about a week after.

We look forward to hearing from you!

Mozilla Open Policy & Advocacy BlogComments sought on Internet governance transition

The Internet is part of the fabric of our society and our economy, and its governance affects countless aspects of our lives. Empowering individuals to have a voice in shaping the Internet is one of Mozilla’s core principles. Another of those core principles is that continued effectiveness of the Internet depends on decentralized participation worldwide.

We’ve engaged with global Internet governance a few times in past years, from the perspective of advocating for meaningful empowerment of Internet users. Often, the contrast is between so-called “multistakeholder” models and increased direct governmental control – as was the case in 2012, when we engaged in an active debate over whether to expand International Telecommunications Union (ITU) jurisdiction deep into the Internet, something we consider to be a bad idea. In contrast, open discussion forums like the Internet Governance Forum allow for more equitable participation between governments, businesses, and users of the Internet. We consistently support IGF as the best home for collective policy development as it touches the Internet.

Today, one of the major issues in Internet governance is the transition of oversight of certain technical administrative functions away from the U.S. government, where it has resided for decades, to a multistakeholder body. These functions are implemented by a number of groups, but the most well-known is ICANN, a non-profit organization that holds significant responsibility for managing policy decisions around domain names (like “mozilla.org” and “firefox.com”). (For more background: The Global Commission on Internet Governance has produced a thorough paper on this issue.)

At Mozilla, we’ve been tracking this transition, along with other Internet governance developments, from the perspective of promoting trust online and protecting a healthy future for the global, open Internet. We support shifting oversight in this space from the U.S. government to an accountable multistakeholder body, and we’re glad to see progress on the transition, as well as transparency and openness in the process.

For the next few weeks, ICANN is seeking feedback from the public on its proposed transition processes. The first of these is a proposal to transition naming related functions. We encourage you to make your voice heard at this inflection point on evolving global Internet governance.

Mozilla Security BlogDeprecating Non-Secure HTTP

Today we are announcing our intent to phase out non-secure HTTP.

There’s pretty broad agreement that HTTPS is the way forward for the web.  In recent months, there have been statements from IETF, IAB (even the other IAB), W3C, and the US Government calling for universal use of encryption by Internet applications, which in the case of the web means HTTPS.

After a robust discussion on our community mailing list, Mozilla is committing to focus new development efforts on the secure web, and start removing capabilities from the non-secure web.  There are two broad elements of this plan:

  1. Setting a date after which all new features will be available only to secure websites
  2. Gradually phasing out access to browser features for non-secure websites, especially features that pose risks to users’ security and privacy.

For the first of these steps, the community will need to agree on a date, and a definition for what features are considered “new”.  For example, one definition of “new” could be “features that cannot be polyfilled”.  That would allow things like CSS and other rendering features to still be used by insecure websites, since the page can draw effects on its own (e.g., using <canvas>).  But it would still restrict qualitatively new features, such as access to new hardware capabilities.

The second element of the plan will need to be driven by trade-offs between security and web compatibility.  Removing features from the non-secure web will likely cause some sites to break.  So we will have to monitor the degree of breakage and balance it with the security benefit.  We’re also already considering softer limitations that can be placed on features when used by non-secure sites.  For example, Firefox already prevents persistent permissions for camera and microphone access when invoked from a non-secure website.  There have also been some proposals to limit the scope of non-secure cookies.

It should be noted that this plan still allows for usage of the “http” URI scheme in legacy content. With HSTS and the upgrade-insecure-requests CSP directive, the “http” scheme can be automatically translated to “https” by the browser, and thus run securely.
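For example, a site serving legacy “http” content can opt in to both mechanisms with response headers like these (the max-age value here is illustrative):

```http
Strict-Transport-Security: max-age=31536000; includeSubDomains
Content-Security-Policy: upgrade-insecure-requests
```

The first tells the browser to only contact the site over HTTPS for the given period; the second tells it to rewrite the page’s own insecure subresource requests to “https” before sending them.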

Since the goal of this effort is to send a message to the web developer community that they need to be secure, our work here will be most effective if coordinated across the web community.  We expect to be making some proposals to the W3C WebAppSec Working Group soon.

Thanks to the many people who participated in the mailing list discussion of this proposal.  Let’s get the web secured!

Richard Barnes, Firefox Security Lead

Update (2015-05-01): Since there are some common threads in the comments, we’ve put together a FAQ document with thoughts on free certificates, self-signed certificates, and more.

Andrew SutherlandTalk Script: Firefox OS Email Performance Strategies

Last week I gave a talk at the Philly Tech Week 2015 Dev Day organized by the delightful people at technical.ly on some of the tricks/strategies we use in the Firefox OS Gaia Email app.  Note that the credit for implementing most of these techniques goes to the owner of the Email app’s front-end, James Burke.  Also, a special shout-out to Vivien for the initial DOM Worker patches for the email app.

I tried to avoid having slides that I would be reading aloud as the audience read silently, so instead of slides to share, I have the talk script.  Well, I also have the slides here, but there’s not much to them.  The headings below are the content of the slides, except for the one time I inline some code.  Note that the live presentation must have differed slightly, because I’m sure I’m much more witty and clever in person than this script would make it seem…

Cover Slide: Who!

Hi, my name is Andrew Sutherland.  I work at Mozilla on the Firefox OS Email Application.  I’m here to share some strategies we used to make our HTML5 app Seem faster and sometimes actually Be faster.

What’s A Firefox OS (Screenshot Slide)

But first: What is a Firefox OS?  It’s a multiprocess Firefox Gecko engine on an Android Linux kernel, where all the apps, including the system UI, are implemented using HTML5, CSS, and JavaScript.  All the apps use some combination of standard web APIs and APIs that we hope to standardize in some form.

Firefox OS homescreen screenshot Firefox OS clock app screenshot Firefox OS email app screenshot

Here are some screenshots.  We’ve got the default home screen app, the clock app, and of course, the email app.

It’s an entirely client-side offline email application, supporting IMAP4, POP3, and ActiveSync.  The goal, like all Firefox OS apps shipped with the phone, is to give native apps on other platforms a run for their money.

And that begins with starting up fast.

Fast Startup: The Problems

But that’s frequently easier said than done.  Slow-loading websites are still very much a thing.

The good news for the email application is that a slow network isn’t one of its problems.  It’s pre-loaded on the phone.  And even if it wasn’t, because of the security implications of the TCP Web API and the difficulty of explaining this risk to users in a way they won’t just click through, any TCP-using app needs to be a cryptographically signed zip file approved by a marketplace.  So we do load directly from flash.

However, it’s not like flash on cellphones is equivalent to an infinitely fast, zero-latency network connection.  And even if it was, in a naive app you’d still try and load all of your HTML, CSS, and JavaScript at the same time because the HTML file would reference them all.  And that adds up.

It adds up in the form of event loop activity and competition with other threads and processes.  With the exception of Promises which get their own micro-task queue fast-lane, the web execution model is the same as all other UI event loops; events get scheduled and then executed in the same order they are scheduled.  Loading data from an asynchronous API like IndexedDB means that your read result gets in line behind everything else that’s scheduled.  And in the case of the bulk of shipped Firefox OS devices, we only have a single processor core so the thread and process contention do come into play.

So we try not to be naive.

Seeming Fast at Startup: The HTML Cache

If we’re going to optimize startup, it’s good to start with what the user sees.  Once an account exists for the email app, at startup we display the default account’s inbox folder.

What is the least amount of work that we can do to show that?  Cache a screenshot of the Inbox.  The problem with that, of course, is that a static screenshot is indistinguishable from an unresponsive application.

So we did the next best thing: we cache the actual HTML we display.  At startup we load a minimal HTML file, our concatenated CSS, and just enough JavaScript to figure out if we should use the HTML cache and then actually use it if appropriate.  It’s not always appropriate, like if our application is being triggered to display a compose UI or from a new mail notification that wants to show a specific message or a different folder.  But this is a decision we can make synchronously, so it doesn’t slow us down.

Local Storage: Okay in small doses

We implement this by storing the HTML in localStorage.
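The mechanics are roughly this (a minimal sketch, not the app’s actual code: the key name and the startup-reason values are made up, and a plain object stands in for the browser’s synchronous localStorage):

```javascript
// Hypothetical sketch of the startup HTML cache. A plain object stands
// in for window.localStorage; 'html_cache' is an invented key name.
const storage = {};

// Called whenever the rendered inbox changes enough to be worth caching.
function saveHtmlCache(inboxHtml) {
  storage['html_cache'] = inboxHtml;
}

// Called synchronously at startup, before any async work is scheduled.
// Returns true if the cached HTML was inserted.
function restoreHtmlCache(startupReason, containerNode) {
  // Only the plain "show the inbox" startup can use the cache; compose
  // or notification entry points need different UI.
  if (startupReason !== 'default') {
    return false;
  }
  const cached = storage['html_cache'];
  if (!cached) {
    return false;
  }
  containerNode.innerHTML = cached;  // synchronous insertion, no callback
  return true;
}
```

The important property is that the restore path involves zero asynchronous turns: by the time the startup script finishes running, the cached UI is already in the document.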

Important Disclaimer!  LocalStorage is a bad API.  It’s a bad API because it’s synchronous.  You can read any value stored in it at any time, without waiting for a callback.  Which means if the data is not in memory the browser needs to block its event loop or spin a nested event loop until the data has been read from disk.  Browsers avoid this now by trying to preload the Entire contents of local storage for your origin into memory as soon as they know your page is being loaded.  And then they keep that information, ALL of it, in memory until your page is gone.

So if you store a megabyte of data in local storage, that’s a megabyte of data that needs to be loaded in its entirety before you can use any of it, and that hangs around in scarce phone memory.

To really make the point: do not use local storage, at least not directly.  Use a library like localForage that will use IndexedDB when available, and then fails over to WebSQLDatabase and local storage in that order.
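The failover order can be sketched like this (an illustration of the idea only, not localForage’s actual internals; the driver names are simplified):

```javascript
// Sketch of the backend-selection idea behind libraries like localForage:
// prefer IndexedDB, then WebSQL, then plain localStorage.
function pickDriver(available) {
  const order = ['indexeddb', 'websql', 'localstorage'];
  for (const name of order) {
    if (available.includes(name)) {
      return name;
    }
  }
  throw new Error('no storage backend available');
}
```

Whichever backend is picked, the library exposes the same asynchronous, promise-style API on top of it, so your code never blocks the event loop on a disk read.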

Now, having sufficiently warned you of the terrible evils of local storage, I can say with a sorta-clear conscience… there are upsides in this very specific case.

The synchronous nature of the API means that once we get our turn in the event loop we can act immediately.  There’s no waiting around for an IndexedDB read result to get its turn on the event loop.

This matters because although the concept of loading is simple from a User Experience perspective, there’s no standard to back it up right now.  Firefox OS’s UX desires are very straightforward.  When you tap on an app, we zoom it in.  Until the app is loaded we display the app’s icon in the center of the screen.  Unfortunately the standards are still assuming that the content is right there in the HTML.  This works well for document-based web pages or server-powered web apps where the contents of the page are baked in.  They work less well for client-only web apps where the content lives in a database and has to be dynamically retrieved.

The two events that exist are:

“DOMContentLoaded” fires when the document has been fully parsed and all scripts not tagged as “async” have run.  If there were stylesheets referenced prior to the script tags, the script tags will wait for the stylesheet loads.

“load” fires when the document has been fully loaded; stylesheets, images, everything.

But neither of these events has anything to do with the content in the page saying it’s actually done.  This matters because these standards also say nothing about IndexedDB reads or the like.  We tried to create a standards consensus around this, but it’s not there yet.  So Firefox OS just uses the “load” event to decide an app or page has finished loading and it can stop showing your app icon.  This largely avoids the dreaded “flash of unstyled content” problem, but it also means that your webpage or app needs to deal with this period of time by displaying a loading UI or just accepting a potentially awkward transient UI state.

(Trivial HTML slide)

<link rel="stylesheet" ...>
<script ...></script>

This is the important summary of our index.html.

We reference our stylesheet first.  It includes all of our styles.  We never dynamically load stylesheets because that compels a style recalculation for all nodes and potentially a reflow.  We would have to have an awful lot of style declarations before considering that.

Then we have our single script file.  Because the stylesheet precedes the script, our script will not execute until the stylesheet has been loaded.  Then our script runs and we synchronously insert our HTML from local storage.  Then DOMContentLoaded can fire.  At this point the layout engine has enough information to perform a style recalculation and determine what CSS-referenced image resources need to be loaded for buttons and icons, then those load, and then we’re good to be displayed as the “load” event can fire.

After that, we’re displaying an interactive-ish HTML document.  You can scroll, you can press on buttons and the :active state will apply.  So things seem real.

Being Fast: Lazy Loading and Optimized Layers

But now we need to try and get some logic in place as quickly as possible that will actually cash the checks that real-looking HTML UI is writing.  And the key to that is only loading what you need when you need it, and trying to get it to load as quickly as possible.

There are many module loading and build optimizing tools out there, and most frameworks have a preferred or required way of handling this.  We used the RequireJS family of Asynchronous Module Definition loaders, specifically the alameda loader and the r-dot-js optimizer.

One of the niceties of the loader plugin model is that we are able to express resource dependencies as well as code dependencies.

RequireJS Loader Plugins

var fooModule = require('./foo');
var htmlString = require('text!./foo.html');
var localizedDomNode = require('tmpl!./foo.html');

The standard CommonJS loader semantics used by node.js and io.js are what you see on the first line here: load the module and return its exports.

But RequireJS loader plugins also allow us to do things like the second line where the exclamation point indicates that the load should occur using a loader plugin, which is itself a module that conforms to the loader plugin contract.  In this case it’s saying load the file foo.html as raw text and return it as a string.

But, wait, there’s more!  Loader plugins can do more than that.  The third example uses a loader that loads the HTML file using the ‘text’ plugin under the hood, creates an HTML document fragment, and pre-localizes it using our localization library.  And this works un-optimized in a browser, no compilation step needed, but it can also be optimized.
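The plugin contract can be modeled in miniature like this (a toy sketch only: real AMD loader plugins also receive a parentRequire and a config argument, and the resource contents here are invented):

```javascript
// Toy model of the AMD loader-plugin contract: an id of the form
// "plugin!resource" routes the load through the named plugin's load().
const resources = { './foo.html': '<p>hi</p>' };  // stands in for files on disk

const plugins = {
  // A 'text'-style plugin: load the resource as a raw string.
  text: {
    load: function (name, onload) {
      onload(resources[name]);
    }
  },
  // A 'tmpl'-style plugin layered on 'text': transform the raw string
  // (here into a tagged object standing in for a localized DOM fragment).
  tmpl: {
    load: function (name, onload) {
      plugins.text.load(name, function (raw) {
        onload({ kind: 'fragment', html: raw });
      });
    }
  }
};

function requireResource(id, onload) {
  const bang = id.indexOf('!');
  if (bang === -1) {
    throw new Error('plain module loading not modeled in this sketch');
  }
  const plugin = plugins[id.slice(0, bang)];
  plugin.load(id.slice(bang + 1), onload);
}
```

The optimizer hook works the same way: at build time, a plugin can emit the already-transformed result inline instead of performing the load at runtime.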

So when our optimizer runs, it bundles up the core modules we use, plus the modules for our “message list” card that displays the inbox.  And the message list card loads its HTML snippets using the template loader plugin.  The r-dot-js optimizer then locates these dependencies, and the loader plugins also have optimizer logic that results in the HTML strings being inlined in the resulting optimized file.  So there’s just one single JavaScript file to load, with no extra HTML file dependencies or other loads.

We then also run the optimizer against our other important cards like the “compose” card and the “message reader” card.  We don’t do this for all cards because it can be hard to carve up the module dependency graph for optimization without starting to run into cases of overlap where many optimized files redundantly include files loaded by other optimized files.

Plus, we have another trick up our sleeve:

Seeming Fast: Preloading

Preloading.  Our cards optionally know the other cards they can load.  So once we display a card, we can kick off a preload of the cards that might potentially be displayed.  For example, the message list card can trigger the compose card and the message reader card, so we can trigger a preload of both of those.
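A sketch of the idea (card names and helper functions are hypothetical):

```javascript
// Hypothetical card preload table: each card lists the cards it can
// trigger, and showing a card kicks off loads for those targets.
const preloadTargets = {
  message_list: ['compose', 'message_reader'],
  message_reader: ['compose']
};

const loaded = new Set();

function loadCard(name) {
  // Stands in for an asynchronous module load; here we just record it.
  loaded.add(name);
}

function showCard(name) {
  loadCard(name);
  for (const next of preloadTargets[name] || []) {
    loadCard(next);  // preload so the transition is instant if it happens
  }
}
```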

But we don’t go overboard with preloading in the frontend because we still haven’t actually loaded the back-end that actually does all the emaily email stuff.  The back-end is also chopped up into optimized layers along account type lines and online/offline needs, but the main optimized JS file still weighs in at something like 17 thousand lines of code with newlines retained.

So once our UI logic is loaded, it’s time to kick off loading the back-end.  And in order to avoid impacting the responsiveness of the UI, both while it loads and when we’re doing steady-state processing, we run it in a DOM Worker.

Being Responsive: Workers and SharedWorkers

DOM Workers are background JS threads that lack access to the page’s DOM, communicating with their owning page via message passing with postMessage.  Normal workers are owned by a single page.  SharedWorkers can be accessed via multiple pages from the same document origin.
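The request/response discipline you typically layer on top of postMessage looks roughly like this (the endpoint pair below is an in-process stand-in for the page/worker boundary, and the message shapes are hypothetical, not the email app’s actual protocol):

```javascript
// Two endpoints wired together in-process; in a real app these would be
// the page and the Worker, each calling postMessage on its own port.
function makeEndpointPair() {
  const a = {}, b = {};
  a.postMessage = function (msg) { if (b.onmessage) b.onmessage({ data: msg }); };
  b.postMessage = function (msg) { if (a.onmessage) a.onmessage({ data: msg }); };
  return [a, b];
}

const [pagePort, workerPort] = makeEndpointPair();

// "Worker" side: handle commands, reply using the same request id.
workerPort.onmessage = function (evt) {
  const { id, cmd } = evt.data;
  if (cmd === 'listMessages') {
    workerPort.postMessage({ id, result: ['msg1', 'msg2'] });
  }
};

// Page side: correlate replies to pending requests by id, since
// message passing gives you no built-in call/return pairing.
let nextId = 1;
const pending = new Map();
pagePort.onmessage = function (evt) {
  const { id, result } = evt.data;
  pending.get(id)(result);
  pending.delete(id);
};

function sendRequest(cmd, callback) {
  const id = nextId++;
  pending.set(id, callback);
  pagePort.postMessage({ id, cmd });
}
```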

By doing this, we stay out of the way of the main thread.  This is getting less important as browser engines support Asynchronous Panning & Zooming or “APZ” with hardware-accelerated composition, tile-based rendering, and all that good stuff.  (Some might even call it magic.)

When Firefox OS started, we didn’t have APZ, so any main-thread logic had the serious potential to result in janky scrolling and the impossibility of rendering at 60 frames per second.  It’s a lot easier to get 60 frames-per-second now, but even asynchronous pan and zoom potentially has to wait on dispatching an event to the main thread to figure out if the user’s tap is going to be consumed by app logic and preventDefault called on it.  APZ does this because it needs to know whether it should start scrolling or not.

And speaking of 60 frames-per-second…

Being Fast: Virtual List Widgets

…the heart of a mail application is the message list.  The expected UX is to be able to fling your way through the entire list of what the email app knows about and see the messages there, just like you would on a native app.

This is admittedly one of the areas where native apps have it easier.  There are usually list widgets that explicitly have a contract that says they request data on an as-needed basis.  They potentially even include data bindings so you can just point them at a data-store.

But HTML doesn’t yet have a concept of instantiate-on-demand for the DOM, although it’s being discussed by Firefox layout engine developers.  For app purposes, the DOM is a scene graph.  An extremely capable scene graph that can handle huge documents, but there are footguns and it’s arguably better to err on the side of fewer DOM nodes.

So what the email app does is we create a scroll-region div and explicitly size it based on the number of messages in the mail folder we’re displaying.  We create and render enough message summary nodes to cover the current screen, 3 screens worth of messages in the direction we’re scrolling, and then we also retain up to 3 screens worth in the direction we scrolled from.  We also pre-fetch 2 more screens worth of messages from the database.  These constants were arrived at experimentally on prototype devices.
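The windowing math can be sketched as follows (the three-screen buffer comes from the talk; the fixed row height, the symmetric buffer, and the function names are assumptions of this sketch):

```javascript
// Hypothetical fixed row height for message summary nodes, in pixels.
const ITEM_HEIGHT = 60;

// The scroll region is sized from the total message count so the
// scrollbar behaves as if every message were rendered.
function scrollRegionHeight(itemCount) {
  return itemCount * ITEM_HEIGHT;
}

// Which item indexes should have live DOM nodes: the current screen
// plus a 3-screen buffer on each side (the real app biases the buffer
// toward the scroll direction).
function computeRenderRange(scrollTop, screenHeight, itemCount) {
  const itemsPerScreen = Math.ceil(screenHeight / ITEM_HEIGHT);
  const firstVisible = Math.floor(scrollTop / ITEM_HEIGHT);
  const first = Math.max(0, firstVisible - 3 * itemsPerScreen);
  const last = Math.min(itemCount - 1,
                        firstVisible + itemsPerScreen - 1 + 3 * itemsPerScreen);
  return { first, last };
}
```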

We listen to “scroll” events and issue database requests and move DOM nodes around and update them as the user scrolls.  For any potentially jarring or expensive transitions such as coordinate space changes from new messages being added above the current scroll position, we wait for scrolling to stop.

Nodes are absolutely positioned within the scroll area using their ‘top’ style but translation transforms also work.  We remove nodes from the DOM, then update their position and their state before re-appending them.  We do this because the browser APZ logic tries to be clever and figure out how to create an efficient series of layers so that it can pre-paint as much of the DOM as possible in graphic buffers, AKA layers, that can be efficiently composited by the GPU.  Its goal is that when the user is scrolling, or something is being animated, that it can just move the layers around the screen or adjust their opacity or other transforms without having to ask the layout engine to re-render portions of the DOM.

When our message elements are added to the DOM with an already-initialized absolute position, the APZ logic lumps them together as something it can paint in a single layer along with the other elements in the scrolling region.  But if we start moving them around while they’re still in the DOM, the layerization logic decides that they might want to independently move around more in the future and so each message item ends up in its own layer.  This slows things down.  But by removing them and re-adding them it sees them as new with static positions and decides that it can lump them all together in a single layer.  Really, we could just create new DOM nodes, but we produce slightly less garbage this way and in the event there’s a bug, it’s nicer to mess up with 30 DOM nodes displayed incorrectly rather than 3 million.
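The detach-update-reattach trick might look like this (hypothetical helper names; `container` is any DOM element holding the message nodes):

```javascript
// Sketch of recycling a message node without fragmenting layers:
// detach it, reposition and rebind it while detached, then reattach,
// so the layerization logic sees a "new" node with a static position.
function recycleNode(container, node, newTop, updateContents) {
  container.removeChild(node);     // detach first
  node.style.top = newTop + 'px';  // reposition while out of the DOM
  updateContents(node);            // rebind to a different message
  container.appendChild(node);     // reattach; stays in the shared layer
}
```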

But as neat as the layerization stuff is to know about on its own, I really mention it to underscore 2 suggestions:

1, Use a library when possible.  Getting on and staying on APZ fast-paths is not trivial, especially across browser engines.  So it’s a very good idea to use a library rather than rolling your own.

2, Use developer tools.  APZ is tricky to reason about and even the developers who write the Async pan & zoom logic can be surprised by what happens in complex real-world situations.  And there ARE developer tools available that help you avoid needing to reason about this.  Firefox OS has easy on-device developer tools that can help diagnose what’s going on or at least help tell you whether you’re making things faster or slower:

– it’s got a frames-per-second overlay; you do need to scroll like mad to get the system to want to render 60 frames-per-second, but it makes it clear what the net result is

– it has paint flashing that overlays random colors every time it paints the DOM into a layer.  If the screen is flashing like a discotheque or has a lot of smeared rainbows, you know something’s wrong because the APZ logic is not able to just reuse its layers.

– devtools can enable drawing cool colored borders around the layers APZ has created so you can see if layerization is doing something crazy

There are also fancier and more complicated tools in Firefox and other browsers like Google Chrome that let you see what got painted, what the layer tree looks like, et cetera.

And that’s my spiel.


The source code to Gaia can be found at https://github.com/mozilla-b2g/gaia

The email app in particular can be found at https://github.com/mozilla-b2g/gaia/tree/master/apps/email

(I also asked for questions here.)

Doug Belshaw: Web Literacy Map v1.5 is now live at teach.mozilla.org

Mozilla has soft-launched teach.mozilla.org. This provides a new home for the Web Literacy Map, which now stands at v1.5.

Web Literacy Map v1.5

While I’m a bit sad at the lack of colour compared to the previous version, at least it’s live and underpinning the ‘Teach Like Mozilla’ work!

Questions? Comments? I’m @dajbelshaw or you can email me: mail@dougbelshaw.com

Gregory Szorc: Automatically Redirecting Mercurial Pushes

Managing URLs in distributed version control tools can be a pain, especially if multiple repositories are involved. For example, with Mozilla's repository-based code review workflow (you push to a special review repository to initiate code review - this is conceptually similar to GitHub pull requests), there exist separate code review repositories for each logical repository. Figuring out how repositories map to each other and setting up remote paths for each new clone can be a pain and time sink.

As of today, we can now do something better.

If you push to ssh://reviewboard-hg.mozilla.org/autoreview, Mercurial will automatically figure out the appropriate review repository and redirect your push automatically. In other words, if we have MozReview set up to review whatever repository you are working on, your push and review request will automatically go through. No need to figure out what the appropriate review repo is or configure repository URLs!

Here's what it looks like:

$ hg push review
pushing to ssh://reviewboard-hg.mozilla.org/autoreview
searching for appropriate review repository
redirecting push to ssh://reviewboard-hg.mozilla.org/version-control-tools/
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 1 changesets with 1 changes to 1 files
remote: Trying to insert into pushlog.
remote: Inserted into the pushlog db successfully.
submitting 1 changesets for review

changeset:  11043:b65b087a81be
summary:    mozreview: create per-commit identifiers (bug 1160266)
review:     https://reviewboard.mozilla.org/r/7953 (draft)

review id:  bz://1160266/gps
review url: https://reviewboard.mozilla.org/r/7951 (draft)
(visit review url to publish this review request so others can see it)

Read the full instructions for more details.

This requires an updated version-control-tools repository, which you can get by running mach mercurial-setup from a Firefox repository.

For those that are curious, the autoreview repo/server advertises a list of repository URLs and their root commit SHA-1. The client automatically sends the push to a URL sharing the same root commit. The code is quite simple.
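That matching step can be sketched like this (a hypothetical JavaScript re-statement of the logic, not the actual extension code; repository URLs and hashes are invented):

```javascript
// The autoreview server advertises (url, root commit SHA-1) pairs; the
// client picks the review repo whose root commit matches the local one.
function pickReviewRepo(advertised, localRootSha1) {
  for (const entry of advertised) {
    if (entry.root === localRootSha1) {
      return entry.url;
    }
  }
  return null;  // no review repo set up for this repository
}
```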

While this is only implemented for MozReview, I could envision us doing something similar for other centralized repository-centric services, such as Try and Autoland. Stay tuned.

Mozilla Release Management Team: Firefox 38 beta8 to beta9

This beta is not as busy as it might seem.  17 patches were test-only or NPOTB (Not Part Of The Build) changes.

  • 38 changesets
  • 87 files changed
  • 713 insertions
  • 287 deletions



List of changesets:

Ryan VanderMeulen: Bug 1062496 - Disable browser_aboutHome.js on OSX 10.6 debug. a=test-only - 657cfe2d4078
Ryan VanderMeulen: Bug 1148224 - Skip timeout-prone subtests in mediasource-duration.html on Windows. a=test-only - 82de02ddde1b
Ehsan Akhgari: Bug 1095517 - Increase the timeout of browser_identity_UI.js. a=test-only - 611ca5bd91d4
Ehsan Akhgari: Bug 1079617 - Increase the timeout of browser_test_new_window_from_content.js. a=test-only - 1783df5849c7
Eric Rahm: Bug 1140537 - Sanity check size calculations. r=peterv, a=abillings - a7d6b32a504c
Hiroyuki Ikezoe: Bug 1157985 - Use getEntriesByName to search by name attribute. r=qdot, a=test-only - 55b58d5184ce
Morris Tseng: Bug 1120592 - Create iframe directly instead of using setTimeout. r=kanru, a=test-only - a4f506639153
Gregory Szorc: Bug 1128586 - Properly look for Mercurial version. r=RyanVM, a=NPOTB - 49abfe1a8ef8
Gregory Szorc: Bug 1128586 - Prefer hg.exe over hg. r=RyanVM, a=NPOTB - a0b48af4bb54
Shane Tomlinson: Bug 1146724 - Use a SendingContext for WebChannels. r=MattN, r=markh, a=abillings - 56d740d0769f
Brian Hackett: Bug 1138740 - Notify Ion when changing a typed array's data pointer due to making a lazy buffer for it. r=sfink, a=sledru - e1fb2a5ab48d
Seth Fowler: Bug 1151309 - Part 1: Block until the previous multipart frame is decoded before processing another. r=tn, a=sledru - 046c97d2eb23
Seth Fowler: Bug 1151309 - Part 2: Hide errors in multipart image parts both visually and internally. r=tn, a=sledru - 0fcbbecc843d
Alessio Placitelli: Bug 1154518 - Make sure extended data gathering (Telemetry) is disabled when FHR is disabled. r=Gijs, a=sledru - cb2725c612b2
Bas Schouten: Bug 1151821 - Make globalCompositeOperator work correctly when a complex clip is pushed. r=jrmuizel, a=sledru - 987c18b686eb
Bas Schouten: Bug 1151821 - Test whether simple canvas globalCompositeOperators work when a clip is set. r=jrmuizel, a=sledru - 1bbb50c6a494
Bob Owen: Bug 1087565 - Verify the child process with a secret hello on Windows. r=dvander, a=sledru - c1f04200ed98
Randell Jesup: Bug 1157766 - Mismatched DataChannel initial channel size in JSEP database breaks adding channels. r=bwc, a=sledru - a8fb9422ff13
David Major: Bug 1130061 - Block version 1.5 of vwcsource.ax. r=bsmedberg, a=sledru - 053da808c6d9
Martin Thomson: Bug 1158343 - Temporarily enable TLS_RSA_WITH_AES_128_CBC_SHA for WebRTC. r=ekr, a=sledru - d10817faa571
Margaret Leibovic: Bug 1155083 - Properly hide reader view tablet on landscape tablets. r=bnicholson, a=sledru - f7170ad49667
Steve Fink: Bug 1136309 - Rename the spidermonkey build variants. r=terrence, a=test-only - 604326355be0
Mike Hommey: Bug 1142908 - Avoid arm simulator builds being considered cross-compiled. r=sfink, a=test-only - 517741a918b0
Jan de Mooij: Bug 1146520 - Fix some minor autospider issues on OS X. r=sfink, a=test-only - 620cae899342
Steve Fink: Bug 1146520 - Do not treat osx arm-sim as a cross-compile. a=test-only - a5013ed3d1f0
Steve Fink: Bug 1135399 - Timeout shell builds. r=catlee, a=test-only - b6bf89c748b7
Steve Fink: Bug 1150347 - Fix autospider.sh --dep flag name. r=philor, a=test-only - b8f7eabd31b9
Steve Fink: Bug 1149476 - Lengthen timeout because we are hitting it with SM(cgc). r=me (also jonco for a more complex version), a=test-only - 16c98999de0b
Chris Pearce: Bug 1136360 - Backout 3920b67e97a3 to fix A/V sync regressions (Bug 1148299 & Bug 1157886). r=backout a=sledru - 4ea8cdc621e8
Patrick Brosset: Bug 1153463 - Intermittent browser_animation_setting_currentTime_works_and_pauses.js. r=miker, a=test-only - c31c2a198a71
Andrew McCreight: Bug 1062479 - Use static strings for WeakReference type names. r=ehsan, a=sledru - 5d903629f9bd
Michael Comella: Bug 1152314 - Duplicate action bar configuration in code. r=liuche, a=sledru - cdfd06d73d17
Ethan Hugg: Bug 1158627 - WebRTC return error if GetEmptyFrame returns null. r=jesup, a=sledru - f1cd36f7e0e1
Jeff Muizelaar: Bug 1154703 - Avoid using WARP if nvdxgiwrapper.dll is around. a=sledru - 348c2ae68d50
Shu-yu Guo: Bug 1155474 - Consider the input to MThrowUninitializedLexical implicitly used. r=Waldo, a=sledru - daaa2c27b89f
Jean-Yves Avenard: Bug 1149605 - Avoid potential integers overflow. r=kentuckyfriedtakahe, a=abillings - fcfec0caa7be
Ryan VanderMeulen: Backed out changeset daaa2c27b89f (Bug 1155474) for bustage. - 0a1accb16d39
Shu-yu Guo: Bug 1155474 - Consider the input to MThrowUninitializedLexical implicitly used. r=Waldo, a=sledru - ff65ba4cd38a

Gen Kanai: 10 years of Mozilla in Asia

Today, after 10 years of building Mozilla’s presence in Asia, I leave Mozilla as full-time staff. Our family is moving to Shanghai for new opportunities and I am leaving to organize our move and enjoy the summer. My last day in the office was April 27th.

I have too many people to thank for their support over these many years at Mozilla.  I won’t be able to thank everyone, but I do want to specifically thank Mitchell Baker and Joi Ito, who introduced me to John Lilly, Chris Beard, and Paul Kim back in January of 2006.  Without Joi and John, I would not have joined Mozilla; thank you both.

Thank you to all of my colleagues at Mozilla Japan, especially Satoko Takita, who was very supportive since the beginning. Takita-san, I cannot thank you enough. Thanks to all of my colleagues who made my Mozilla journey as important as the destination: the Evangelism team under Mike Shaver, the Evangelism team under Chris Blizzard, the Contributor Engagement team under Mary Colvig, and the Participation team under Brian King.

There are too many community members for me to thank, so I will just thank you all for your commitment to Mozilla. It is your dedication to the mission of Mozilla which keeps the project alive and moving forward.

While I will be leaving full-time employment at Mozilla, I’ll be continuing my mentor position at 500 Startups. I’m also excited to announce that I am now mentoring at Chinaccelerator, a program by SOS Ventures. If you or your friends are interested in either program, or you are visiting Shanghai, please don’t hesitate to get in touch: http://kanai.net/weblog/

(Note that if I am in China, it may take longer to reach me via methods that are blocked by the Great Firewall. Email is usually best: gen at kanai dot net)

I’ll close with a selection of photos from my time at Mozilla.

John Lilly & Mike Schroepfer in Tokyo, March 2006; (I see Nakano-san on the right side there.)


Mozillagumi welcomes Scott MacGregor to Japan, June 2006

Mozillagumi welcomes Scott MacGregor to Tokyo

Mitchell & Takita-san interviewed by the Japanese media, Sept. 2006

Mitchell & Chibi interview 3

An interpreter, John Lilly and Joi Ito at the Firefox 2.0 Japan press event, October 2006

Firefox 2 Japan press event 2

John Lilly & Chris Beard in Tiananmen Square, January 2007

John Lilly & Chris Beard in Tian'anmen Square

The Mozilla organization in China that existed before Mozilla China.


Seth Spitzer & Seth Bindernagel meet their match with “medium” spicy beef noodle soup in Taipei, April 2007.

spicy beef noodle soup

Mozilla Japan hosted the Firefox Developers Conference, Summer 2007 (with guest speakers including Shaver, mfinkle, fligtar and dmills)


Chris Beard and Kaori Negoro, Feb. 2008


Mozilla Japan, Firefox 3.5 release photo

3 and 5, for Firefox 3.5!

Mitchell speaking at the OECD Ministerial Meeting on the Future of the Internet Economy, Seoul, June 2008.


Mitchell and I visited with the Mozilla Korea community on the release of Firefox 3.0, June 2008

Mozilla Korea, Firefox 3.0 launch 9831

Mitchell was interviewed on CNN from their Seoul bureau, June 2008

Mitchell Baker at CNN, Seoul, Korea

Mozilla l10n teams, Mozilla Summit 2008

Mozilla l10n-0109

meeting with HanoiLUG, Dec. 2009


speaking at Barcamp Saigon, Dec. 2009


Mozilla Indonesia gathering, May 2010


Firefox 4 launch, Bandung, May 2010


Firefox 4 launch, Manila, May 2010

Mozilla Summit 2010

"Shaver will form the head"

Mozilla Summit 2010 - 2

Mitchell Baker and I in Jakarta, Sept 2010

some of the Mozilla Indonesia community

Mozilla Thailand community, Oct. 2010 (with William & Dietrich)


PestaBlogger, Jakarta, Oct. 2010




Mozilla community managers, Sept. 2012

community managers at Mozilla

MozCamp Asia 2012 closing dance

Mozilla Bangladesh meeting, Dec. 2012


Mozilla Summit 2013


Nick Cameron: rustfmt - call for contributions

I've been experimenting with a rustfmt tool for a while now. It's finally in working shape (though still very, very rough) and I'd love some help on making it awesome.

rustfmt is a reformatting tool for Rust code. The idea is that it takes your code, tidies it up, and makes sure it conforms to a set of style guidelines. There are similar tools for C++ (clang-format), Go (gofmt), and many other languages. It's a really useful tool to have for a language, since it makes it easy to adhere to style guidelines and allows for mass changes when guidelines change, thus making it possible to actually change the guidelines as needed.

Eventually I would like rustfmt to do lots of cool stuff like changing glob imports to list imports, or emit refactoring scripts to rename variables to adhere to naming conventions. In the meantime, there are lots of interesting questions about how to lay out things like function declarations and match expressions.

My approach to rustfmt is very incremental. It is usable now and gives good results, but it only touches a tiny subset of language items, for example function definitions and calls, and string literals. It preserves code elsewhere. This makes it immediately useful.

I have managed to run it on several crates (or parts of crates) in the rust distro. It also bootstraps, i.e., you can run rustfmt on rustfmt before every check-in; in fact, this is part of the test suite.

It would be really useful to have people running this tool on their own code or on other crates in the rust distro, and filing issues and/or test cases where things go wrong. This should actually be a useful tool to run, not just a chore, and will get more useful with time.

It's a great project to hack on - you'll learn a fair bit about the Rust compiler's frontend and get a great understanding of more corners of the language than you'll ever want to know about. It's early days too, so there is plenty of scope for having a big impact on the project. I find it a lot of fun too! Just please forgive some of the hacky code that I've already written.

Here is the rustfmt repo on GitHub. I just added a bunch of information to the repo readme which should help new contributors. Please let me know if there is other information that should go in there. I've also created some good issues for new contributors. If you'd like to help out and need help, please ping me on irc (I'm nrc).

Anthony Hughes: The Testday Brand

Over the last few months I’ve been surveying people who’ve participated in testdays. The purpose of this effort is to develop an understanding of the current “brand” that testdays present. I’ve come to realize that our goal to “re-invigorate” the testdays program was based on assumptions that testdays were both well-known and misunderstood. I wanted to cast aside that assumption and make gains on a new plan which includes developing a positive brand.

The survey itself was quite successful as I received over 200 responses, 10x what I normally get out of my surveys. I suspect this was because I kept it short, under a minute to complete; something I will keep in mind for the future.

Who Shared the Most?


When looking at who responded most, the majority were unaffiliated with QA (53%).  Of the 47% who were affiliated with QA, nearly two thirds were volunteers.

How do these see themselves?


When looking at how respondents self-identified, only people who identified as volunteers did not self-identify as a Mozillian. In terms of vouching, people affiliated with QA seem to have a higher proportion of vouched Mozillians than those unaffiliated with QA. This tells me that we need to be doing a better job of converting new contributors into Mozillians and active into vouched Mozillians.

What do they know about Testdays?


When looking at how familiar people are with the concept of testdays, people affiliated with QA are most aware while people outside of QA are least aware.  No group of people is 100% familiar with testdays, which tells me we need to do a better job of educating people about testdays.

What do they think about Testdays?


Most respondents identified testdays with some sort of activity (30%), a negative feeling (22%), a community aspect (15%), or a specific product (15%). Positive characteristics were lowest on the list (4%). This was probably the most telling question I asked as it really helps me see the current state of the brand of testdays and not just for the responses I received. Reading between the lines, looking for what is not there, I can see testdays relate poorly to anything outside the scope of blackbox testing on Firefox (eg. automation, services, web qa, security qa, etc).

Where do I go from here?

1. We need to diversify the testday brand to be more about testing Firefox and expand it to enable testing across all areas in need.

2. We need to solve some of the negative brand associations by making activities more understandable and relevant, by having shorter events more frequently, and by rewarding contributions (even those who do work that doesn’t net a bug).

3. We need to teach people that testdays are about more than just testing. Things like writing tests, writing new documentation, updating and translating existing documentation, and mentoring newcomers are all part of what testdays can enable.

4. Once we’ve identified the brand we want to put forward, we need to do a much better job of frequently educating and re-educating people about testdays and the value they provide.

5. We need to enable testdays to facilitate converting newcomers into Mozillians and active contributors into vouched Mozillians.

My immediate next step is to have the lessons I’ve learned here integrated into a plan of action to rebrand testdays. Rest assured I am going to continue to push my peers on this, to be an advocate for improving the ways we collaborate, and to continually revisit the brand to make sure we aren’t losing sight of reality.

I’d like to end with a thank you to everyone who took the time to respond to my survey. As always, please leave a comment below if you have any interesting insights or questions.

Thank you!

Air MozillaQuality Team (QA) Public Meeting

Quality Team (QA) Public Meeting This is the meeting where all the Mozilla quality teams meet, swap ideas, exchange notes on what is upcoming, and strategize around community building and...

Niko MatsakisOn reference-counting and leaks

What’s a 1.0 release without a little drama? Recently, we discovered that there was an oversight in one of the standard library APIs that we had intended to stabilize. In particular, we recently added an API for scoped threads – that is, child threads which have access to the stack frame of their parent thread.

The flaw came about because, when designing the scoped threads API, we failed to consider the impact of resource leaks. Rust’s ownership model makes it somewhat hard to leak data, but not impossible. In particular, using reference-counted data, you can construct a cycle in the heap, in which case the components of that cycle may never be freed.

Some commenters online have taken this problem with the scoped threads API to mean that Rust’s type system was fundamentally flawed. This is not the case: Rust’s guarantee that safe code is memory safe is as true as it ever was. The problem was really specific to the scoped threads API, which was making flawed assumptions; this API has been marked unstable, and there is an RFC proposing a safe alternative.

That said, there is an interesting, more fundamental question at play here. We long ago decided that, to make reference-counting practical, we had to accept resource leaks as a possibility. But some recent proposals have suggested that we should place limits on the Rc type to avoid some kinds of reference leaks. These limits would make the original scoped threads API safe. However, these changes come at a pretty steep price in composability: they effectively force a deep distinction between “leakable” and “non-leakable” data, which winds up affecting all levels of the system.

This post is my attempt to digest the situation and lay out my current thinking. For those of you who don’t want to read this entire post (and I can’t blame you, it’s long), let me just copy the most salient paragraph from my conclusion:

This is certainly a subtle issue, and one where reasonable folk can disagree. In the process of drafting (and redrafting…) this post, my own opinion has shifted back and forth as well. But ultimately I have landed where I started: the danger and pain of bifurcating the space of types far outweighs the loss of this particular RAII idiom.

All right, for those of you who want to continue, this post is divided into three sections:

  1. Section 1 explains the problem and gives some historical background.
  2. Section 2 explains the “status quo”.
  3. Section 3 covers the proposed changes to the reference-counted type and discusses the tradeoffs involved there.

Section 1. The problem in a nutshell

Let me start by summarizing the problem that was uncovered in more detail. The root of the problem is an interaction between the reference-counting and threading APIs in the standard library. So let’s look at each in turn. If you’re familiar with the problem, you can skip ahead to section 2.

Reference-counting as the poor man’s GC

Rust’s standard library includes the Rc and Arc types which are used for reference-counted data. These are widely used, because they are the most convenient way to create data whose ownership is shared amongst many references rather than being tied to a particular stack frame.

Like all reference-counting systems, Rc and Arc are vulnerable to reference-count cycles. That is, if you create a reference-counted box that contains a reference to itself, then it will never be collected. To put it another way, Rust gives you a lot of safety guarantees, but it doesn’t protect you from memory leaks (or deadlocks, which turns out to be a very similar problem).

The fact that we don’t protect against leaks is not an accident. This was a deliberate design decision that we made while transitioning from garbage-collected types (@T and @mut T) to user-defined reference counting. The reason is that preventing leaks requires either a runtime with a cycle collector or complex type-system tricks. The option of a mandatory runtime was out, and the type-system tricks we explored were either too restrictive or too complex. So we decided to make a pragmatic compromise: to document the possibility of leaks (see, for example, this section of the Rust reference manual) and move on.

In practice, the possibility of leaks is mostly an interesting technical caveat: I’ve not found it to be a big issue in practice. Perhaps because problems arose so rarely in practice, some things—like leaks—that should not have been forgotten were… partially forgotten. History became legend. Legend became myth. And for a few years, the question of leaks seemed to be a distant, settled issue, without much relevance to daily life.

Thread and shared scopes

With that background on Rc in place, let’s turn to threads. Traditionally, Rust threads were founded on a “zero-sharing” principle, much like Erlang. However, as Rust’s type system evolved, we realized we could do much better: the same type system rules that ensured memory safety in sequential code could be used to permit sharing in parallel code as well, particularly once we adopted RFC 458 (a brilliant insight by pythonesque).

The basic idea is to start a child thread that is tied to a particular scope in the code. We want to guarantee that before we exit that scope, the thread will be joined. If we can do this, then we can safely permit that child thread access to stack-allocated data, so long as that data outlives the scope; this is safe because Rust’s type-system rules already ensure that any data shared between multiple threads must be immutable (more or less, anyway).

So the question then is how we can designate the scope of the child threads, and how we can ensure that the children will be joined when that scope exits. The original proposal was based on closures, but in the time since it was written, the language has shifted to using more RAII, and hence the scoped API is based on RAII. The idea is pretty simple. You write a call like the following:

fn foo(data: &[i32]) {
  let guard = thread::scoped(|| /* body of the child thread */);
  // ...
}

The scoped function takes a closure which will be the body of the child thread. It returns to you a guard value: running the destructor of this guard will cause the thread to be joined. This guard is always tied to a particular scope in the code. Let’s call the scope 'a. The closure is then permitted access to all data that outlives 'a. For example, in the code snippet above, 'a might be the body of the function foo. This means that the closure could safely access the input data, because that must outlive the fn body. The type system ensures that no reference to the guard exists outside of 'a, and hence we can be sure that guard will go out of scope sometime before the end of 'a and thus trigger the thread to be joined. At least that was the idea.

The conflict

By now perhaps you have seen the problem. The scoped API is only safe if we can guarantee that the guard’s destructor runs, so that the thread will be joined; but, using Rc, we can leak values, which means that their destructors never run. So, by combining Rc and scoped, we can cause a thread to be launched that will never be joined. This means that this thread could run at any time and try to access data from its parent’s stack frame – even if that parent has already completed, and thus the stack frame is garbage. Not good!

So where does the fault lie? From the point of view of history, it is pretty clear: the scoped API was ill designed, given that Rc already existed. As I wrote, we had long ago decided that the most practical option was to accept that leaks could occur. This implies that if the memory safety of an API depends on a destructor running, you can’t relinquish ownership of the value that carries that destructor (because the end-user might leak it).
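
The design rule generalizes beyond `Rc`: in the Rust that ultimately shipped, `std::mem::forget` was even declared safe as part of resolving this episode, which makes the rule explicit – safe code can always suppress a destructor, so memory safety must never depend on one running. A minimal sketch with an illustrative `Guard` type of my own:

```rust
use std::mem;
use std::sync::atomic::{AtomicBool, Ordering};

// Records whether the guard's cleanup ever happened.
static CLEANED_UP: AtomicBool = AtomicBool::new(false);

struct Guard;

impl Drop for Guard {
    fn drop(&mut self) {
        CLEANED_UP.store(true, Ordering::SeqCst);
    }
}

fn main() {
    {
        let g = Guard;
        // `mem::forget` is a safe function, so safe code can always skip
        // a destructor; an Rc cycle is just a more roundabout way to do it.
        mem::forget(g);
    }
    // The destructor never ran, so no cleanup happened.
    assert!(!CLEANED_UP.load(Ordering::SeqCst));
}
```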

It is totally possible to fix the scoped API, and in fact there is already an RFC showing how this can be done (I’ll summarize it in section 2, below). However, some people feel that the decision we made to permit leaks was the wrong one, and that we ought to have some limits on the RC API to prevent leaks, or at least prevent some leaks. I’ll dig into those proposals in section 3.

Section 2. What is the impact of leaks on the status quo?

So, if we continue with the status quo, and accept that resource leaks can occur with Rc and Arc, what is the impact of that? At first glance, it might seem that the possibility of resource leaks is a huge blow to RAII. After all, if you can’t be sure that the destructor will run, how can you rely on the destructor to do cleanup? But when you look closer, it turns out that the problem is a lot more narrow.

“Average Rust User”

I think it’s helpful to come at this problem from two different perspectives. The first is: what do resource leaks mean for the average Rust user? I think the right way to look at this is that the user of the Rc API has an obligation to avoid cycle leaks or break cycles. Failing to do so will lead to bugs – these could be resource leaks, deadlocks, or other things. But leaks cannot lead to memory unsafety. (Barring invalid unsafe code, of course.)

It’s worth pointing out that even if you are using Rc, you don’t have to worry about memory leaks due to forgetting to decrement a reference or anything like that. The problem really boils down to ensuring that you have a clear strategy for avoiding cycles, which usually boils down to an “ownership DAG” of strong references (though in some cases, breaking cycles explicitly may also be an option).

“Author of unsafe code”

The other perspective to consider is the person who is writing unsafe code. Unsafe code frequently relies on destructors to do cleanup. I think the right perspective here is to view a destructor as akin to any other user-facing function: in particular, it is the user’s responsibility to call it, and they may accidentally fail to do so. Just as you have to write your API to be defensive about users invoking functions in the wrong order, you must be defensive about them failing to invoke destructors due to a resource leak.

It turns out that the majority of RAII idioms are actually perfectly memory safe even if the destructors don’t run. For example, if we examine the Rust standard library, it turns out that all of the destructors therein are either optional or can be made optional:

  1. Straight-forward destructors like Box or Vec leak memory if they are not freed; clearly no worse than the original leak.
  2. Leaking a mutex guard means that the mutex will never be released. This is likely to cause deadlock, but not memory unsafety.
  3. Leaking a RefCell guard means that the RefCell will remain in a borrowed state. This is likely to cause thread panic, but not memory unsafety.
  4. Even fancy iterator APIs like drain, which was initially thought to be problematic, can be implemented in such a way that they cause leaks to occur if they are leaked, but not memory unsafety.

In all of these cases, there is a guard value that mediates access to some underlying value. The type system already guarantees that the original value cannot be accessed while the guard is in scope. But how can we ensure safety outside of that scope in the case where the guard is leaked? If you look at the cases above, I think they can be grouped into two patterns:

  1. Ownership: Things like Box and Vec simply own the values they are protecting. This means that if they are leaked, those values are also leaked, and hence there is no way for the user to access it.
  2. Pre-poisoning: Other guards, like MutexGuard, put the value they are protecting into a poisoned state that will lead to dynamic errors (but not memory unsafety) if the value is accessed without having run the destructor. In the case of MutexGuard, the “poisoned” state is that the mutex is locked, which means a later attempt to lock it will simply deadlock unless the MutexGuard has been dropped.

What makes scoped threads different?

So if most RAII patterns continue to work fine, what makes scoped different? I think there is a fundamental difference between scoped and these other APIs; this difference was well articulated by Kevin Ballard:

thread::scoped is special because it’s using the RAII guard as a proxy to represent values on the stack, but this proxy is not actually used to access those values.

If you recall, I mentioned above that all the guards serve to mediate access to some value. In the case of scoped, the guard is mediating access to the result of a computation – the data that is being protected is “everything that the closure may touch”. The guard, in other words, doesn’t really know the specific set of affected data, and it thus cannot hope to either own or pre-poison the data.

In fact, I would take this a step farther, and say that I think that in this kind of scenario, where the guard doesn’t have a connection to the data being protected, RAII tends to be a poor fit. This is because, generally, the guard doesn’t have to be used, so it’s easy for the user to accidentally drop the guard on the floor, causing the side-effects of the guard (in this case, joining the thread) to occur too early. I’ll spell this out a bit more in the section below.

Put more generally, accepting resource leaks does mean that there is a Rust idiom that does not work. In particular, it is not possible to create a borrowed reference that can be guaranteed to execute arbitrary code just before it goes out of scope. What we’ve seen though is that, frequently, it is not necessary to guarantee that the code will execute – but in the case of scoped, because there is no direct connection to the data being protected, joining the thread is the only solution.

Using closures to guarantee code execution when exiting a scope

If we can’t use an RAII-based API to ensure that a thread is joined, what can we do? It turns out that there is a good alternative, laid out in RFC 1084. The basic idea is to restructure the API so that you create a “thread scope” and spawn threads into that scope (in fact, the RFC lays out a more general version that can be used not only for threads but for any bit of code that needs guaranteed execution on exit from a scope). This thread scope is delineated using a closure. In practical terms, this means that starting a scoped thread looks something like this:

fn foo(data: &[i32]) {
  thread::scope(|scope| {
    let future = scope.spawn(|| /* body of the child thread */);
    // ...
  });
}

As you can see, whereas before calling thread::scoped started a new thread immediately, it now just creates a thread scope – it doesn’t itself start any threads. A borrowed reference to the thread scope is passed to a closure (here it is the argument scope). The thread scope offers a method spawn that can be used to start a new thread tied to a specific scope. This thread will be joined when the closure returns; as such, it has access to any data that outlives the body of the closure. Note that the spawn method still returns a future to the result of the spawned thread; this future is similar to the old join guard, because it can be used to join the thread early. But this future doesn’t have a destructor. If the thread is not joined through the future, it will still be automatically joined when the closure returns.

In the case of this particular API, I think closures are a better fit than RAII. In particular, the closure serves to make the scope where the threads are active clear and explicit; this in turn avoids certain footguns that were possible with the older, RAII-based API. To see an example of what I mean, consider this code that uses the old API to do a parallel quicksort:

fn quicksort(data: &mut [i32]) {
  if data.len() <= 1 { return; }
  let pivot = data.len() / 2;
  let index = partition(data, pivot);
  let (left, right) = data.split_at_mut(index);
  let _guard1 = thread::scoped(|| quicksort(left));
  let _guard2 = thread::scoped(|| quicksort(right));
}

I want to draw attention to one snippet of code at the end:

  let (left, right) = data.split_at_mut(index);
  let _guard1 = thread::scoped(|| quicksort(left));
  let _guard2 = thread::scoped(|| quicksort(right));

Notice that we have to make dummy variables like _guard1 and _guard2. If we left those variables off, then the thread would be immediately joined, which means we wouldn’t get any actual parallelism. What’s worse, the code would still work, it would just run sequentially. The need for these dummy variables, and the resulting lack of clarity about just when parallel threads will be joined, is a direct result of using RAII here.

Compare that code above to using a closure-based API:

  thread::scope(|scope| {
    let (left, right) = data.split_at_mut(index);
    scope.spawn(|| quicksort(left));
    scope.spawn(|| quicksort(right));
  });

I think it’s much clearer. Moreover, the closure-based API opens the door to other methods that could be used with scope, like convenience methods to do parallel maps and so forth.

Section 3. Can we prevent (some) resource leaks?

Ok, so in the previous two sections, I summarized the problem and discussed the impact of resource leaks on Rust. But what if we could avoid resource leaks in the first place? There have been two RFCs on this topic: RFC 1085 and RFC 1094.

The two RFCs are quite different in the details, but share a common theme. The idea is not to avoid all resource leaks altogether; I think everyone recognizes that this is not practical. Instead, the goal is to try and divide types into two groups: those that can be safely leaked, and those that cannot. You then limit the Rc and Arc types so that they can only be used with types that can safely be leaked.

This approach seems simple but it has deep ramifications. It means that Rc and Arc are no longer fully general container types. Generic code that wishes to operate on data of all types (meaning both types that can and cannot leak) can’t use Rc or Arc internally, at least not without some hard choices.

Rust already has a lot of precedent for categorizing types. For example, we use a trait Send to designate “types that can safely be transferred to other threads”. In some sense, dividing types into leak-safe and not-leak-safe is analogous. But my experience has been that every time we draw a fundamental distinction like that, it carries a high price. This distinction “bubbles up” through APIs and affects decisions at all levels. In fact, we’ve been talking about one case of this rippling effect through this post – the fact that we have two reference-counting types, one atomic (Arc) and one not (Rc), is precisely because we want to distinguish thread-safe and non-thread-safe operations, so that we can get better performance when thread safety is not needed.

What this says to me is that we should be very careful when introducing blanket type distinctions. The places where we use this mechanism today – thread-safety, copyability – are fundamental to the language, and very important concepts, and I think they carry their weight. Ultimately, I don’t think resource leaks quite fit the bill. But let me dive into the RFCs in question and try to explain why.

RFC 1085 – the Leak trait

The first of the two RFCs is RFC 1085. This RFC introduces a trait called Leak, which operates exactly like the existing Send trait. It indicates “leak-safe” data. Like Send, it is implemented by default. If you wish to make leaks impossible for a type, you can explicitly opt out with a negative impl like impl !Leak for MyType. When you create a Rc<T> or Arc<T>, either T: Leak must hold, or else you must use an unsafe constructor to certify that you will not create a reference cycle.

The fact that Leak is automatically implemented promises to make it mostly invisible. Indeed, in the prototype that Jonathan Reem implemented, he found relatively little fallout in the standard library and compiler. While encouraging, I still think we’re going to encounter problems of composability over time.

There are a couple of scenarios where the Leak trait will, well, leak into APIs where it doesn’t seem to belong. One of the most obvious is trait objects. Imagine I am writing a serialization library, and I have a Serializer type that combines an output stream (a Box<Writer>) along with some serialization state:

struct Serializer {
  output_stream: Box<Writer>,
  serialization_state: u32,
}

So far so good. Now someone else comes along and would like to use my library. They want to put this Serializer into a reference counted box that is shared amongst many users, so they try to make a Rc<Serializer>. Unfortunately, this won’t work. This seems somewhat surprising, since weren’t all types supposed to be Leak by default?

The problem lies in the Box<Writer> object – an object is designed to hide the precise type of Writer that we are working with. That means that we don’t know whether this particular Writer implements Leak or not. For this client to be able to place Serializer into an Rc, there are two choices. The client can use unsafe code, or I, the library author, can modify my Serializer definition as follows:

struct Serializer {
  output_stream: Box<Writer+Leak>,
  serialization_state: u32,
}

This is what I mean by Leak “bubbling up”. It’s already the case that I, as a library author, want to think about whether my types can be used across threads and try to enable that. Under this proposal, I also have to think about whether my types should be usable in Rc, and so forth.

Now, if you avoid trait objects, the problem is smaller. One advantage of generics is that they don’t encapsulate what type of writer you are using and so forth, which means that the compiler can analyze the type to see whether it is thread-safe or leak-safe or whatever. Until now we’ve found that many libraries avoid trait objects partly for this reason, and I think that’s good practice in simple cases. But as things scale up, encapsulation is a really useful mechanism for simplifying type annotations and making programs concise and easy to work with.

There is one other point. RFC 1085 also includes an unsafe constructor for Rc, which in principle allows you to continue using Rc with any type, so long as you are in a position to assert that no cycles exist. But I feel like this puts the burden of unsafety into the wrong place. I think you should be able to construct reference-counted boxes, and truly generic abstractions built on reference-counted boxes, without writing unsafe code.

My allergic reaction to requiring unsafe to create Rc boxes stems from a very practical concern: if we push the boundaries of unsafety too far out, such that it is common to use an unsafe keyword here and there, we vastly weaken the safety guarantees of Rust in practice. I’d rather that we increase the power of safe APIs at the cost of more restrictions on unsafe code. Obviously, there is a tradeoff in the other direction, because if the requirements on unsafe code become too subtle, people are bound to make mistakes there too, but my feeling is that requiring people to consider leaks doesn’t cross that line yet.

RFC 1094 – avoiding reference leaks

RFC 1094 takes a different tack. Rather than dividing types arbitrarily into leak-safe and not-leak-safe, it uses an existing distinction, and says that any type which is associated with a scope cannot leak.

The goal of RFC 1094 is to enable a particular “mental model” about what lifetimes mean. Specifically, the RFC aims to ensure that if a value is limited to a particular scope 'a, then the value will be destroyed before the program exits the scope 'a. This is very similar to what Rust currently guarantees, but stronger: in current Rust, there is no guarantee that your value will be destroyed, there is only a guarantee that it will not be accessed outside that scope. Concretely, if you leak an Rc into the heap today, that Rc may contain borrowed references, and those references could be invalid – but it doesn’t matter, because Rust guarantees that you could never use them.

In order to guarantee that borrowed data is never leaked, RFC 1094 requires that to construct a Rc<T> (or Arc<T>), the condition T: 'static must hold. In other words, the payload of a reference-counted box cannot contain borrowed data. This by itself is very limiting: lots of code, including the rust compiler, puts borrowed pointers into reference-counted structures. To help with this, the RFC includes a second type of reference-counted box, ScopedRc. To use a ScopedRc, you must first create a reference-counting scope s. You can then create new ScopedRc instances associated with s. These ScopedRc instances carry their own reference count, and so they will be freed normally as soon as that count drops to zero. But if they should get placed into a cycle, then when the scope s is dropped, it will go along and “cycle collect”, meaning that it runs the destructor for any ScopedRc instances that haven’t already been freed. (Interestingly, this is very similar to the closure-based scoped thread API, but instead of joining threads, exiting the scope reaps cycles.)

I originally found this RFC appealing. It felt to me that it avoided adding a new distinction (Leak) to the type system and instead piggybacked on an existing one (borrowed vs non-borrowed). This seems to help with some of my concerns about “ripple effects” on users.

However, even though it piggybacks on an existing distinction (borrowed vs static), the RFC now gives that distinction additional semantics it didn’t have before. Today, those two categories can be considered on a single continuum: for all types, there is some bounding scope (which may be 'static), and the compiler ensures that all accesses to that data occur within that scope. Under RFC 1094, there is a discontinuity. Data which is bounded by 'static is different, because it may leak.

This discontinuity is precisely why we have to split the type Rc into two types, Rc and ScopedRc. In fact, the RFC doesn’t really mention Arc much, but presumably there will have to be both ScopedRc and ScopedArc types. So now where we had only two types, we have four, to account for this new axis:

|                 || Static | Borrowed  |
| Thread-safe     || Arc    | ScopedArc |
| Not-thread-safe || Rc     | ScopedRc  |

And, in fact, the distinction doesn’t end here. There are abstractions, such as channels, that are built on Arc. So this means that this same categorization will bubble up through those abstractions, and we will (presumably) wind up with Channel and ChannelScoped (otherwise, channels cannot be used to send borrowed data to scoped threads, which is a severe limitation).

Section 4. Conclusion.

This concludes my deep dive into the question of resource leaks. It seems to me that the tradeoffs here are not simple. The status quo, where resource leaks are permitted, helps to ensure composability by allowing Rc and Arc to be used uniformly on all types. I think this is very important as these types are vital building blocks.

On a historical note, I am particularly sensitive to concerns of composability. Early versions of Rust, and in particular the borrow checker before we adopted the current semantics, were rife with composability problems. This made writing code very annoying – you were frequently refactoring APIs in small ways to account for this.

However, this composability does come at the cost of a useful RAII pattern. Without leaks, you’d be able to use RAII to build references that reliably execute code when they are dropped, which in turn allows RAII-like techniques to be used more uniformly across all safe APIs.

This is certainly a subtle issue, and one where reasonable folk can disagree. In the process of drafting (and redrafting…) this post, my own opinion has shifted back and forth as well. But ultimately I have landed where I started: the danger and pain of bifurcating the space of types far outweighs the loss of this particular RAII idiom.

Here are the two most salient points to me:

  1. The vast majority of RAII-based APIs are either safe or can be made safe with small changes. The remainder can be expressed with closures.
    • With regard to RAII, the scoped threads API represents something of a “worst case” scenario, since the guard object is completely divorced from the data that the thread will access.
    • In cases like this, where there is often no need to retain the guard, but dropping it has important side-effects, RAII can be a footgun and hence is arguably a poor fit anyhow.
  2. The cost of introducing a new fundamental distinction (“leak-safe” vs “non-leak-safe”) into our type system is very high and will be felt up and down the stack. This cannot be completely hidden or abstracted away.
    • This is similar to thread safety, but leak-safety is far less fundamental.

Bottom line: the cure is worse than the disease.

Doug BelshawGuiding Students as They Explore, Build, and Connect Online

Ian O'Byrne, Greg McVerry and I have just had an article published in the Journal of Adolescent & Adult Literacy (JAAL). Entitled Guiding Students as They Explore, Build, and Connect Online, it’s an attempt to situate and explain the importance of the Web Literacy Map work.


I’d have preferred it be published in an open access journal, but there was a window of opportunity that we decided to take advantage of. Happily, you can access the pre-publication version via Ian’s blog here.


Cite this article:

McVerry, J.G., Belshaw, D. & Ian O'Byrne, W. (2015). Guiding Students as They Explore, Build, and Connect Online. Journal of Adolescent & Adult Literacy, 58(8), 632–635. doi: 10.1002/jaal.411

Air MozillaProduct Coordination Meeting

Product Coordination Meeting Duration: 10 minutes This is a weekly status meeting, every Wednesday, that helps coordinate the shipping of our products (across 4 release channels) in order...

Mozilla Addons BlogAdd-ons Update – Week of 2015/04/29

I post these updates every 3 weeks to inform add-on developers about the status of the review queues, add-on compatibility, and other happenings in the add-ons world.

The Review Queues

  • Most nominations for full review are taking less than 9 weeks to review.
  • 126 nominations in the queue awaiting review.
  • Most updates are being reviewed within 6 weeks.
  • 48 updates in the queue awaiting review.
  • Most preliminary reviews are being reviewed within 9 weeks.
  • 127 preliminary review submissions in the queue awaiting review.

If you’re an add-on developer and would like to see add-ons reviewed faster, please consider joining us. Add-on reviewers get invited to Mozilla events and earn cool gear with their work. Visit our wiki page for more information.

Firefox 38 Compatibility

The Firefox 38 compatibility blog post is up. The automatic AMO validation will be run soon.

Expect a compatibility post soon about a special 38.0.5 Firefox release that will follow 38.0. I don’t think this will break any extensions, but it might affect themes and it’s a heads up so you can test on beta in advance of the release.

Firefox 39 Compatibility

I expect to publish the Firefox 39 compatibility blog post this week.

As always, we recommend that you test your add-ons on Beta and Firefox Developer Edition (formerly known as Aurora) to make sure that they continue to work correctly. End users can install the Add-on Compatibility Reporter to identify and report any add-ons that aren’t working anymore.

Extension Signing

We announced that we will require extensions to be signed in order for them to continue to work in release and beta versions of Firefox. If you’re an extension developer, please read the post and participate in the discussions. A followup post was published recently, addressing some of the reasons behind this initiative.


Electrolysis, also known as e10s, is the next major compatibility change coming to Firefox. In a nutshell, Firefox will now run in multiple processes, with each content tab running in a separate one. This should improve responsiveness and overall stability, but it also means many add-ons will need to be updated to support this.

We will be talking more about these changes in this blog in the future. For now we recommend you start looking at the available documentation.

Eitan IsaacsoneSpeak Web Speech API Addon

Now that eSpeak runs pretty well in JS, it is time for a Web Speech API extension!

What is the Web Speech API? It gives any website access to speech synthesis (and recognition) functionality; Chrome and Safari already have this built in. This extension adds speech synthesis support to Firefox, and adds eSpeak voices.
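
The synthesis side of the API surface is small. A minimal sketch (hedged with a feature check, since `speechSynthesis` only exists in environments that implement the spec):

```javascript
// Speak a string via the Web Speech API if the environment supports it.
// Returns true if speech was requested, false if the API is unavailable.
function speakIfSupported(text) {
  if (typeof speechSynthesis === "undefined" ||
      typeof SpeechSynthesisUtterance === "undefined") {
    return false; // no Web Speech API here (e.g. Node, older browsers)
  }
  var utterance = new SpeechSynthesisUtterance(text);
  speechSynthesis.speak(utterance);
  return true;
}
```

With the addon installed, a page that feature-detects like this picks up the eSpeak voices transparently.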

For the record, we have had speech synthesis support in Gecko for about 2 years. It was introduced for accessibility needs in Firefox OS; now it is time to make sure it is supported on desktop as well.

Why an extension instead of built-in support? A few reasons:

  1. An addon will provide speech synthesis to Firefox now, while we implement built-in platform-specific solutions for future releases.
  2. An addon will allow us to surface current bugs both in our Speech API implementation and in the spec.
  3. We designed our speech synthesis implementation to be extensible with addons; this is a good proof of concept.
  4. People are passionate about eSpeak. Some people love it, some people don’t.

So now I will shut up, and let eSpeak do the talking:

Mozilla Release Management TeamFirefox 38 beta6 to beta8

This beta cycle was harder than usual. For the 38.0.5 dev cycle, we decided to merge mozilla-beta into mozilla-release. As this is unusual, we encountered several issues:

  • Some l10n updates had to be run (beta2release_l10n.sh). To help with the diagnostics, we reported bug 1158126.
  • The automation tool had an expectation that, coming from mozilla-release, the version would be a release (bug 1158124). We hardcoded a temporary fix.

Because it took some time to fix these issues, we weren't able to publish beta 7 last week. We decided to skip the beta 7 release and start the beta 8 build on Sunday evening. Unfortunately, this didn't go smoothly either:

  • During the merge, we updated some of the configurations. This caused beta 8 build 1 to be built with the release branding. This change was backed out. See bug 1158760 for more information.
  • Last but not least, because of the previous issue, we had to do a second build of 38 beta 8. This caused some CDN issues and it took a while to get them resolved. We also reported a bug to simplify this in the future.

Besides that, these two betas are regular. We disabled reading list and reader view (reader view is going to ship in 38.0.5). We took some fixes for EME and MSE.

Finally, we took some stability fixes.

  • 56 changesets
  • 131 files changed
  • 1488 insertions
  • 1911 deletions



List of changesets:

David KeelerBug 1150114 - Allow PrintableString to match UTF8String in name constraints checking. r=briansmith, a=sledru - a00d0de3202f
Justin DolskeBug 1155191 - Please disable readling list and reader view for 38. r=markh, a=me - 69173cc17556
Kai EngertBug 1156428 - Upgrade Firefox 38 to use NSS 3.18.1, a=dveditz - 8f9c08f19f6a
Patrick BrossetBug 1155172 - Intermittent browser_webconsole_notifications.js. r=past, a=test-only - 52322e98f739
Matt WoodrowBug 1154536 - Disable 4k H264 video for vista since it performs poorly. r=ajones, a=sledru - 650ed1bb5a04
Philipp KewischBug 1153192 - Cannot pass extra arguments to l10n-repack.py. r=gps, a=lmandel - d1e5b60cd47c
Chris PearceBug 1156131 - Use correct DLL on WinVista, 7, and 8 for WMF decoding in gmp-clearkey. r=edwin, a=sledru - e7210d2ce8a9
Chris PearceBug 1156131 - Expand list of WMF DLLs that are whitelisted for use by EME plugins. r=bobowen, a=sledru - 5712fefbace8
Mark HammondBug 1152193 - Ensure sync/readinglist log directory exists. r=rnewman, a=sledru - fc98815acf5f
Ed LeeBug 1156921 - Backout Suggested Tiles (Bug 1120311) from 38.0 [a=sylvestre] - d7ca3b75c842
Ryan VanderMeulenBug 1123563 - Skip test-animated-image-layers.html and test-animated-image-layers-background.html on Android and Linux. a=test-only - 1cd478c3e0b5
Hannes VerschoreBug 1140890 - Make sure the first argument cannot bail in between negative zero removal and creating result in substraction. r=nbp, a=sledru - d55fdde73ac8
Valentin GosuBug 1145812 - Fix assertion with dom.url.encode_decode_hash pref set to true. r=mcmanus, a=sledru - 5f0e381a7afd
Hannes VerschoreBug 1143878 - IonMonkey: Test conversion of MToInt32 for testing congruence. r=jandem, a=sledru - 0b3c5b65610e
Valentin GosuBug 1149913 - Disable Bug 1093611. Set pref dom.url.encode_decode_hash to true. r=honzab, a=sledru - a9be9167d92b
Chris PearceBug 1155432 - Don't flush WMF PDM task queues. r=jya, a=sledru - 0920ace0d8b0
Julian SewardBug 1153173 - Uninitialised value use in AutoJSExceptionReporter::~AutoJSExceptionReporter. r=aklotz, a=sledru - 92fb098ace7a
Jean-Yves AvenardBug 1154683 - Fix potential size overflow. r=kentuckyfriedtakahe, a=sledru - 22f8fa3a9273
Milan SreckovicBug 1133119 - ::Map should fail if the data is null, and add more null pointer checks. r=mattwoodrow, a=sledru - 90d2538212ab
Florian QuèzeBug 1109728 - Intermittent browser_devices_get_user_media.js | popup WebRTC indicator visible - Got false, expected true. r=Gijs, a=test-only - fe8c5e74565f
Florian QuèzeBug 1126107 - Intermittent browser_devices_get_user_media.js | WebRTC indicator hidden - Got true, expected false. r=Gijs, a=test-only - 8d4a0b33d32e
Jim MathiesBug 1100501 - Add StatisticsRecorder initialization to xpcshell. r=georg, a=sledru - 71d1d59db847
Jim MathiesBug 1100501 - Avoid a late shutdown of chromium's StatisticsRecorder. r=georg, a=sledru - 8661ed4cbdb9
Mark BannerBug 1153630 - Allow buttons in the Loop panel to be bigger if required as L10n needs. r=dmose, a=sledru - a6fe316e7571
Milan SreckovicBug 1154003 - More protection for failed surface drawable creation. r=bas, a=sledru - 474ffd404414
Valentin GosuBug 1139831 - End timestamps are before start timestamps. r=baku, a=sledru - 9fe28719e4fd
Mats PalmgrenBug 1152354 - Re-introduce the incremental reflow hack in nsSimplePageSequenceFrame for now, since the regressions are worse than the original problem (Bug 1108104). r=roc, a=sledru - 92a269ca564d
Garvan KeeleyBug 1155237 - Part 1: Remove contextless access to NetworkUtils, causes NPE. r=rnewman, a=sledru - 1ec2ee773b51
Garvan KeeleyBug 1155237 - Part 2: Make upload service non-sticky. r=rnewman, a=sledru - 645fc5aa6a49
Mark BannerBug 1145541. r=mikedeboer, a=sledru - db41e8e267ed
Ryan VanderMeulenBug 1108104 - Fix rebase bustage. a=bustage - df5d106c2607
Ryan VanderMeulenBug 1152354 - Remove no longer needed assertion expectation. a=orange - 50550eca1fa2
JW WangBug 1091155 - Don't check if 'playing' has fired for it depends on how fast decoding is which is not reliable. r=cpearce, a=test-only - 2161d1dc7e2b
Randell JesupBug 1151628 - Re-enable MJPEG in libyuv (especially for getUserMedia). r=glandium, a=sledru - f6448c4cf87f
Randell JesupBug 1152016 - Suppress fprintf(stderr)'s from jpeg in MJPEG decode. r=pkerr, a=sledru - 97d33db56113
Ganesh SahukariBug 1009465 - Set the read-only attribute for temporary downloads on Windows. r=paolo, a=sledru - b7d8d79c1ee5
Tom SchusterBug 1152550 - Make sure that cross-global Iterator can not be broken. r=Waldo, a=sledru - 6b096f9b31d3
Mark FinkleBug 1154960 - Fennec should explicitly block the DOM SiteSpecificUserAgent.js file from packaging. r=nalexander, a=sledru - da1d9ba28360
Richard NewmanBug 1155684 - Part 0: Disable reading list sync in confvars.sh. r=nalexander, a=sledru - 18c8180670c7
Richard NewmanBug 1155684 - Part 1-3: Remove reading list sync integration. r=nalexander, a=sledru - 309ed42a5999
Richard MartiBug 1156913 - Use highlighttext color also for :active menus. r=Gijs, a=sledru - 98086516ce8f
Edwin FloresBug 1156560 - Prefer old CDMs on update if they are in use. r=cpearce, ba=sledru - 7c66212e4c09
Ryan VanderMeulenBacked out changeset 6b096f9b31d3 (Bug 1152550) for bustage. - d20a4e36e508
Ryan VanderMeulenBug 1139591 - Skip browser_timeline_overview-initial-selection-01.js on OSX debug. a=test-only - c0624fb0b902
Ganesh SahukariBug 1022816 - OS.File will now be able to change the readOnly, hidden, and system file attributes on Windows. r=paolo, a=sledru - 8a2c933394da
Blake KaplanBug 1156939 - Don't stash a reference to a CPOW and then spin the event loop. r=mconley, a=test-only - 0efa961d5162
Jonas JenwaldBug 1112947 - Replace a setTimeout with an EventListener to fix an intermittent failure in browser/extensions/pdfjs/test/browser_pdfjs_navigation.js. r=mossop, a=test-only - b29a45098630
Jared WeinBug 1153403 - Don't allow dialogs to resize if they didn't resize in windowed preferences. r=Gijs, a=sledru - e46c9612492a
Matt WoodrowBug 1144257 - Blacklist DXVA for one NVIDIA driver that was causing crashes. r=ajones, a=sledru - 78c6b3ce2ce2
Tom SchusterBug 1152550 - Make sure that cross-global Iterator can not be broken. r=Waldo, a=sledru - 2025aa8c5b1b
travisBug 1154803 - Put our sigaction diversion in __sigaction if it exists. r=glandium, a=sledru - fd5c74651fb2
Neil RashbrookBug 968334 - Allow disabling content retargeting on child docshells only. r=smaug, ba=sledru - 38ff61772a2e
Nicolas B. PierronBug 1149119 - Use Atoms in the template object hold by Baseline. r=jandem, a=abillings - 7298f6e3943e
Nicolas B. PierronBug 1149119 - Do not inline bound functions with non-atomized arguments. r=jandem, a=abillings - 0e69c76cbbe2
Ryan VanderMeulenBacked out changeset b29a45098630 (Bug 1112947) for test bustage. - 8fc6195511e5
Rail AliievBug 1158760 - Wrong branding on the 38 Beta 8, backout d27c9211ebb3. IGNORE BROKEN CHANGESETS CLOSED TREE a=release ba=release - 9d105ed6f35a

Mike TaylorReferenceError onTouchStart is not defined jquery.flexslider.js

I was supposed to write this blog post like a year ago, but have been very busy in the last 12 months not writing this blog post. But yet here we are.

Doing some compatibility research on top Japanese sites, I ran into my old nemesis: ReferenceError: onTouchStart is not defined jquery.flexslider.js:397:12.

I first ran into this in January of 2014 in its more spooky form ReferenceError: g is not defined. Eventually I figured out it was a problem in a WooThemes jQuery plugin called FlexSlider, the real bug being the undefined behavior of function declaration hoisting in conditions (everyone just nod like that makes sense).

In JavaScript-is-his-co-pilot Juriy's words,

Another important trait of function declarations is that declaring them conditionally is non-standardized and varies across different environments. You should never rely on functions being declared conditionally and use function expressions instead.

In this case, they were conditionally declaring a function, but referencing it before said declaration in the if block, as an event handler, i.e.,

if (boop) {
  blah.addEventListener("touchstart", wowowow);

  function wowowow() {}
}
No big deal, easy fix. I wrote a patch. Some people manually patched their sites and moved on with their lives. I tried to.
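
The shape of the fix follows Juriy's advice: replace the conditional function declaration with a function expression that is assigned before it is referenced. A sketch, reusing the toy names from above (`blah` stubbed out so the snippet is self-contained):

```javascript
var boop = true;
var blah = { addEventListener: function () {} }; // stand-in for the DOM node

if (boop) {
  // Function expression: by the time it is passed as a handler below,
  // the assignment has already executed — in every engine.
  var wowowow = function () {
    return "touched";
  };
  blah.addEventListener("touchstart", wowowow);
}
```

Same behavior everywhere, no reliance on engine-specific hoisting of declarations inside blocks.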

We ran into this a few more times in Bugzilla and Webcompat.com land.

TC39 bosom buddies Brendan and Allen noted that ES6 specifies things in such a way that these sites will eventually work in all ES2015-compliant browsers. Here's the bug to track that work in SpiderMonkey.

Cool! Until then, my lonely pull request is still hanging out at https://github.com/woothemes/FlexSlider/pull/986 (16 months later). The good news is FlexSlider is open source, so you're allowed to fix their bugs by manually applying that patch on your site. Then your touch-enabled slider widget stuff will work in Mobile Firefox browsers.

Mozilla Open Policy & Advocacy BlogMozilla statement on USA FREEDOM Act

Today, a new version of the USA FREEDOM Act is being introduced in both the House and Senate, with bipartisan support. We’re sharing the following statement:

“At Mozilla, we believe that privacy and security on the Internet are fundamental. The version of the USA FREEDOM Act of 2015 proposed today represents a significant step toward enhancing user privacy and ending mass surveillance. The bill curtails bulk collection practices, increases transparency around surveillance requests, and keeps data retention and other new surveillance mandates out of the legislation. There is more to do to make sure that privacy and security on the Internet is protected for everyone around the world. We urge Members of Congress to follow through and enact these important reforms.”

Christian HeilmannThe new challenges of “open”

These are my notes for my upcoming keynote at the OSCAL conference in Tirana, Albania.

Today I want to talk about the new challenges of “open”. Open source, Creative Commons, and many other ideas of the past have become pretty much mainstream these days. It is cool to be open, and it makes sense for a lot of companies to go that way. The issue is that – as with anything fashionable and exciting – people are wont to jump on the bandwagon without playing the right tune. And this is one of the big challenges we’re facing.

Before we go into that, though, let’s recap the effects that going into the open with our work has.

Creating in the open is an empowering and frightening experience. The benefits are pretty obvious:

  • You share the load – people can help you with feedback, doing research for you, translating your work, building adapters to other environments for you.
  • You have a good chance your work will go on without you – as you shared, others can build upon your work when you move on to other challenges; or get run over by a bus.
  • Sharing feels good – it’s a sort of altruism that doesn’t cost you any money and you see the immediate effect.
  • You become a part of something bigger – people will use your work in ways you probably never intended, and never thought of. Seeing this is incredibly exciting.

The downsides of working in the open are based on feedback and human interaction.

  • You’re under constant surveillance – you can’t hide things away when you openly share your work in progress. This can be a benefit as it means your product is higher quality when you’re under constant scrutiny. It can, however, also be stifling as you’re more worried about what people think about your work rather than what the work results in.
  • You have to allocate your time really well – feedback will come 24/7 and in many cases not in a format that is pleasing or – in some cases – even intelligible.
  • You have to pick your battles – people will come with all kinds of requests and it is easy to get lost in pleasing the audience instead of finishing your product.
  • You have to prepare yourself for having to adhere to existing procedures – years of open source work resulted in many best practices and very outspoken people are quick to demand you adhere to them or stay off the lawn.

Hey cool kids, we’re doing the open thing!

One of the main issues with open is that people are not really aware of the amount of work it is. It is very fashionable to release products as open source. But, in many cases, this means putting the code on GitHub and hoping for a magical audience to help you and fix your problems. This is not how open source prospers.

Open Source and related ways of working do not mean you give out your work for free and leave it at that. They mean that you make it available, that you nurture it and that you are open to giving up control for the benefit of the wisdom of the crowd. It is a two-way, three-way, many-way exchange of data and information. You give something out, but you also get a lot back, and both directions deserve the same attention and respect.

More and more I find companies and individuals seeing open sourcing not as a way of working, but as an advertising and hiring exercise. Products get released openly but there is no infrastructure or people in place to deal with the overhead this entails. It has become a ribbon to attach to your product – “also available on GitHub”.

We’ve been through that before – the mashup web and open APIs promised us developers that we could build great, scaling and wonderful products by using the power of the web. We would pick and mix our content providers with open APIs and build our interfaces on top of that data. This died really quickly, and today most APIs we played with have either shut down or become pay-to-play.

Other companies see “open” as a means to keep things alive that are not supported any longer. It’s like the mythical farm the family dog goes to when the kids ask where you take him when he gets old and sick. “Don’t worry, the product doesn’t mesh with the core business of our company any longer, but it will live on as it is now open source” is the message. And it is pretty much a useless one. We need products that are supported, maintained and used. Simply giving stuff out for free doesn’t mean this will happen to a product, as it means a lot of work for the maintainers. In many cases shutting a product down is the more honest thing to do.

If you want to be open about it – do it our way

The other issue with open is that – ironically – open communities can come across as uninviting and aggressive. We are a passionate bunch, and care a lot about what we do. That can make us appear overly defensive and aggressive. Many long-standing open communities have methodologies in place to ensure quality that on first look can be daunting and off-putting.

Many companies understand the value of open, but are hesitant to open up their products because of this. The open community can come across as very demanding. And it is very easy to get an avalanche of bad feedback when you release something into the open but you fail to tick all the boxes. This is poison to anyone in a large company trying to release something closed into the open. You have to justify your work to the business people in the company. And if all you have to show is an avalanche of bad feedback and passive-aggressive “oh look, evilcorp is trying to play nice” comments, they are not going to be pleased with you.

We’re not competing with technology – we’re competing with marketing and tech propaganda

The biggest issue I see with open is that it has become a tool. Many of the closed environments that are in essence a replacement for the open web are powered by open technologies. This is what they are great for. The plumbing of the web runs on open. We’re a useful cog, and – to be fair – a lot of closed companies also support and give back to these products.

On the other hand, when you talk about a fully open product and try to bring it to end users, you are facing an uphill battle. Almost every open alternative to closed (or partly open) systems struggles or – if we are honest with ourselves – has failed. Firefox OS is not taking the world by storm and bringing connectivity to people who badly need it. The Ubuntu phone as an alternative didn’t cause a massive stir. Ello and Diaspora didn’t make a dent in the Facebooks and Twitters of this world. The Ouya game console ran into debt very quickly and is now looking to be bought out.

The reason is that we’re fooling ourselves when it comes to the current market and how it uses technology.

Longevity is dead

We love technology. We love the web. We love how it made us who we are and we celebrate the fights we fought to keep it open. We fight for freedom of choice, we fight for data retention and ownership of data and we worry where our data goes, if it will be available in the future or what happens with it.

But we are not our audience. Our audience are the digital natives. The people who see a computer, a smartphone and the internet as a given. The people who don’t even know what it means to be offline, and who watch streaming TV shows in bulk without a sense of dread at how much this costs or if it will work. If it stops working, who cares? Let’s do something else. If our phones or computers are broken, well let’s replace them. Or go to the shop and get them repaired for us. If the phone is too slow for the next version of its operating system, fine. Obviously we need to buy a better one.

The internet and technology have become a commodity, like running water and electricity. Of course, this is not the case all over the world, and in many cases also not when you’re traveling outside the country of your contracts. But, to those who never experienced this, it is nothing to worry about. Bit by bit, the web has become the new TV. Something people consume without knowing how it works or really taking part in it.

In England, where I live, it is almost impossible to get an internet connection without some digital TV deal as part of the package. The internet access is the thing we use to consume content provided to us by the same people who sold us CDs, DVDs, and BluRays. And those who consume over the internet also fill it up with content taken from this source material. Real creativity on the web, writing and publishing is on the way out. When something is always available, you stop caring for it. It is simply a given.

Closed by design, consumable by nature

This really scares me. It means that the people who always fought the open web and the free nature of software have won. Not by better solutions or by more choice. But by offering convenience. We’ve allowed companies with better marketing than us to take over and tell people that by staying in their world, everything is easy and works magically. People trade freedom of choice and ownership of their information for convenience. And that is hard to beat. When everything works, why put effort in?

The dawn of this was the format of the app. It was a genius idea to make software a consumable, perishable product. We moved away from desktop apps to web-based apps a long time ago. Email, calendaring, and even document handling have gone that way, and Google showed how that can be done.

With the smartphone revolution and the lack of support for open technologies in the leading platform the app was re-born: a bespoke piece of software written for a single platform in a closed format that needs OS-specific skills and tools to create. For end users, it is an icon. It works well, it looks amazing and it ties in perfectly with the OS. Which is no surprise, as it is written exclusively for it.

Consumable, perishable products are easier to sell. That’s why the market latched on to this quickly and defined it as the new, modern way to create software.

Even worse, instead of pointing out the lack of support for interoperable and standardised technology in the operating systems of smart devices, the tech press blamed said technologies for not working on them as well as the native counterparts do.

Develop where the people are

This constant reinforcement of closed as good business and open as not ready and hard to do has become a thing in the development world. Most products these days are not created for the web, independent of OS or platform. The first port of call is iOS, and once it’s been a success there, maybe Android. But only after complaining that the fragmentation makes it impossible to work. Fragmentation that has always been a given in the open world.

A fool’s errand

It seems open has lost. It has, to a degree. But there are already signs that what’s happening is not going to last. People are getting tired of apps and being constantly reminded by them to do things for them. People are getting bored of putting content in a system that doesn’t keep them excited and jump from product to product almost monthly. The big move of almost every platform towards light-weight messaging systems instead of life streams shows that there is a desperate attempt to keep people interested.

The big market people aim for is teenagers. They have a lot of time, they create a lot of interactions and they have their parents’ money to spend if they nag long enough.

The fallacy here is that many companies think that the teenagers of now will be the users of their products in the future. When I remember what I was like as a teenager, there is a small chance that this will happen.

We’re in a bubble and it is pretty much ready to burst. When the dust settles and people start wondering how anyone could be foolish enough to spend billions of dollars on companies that promise profits and pivot every few months when they don’t come, we’ll still be there. Much like we were during the first dotcom boom.

We’re here to help!

And this is what I want to close with. It looks dire for the open web and for open technologies right now. Yes, a lot is happening, but a lot is lip-service and many of the “open solutions” are trojan horses trying to lock people into a certain service infrastructure.

And this is where I need you. The open source and open in general enthusiasts. Our job now is to show that what we do works. That what we do matters. And that what we do will not only deliver now, but also in the future.

We do this by being open. By helping people to move from closed to open. Let’s be a guiding mentor, let’s push gently instead of getting up in arms when something is not 100% open. Let’s show that open means that you build for the users and the creators of now and of tomorrow – regardless of what is fashionable or shiny.

We have to move with the evolution of computing much like anybody else. And we do it by merging with the closed, not by trying to replace it. That approach has failed before and will fail again in the future. We’re not here to create consumables. We’re here to make sure they are made from great, sustainable and healthy parts.

Air MozillaThe Well Tempered API

The Well Tempered API Centuries ago, a revolution in music enabled compositions to still be playable hundreds of years later. How long will your software last? This talk, originally...

Kim MoirReleng 2015 program now available

Releng 2015 will take place in concert with ICSE in Florence, Italy on May 19, 2015. The program is now available. Register here!

via romana in firenze by ©pinomoscato, Creative Commons by-nc-sa 2.0

Kim MoirLess testing, same great Firefox taste!

Running a large continuous integration farm forces you to deal with many dynamic inputs coupled with capacity constraints. The number of pushes increases. People add more tests. We build and test on a new platform. If the number of machines available remains static, the computing time associated with a single push will increase. You can scale this for platforms that you build and test in the cloud (for us, Linux and Android on emulators), but this costs more money. Adding hardware for other platforms such as Mac and Windows in data centres is also costly and time consuming.

Do we really need to run every test on every commit? If not, which tests should be run? How often do they need to be run in order to catch regressions in a timely manner (i.e. to be able to bisect where the regression occurred)?

Several months ago, jmaher and vaibhav1994 wrote code to analyze the test data and determine the minimum number of tests required to identify regressions. They named their software SETA (search for extraneous test automation). They used historical data to determine the minimum set of tests that needed to be run to catch historical regressions. Previously, we coalesced tests on a number of platforms to mitigate too many jobs being queued for too few machines. However, this was not the best way to proceed because it reduced the number of times we ran all tests, not just the less useful ones. SETA allows us to run, on every commit, a subset of tests that historically have caught regressions. We still run all the test suites, but at a specified interval.

SETI – The Search for Extraterrestrial Intelligence by ©encouragement, Creative Commons by-nc-sa 2.0
In the last few weeks, I've implemented SETA scheduling in our buildbot configs to use the data from the analysis that Vaibhav and Joel implemented. Currently, it's implemented on the mozilla-inbound and fx-team branches, which in aggregate represent around 19.6% (March 2015 data) of total pushes to the trees. The platforms configured to run fewer tests per push for both opt and debug are:
  • MacOSX (10.6, 10.10)
  • Windows (XP, 7, 8)
  • Ubuntu 12.04 for linux32, linux64 and ASAN x64
  • Android 2.3 armv7 API 9

As we gather more SETA data for newer platforms, such as Android 4.3, we can implement SETA scheduling for them as well and reduce our test load. We continue to run the full suite of tests on all platforms for branches other than mozilla-inbound and fx-team, such as mozilla-central, try, and the beta and release branches. If we did miss a regression by reducing the tests, it would appear on other branches such as mozilla-central. We will continue to update our configs to incorporate SETA data as it changes.

How does SETA scheduling work?
We specify the tests that we would like to run on a reduced schedule in our buildbot configs. For instance, this specifies that we would like to run these debug tests on every 10th commit, or if we reach a timeout of 5400 seconds between test runs.
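
As a hypothetical sketch (buildbot configs are Python; the key and suite names here are invented for illustration, not the actual config), such an entry might look like:

```python
# Hypothetical per-suite SETA entry: skip the suite on most pushes,
# running it only on every 10th commit, or when 5400 seconds have
# passed since it last ran, whichever comes first.
SETA_CONFIG = {
    ('mozilla-inbound', 'macosx64', 'debug'): {
        'mochitest-3': {
            'skipcount': 10,      # run on every 10th push...
            'skiptimeout': 5400,  # ...or after this many seconds
        },
    },
}
```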


Previously, catlee had implemented scheduling in buildbot that allowed us to coalesce jobs on a certain branch and platform using EveryNthScheduler. However, as it was originally implemented, it didn't allow us to specify individual tests to skip, such as mochitest-3 debug on MacOSX 10.10 on mozilla-inbound. It would only allow us to skip all the debug or opt tests for a certain platform and branch.

I modified misc.py to parse the configs and create a dictionary for each test specifying the interval at which the test should be skipped and the timeout interval. If a test has these parameters specified, it is scheduled using the EveryNthScheduler instead of the default scheduler.

There are still some quirks to work out but I think it is working out well so far. I'll have some graphs in a future post on how this reduced our test load. 

Further reading
Joel Maher: SETA – Search for Extraneous Test Automation

Tanner FilipDo you host a wiki for your community? Community Ops wants to hear from you!

I'm cross-posting this to my blog, I'm hoping to get as much feedback as possible.

If you are hosting a wiki for your community rather than using wiki.mozilla.org, Community Ops has a few questions for you. If you would be so kind as to reply to my post on Discourse, answering the questions I have below, we'd be extremely appreciative.

  1. How did you decide that you need a wiki?
  2. Why did you decide to host your own, rather than using the Mozilla Wiki?
  3. How did you choose your Wiki software (MediaWiki, TikiWiki, etc.)?
  4. What could make your wiki better? For example, would you like any extensions, or technical support?

Thank you in advance for taking the time to answer these questions!

Gervase MarkhamHSBC: Bad Security

I would like to use a stronger word than “bad” in the title, but decency forbids.

HSBC has, or used to have, a compulsory 2-factor system for logging in to their online banking. It used a small widget called a Secure Key. This is good. Now, they have rolled out an Android/iOS/Blackberry app alternative. This is also good, on balance.

However, at the same time, they have instituted a system where you can log on and see all your banking information and even take some actions without the key, just using a password. This is bad. Can I opt out, and say “no, I’d like to always use the key, please?” No, it seems I can’t. Compulsory lowered security for me. Even if I don’t use the password, that login mechanism will always be there.

OK, so I go to set a password. Never mind, I think, I’ll pick something long and complicated. But no; the guidance says:

Your password is not case sensitive and must be between 8 and 30 characters. It must include letters and numbers.

So the initial passphrase I picked was both too long, and didn’t include a number. However, the only error it gives is “This data is invalid”. I tried several other variants of my thought-of passphrase, but couldn’t get it to accept it. Painful reverse-engineering showed that the space character is also forbidden. Thank you so much, HSBC.
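Summed up in code, the rules as far as I could reverse-engineer them look like this. To be clear, this is my reconstruction, not HSBC's actual validation logic:

```python
import re

# Reconstructed HSBC password rules (my guess from the error messages):
# 8-30 characters, at least one letter and one digit, no spaces.
# Case is ignored by the bank, so mixed case adds no real entropy.

def hsbc_password_ok(pw):
    if not 8 <= len(pw) <= 30:
        return False          # too short or too long
    if ' ' in pw:
        return False          # spaces silently forbidden
    if not re.search(r'[A-Za-z]', pw):
        return False          # must include letters
    if not re.search(r'[0-9]', pw):
        return False          # must include numbers
    return True
```

Note that a long, spaced passphrase like “correct horse battery staple” fails on two counts, while a far weaker 8-character string passes.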

I finally find a password it’ll accept and click “Continue”. But, no. “Your session is invalidated – please log in again.” It’s taken so long to find a password it’ll accept that it has timed me out.

QMOSeeking participants interested in attending a Firefox Desktop QA meetup in Latin America

July 11 – 12th we are planning a joint l10n/QA meetup in Lima, Peru. If you are interested in participating, please visit https://wiki.mozilla.org/QA/LATAM_QA_Meetup_20159 to learn more about the event. We will select 5 contributors who reside in Latin America to participate in this exciting event. This is a great opportunity to learn more about Firefox Desktop QA and directly contribute to work on one of the Mozilla QA functional teams.

The deadline for application submission is Saturday, May 2, 2015.

Please contact marcia@mozilla.com if you have any questions.

Air MozillaMartes mozilleros

Martes mozilleros Reunión bi-semanal para hablar sobre el estado de Mozilla, la comunidad y sus proyectos. Bi-weekly meeting to talk (in Spanish) about Mozilla status, community and...

QMOFirefox 38 Beta 6 Testday Results

Hey mozillians!

As you may already know, last Friday – April 24th – we held another Testday event, Firefox 38 Beta 6.

We’d like to take this opportunity to thank everyone for getting involved in the proposed testing activities and in general, for helping us make Firefox better.

Many thanks go out to Bangladesh QA Community (Hossain Al Ikram, Nazir Ahmed Sabbir, Mohammad Maruf, Fariha Afrin, Rezaul Huque Nayeem and Towkir Ahmed), Aleksej, gaby2300 and kenkon for their efforts and contributions, and to all our moderators. Thanks a bunch!

Keep an eye on QMO for the upcoming events! ;)

Adam LoftingOptimizing for Growth

In my last post I spent some time talking about why we care about measuring retention rates, and tried to make the case that retention rate works as a meaningful measure of quality.

In this post I want to look at how a few key metrics for a product, business or service stack up when you combine them. This is an exercise for people who haven’t spent time thinking about these numbers before.

  • Traffic
  • Conversion
  • Retention
  • Referrals

If you’re used to thinking about product metrics, this won’t be new to you.

I built a simple tool to support this exercise. It’s not perfect, but in the spirit of ‘perfect is the enemy of good’ I’ll share it in its current state.

>> Follow this link, and play with the numbers.

Optimizing for growth isn’t just ‘pouring’ bigger numbers into the top of the ‘funnel’. You need to get the right mix of results across all of these variables, and if your result for any of them is too low, your product will hit a ‘ceiling’ on how many active users you can have at a single time.

However, if you succeed in optimizing your product or service against all four of these points you can find the kind of growth curve that the start-up world chases after every day. The referrals part in particular is important if you want to turn the ‘funnel’ into a ‘loop’.
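A minimal simulation shows how the four rates combine. This is my own illustrative sketch, not the tool linked above, and all the rate definitions are assumptions:

```python
# Illustrative funnel-with-loop model (not the actual linked tool):
#   traffic    - visitors arriving per period
#   conversion - fraction of visitors who become active users
#   retention  - fraction of active users still active next period
#   referral   - new users each active user brings in per period

def active_users(periods, traffic, conversion, retention, referral):
    active = 0.0
    for _ in range(periods):
        new = traffic * conversion + active * referral
        active = active * retention + new
    return active

# While retention + referral < 1, growth flattens out at a ceiling of
# traffic * conversion / (1 - retention - referral) active users.
# Once retention + referral >= 1, the 'funnel' becomes a compounding 'loop'.
```

For example, 10,000 visitors per period, 5% conversion, 80% retention and 0.1 referrals per active user plateaus at a ceiling of 5,000 active users, no matter how long you run it.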

Depending on your situation, improving each of these things has varying degrees of difficulty. But importantly they can all be measured, and as you make changes to the thing you are building you can see how your changes impact each of these metrics. These are things you can optimize for.

But while you can optimize for these things, that doesn’t make it easy.

It still comes down to building things of real value and quality, and helping the right people find those things. And while there are tactics to tweak performance rates against each of these goals, the tactics alone won’t matter without the product being good too.

As an example, Dropbox increased their referral rate by rewarding users with extra storage space for referring their friends. But that tactic only works if people like Dropbox enough to (a) want extra storage space and (b) feel happy recommending the product to their friends.

In summary:

  • Build things of quality
  • Optimize them against these measurable goals

Marco ZeheAn update on office solutions in the browser and on mobile

Regular readers of my blog may remember my January 2014 shout-out to Microsoft for implementing great accessibility in their Office Online offering. Later that year, I also gave an overview of the accessibility in Google apps. Now, in late April 2015, it is time for an update, since both have made progress. We will also take a look at what has changed in Apple’s iCloud on the web suite, and I’ll introduce an open-source alternative that is ramping up to become more accessible.

Google apps

The Google apps suite, for both free Gmail and paid Google Apps for Business accounts, has grown enormously in functionality, and so have the accessibility features. This holds no matter which app you look at, be it Docs with its wide variety of document navigation and collaboration features, Sheets with its ever more comprehensive spreadsheet editing and navigation abilities, or Slides, which allows a blind person to create full-featured slide shows. Gmail itself and Drive are also running very smoothly nowadays, and creating a Google Form to conduct surveys works very well, too.

One of the most remarkable facts about this is the enhancements to documentation the Google apps have received. Docs now has dedicated help pages for navigation, formatting, collaboration, and a huge list of keyboard shortcuts for all supported platforms. Take the Collaborating on a document with a screen reader guide alone, and just try a few of the things described in there with a friend, co-worker or family member. Each time I use these features, I am totally blown away by the experience.

Docs also introduced braille support and has by now expanded this to Firefox and screen readers as well. If you use it, you’ll get braille output (of course), but may lose some announcements that are available when braille support is not enabled. I have found that a combination of both modes works well for me: Non-braille mode for editing and collaboration, and braille mode for the final proof-reading.

The iOS apps have also made huge leaps forward. If you use an external keyboard with your iPhone or iPad, you have a similarly rich set of keystrokes available to the one you have on the web. Compared to where these apps were a year ago, … Uh … forget it, there is no comparison. It’s like day and night!

On Android, the situation looks good as well, within, of course, the limitations that TalkBack still imposes on the users in general. Future versions may vastly improve this situation, let’s keep our fingers crossed! Until then, I suggest you look at the help documentation for Docs with Android.


Microsoft Office Online

Microsoft has also enhanced its accessibility features. Word Online, Excel Online, and PowerPoint Online work even better than when I wrote my first article. I found that the collaboration features don’t work as smoothly for me as they do in Google Docs, but thanks to the OneDrive and Dropbox integrations, many tasks can be accomplished in the full-featured Office for Windows suite when the browser version falls short. The start page for accessibility in Office Online gives good pointers to articles with further information.

I also found that the Outlook.com web mailer behaves more like a web page than a real application in many instances. But of course, it has tight integration with Microsoft Outlook and Windows Mail in Windows, so again, if the web version falls short for you if you use these services, you can use the desktop versions.

The iOS versions have also seen great improvements for VoiceOver. The new kid on the block, Outlook for iOS, is getting frequent updates which usually also contain VoiceOver fixes.

And some good news for all the Mac enthusiasts out there: The Microsoft Office 2016 for Mac preview received an update on April 14, 2015 which, according to this support article, also contains VoiceOver improvements for Word, Excel, and PowerPoint. A different support article says that Outlook is accessible as well.

I can’t say much about the Android versions of the Microsoft Office applications, and the official documentation channels haven’t revealed anything. If you have any experience, please let me know in the comments! MS Word for Android tablets and friends are the especially interesting ones, I think, since they are more feature-rich than the Office for Android phone apps.


Apple iCloud

As great as Apple is when it comes to accessibility in their hardware devices, including the latest new device category, the Apple Watch, the situation with their iCloud.com offering is just as dismal. This thing simply doesn’t have the fit and finish that the other products have. Yes, many buttons are now labeled, and yes, in Pages on the web, lines are read as you navigate them, along with some other information. But the overall experience is not good. The keyboard focus gets lost every so often and unpredictably, and the interface is confusing and keeps controls around that, in a given situation, may not even be actionable. Even after the upgrades it received over the past year, this is nothing I can recommend any screen reader user use productively.

If you want, or have to, use iWork for iCloud, use the Mac or iOS versions. They work quite OK and get the job done.


Open-Xchange

And here’s the new kid on the block that I promised you! It’s called Open-Xchange App Suite, and it is actually not all that new in the field of collaborative tools. But its accessibility efforts are fairly new, and they look promising. Open-Xchange is mostly found at web mail providers such as the Germany-based Mailbox.org or 1&1, but it is used internationally as well. Furthermore, anyone who runs certain types of Linux distributions on a server can run their own instance, with mail and cloud storage services. It also supports standards like IMAP and SMTP, CalDAV for calendar sync, CardDAV for contact sync, and WebDAV for access to the files. It works with the MS Office formats, so it is compatible with most, if not all, other office suites.

Its accessibility features on the web are on their way to becoming really good. There is still some way to go, primarily in keyboard focus handling and in getting some tasks done really efficiently, but Mail, parts of Calendar, Contacts, Drive, Text and the dashboard already work quite well. It doesn’t yet compare to what Google is offering, but it comes close to the quality of what Microsoft is offering on the web, and definitely surpasses it in some areas.

This is definitely something to keep an eye on. I certainly will be watching its progress closely.

In summary

As of this writing, Google definitely comes out strongest in terms of accessibility and fit and finish when it comes to working efficiently with their apps. Granted, it takes some getting used to, and it requires that a screen reader user know their assistive technology, be willing to learn some keyboard shortcuts, and familiarize themselves with certain usability concepts. But once that is mastered, Google Apps is definitely something I can whole-heartedly recommend for online collaboration. Furthermore, if you look at new features in commercial screen readers such as JAWS, you can see that they’re paying attention and improving their support for Google apps bit by bit where it is still lacking.

Microsoft is close behind, with some areas that are better accomplished in their desktop apps or on tablets rather than on the web.

Open-Xchange still has its bumps in the road, but it is well on its way to becoming a viable alternative for those who can rely on their own infrastructure and want a fully open-source stack.

And for Apple, I recommend staying away from the web offering and doing all the office work in iWork apps for Mac or iOS. The web stuff is just too much of a hassle still.

Gervase MarkhamTop 50 DOS Problems Solved: Renaming Directories

Q: How do I rename a sub-directory? The only way I can find is to make a new one with the new name, copy all the files over from the old one, and then delete the original!

A: As you have found, the MS-DOS REN command doesn’t work on sub-directories. For a programmer it is a relatively trivial task to write a utility to do this job, and DR DOS 6 has a RENDIR command used in the same way as REN.

The manual for MS-DOS 5.0 advises the reader to do what you’re doing already, and indeed DR DOS 5 didn’t make provision for renaming directories. You can, however, use the DOS shell program to rename directories. If you want to stick with the command line, the best alternative is to get hold of a utility program written to do the job. Such programs are commonly found in shareware/PD catalogues.

Better think carefully before choosing that directory name…