Nicholas Nethercote: Firefox OS phones on sale in Australia

Firefox OS phones are now on sale in Australia! You can buy a ZTE Open C with Firefox OS 1.3 installed for $99 (AUD) at JB Hi-Fi. (For non-Australian readers: JB Hi-Fi is probably the biggest electronics and home entertainment retailer in Australia.)

Australia’s not the ideal market for the current versions of Firefox OS, being a country where a large fraction of people already use high-end phones. But it’s nice that they’re easily available :)

Mozilla Reps Community: New council members – Fall 2014

We are happy to announce that four new members of the Council have been elected.

Welcome San James, Ankit, Luis and Bob! They bring with them skills they have picked up as Reps mentors and as community builders, both inside Mozilla and in other fields. A HUGE thank you to the outgoing council members – Guillermo Movia, Sayak Sarkar, Nikos Roussos and Majda Nafissa. We hope you will continue to use your talents and experience in a leadership role in Reps and Mozilla.

The Mozilla Reps Council is the governing body of the Mozilla Reps Program. It provides the general vision of the program and oversees day-to-day operations globally. Currently, 7 volunteers and 2 paid staff sit on the council. Find out more on the ReMo wiki.

Congratulate new Council members on this Discourse topic!

Curtis Koenig: Curtis Report 2014-10-17

This week was a real grab bag of stuff starting and stopping, with interruptions that kept me all over the place.

I’m particularly happy to see the Python learning group starting the week of 2014-10-20, and I feel like the work for my local Mozillians and KitHerder is getting closer to done. I spent all day Friday working out install issues with KitHerder with the developer (thanks WiredCrow for all the help).

  • local Mozillians working
  • Announced Python learning group
  • Working on more details and bits for next week’s launch
  • SecChamps Report
  • vendor review
  • SeaSponge video feedback
  • EME research
  • EME PTR setup
  • KitHerder wrangling / install

Meetings

Mon

  • Weekly meeting

Tue

  • SecAutomation
  • Cloud Security Team Meeting

Wed

  • MWoS team meeting
  • Web Bug Triage

Thu

  • Sec Open Mic
  • Community Building Team
  • 1:1

Jet Villegas: The Graphical Web (conference video)

I occasionally accept invitations to speak at conferences and events. Here’s the video from my recent talk at The Graphical Web in Winchester, England. I discuss how and why I now work on Web Platform Rendering, and how disruptive innovations are enabled by seemingly mundane key technologies that bridge gaps for developers and audiences.

Soledad Penades: A VERY BELATED Mozilla Festival 2013 post

Note: I started writing this last year, after the festival finished, and then I went heads down into a spiral of web audio hacking and conferencing and whatnot, so I didn’t finish it.

But with the festival starting this Friday, it’s NOW OR NEVER!

Ahead with the PUBLISH button!

~~~

(AKA #MozFest everywhere else)

MozFest finished almost a year ago already, but I’m still feeling its effects on my brain: tons of new ideas, and a pervasive feeling of not having enough time to develop them all. I guess it’s good (if I manage it properly).

I came to the Festival without knowing what it would be about. The Mozilla London office had been pretty much taken over by the Mozilla Foundation people from all over the world who were doing their last preparations there. Meeting rooms were a scarce resource, and one of them was even renamed “MF WAR ROOM”, until someone came the next day and re-renamed it “MW PEACE ROOM”. So it was all “a madhouse”, in Potch’s words, but amicable, friendly chaos after all. It was hard to gather what the festival would consist of, though. So I just waited until Friday…

Friday

Well, saying that I waited wouldn’t be true. I wasn’t sitting, arms crossed. I was furiously stealing sleep hours to finish a hack that Dethe Elza (from Mozilla Vancouver, and curator for the Make the web physical track) had asked me to bring and present at the Science Fair on Friday.

My hack, HUMACCHINA, briefly consisted of using my QuNeo to control an instrument running in the browser, with Web Audio. I will talk about more technicalities in a future post, but what interests me here is the experience of presenting my creation to people at a fair. It was quite enlightening to observe how people react to the unknown and how they interpret what is in front of them according to their existing knowledge. Granted, my experiment was a little bit cryptic, especially if you were not already a musician (which would give you some hints), and it was hard to even listen to the music because of the noise in the environment, but still most people seemed to have fun and spent a while playing with the pads, others were puzzled by it (“but… why did you do this?”), and finally others were able to take the QuNeo out of its current preset and into another (wrong) one just by pressing all the buttons randomly at the right times (!!!). I’m glad I noticed, and I’m even gladder that I had programmed a test pattern to ensure everything was properly set up, so I could reset it and make sure all was OK before the next person came to the booth.

The sad part of this is that… I couldn’t get to see any of the other booths, so I missed a great deal. You can’t have it all, I guess.

At some point I was super tired, first because it was the end of a long day (and week!) and second because explaining the same thing over and over again to different people is not something I do every day, so I was exhausted. I decided to call it a day and we went for dinner to a nearby place… where we happened to find a sizeable number of Mozillians having dinner there too. So we all gathered together for a final drink and then quickly rushed off before the last tube left.

Saturday

I couldn’t be on time for the opening, but as soon as I arrived at Ravensbourne College I dashed through to the “Pass the App” session that Bobby Richter was running and had asked me to join. I, again, didn’t have much of a clue how it would develop. He paired me with a startup that’s trying to crowdsource custom-built prosthetic parts for children in need, and we set out to prototype ideas for an app that could help them reach their goal. I think I should have drunk a couple of litres of coffee before joining this session, but although I wasn’t in my best shape, I think we did well enough. We came up with a mockup for an app that would use a futuristic hypothetical AppMaker to start with some sort of template app that parents could customise to describe their children’s needs, and then generate an app that they could upload to a Marketplace and use that Marketplace’s payment features to fund the goal. It was fun to draw the mockup at giant scale and just discuss ideas without going technical for a change!

Some people stayed to try to build a demo for Sunday, but I was honest with myself and declined building any hack during the event. I know that after a few hours of hacking while many other activities were happening, I would be hating myself, at the end of the week-end I would hate everyone else, and on Monday I would hate the whole universe. Or worse.

I ended up chatting with Kumar, who’s actually worked on the payments system in the Firefox Marketplace, and then Piotr (of JSFiddle fame) showed up. He had brought his daughter–she had been translating WebMaker into Polish first, and now she was happily designing a voxel-based pig using Voxel painter under the careful supervision of Max Ogden. Behind us, a whole group of tables was covered with the most varied stuff: plasticine, a water-colour machine, lots of Arduinos, sensors and wires, Makey Makey pseudo-joysticks, and whatnot.

It was also time for lunch, so we grabbed some sandwiches carefully arranged on a nearby table. They were yummy too! But I was totally yearning for a coffee, and a social break, so I popped out of the building and into the O2 for some sugary coffee-based ice cream. Back in the College, I got a tweet suggesting I visit the Makers Academy booth, which I did. It was interesting to learn of their existence, because I get many questions about where to learn programming in a practical way, and I never know what to answer. Now I think I’d recommend Makers Academy, as their approach seems quite sensible!

Then I decided that since I was on the ground floor, I could just as well try to visit all the booths starting from that floor and work my way upwards. So I went to the Mozilla Japan booth, where I had quite a lot of fun playing with their Parapara animation tool. Basically you draw some frame-based animation on a tablet, which gets saved as an SVG image. This is then played on several devices, moving along a certain path, and it seemed as if the character I had drawn was travelling around the world. Here it went crossing Tower Bridge, then on the next device it would be crossing Westminster Bridge… all the way until it reached Mount Fuji. (Here’s a better explanation of how Parapara works, with pictures)

I was also really honoured to spend some time speaking with Satoko Takita, better known as “Chibi”. She worked for Netscape before, of all places! She’s a survivor! But today, she humbly insisted, “she’s mostly retired”. She was also super kind and helped me de-Mac-ify my laptop with a couple of vinyl stickers in vivid orange. Now when the lid is open, an orange dinosaur glows inside the apple. Gecko inside!

After saying “arigato” many times (the only word I can say in Japanese… but probably a very useful one), I tried to continue my tour of the building. I tried to enter the first huge room, which resembled a coffee place, but it was so thriving with activity that it was impossible to get past the first few metres. I also ran into Kate Hudson, whom I hadn’t had time to speak to during the Science Fair. She had to buy a SIM card, so we ventured out to the O2 shop. Something funny happened there. She was wearing a “Firefox” hoodie, and the guy in the shop asked her if she worked for Firefox. I was watching the whole scene, partly amused because of my anonymous condition (I wasn’t wearing any branded apparel), and partly intrigued as to how the thing would end.

She started explaining that she actually worked for Mozilla… but then the guy interrupted her, and said that Internet Explorer was the best browser. He was a real-life troll! But Kate wouldn’t shut up–oh no! Actually it was good that she was the one wearing the hoodie, because she was taller and way more imposing than the troll (and than me! ha!). So she entered Evangelist Mode™ and calmly explained the facts while the other guy lost steam and… he finally left.

After the incident, we went back to the college and up to the Plenary area, where the keynotes would be held. But it was still early, so I used that time to make a few commits (I have a goal of making at least one small fix every day), since I hadn’t done anything productive yet.

The keynote speakers were sort of walking/rocking back and forth behind the stage, rehearsing their lines, which was a curious insight since that is not something you normally get to see. It was also quite humanising–it made them approachable. In the meantime, we logged into chat.meatspac.es and said “hi” to my team mate and former MozToronto office resident, Jen Fong (she’s now in Portland!).

Finally the keynotes started, broadcast live by Air Mozilla. Mitchell Baker’s keynote was quite similar to her Summit keynote. Camille’s speech had a few memorable quotes, including “some people have never done homework without the web”, and “people often say that democracies are like plumbing, because you only care about plumbing when there are really bad smells… We are the plumbers of the Information Society”.

After her, Dethe himself got up on stage to show Lightbeam, a project that displays in a graphical way the huge amount of information that is “leaked” when you visit any given website. There was a most unsettling moment, when somebody sitting behind me said “Oh wow I never realised this was happening while I browsed!”. That was a moment of tension, and of revelation–people really need to understand how the web works; hopefully Lightbeam and similar tools will help them.

The co-founder of Technologywillsaveus gave us a tour of their products and what they had learnt while building and marketing them, and although it seemed interesting, my brain was just refusing to accept any more information :-(

After that–MozParty! We went to a pub in (guess where…) the O2, where the party would happen. Somebody was livecoding visuals and music with livecodelab, but I couldn’t see who or where he was. At some point we went for dinner, and although the initial intention was to go back, we ended up retreating home as the first day had been quite exhausting!

Sunday

I think my brain was still fried when I woke up. Also, I was super hungry, almost to the point of being “hangry” (an invented word I had learnt about on Saturday), so –unsurprisingly– my feet brought me to the usual breakfast place. After a flat white and an unfinished “French Savoury Toast” (because it was massive) I was so high on sugar that I could say I was even levitating some centimeters over the floor. I took the tube in Victoria–gross error. The platform was crowded with people dressed as comic characters and tourists dressed as English souvenirs (basically: Union Jack-themed apparel), and I mildly cursed myself for taking the tube in Victoria instead of walking to any of the other nearby stations. Only mildly, because I was under the effects of a sugar kick, and couldn’t really get angry. At least, not for a few more hours.

When I arrived, the opening keynote was finishing. Way to start the day! I had decided to “go analogic” and left my computer at home, so I wanted to attend sessions where computers were not required.

I finally attended the “Games on the urban space” session by Sebastian Quack, which was quite funny (and definitely didn’t require us to use computers!). This got me thinking about the urban environment and the activities that can take place in it–can everything be converted into a game? When’s the best moment to play a game, or to involve passing pedestrians in your “gamified” activity? And do you tell them, or do you involve them without letting them know they are part of a game?

I had lunch with a few participants from that session–it was funny that Myrian Schwitzner from Apps4Good was there too. We had already met at a ladieswhocode meetup in February, but I couldn’t quite pinpoint it. We were like: “your face… looks familiar!” And this was something that happened frequently during the week-end: there were plenty of acquaintances to say hi to! It certainly slowed down the movement from one place to another, because you couldn’t be rude and ignore people.

Downstairs on the first floor, some people from the Webmaker team were hacking on something-something-audio for Appmaker. Meanwhile, Kumar was learning how to program his QuNeo. It turns out the star (*) “trick” I found by pure chance can be extended to more uses, so we tried finding the limits of the trick, sending different combinations to the device and seeing what would happen. I also explained how the LED control for the sliders worked (you control the brightness of each LED in the slider separately).

After we ran out of ideas to send to the QuNeo, I browsed the nearby tables. There was a woman with a bunch of planets modelled in plasticine (quite convincingly, I must say). She invited me to build something, but I was afraid I would miss another session I was interested in–the “debate” with some of the journalists who had unveiled the NSA scandal. Still, I asked if I could smell the plasticine. If you ever used plasticine in the 80s/90s you know what it smelled like, right? Well, it doesn’t smell like that anymore. I wouldn’t get hooked on it nowadays…

After a quick escape for some coffee, and back from the stormy, inclement weather outside, I was all set for the session. It ended up feeling a bit too long, and at times it was quite hard to follow because they weren’t using any microphone and relied entirely on their lungs to get the message across. I’m glad we all were super quiet, but the noise around the area and the speaker announcements coming from the Plenary were quite disruptive. Staying so focused for so long left me quite tired and, I’m afraid to confess, I don’t remember anything about the closing keynote. I know it happened, but that’s it.

After it, the “Demo Fest” was set up, and similarly to the Science Fair, people set up booths and tables to show what they had been working on during the week-end. For once, I didn’t have anything to show… which meant I could wander around looking at other people’s work!

I stayed for a little longer, then we asked some people whether they’d like to join us in a quest to find a French steak restaurant in Marylebone, but they wouldn’t, so we went there anyway. It was pouring with rain and that was the day I had decided to wear canvas shoes. My feet stayed wet until 1 AM. Awful.

We then went back to the official Moz-Hotel, where the Mozilla people were staying. There was no sign of an after party at first, then some people showed up with bags from the off-licence (too telling), and the hotel staff weren’t happy about that, so they asked them to consume whatever was in the bags in their rooms. I decided to discreetly head back home before St. Jude’s storm got stronger. It was certainly an “atmospheric walk”, with rain and wind blasting either way, which made it quite difficult to hold the umbrella still. I ended up running as much as I could, to shorten the misery. My recent running exercises proved their worth!

A few minutes after arriving home, Rehan told me that everyone had gone downstairs again and they were partying. But I had already changed into dry clothes and wasn’t venturing out into the wild again, so that was it for me.

In short: quite a good event. It was refreshing to do something not purely technical for a change, although I have this nagging feeling that I missed many sessions because there were so many of them. It was also good that kids were not only allowed but actively encouraged to take part in the festival, as they got to be involved in “grown up” activities such as translating or designing things. I like how they question things and assumptions we take for granted—it makes for refreshing points of view!

~~~

And written in October 2014: MozFest 2014 is coming! Here are some details of what I’ll be doing there. See you!

Peter Bengtsson: django-html-validator

In action
A couple of weeks ago we accidentally broke our production server (for a particular report) because of broken HTML. It was an unclosed tag which rendered everything after that tag as just plain white. Our comprehensive test suite failed to notice it because it didn't look at details like that. And when it was tested manually we simply missed the conditional situation that triggered it. Neither is a good excuse. So it got me thinking: how can we incorporate HTML (HTML5 in particular) validation into our test suite?

So I wrote a little gist and used it a bit on a couple of projects and was quite pleased with the results. But I thought this might be something worthwhile to keep around for future projects or for other people who can't just copy-n-paste a gist.

With that in mind I put together a little package with a README and a setup.py and now you can use it too.

There are however some caveats. Especially if you intend to run it as part of your test suite.

Caveat number 1

You can't flood htmlvalidator.nu. Well, you can, I guess. It would be really evil of you and kittens will die. If you have a test suite that does things like response = self.client.get(reverse('myapp:myview')) and there are many tests, you might be causing an obscene amount of HTTP traffic to them. Which brings us on to...

Caveat number 2

The htmlvalidator.nu site is written in Java and it's open source. You can download their validator and point django-html-validator to it locally. The way it works is java -jar vnu.jar myfile.html. However, it's slow. Like, really slow. It takes about 2 seconds to validate just one modest HTML file. So, you need to be patient.
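
For a rough idea of what wiring that local validator into a Django test could look like, here is a hand-rolled sketch. It is not django-html-validator's actual API: the vnu.jar path is a placeholder, the URL name reuses the myapp:myview example from above, and it assumes Python 3.7+ plus a Java runtime on the test machine.

import subprocess
import tempfile

from django.test import TestCase
from django.urls import reverse  # django.core.urlresolvers on older Django

VNU_JAR = "/path/to/vnu.jar"  # placeholder path to the downloaded validator


def validate_html(html, vnu_jar=VNU_JAR):
    """Run the validator jar on a blob of HTML and return (exit code, messages)."""
    if isinstance(html, str):
        html = html.encode("utf-8")
    with tempfile.NamedTemporaryFile(suffix=".html") as fp:
        fp.write(html)
        fp.flush()
        # The jar exits non-zero and prints messages when it finds problems.
        result = subprocess.run(["java", "-jar", vnu_jar, fp.name],
                                capture_output=True, text=True)
    return result.returncode, result.stderr + result.stdout


class MyViewHtmlTest(TestCase):
    def test_view_renders_valid_html(self):
        response = self.client.get(reverse("myapp:myview"))
        returncode, messages = validate_html(response.content)
        self.assertEqual(returncode, 0, messages)

Given the roughly two-second cost per validated page, it probably makes sense to run checks like this on a handful of representative views rather than on every response in the suite.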

Alina Mierlus: Back to idiotism and a few other changes…

This blog has been quiet for a while now. I do hope to start writing soon (although I think I’ll do most of my next writing in Catalan – and the focus will not be only on technology).
A few people asked me what I’m doing now, since I’m not so active in projects I used to contribute to (e.g. Mozilla) or doing things I used to do locally (e.g. participating in tech events and open source groups).

Well, part of the answer is that during the last year I tried to find some time for myself. I started to feel kind of overwhelmed by the amount and quality of the information around me (yes, yes, I mean that kind of BuzzMachine that most of the Open Source/Tech “community” seems to be nowadays).

And what is better than becoming an idiot? Experiencing that state of being useless to the world, of being less intelligent than most of the “smart citizens”, and starting to question again why and how things happen.

Professor Han, a contemporary essayist and cultural theorist from Germany, puts it very well:

It is a function of philosophy to represent the role of the idiot. From the beginning, philosophy has gone hand in hand with idiotism. Any philosopher who generates a new language, a new style, a new way of thinking, has to have been an idiot first.

The history of philosophy is a history of idiotisms.
Socrates, who only knows that he knows nothing, is an idiot. Descartes is an idiot too, because he doubts everything. Cogito ergo sum is an idiotism.
An internal contraction of thinking makes a new beginning possible. Descartes thinks about thinking. Thinking recovers its virginal state when it connects with itself. Deleuze opposes the Cartesian idiot with another idiot [...]

Today, it looks like the somewhat marginalised, the crazy and the idiot have basically disappeared from society. Universal network connectedness and digital communications considerably increase the coercion towards conformity. The violence of consensus represses idiotisms.

(from the book “Psychopolitik: Neoliberalismus und die neuen Machttechniken”)

Yes, part of my time right now is devoted to studying philosophy at the University (offline and back in the system!). And I really like it very much. Just being there with a group of people who disagree with you, who appreciate your critique, your skepticism and negativity… makes me feel much more part of a community.

However, I’m not out of the technology world, as the other part of my time is devoted to work, which means application development, deployments and even trainings.
But indeed, I feel that I have to step back from the BuzzMachine that OpenSource/OpenWeb/OpenWhatever has become. I’ve also developed more of a critical view on what “connectedness” or a “connected society” means, including the “sharing economy” – a deviation of the open source concept (which in its turn is a mutation of the Free Software social movement).

Soledad Penades: This week… and beyond

  • Monday: shyly open my inbox after a week of holidays, and probably duck to avoid the rolling ball of stale mail coming my way.
  • Wednesday: maybe meet Karolina who’s in London for a conference!
  • Thursday: my talk is closing a conference O_O — when the organiser mentioned “closing” the day I thought he meant closing the first day, not the second. NO PRESSURE. Although the conf is held at Shoreditch Village Hall, which is a venue where I feel at home, so I’ll probably be OK. There’s a meatspace meatup afterwards, and I’m glad it’s around Shoreditch too or I’d be dropping out of that.
  • Friday: MozFest facilitators meeting, and also the Science Fair during the evening (if it is still called Science Fair)
  • Saturday and Sunday: MozFest, MozFest, MozFest! Paul Rouget asked me to show WebIDE there, and then Bobby (aka SecretRobotron and your best friend) came up with this idea of a MEGABOOTH where people can go and learn something about app-making in sessions of 5-20 minutes. Of course I can’t be there all week-end or I’ll basically die of social exhaustion, so I asked some friends and together we’ll be helping spread the word about Firefox OS development in its various facets: Gaia/Gonk/the operating system itself, Gaia apps, DevTools and WebIDE. Come to the MEGABOOTH and hang out with Nicola, Wilson, Francisco, Potch and me! (linking to myself and wondering if the Internet will break with so much recursion, teehee)

That’ll be it for this week. Beyond that, there are a few more conferences—some I can announce and some I can’t:

  • dotJS — Paris, France, 17th November, which is pretty exciting to be part of because the venue and the look of everything are so sophisticated…!
  • OSOM — Cluj-Napoca, Romania, 22nd November, which I’m moderately nervous about because I’m keynoting (!!!), but I’m also excited because it’s in TRANSYLVANIA!!! :-[

We’ll also be hosting a Firefox OS Bug Squash Party at the Mozilla London office the week-end after Halloween. Expect weirdnesses. There are only 5 spots left!

Add to this the new thing I’m working on (details to be revealed as soon as there’s something to show) and it makes for a very busy Autumn!

I’m glad I took those holidays last week. I went to Tenerife, which makes it the furthest south I’ve ever been, and the highest I’ve ever been in Spain, since I climbed el Teide! Also, my hotel was an hour away from the airport, so I rented a car and drove myself around the island. After many years of not driving, that was mega-awesome, and a tad terrifying as well (I’ll give details in a future post).

This is going down the hill, with the cable car:

The whole scenery in the National Park is super incredible–it definitely looks like something out of this world. Since the rocks are of volcanic origin they have very interesting textures, and are super lightweight, so it was funny to go picking random stones up and realising how little they weighed.

Also the vegetation and fauna were unlike most of what I’d seen before. In particular, the lizards were ENORMOUS. I would be walking, hear huge noises in the dry leaves, turn around expecting to find a dog or a cat, only to find a huge lizard looking at me. What do they feed them? Maybe it’s better not to know… I’ll leave you with this not-so-little fella:

Mozilla Reps Community: Reps Weekly Call – October 16th 2014

Last Thursday we had our regular weekly call about the Reps program; this time we moved it one hour later to avoid some conflicts and allow Reps on the West Coast to join us early in their morning.

Summary

  • Council elections this weekend.
  • Tech 4 Africa.
  • AdaCamp – Post event.
  • Mozfest – updates.
  • Firefox OS Bus.
  • Get Involved Re-design.

Detailed notes

AirMozilla video

https://air.mozilla.org/reps-weekly-20141016/

Don’t forget to comment about this call on Discourse and we hope to see you next week!

Christian Heilmann: Why Microsoft matters more than we think

I’m guilty of it myself, and I see it a lot: making fun of Microsoft in a presentation. Sure, it is easy to do and gets a laugh every time, but it is also a cheap shot and – maybe – more destructive to our goals than we think.

is it HTML5? if it doesn't work in IE, it is joke

Let’s recap a bit. Traditionally, Microsoft has not played nice. It destroyed other companies, it kept things closed that open source could have benefited from, and it tried to force a monoculture onto something that was born open and free: the web.

As standards-conscious web developers, we always found IE, with its much slower adoption rate of newer versions, to be the bane of our existence. It just is not a simple thing to upgrade a browser when it is an integral part of the operating system. This is exacerbated by the fact that newer versions of Windows just weren’t exciting, or meant that a company would have to spend a lot of money buying new software and hardware and re-educate a lot of people. A massive investment that wasn’t worth it for a company just to stop the web design department from whining.

Let’s replace IE then!

Replacing IE also turned out to be less easy than we thought, as the “this browser is better” argument just didn’t work when the internal tools you use are broken in them. Chrome Frame was an incredible feat of engineering and – despite even being possible to roll out at the server level – had the adoption rate of Halal Kebabs at a Vegan festival.

Marketing is marketing. Don’t try to understand it

It also seems fair to poke fun at Microsoft when you see that some of their marketing is at times painful. Bashing your competition is never a clever idea in my book, and neither is building shops that look almost exactly like your main competitor’s, right next to theirs. You either appear desperate or grumpy.

Other things they do

The thing, though, is that if you look closely and admit to yourself that what we call our community is a tiny part of the overall market, then Microsoft has a massive part to play in doing good in our world. And they are not cocky any longer; they are repentant. Not all departments, not all people, and it will be easy to find examples, but as a whole I get a good vibe from them, without it being all marketing driven.

Take a look at the great tools provided at Modern.ie that allow you to test across browsers. Take a look at status.modern.ie, which – finally – gives you a clear insight into what new technology IE supports or the team is working on. Notice especially that this is not only for Explorer – if you expand the sections you get an up-to-date cross-browser support chart linked to the bugs in their trackers.

status of different web technologies provided by Microsoft

This is a lot of effort, and together with caniuse.com it makes it easier for people to decide whether looking into a technology is already worthwhile or not.

Reaching inside corporations

And this to me is the main point why Microsoft matters. They are the only ones that really reach the “dark matter” developers they created in the past. The ones that don’t read hacker news every morning and jump on every new experimental technology. The ones that are afraid of using newer features of the web as it might break their products. The ones that have a job to do and don’t see the web as a passion and a place to discuss, discard, hype and promote and troll about new technologies. And also the ones who build the products millions of people use every day to do their non-technology related jobs. The booking systems, the CRM systems, the fiscal data tools, all the “boring” things that really run our lives.

We can moan and complain about all our great new innovations taking too long to be adopted. Or we can be open to feeding the people who talk to those afraid of trying new things with the information they need.

Let’s send some love and data

I see Microsoft not as the evil empire any longer. I see them as a clearing house to graduate experimental cool web technology into something that is used in the whole market. Chances are that people who use Microsoft technologies are also audited and have to adhere to standard procedures. There is no space for wild technology goose chases there. Of course, you could see this as fundamentally broken – and I do to a degree as well – but you can’t deny that these practices exist. And that they are not going to go away any time soon.

With this in mind, I’d rather have Microsoft as a partner in crime with an open sympathetic ear than someone who doesn’t bother playing with experimental open technology of competitors because these don’t show any respect to begin with.

If we want IT to innovate and embrace new technologies and make them industrial strength we need an ally on the inside. That can be Microsoft.

Kartikaya Gupta: Google-free android usage

When I switched from using a BlackBerry to an Android phone a few years ago, it really irked me that the only way to keep my contacts info on the phone was to also let Google sync them into their cloud. This may not be true universally (I think some Samsung phones will let you store contacts to the SD card) but it was true for the phone I was using then and is true on the Nexus 4 I'm using now. It took a lot of painful digging through Android source and googling, but I successfully ended up writing a bunch of code to get around this.

I've been meaning to put up the code and post this for a while, but kept procrastinating because the code wasn't generic/pretty enough to publish. It still isn't but it's better to post it anyway in case somebody finds it useful, so that's what I'm doing.

In a nutshell, what I wrote is an Android app that includes (a) an account authenticator, (b) a contacts sync adapter and (c) a calendar sync adapter. On a stock Android phone this will allow you to create an "account" on the device and add contacts/calendar entries to it.

Note that I wrote this to interface with the way I already have my data stored, so the account creation process actually tries to validate the entered credentials against a webhost, and the contacts sync adapter is actually a working one-way sync adapter that will download contact info from a remote server in vCard format and update the local database. The calendar sync adapter, though, is just a dummy. You're encouraged to rip out the parts that you don't want and use the rest as you see fit. It's mostly meant to be a working example of how this can be accomplished.

The net effect is that you can store contacts and calendar entries on the device so they don't get synced to Google, but you can still use the built-in contacts and calendar apps to manipulate them. This benefits from much better integration with the rest of the OS than if you were to use a third-party contacts or calendar app.

Source code is on Github: staktrace/pimple-android.

Kalpa Welivitigoda: Light Level Meter | Firefox OS App

Light Level Meter [1] is a Firefox OS app I developed to demonstrate the use of the Mozilla WebAPI [2]. The app measures the ambient light level in lux [3] and presents it in real time. It records the max and min values and plots the variation of the light level over time.

I've made use of DeviceLightEvent [4] to get the current ambient light level from the light sensor in the device (I have tested it with a Keon [5]). The real-time chart is implemented using Smoothie Charts [6], which is a simple, easy-to-use JavaScript charting library for streaming data.

Measuring the ambient light level has many uses. One is that it could be used to adjust the brightness of the electronic visual displays found in many of the devices we use today, such as mobile phones and tablets. With such adjustments based on the ambient light level, we could save energy while delivering a comfortable reading experience to the user.

Another use of measuring the ambient light level is in electrical lighting design. For example, the light level recommended for reading is different from that recommended for hand tailoring. Recommended light levels for building design in Sri Lanka can be found on page 38 of the "Code of Practice for Energy Efficient Buildings in Sri Lanka" [7].

Source code of Light Level Meter [8].

[1] https://marketplace.firefox.com/app/light-level-meter
[2] https://wiki.mozilla.org/WebAPI
[3] http://en.wikipedia.org/wiki/Lux
[4] https://developer.mozilla.org/en-US/docs/Web/API/DeviceLightEvent
[5] http://en.wikipedia.org/wiki/GeeksPhone_Keon
[6] http://smoothiecharts.org/
[7] http://www.energy.gov.lk/pdf/Building%20CODE.pdf
[8] https://github.com/callkalpa/callkalpa.github.io/tree/master/LightLevelMeter

Erik Vold: Jetpack Pro Tip - Using JPM on Travis CI

First, enable Travis on your repo.

Then, add the following .travis.yml file to the repo:

This will download Firefox nightly, install jpm, and run jpm test -v on your JPM based Firefox add-on.

Examples

Add-ons

Third Party NPM Modules

Asa Dotzler: Private Browsing Coming Soon to Firefox OS

This week, the team landed code changes for Bug 832700 – Add private browsing to Firefox OS. This was the back-end implementation in Gecko, and we still have to determine how this will surface in the front end. That work is tracked at Bug 1081731 - Add private browsing to Firefox OS in Gaia.

We also got a couple of nice fixes to one of my favorite new features, the still experimental “app grouping” feature for the Firefox OS home screen. The fixes for Bug 1082627 and Bug 1082629 ensure that the groups align properly and have the right sizes. You can enable this experimental feature in settings -> developer -> homescreen -> app grouping.

There’s lots going on every day in Firefox OS development. I’ll be keeping y’all up to date here and on Twitter.

 

 

Frédéric Harper: I’m leaving Mozilla, looking for a new challenge

Copyright: Eva Blue https://flic.kr/p/nDrPAL

January 1st (or earlier if needed) will be my last day as a Senior Technical Evangelist at Mozilla. I truly believe in Mozilla’s mission, and I’ll continue to share my passion for the open web, but this time as a volunteer. From now on, I’ll be on the lookout for a new challenge.

I want to thank my rock star team for everything: Havi Hoffman, Jason Weathersby, Robert Nyman, and Christian Heilmann. I also want to thank Mark Coggins for his strong leadership as my manager. It was a real pleasure to work with you all! Last, but not least, thanks to all Mozillians, and continue the good work: let’s keep in touch!

What’s next

I’m now reflecting on what will be next for me, and I’m open to discussing all opportunities. With ten years as a software developer and four years as a technical evangelist in my backpack, here are some ideas I have in mind (I’m not limited to these), in no particular order:

  • Principal Technical Evangelist for a product/service/technology I believe in;
  • General manager of a startup accelerator program;
  • CTO of a startup.

I have no issue with travelling extensively: I was on the road one-third of last year – speaking in more than twelve countries. I may also be open to relocating, depending on the offer and the country. I like to share my passion on stage – more than 100 talks in the last three years. Also, my book on personal branding for developers will be published by Apress before the end of the year.

I like technology, but I’m not a developer anymore, and I’m not looking to go back to a developer role. I may also be open to a non-technical role, but it needs to target another of my passions, like startups. For the last five years, I’ve been working from home, with no schedule, just end goals to reach. I can’t deal with micro-management, so I need some freedom to be effective. No matter what comes next, it needs to be an interesting challenge, as I have a serial entrepreneur profile: I like to take ideas and make them a reality.

You can find more about my experience on my LinkedIn profile. If you want to grab a coffee or discuss any opportunities, send me an email.

P.S.: I see no value in highlighting the reasons for my departure, but I’m sad to leave, and keep in mind I’m not the only one on my team who resigned. If you have concerns, please send me an email.


David Boswell: Mozillians of the world, unite!

When I got involved with Mozilla in 1999, it was clear that something big was going on. The mozilla.org site had a distinctly “Workers of the world, unite!” feel to it. It caught my attention and made me interested in finding out more.

The language on the site had the same revolutionary feel as the design. One of the pages talked about Why Mozilla Matters and it was an impassioned rallying cry for people to get involved with the audacious thing Mozilla was trying to do.

“The mozilla.org project is terribly important for the state of open-source software. [...] And it’s going to be an uphill battle. [...] A successful mozilla.org project could be the lever that moves a dozen previously immobile stones. [...] Maximize the opportunity here or you’ll be kicking yourself for years to come.”

With some minor tweaks, these words are still true today. One change: we call the project just Mozilla now instead of mozilla.org. Our mission today is also broader than creating software: we also educate people about the web, advocate to keep the Internet open, and more.

Another change is that our competition has adopted many of the tactics of working in the open that we pioneered. Google, Apple and Microsoft all have their own open source communities today. So how can we compete with companies that are bigger than us and are borrowing our playbook?

We do something radical and audacious. We build a new playbook. We become pioneers of 21st-century participation. We tap into the passion, skills and expertise of people around the world better than anyone else. We build the community that will give Mozilla the long-term impact that Mitchell spoke about at the Summit.

Mozilla just launched the Open Standard site and one of the first articles posted is “Struggle For An Open Internet Grows”. This shows how the challenges of today are not the same challenges we faced 16 years ago, so we need to do new things in new ways to advance our mission.

If the open Internet is blocked or shut down in places, let’s build communities on the ground that turn it back on. If laws threaten the web, let’s make that a public conversation. If we need to innovate to be relevant in the coming Internet of Things, let’s do that.

Building the community that can do this is work we need to start on. What doesn’t serve our community any more? What do we need to do that we aren’t? What works that needs to get scaled up? Mozillians of the world, unite and help answer these questions.


Daniel Stenberg: curl is no POODLE

Once again the internet flooded over with reports and alerts about a vulnerability with a funny name: POODLE. If you have even the slightest interest in this sort of stuff you’ve already grown tired and bored of everything that’s been written about this, so why on earth do I have to pile on and add to the pain?

This is my way of explaining how POODLE affects or doesn’t affect curl, libcurl and the huge amount of existing applications using libcurl.

Is my application using HTTPS with libcurl or curl vulnerable to POODLE?

No. POODLE really is a browser-attack.

Motivation

The POODLE attack is a combination of several separate pieces that when combined allow attackers to exploit it. The individual pieces are not enough stand-alone.

SSLv3 is getting a lot of heat now since POODLE must be able to downgrade a connection from TLS to SSLv3 in order to work. Downgrade in a fairly crude way – and in libcurl, only builds that use NSS as the TLS backend support this way of downgrading the protocol level.

Then, if an attacker manages to downgrade to SSLv3 (both the client and server must allow this) and gets to use the sensitive block cipher of that protocol, it must maintain a connection to the server and then retry many similar requests to the server in order to work out details of the request – to figure out secrets it shouldn’t be able to. This would typically be done using JavaScript in a browser, and really only HTTPS allows this, so no other SSL-using protocol can be exploited like this.

For the typical curl user or a libcurl user, there’s A) no javascript and B) the application already knows the request it is doing and normally doesn’t inject random stuff from 3rd party sources that could be allowed to steal secrets. There’s really no room for any outsider here to steal secrets or cookies or whatever.

How will curl change

There’s no immediate need to do anything as curl and libcurl are not vulnerable to POODLE.

Still, SSLv3 is long overdue for retirement and is not really a modern protocol (TLS 1.0, its successor, had its RFC published in 1999), so in order to really avoid the risk that it will be possible to exploit this protocol one way or another, now or later, using curl/libcurl, we will disable SSLv3 by default in the next curl release. For all TLS backends.

Why? Just to be extra super cautious, and because this attack helped us remember that SSLv3 is old and should be allowed to die.

If possible, explicitly requesting SSLv3 should still be possible so that users can still work with their legacy systems in dire need of upgrade but placed in corners of the world that every sensible human has long since forgotten or just ignored.
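
For completeness, this is roughly what “explicitly requesting” a protocol version looks like from an application using libcurl, sketched here through the pycurl bindings. The URL is a placeholder, and the snippet assumes a pycurl recent enough to expose these CURLOPT_SSLVERSION constants.

import pycurl
from io import BytesIO

buf = BytesIO()
c = pycurl.Curl()
c.setopt(pycurl.URL, "https://legacy.example.com/")  # placeholder URL
c.setopt(pycurl.WRITEFUNCTION, buf.write)

# Require TLS 1.0 or newer, ruling out SSLv3 regardless of library defaults.
c.setopt(pycurl.SSLVERSION, pycurl.SSLVERSION_TLSv1)

# For a legacy server that only speaks SSLv3, an application could instead opt
# in deliberately (assuming the backing TLS library still allows it):
# c.setopt(pycurl.SSLVERSION, pycurl.SSLVERSION_SSLv3)

c.perform()
c.close()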

In-depth explanations of POODLE

I especially like the ones provided by PolarSSL and GnuTLS, possibly due to their clear “distance” from browsers.

Justin Dolske: Sans Flash

I upgraded to a new MacBook about a week ago, and thought I’d use the opportunity to try living without Flash for a while. I had previously done this two years ago (for my last laptop upgrade), and I lasted about a week before breaking down and installing it. In part because I ran into too many sites that needed Flash, but the main reason was that the adoption and experience of HTML5 video wasn’t great. In particular, the HTML5 mode on YouTube was awful — videos often stalled or froze. (I suspect that was an issue on YouTube’s end, but the exact cause didn’t really matter.) So now that the Web has had a few additional years to shift away from Flash, I wanted to see if the experience was any better.

The short answer is that I’m pleased (with a few caveats). The most common Flash usage for me had been the major video sites (YouTube and Vimeo), and they now have HTML5 video support that’s good. YouTube previously had issues where they still required the use of Flash for some popular videos (for ads?), but either they stopped or AdBlock avoids the problem.

I was previously using Flash in click-to-play mode, which I found tedious. On the whole, the experience is better now — instead of clicking a permission prompt, I find myself just happy to not be bothered at all. Most of the random Flash-only videos I encountered (generally news sites) were not worth the time anyway, and on the rare occasion I do want to see one it’s easy to find an equivalent on YouTube. I’m also pleased to have run across very few Flash-only sites this time around. I suspect we can thank the rise of mobile (thanks iPad!) for helping push that shift.

There are a few problem sites, though, which so far I’m just living with.

Ironically, the first site I needed Flash for was our own Air Mozilla. We originally tried HTML5, but streaming at scale is (was?) a hard problem, so we needed a solution that worked. Which meant Flash. It’s unfortunate, but that’s Mozilla pragmatism. In the meantime, I just cheat and use Chrome (!) which comes with a private copy of Flash. Facebook (and specifically the videos people post/share) were the next big thing I noticed, but… I can honestly live without that too. Sorry if I didn’t watch your recent funny video.

I will readily admit that my Web usage is probably atypical. I rarely play online Flash games, which are probably close to video in terms of usage. And I’m willing to put up with at least a little bit of pain to avoid Flash, which isn’t something it’s fair to expect of most users.

But so far, so good!

[Coincidental aside: Last month I removed the Plugin Finder Service from Firefox. So now Firefox won't even offer to install a plugin (like Flash) that you don't have installed.]


Asa Dotzler: Firefox OS 2.0 Pre-release for Flame

About 4,000 of y’all have a Flame Firefox OS reference phone. This is the developer phone for Firefox OS. If you’re writing apps or contributing directly to the open source Firefox OS project, Flame is the device you should have.

The Flame shipped with Firefox OS 1.3 and we’re getting close to the first major update for the device, Firefox OS 2.0. This will be a significant update with lots of new features and APIs for app developers and for Firefox OS developers. I don’t have a date to share with y’all yet, but it should be days and not weeks.

If you’re like me, you cannot wait to see the new stuff. With the Flame reference phone, you don’t have to wait. You can head over to MDN today and get a 2.0 pre-release base image, give that a whirl, and report any problems to Bugzilla. You can even flash the latest 2.1 and 2.2 nightly builds to see even further into the future.

If you don’t have a Flame yet, and you’re planning on contributing testing or coding to Firefox OS or to write apps for Firefox OS, I encourage you to get one soon. We’re going to be wrapping up sales in about 6 weeks.

Asa Dotzler: I’m Back!

PROTIP: Don’t erase the Android phone with your blog’s two-factor authentication setup to see if you can get Firefox OS running on it unless you are *sure* you have printed out your two-factor back-up codes. Sort of thinking you probably printed them out is not the same thing as being sure :-)

Thank you to fellow Tennessean, long-time Mozillian, and WordPress employee Daryl Houston for helping me get my blog back.

Daniel Stenberg: FOSS them students

On October 16th, I visited DSV at Stockholm University, where I had the pleasure of holding a talk and discussion with students (and a few teachers) under the topic Contribute to Open Source. Around 30 people attended.

Here are the slides I used; as usual they may not tell the full story stand-alone without the talk, but there was no recording made and I talked in Swedish anyway…

Contribute to Open Source from Daniel Stenberg

Julien Vehent: Mitigating Poodle SSLv3 vulnerability on a Go server

If you run a Go server that supports SSL/TLS, you should update your configuration to disable SSLv3 today. The sample code below sets the minimal accepted version to TLSv1, and reorganizes the default ciphersuite to match Mozilla's Server Side TLS guidelines.

Thank you to @jrconlin for the code cleanup!

package main

import (
    "crypto/rand"
    "crypto/tls"
    "fmt"
)

func main() {
    certificate, err := tls.LoadX509KeyPair("server.pem", "server.key")
    if err != nil {
        panic(err)
    }
    config := tls.Config{
        Certificates:             []tls.Certificate{certificate},
        MinVersion:               tls.VersionTLS10,
        PreferServerCipherSuites: true,
        CipherSuites: []uint16{
            tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
            tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
            tls.TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,
            tls.TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,
            tls.TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,
            tls.TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,
            tls.TLS_RSA_WITH_AES_128_CBC_SHA,
            tls.TLS_RSA_WITH_AES_256_CBC_SHA,
            tls.TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,
            tls.TLS_RSA_WITH_3DES_EDE_CBC_SHA},
    }
    config.Rand = rand.Reader

    netlistener, err := tls.Listen("tcp", "127.0.0.1:50443", &config)
    if err != nil {
        panic(err)
    }
    // tls.Listen above already returns a TLS-enabled listener, so wrapping it
    // again with tls.NewListener would layer TLS inside TLS; accept from it directly.
    fmt.Println("I am listening...")
    for {
        newconn, err := netlistener.Accept()
        if err != nil {
            fmt.Println(err)
            continue
        }
        fmt.Printf("Got a new connection from %s. Say Hi!\n", newconn.RemoteAddr())
        newconn.Write([]byte("ohai"))
        newconn.Close()
    }
}

Run the server above with $ go run tls_server.go and test the output with cipherscan:

$ ./cipherscan 127.0.0.1:50443
........
Target: 127.0.0.1:50443

prio  ciphersuite                  protocols              pfs_keysize
1     ECDHE-RSA-AES128-GCM-SHA256  TLSv1.2                ECDH,P-256,256bits
2     ECDHE-RSA-AES128-SHA         TLSv1,TLSv1.1,TLSv1.2  ECDH,P-256,256bits
3     ECDHE-RSA-AES256-SHA         TLSv1,TLSv1.1,TLSv1.2  ECDH,P-256,256bits
4     AES128-SHA                   TLSv1,TLSv1.1,TLSv1.2
5     AES256-SHA                   TLSv1,TLSv1.1,TLSv1.2
6     ECDHE-RSA-DES-CBC3-SHA       TLSv1,TLSv1.1,TLSv1.2  ECDH,P-256,256bits
7     DES-CBC3-SHA                 TLSv1,TLSv1.1,TLSv1.2

Certificate: UNTRUSTED, 2048 bit, sha1WithRSAEncryption signature
TLS ticket lifetime hint: None
OCSP stapling: not supported
Server side cipher ordering

Gregory Szorc: The Rabbit Hole of Using Docker in Automated Tests

Warning: This post is long and rambling. There is marginal value in reading beyond the first few paragraphs unless you care about Docker.

I recently wrote about how Mozilla tests version control. In this post, I want to talk about the part of that effort that consumed the most time: adding Docker support to the test harness.

Introducing the Problem and Desired End State

Running Docker containers inside tests just seems like an obvious thing you'd want to do. I mean, wouldn't it be cool if your tests could spin up MySQL, Redis, Cassandra, Nginx, etc inside Docker containers and test things against actual instances of the things running in your data centers? Of course it would! If you ask me, this approach beats mocking because many questions around accuracy of the mocked interface are removed. Furthermore, you can run all tests locally, while on a plane: no data center or staging environment required. How cool is that! And, containers are all isolated so there's no need to pollute your system with extra packages and system services. Seems like wins all around.

When Mozilla started adding customizations to the Review Board code review software in preparation for deploying it at Mozilla as a replacement for Bugzilla's Splinter, it quickly became apparent that we had a significant testing challenge ahead of us. We weren't just standing up Review Board and telling people to use it: we were integrating user authentication with Bugzilla, having Review Board update Bugzilla after key events, and driving the initiation of code review in Review Board by pushing code to a Mercurial server. That's 3 user-visible services all communicating with each other to expose a unified workflow. It's the kind of thing testing nightmares are made of.

During my early involvement with the project, I recognized the challenge ahead and was quick to insist that we write automated tests for as much as possible. I insisted that all the code (there are multiple extensions to Review Board, a Mercurial hook, and a Mercurial extension) live under one common repository and share testing. That way we could tinker with all the parts easily and test them in concert without having to worry about version sync. We moved all the code to the version-control-tools repository and Review Board was the driving force behind improvements to the test harness in that repository. We had Mercurial .t tests starting Django dev servers hosting Review Board running from per-test SQLite databases and all was nice. Pretty much every scenario involving the interaction between Mercurial and Review Board was tested. If you cared about just these components, life was happy.

A large piece of the integration story was lacking in this testing world: Bugzilla. We had somewhat complex code for having Review Board and Bugzilla talk to each other but no tests for it because nobody had yet hooked Bugzilla up to the tests. As my responsibilities in the project expanded from covering just the Mercurial and Review Board interaction to Bugzilla as well, I again looked at the situation and thought there's a lot of complex interaction here and alpha testing has revealed the presence of many bugs: we need a better testing story. So, I set out to integrate Bugzilla into the test harness.

My goals were for Review Board tests to be able to make requests against a Bugzilla instance configured just like bugzilla.mozilla.org, to allow tests to execute concurrently (don't make developers wait on machines), for tests to run as quickly as possible, to run tests in an environment as similar to production as possible, and to be able to run tests from a plane or train or anywhere without internet connectivity. I was unwilling to budge on these core testing requirements because they represent what's best from test accuracy and developer productivity standpoints: you want your tests to be representative of the real world and you want to enable people to hack on this service anywhere, anytime, and not be held back by tests that take too long to execute. Move fast and don't break things.

Before I go on, it's time for a quick aside on tolerable waiting times. Throughout this article I talk about minimizing the run time of tests. This may sound like premature optimization. I argue it isn't, at least not if you are optimizing for developer productivity. There is a fair bit of academic research in this area. The paper "A study on tolerable waiting time: how long are Web users willing to wait?" gets cited a lot. It says 2 seconds for web content. If you read a few paragraphs in, it references other literature. They disagree on specific thresholds, but one thing is common: the thresholds are typically low - just a few seconds. The latencies I deal with are all longer than what research says leads to badness. When given a choice, I want to optimize workflows for what humans are conditioned to tolerate. If I can't do that, I've failed and the software will be ineffective.

The architecture of Bugzilla created some challenges and eliminated some implementation possibilities. First, I wasn't using just any Bugzilla: I was using Mozilla's branch of Bugzilla that powers bugzilla.mozilla.org. Let's call it BMO. I could try hosting it from local SQLite files and running a local, Perl-based HTTP server (Bugzilla is written in Perl). But my experience with Perl and my takeaways from talking to the BMO admins were that pain would likely be involved. Plus, this would be a departure from test accuracy. So, I would be using MySQL, Apache HTTPd, and mod_perl, just like BMO uses them in production.

Running Apache and MySQL is always a... fun endeavor. It wasn't a strict requirement, but I also highly preferred that the tests didn't pollute the system they ran on. In other words, having tests connect to an already-running MySQL or Apache server felt like the wrong solution. That's just one more thing people must set up and run locally to run the tests. That's just one more thing that could differ from production and cause bad testing results. It felt like a dangerous approach. Plus, there's the requirement to run things concurrently. Could I have multiple tests talking to the same MySQL server concurrently? They'd have to use separate databases so they don't conflict. That's a possibility. Honestly, I didn't entertain the thought of running Apache and MySQL manually for too long. I knew about this thing called Docker and that it theoretically fit my use case perfectly: construct building blocks for your application and then dynamically hook things up. Perfect. I could build Docker containers for all the required services and have each test start a new, independent set of containers for just that test.

So, I set out integrating Docker into the version-control-tools test harness. Specifically, my goal was to enable the running of independent BMO instances during individual tests. It sounded simple enough.

What I didn't know was that integrating a Dockerized BMO into the test harness would take the better part of 2 weeks. And it's still not up to my standards. This post is the story about the trials and tribulations I encountered along the way. I hope it serves as a warning and potentially a guide for others attempting similar feats. If any Docker developers are reading, I hope it gives you ideas on how to improve Docker.

Running Bugzilla inside Docker

First things first: to run BMO inside Docker I needed to make Docker containers for BMO. Fortunately, David Lawrence has prior art here. I really just wanted to take that code, dump it into version-control-tools and call it a day. In hindsight, I probably should have done that. Instead, armed with the knowledge of the Docker best practice of one container per service and David Lawrence's similar wishes to make his code conform to that ideal, I decided to spend some time fixing David's code so that MySQL and Apache were in separate containers, not part of a single container running supervisord. Easy enough, right?

It was relatively easy extracting the MySQL and Apache parts of BMO into separate containers. For MySQL, I started with the official MySQL container from the Docker library and added a custom my.cnf. Simple enough. For Apache, I just copied everything from David's code that wasn't MySQL. I was able to manually hook the containers together using the Docker CLI. It sort of just worked. I was optimistic this project would only take a few hours.

A garbage collection bug in Docker

My first speed bump came as I was iterating on Dockerfiles. All of a sudden I get an error from Docker that it is out of space. Wat? I look at docker images and don't see anything too obvious eating up space. What could be going on? At this point, I'm using boot2docker to host Docker. boot2docker is this nifty tool that allows Windows and OS X users to easily run Docker (Docker requires a Linux host). boot2docker spins up a Linux virtual machine running Docker and tells you how to point your local docker CLI interface at that VM. So, when Docker complains it is out of space, I knew immediately that the VM must be low on space. I SSH into it, run df, and sure enough, the VM is nearly out of space. But I looked at docker images -a and confirmed there's not enough data to fill the disk. What's going on? I can't find the issue right now, but it turns out there is a bug in Docker! When running Docker on aufs filesystems (like boot2docker does), Docker does not always remove data volume containers when deleting a container. It turns out that the MySQL containers from the official Docker library were creating a data-only container to hold persistent MySQL data that outlives the container itself. These containers are apparently light magic. They are containers that are attached to other containers, but they don't really show up in the Docker interfaces. When you delete the host container, these containers are supposed to be garbage collected. Except on aufs, they aren't. My MySQL containers were creating 1+ GB InnoDB data files on start and the associated data containers were sitting around after container deletion, effectively leaking 1+ GB every time I created a MySQL container, quickly filling the boot2docker disk. Derp.

I worked around this problem by forking the official MySQL container. I didn't need persistent MySQL data (the containers only need to live for one invocation - for the lifetime of a single test), so I couldn't care less about persisted data volumes. So, I changed the MySQL container to hold its data locally, not in a data volume container. The solution was simple enough. But it took me a while to identify the problem. Here I was seeing Docker do something extremely stupid. Surely my understanding of Docker was wrong and I was doing something stupid to cause it to leak data. I spent hours digging through the documentation to make sure I was doing things exactly as recommended. It wasn't until I started an Ubuntu VM and tried the same thing there that I realized this looked like a bug in boot2docker. A few Google searches later, I found a comment hiding at the bottom of an existing GitHub issue that pins aufs as the culprit. And here I thought Docker had reached 1.0 and wouldn't have bad bugs like this. I certainly wouldn't expect boot2docker to be shipping a VM with a sub-par storage driver (shouldn't it be using devicemapper or btrfs instead?). Whatever.

Wrangling with Mozilla's Branch of Bugzilla

At this point, I've got basic Docker containers for MySQL and Apache+mod_perl+Bugzilla being created. Now, I needed to convert from vanilla Bugzilla to BMO. Should be straightforward. Just change the Git remote URL and branch to check out. I did this and all of a sudden my image builds started encountering errors! It turns out that the BMO code base doesn't work on a fresh database! Fortunately, this is a known issue and I've worked around it previously. When I tackled it a few months ago, I spent a handful of hours dissecting this problem. It wasn't pretty. But this time I knew what to do. I even had a Puppet manifest for installing BMO on a fresh machine. So, I just needed to translate that Puppet config into Dockerfile commands. No big deal, right? Well, when I did that Puppet config a few months ago, I based it on Ubuntu because I'm more familiar with Debian-based distros and figured Ubuntu would be the easiest since it tends to have the largest package diversity. Unfortunately, David's Docker work is based on Fedora. So, I spent some time converting the Dockerfile to Ubuntu rather than trying to port things to Fedora. Arguably the wrong decision, since Mozilla runs the Red Hat flavor of Linux distributions in production. But I was willing to trade accuracy for time here, having lost time dealing with the aufs bug.

Unfortunately, I under-estimated how long it would take to port the image to Ubuntu. It didn't take long from a code change perspective. Instead, most of the time was spent waiting for Docker to run the commands to build the image. In the final version, Apt is downloading and installing over 250 packages. And Bugzilla's bootstrap process installs dozens of packages from CPAN. Every time I made a small change, I invalidated Docker's image building cache, causing extreme delays while waiting for Apt and CPAN to do their thing. This experience partially contributed to my displeasure with how Docker currently handles image creation. If Docker images were composed of pre-built pieces instead of stacked commands, my cache hit rate would have been much higher and I would have converted the image in no time. But no, that's not how things work. So I lost numerous hours over this 2-week process waiting for Docker images to perform operations I had already done elsewhere dozens of times before.

Docker Container Orchestration

After porting the Bugzilla image to Ubuntu and getting BMO to bootstrap in a manually managed container (up to this point I'm using the docker CLI to create images, start containers, etc), it was time to automate the process so that tests could run the containers. At this time, I started looking for tools that performed multi-container orchestration. I had multiple containers that needed to be treated as a single logical unit, so I figured I'd use an existing tool to solve this problem for me. Don't reinvent the wheel unless you have to, right? I discovered Fig, which seemed to fit the bill. I read that it is being integrated into Docker itself, so it must be best of breed. Even if it weren't, its future seemed more certain than that of other tools. So, I stopped my search and used Fig without much consideration for alternatives.

Lack of a useful feature in Fig

I quickly whipped up a fig.yml and figured it would just work. Nope! Starting the containers from scratch using Fig resulted in an error. I wasn't sure what the error was at first because Fig didn't tell me. After some investigation, I realized that my bmoweb container (the container holding Apache + BMO code) was failing in its entrypoint command (that's a command that runs when the container starts up, but not the primary command a container runs - that's a bit confusing I know - read the docs). I added some debug statements and quickly realized that Bugzilla was erroring out while connecting to MySQL. Strange, I thought. Fig is essentially a DSL around manual docker commands, so I checked by typing the equivalent commands into the shell myself. No error. Again on a new set of containers. No error. I thought maybe my environment variable handling was wrong - that the dynamically allocated IP address and port number of the linked MySQL container being passed to the bmoweb container weren't getting honored. I added some logging to disprove that theory. The wheels inside my brain spun for a little bit. And, aided by some real-time logging, I realized I was dealing with a race condition: Fig was starting the MySQL and bmoweb containers concurrently and bmoweb was attempting to access the MySQL server before MySQL had fully initialized and started listening on its TCP port! That made sense. And I think it's a reasonable optimization for Fig to start containers concurrently to speed up start time. But surely a tool that orchestrates different containers has considered the problem of dependencies and has a mechanism to declare them to prevent these race conditions. I check the online docs and there's nothing to be found. A red panda weeps. So, I change the bmoweb entrypoint script to wait until it can open a TCP socket to MySQL before actually using MySQL and sure enough, the race condition goes away and the bmoweb container starts just fine!
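
For what it's worth, the workaround amounts to a tiny poll loop before the entrypoint does anything that needs the database. Here is a minimal Python sketch of the idea; the environment variable names and the 60 second ceiling are made up for illustration and aren't necessarily what version-control-tools uses:

import socket
import time

def wait_for_tcp(host, port, timeout=60):
    """Poll until a TCP connection to (host, port) succeeds or the timeout expires."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            socket.create_connection((host, port), timeout=1).close()
            return True
        except socket.error:
            time.sleep(0.1)
    return False

# Hypothetical usage in an entrypoint wrapper, before running checksetup.pl:
# import os, sys
# if not wait_for_tcp(os.environ['MYSQL_HOST'], int(os.environ['MYSQL_PORT'])):
#     sys.exit('MySQL never started listening')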

OK, I'm real close now. I can feel it.

Bootstrapping Bugzilla

I start playing around with manually starting and stopping containers as part of a toy test. The good news is things appear to work. The bad news is it is extremely slow. It didn't take long for me to realize that the reason for the slowness is Bugzilla's bootstrap on first run. Bugzilla, like many complex applications, has a first run step that sets up database schema, writes out some files on the filesystem, inserts some skeleton data in the database, creates an admin user, etc. Much to my dismay this was taking a long time. Something on the order of 25 to 30 seconds. And that's on a Haswell with plenty of RAM and an SSD. Oy. The way things are currently implemented would result in a 25 to 30 second delay when running every test. Change 1 line and wait say 25s for any kind of output. Are you kidding me?! Unacceptable. It violated my core goal of having tests that are quick to run. Again, humans should not have to wait on machines.

I think about this problem for like half a second and the solution is obvious: take a snapshot of the bootstrapped images and start instances of that snapshot from tests. In other words, you perform the common bootstrap operations once and only once. And, you can probably do that outside the scope of running tests so that the same snapshot can be used across multiple invocations of the test harness. Sounds simple! To the Docker uninitiated, it sounds like the solution would be to move the BMO bootstrapping into the Dockerfile code so it gets executed at image creation time. Yes, that would be ideal. Unfortunately, when building images via Dockerfile, you can't tell Docker to link that image to another container. Without container linking, you can't have MySQL. Without MySQL, you can't do BMO bootstrap. So, BMO bootstrap must be done during container startup. And in Docker land, that means putting it as part of your entrypoint script (where it was conveniently already located for this reason).

Talking Directly to the Docker API

Of course, the tools that I found that help with Docker image building and container orchestration don't seem to have an answer for this "create a snapshot of a bootstrapped container" problem. I'm sure someone has solved this problem. But in my limited searching, I couldn't find anything. And, I figured the problem would be easy enough to solve manually, so I set about creating a script to do it. I'm not a huge fan of shell script for automation. It's hard to debug and simple things can be hard and hard things can be harder. Plus, why bother with things like parsing command output for relevant data when you can talk to an API directly and get back native types? Since the existing test harness automation in version-control-tools was written in Python, I naturally decided to write some Python to create the bootstrapped images. So, I do a PyPI search and discover docker-py, a Python client library to the Docker Remote API, an HTTP API that the Docker daemon runs and is what the docker CLI tool itself uses to interface with Docker. Good, now I have access to the full power of Docker and am not limited by what the docker CLI may not expose. So, I spent some time looking at source and the Docker Remote API documentation to get an understanding of my new abilities and what I'd need to do. I was pleasantly surprised to learn that the docker CLI is pretty similar to the Remote API and the Python API was similar as well, so the learning curve was pretty shallow. Yay for catching a break!
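
To give a flavor of what this looks like, here is a minimal docker-py sketch in the style of the client as it existed around that time (a docker.Client pointed at the daemon); the image and container names are made up, and newer versions of the library spell some of this differently:

import docker

# boot2docker exposes the daemon over TCP (see DOCKER_HOST); on Linux the
# default UNIX socket works.
client = docker.Client(base_url='unix://var/run/docker.sock')

# Create and start a container from a hypothetical image.
container = client.create_container(image='bmoweb', name='bmoweb-test')
client.start(container['Id'])

# Inspect it to discover the dynamically assigned network settings.
state = client.inspect_container(container['Id'])
print(state['NetworkSettings']['IPAddress'])

# Snapshot ("commit") the container's filesystem as a new image.
snapshot = client.commit(container['Id'], repository='bmoweb-bootstrapped')
print(snapshot['Id'])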

Confusion Over Container Stopping

I wrote some Python for building the BMO images, launching the containers, committing the result, and saving state to disk (so it could be consulted later - preventing a bootstrap by subsequent consumers). This was pleasantly smooth at first, but I encountered some bumps along the way. First, I didn't have a complete grasp on the differences between stop and kill. I was seeing some weird behavior by MySQL on startup and didn't know why. Turns out I was forcefully killing the container after bootstrap via the kill API and this was sending a SIGKILL to MySQL, effectively causing an unclean shutdown. After some documentation reading, I realized stop is the better API - it issues SIGTERM, waits for a grace period, then issues SIGKILL. Issuing SIGTERM made MySQL shut down gracefully and this issue stemming from my ignorance was resolved. (If anyone from Docker is reading, I think the help output for docker kill should mention the forcefulness of the command versus stop. Not all of us remember the relative forcefulness of the POSIX signals and having documentation reinforce their cryptic meaning could help people select the proper command.) A few lines of Python later and I was talking directly to the Docker Remote API, doing everything I needed to do to save (commit in Docker parlance) a bootstrapped BMO environment for re-use among multiple tests.
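
In docker-py terms, the distinction looks roughly like this; the client setup and the container name are hypothetical placeholders:

import docker

client = docker.Client(base_url='unix://var/run/docker.sock')
container_id = 'bmodb-test'  # hypothetical container name or id

# stop: send SIGTERM, wait up to `timeout` seconds for a clean exit, then
# escalate to SIGKILL. This is what lets MySQL flush and shut down cleanly.
client.stop(container_id, timeout=30)

# kill: send SIGKILL (or another signal) immediately. Using this on MySQL
# is what produced the unclean shutdowns.
# client.kill(container_id)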

It was pretty easy to hook the bootstrapped images up to a single test. Just load the bootstrapped image IDs from the config file and start new containers based on them. That's Docker 101 (except I was using Python to do everything).

Concurrent Execution Confuses Bugzilla

Now that I could start Dockerized BMO from a single test, it was time to make things work concurrently. I hooked Docker up to a few tests and launched them in parallel to see what would happen. The containers appeared to start just fine! Great anticipation on my part to design for concurrency from the beginning, I thought. It appeared I was nearly done. Victory was near. So, I changed some tests to actually interact with BMO running from Docker. (Up until this point I was merely starting containers, not doing anything with them.) Immediately I see errors. Cannot connect to Bugzilla http://... connection refused. Huh? It took a few moments, but I realized the experience I had with MySQL starting and this error were very similar. I changed my start BMO containers code to wait for the HTTP server's TCP socket to start accepting connections before returning control and sure enough, I was able to make HTTP requests against Bugzilla running in Docker! Woo!

Next step, make an authenticated query against Bugzilla running in Docker. HTTP request completes... with an internal server error. What?! I successfully browsed BMO from containers days before and was able to log in just fine - this shouldn't be happening. This problem took me ages to diagnose. I traced every step of provisioning and couldn't figure out what was going on. After resorting to print debugging in nearly every component, including Bugzilla's Perl code itself, I found the culprit: Bugzilla wasn't liking the dynamic nature of the MySQL and HTTP endpoints. You see, when you start Docker containers, network addresses change. The IP address assigned to the container is whatever is available to Docker at the time the container was started. Likewise, the IP address and port number of linked services can change. So, your container entrypoint has to deal with this dynamic nature of addresses. For example, if you have a configuration file, you need to update that configuration file on every run with the proper network address info. My Bugzilla entrypoint script was doing this. Or so I thought. It turns out that Bugzilla's bootstrap process has multiple config files. There's an answers file that provides static answers to questions asked when running the bootstrap script (checksetup.pl). checksetup.pl will produce a localconfig file (actually a Perl script) containing all that data. There's also a data/params file containing yet more configuration options. And, the way I was running bootstrap, checksetup.pl refused to update files with new values. I initially had the entrypoint script updating only the answers file and running checksetup.pl, thinking checksetup.pl would update localconfig if the answers changed. Nope! checksetup.pl only appears to update localconfig if localconfig is missing a value. So, here my entrypoint script was, successfully calling checksetup.pl with the proper network values, which checksetup.pl was more than happy to use. But when I started the web application, it used the old values from localconfig and data/params and blew up. Derp. So, to have dynamic MySQL hosts and ports and a dynamic self-referential HTTP URL, I needed to manually update localconfig and data/params during the entrypoint script. The entrypoint script now rewrites Perl scripts during container load to reflect the appropriate values. Oy.
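
To make that concrete, here is a hedged Python sketch of the kind of rewrite the entrypoint has to do on every start; the file path, key names and regular expression are illustrative rather than copied from the real entrypoint script:

import re

def set_localconfig_value(path, key, value):
    """Rewrite a $key = '...'; assignment in Bugzilla's localconfig (a Perl script)."""
    with open(path) as f:
        content = f.read()
    pattern = r"(\$%s\s*=\s*)'[^']*'" % re.escape(key)
    content = re.sub(pattern, r"\g<1>'%s'" % value, content)
    with open(path, 'w') as f:
        f.write(content)

# Hypothetical usage: point Bugzilla at the linked MySQL container before
# starting Apache. (data/params, another Perl data structure, needs a similar
# treatment for the self-referential URL.)
# set_localconfig_value('/var/lib/bugzilla/localconfig', 'db_host', mysql_host)
# set_localconfig_value('/var/lib/bugzilla/localconfig', 'db_port', mysql_port)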

Resource Constraints

At some point I got working BMO containers running concurrently from multiple tests. This was a huge milestone. But it only revealed my next problem: resource constraints. The running containers were consuming gobs of memory and I couldn't run more than 2 or 3 tests concurrently before running out of memory. Before, I was able to run 8 tests concurrently no problem. Well crap, I just slowed down the test harness significantly by reducing concurrency. No bueno.

Some quick investigation revealed the culprit was MySQL and Apache being greedier than they needed to be. MySQL was consuming 1 GB RSS on start. Apache was something like 350 MB. It had been a while since I ran a MySQL server, so I had to scour the net for settings to put MySQL on a diet. The results were not promising. I knew enough about MySQL to know that the answers I found had similar quality to comments on the php.net function documentation circa 2004 (it was not uncommon to see things like SQL injection in the MySQL pages back then - who knows, maybe that's still the case). Anyway, a little tuning later and I was able to get MySQL using a few hundred MB RAM and I reduced the Apache worker pool to something reasonable (maybe 2) to free up enough memory to be able to run tests with the desired concurrency again. If using Docker as part of testing ever takes off, I imagine there will be two flavors of every container: low memory and regular. I'm not running a production service here: I'll happily trade away high-end performance for lower memory usage as long as it doesn't impact my tests too much.

Caching, Invalidating, and Garbage Collecting Bootstrapped Images

As part of iterating on making BMO bootstrap work, I encountered another problem: knowing when to perform a bootstrap. As mentioned earlier, bootstrap was slow: 25 to 30 seconds. While I had reduced the cost of bootstrap to at most once per test suite execution (as opposed to once per test), there was still the potential for a painful 25-30s delay when running tests. Unacceptable! Furthermore, when I changed how bootstrap worked, I needed a way to invalidate the previous bootstrapped image. Otherwise, we may use an outdated bootstrapped image that doesn't represent the environment it needs to and test execution would fail. How should I do this?

Docker has considered this problem and they have a solution: build context. When you do a docker build, Docker takes all the files from the directory containing the Dockerfile and makes them available to the environment doing the building. If you ADD one of these files in your Dockerfile, the image ID will change if the file changes, invalidating the cache used by Docker to build images. So, if I ADDed the scripts that perform BMO bootstrap to my Docker images, Docker would automagically invalidate the built images and force a bootstrap for me. Nice! Unfortunately, docker build doesn't allow you to add files outside of the current directory to the build context. Internet sleuthing reveals that the solution here is to copy things to a temporary directory and run docker build from that. Seriously? Fortunately, I was using the Docker API directly via Python. And that API simply takes an archive of files. And since you can create archives dynamically inside Python using e.g. tarfile, it wasn't too difficult to build proper custom context archives that contained my extra data that could be used to invalidate bootstrapped images. I threw some simple ADD directives into my Dockerfiles and now I had bootstrapped image invalidation!
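
A minimal sketch of that approach, assuming a docker-py version whose build() accepts a caller-supplied context tarball (the fileobj/custom_context arguments); the helper and file names are made up:

import io
import tarfile

def build_with_extra_files(client, dockerfile_bytes, extra_files, tag):
    """Build an image from an in-memory context: a Dockerfile plus extra files
    (a dict mapping archive path -> bytes) whose content should invalidate the cache."""
    buf = io.BytesIO()
    tar = tarfile.open(fileobj=buf, mode='w')
    files = dict(extra_files)
    files['Dockerfile'] = dockerfile_bytes
    for name, data in sorted(files.items()):
        info = tarfile.TarInfo(name=name)
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))
    tar.close()
    buf.seek(0)
    # Changing any file that the Dockerfile ADDs changes the context, which
    # invalidates Docker's build cache and forces a fresh bootstrap downstream.
    return list(client.build(fileobj=buf, custom_context=True, tag=tag))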

To avoid having to perform bootstrap on every test run, I needed a mapping between the base images and the bootstrapped result. I ended up storing this in a simple JSON file. I realize now I could have queried Docker for images having the base image as their parent, since there is supposed to be a 1:1 relationship between them. I may do this as a follow-up.
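
The look-up table itself is nothing fancy; something along these lines (the file name and layout are illustrative, not the actual state file):

import json
import os

STATE_FILE = 'bootstrapped-images.json'  # hypothetical path

def load_state():
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            return json.load(f)
    return {}

def record_bootstrapped(base_image_id, bootstrapped_image_id):
    """Remember which bootstrapped image was committed for a given base image."""
    state = load_state()
    state[base_image_id] = bootstrapped_image_id
    with open(STATE_FILE, 'w') as f:
        json.dump(state, f, indent=2)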

With the look-up table in place, ensuring bootstrapped images were current involved doing a couple of docker builds, finding the bootstrapped images from those base images, and doing the bootstrap if necessary. If everything is up-to-date, docker build finishes quickly and we have less than 1s of delay. Very acceptable. If things aren't current, well, there's not much you can do there if accuracy is important. I was happy with where I was.

Once I started producing bootstrapped images every time the code impacting the generation of that image changed, I ran into a new problem: garbage collection. All those old bootstrapped images were piling up inside of Docker! I needed a way to prune them. Docker has support for associating a repository and a tag with images. Great, I thought, I'll just associate all images with a well-defined repository, leave the tag blank (because it isn't really relevant), and garbage collection will iterate over all images in to-be-pruned repositories and delete all but the most recent one. Of course, this simple solution did not work. As far as I can tell, Docker doesn't really let you have multiple untagged images. You can set a repository with no tag and Docker will automagically assign the latest tag to that image. But the second you create a new image in that repository, the original image loses that repository association. I'm not sure if this is by design or a bug, but it feels wrong to me. I want the ability to associate tags with images (and containers) so I can easily find all entities in a logical set. It seemed to me that repositories facilitated that need (albeit with the restriction of only associating 1 identifier per image). My solution here was to assign type 1 UUIDs to the tag field for each image. This forced Docker to retain the repository association when new images were created. I chose type 1 UUIDs so I can later extract the time component embedded within and do time-based garbage collection, e.g. delete all images created more than a week ago.
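
Here is a sketch of both halves of that trick: tagging with a type 1 UUID and later recovering the embedded timestamp for time-based pruning. The repository name is hypothetical, and the magic constant is the standard offset (in 100-nanosecond ticks) between the UUID epoch of 1582-10-15 and the Unix epoch:

import time
import uuid

def tag_image(client, image_id, repository):
    """Tag an image with a type 1 UUID so the repository association sticks."""
    tag = str(uuid.uuid1())
    client.tag(image_id, repository=repository, tag=tag)
    return tag

def uuid1_to_unix_time(tag):
    """Recover the creation time (seconds since the Unix epoch) embedded in a type 1 UUID."""
    ticks = uuid.UUID(tag).time  # 100 ns intervals since 1582-10-15
    return (ticks - 0x01b21dd213814000) / 1e7

def prune_old_images(client, repository, max_age_days=7):
    cutoff = time.time() - max_age_days * 86400
    for image in client.images(name=repository):
        for repo_tag in image.get('RepoTags', []):
            repo, _, tag = repo_tag.rpartition(':')
            try:
                created = uuid1_to_unix_time(tag)
            except ValueError:
                continue  # not one of our UUID tags
            if repo == repository and created < cutoff:
                client.remove_image(repo_tag)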

Making Things Work in Jenkins/Ubuntu

At about this point, I figured things were working well enough on my boot2docker machine that it was time to update the Jenkins virtual machine / Vagrant configuration to run Docker. So, I hacked up the provisioner to install the docker.io package and tried to run things. First, I had to update code that talks to Docker to know where Docker is in an Ubuntu VM. Before, I was keying things off DOCKER_HOST, which I guess is used by the docker CLI and which boot2docker reminds you to set. Easy enough. When I finally got things talking to Docker, my scripts threw a cryptic error when talking to Docker. Huh? This worked in boot2docker! When in doubt, always check your package versions. Sure enough, Ubuntu was installing an old Docker version. I added the Docker Apt repo to the Vagrant provisioner and tried again. Bingo - working Docker in an Ubuntu VM!

Choice of storage engines

I started building the BMO Docker images and quickly noticed something: building images was horribly slow. Specifically, the part where new images are committed was taking seemingly forever. 5 to 8 seconds or something. Ugh. This wouldn't really have bothered me except that, due to subsequent issues, I found myself changing images enough as part of debugging that image building latency became a huge time sink. I felt I was spending more time waiting for layers to commit than making progress. So, I decided to do something about it. I remembered glancing at an overview of storage options in Docker the week or two prior. I instinctively pinned the difference on different storage drivers between boot2docker and Ubuntu. Sure enough, boot2docker was using aufs and Ubuntu was using devicemapper. OK, now I had identified a potential culprit. Time to test the theory. A few paragraphs into that blog post, I see a sorted list of storage driver priorities. I see aufs first, btrfs second, and devicemapper third. I know aufs has kernel inclusion issues (plus a nasty data leakage bug). I don't want that. devicemapper is slow. I figured the list is ordered for a reason and just attempted to use btrfs without really reading the article. Sure enough, btrfs is much faster at committing images than devicemapper. And, it isn't aufs. While the images were building on btrfs, I glanced over the article and came to the conclusion that btrfs is in fact good enough for me.

So now I'm running Docker on btrfs on Ubuntu and Docker on aufs in boot2docker. Hopefully that will be the last noticeable difference between host environments. After all, Docker is supposed to abstract all this away, right? I wish.

The Mystery of Inconsistent State

It was here that I experienced the most baffling, mind bending puzzle yet. As I was trying to get things working on the Jenkins/Ubuntu VM - things that had already been proved out in boot2docker - I was running into inexplicable issues during creation of the bootstrapped BMO containers. It seemed that my bootstrapped containers were somehow missing data. It appeared as if bootstrap had completed but data written during bootstrap failed to write. You start the committed/bootstrapped image and bootstrap had obviously completed partially, but it appeared to have never finished. Same Docker version. Same images. Same build scripts. Only the host environment was different. Ummmm, Bueller?

This problem had me totally and completely flabbergasted. My brain turned to mush exhausting possibilities. My initial instinct was this was a filesystem buffering problem. Different storage driver (btrfs vs aufs) means different semantics in how data is flushed, right? I once again started littering code with print statements to record the presence or non-presence of files and content therein. MySQL wasn't seeing all its data, so I double and triple check I'm shutting down MySQL correctly. Who knows, maybe one of the options I used to trim the fat from MySQL removed some of the safety from writing data and unclean shutdown is causing MySQL to lose data?

While I was investigating this problem, I noticed an additional oddity: I was having trouble getting reliable debug output from running containers (docker logs -f). It seemed as if I was losing log events. I could tell from the state of a container that something happened, but I was seeing no evidence from docker logs -f that that thing actually happened. Weird! On a hunch, I threw some sys.stdout.flush() calls into my Python scripts, and sure enough my missing output started arriving! Pipe buffering strikes again. So, now we have dirty hacks in all the Python scripts related to Docker to unbuffer stdout to prevent data loss. Don't ask how much time was wasted tracking down bad theories due to stdout output being buffered.
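
The hack itself is tiny. In Python 2 (which is what this era of tooling would have been running on), it looks something like the following, though running the interpreter with -u or flushing after each print works too; treat this as the common idiom rather than the exact lines in version-control-tools:

import os
import sys

# Re-open stdout unbuffered so log lines reach `docker logs` immediately
# instead of sitting in a pipe buffer when stdout is not a TTY.
sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)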

Getting back to the problem at hand, I still had Docker containers seemingly losing data. And it was only happening when Ubuntu/btrfs was the host environment for Docker. I eventually exhausted all leads in my "filesystem wasn't flushed" theory. At some point, I compared the output of docker logs -f between boot2docker and Ubuntu and eventually noticed that the bmoweb container in Ubuntu wasn't printing as much. This wasn't obvious at first because the output from bootstrap on Ubuntu looked fine. Besides, the script that waits for bootstrap to complete waits for the Apache HTTP TCP socket to come alive before it gracefully stops the container and snapshots the bootstrapped result: bootstrap must be completing, ignore what docker logs -f says.

Eventually I hit an impasse and resorted to context-dumping everything on IRC. Ben Kero was around and picked up on something almost immediately. He simply said: ... systemd? I knew almost instantly what he was referring to and knew the theory fit the facts. Do you know what I was doing wrong?

I still don't know exactly what it was, and quite frankly I don't care, but something in my Ubuntu host environment had a trigger on the TCP port the HTTP server would be listening on. Remember, I was detecting bootstrap completion by waiting until a TCP socket could be opened to the HTTP server. As soon as that connection was established, we stopped the containers gracefully and took a snapshot of the bootstrapped result. Except on Ubuntu something was accepting that socket open, giving a false positive to my wait code and triggering early shutdown. Docker issued the signal to stop the container gracefully, but since the container wasn't finished bootstrapping yet, Docker eventually killed it forcefully, leaving bootstrap in a remarkably-consistent-across-runs inconsistent state. Changing the code from waiting on a TCP socket to waiting for a valid HTTP response fixed the problem. And just for good measure, I changed the code waiting on the MySQL server to also try to establish an actual connection to the MySQL application layer, not merely a TCP socket.
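
A Python sketch of the stricter readiness check, waiting for an actual HTTP response instead of a bare TCP connect (the URL handling and timeouts are illustrative):

import time
import urllib2

def wait_for_http(url, timeout=120):
    """Poll until `url` answers with an HTTP response, so a stray listener that
    merely accepts TCP connections can no longer fool the readiness check."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            urllib2.urlopen(url, timeout=1)
            return True
        except urllib2.HTTPError:
            return True  # an HTTP error still means the real server answered
        except Exception:
            time.sleep(0.1)
    return False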

After solving this mystery, I thought there's no way I could be so blind as to not see the container receiving the stop signal during bootstrap. So, I changed things back to prove to myself I wasn't on crack. No matter how hard I tried, I could not get the logs to show that the signal was received. I think what was happening was that my script was starting the container and issuing the graceful stop so quickly that it wasn't captured by log clients. Sure enough, adding some sleeps in the proper places made it possible to catch the events in action. In hindsight, I suppose I could have used docker events to shed some light on this as well. If Docker persisted logs/output from containers and allowed me to scroll back in time, I think this would have saved me. Although, there's a chance my entrypoint script wouldn't have informed me about the received signal. Perhaps checksetup.pl was ignoring it? What I really need is a unified event + log stream from Docker containers so I can debug exactly what's going on.

Everything is Working, Right?

After solving the inconsistent bootstrap state problem, things were looking pretty good. I had BMO bootstrapping and running from tests on both boot2docker and Ubuntu hosts. Tests were seeing completely independent environments and there were no race conditions. I was nearly done.

So, I started porting more and more tests to Docker. I started running tests more and more. Things worked. Most of the time. But I'm still frustrated by periodic apparent bugs in Docker. For example, our containers periodically fail to shut down. Our images periodically fail to delete.

During container shutdown and delete at the end of tests, we periodically see error messages like the following:

docker.errors.APIError: 500 Server Error: Internal Server Error ("Cannot destroy container f13828df94c9d295bfe24b69ac02377a757edcf948a3355cf7bc16ff2de84255: Driver aufs failed to remove root filesystem f13828df94c9d295bfe24b69ac02377a757edcf948a3355cf7bc16ff2de84255: rename /mnt/sda1/var/lib/docker/aufs/mnt/f13828df94c9d295bfe24b69ac02377a757edcf948a3355cf7bc16ff2de84255 /mnt/sda1/var/lib/docker/aufs/mnt/f13828df94c9d295bfe24b69ac02377a757edcf948a3355cf7bc16ff2de84255-removing: device or resource busy")
500 Server Error: Internal Server Error ("Cannot destroy container 7e87e5950501734b2a1c02705e9c19f65357a15bad605d8168452aa564d63786: Unable to remove filesystem for 7e87e5950501734b2a1c02705e9c19f65357a15bad605d8168452aa564d63786: remove /mnt/sda1/var/lib/docker/containers/7e87e5950501734b2a1c02705e9c19f65357a15bad605d8168452aa564d63786: directory not empty")

Due to the way we're executing tests (Mercurial's .t test format), this causes the test's output to change and the test to fail. Sadness.

I think these errors are limited to boot2docker/aufs. But we haven't executed enough test runs in the Jenkins/Ubuntu/btrfs VM yet to be sure. This definitely smells like a bug in Docker and it is very annoying.

Conclusion

After much wrangling and going deeper down a rabbit hole than I ever felt was possible, I finally managed to get BMO running inside Docker as part of our test infrastructure. We're now building tests for complicated components that touch Mercurial, Review Board, and Bugzilla, and people are generally happy with how things work.

There are still a handful of bugs, workarounds, and components that aren't as optimal as I would like them to be. But you can't always have perfection.

My takeaway from this ordeal is that Docker still has a number of bugs and user experience issues to resolve. I really want to support Docker and to see it succeed. But every time I try doing something non-trivial with Docker, I get bit hard. Yes, some of the issues I experienced were due to my own ignorance. But at the same time, if one of Docker's mantras is about simplicity and usability, then should there be such gaping cracks for people like me to fall through?

In the end, the promise of Docker fits my use case almost perfectly. I know the architecture is a good fit for testing. We will likely stick with Docker, especially now that I've spent the time to make it work. I really wish this project had taken a few days, not a few weeks.

Robert NymanGetting started with & understanding the power of Vim

Being a developer and having used a lot of code editors over the years, I think it's a very interesting area, both when it comes to efficiency and because it's the program we spend many, many hours in. At the moment, I'm back with Vim (more specifically, MacVim).

The last years I’ve been using Sublime Text extensively, and before that, TextMate. I’ve really liked Sublime Text, it supports most of what I want to do and I’m happy with it.

At the same time, my belief is that you need to keep on challenging yourself. Try to learn new things, get another perspective, learn about needs and possibilities you didn't even know you had. Or, at the very least, go back to what you used before, now more aware of how much you like and appreciate it.

Vim redux

A few years ago I tried out Vim (MacVim) to see what it was like. A lot of great developers use it, and a few friends swore by how amazing it was. So, naturally I had to try it.

I tried it for a while, using the Janus distribution. I ended up in a situation where I didn't have enough control; or rather, I didn't understand how it all worked and didn't take the time to learn. So I used Vim for a while, then got fed up and aggravated that I couldn't get things done quickly. While I did learn a lot about Vim, at that time and under those circumstances the cost was too big to continue.

But now I’m back again, and so far I’m happy about it. :-)

Let’s be completely honest, though: the learning curve is fairly steep and there are a lot of annoying moments in the beginning, in particular since it is very different from what most people have used before.

Getting started

My recommendation to get started, and really grasp Vim, is to download a clean version, and probably something with a graphical user interface/application wrapper for your operating system. As mainly a Mac OS X user, my choice has been MacVim.

In your home folder, you will get (or create) a folder and a file (there could be more, but this is the start):

.vim folder
Contains your plugins and more
.vimrc file
A file with all kinds of configurations, presets and customizations. For a Vim user, the .vimrc file is the key to success (for my version, see below)

Editing Modes

One of the key things is that Vim offers a number of different modes, depending on what you want to do. The core ones are:

normal
This is the default mode in Vim, for navigating and manipulating text. Pressing <Esc> at any time takes you back to this mode
insert
Inserting and writing text and code
visual
Any kinds of text selections
command-line
Pressing : takes you to the command line in Vim, from which you can call a plethora of commands

Once you’ve gotten used to switching between these modes, you will realize how extremely powerful they are and, once you’ve gained control of them, how dramatically they improve efficiency. Search/substitute is also very powerful in Vim, but I really do recommend checking out vimregex.com for the low-down on commands and escaping.

Keyboard shortcuts

With the different modes, there’s an abundance of keyboard shortcuts, some of them for one mode, some of them spanning across modes (and all of this is customizable as well through your .vimrc file).

Also, Vim is a lot about intent. Not just what you want to do now, but thinking 2, 3 or 4 steps ahead: where you are going with the entire flow, not just acting action by action without connections.

For instance, let’s say I have a <h2> element with text in it that I want to replace, like this:

<h2>I am a heading</h2>

My options are (going from most complicated to most efficient):

  • Press v to go into Visual mode, then use the w (jump by start of words) or e (jump to end of words) to select the text and then delete it (with the delete key or pressing d), press i to go into Insert mode, then enter the new text
  • Press v to go into Visual mode, then use the w (jump by start of words) or e (jump to end of words) to select the text, then press c to go into Insert mode with a change action, i.e. all selected text will be gone and what you type is the new value
  • Press dit in Normal mode, which means “delete in tag”, then press i or c to go into Insert mode and write the new text
  • Press ct< in Normal mode, which means “change to [character]”, then just write the new text
  • Press cit in Normal mode, which means “change in tag”, then just write the new text

Using ct[character] or dt[character], e.g. ct<, will apply the first action (“change”) up to the specified character (“<” in this case). Other quick ways of changing or deleting things on a row are pressing C or D, which will automatically apply that action to the end of the current line.

There is a ton of options and combinations, and I’ve listed the most common ones below (taken from http://worldtimzone.com/res/vi.html):

Cursor movement
h - move left
j - move down
k - move up
l - move right
w - jump by start of words (punctuation considered words)
W - jump by words (spaces separate words)
e - jump to end of words (punctuation considered words)
E - jump to end of words (no punctuation)
b - jump backward by words (punctuation considered words)
B - jump backward by words (no punctuation)
0 - (zero) start of line
^ - first non-blank character of line
$ - end of line
G - Go To command (prefix with number - 5G goes to line 5)

Note: Prefix a cursor movement command with a number to repeat it. For example, 4j moves down 4 lines.

Insert Mode – Inserting/Appending text

i - start insert mode at cursor
I - insert at the beginning of the line
a - append after the cursor
A - append at the end of the line
o - open (append) blank line below current line (no need to press return)
O - open blank line above current line
ea - append at end of word
Esc - exit insert mode

Editing
r - replace a single character (does not use insert mode)
J - join line below to the current one
cc - change (replace) an entire line
cw - change (replace) to the end of word
c$ - change (replace) to the end of line
s - delete character at cursor and substitute text
S - delete line at cursor and substitute text (same as cc)
xp - transpose two letters (delete and paste, technically)
u - undo
. - repeat last command

Marking text (visual mode)

v - start visual mode, mark lines, then do command (such as y-yank)
V - start Linewise visual mode
o - move to other end of marked area
Ctrl+v - start visual block mode
O - move to Other corner of block
aw - mark a word
ab - a () block (with braces)
aB - a {} block (with brackets)
ib - inner () block
iB - inner {} block
Esc - exit visual mode

Visual commands
> - shift right
< - shift left
y - yank (copy) marked text
d - delete marked text
~ - switch case

Cut and Paste
yy - yank (copy) a line
2yy - yank 2 lines
yw - yank word
y$ - yank to end of line
p - put (paste) the clipboard after cursor
P - put (paste) before cursor
dd - delete (cut) a line
dw - delete (cut) the current word
x - delete (cut) current character

Exiting
:w - write (save) the file, but don't exit
:wq - write (save) and quit
:q - quit (fails if anything has changed)
:q! - quit and throw away changes

Search/Replace
/pattern - search for pattern
?pattern - search backward for pattern
n - repeat search in same direction
N - repeat search in opposite direction
:%s/old/new/g - replace all old with new throughout file
:%s/old/new/gc - replace all old with new throughout file with confirmations

Working with multiple files
:e filename - Edit a file in a new buffer
:bnext (or :bn) - go to next buffer
:bprev (of :bp) - go to previous buffer
:bd - delete a buffer (close a file)
:sp filename - Open a file in a new buffer and split window
ctrl+ws - Split windows
ctrl+ww - switch between windows
ctrl+wq - Quit a window
ctrl+wv - Split windows vertically

Plugins

There are a number of different ways of approaching plugins with Vim, but the simplest and clearest one that I’ve found, in the form of a plugin itself, is using pathogen.vim. You then place all other plugins you install in .vim/bundle

These are the plugins I currently use:

command-t
Mimicking the Command + T functionality in TextMate/Sublime Text, to open any file in the current project. I press , + f to use it (where , is my Leader key)
vim-snipmate
To add snippet support to Vim. For instance, in a JavaScript file, type for and then tab to have it completed into a full code snippet. As part of this, some other plugins were needed:

vim-multiple-cursors
I love the multiple selection feature in Sublime Text; Command + D to select the next match(es) in the document that are the same as what is currently selected.
This is a version of that for Vim, and it works very well. Use Ctrl + n to select any matches, and then act on them with all the powerful commands available in Vim. For instance, after you are done selecting, the simplest thing is to press c to change all those occurrences to what you want.
vim-sensible
A basic plugin to help out with some of the key handling.
vim-surround
surround is a great plugin for surrounding text with anything you wish. Commands start with pressing ys, which stands for “you surround”; then you enter the selection criteria and finally what to surround it with.
Examples:

  • ysiw" – “You surround in word”
  • ysip<C-t> – “You surround in paragraph” and then ask for which tag to surround with
nerdtree
This adds fairly rudimentary tree navigation to Vim. I don’t use it much at the moment, though, preferring to press : to go to the command line in Vim and then just type e. to open a file tree.

My .vimrc file

Here is my .vimrc file, which is vital for me in adapting Vim to all my needs – keyboard shortcuts, customizations, efficiency flows:

HyperLinkHelper in Vim

Another thing I really like in TextMate and Sublime Text is the HyperlinkHelper, basically wrapping the current selection as a link with what’s in the clipboard set as the href value. So I created this command for Vim, to add in your .vimrc file:

vmap <Space>l c<a href="<C-r>+"><C-r>"</a>

In Visual mode, select text and then press space bar + l to trigger this action.

Scratching the surface

This has only scratched the surface of all the power in Vim, but I hope it has been inspiring and understandable, and that it has motivated you to give it a go, or alternatively taught you something you didn’t know.

Any input, thoughts and suggestions are more than welcome!

Doug Belshaw99% finished: Badge Alliance Digital & Web Literacies working group's Privacy badge pathway

I’m the co-chair of the Badge Alliance’s working group on Digital & Web Literacies. We’ve just finished our first cycle of meetings and are almost finished with the deliverable. Taking the Web Literacy Map (v1.1) as a starting point, we created a document outlining considerations for creating a badged pathway around the Privacy competency.

Cat x-ray

The document is currently on Google Docs and open for commenting. After the Mozilla Festival next week the plan is to finalise any edits and then use the template we used for the Webmaker whitepaper.

Click here to access the document: http://goo.gl/40byub


Comments? Questions? Get in touch: @dajbelshaw / doug@mozillafoundation.org

Byron Joneshappy bmo push day!

the following changes have been pushed to bugzilla.mozilla.org:

  • [1079476] Allow getting and updating groups via the web services
  • [1079463] Bugzilla::WebService::User missing update method
  • [1080600] CVE ID format change: CVE-\d{4}-\d{4} becomes CVE-\d{4}-\d{4,7} this year
  • [1080554] Create custom entry form for submissions to Mozilla Communities newsletter
  • [1074586] Add “Bugs of Interest” to the dashboard
  • [1062775] Create a form to create/update bounty tracking attachments
  • [1074350] “new to bugzilla” indicator should be removed when a user is added to ‘editbugs’, not ‘canconfirm’
  • [1082887] comments made when setting a flag from the attachment details page are not included in the “flag updated” email

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla

Jennie Rose HalperinNew /contribute page

In an uncharacteristically short post, I want to let folks know that we just launched our new /contribute page.

I am so proud of our team! Thank you to Jess, Ben, Larissa, Jen, Rebecca, Mike, Pascal, Flod, Holly, Sean, David, Maryellen, Craig, PMac, Matej, and everyone else who had a hand. You all are the absolute most wonderful people to work with and I look forward to seeing what comes next!

I’ll be posting intermittently about new features and challenges on the site, but I first want to give a big virtual hug to all of you who made it happen and all of you who contribute to Mozilla in the future.

David BoswellInvesting more in community building

I’m very excited to see the new version of Mozilla’s Get Involved page go live. Hundreds of people each week come to this page to learn about how they can volunteer. Improvements to this page will lead to more people making more of an impact on Mozilla’s projects.

get_involved_2014

This page has a long history: it existed on www.mozilla.org when Mozilla launched in 1998 and it has been redesigned a few times before. There is something different about the effort this time, though.

We’ve spent far more time researching, prototyping, designing, testing, upgrading and building than ever before. This reflects Mozilla’s focus this year on enabling communities that have impact, and that goal has mobilized experts from many teams who have made the experience much better for new volunteers who use this page.

Mozilla’s investment in community in 2014 is showing up in other ways too, including a brand new contribution dashboard, a relaunched contributor newsletter, a pilot onboarding program, the first contributor research effort in three years and much more.

All of these pieces are coming together and will give us a number of options for how we can continue and increase the investment in community in 2015. Look for more thoughts soon on why that is important, what that could look like and how you could help shape it.


Mozilla Open Policy & Advocacy BlogSpotlight on Amnesty International: A Ford-Mozilla Open Web Fellows Host

{This is the second installment in our series highlighting the 2015 Host Organizations for the Ford-Mozilla Open Web Fellows program. We are now accepting applications to be a 2015 fellow. Amnesty International is a great addition to the program, especially as new technologies have such a profound impact – both positive and negative – on human rights. With its tremendous grassroots advocacy network and decades of experience advocating for fundamental human rights, Amnesty International, its global community and its Ford-Mozilla Fellow are poised to continue having impact on shaping the digital world for good.}

Spotlight on Amnesty International: A Ford-Mozilla Open Web Fellow Host
By Tanya O’Carroll, Project Officer, Technology and Human Rights, Amnesty International

For more than fifty years Amnesty International has campaigned for human rights globally: exposing information that governments will go to extreme measures to hide; connecting individuals who are under attack with solidarity networks that span the globe; fighting for policy changes that often seem impossible at first.

We’ve developed many tools and tactics to help us achieve change.

But the world we operate in is also changing.

Momentous developments in information and communications networks have introduced new opportunities and threats to the very rights we defend.

amnesty-logo

The Internet has paved the way for unprecedented numbers of people to exercise their rights online, crucially freedom of expression and assembly.

The ability for individuals to publish information and content in real-time has created a new world of possibilities for human rights investigations globally. Today, we all have the potential to act as witnesses to human rights violations that once took place in the dark.

Yet large shadows loom over the free and open Web. Governments are innovating and seeking to exploit new tools to tighten their control, with daunting implications for human rights.

This new environment requires specialist skills to respond. When we challenge the laws and practices that allow governments to censor individuals online or unlawfully interfere with their privacy, it is vital that we understand the mechanics of the Internet itself–and integrate this understanding in our analysis of the problem and solutions.

That’s why we’re so excited to be an official host for the Ford-Mozilla Open Web Fellowship.

We are seeking someone with the expert skill set to help shape our global response to human rights threats in the digital age.

Amnesty International’s work in this area builds on our decades of experience campaigning for fundamental human rights.

Our focus is on the new tools of control – that is the technical and legislative tools that governments are using to clamp down on opposition, restrict lawful expression and the free flow of information and unlawfully spy on private communications on a massive scale.

In 2015 we will be actively campaigning for an end to unlawful digital surveillance and for the protection of freedom of expression online in countries across the world.

Amnesty International has had many successes in tackling entrenched human rights violations. We know that as a global movement of more than 3 million members, supporters and activists in more than 150 countries and territories we can also help to protect the ideal of a free and open web. Our success will depend on building the technical skills and capacities that will keep us ahead of government efforts to do just the opposite.

Demonstrating expert leadership, the fellow will contribute their technical skills and experience to high-quality research reports and other public documents, as well as international advocacy and public campaigns.

If you are passionate about stopping the Internet from becoming a weapon that is used for state control at the expense of freedom, apply now to become a Ford-Mozilla Open Web Fellow and join Amnesty International in the fight to take back control.


Apply to be a 2015 Ford-Mozilla Open Web Fellow. Visit www.mozilla.org/advocacy.

Doug BelshawWeb Literacy Map 2.0 community calls

To support the development of Web Literacy Map v2.0, we’re going to host some calls with the Mozilla community.

Dogs on phone

There is significant overlap between the sub-section of the community interested in the Web Literacy Map and the sub-section involved in the Badge Alliance working group on Digital/Web Literacies. It makes sense, therefore, to use the time between cycles of the Badge Alliance working group to focus on developing the Web Literacy Map.


Calls

We’ll have a series of seven community calls on the following dates. The links take you to the etherpad for that call.


Discussion

You can subscribe to a calendar for these calls at the link below:

Calendar: http://bit.ly/weblitmap2-calls

We’ll be using the #TeachTheWeb forum for asynchronous discussion. I do hope you’ll be able to join us!


Questions? Comments? Direct them to @dajbelshaw / doug@mozillafoundation.org

Robert NymanHow to become efficient at e-mail

For many years I’ve constantly been receiving hundreds of e-mails every day. A lot of them work-related, a number of them personal. And I’ve never seen this as an issue, since I have an approach that works for me.

Many people complain that e-mail is broken, but I think it’s a great form of communication. I can deal with it when I feel I have the time and can get back to people when it suits me. If I need to concentrate on something else, I won’t let it interrupt my flow – just have notifications off/e-mail closed/don’t read it, and then get to it when you can.

Your mileage might, and will, vary, of course, but here are the main things that have proven to work very well for me.

Deal with it

When you open up your Inbox with all new e-mail, deal with it. Then and there. Because having seen the e-mail, maybe even glanced at some of the contents beyond the subject as well, I believe it has already claimed a part of your brain. You’ll keep on thinking about it until you actually deal with it.

In some cases, naturally it’s good to ponder your reply, but mostly, just go with your knowledge and act on it. Some things are easiest to deal with directly, some need a follow-up later on (more on that in Flags and Filters below).

Flags

Utilize different flags for the various actions you want to take. Go through your Inbox directly, replying to the e-mails or flagging them accordingly. It doesn’t have to be Inbox Zero or similar; the point is just that you know about, and are on top of, each and every e-mail.

These are the flags/labels I use:

This needs action
Meaning, I need to act on this: that could be replying, checking something out, contacting someone else before I know more, etc.
Watch this
No need for an immediate action, but watch and follow up on this to see what happens. Good for when you never got a reply from people and need to remind them.
Reference
No need to act, no need to watch it. But it is plausible that this topic and discussion might come up in the future, so file it just for reference.

Everything else is Throw away. No need to act, watch or file it? Get rid of it.

Filters

Getting e-mails from the same sender or on the same topic on a regular basis? Set up a filter. This way you can have the vast majority of e-mail already sorted for you, bypassing the Inbox entirely.

Make them go into predefined folders (or Gmail labels) per sender/topic. That way you can see in the structure that you have unread e-mails from, say, LinkedIn, Mozilla, Netflix, Facebook, British Airways etc. Or e-mails from your manager or the CEO. Or e-mail sent to the team mailing list, company branch or all of the company. And then deal with it when you have the time.

Gmail also has this nice feature of choosing to only show labels in the left hand navigation if they have unread e-mails in them, making it even easier to see where you’ve got new e-mails.

Let me stress that this is immensely useful, drastically reducing the number of e-mails you need to manually sort through and decide an action for.
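To make the idea concrete, here is a minimal sketch of the kind of sender/topic-to-label rule table a filter setup encodes. It is not tied to any particular mail client or API, and the addresses and labels are made up:

```python
# Minimal sketch of sender/topic-based filtering.
# The rules, addresses and labels are made up for illustration.
from dataclasses import dataclass


@dataclass
class Message:
    sender: str
    subject: str


# Each rule maps a substring match on a field to a folder/label.
RULES = [
    ("sender", "@linkedin.com", "LinkedIn"),
    ("sender", "@netflix.com", "Netflix"),
    ("subject", "[team-list]", "Team list"),
    ("sender", "ceo@example.com", "From the CEO"),
]


def classify(msg: Message) -> str:
    """Return the folder/label a message should be filed under, or 'Inbox'."""
    for field, needle, label in RULES:
        value = msg.sender if field == "sender" else msg.subject
        if needle.lower() in value.lower():
            return label
    return "Inbox"  # anything unmatched still needs manual triage


if __name__ == "__main__":
    print(classify(Message("updates@linkedin.com", "You have a new connection")))
    print(classify(Message("alice@example.com", "[team-list] Weekly notes")))
```

The real filtering is of course done by your mail client or provider; the point is simply that a small, explicit rule table quietly handles the bulk of incoming mail before you ever see it.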

Acknowledge people

If you have a busy period when replying properly is hard, still make sure to take the time to acknowledge people. Reply, say that you’ve seen their e-mail and that you will get back to them as soon as you have a chance.

They took the time to write to you, and they deserve the common decency of a reply.

Unsubscribe

How many newsletters or information e-mails are you getting that you don’t really care about? Maybe they only arrive monthly, so they’re annoying, but not annoying enough to act on? Apply the filter suggestion above to them or, even better, start unsubscribing from the crap you don’t want.

Get to know your e-mail client

Whether you use an e-mail client, Gmail or similar, make sure to learn its features. Keyboard shortcuts, filters and any way you can customize it to make you more efficient.

For instance, I’ve set up keyboard shortcuts for the above mentioned flags and for moving e-mails into pre-defined folders. Makes the manual part of dealing with e-mail really fast.

Summing up

E-mail doesn’t have to be bad. On the contrary, it can be extremely powerful and efficient if you just make the effort to streamline the process and use it for you, not against you.

E-mails aren’t a problem, they’re an opportunity.

Kent JamesThunderbird Summit in Toronto to Plan a Viable Future

On Wednesday, October 15 through Saturday, October 18, 2014, the Thunderbird core contributors (about 20 people in total) are gathering at the Mozilla offices in Toronto, Ontario for a key summit to plan a viable future for Thunderbird. The first two days are project work days, but on Friday, October 17 we will be meeting all day as a group to discuss how we can overcome various obstacles that threaten the continuing viability of Thunderbird as a project. This is an open Summit for all interested parties. Remote participation or viewing of Friday group sessions is possible, beginning at 9:30 AM EDT (6:30 AM Pacific Daylight Time) using the same channels as the regular weekly Thunderbird status meetings.

Video Instructions: See https://wiki.mozilla.org/Thunderbird/StatusMeetings for details.

Overall Summit Description and Agenda: See https://wiki.mozilla.org/Thunderbird:Summit_2014

Feel free to join in if you are interested in the future of Thunderbird.

J. Ryan StinnettDevTools for Firefox OS browser tabs

We've had various tools for inspecting apps on remote devices for some time now, but for a long time we've not had the same support for remote browser tabs.

To remedy this, WebIDE now supports inspecting browser tabs running on Firefox OS devices.

Inspecting a tab in WebIDE

A few weeks back, WebIDE gained support for inspecting tabs on the remote device, but many of the likely suspects to connect to weren't quite ready for various reasons.

We've just landed the necessary server-side bits for Firefox OS, so you should be able to try this out by updating your device to the next nightly build after 2014-10-14.

How to Use

After connecting to your device in WebIDE, any open browser tabs will appear at the bottom of WebIDE's project list.

Browser tab list in WebIDE

The toolbox should open automatically after choosing a tab. You can also toggle the toolbox via the "Pause" icon in the top toolbar.

What's Next

We're planning to make this work for Firefox for Android as well. Much of that work is already done, so I am hopeful that it will be available soon.

If there are features you'd like to see added, file bugs or contact the team via various channels.

Andreas GalOpenH264 Now in Firefox

The Web is an open ecosystem, generally free of proprietary control and technologies—except for video.

Today in collaboration with Cisco we are shipping support for H.264 in our WebRTC implementation. Mozilla has always been an advocate for an open Web without proprietary controls and technologies. Unfortunately, no royalty-free codec has managed to get enough adoption to become a serious competitor to H.264. Mozilla continues to support the VP8 video format, but we feel that VP8 has failed to gain sufficient adoption to replace H.264. Firefox users are best served if we offer a video codec in WebRTC that maximises interoperability, and since much existing telecommunication infrastructure uses H.264 we think this step makes sense.

The way we have structured support for H.264 with Cisco is quite interesting and noteworthy. Because H.264 implementations are subject to a royalty bearing patent license and Mozilla is an open source project, we are unable to ship H.264 in Firefox directly. We want anyone to be able to distribute Firefox without paying the MPEG LA.

Instead, Cisco has agreed to distribute OpenH264, a free H.264 codec plugin that Firefox downloads directly from Cisco. Cisco has published the source code of OpenH264 on Github and Mozilla and Cisco have established a process by which the binary is verified as having been built from the publicly available source, thereby enhancing the transparency and trustworthiness of the system.
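The exact verification pipeline Mozilla and Cisco use isn't described here, but the core idea (download the binary, compute its digest, and refuse to install it unless the digest matches a value published for a build reproducible from the public source) can be sketched in a few lines. The URL and expected digest below are placeholders, not real OpenH264 values:

```python
# Conceptual sketch of verifying a downloaded plugin against a published digest.
# The URL and expected digest are placeholders, not the real OpenH264 values.
import hashlib
import urllib.request

PLUGIN_URL = "https://example.com/openh264-plugin.zip"  # placeholder
EXPECTED_SHA256 = "0" * 64                              # placeholder digest


def download_and_verify(url: str, expected_sha256: str) -> bytes:
    data = urllib.request.urlopen(url).read()
    digest = hashlib.sha256(data).hexdigest()
    if digest != expected_sha256:
        raise ValueError(f"hash mismatch: got {digest}, expected {expected_sha256}")
    return data  # only install the plugin if the digest matches


if __name__ == "__main__":
    try:
        download_and_verify(PLUGIN_URL, EXPECTED_SHA256)
    except Exception as err:
        print("verification failed:", err)
```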

OpenH264

OpenH264 is not limited to Firefox. Other Internet-connected applications can rely on it as well.

Here is how Jonathan Rosenberg, Cisco’s Chief Technology Officer for Collaboration, described today’s milestone: “Cisco is excited to see OpenH264 become available to Firefox users, who will then benefit from interoperability with the millions of video communications devices in production that support H.264”.

We will continue to work on fully open codecs and alternatives to H.264 (such as Daala), but for the time being we think that OpenH264 is a significant victory for the open Web because it allows any Internet-connected application to use the most popular video format. And while OpenH264 is not truly open, at least it is the most open widely used video codec.

Note: Firefox currently uses OpenH264 only for WebRTC and not for the <video> tag, because OpenH264 does not yet support the high profile format frequently used for streaming video. We will reconsider this once support has been added.



Doug BelshawSome interesting feedback from the Web Literacy Map 2.0 community survey

Last week we at Mozilla launched a community survey containing five proposals for Web Literacy Map v2.0. I don’t want to share how positive or negative the overall sentiment is for each proposal, as the survey is still open. However, I do want to pull out some interesting comments we’ve seen so far.

Mickey Mouse - piano

There’s really good points to be made for and against each of the proposals - as the following (anonymized) examples demonstrate. While I’d like to share the whole spreadsheet, there’s people’s contact details on there, and I haven’t asked them if I can share their feedback with names attached.

What I’ve done here - and I guess you’ll have to trust me on this - is to try and give examples that show the range of feedback we’re getting.


1. I believe the Web Literacy Map should explicitly reference the Mozilla manifesto.

The map can be about putting our manifesto and principles into practice. It’s a way to teach Mozilla’s beliefs.
I think this would put some people off using the map as an educational resource as they will think it has some political angle to it.
100% yes. The manifesto needs to be much more broadly spread - it is an inviting and inclusive document by nature and it is important that people engaging in our vision of web literacy understand the context from which we speak.
I often present the main ideas from the Manifesto when introducing the map and Webmaker. This aligns with my teaching practice.
While I like the manifesto, I don’t think the Web Literacy Map should be tied to it. I think that might decrease the likelihood of partners feeling ownership over it.
I think it is important for Mozilla to embrace its output – we shouldn’t shy away from taking credit for the things we work so hard to produce. But I do not believe Mozilla should try to achieve mission alignment as part of the literacy map: Literacy is a tool that helps people decide for themselves what to believe, and disagreement with Mozilla’s manifesto is a valid result of that.
Not sure if it needs to reference the manifesto, if the principles are followed when needed they would be implicit?


2. I believe the three strands should be renamed ‘Reading’, 'Writing’ and 'Participating’.

Definitely easier to understand off the bat.
No. Exploring, Building and Connecting are better descriptions.
Reading is not navigating. Writing is not building (necessarily). Communicating is more than participating.
Kinda torn on this. A lot of the time when literacy people from schools of education take over, they come up with weaker definitions of reading and writing, and I like the existing descriptions. But at the same time, R/W/P might make it more appealing for those folks, and trojan-horse them into using stronger standards.
Reading, writing, participating sounds more like school which is a turn off for many.
There’s a lot more than reading and writing going on.
I think reading and writing are too limited in their understood meanings and prefer the existing terms exploring and building. I prefer participating as a term over connecting.


3. I believe the Web Literacy Map should look more like a 'map’.

Naw. As I said before, while it might help visualize the connections, it could quickly become a plate of spaghetti that’s not approachable or user friendly. But – there’s no reason there couldn’t be a map tool for exploring the things on the side.
I think it would seem more accessible as a map and people will stay connected/interested for longer.
There should be an easy to read way to see all of the map. It’s not so important what that looks like, although having some more map-like versions of it is interesting.
A list is not good enough and it’s necessary to show off the relation between the various skills and competencies. But a true interactive map is maybe a bit too much.
It should look like whatever it needs to look like for people to understand it. If “Map” is causing confusion, rename it rather than change the form to match the name.
I like this idea a lot. It could even have “routes” like the pathways tool.
But you should provide both options - we all interpret data differently, have preferred means of reading information, so leave the list style for those who think better that way, and the map for those who take a more graphic-based approach.


4. I believe that concepts such as 'Mobile’, 'Identity’, and 'Protecting’ should be represented as cross-cutting themes in the Web Literacy Map.

Even if they’re included solely as reference points or suggested teaching angles, having them in there strengthens the entire project.
I think adding cross-cutting themes (like the vertical themes at Mozfest) will be quite confusing for some people.
Yeah, I think that’s a good way to deal with those. They’re useful as themes, and this keeps them from overpowering the track structure in the way I was worried they would. Good work!
Well if you introduce the readers to many *new* terms (that may be new to them) you risk confusing them and they end up missing the content in the map.
An idea I discussed with Doug was concepts that could act as lenses through which to view the web literacy map (i.e., mobile or digital citizenship). I support the idea of demonstrating how to remix the map with cross-cutting ideas but I think the ideas should be provided as examples and users should also bring their own concepts forward.
Agreed. There are these larger themes or elements across the map. I’d be interested to see how these are represented. Perhaps this is the focus between cycles of the working group.
The problem here is that there are many other themes that could be added. Perhaps these are better emphasised in resources and activities at the point of learning rather than in the map itself?


5. I believe a 'remix’ button should allow me to remix the Web Literacy Map for my community and context.

I’d love to see a remix button, but one that integrated with GitHub to include proper historical attribution and version control. Simply spinning up multiple HTML pages without knowing who changed what when would cause more confusion I think.

Only a basic level of skill is needed to fork a repo on GitHub and edit text files directly on the website. We could provide guidelines on how to do that for people who want to contribute.
Definitely! Some of the things on the map now is strictly no-go in my context, I would love to have the ability to Remix to better match my needs.
In some contexts remixing it would be good to relate to situations better.
Perhaps if the name is required to be changed. But much better to get these people to help make the core map work for them.
Agree in principle but not many people will do this. I wouldn’t make it a high priority. Those who like to remix will always find a way.
Completely torn on this one. On the one hand it would embody the open principles on which both the map and Mozilla is built. It is also useful to be able to adapt tools for contexts. However, it could also potentially lead to mixed messages and dilution of the core 'literacy’ principles that are in the map.
意見該被傾聽,讓此份文件更加完善 *(“The views to be heard, so that this document be more perfect.” - according to Google Translate…)*

Many thanks to those who have completed the survey so far. Please do so if you haven’t yet! https://goo.gl/forms/LKNSNrXCnu

If you’ve got meta-level questions and feedback, please send it to @dajbelshaw / doug@mozillafoundation.org

Gregory SzorcRobustly Testing Version Control at Mozilla

Version control services and interaction with them play an important role at any company. Despite version control being a critical part of your infrastructure, my experience from working at a few companies and talking with others is that version control often doesn't get the testing love that other services do. Hooks get written, spot-tested by the author, and deployed. Tools that interact with version control often rely on behavior that may or may not change over time, especially when the version of your version control software is upgraded.

We've seen this pattern at Mozilla. Mercurial hooks and extensions were written and deployed to the server without test coverage. As a result, things break when we try to upgrade the server. This happens a few times and you naturally develop an attitude of fear, uncertainty, and doubt around touching anything on the server (or the clients for that matter). "If it isn't broken, why fix it" prevails for months or years. Then an enthusiastic individual comes along wanting to deploy some hot new functionality. You tell them the path is arduous because the server is running antiquated versions of software and nothing is tested. The individual realizes the amazing change isn't worth the effort and justifiably throws up their hands and gives up. This is almost a textbook definition of how not having test coverage can result in technical debt. This is the position Mozilla is trying to recover from.

One of the biggest impacts I've had since joining the Developer Services Team at Mozilla a little over a month ago has been changing the story about how we test version control at Mozilla.

I'm proud to say that Mozilla now has a robust enough testing infrastructure in place around our Mercurial server that we're feeling pretty good about silencing the doubters when it comes to changing server behavior. Here's how we did it.

The genesis of this project was likely me getting involved with the hg-git and Mercurial projects. For hg-git, I learned a bit about Mercurial internals and how extensions work. When I looked at Mercurial extensions and hooks used by Mozilla, I started to realize what parts were good and what parts were bad. I realized what parts would likely break after upgrades. When I started contributing patches to Mercurial itself, I took notice of how Mercurial is tested. When I discovered T Tests, I thought, wow, that's pretty cool: we should use them to test Mozilla's Mercurial customizations!

After some frustrations with Mercurial extensions breaking after Mercurial upgrades, I wanted to do something about it to prevent this from happening again. I'm a huge fan of unified repositories. So earlier this year, I reached out to the various parties who maintain all the different components and convinced nearly everyone that establishing a single repository for all the version control code was a good idea. The version-control-tools repository was born. Things were slow at first. It was initially pretty much my playground for hosting Mercurial extensions that I authored. Fast forward a few months, and the version-control-tools repository now contains full history imports of our Mercurial hooks that are deployed on hg.mozilla.org, the templates used to render HTML on hg.mozilla.org, and pretty much every Mercurial extension authored by Mozillians, including pushlog. Having all the code in one repository has been very useful. It has simplified server deployments: we now pull 1 repository instead of 3. If there is a dependency between different components, we can do the update atomically. These are all benefits of using a single repository instead of N>1.

While version-control-tools was still pretty much my personal playground, I introduced a short script for running tests. It was pretty basic: just find test files and invoke them with Mercurial's test harness. It served my needs pretty well. Over time, as more and more functionality was rolled into version-control-tools, we expanded the scope of the test harness.

We can now run Python unit tests (in addition to Mercurial .t tests). Test all of the things!

We set up continuous integration with Jenkins so tests run after check-in and alert us when things fail.

We added code coverage so we can see what is and isn't being tested. Using code coverage data, we've identified a server upgrade bug before it happens. We're also using the data to ensure that code is tested as thoroughly as it needs to be. The code coverage data has been invaluable at assessing the quality of our tests. I'm still shocked that Firefox developers tolerate not having JavaScript code coverage when developing Firefox features. (I'm not saying code coverage is perfect, merely that it is a valuable tool in your arsenal.)

We added support for running tests against multiple versions of Mercurial. We even test the bleeding edge of Mercurial so we know when an upstream Mercurial change breaks our code. So, no more surprises on Mercurial release day. I can tell you today that we have a handful of extensions that are broken in Mercurial 3.2, due for release around November 1. (Hopefully we'll fix them before release.)

We have Vagrant configurations so you can start a virtual machine that runs the tests the same way Jenkins does.

The latest addition to the test harness is the ability to spin up Docker containers as part of tests. Right now, this is limited to running Bugzilla during tests. But I imagine the scope will only increase over time.

Before I go on, I want to quickly explain how amazing Mercurial's .t tests are. These are a flavor of tests used by Mercurial and the dominant form of new tests added to the version-control-tools repository. These tests are glorified shell scripts annotated with expected command output and other metadata. It might be easier to explain by showing. Take bzpost's tests as an example. The bzpost extension automatically posts commit URLs to Bugzilla during push. Read more if you are interested.

What I like so much about .t tests is that they are actually testing the user experience. The test actually runs hg push and verifies the output is exactly what is intended. Furthermore, since we're running a Dockerized Bugzilla server during the test, we're able to verify that the bzpost extension actually resulted in Bugzilla comments being added to the appropriate bug(s). Contrast this with unit tests that only test a subset of functionality. Or, contrast with writing a lot of boilerplate and often hard-to-read code that invokes processes and uses regular expressions, etc to compare output. I find .t tests are more concise and they do a better job of testing user experience.

More than once I've written a .t test and thought this user experience doesn't feel right, I should change the behavior to be more user friendly. This happened because I was writing actual end-user commands as part of writing tests and seeing the exact output the user would see. It is much harder to attain this sense of understanding when writing unit tests. I can name a few projects with poor command line interfaces that could benefit from this approach... I'm not saying .t tests are perfect or that they should replace other testing methodologies such as unit tests. I just think they are very useful for accurately testing higher-level functionality and for assessing user experience. I really wish we had these tests for mach commands...
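Mercurial's actual test harness isn't reproduced here, but the essence of a .t-style test (a sequence of commands, each paired with the exact output it is expected to produce) can be illustrated with a toy runner. The commands and expectations below are invented for the example:

```python
# Toy illustration of the idea behind .t-style tests: commands paired with
# their expected output. This is not Mercurial's harness, just the concept.
import subprocess

# Each test step is (shell command, expected stdout lines).
TEST_STEPS = [
    ("echo hello", ["hello"]),
    ("echo one && echo two", ["one", "two"]),
]


def run_step(command: str, expected: list) -> bool:
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    actual = result.stdout.splitlines()
    if actual != expected:
        print(f"FAIL: {command!r}\n  expected {expected}\n  got      {actual}")
        return False
    return True


if __name__ == "__main__":
    results = [run_step(cmd, expected) for cmd, expected in TEST_STEPS]
    print("passed" if all(results) else "failed")
```

A real .t test interleaves the commands and their expected output in a single readable file, which is what makes it such a natural record of the user experience.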

Anyway, with a proper testing harness in place for our version control code, we've been pretty good about ensuring new code is properly tested. When people submit new hooks or patches to existing hooks, we can push back and refuse to grant review unless tests are included. When someone requests a new deployment to the server, we can look at what changed, cross-reference to test coverage, and assess the riskiness of the deployment. We're getting to the point where we just trust our tests and server deployments are minor events. Concerns over accidental regressions due to server changes are waning. We can tell people "if you really care about this not breaking, you need a test" and "if you add a test, we'll support it for you." People are often more than happy to write tests for their own peace of mind, especially when that test's presence shifts maintenance responsibility away from them. We're happy because we don't have many surprises (and fire drills) at deployment time. It's a win-win!

So, what's next? Good question! We still have a number of large gaps in our test coverage. Our code to synchronize repositories from the master server to read-only slaves is likely the most critical omission. We also don't yet have a good way of reproducing our server environment. Ideally, we'd run the continuous integration in an environment that's very similar to production. Same package versions and everything. This would also allow us to simulate the actual hg.mozilla.org server topology during tests. Currently, our tests are more unit-style than integration-style. We rely on the consistent behavior of Mercurial and other tools as sufficient proxies for test accuracy and we back those up with running the tests on the staging server before production deployment. But these aren't a substitute for an accurate reproduction of the production servers, especially when it comes to things like the replication tests. We'll get there some day. I also have plans to improve Mercurial's test harness to better facilitate some of our advanced use cases. I would absolutely love to make Mercurial's .t test harness more consumable outside the context of Mercurial. (cram is one such attempt at this.) We also need to incorporate the Git server code into this repository. Currently, I'm pretty sure everything Git at Mozilla is untested. Challenge accepted!

In summary, our story for testing version control at Mozilla has gone from a cobbled together mess to something cohesive and comprehensive. This has given us confidence to move fast without breaking things. I think the weeks of people time invested into improving the state of testing was well spent and will pay enormous dividends going forward. Looking back, the mountain of technical debt now looks like a mole hill. I feel good knowing that I played a part in making this change.

Jordan LundThis week in Releng - Oct 5th, 2014

Major highlights:

  • kmoir ended our official tegra support. All code referencing them has been deleted in bug 1016453
  • kmoir is preparing material for teaching a releng class next week http://polymorse.polymtl.ca/plow/
  • bhearsum added signing support for Firefox 64-bit Windows builds in bug 711210

Completed work (resolution is 'FIXED'):


In progress work (unresolved and not assigned to nobody):

Allison NaaktgeborenApplying Privacy Series: Introduction

Introduction

In January, I laid out in a presentation & blog post information for a discussion about applying Mozilla’s privacy principles in practice to engineering.  Several fellow engineers wanted to see it applied in a concrete example, complaining that the material presented was too abstract to be actionable. This is a fictional series of conversations around the concrete development of a fictional mobile app feature. Designing and building software is a process of evolving and refining ideas, and this example is designed to help engineers understand that actionable privacy and data safety concerns can and should be a part of the development process.

Disclaimer

The example is fictional. Any resemblance to any real or imagined feature, product, service, or person is purely accidental. Some technical statements are made up to flesh out the fictional dialogues. They are assumed to apply only to this fictional feature of a fictional mobile application. The architecture might not be production-quality. Don’t get too hung up on it; it’s a fictional teaching example.

Thank You!

    Before I begin, a big thank you to Stacy Martin, Alina Hua, Dietrich Ayala, Matt Brubeck, Mark Finkle, Joe Stevenson, and Sheeri Cabral for their input on this series of posts.

The Cast of Characters

so fictional they don’t even get real names

  1. Engineer
  2. Engineering Manager
  3. Service Operations Engineer
  4. Database Administrator (DBA)
  5. Project Manager
  6. Product Manager
  7. Privacy Officer / Legal’s Privacy Auditor / Privacy & Security (there are many names & different positions here)
  8. UX Designer

Fictional Problem Setup

Imagine that the EU provides a free service to all residents that will translate English text to one of the EU’s supported languages. The service requires the target language and the device ID. It is, however, rather slow.

For the purposes of this fictional example, the device ID is a hard-coded number on each computer, tablet, or phone. It is globally unique and unchangeable, and therefore highly identifying.

A mobile application team wants to use this service to offer in-page translation to EU residents using their mobile app. For non-English readers, the ability to read the app’s content in their own language is a highly desired feature.

After some prototyping & investigation, they determine that the very slow speed of the translation service adversely affects usability. They’d still like to use it, so they decide to evolve the feature. They’d also like to translate open content while the device is offline so the translated content comes up quicker when the user reopens the app.
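To ground the rest of the series, here is a rough sketch of what the client-side call in this fictional setup might look like. The endpoint, parameter names, and device-ID accessor are all invented for the example; the detail worth noticing is that the highly identifying device ID travels with every request:

```python
# Fictional client call for the EU translation service described above.
# The endpoint, parameter names and get_device_id() are invented for illustration.
import json
import urllib.parse
import urllib.request

TRANSLATE_ENDPOINT = "https://translate.example.eu/api/v1/translate"  # fictional


def get_device_id() -> str:
    # Stands in for the globally unique, unchangeable hardware ID on the device.
    return "DEVICE-1234567890"


def translate(text: str, target_language: str) -> str:
    payload = urllib.parse.urlencode({
        "text": text,
        "target": target_language,
        "device_id": get_device_id(),  # the privacy-sensitive part
    }).encode()
    with urllib.request.urlopen(TRANSLATE_ENDPOINT, data=payload) as response:
        return json.load(response)["translated_text"]
```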

Every New Feature Starts Somewhere

Engineer sees an announcement in the tech press about the EU’s new service and its noble goal of overcoming language barriers on the web for its citizens. She sends an email to her team’s public mailing list: “wouldn’t it be cool to apply this to our content for users instead of them having to copy/paste blocks of text into an edit box? We have access to those values on the phone already”

Engineering Team, Engineering Manager & Product Manager on the thread are enthusiastic about the idea.  Engineering Manager assigns Engineer to make it happen.


She schedules the initial meeting to figure out what the heck that actually means and nail down a specification.

Robert O'CallahanBack In New Zealand

I just finished a three-week stint in North America, mostly a family holiday but some work too. Some highlights:

  • Visited friends in Vancouver. Did the Grouse Grind in just over an hour. Lovely mountain.
  • Work week in Toronto. Felt productive. Ran barefoot from downtown to Humber River and back a couple of times. Lovely.
  • Rendezvoused with my family in New York. Spent a day in White Plains where we used to live, and at Trinity Presbyterian Church where we used to be members. Good sermon on the subject of "do not worry", and very interesting autobiographical talk by a Jewish Christian. Great time.
  • Visited the 9/11 Museum. Very good, though perhaps a shade overstressing the gravity of 3000 lives lost. One wonders what kind of memorial there will be if a nuke kills 100x that many.
  • Our favourite restaurant in Chinatown, Singapore Cafe, is gone :-(.
  • Had some great Persian food :-).
  • The amazingness of New York is still amazing.
  • Train to Boston. Gave a talk about rr at MIT, hosted by my former supervisor. Celebrated 20-year anniversary of me starting as his first (equal) grad student. Had my family watch Dad at work.
  • Spent time with wonderful friends.
  • Flew to Pittsburgh. More wonderful friends. Showed up at our old church with no prior warning to anyone. Enjoyed reactions. God continues to do great things there.
  • La Feria and Fuel-and-Fuddle still great. Still like Pittsburgh a lot.
  • Flew to San Francisco. Late arrival due to flight being rerouted through Dallas, but did not catch Ebola.
  • Saw numerous seals and dolphins from the Golden Gate Bridge.
  • Showed my family a real Mozilla office.
  • Two days in Mountain View for Gecko planning meetings. Hilarious dinner incident. Failed to win at Settlers.
  • Took family to Big Basin Redwoods State Park; saw pelicans, deer, a dead snake, a banana slug, and a bobcat.
  • Ever since we made liquid nitrogen ice cream for my bachelor party, I've thought it would make a great franchise; Smitten delivers.
  • Kiwi friends in town for Salesforce conference; took them to Land's End for a walk. Saw a coyote.
  • Watched Fleet Week Blue Angels display from Twin Peaks. Excellent.
  • Played disc golf; absolutely hopeless.
  • Went to church at Home of Christ #5 with friends. Excellent sermon about the necessity of the cross.
  • Flew home on Air NZ's new 777. Upgraded entertainment system is great; more stuff than you could ever hope to watch.

Movie picoreviews:

    Edge Of Tomorrow: Groundhog Day meets Starship Troopers. Not as good as Groundhog Day but pretty good.

    X-Men: Days Of Future Past: OK.

    Godzilla: OK if your expectations are set appropriately.

    Dawn Of The Planet Of The Apes: watched without sound, which more or less worked. OK.

    Amazing Spider-Man 2: Bad.

    Se7en: Good.

Gregory SzorcDeterministic and Minimal Docker Images

Docker is a really nifty tool. It vastly lowers the barrier to distributing and executing applications. It forces people to think about building server-side code as a collection of discrete applications and services. When it was released, I instantly realized its potential, including for uses it wasn't primarily intended for, such as applications in automated build and test environments.

Over the months, Docker's feature set has grown and many of its shortcomings have been addressed. It's more usable than ever. Most of my early complaints and concerns have been addressed or are actively being addressed.

But one supposedly solved part of Docker still bothers me: image creation.

One of the properties that gets people excited about Docker is the ability to ship execution environments around as data. Simply produce an image once, transfer it to a central server, pull it down from anywhere, and execute. That's pretty damn elegant. I dare say Docker has solved the image distribution problem. (Ignore for a minute that the implementation detail of how images map to filesystems still has a few quirks to work out. But they'll solve that.)

The ease at which Docker manages images is brilliant. I, like many, was overcome with joy and marvelled at how amazing it was. But as I started producing more and more images, my initial excitement turned to frustration.

The thing that bothers me most about images is that the de facto and recommended method for producing images is neither deterministic nor results in minimal images. I strongly believe that the current recommended and applied approach is far from optimal and has too many drawbacks. Let me explain.

If you look at the Dockerfiles from the official Docker library (examples: Node, MySQL), you notice something in common: they tend to use apt-get update as one of their first steps. For those not familiar with Apt, that command will synchronize the package repository indexes with a remote server. In other words, depending on when you run the command, different versions of packages will be pulled down and the result of image creation will differ. The same thing happens when you clone a Git repository. Depending on when you run the command - when you create the image - you may get different output. If you create an image from scratch today, it could have a different version of say Python than it did the day before. This can be a big deal, especially if you are trying to use Docker to accurately reproduce environments.

This non-determinism of building Docker images really bothers me. It seems to run counter to Docker's goal of facilitating reliable environments for running applications. Sure, one person can produce an image once, upload it to a Docker Registry server, and have others pull it. But there are applications where independent production of the same base image is important.

One area is the security arena. There are many people who are justifiably paranoid about running binaries produced by others and pre-built Docker images set off all kinds of alarms. So, these people would rather build an image from source, from a Dockerfile, than pull binaries. Except then they build the image from a Dockerfile and the application doesn't run because of an incompatibility with a new version of some random package whose version wasn't pinned. Of course, you probably lost numerous hours tracing down this obscure reason. How frustrating! Determinism and verifiability as part of Docker image creation help solve this problem.

Deterministic image building is also important for disaster recovery. What happens if your Docker Registry and all hosts with copies of its images go down? If you go to build the images from scratch again, what guarantee do you have that things will behave the same? Without determinism, you are taking a risk that things will be different and your images won't work as intended. That's scary. (Yes, Docker is no different here from existing tools that attempt to solve this problem.)

What if your open source product relies on a proprietary component that can't be legally distributed? So much for Docker image distribution. The best you can do is provide a base image and instructions for completing the process. But if that doesn't work deterministically, your users now have varying Docker images, again undermining Docker's goal of increasing consistency.

My other main concern about Docker images is that they tend to be large, both in size and in scope. Many Docker images use a full Linux install as their base. A lot of people start with a base e.g. Ubuntu or Debian install, apt-get install the required packages, do some extra configuration, and call it a day. Simple and straightforward, yes. But this practice makes me more than a bit uneasy.

One of the themes surrounding Docker is minimalism. Containers are lighter than VMs; just ship your containers around; deploy dozens or hundreds of containers simultaneously; compose your applications of many, smaller containers instead of larger, monolithic ones. I get it and am totally on board. So why are Docker images built on top of the bloaty excess of a full operating system (modulo the kernel)? Do I really need a package manager in my Docker image? Do I need a compiler or header files so I can e.g. build binary Python extensions? No, I don't, thank you.

As a security-minded person, I want my Docker images to consist of only the files they need, especially binary files. By leaving out non-critical elements from your image and your run-time environment, you are reducing the surface area to attack. If your application doesn't need a shell, don't include a shell and don't leave yourself potentially vulnerable to shellshock. I want the attacker who inevitably breaks out of my application into the outer container to get nothing, not something that looks like an operating system and has access to tools like curl and wget that could potentially be used to craft a more advanced attack (which might even be able to exploit a kernel vulnerability to break out of the container). Of course, you can and should pursue additional security protections in addition to attack surface reduction to secure your execution environment. Defense in depth. But that doesn't give Docker images a free pass on being bloated.

Another reason I want smaller containers is... because they are smaller. People tend to have relatively slow upload bandwidth. Pushing Docker images that can be hundreds of megabytes clogs my tubes. However, I'll gladly push 10, 20, or even 50 megabytes of only the necessary data. When you factor in that Docker image creation isn't deterministic, you also realize that different people are producing different versions of images from the same Dockerfiles and that you have to spend extra bandwidth transferring the different versions around. This bites me all the time when I'm creating new images and am experimenting with the creation steps. I tend to bypass the fake caching mechanism (fake because the output isn't deterministic) and this really results in data explosion.

I understand why Docker images are neither deterministic nor minimal: making them so is a hard problem. I think Docker was right to prioritize solving distribution (it opens up many new possibilities). But I really wish some effort could be put into making images deterministic (and thus verifiable) and more minimal. I think it would make Docker an even more appealing platform, especially for the security conscious. (As an aside, I would absolutely love if we could ship a verifiable Firefox build, for example.)

These are hard problems. But they are solvable. Here's how I would do it.

First, let's tackle deterministic image creation. Despite computers and software being ideally deterministic, building software tends not to be, so deterministic image creation is a hard problem. Even tools like Puppet and Chef which claim to solve aspects of this problem don't do a very good job with determinism. Read my post on The Importance of Time on Machine Provisioning for more on the topic.

But there are solutions. NixOS and the Nix package manager have the potential to be used as the basis of a deterministic image building platform. The high-level overview of Nix is that the inputs and contents of a package determine the package ID. If you know how Git or Mercurial get their commit SHA-1s, it's pretty much the same concept. In theory, two people on different machines start with the same environment and bootstrap the exact same packages, all from source. Gitian is a similar solution, although I prefer Nix's content-based approach and how it goes about managing packages and environments. Nix feels so right as a base for deterministically building software. Anyway, yes, fully verifiable build environments are turtles all the way down (I recommend reading Tor's overview of the problem and their approach). However, Nix's approach addresses many of the turtles and silences most of the critics.

I would absolutely love if more and more Docker images were the result of a deterministic build process like Nix. Perhaps you could define the full set of packages (with versions) that would be used. Let's call this the package manifest. You would then PGP sign and distribute your manifest. You could then have Nix step through all the dependencies, compiling everything from source. If PGP verification fails, compilation output changes, or extra files are needed, the build aborts or issues a warning. I have a feeling the security-minded community would go crazy over this. I know I would.
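As a rough illustration of the content-addressing idea (this is not Nix's actual derivation format), a package's identifier can be derived from its name, version, source hash, and the identifiers of its dependencies, so the same inputs always yield the same ID on any machine:

```python
# Rough illustration of content-addressed package IDs (not Nix's real format):
# the ID is a hash over the package's inputs, so identical inputs produce an
# identical ID on any machine.
import hashlib
import json


def package_id(name: str, version: str, source_sha256: str, dep_ids: list) -> str:
    manifest = {
        "name": name,
        "version": version,
        "source_sha256": source_sha256,
        "dependencies": sorted(dep_ids),  # canonical ordering
    }
    canonical = json.dumps(manifest, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()[:16]


if __name__ == "__main__":
    # Placeholder source hashes; in practice these come from the fetched tarballs.
    zlib_id = package_id("zlib", "1.2.8", "<sha256 of zlib tarball>", [])
    app_id = package_id("myapp", "1.0", "<sha256 of myapp source>", [zlib_id])
    print(zlib_id, app_id)
```

A signed manifest of such IDs is then enough for anyone else to rebuild and verify that they ended up with exactly the same packages.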

OK, so now you can use Nix to produce packages (and thus images) (more) deterministically. How do you make them minimal? Well, instead of just packaging the entire environment, I'd employ tools like makejail. The purpose of makejail is to create minimal chroot jail environments. These are very similar to Docker/LXC containers. In fact, you can often take a tarball of a chroot directory tree and convert it into a Docker container! With makejail, you define a configuration file saying among other things what binaries to run inside the jail. makejail will trace file I/O of that binary and copy over accessed files. The result is an execution environment that (hopefully) contains only what you need. Then, create an archive of that environment and pipe it into docker build to create a minimal Docker image.
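makejail's tracing step isn't reproduced here, but the second half of the idea (given the list of files an application actually touched, copy just those into a fresh root tree that can then be archived and turned into a Docker image) is simple to sketch. The file list below is a placeholder for whatever the trace reports:

```python
# Sketch of building a minimal root tree from a list of files an application
# actually accessed (e.g. gathered by tracing). The file list is a placeholder;
# makejail automates the tracing part.
import os
import shutil

ACCESSED_FILES = [
    "/usr/bin/python3",                 # placeholder entries
    "/lib/x86_64-linux-gnu/libc.so.6",
]


def build_minimal_root(files: list, root: str = "minimal-root") -> None:
    for path in files:
        if not os.path.exists(path):    # skip placeholders that don't exist here
            continue
        target = os.path.join(root, path.lstrip("/"))
        os.makedirs(os.path.dirname(target), exist_ok=True)
        shutil.copy2(path, target)      # copy only what the application needs


if __name__ == "__main__":
    build_minimal_root(ACCESSED_FILES)
    # The resulting tree can then be archived and fed to Docker to become a
    # minimal image.
```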

In summary, Nix provides you with a reliable and verifiable build environment. Tools like makejail pare down the produced packages into something minimal, which you then turn into your Docker image. Regular people can still pull binary images, but they are much smaller and more in tune with Docker's principles of minimalism. The paranoid among us can produce the same bits from source (after verifying the inputs look credible and waiting through a few hours of compiling). Or, perhaps the individual files in the image could be signed and thus verified via trust somehow? The company deploying Docker can have peace of mind that disaster scenarios resulting in Docker image loss should not result in total loss of the image (just rebuild it exactly as it was before).

You'll note that my proposed solution does not involve Dockerfiles as they exist today. I just don't think Dockerfile's design of stackable layers of commands is the right model, at least for people who care about determinism and minimalism. You really want a recipe that knows how to create a set of relevant files and some metadata like what ports to expose, what command to run on container start, etc and turn that into your Docker image. I suppose you could accomplish this all inside Dockerfiles. But that's a pretty radical departure from how Dockerfiles work today. I'm not sure the two solutions are compatible. Something to think about.

I'm pretty sure of what it would take to add deterministic and verifiable building of minimal and more secure Docker images. And, if someone solved this problem, it could be applicable outside of Docker (again, Docker images are essentially chroot environments plus metadata). As I was putting the finishing touches on this article, I discovered nix-docker. It looks very promising! I hope the Docker community latches on to these ideas and makes deterministic, verifiable, and minimal images the default, not the exception.

Mozilla Release Management TeamFirefox 33 rc1 to rc2

An important last-minute change forced us to generate a build 2 of Firefox 33. We took this opportunity to back out an OMTC-related regression and to take two startup fixes for Fennec.

  • 9 changesets
  • 19 files changed
  • 426 insertions
  • 60 deletions

Extension: Occurrences
cpp: 10
java: 5
h: 3
build: 1

Module: Occurrences
security: 12
mobile: 5
widget: 1
gfx: 1

List of changesets:

Ryan VanderMeulen: Backed out changeset 9bf2a5b5162d (Bug 1044975) - 1dd4fb21d976
Ryan VanderMeulen: Backed out changeset d89ec5b69c01 (Bug 1076825) - 1233c159ab6d
Ryan VanderMeulen: Backed out changeset bbc35ec2c90e (Bug 1061214) - 6b3eed217425
Jon Coppeard: Bug 1061214. r=terrence, a=sledru - a485602f5cb1
Ryan VanderMeulen: Backed out changeset e8360a0c7d74 (Bug 1074378) - 7683a98b0400
Richard Newman: Bug 1077645 - Be paranoid when parsing external intent extras. r=snorp, a=sylvestre - 628f8f6c6f72
Richard Newman: Bug 1079876 - Handle unexpected exceptions when reading external extras. r=mfinkle, a=sylvestre - 96bcea5ee703
David Keeler: Bug 1058812 - mozilla::pkix: Add SignatureAlgorithm::unsupported_algorithm to better handle e.g. roots signed with RSA/MD5. r=briansmith, a=sledru - 4c62d5e8d5fc
David Keeler: Bug 1058812 - mozilla::pkix: Test handling unsupported signature algorithms. r=briansmith, a=sledru - fe4f4c9342b1

Daniel GlazmanHappy birthday Disruptive Innovations!

Daniel StenbergWhat a removed search from Google looks like

Back in the days when I participated in the starting of the Subversion project, I found the mailing list archive we had really dysfunctional and hard to use, so I set up a separate archive for the benefit of everyone who wanted an alternative way to find Subversion related posts.

This archive is still alive and it recently surpassed 370,000 archived emails, all related to Subversion, for seven different mailing lists.

Today I received a notice from Google (shown in its entirety below) that one of the mails received in 2009 is now apparently removed from a search using a name – if done within the European Union at least. It is hard to take this seriously when you look at the page in question, and as there aren’t very many names involved on that page, there aren’t many possibilities for which name it is. As there are several different mail archives for Subversion mails I can only assume that the alternative search results have also been removed.

This is the first removal I’ve got for any of the sites and contents I host.


Notice of removal from Google Search

Hello,

Due to a request under data protection law in Europe, we are no longer able to show one or more pages from your site in our search results in response to some search queries for names or other personal identifiers. Only results on European versions of Google are affected. No action is required from you.

These pages have not been blocked entirely from our search results, and will continue to appear for queries other than those specified by individuals in the European data protection law requests we have honored. Unfortunately, due to individual privacy concerns, we are not able to disclose which queries have been affected.

Please note that in many cases, the affected queries do not relate to the name of any person mentioned prominently on the page. For example, in some cases, the name may appear only in a comment section.

If you believe Google should be aware of additional information regarding this content that might result in a reversal or other change to this removal action, you can use our form at https://www.google.com/webmasters/tools/eu-privacy-webmaster. Please note that we can’t guarantee responses to submissions to that form.

The following URLs have been affected by this action:

http://svn.haxx.se/users/archive-2009-08/0808.shtml

Regards,

The Google Team

Christian HeilmannEvangelism conundrum: Don’t mention the product

Being a public figure for a company is tough. It is not only about what you do wrong or right – although this is a big part. It is also about fighting conditioning and bad experiences of the people you are trying to reach. Many a time you will be accused of doing something badly because of people’s preconceptions. Inside and outside the company.

The outside view: oh god, just another sales pitch!

One of these conditionings is the painful memory of the boring sales pitch we all had to endure sooner or later in our lives. We are at an event we went through a lot of hassle to get tickets for. And then we get a presenter on stage who is “excited” about a product. It is also obvious that he or she never used the product in earnest. Or it is a product that you could not care less about and yet here is an hour of it shoved in your face.

Many a time these are “paid for” speaking slots. Conferences offer companies a chance to go on stage in exchange for sponsorship. These companies don’t send their best speakers, but those who are most experienced in delivering “the cool sales pitch”: a pitch the marketing department worked hard on so that it doesn’t look like an obvious advertisement. In most cases these turn out worse than a – at least honest – straight-up sales pitch would have.

I think my favourite nonsense moment is “the timelapse excitement”. That is when a presenter is “excited” about a new feature of a product and claims to have used it “for weeks now with all my friends” – all the while the feature is not yet available. It is, sadly, often just too obvious that you are being fed a make-believe usefulness of the product.

This is why when you go on stage and you show a product people will almost immediately switch into “oh god, here comes the sale” mode. And they complain about this on Twitter as soon as you mention a product for the first time.

This is unfair to the presenter. Of course he or she would speak about the products they are most familiar with. It should be obvious whether the person knows the product or is just trying to sell it, but it is easier to be snarky than to wait and find out.

The inside view: why don’t you promote our product more?

From your company you get pressure to talk more about your products. You are also asked to show proof that what you did on stage made a difference and got people excited. Often this means showing the Twitter timeline during your talk, which is when a snarky comment can be disastrous.

Many people in the company will see evangelists as “sales people” and “show men”. Your job is to get people excited about the products they create. It is a job filled with fancy hotels, a great flight status and a general rockstar life. They either don’t understand what you do or they just don’t respect you as an equal. After all, you don’t spend a lot of time coding and working on the product. You only need to present the work of others. Simple, isn’t it? Our lives can look fancy to the outside and jealousy runs deep.

This can lead to a terrible gap. You end up as a promoter of a product while lacking the knowledge that would make you confident enough to talk about it on stage. You’re seen as a sales guy by the audience and taken for granted by your peers. And it can be not at all your fault, as your attempts to reach out to people in the company for information don’t yield any answers. Often people are “too busy” to tell you about a new feature, and it is somehow up to you to find out about it because “the documentation is in the bug reports”.

Often your peers like to point out how great other companies are at presenting their products – whilst dismissing or not even looking at what you do. That’s because it is important for them to check what the competition does; it is less exciting to see how your own products “are being sold”.

How to escape this conundrum?

Frustration is the worst thing you can experience as an evangelist.

Your job is to get people excited and talking to one another. To get company information out to the world and get feedback from the outside world to your peers. This is a kind of translator role, but if you look deep inside and shine a hard light on it, you are also selling things.

Bruce Lawson covered that in his talk about how he presents. You are a sales person. What you do though is sell excitement and knowledge, not a packaged product. You bring the angle people did not expect. You bring the inside knowledge that the packaging of the product doesn’t talk about. You show the insider channels to get more information and talk to the people who work on the product. That can only work when these people are also open to this. When they understand that any delay in feedback is not only a disappointment for the person who asked the question; it also diminishes your trustworthiness and your reputation, and without those you are dead on stage.

In essence, do not mention the product without context. Don’t show the overview slides and the numbers the press and marketing team uses. Show how the product solves issues, show how the product fits into a workflow. Show your product in comparison with competitive products, praising the benefits of each.

And grow a thick skin. Our jobs are tiring, they are busy and it is damn hard to keep up a normal social life when you are on the road. Each sting from your peers hurts, each “oh crap, now the sales pitch starts” can frustrate you. You’re a person who hates sales pitches and tries very hard to be different. Being thrown in the same group feels terribly hurtful.

It is up to you whether you let that get you down. You could instead concentrate on the good: revel in the excitement you see in people’s faces when you show them a trick they didn’t know, and in seeing people grow in their careers when they repeat what they learned from you to their bosses.

If you aren’t excited about the product, stop talking about it. Instead work with the product team to make it exciting first. Or move on. There are many great products out there.

Rob HawkesLeaving Pusher to work on ViziCities full time

On the 7th of November I'll be leaving my day job heading up developer relations at Pusher. Why? To devote all my time and effort toward ensuring ViziCities gets the chance it very much deserves. I'm looking to fund the next 6–12 months of development and, if the opportunity is right, to build out a team to accelerate the development of the wider vision for ViziCities (beyond 3D visualisation of cities).

I'm no startup guru (I often feel like I'm making this up as I go), all I know is that I have a vision for ViziCities and, as a result of a year talking with governments and organisations, I'm beyond confident that there's demand for what ViziCities offers.

Want to chat? Send me an email at rob@vizicities.com. I'd love to talk about potential options and business models, or simply to get advice. I'm not ruling anything out.

Leaving your day job. Are you crazy?

Probably. I certainly don't do things by halves and I definitely thrive under immense pressure with the distinct possibility of failure. I've learnt that life isn't fulfilling for me unless I'm taking a risk with something unknown. I'm obsessed with learning something new, whether in programming, business or something else entirely. The process of learning and experimentation is my lifeblood, the end result of that is a bonus.

I think quitting your day job without having the funding in place to secure the next 6 to 12 months counts as immense pressure, some may even call it stupid. To me it wasn't even a choice; I knew I had to work on ViziCities so my time at Pusher had to end, simple. I'm sure I'll work the rest out.

Let me be clear. I thoroughly enjoyed my time at Pusher, they are the nicest bunch of people and I'm going to miss them dearly. My favourite thing about working at Pusher was being around the team every single day. Their support and advice around my decision with ViziCities has really helped over the past few weeks. I wish them all the best for the future.

As for my future, I'm absolutely terrified about it. That's a good thing, it keeps me focused and sharp.

So what's the plan with ViziCities?

Over the past 18 months ViziCities has evolved from a disparate set of exciting experiments into a concise and deliberate offering that solves real problems for people. What has been learnt most over that time is that visualising cities in 3D isn't what makes ViziCities so special (though it's really pretty), rather it's the problems it can solve and the ways it can help governments, organisations and citizens. That's where ViziCities will make its mark.

After numerous discussions with government departments and large organisations worldwide it's clear that not only can ViziCities solve their problems, it's also financially viable as a long-term business. The beauty of what ViziCities offers is that people will always need tools to help turn geographic data into actionable results and insight. Nothing else provides this in the same way ViziCities can, both as a result of the approach but also as a result of the people working on it.

ViziCities now needs your help. I need your help. For this to happen it needs funding, and not necessarily that much to start with. There are multiple viable business models and avenues to explore, all of which are flexible and complementary, none of which compromise the open-source heart.

I'm looking to fund the next 6–12 months of development, and if the opportunity is right, to build out a team to accelerate the development of the wider vision for ViziCities (beyond 3D visualisation of cities).

I'll be writing about the quest for funding in much more detail.

You can help ViziCities succeed

This is the part where you can help. I can't magic funds out of nowhere, though I'm trying my best. I'd love to talk about potential options and business models, or simply to get advice. I'm not ruling anything out.

Want to chat? Send me an email at rob@vizicities.com.

James LongTransducers.js Round 2 with Benchmarks

A few weeks ago I released my transducers library and explained the algorithm behind it. It's a wonderfully simple technique for high-performance transformations like map and filter, and was created by the Clojure community (mostly Rich Hickey, I think).

Over the past week I've been hard at work polishing and benchmarking it. Today I published version 0.2.0 with a new API and completely refactored internals that make it easy to use and get performance that beats other popular utility libraries. (This is a different library than the recently released one from Cognitect)

A Few Benchmarks

Benchmarking is hard, but I think it's worthwhile to post a few benchmarks that back up these claims. All of these were run on the latest version of node (0.10.32). First I wanted to prove how transducers devastate many other libraries for large arrays (update: lodash + laziness comes the closest, see more in the next section). The test performs two maps and two filters. Here is the transducer code:

into([],
     compose(
       map(function(x) { return x + 10; }),
       map(function(x) { return x * 2; }),
       filter(function(x) { return x % 5 === 0; }),
       filter(function(x) { return x % 2 === 0; })
     ),
     arr);

The same transformations were implemented in lodash and underscore, and benchmarked with an arr of various sizes. The graph below shows the time it took to run versus the size of arr, which starts at 500 and goes up to around 300,000. Here's the full benchmark (it outputs Hz so the y-axis is 1/Hz).

Once the array reaches around the size of 90,000, transducers completely blow the competition away. This should be obvious; we never need to allocate anything between transformations, while underscore and lodash always have to allocate an intermediate array.

Laziness would not help here, since we are eagerly evaluating the whole array.

Update: More Detailed Benchmark

This section was added after requests for a more thorough benchmark, particularly including lodash's new lazy behavior.

The master branch of lodash supports laziness, which should provide performance gains. Let's include that in the benchmark to see how well it helps. Laziness is a technique where a chain doesn't evaluate the transformations until a final value method is called, and it attempts to reduce intermediate allocations. Here's the full benchmark that generated the following graph.

We also added comparisons with native map and filter, and a baseline that manually performs the same operations in a for loop (thanks @stefanpenner for that).

First, as expected the baseline performs the best. But the cost of transducers isn't too bad, and you get a far better and easier to use abstraction than manually hand-coding for loops. Unfortunately, native is slowest for various reasons.

The really interesting thing is that the laziness of lodash does help it out a lot. For some reason there's still a jump, but it's at a much higher point, around 280,000 items. In general transducers take about two thirds of the time though, and the performance is more consistent. Note that there's actually a perf hit from lodash's laziness for smaller arrays, under 90,000 items.

This benchmark was run with node 0.10.32, and it most likely looks different on various engines. Transducers don't beat a lazy lodash as much (for some array sizes not at all) in Firefox, but I think that's more due to poor optimization in Firefox. The algorithm is inherently open to great optimizations as the process is only a few function calls per item, so I think it will only get better across each engine. My guess is that Firefox needs to do a better job inlining functions, but I still need to look into it.

Small Arrays

While it's not as dramatic, even with arrays as small as 1000 you will see performance wins. Here are the same benchmarks, run with sizes of 1000 and 10,000:

_.map/filter (1000) x 22,302 ops/sec ±0.90% (100 runs sampled)
u.map/filter (1000) x 21,290 ops/sec ±0.65% (96 runs sampled)
t.map/filter+transduce (1000) x 26,638 ops/sec ±0.77% (98 runs sampled)

_.map/filter (10000) x 2,277 ops/sec ±0.49% (101 runs sampled)
u.map/filter (10000) x 2,155 ops/sec ±0.77% (99 runs sampled)
t.map/filter+transduce (10000) x 2,832 ops/sec ±0.44% (99 runs sampled)

Take

If you use the take operation to only take, say, 10 items, transducers will only send 10 items through the transformation pipeline. Obviously if I ran benchmarks we would also blow away lodash and underscore here because they do not lazily optimize for take (they transform the whole array first and then run take). You can get this in some of the other libraries, like lodash, by explicitly marking a chain as lazy and then requesting the value at the end. We get it for free though, and still beat them in this scenario because we don't have any laziness machinery.

I ran a benchmark for this but don't have it anymore; still, it's worth noting that we don't need to be explicitly lazy to optimize for take.
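
To make that concrete, here is a minimal sketch (using the same into, compose, map and take helpers shown elsewhere in this post) of a pipeline where take short-circuits the work:

// Only the first 10 items of bigArray ever go through map;
// the remaining ~99,990 elements are never touched.
var bigArray = [];
for (var i = 0; i < 100000; i++) { bigArray.push(i); }

into([],
     compose(
       map(function(x) { return x * 2; }),
       take(10)
     ),
     bigArray);
// -> [ 0, 2, 4, 6, 8, 10, 12, 14, 16, 18 ]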

immutable-js

The immutable-js library is a fantastic collection of immutable data structures. It implements lazy transformations, so you get a lot of perf wins from that. Even so, there is a cost to the laziness machinery. I implemented the same map->map->filter->filter transformation above in another benchmark which compares it with their transformations. Here is the output with arr sizes of 1000 and 100,000:

Immutable map/filter (1000) x 6,414 ops/sec ±0.95% (99 runs sampled)
transducer map/filter (1000) x 7,119 ops/sec ±1.58% (96 runs sampled)

Immutable map/filter (100000) x 67.77 ops/sec ±0.95% (72 runs sampled)
transducer map/filter (100000) x 79.23 ops/sec ±0.47% (69 runs sampled)

This kind of perf win isn't a huge deal, and their transformations perform well. But we can apply this to any data structure. Did you notice how easy it was to use our library with immutable-js? View the full benchmark here.

Transducers.js Refactored

I just pushed v0.2.0 to npm with all the new APIs and performance improvements. Read more in the new docs.

You may have noticed that Cognitect, where Rich Hickey and other core maintainers of Clojure(Script) work, released their own JavaScript transducers library on Friday. I was a little bummed because I had just spent a lot of time refactoring mine, but I think I offer a few improvements. Internally, we basically converged on the exact same technique for implementing transducers, so you should find the same performance characteristics above with their library.

All of the following features are things you can find in my library transducers.js.

My library now offers several integration points for using transducers:

  • seq takes a collection and a transformer and returns a collection of the same type. If you pass it an array, you will get back an array. An iterator will give you back an iterator. For example:
// Filter an array
seq([1, 2, 3], filter(x => x > 1));
// -> [ 2, 3 ]

// Map an object
seq({ foo: 1, bar: 2 }, map(kv => [kv[0], kv[1] + 1]));
// -> { foo: 2, bar: 3 }

// Lazily transform an iterable
function* nums() {
  var i = 1;
  while(true) {
    yield i++;
  }
}

var iter = seq(nums(), compose(map(x => x * 2),
                               filter(x => x > 4)));
iter.next().value; // -> 6
iter.next().value; // -> 8
iter.next().value; // -> 10
  • toArray, toObject, and toIter will take any iterable type and force it into the type that you requested. Each of these can optionally take a transform as the second argument (a sketch of toObject follows the examples below).
// Make an array from an object
toArray({ foo: 1, bar: 2 });
// -> [ [ 'foo', 1 ], [ 'bar', 2 ] ]

// Make an array from an iterable
toArray(nums(), take(3));
// -> [ 1, 2, 3 ]
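
// A hypothetical sketch of toObject (not shown in the original post),
// assuming it rebuilds an object from [key, value] pairs, mirroring the
// seq example with an object above.
toObject([['foo', 1], ['bar', 2]],
         map(kv => [kv[0], kv[1] * 10]));
// -> { foo: 10, bar: 20 }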

That's a very quick overview, and you can read more about these in the docs.

Collections as Arguments

All the transformations in transducers.js optionally take a collection as the first argument, so the familiar pattern of map(coll, function(x) { return x + 1; }) still works fine. This is an extremely common use case so this will be very helpful if you are transitioning from another library. You can also pass a context as the third argument to specify what this should be bound to.
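
For example, a minimal sketch of this style (the logger object here is made up for illustration):

// Collection-first, underscore/lodash style:
map([1, 2, 3], function(x) { return x + 1; });
// -> [ 2, 3, 4 ]

// The optional third argument binds `this` inside the callback:
var logger = { prefix: '> ' };
map(['a', 'b'], function(x) { return this.prefix + x; }, logger);
// -> [ '> a', '> b' ]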

Read more about the various ways to use transformations.

Laziness

Transducers remove the requirement of being lazy to optimize for things like take(10). However, it can still be useful to "bind" a collection to a set of transformations and pass it around, without actually evaluating the transformations. It's also useful if you want to apply transformations to a custom data type, get an iterator back, and rebuild another custom data type from it (there is still no intermediate array).

Whenever you apply transformations to an iterator it does so lazily. It's easy to convert array transformations into a lazy operation, just use the utility function iterator to grab an iterator of the array instead:

seq(iterator([1, 2, 3]),
    compose(
      map(x => x + 1),
      filter(x => x % 2 === 0)))
// -> <Iterator>

Our transformations are completely blind to the fact that they may or may not be evaluated lazily.

The transformer Protocol

Lastly, transducers.js supports a new protocol that I call the transformer protocol. If a custom data structure implements this, not only can we iterate over it in functions like seq, but we can also build up a new instance. That means seq won't return an iterator, but it will return an actual instance.

For example, here's how you would implement it in Immutable.Vector:

var t = require('./transducers');
Immutable.Vector.prototype[t.protocols.transformer] = {
  init: function() {
    return Immutable.Vector().asMutable();
  },
  result: function(vec) {
    return vec.asImmutable();
  },
  step: function(vec, x) {
    return vec.push(x);
  }
};

If you implement the transformer protocol, now your data structure will work with all of the builtin functions. You can just use seq like normal and you get back an immutable vector!

t.seq(Immutable.Vector(1, 2, 3, 4, 5),
      t.compose(
        t.map(function(x) { return x + 10; }),
        t.map(function(x) { return x * 2; }),
        t.filter(function(x) { return x % 5 === 0; }),
        t.filter(function(x) { return x % 2 === 0; })));
// -> Vector [ 30 ]

I hope you give transducers a try, they are really fun! And unlike Cognitect's project, mine is happy to receive pull requests. :)

Brett GaylorFrom Mozilla to new making

Yesterday was my last day as an employee of the Mozilla Foundation. I’m leaving my position as VP, Webmaker to create an interactive web series about privacy and the economy of the web.

I’ve had the privilege of being a “crazy Mofo” for nearly five years. Starting in early 2010, I worked with David Humphrey and researchers at the Center for Development of Open Technology to create Popcorn.js. Having just completed “Rip!”, I was really interested in mashups - and Popcorn was a mashup of open web technology questions (how can we make video as elemental an element of the web as images or links?) and formal questions about documentary (what would a “web native” documentary look like? what can video do on the web that it can’t do on TV?). That mashup is one of the most exciting creative projects I’ve ever been involved with, and led to a wonderful amount of unexpected innovation and opportunity. An award winning 3D documentary by a pioneer of web documentaries, the technological basis of a cohort of innovative (and fun) startups, and a kick ass video creation tool that was part of the DNA of Webmaker.org - which this year reached 200,000 users and facilitated the learning experience of over 127,200 learners face to face at our annual Maker Party.

Thinking about video and the web, and making things that aim to get the best of both mediums, is what brought me to Mozilla - and it’s what’s taking me to my next adventure.

I’m joining my friends at Upian in Paris (remotely, natch) to direct a multi-part web series around privacy, surveillance and the economy of the web. The project is called Do Not Track and it’s supported by the National Film Board of Canada, Arte, Bayerischer Rundfunk (BR), the Tribeca Film Institute and the Centre National du Cinéma. I’m thrilled by the creative challenge and humbled by the company I’ll be keeping - I’ve wanted to work with Upian since their seminal web documentary Gaza/Sderot and have been thrilled to watch from the sidelines as they’ve made Prison Valley, Alma, MIT’s Moments of Innovation project, and the impressive amount of work they do for clients in France and around the world. These are some crazy mofos, and they know how to ship.

Fake it Till You Make it

Mozilla gave me a wonderful gift: to innovate on the web, to dream big, without asking permission to do so. To in fact internalize innovation as a personal responsibility. To hammer into me every day the belief that for the web to remain a public resource, the creativity of everyone needs to be brought to the effort. That those of us in positions of privilege have a responsibility to wake up every day trying to improve the network. It’s a calling that tends to attract really bright people, and it can elicit strong feelings of impostor syndrome for a clueless filmmaker. The gift Mozilla gave me is to witness first hand that even the most brilliant people, or especially the most brilliant people, are making it up every single day. That’s why the web remains as much an inspiration to me today as when I first touched it as a teenager. Even though smart people criticize Silicon Valley’s hypercapitalism, or while governments are breeding cynics and mistrust by using the network for surveillance, I still believe the web remains the best place to invent your future.

I’m very excited, and naturally a bit scared, to be making something new again. Prepare yourself - I’m going to make shit up. I’ll need your help.

Working With

source

“Where some people choose software projects in order to solve problems, I have taken to choosing projects that allow me to work with various people. I have given up the comfort of being an expert , and replaced it with a desire to be alongside my friends, or those with whom I would like to be friends, no matter where I find them. My history among this crowd begins with friendships, many of which continue to this day.

This way of working, where collegiality subsumes technology or tools, is central to my personal and professional work. Even looking back over the past two years, most of the work I’ve done is influenced by a deep desire to work with rather than on. ” - On Working With Instead of On

David Humphrey, who wrote that, is who I want to be when I grow up. I will miss daily interactions with him, and many others who know who they are, very much. "In the context of working with, technology once again becomes the craft I both teach and am taught, it is what we share with one another, the occasion for our time together, the introduction, but not the reason, for our friendship.”

Thank you, Mozilla, for a wonderful introduction. Till the next thing we make!

Mozilla WebDev CommunityWebdev Extravaganza – October 2014

Once a month, web developers from across Mozilla don our VR headsets and connect to our private Minecraft server to work together building giant idols of ourselves for the hordes of cows and pigs we raise to worship as gods. While we build, we talk about the work that we’ve shipped, share the libraries we’re working on, meet new folks, and talk about whatever else is on our minds. It’s the Webdev Extravaganza! The meeting is open to the public; you should stop by!

You can check out the wiki page that we use to organize the meeting, view a recording of the meeting in Air Mozilla, or attempt to decipher the aimless scrawls that are the meeting notes. Or just read on for a summary!

Shipping Celebration

The shipping celebration is for anything we finished and deployed in the past month, whether it be a brand new site, an upgrade to an existing one, or even a release of a library.

Phonebook now Launches Dialer App

lonnen shared the exciting news that the Mozilla internal phonebook now launches the dialer app on your phone when you click phone numbers on a mobile device. He also warned that anyone who has a change they want to make to the phonebook app should let him know before he forgets all that he had to learn to get this change out.

Open-source Citizenship

Here we talk about libraries we’re maintaining and what, if anything, we need help with for them.

django-browserid 0.11 is out

I (Osmose) chimed in to share the news that a new version of django-browserid is out. This version brings local assertion verification, support for offline development, support for Django 1.7, and other small fixes. The release is backwards-compatible with 0.10.1, and users on older versions can use the upgrade guide to get up-to-date. You can check out the release notes for more information.

mozUITour Helper Library for Triggering In-Chrome Tours

agibson shared a wrapper around the mozUITour API, which was used in the Australis marketing pages on mozilla.org to trigger highlights for new features within the Firefox user interface from JavaScript running in the web page. More sites are being added to the whitelist, and more features are being added to the API to open up new opportunities for in-chrome tours.

Parsimonious 0.6 (and 0.6.1) is Out!

ErikRose let us know that a new version of Parsimonious is out. Parsimonious is a parsing library written in pure Python, based on formal Parsing Expression Grammars (PEGs). You write a specification for the language you want to parse in a notation similar to EBNF, and Parsimonious does the rest.

The latest version includes support for custom rules, which let you hook in custom Python code for handling cases that are awkward or impossible to describe using PEGs. It also includes a @rule decorator and some convenience methods on the NodeVisitor class that simplify the common case of single-visitor grammars.

contribute.json Wants More Prettyness

peterbe stopped by to show off the design changes on the contribute.json website. There’s more work to be done; if you’re interested in helping out with contribute.json, let him know!

New Hires / Interns / Volunteers / Contributors

Here we introduce any newcomers to the Webdev group, including new employees, interns, volunteers, or any other form of contributor.

Name        IRC Nick  Role                     Project
Cory Price  ckprice   Web Production Engineer  Various

Roundtable

The Roundtable is the home for discussions that don’t fit anywhere else.

Leeroy was Broken for a Bit

lonnen wanted to let people know that Leeroy, a service that triggers Jenkins test runs for projects on Github pull requests, was broken for a bit due to accidental deletion of the VM that was running the app. But it’s fixed now! Probably.

Webdev Module Updates

lonnen also shared some updates that have happened to the Mozilla Websites modules in the Mozilla Module System:

Static Caching and the State of Persona

peterbe raised a question about the cache timeouts on static assets loaded from Persona by implementing sites. In response, I gave a quick overview of the current state of Persona:

  • Along with callahad, djc has been named as co-maintainer, and the two are currently focusing on simplifying the codebase in order to make contribution easier.
  • A commitment to run the servers for Persona for a minimum period of time is currently working its way through approval, in order to help ease fears that the Persona service will just disappear.
  • Mozilla still has a paid operations employee who manages the Persona service and makes sure it is up and available. Persona is still accepting pull requests and will review, merge, and deploy them when they come in. Don’t be shy, contribute!

The answer to peterbe’s original question was “make a pull request and they’ll merge and push!”.

Graphviz graphs in Sphinx

ErikRose shared sphinx.ext.graphviz, which allows you to write Graphviz code in your documentation and have visual graphs be generated from the code. DXR uses it to render flowcharts illustrating the structure of a DXR plugin.


Turns out that building giant statues out of TNT was a bad idea. On the bright side, we won’t be running out of pork or beef any time soon.

If you’re interested in web development at Mozilla, or want to attend next month’s Extravaganza, subscribe to the dev-webdev@lists.mozilla.org mailing list to be notified of the next meeting, and maybe send a message introducing yourself. We’d love to meet you!

See you next month!

 

Carsten BookMozilla Plugincheck – It’s a Community Thing

Hi,

A lot of people are using Mozilla’s Plugincheck Page to make sure all the Plugins like Adobe Flash are up-to-date.

Schalk Neethling has created a great blog post about Plugincheck here.

So if you are interested in contributing to Plugincheck, check out Schalk’s blog post!

Thanks!

 

– Tomcat

Carsten BookThe past, Current and future

- The past -
I’ve now been a member of the A-Team (Automation and Tools Team) for about a year, and I’m also a full-time sheriff.

It came at the end of a lot of changes, both personal and job-wise. I moved from the Alps to the Munich area, to the city of Freising (and NO, I was not at the Oktoberfest ;)), and started working as a full-time sheriff after my QA/Partner Build and Releng duties.

It’s awesome to be part of the sheriff team, and it was also awesome to get so much help from the team, like from Ed, Wes and Ryan, to get started.

At one point I took over sheriff duty for my European timezone, and it was quite challenging being responsible for all the code trees, with backouts etc. and, later, checkin-needed requests as well :) What I really like as a sheriff is the work across the different divisions at Mozilla, and it’s exciting to work as a Mozilla sheriff too :)

One of my main contributions besides getting started was helping to create some how-to articles at https://wiki.mozilla.org/Sheriffing/How:To . I hope this will also help others get involved in sheriffing.

- Current -

We just switched over to using Treeherder as the new tool for sheriffing.

It’s quite new and it still feels that way (as with everything new, like a new car, there are questions like “how do I do the things I used to do in my old car?”), and there are still some bugs and things we can improve, but we will get there. It’s also an ideal time to get involved in sheriffing, for example by hammering on Treeherder.

And that leads into… :)

- The Future is You! -

Like every open source project, Mozilla is heavily dependent on community members like you, and besides all the other areas at Mozilla there is even the opportunity to work as a community sheriff. So let us know if you want to be involved as a community sheriff. You are always welcome. You can find us in the #ateam channel on irc.mozilla.org

As for myself, I’m planning to work more on community building besides my other tasks, like blogging about sheriffing and also taking more part in the open source meetings in Munich.

– Tomcat

Mozilla Reps CommunityCouncil Elections – Campaign and candidates

We’re excited to announce that we have 6 candidates for the fourth cycle of the Council elections, scheduled for October 18th. The Council has carefully reviewed the candidates and agrees that they are all extremely strong candidates to represent the Mozilla Reps program and the interests of Reps.

The candidates are:

Now, it is up to Reps to elect the candidates to fill the four available seats for a 12-month term.

As detailed in the wiki, we are now entering the “campaign” phase of this election cycle. This means that for the next 7 days, candidates will all have an opportunity to communicate their agenda, plans, achievements as a Rep/Mentor, and personal strengths to the Mozilla Reps voting body. They are encouraged to use their personal Mozilla Reps profile page, their personal website/blog, Mozilla wiki page or any other channel that they see fit to post information regarding their candidacy.

To guide them in this effort, the Council has prepared 6 questions that each candidate is asked to answer. We had originally wanted to have candidates go through mozmoderator but due to lack of time we will do this next election cycle. The questions are the following:

  • What are the top three issues that you would want the Council to address were you to join the Council?
  • What is in your view the Mozilla Reps program’s biggest strength and weakness?
  • Identify something that is currently not working well in the Mozilla Reps program and which you think could be easy to fix?
  • What past achievement as a Rep or Mentor are you most proud of?
  • What are the specific qualities and skills that you have that you think will help you be an effective Council member?
  • As a Mentor, what do you do to try to encourage your inactive Mentees to be active again?

In the spirit of innovation and to help bring a human face to the election process, the Council would like to add a new element to the campaign: video. This video is optional, but we strongly encourage candidates to create one.

That’s it for now. As always, if you have any questions, please don’t hesitate to ask the Council at

reps-council at mozilla dot com

We’ll be giving regular election updates throughout these next two weeks, so stay tuned!

And remember, campaigning ends and voting starts on October 18th!

Comments on discourse.

Daniel Stenberginternal timers and timeouts of libcurl

Bear with me. It is time to take a deep dive into the libcurl internals and see how it handles timeouts and timers. This is meant as useful information to libcurl users but even more as insights for people who’d like to fiddle with libcurl internals and work on its source code and architecture.

socket activity or timeout

Everything internally in libcurl is using the multi, asynchronous, interface. We avoid blocking calls as far as we can. This means that libcurl always either waits for activity on a socket/file descriptor or for the time to come to do something. If there’s no socket activity and no timeout, there’s nothing to do and it just returns back out.

It is important to remember here that the API for libcurl doesn’t force the user to call it again within or at the specific time and it also allows users to call it again “too soon” if they like. Some users will even busy-loop like crazy and keep hammering the API like a machine-gun and we must deal with that. So, the timeouts are mostly to be considered advisory.

many timeouts

A single transfer can have multiple timeouts. For example one maximum time for the entire transfer, one for the connection phase and perhaps even more timers that handle for example speed caps (which make libcurl not transfer data faster than a set limit) or that detect transfer speeds below a certain threshold within a given time period.

A single transfer is done with a single easy handle, which holds a list of all its timeouts in a sorted list. It allows libcurl to return a single time left until the nearest timeout expires without having to bother with the remainder of the timeouts (yet).

Curl_expire()

… is the internal function to set a timeout to expire a certain number of milliseconds into the future. It adds a timeout entry to the list of timeouts. Expiring a timeout just means that it’ll signal the application to call libcurl again. Internally we don’t have any identifiers to the timeouts, they’re just a time in the future we ask to be called again at. If the code needs that specific time to really have passed before doing something, the code needs to make sure the time has elapsed.

Curl_expire_latest()

A newcomer to the timeout team. I figured out we need this function for the case where we are in a state where we need to be called no later than a certain specific future time. It will not add a new timeout entry to the timeout list if there is already a timeout that expires earlier than the specified time limit.

This function is useful for example when there’s a state in libcurl that varies over time but has no specific time limit to check for. Like transfer speed limits and the like. If Curl_expire() is used in this situation instead of Curl_expire_latest() it would mean adding a new timeout entry every time, and for the busy-loop API usage cases it could mean adding an excessive amount of timeout entries. (And there was a scary bug reported that got “tens of thousands of entries” which motivated this function to get added.)
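
To illustrate the difference, here is a simplified model in JavaScript (not curl’s actual C code) where the two functions only differ in whether they will add an entry when something earlier is already queued:

// timeouts is the per-handle sorted list described above,
// holding absolute expiry times in milliseconds.
function expire(handle, millisFromNow) {
  handle.timeouts.push(Date.now() + millisFromNow);
  handle.timeouts.sort(function(a, b) { return a - b; });
}

function expireLatest(handle, millisFromNow) {
  var when = Date.now() + millisFromNow;
  // If something already expires at or before `when`, we are covered
  // and don't add another entry.
  if (handle.timeouts.length > 0 && handle.timeouts[0] <= when) {
    return;
  }
  expire(handle, millisFromNow);
}

var handle = { timeouts: [] };
expire(handle, 100);
expireLatest(handle, 500); // no-op: the 100 ms entry already expires earlier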

timeout removals

We don’t remove timeouts from the list until they expire. Like for example if we have a condition that is timing dependent, then we set a timeout with Curl_expire() and we know we should be called again at the end of that time.

If we wouldn’t add the timeout and there’s no socket activity on the socket then we may not be called again – ever.

When an internal state transitions into something else and we therefore don’t need a previously set timeout anymore, we have no handle or identifier to the timeout so it cannot be removed. It will instead lead to us getting called again when the timeout triggers even though we didn’t really need it any longer. As the API allows this anyway, this is already handled by the logic, and getting called an extra time is usually very cheap and is not considered a problem worth addressing.

Timeouts are removed automatically from the list of timers when they expire. Timeouts whose time has passed are removed from the list, and the timers following them then move to the front of the queue and are used to calculate how long the single returned timeout should be next.

The only internal API to remove timeouts that we have removes all timeouts, used when cleaning up a handle.

many easy handles

I’ve mentioned how each easy handle treats their timeouts above. With the multi interface, we can have any amount of easy handles added to a single multi handle. This means one list of timeouts for each easy handle.

To handle many thousands of easy handles added to the same multi handle, all with their own timeout (as each easy handle only exposes its closest timeout), it builds a splay tree of easy handles sorted on the timeout time. It is a splay tree rather than a sorted list to allow really fast insertions and removals.

As soon as a timeout expires from one of the easy handles and it moves to the next timeout in its list, it means removing one node (easy handle) from the splay tree and inserting it again with the new timeout timer.

Raniere SilvaMathml October Meeting

Mathml October Meeting


This is a report about the Mozilla MathML October Meeting (see the announcement). The topics of the meeting can be found in this PAD (local copy of the PAD). The meeting happened entirely at appear.in, and because of that we don’t have a log.

The next meeting will be in November 14th (note that November 14th is Friday). Some countries will move to winter time and others to summer time so we will change the time and announce it later on mozilla.dev.tech.mathml. Please add topics in the PAD.

Read more...

Erik VoldWhat is the Jetpack/Add-on SDK?

There are many opinions on this, and I think I’ve heard them all, but no one has worked on this project for as long as I have, so I’d like to write what I think the Jetpack/Add-on SDK is.

Originally the Jetpack prototype was developed as a means to make add-on development easier for web developers. I say this because it was both the impression that I got and one of the bullet points Aza Raskin listed in an email asking me to be a project ambassador. This was very appealing to me at the time because I had no idea how to write add-ons back then. The prototype however provided chrome access from the beginning, which is basically the ability to do almost anything that you want with the browser and the system it runs on. So to my mind the Jetpack prototype was an on-ramp to add-on and Firefox development, because it also did not have the same power that add-ons had; it had a subset of their abilities.

When Jetpack graduated from being a prototype it was renamed to the Add-on SDK, and it included the seeds of something that was lacking in add-on development: sharable modules. These modules could be written using the new tech at the time, CommonJS, which is now widely used and commonplace. The reason for this, as I understood it, was both to make add-on development easier and to make reviewing add-ons easier (because each version of a module would only need to be reviewed once). When I started writing old school add-ons I quickly saw the value of the former, and later, when I became an AMO reviewer, the deep value of the latter also quickly became apparent.

In order to make module development decentralized it was important to provide chrome access to those modules that need it, otherwise all of the SDK features would have to be developed and approved in-house by staffers, as is done with Google Chrome, which would not only hamper creativity, but also defeat the purpose for having a module system. This is our advantage over Google Chrome, not our weakness.

To summarize I feel that the Jetpack/Add-on SDK is this:

  1. An on-ramp to extension and Firefox development for web devs, with a shallow learning curve.
  2. A means for sharing code/modules, which reduces review time.
  3. A quicker way to develop add-ons than previous methods, because there is less to learn (take a look at a chrome.manifest or bootstrap.js file if you have doubts).
  4. A means for testing both add-ons and the browser itself (possibly the easiest way to write tests for add-ons and Firefox when used in combination with point 2).
  5. A more reliable way to write extensions than previous methods: because the platform code changes so much, the module system (point 2) can provide an abstraction layer such that developers can blissfully ignore platform changes, which reinforces point 3.

Ben KeroSeptember ’14 Mercurial Code Sprint

A week ago I was fortunate enough to attend the latest code sprint of the Mercurial project. This was my second sprint with this project, and I took away quite a bit from the meeting. The sprint had around 20 attendees and took the form of a large group, with smaller groups splitting off intermittently to discuss particular topics. I had seen a few of the attendees before at a previous sprint I attended.

Joining me at the sprint were two of my colleagues Gregory Szorc (gps) and Mike Hommey (glandium). They took part in some of the serious discussions about core bugfixes and features that will help Mozilla scale its use of Mercurial. Impressively, glandium had only been working on the project for mere weeks, but was able to make serious contributions to the bundle2 format (an upcoming feature of Mercurial). Specifically, we talked to Mercurial developers about some of the difficulties and bugs we’ve encountered with Mozilla’s “try” repository due to the “tens of thousands of heads” and the events that cause a serving request to spin forever.

By trade I’m a sysadmin/DevOps person, but I also do have a coder hat that I don from time to time. Still though, the sprint was full of serious coders who seemingly worked on Mercurial full-time. There were attendees who had big named employers, some of whom would probably prefer that I didn’t reveal their identities here.

Unfortunately, due to my lack of familiarity with a lot of the deep-down internals, I was unable to contribute to some of the discussions. It was primarily a learning experience for me, both about the process by which direction-driving decisions are made for the project (mpm’s BDFL status) and about all of the considerations that go into choosing a particular method to implement an idea.

That’s not to say I was entirely useless. My knowledge of systems and package management meant I was able to collaborate with another developer (kiilerix) to improve the Docker package building support, including preliminary work for building (un)official Debian packages for the first time.

I also learned about some infrequently used features or tips about Mercurial. For example, folks who come from a background of using git often complain about Mercurial’s lack of interactive rebase functionality. The “histedit” extension provides this feature. Much like many other features of Mercurial, this is technically “in core”, but not enabled by default. Adding a line such as “histedit =” in the “[extensions]” section of your “hgrc” file enables it. It allows all the expected operations: picking, folding, dropping, editing, and modifying commit messages.

Changeset evolution is another feature that’s been coming for a long time. It enables developers to safely modify history and be able to propagate those changes to any down/upstream clones. It’s still disabled by default, but is available as an extension. Gregory Szorc, a colleague of mine, has written about it before. If you’re curious you can read more about it here.

One of the features I’m most looking forward to is sparse checkouts. Imagine, à la Perforce, being able to check out only a subtree or subtrees of a repository using ‘--include subdir1/’ and ‘--exclude subdir2/’ arguments during cloning/updating. This is what sparse checkouts will allow. Additionally, functionality is being planned to enable saved ‘profiles’ of subdirs for different uses. For instance, specifying the ‘--enable-profile mobile’ argument will allow a saved list of included and excluded items. This seems like a really powerful way of building lightweight build profiles for each different type of build we do. Unfortunately, to be properly implemented it is waiting on some other code to be developed, such as sharded manifests.

One last thing I’d like to tell you about is an upcoming free software project for Mercurial hosting named Kallithea. It was born from the liberated code of the RhodeCode project. It is still in its infancy (version 0.1 as of the writing of this post), but it has some attractive features for viewing repositories, such as visualizations of changelog graphs, diffs, code reviews, a built-in editor, LDAP support, and even a JSON-RPC API for issue tracker integration.

In all I feel it was a valuable experience for me to attend that benefited both the Mercurial project and myself. I was able to lend some of my knowledge about building packages and familiarity with operations of large-scale hgweb serving, and was able to learn a lot about the internals of Mercurial and understand that even the deep core code of the project isn’t very scary.

I’m very thankful for my ability to attend and look forward to attending the next Sprint in the following year.

Mozilla Open Policy & Advocacy BlogSpotlight on the ACLU: A Ford-Mozilla Open Web Fellow Host

{The Ford-Mozilla Open Web Fellows applications are now open. To shed light on the fellowship, we will be featuring posts from the 2015 Host Organizations. Today’s post comes from Kade Crockford, the Director of the Technology for Liberty program at the ACLU of Massachusetts. We are so excited to have the ACLU as a host organization. It has a rich history of defending civil liberties, and has been on the forefront of defending Edward Snowden following his revelations of the NSA surveillance activities. The Ford-Mozilla Open Web fellow at the ACLU of Massachusetts will have a big impact in protecting Internet freedom.}


Spotlight on the ACLU: A Ford-Mozilla Open Web Fellow Host
By Kade Crockford, Director of Technology for Liberty, ACLU of Massachusetts

Intellectual freedom, the right to criticize the government, and freedom of association are fundamental characteristics of a democratic society. Dragnet surveillance threatens them all. Today, the technologies that provide us access to the world’s knowledge are mostly built to enable a kind of omnipotent tracking human history has never before seen. The law mostly works in favor of the spies and data-hoarders, instead of the people. We are at a critical moment as the digital age unfolds: Will we rebuild and protect an open and free Internet to ensure the possibility of democracy for future generations?

We need your help at the ACLU of Massachusetts to make sure we, as a society, answer that question in the affirmative.


The ACLU is the oldest civil rights and civil liberties organization in the U.S. It was founded in 1920 in the wake of the imprisonment of anti-World War I activists for distributing anti-war literature, and in the midst of widespread government censorship of materials deemed obscene, radical or insufficiently patriotic. In 1917, the U.S. Congress had passed the Espionage Act, making it a crime to interfere with military recruitment. A blatantly unconstitutional “Sedition Act” followed in 1918, making illegal the printing or utterance of anything “disloyal…scurrilous, or abusive” about the United States government. People like Rose Pastor Stokes were subsequently imprisoned for long terms for innocuous activity such as writing letters to the editor critical of US policy. In 1923, muckraking journalist Upton Sinclair was arrested simply for reading the text of the First Amendment at a union rally. Today, thanks to almost one hundred years of effective activism and impact litigation, people would be shocked if police arrested dissidents for writing antiwar letters to the editor.

But now we face an even greater threat: our primary means of communication, organization, and media—the Internet—is threatened by pervasive, dragnet surveillance. The Internet has opened up the world’s knowledge to anyone with a connection, but it has also put us under the microscope like never before. The stakes couldn’t be higher.

That’s why the ACLU—well versed in the Bill of Rights, constitutional precedent, community organizing, advocacy, and public education—needs your help. If we want to live in an open society, we must roll back corporate and government electronic tracking and monitoring, and pass on a free Internet to our children and theirs. We can’t do it without committed technologists who understand systems and code. Democracy requires participation and agitation; today, it also requires freedom fighters with computer science degrees.

Apply to become a Ford-Mozilla Open Web Fellow at the ACLU of Massachusetts if you want to put your technical skills to work on a nationally-networked team made up of the best lawyers, advocates, and educators. Join us as we work to build a free future. There’s much to be done, and we can’t wait for you to get involved.

After all, Internet freedom can’t protect itself.


Apply to be a 2015 Ford-Mozilla Open Web Fellow. Visit www.mozilla.org/advocacy.

Mozilla Reps CommunityReps Weekly Call – October 9th 2014

Last Thursday we had our regular weekly call about the Reps program where we talk about what’s going on in the program and what Reps have been doing during the last week.


Summary

Detailed notes

AirMozilla video

https://air.mozilla.org/reps-weekly-20141009/

Don’t forget to comment about this call on Discourse and we hope to see you next week!

Julien VehentAutomated configuration analysis for Mozilla's TLS guidelines

Last week, we updated Mozilla's Server Side TLS guidelines to add a third recommended configuration. Each configuration maps to a target compatibility level:

  1. Old supports Windows XP pre-SP2 with IE6 and IE7. Those clients do not support AES ciphers, and for them we need to maintain a configuration that accepts 3DES, SSLv3 and SHA-1 certificates.
  2. Intermediate is the new default, and supports clients from Firefox 1 until now. Unlike the old configuration, SSLv3, 3DES and SHA-1 are disabled. We also recommend using a Diffie-Hellman parameter of 2048 bits when PFS DHE ciphers are in use (note that java 6 fails with a DH param > 1024 bits, use the old configuration if you need java 6 compatibility).
  3. Modern is what we would really love to enable everywhere, but is not yet supported by enough clients. This configuration only accepts PFS ciphers and TLSv1.1+. Unfortunately, clients older than Firefox 27 will fail to negotiate this configuration, so we reserve it for services that do not need backward compatibility before FF27 (webrtc, sync1.5, ...).

Three recommended configurations mean more choice, but also more work to evaluate a given endpoint. To help with the analysis of real-world TLS setups, we rely on cipherscan, a wrapper to openssl s_client that quickly pulls TLS configuration from a target. I wrote the initial version of cipherscan last year, and I'm very happy to see it grow with major contributions from Hubert Kario (Red Hat) and a handful of other people.

Today I'm releasing an extension to cipherscan that evaluates a scan result against our guidelines. By running it against a target, it will tell you what the current configuration level is, and what should be changed to reach the next level.

$ ./analyze.py -t jve.linuxwall.info
jve.linuxwall.info:443 has intermediate tls

Changes needed to match the old level:
* consider enabling SSLv3
* add cipher DES-CBC3-SHA
* use a certificate with sha1WithRSAEncryption signature
* consider enabling OCSP Stapling

Changes needed to match the intermediate level:
* consider enabling OCSP Stapling

Changes needed to match the modern level:
* remove cipher AES128-GCM-SHA256
* remove cipher AES256-GCM-SHA384
* remove cipher AES128-SHA256
* remove cipher AES128-SHA
* remove cipher AES256-SHA256
* remove cipher AES256-SHA
* disable TLSv1
* consider enabling OCSP Stapling

The analysis above evaluates my blog. I'm aiming for intermediate level here, and it appears that I reach it. I could improve further by enabling OCSP Stapling, but that's not a hard requirement.

If I wanted to reach modern compatibility, I would need to remove a few ciphers that are not PFS, disable TLSv1 and, again, enable OCSP Stapling. I would probably want to update my ciphersuite to the one proposed on Server Side TLS #Modern compatibility.

Looking at another site, twitter.com, the script returns "bad ssl". This is because twitter still accepts RC4 ciphers, and in the opinion of analyze.py, this is a bad thing to do. We really don't trust RC4 anymore.

$ ./analyze.py -t twitter.com
twitter.com:443 has bad ssl

Changes needed to match the old level:
* remove cipher ECDHE-RSA-RC4-SHA
* remove cipher RC4-SHA
* remove cipher RC4-MD5
* use a certificate with sha1WithRSAEncryption signature
* consider enabling OCSP Stapling

Changes needed to match the intermediate level:
* remove cipher ECDHE-RSA-RC4-SHA
* remove cipher RC4-SHA
* remove cipher RC4-MD5
* remove cipher DES-CBC3-SHA
* disable SSLv3
* consider enabling OCSP Stapling

Changes needed to match the modern level:
* remove cipher ECDHE-RSA-RC4-SHA
* remove cipher AES128-GCM-SHA256
* remove cipher AES128-SHA
* remove cipher RC4-SHA
* remove cipher RC4-MD5
* remove cipher AES256-SHA
* remove cipher DES-CBC3-SHA
* disable TLSv1
* disable SSLv3
* consider enabling OCSP Stapling

The goal of analyze.py is to help operators define a security level for their site, and use this script to verify their configuration. If you want to check compatibility with a target level, you can use the -l flag to specify the level you want:

$ ./analyze.py -t stooge.mozillalabs.com -l modern
stooge.mozillalabs.com:443 has modern tls

Changes needed to match the modern level:
* consider enabling OCSP Stapling

Our guidelines are opinionated, and you could very well disagree with some of the recommendations. The discussion is open on the Talk section of the wiki page, I'm always happy to discuss them, and make them helpful to as many people as possible.

You can get cipherscan and analyze.py from the github repository at https://github.com/jvehent/cipherscan.

Mozilla Release Management TeamFirefox 33 beta9 to RC

  • 13 changesets
  • 30 files changed
  • 215 insertions
  • 157 deletions

Extension  Occurrences
cpp        6
h          4
ini        2
sh         1
nsh        1
mm         1
html       1
hgtags     1

Module     Occurrences
mobile     12
content    5
browser    4
gfx        3
widget     2
toolkit    1
testing    1
js         1

List of changesets:

JW Wang: Bug 994292 - Call SpecialPowers.pushPermissions() to ensure permission change is completed before continuing the rest of the tests. r=baku, a=test-only - 16bd77984527
Ryan VanderMeulen: Bug 1025040 - Disable test_single_finger_desktop.py on Windows for frequent failures. a=test-only - 3d1029947008
J. Ryan Stinnett: Bug 989168 - Disable browser_manifest_editor. r=jryans, a=test-only - 7fefb97d2f75
Jim Mathies: Bug 1068189 - Force disable browser.tabs.remote.autostart in non-nightly builds. r=felipe, a=sledru - 5217e39df54c
Jim Mathies: Bug 1068189 - Take into account 'layers.offmainthreadcomposition.testing.enabled' settings when disabling remote tabs. r=billm, a=sledru - 7b2887bd78a0
Randell Jesup: Bug 1077274: Clean up tracklists r=jib a=dveditz - f0253d7268bb
Kan-Ru Chen (陳侃如): Bug 942411 - Use SpecialPowers.pushPermissions to make sure the permission is set before test run. r=smaug, a=test-only - bbf1c4e2ddce
Brian Bondy: Bug 1049521 - Only register for types when there is no default in either of HKLM or HKCU and fix users affected by bad file associations. r=rstrong, a=sledru - d126cd83b4b8
Jon Coppeard: Bug 1061214. r=terrence, a=sledru - bbc35ec2c90e
Nicolas Silva: Bug 1074378 - Blocklist driver Intel GMAX4500HD v 8,15,10,1749. r=Bas, a=sledru - e8360a0c7d74
Ralph Giles: Bug 772347 - Back out MacOS X video wakelock. a=sledru - df37248fafcb
Nicolas Silva: Bug 1076825 - Don't crash release builds if allocating the buffer on white failed in RotatedBuffer.cpp. r=Bas, a=sledru - d89ec5b69c01
Nicolas Silva: Bug 1044975 - Don't crash if mapping D3D11 shader constant buffers fails. r=Bas a=sledru - 9bf2a5b5162d

Michael KaplyFirefox 24 ESR EOL

I just want to take a moment to remind everyone that the Firefox 24 ESR will be officially replaced by the Firefox 31 ESR this coming Tuesday, October 14, 2014. At that time, the Firefox 24 ESR will be unsupported. Firefox 24 ESR users will be automatically upgraded to the Firefox 31 ESR.

I would hope by now everyone has tested with the Firefox 31 ESR, but if you haven't, it might be time to start.

The CCK2 has been fully updated to work with Firefox 31 and beyond.

On another note, there are major packaging changes coming to Firefox on Mac due to changes to the way applications are signed. You can read more about it in this bug. This will primarily impact the locations of autoconfig files, preferences and the distribution directory. I'll try to find some time soon to document these changes.

Doug BelshawSurvey: 5 proposals for Web Literacy Map 2.0

We’re currently working on a v2.0 of Mozilla’s Web Literacy Map. From the 38 interviews with stakeholders and community members I’ve identified 21 emerging themes for Web Literacy Map 2.0 as well as some ideas for Webmaker. The canonical home for everything relating to this update can now be found on the Mozilla wiki.


While there are some decisions that need to be made by paid contributors / staff (e.g. design, combining competencies, wording of skills) there are some that should be made by the wider community. I’ve come up with five proposals in this survey:


http://goo.gl/forms/LKNSNrXCnu


The five proposals are:

  1. I believe the Web Literacy Map should explicitly reference the Mozilla manifesto.
  2. I believe the three strands should be renamed ‘Reading’, ‘Writing’ and ‘Participating’.
  3. I believe the Web Literacy Map should look more like a ‘map’.
  4. I believe that concepts such as ‘Mobile’, ‘Identity’, and ‘Protecting’ should be represented as cross-cutting themes in the Web Literacy Map.
  5. I believe a ‘remix’ button should allow me to remix the Web Literacy Map for my community and context.

Please do take the time to fill in the survey. Any meta feedback should go to @dajbelshaw / doug@mozillafoundation.org.

Daniel StenbergCoverity scan defect density: 0.00

A couple of days ago I decided to stop slacking and grab this long dangling item in my TODO list: run the coverity scan on a recent curl build again.

Among the static analyzers, coverity does in fact stand out as the very best one I can use. We run clang-analyzer against curl every night and it hasn’t reported any problems at all in a while. This time I got almost 50 new issues reported by Coverity.

To put it shortly, a little less than half of them were issues done on purpose: for example we got several reports on ignored return codes we really don’t care about, and there were several reports on dead code that is conditionally built on other platforms than the one I used to do this with.

But there were a whole range of legitimate issues. Nothing really major popped up but a range of tiny flaws that were good to polish away and smooth out. Clearly this is an exercise worth repeating every now and then.

End result

21 new curl commits that mention Coverity. Coverity now says “defect density: 0.00” for curl and libcurl since it doesn’t report any more flaws. (That’s the number of flaws found per thousand lines of source code.)

Want to see?

I can’t seem to make all the issues publicly accessible, but if you do want to check them out in person just click over to the curl project page at coverity and “request more access” and I’ll grant you view access, no questions asked.

Eitan Isaacson

An understated feature in desktop Firefox is the option to suppress the text and background colors that content authors choose for us, and instead go with the plain old black on white with a smattering of blue and purple links. In other words, 1994.

Why is this feature great? Because it hands control back to the user and allows people with visual impairments to tweak things just enough to make the web readable.

Somebody once asked on the #accessibility IRC channel why they can’t turn off content colors in Firefox for Android. So it seemed like a good idea to re-introduce that option in the form of an extension. There are a few color-related add-ons on AMO, but I just submitted another one, and you can get it here. This is what the toggle option looks like:

Remove colors option in tools menu

Since the color attribute was introduced, the web has evolved a lot. We really can’t go back to the naive, monochrome days of the 90s. Many sites use background images and colors in novel ways, and use backgrounds to convey important information. Sometimes disabling page colors will really break things. So if you remove colors on AMO itself, you get:

AMO with colors removed

Okayish, eh?

As you can see, it isn’t perfect, but it does make the text more readable for some. Having a menu item that doesn’t take too much digging to find will hopefully help folks go back and forth between the two modes and get the best out of both worlds.


Peter BengtssonPremailer on Python 3

Premailer is probably my most successful open source project in recent years. I base that on the fact that 25 different people have committed to it.

Today I merged a monster PR by Michael Jason Smith of OnlineGroups.net.

Basically, it makes premailer work on Python 3, PyPy and Python 2.6. Check out the tox.ini file. Test coverage is still 100%.

If you look at the patch, the core of the change is actually surprisingly small. The majority of the "secret sauce" is basically a bunch of import statements that are split on if sys.version_info >= (3, ): plus some minor changes around UTF-8 encoding. The rest of the changes are basically test sit-ups.
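
To give a flavour of that pattern, here is a minimal sketch of a version-gated import shim. The specific modules below are picked for illustration and aren't necessarily the ones premailer actually guards:

import sys

if sys.version_info >= (3,):
    # Python 3: several stdlib modules were renamed or reorganized
    from urllib.parse import urljoin, urlparse
    from io import StringIO
    text_type = str
else:
    # Python 2: legacy module locations
    from urlparse import urljoin, urlparse
    from StringIO import StringIO
    text_type = unicode

Code elsewhere can then refer to text_type (or the imported names) without caring which interpreter it's running on.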

A really interesting thing that hit us was that the code had assumptions about ordering. Basically, the tests assumed that the order of certain things in the resulting output was predictable even though it was produced from a dict. dicts famously make no guarantees about the order you get things out in, and that's by design. That it worked until now was mostly luck.
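
Here's a tiny, made-up example of the trap (not premailer's actual code): output assembled by iterating a dict only compares equal by accident, so tests should impose an explicit order first.

# Dict iteration order is an implementation detail, so tests shouldn't
# assert on output that depends on it.
rules = {"color": "red", "font-size": "12px", "margin": "0"}

# Fragile: the result depends on dict iteration order
fragile = "; ".join("%s:%s" % (k, v) for k, v in rules.items())

# Robust: sort before comparing in a test
stable = "; ".join("%s:%s" % (k, v) for k, v in sorted(rules.items()))
assert stable == "color:red; font-size:12px; margin:0"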

Anyway, check it out. Now that we have a tox.ini file it should become much easier to run tests which I hope means patches will be better checked as they come in.

Mozilla Open Policy & Advocacy BlogLaunching the Ford-Mozilla Open Web Fellows Program, a Global Initiative to Recruit the Heroes of the Open Internet

{Re-posting from the Mozilla Blog on Sep 30, 2014}

By Mark Surman, Executive Director, Mozilla Foundation; and Darren Walker, President, Ford Foundation

We are at a critical point in the evolution of the Internet. Despite its emergence as an integral part of modern life, the Internet remains a contested space. Far too often, we see its core ethos – a medium where anyone can make anything and share it with anyone – undermined by forces that wish to make it less free and open. In a world in which the future health of the Internet is vital to democratic discourse and a free flow of ideas, we need a band of dedicated individuals standing ready to protect it.

That’s why we are joining together today to launch the Ford-Mozilla Open Web Fellows program, a landmark initiative to create a worldwide community of leaders who will advance and protect the free and open Web.

Working in the Open on Core Issues with the World’s Most Innovative Organizations

Ford-Mozilla Fellows will be immersed in projects that create a better understanding of Internet policy issues among civil society, policy makers, and the broader public. Fellows will be technologists, hackers, and makers who work on a range of Internet policy issues, from privacy and security to surveillance and net neutrality. They will create an affirmative agenda and improve coordination across the sector, boosting the overall number of people throughout society (in the nonprofit, government, philanthropy, academic and corporate sectors) who protect the Internet. At present, a whole new architecture is emerging at NGOs and in government where a technology lens is vital to achieving results, just as a focus on law and communications was important in building previous capacity. Fellows will be encouraged to work in the open so that they can share their experiences and learnings with others. Around the world, civil society organizations are working under difficult conditions to advance social justice and create a thriving digital society where all voices have an opportunity to be heard.

Fellows will serve as technology advisors, mentors and ambassadors to host organizations, helping to better inform the policy discussion. We are thrilled to name the first cohort organizations that will host a Fellow in the first year of the program. They include:

A Call for Fellowship Applicants

Today also marks the official opening of the application window. Beginning immediately, people can apply to be a Ford-Mozilla Open Web Fellow by visiting www.mozilla.org/advocacy. The application deadline is December 31, 2014.

We are looking for emerging leaders who have a passion for influencing and informing the public policies that impact the Internet. Selected Fellows will have a track record of making and contributing to projects and an interest in working with Mozilla, the Ford Foundation, and our host organizations on specific initiatives to advance and protect the free and open Web.

Protecting the Internet

The Internet has the potential to be the greatest global resource in the history of the world, accessible to and shaped by all people. It has the promise to be the first medium in which anyone can make anything, and share it with anyone. In many ways, it already has helped bend the arc of history towards enlightenment and justice.

But continuing in that direction isn’t guaranteed without help. For all the good that can come from the Internet, in some areas it is already being used to weaken society and concentrate power in the hands of the few, and to shut down democratic discourse. The fight over preserving net neutrality in the U.S.; the debate over governments undermining the Internet to further surveillance efforts; the curtailing of speech and access to the Internet by authoritarian regimes — these are all threats to the Internet and to civil rights.

We need to take up the challenge to prevent this from happening. We must support the heroes – the developers, advocates and people who are fighting to protect and advance the free and open Internet. We must train the next generation of leaders in the promise and pitfalls of technology. We need to build alliances and infrastructure to bridge technology policy and social policy.

The Ford-Mozilla Open Web Fellows program is an effort to find and support the emerging leaders in the fight to protect the free and open Internet. Apply to become a Ford-Mozilla Fellow and tell us how you would stand up to protect and advance the Web to continue the effort to bend the arc toward justice.

Jeff WaldenHolt v. Hobbs: Is a prisoner’s 1/2″ beard so dangerous that he can’t have it even if his religion requires it?

Now the second, final argument this trip. (There are other arguments this week, some interesting enough to attend. But I ran out of time to prepare for them or attend them.) Holt v. Hobbs is much simpler than Heien v. North Carolina, because one side’s arguments are “almost preposterous”. So this post is (slightly) breezier.

This line was a bit different from the Heien line: more people attending for (this) argument, fewer people present simply for opening day. The line was possibly less talkative (and I still had briefs to read, although I never intended to read all twenty-one [!] of them), but there were still good discussions with local law students, the author of one of the amicus briefs (which I naturally read standing in line), and others. Good fun again.

The line at 05:49 for Holt v. Hobbs
Another day, another line

Gregory Holt and his would-be beard

Gregory Holt is a Muslim inmate in the Arkansas prison system. (He actually goes by Abdul Maalik Muhammad now; Gregory Holt is his birth [legal?] name. News stories and legal discussion refer to him as Holt, and in some sense I want this in that corpus, so I use Holt here.) Holt interprets Islamic law to require he have a beard.

Allah’s Messenger said, “Cut the moustaches short and leave the beard (as it is).”

The Translation of the Meanings of Sahih Al-Bukhari ¶ 5893 (Muhammad Muhsin Khan trans., Darussalam Pubs. 1997)

A small request. Reasonable? Quoting the ever-colorful Justice Scalia in oral argument, “Religious beliefs aren’t reasonable. I mean, religious beliefs are categorical. You know, it’s God tells you. It’s not a matter of being reasonable.” Reasonable or not, a beard isn’t an obviously dangerous request like, “My religion requires I carry a broadsword.” And as a conciliatory gesture Holt moderated his request to a half-inch beard.

Sunrise over the Court, with a camera crew and reporter in the foreground
No matter how many arguments I go to (this makes ten), the sunrise over the Court will never get old

Arkansas: no beards

Arkansas doesn’t permit prisoners to grow beards (except to the natural extent between twice-weekly shaves). There’s an exception for prisoners with medical conditions (typically burn victims), shaving only to 1/4″. But no religious exceptions.

Arkansas’s justifications are three. A beard could hide contraband. A bearded prisoner can shave to disguise himself, hindering rapid identification and perhaps aiding an escape (see The Fugitive). And it’s a hassle measuring half-inch beards on everyone.

The law’s requirements

Twenty-odd years ago, Holt would likely have been out of luck. Turner v. Safley permitted regulations “reasonably related to legitimate penological objectives”. And Justice Scalia’s Employment Division v. Smith says that as a constitutional matter, generally-applicable laws may burden religious exercise, with objectors having no recourse. It’d be an uphill slog getting past the no-beard rule.

But in the mid-1990s to 2000, Congress near-unanimously statutorily protected some exercises of religion, even against generally-applicable laws. (Lest it be thought this was protection specifically, or only, of Christian beliefs: the original motivating case was a Native American group that used a hallucinogen for sacramental purposes.) In particular Congress enacted the Religious Land Use and Institutionalized Persons Act (RLUIPA, usually “ruh-loo-pah”), stating:

No government shall impose a substantial burden on the religious exercise of [a prisoner], even if the burden results from a rule of general applicability, unless the government demonstrates that imposition of the burden on that person—

  1. is in furtherance of a compelling governmental interest; and
  2. is the least restrictive means of furthering that compelling governmental interest

And “religious exercise” is later defined as:

The term “religious exercise” includes any exercise of religion, whether or not compelled by, or central to, a system of religious belief.

Now, prisons may regulate in pursuit of normal prison aims. But regulations can’t “substantial[ly] burden” a prisoner’s “religious exercise”, regardless how important the exercise is(n’t) in the prisoner’s belief system, even if the regulation is general and doesn’t target religion — unless the government demonstrates the regulation satisfies a “compelling interest” that can’t be addressed less restrictively. This phrasing comes from strict scrutiny: the strongest form of review American courts apply to laws. Unlike the Turner/Smith regime, these requirements have teeth.

The oral argument line, extending down the block at 07:19
Almost go-time to advance onto the plaza to receive line numbers

Evaluating Arkansas’s no-beard rule applied to Holt

As a threshold matter, Holt must wish to engage in “religious exercise” that is “substantial[ly] burden[ed]”. Once Holt claims the belief, courts won’t second-guess it. They will consider whether the belief is sincere: no opportunistic exception requests for unwarranted benefits. But no one contests the sincerity of Holt’s beliefs. If Holt refuses to be shaved, he’ll suffer various disciplinary actions and bad consequences: “loss of privileges, punitive segregation, punitive work assignments, and loss of good-time credits”. Certainly a substantial burden.

Now Arkansas must demonstrate — with evidence, persuasively — both a compelling interest, and least restrictive means. Put another way, does Arkansas’s regulation pass strict scrutiny?

Arkansas’s claimed interests are “prison safety and security”. But a no-beards rule only marginally advances these goals, and “the government does not have a compelling interest in each marginal percentage point by which its goals are advanced.” Arkansas’s interest must be more specific: an interest specifically in no beards.

It’s hard to say Arkansas has a compelling interest when the rules in forty-odd prison systems nationwide, and various penal code recommendations, either impose no restrictions on beards among prisoners, or would allow Holt his 1/2″ beard. Arkansas is an outlier. And Arkansas’s medical exemption undermines the argument that no beards must apply universally (compelling interests often brook no exceptions). Similarly, Arkansas can’t use the least restrictive means when forty jurisdictions use even less restrictive means.

Arkansas might justify their policy through unique local experience. But Arkansas concedes “no example” of anyone hiding contraband in a beard. (With the “caveat” that “Just because we haven’t found the example doesn’t mean they aren’t there.” A strong argument!) Disguise arguments could be addressed by taking multiple pictures (as other systems do). And measuring the few inmates requesting religious exemptions wouldn’t be much harder than measuring medical-exception beards.

Arkansas could “demonstrate” strict scrutiny is satisfied by providing evidence of evaluation and reasoned rejection of other states’ policies. But Arkansas previously admitted it considered no other systems (eliciting an acerbic suggestion to try “the common practice of picking up the phone to call other prisons”).

Arkansas could argue that its system, which houses many prisoners in barracks rather than separate cells, justifies no beards. But such systems exist elsewhere, and no beards applies in Arkansas’s non-barracks prisons.

In short, Arkansas has demonstrated neither a compelling interest, nor least restrictive means, and it has done so presenting no evidence. Ouch.

In lower courts

An obvious question: why must Holt fight this in court if he’s so obviously right? Basically, a few lower courts are giving far too much deference (a word found in legislative history but not in the statute) to the mere assertions of prison officials, without requiring them to “demonstrate” much of anything. The magistrate judge described officials’ claim that Holt might hide something in his half-inch beard as “almost preposterous” — just before deferring to those claims. Courts below the Supreme Court similarly gave too much deference to prison officials’ bare assertions unsupported by any data.

At the Supreme Court

One indicator of lopsidedness here is the brief count, and authors, on each side. Holt has seventeen other briefs on his side, representing a wide variety of interests: Jewish, Christian, Islamic, Hindu, Sikh, American Indian and Hawaiian, former prison wardens, former corrections officials, Americans United for Separation of Church and State (whose brief, incidentally, is interesting but quite surpassed by later events), sociologists, and the United States government (and others). The authors include a who’s-who of religious freedom organizations. Arkansas has one brief on its side: from eighteen states, who don’t defend Arkansas’s policy as much as try to preserve deference as an element to consider (presumably so those states’ prison systems can be run with a freer hand).

The Court accepted this case in unusual circumstances. Holt filed a hand-written petition requesting Supreme Court review, through a special system not requiring him to pay filing fees from non-existent income. Such petitions are almost never accepted. (Holt basically won the lottery. That said, when I read his brief after the case was accepted, the form was unusual, but the discussion and presentation seemed orthodox.) It’s pretty clear the Court accepted this case to lopsidedly, probably unanimously, overturn the Eighth Circuit. The Supreme Court doesn’t take cases to correct errors, but that’s what they’ll do here.

The #12 admission card
Number 12 today: slipping back slightly, but as far as I’m concerned this means I had perfect timing

Oral argument

The questions at argument ran largely in three veins: pondering deference, drawing a line, and almost mocking Arkansas’s arguments. Holt’s counsel faced difficult questions, but not skeptical ones.

Deference

First, what does deference (if it even matters — the term appears only in legislative history, not in the law as enacted) look like in the context of strict scrutiny? These are somewhat contradictions in terms. Yet the Court somehow must make sense of this.

Line-drawing

Second, while beards are easy to decide, other issues (Sikh turbans that actually can conceal things, for example) will require different considerations. How can the Court provide general guidelines to address these situations? The Court doesn’t want to be in the business of reviewing every prison official’s (better-“demonstrated”) decisions. (Scalia bluntly put it this way: “Bear in mind I would not have enacted this statute, but there it is.” Recall he wrote Employment Division v. Smith, shutting off constitutional religious exemptions from generally-applicable laws. Something to remember any time Scalia’s stereotyped as reflexively pro-religion.) But Congress opened up that box, so courts have to live with it.

Almost mocking questions

Arkansas’s position is not easily defended. Not surprisingly, then, questions and comments almost made fun of Arkansas’s position. To the assertion that “Just because we haven’t found the example doesn’t mean they aren’t there”, Justice Breyer replied, “There are a lot of things we’ve never found that might be there and I’ll refrain from mentioning them. You see them on television, a lot of weird programs from time to time.” (Presumably referring to things like Sasquatch, the Loch Ness Monster, Ghost Hunters, and similar.) And later, Justice Alito proposed an alternative means of detecting beard contraband: “Why can’t the prison just…say comb your beard, and if there’s anything in there, if there’s a SIM card in there, or a revolver, or anything else you think ­­can be hidden in a half-inch beard…” (emphases added). Both lines made the audience erupt in laughter.

Post-Holt crowds on the Supreme Court plaza
The post-argument crowds, framed by visitor lines

Why Arkansas fights

It’s unclear to me why Arkansas is still arguing. They won in lower courts. But once the Court granted the in forma pauperis petition, Arkansas should have folded. The law is too clearly against them, and this Court won’t give them a pass. Arkansans should be outraged that their state is wasting taxpayer money to defend this system. (And on the policy’s merits, outraged at the petty bureaucratic nonsense at best, and bigotry at worst, it represents.)

One plausible, potentially upsetting, explanation is provided by former prison wardens: “Political Considerations May Underlie Prison Officials’ Resistance to Accommodations of Religious Practices.” These wardens had been sued (and lost) in various cases cited in briefing, and they candidly admitted that their positions were partly attributable to “political realities”.

Conclusion

Arkansas will lose. The only remaining question is how. (And as before, if I’ve made any mistakes in this discussion, please point them out.)

Doug BelshawPlaytesting for MozFest

Today I was down at Bishop Grosseteste University, giving a guest lecture and facilitating a workshop. The module was on digital literacies as part of a course for Early Years students. These are students who may go on to teacher training. Some of the work relating to my thesis and the work I’ve done with Mozilla is on their reading list.

From my point of view it was a useful opportunity to playtest some sessions I’ve got planned for the Mozilla Festival at the end of the month. I’ve travelled a lot less in the year since I moved to the Webmaker team, and so I welcomed this opportunity to refine some of my thinking. It’s also good to get input from as many places as possible about Web Literacy Map v2.0.

I made the lecture as participatory as the logic of the lecture theatre allowed. You can find my slides here. We had a backchannel in the form of a Google Doc which surfaced some interesting questions. On a meta level my aim was to highlight the importance of attention. I’m coming round to Howard Rheingold’s view that it’s key to everything we do, both online and offline. Interestingly, one of the questions was whether a single Web Literacy Map can be relevant to everyone.

For the workshop, I split the students into two groups and swapped over halfway. After an introduction to both workshops, half stayed with the course leaders, and the other half came with me. Those who came with me read a chapter of my book The Essential Elements of Digital Literacies followed by a discussion. Those who stayed behind completed a Web Literacy Map activity on their iPads.

Three things stand out in particular from the discussion I had with students:

  1. Confidence. One student had an insight into why she’s always shied away from using technology: she explained that she wasn’t exposed to it at a young age in the same way others had been. As a result, she’s always felt under-confident around anything digital and never wants to do more than she has to with it.

  2. Filtering. As I point out in my book, I’m against filtered internet connections. But being against filtering presupposes the ability to have a rational conversation with the user instead of just filtering, and in Early Years (ages 3-5) that isn’t necessarily possible.

  3. Unintended consequences. We know that people devise workarounds to problems they have. Students talked about the ways in which school filters had prevented them accessing Facebook. As a result, they resorted to ‘dodgy’ websites that had evaded filters. These often featured inappropriate advertising and malware, but promised access to Facebook. User accounts were often hacked. By filtering, the school had driven students towards those things they were trying to prevent them doing or seeing.

I’m still waiting to see all the results of the Web Literacy Map activity I set, but the couple of examples I saw were promising. Students added, renamed and re-arranged the competencies of Web Literacy Map v1.1. This led to some curious groupings: I wouldn’t necessarily have thought of putting 'Credibility' together with 'Security' and 'Privacy', for example. It was also interesting that it wasn’t immediately obvious to them what 'Infrastructure' means.

For MozFest, I’m going to:

  • Refine the Web Literacy Map activity based on the results of the survey we’re launching this week.
  • Think about where skills and competencies related to 'e-safety' should sit.
  • Revisit Beetham & Sharpe’s (2009) taxonomy of access, skills, practices, and attributes.

All in all, it was definitely a worthwhile trip down to Lincoln for me. I hope it was for the students and course leaders, too! Many thanks to Ben Samuels for the invitation, and to Chris Bonfield, Mary-Louise Maynes and team for their warm welcome!

Jen Fong-AdwentEnd of a year, start of a new year

A year ago I publicly announced Meatspace as an experiment in realtime chat with animated GIFs.

Gervase MarkhamRebel Alliance Ideas

Chris Beard has been encouraging us to think like the rebels: what can we do that other people won’t do? How can we make an impact?

Here are some of my thoughts:

  • The internet, averaged globally, is getting less reliable, slower and more laggy. Finish Janus and persuade our mobile partners to deploy it and default to it. Your Firefox OS phone now accesses the net faster than an Android phone.
  • Make Firefox OS connect by default to openwireless.org access points, and encourage Firefox OS users to run them. There’s a virtuous circle here. More net in more places; a global movement of being generous with net access.
  • Finish Daala faster, by finding people other than the core team to do everything except design the codec and write the algorithms (e.g., testing, speed optimizations, fuzzing, writing Windows Media plugins). We need to get the word out that this project is critical.
  • Show the core free software community, who have great influence over tech choices and who should be our natural allies, that we care about them. Be the first organization ever to make a free-from-bottom-to-top mobile phone (running Firefox OS) and give some help to the Replicant team to port to it as well, just to prove we mean it and it’s real.
  • Make it possible to search for specifically open source software in the Marketplace, and show we believe it “promotes the development of the Internet as a public resource” by promoting apps which are open source.
  • Ship Collusion (which has been in the works for years), even if there’s not a perfect mapping between what it shows you and what’s actually bad. Make sites feel they have to justify all their 3rd party links.

What are your ideas?

Lucas RochaProbing with Gradle

Up until now, Probe relied on dynamic view proxies generated at runtime to intercept View calls. Although very convenient, this approach greatly affects the time to inflate your layouts—which limits the number of use cases for the library, especially in more complex apps.

This is all changing now with Probe’s brand new Gradle plugin which seamlessly generates build-time proxies for your app. This means virtually no overhead at runtime!

Using Probe’s Gradle plugin is very simple. First, add the Gradle plugin as a dependency in your build script.

buildscript {
    ...
    dependencies {
        ...
        classpath 'org.lucasr.probe:gradle-plugin:0.1.3'
    }
}

Then apply the plugin to your app’s build.gradle.

apply plugin: 'org.lucasr.probe'

Probe’s proxy generation is disabled by default and needs to be explicitly enabled on specific build variants (build type + product flavour). For example, this is how you enable Probe proxies in debug builds.

probe {
    buildVariants {
        debug {
            enabled = true
        }
    }
}

And that’s all! You should now be able to deploy interceptors on any part of your UI. Here’s how you could deploy an OvermeasureInterceptor in an activity.

public final class MainActivity extends Activity {
   @Override
   protected void onCreate(Bundle savedInstanceState) {
       // Deploy the interceptor before inflating the layout so the views
       // created by setContentView() get intercepted.
       Probe.deploy(this, new OvermeasureInterceptor());
       super.onCreate(savedInstanceState);
       setContentView(R.layout.main_activity);
   }
}

While working on this feature, I changed DexMaker to be an optional dependency, i.e. you now have to explicitly add DexMaker as a build dependency in your app in order to use it.

This is my first Gradle plugin. There’s definitely a lot of room for improvement here. These features are available in the 0.1.3 release in Maven Central.

As usual, feedback, bug reports, and fixes are very welcome. Enjoy!

Will Kahn-GreeneInput: 2014q3 post-mortem

This is the 2014q3 Input post-mortem. It was a better quarter than 2014q2--that one kind of sucked: it started out well, then got kind of busy, and by the end I was pretty overwhelmed.

Things to know:

  • Input is Mozilla's product feedback site.
  • Fjord is the code that runs Input.
  • I unilaterally decided to extend 2014q3 to October 6th.
  • I am Will Kahn-Greene and I'm the primary developer on Input.

Bug and git stats

Bugzilla
========

Bugs created:        58
Bugs fixed:          54

git
===

Total commits: 168

      Will Kahn-Greene : 144  (+195460, -188019, files 658)
         Ian Kronquist : 12  (+402, -106, files 21)
         L. Guruprasad : 6  (+16, -36, files 6)
            Adam Okoye : 4  (+34, -34, files 34)
       Ruben Vereecken : 2  (+69, -29, files 12)

Total lines added: 195981
Total lines deleted: 188224
Total files changed: 731

We added a bunch of code this quarter:

  • October 7th, 2014: 23466 total, 11614 Python

Compare to previous quarters:

  • 2014q1: April 1st, 2014: 15195 total, 6953 Python
  • 2014q2: July 1st, 2014: 20456 total, 9247 Python

Nothing wildly interesting there other than noting that the codebase for Input continues to grow.

Contributor stats

Ian Kronquist was the Input intern for Summer 2014. He contributed several fixes to Input. Yay!

We spent a bunch of time making our docs and Vagrant provisioning script less buggy so as to reduce the problems new contributors have when working on Input. I talked with several people about things they're interested in working on. Plus several people did some really great work on Input.

Generally, I think Input is at a point where it's not too hard to get up and running, we've got several lists of bugs that are good ones to start with and the documentation is good-ish. I think the thing that's hampering us right now is that I'm not spending enough time and energy answering questions, managing the work and keeping things going.

Anyhow, welcome L. Guruprasad, Adam Okoye and Ruben Vereecken! Additionally, many special thanks to L. Guruprasad who fixed a lot of issues with the Vagrant provisioning scripts. That work is long and tedious, but it helps everyone.

Accomplishments

Dashboards for everyone: We wrote an API and some compelling examples of dashboards you can build using the API. It's being used in a few places now. We'll grow it going forward as needs arise. I'm pretty psyched about this since it makes it possible for people with needs to help themselves and not have to wait for me to get around to their work. Dashboards for everyone project plan.
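
As a rough illustration of what the API makes possible, here's a tiny script that pulls recent feedback and tallies happy vs. sad responses. The endpoint, parameters and field names are assumptions for the sake of the example; the Fjord documentation is the authority on the real API.

# Sketch of a minimal "dashboard" query against Input's feedback API.
# The URL, parameters and response fields are assumptions for illustration.
import requests

resp = requests.get(
    "https://input.mozilla.org/api/v1/feedback/",
    params={"products": "Firefox", "max": 100},
)
resp.raise_for_status()
items = resp.json().get("results", [])

happy = sum(1 for item in items if item.get("happy"))
print("happy: %d / %d responses" % (happy, len(items)))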

Vagrant: We took the work I did last quarter and improved upon it, rewrote the docs and have a decent Vagrant setup now. Reduce contributor pain project plan.

Abuse detection: Ian spent his internship working on an abuse classifier so that we can more proactively detect and prevent abusive feedback from littering Input. We gathered some interesting data and the next step is probably to change the approach we used and apply some more complex ML things to the problem. The key here is that we want to detect abuse with confidence and not accidentally catch swaths of non-abuse. Input feedback has some peculiar properties that make this difficult. Reduce the abuse project plan.
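
For readers wondering what "more complex ML things" might look like, here's a generic, heavily simplified sketch of a high-precision text classifier. This is not Input's actual classifier or data; it only shows the general shape of the approach, including thresholding on predicted probability so that only high-confidence items get flagged.

# Generic text-classification sketch -- NOT Input's actual abuse detector.
# The toy data and the 0.95 threshold are purely illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = [
    "great browser, thanks",
    "crashes every time I open a new tab",
    "BUY CHEAP PILLS http://spam.example",
    "CLICK HERE for FREE PILLS http://spam.example",
]
labels = [0, 0, 1, 1]  # 1 = abusive/spam, 0 = legitimate feedback

vectorizer = TfidfVectorizer(ngram_range=(1, 2))
X = vectorizer.fit_transform(texts)
clf = LogisticRegression().fit(X, labels)

# Only flag new feedback when the model is very confident, so legitimate
# feedback isn't caught by accident.
new_items = ["FREE PILLS, CLICK HERE http://spam.example"]
probs = clf.predict_proba(vectorizer.transform(new_items))[:, 1]
print([(text, prob > 0.95) for text, prob in zip(new_items, probs)])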

Loop support: Loop is now using Input for user sentiment feedback.

Heartbeat support: User Advocacy is working on a project to give us a better baseline for user sentiment. This project was titled Heartbeat, but I'm not sure whether that'll change or not. Regardless, we added support for the initial prototype. Heartbeat project plan.

Data retention policy: We've been talking about a data retention policy for some time. We decided on one, finalized it and codified it in code.

Shed the last vestiges of Playdoh and funfactory: We shed the last bits of Playdoh and funfactory. Input uses the same protections and security decisions those two projects enforced, but without being tied to some of the infrastructure decisions. This made it easier to switch to peep-based requirements management.

Switched to FactoryBoy and overhauled tests: Tests run pretty fast in Fjord now. We switched to FactoryBoy, so writing model-based tests is a lot easier than the stuff we had before.
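
For anyone who hasn't used factory_boy, here's a minimal, hypothetical sketch of what a model factory looks like in a Django project. The Response model and its fields are invented for the example and are not Fjord's actual schema.

# Hypothetical factory_boy factory; the model and field names are
# illustrative only, not Fjord's real models.
import factory

from myapp.models import Response  # assumed Django model for the example

class ResponseFactory(factory.django.DjangoModelFactory):
    class Meta:
        model = Response

    happy = True
    product = "Firefox"
    description = factory.Sequence(lambda n: "feedback text %d" % n)

# A test can then create instances with one-line overrides:
#   unhappy = ResponseFactory(happy=False)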

Summary

Better than 2014q2 and we fixed some more technical debt further making it easier to develop for and maintain Input. Still, there's lots of work to do.

Roberto A. VitilloUsing ML to correlate add-ons to performance bottlenecks

I started looking into exploiting our Telemetry data to determine which add-ons are causing performance issues with Firefox. So far there are three metrics that I plan to correlate with add-ons:

  • startup time,
  • shutdown time,
  • background hangs.

In this post I am going over my findings for the first scenario, i.e. the relation between startup time and installed add-ons.

In an ideal world, all add-ons would have a uniform way to initialize themselves which could be instrumented. Unfortunately that’s not possible: many add-ons use asynchronous facilities and/or rely on observer notifications for initialization. In other words, there is no good way to easily measure the initialization time of all add-ons without touching their codebases individually.

This is the sort of problem that screams for a multi-way ANOVA but, after some thought and data exploration, it turns out that the interaction terms between add-ons can be dropped, i.e. the relation between add-ons and startup time can be modeled as a purely additive one. Since a multi-way ANOVA is equivalent to a linear regression between a set of predictors and their interactions, the problem can be modeled with a generalized linear model where, for each Telemetry submission, the add-on map is represented as a vector of boolean dummy variables that take the value 1 or 0 for “add-on installed” and “add-on not installed”, respectively.
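
As a rough sketch of that setup (with invented data; the real analysis runs as a map-reduce job over Telemetry submissions, and a plain least-squares fit stands in for the full GLM here):

# Dummy-variable regression sketch; the data and add-on names are invented.
# Ordinary least squares stands in for the generalized linear model.
import pandas as pd
import statsmodels.api as sm

# One row per Telemetry submission: startup time plus a 0/1 column per add-on.
df = pd.DataFrame([
    {"startup_ms": 4200, "addon_A": 1, "addon_B": 0},
    {"startup_ms": 5100, "addon_A": 1, "addon_B": 0},
    {"startup_ms": 9800, "addon_A": 1, "addon_B": 1},
    {"startup_ms": 3100, "addon_A": 0, "addon_B": 0},
    {"startup_ms": 2900, "addon_A": 0, "addon_B": 0},
    {"startup_ms": 8900, "addon_A": 0, "addon_B": 1},
])

X = sm.add_constant(df[["addon_A", "addon_B"]])  # additive model, no interaction terms
model = sm.OLS(df["startup_ms"], X).fit()

print(model.params)    # estimated startup overhead (ms) attributable to each add-on
print(model.pvalues)   # significance of each coefficient
print(model.rsquared)  # fraction of the variance explained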

Startup time depends on many other factors that are not taken into account in the model, like current system load and hard drive parameters. This means that it would be very surprising, to say the least, if one could predict the startup time without those variables. That doesn’t mean we can’t explain part of the variance! In fact, after training the model on the data collected during the past month, it yielded an R^2 score of about 0.15, which means we can explain about 15% of the variance. Again, since we are not trying to predict startup time accurately, this is not necessarily a bad result. The F ratio, which relates the variance between add-ons to the variance within add-ons, is significant, which indicates that whether or not certain add-ons are installed does influence the startup time.

Many of the p-values of the predictors’ coefficients are highly significant (<< 0.001); it’s just a matter of sorting the significant results by their effect size to determine the add-ons that cause a notable slowdown of Firefox during startup:

The horizontal axis measures the startup time overhead with respect to the average startup time of Firefox. For instance, Yandex Elements seems to be slowing down startup by about 8 seconds on average. The error bars represent the standard errors of the sampling distributions of the coefficients.

Note that the model is based on a very small fraction of our user-base, i.e. the subset that has Telemetry enabled, so there clearly is some implicit bias. The picture might be different for a truly random sample of our users, nevertheless it is an indication of where to start digging deeper.

The next step is to “dashboardify” the whole thing and contact the developers of the various add-ons. We are also considering notifying users, in a yet to be determined way, when the browser detects add-ons that are known to cause performance issues.

References: map-reduce job and model


Adam LoftingMozilla Contributor Analysis Project (Joint MoCo & MoFo)

I’m back at the screen after a week of paternity leave, and I’ll be working part-time for the next two weeks while we settle into the new family routine at home.

In the meantime, I wanted to mention a Mozilla contributor analysis project in case people would like to get involved.

We have a wiki page now, which means it’s a real thing. And here are some words my sleep-deprived brain prepared for you earlier today:

The goal and scope of the work:

Explore existing contribution datasets to look for possible insights and metrics that would be useful to monitor on an ongoing basis, before the co-incident workweek in Portland at the beginning of December.

We will:

  • Stress-test our current capacity to use existing contribution data
  • Look for actionable insights to support Mozilla-wide community building efforts
  • Run ad-hoc analysis before building any ‘tools’
  • If useful, prototype tools that can be re-used for ongoing insights into community health
  • Build processes so that contributors can get involved in this metrics work
  • Document gaps in our existing data / knowledge
  • Document ideas for future analysis and exploration

Find out more about the project here.

I’m very excited that three members of the community have already offered to support the project and we’ve barely even started.

In the end, these numbers we’re looking at are about the community, and for the benefit of the community, so the more community involvement there is in this process, the better.

If you’re interested in data analysis, or know someone who is, send them the link.

This project is one of my priorities over the following 4-8 weeks. On that note, this looks quite appealing right now.

So I’m going to make more tea and eat more biscuits.

Patrick ClokeHow I Got Involved in Mozilla

This is discussed very briefly on my about page, but I figured it could use a bit of a longer discussion. I generally consider myself to have joined the Mozilla community in ~2006. I know that I was using Mozilla Firefox, Mozilla Thunderbird, and Mozilla Sunbird way before that (probably since ~2004, which is when I built my own computer). But I was just an enthusiast then, running beta builds, then alpha and eventually nightly builds. (This was way back when things were more dangerous to run: Minefield and Shredder.)

Anyway, back to 2006…I initially got involved in a more technical fashion by writing extensions (or maybe it was GreaseMonkey scripts). I don’t really have any way to prove this though — I don’t seem to have any of that code. (This was before widespread distributed version control.) Anyway, let’s just assume this 2006 date is correct.

My first patch was in 2008 to move a function from the Provider for Google Calendar to the calendar core so that I could use it in Thundershows: a calendar provider for TV shows [1] [2]. (As far as I know, I’m one of a handful of people to actually implement a calendar provider.) I found the calendar project much easier to get involved in than other aspects of Mozilla since it was so much smaller. (I also toyed with adding an entire new protocol to Thunderbird, which R Kent James has now done successfully! [3] [4])

I then came across Instantbird in ~2008 (sometime in the Instantbird 0.1 era). I thought this was great — Mozilla was finally making an instant messaging client! Well, I was kind of right…Instantbird is not an official Mozilla project, but it was exactly what I wanted! The guys (mostly Florian Quèze) in the #instantbird IRC channel were awesome: kind, patient, helpful, and welcoming. They were the ones that really introduced me into the Mozilla way of doing things. I fixed my first bug for Instantbird in 2010 and haven’t stopped since! I’ve since added IRC support via JavaScript (instead of libpurple) and am now one of the lead developers. I’ve mentored Google Summer of Code students twice (2013 and 2014), contribute to Thunderbird and am a peer of the chat code shared between Instantbird and Thunderbird. (I do also occasionally contribute to other projects. [5])

[1]This was my first project to really have other users, I had people filing bugs, asking for new features, etc. It was great! I even had someone (years later) tell me in #instantbird that they had loved Thundershows!
[2]My second bug dealt with the same set of code and had tests committed (by me) over 5 years after the initial patch. Oops!
[3]My work was based off of some experiments Joshua Cranmer did to add support for web forums to Thunderbird. After all this time, I still want that extension.
[4]Oh, also rkent did EXACTLY what I wanted years later: which is add Twitter to Thunderbird.
[5]But not Firefox. After seven years (and over 1800 commits), I’ve never fixed a bug in Firefox; although I have had code committed to mozilla-central.