Luis Villa: Slide embedding from Commons

A friend of a friend asked this morning:

I suggested Wikimedia Commons, but it turns out she wanted something like Slideshare’s embedding. So here’s a test of how that works (timely, since soon Wikimanians will be uploading dozens of slide decks!)

This is what happens when you use the default Commons “Use this file on the web -> HTML/BBCode” option on a slide deck pdf:

Wikimedia Legal overview 2014-03-19

Not the worst outcome – clicking gets you to a clickable deck. No controls inline in the embed, though. And importantly nothing to show that it is clickable :/

Compare with the same deck, uploaded to Slideshare:

Some work to be done if we want to encourage people to upload to Commons and share later.

Mozilla Privacy Blog: Prefer:Safe — Making Online Safety Simpler in Firefox

Mozilla believes users have the right to shape the Internet and their own experiences on it. However, there are instances when people seek to shape not only their own experiences, but also those of young users and family members whose …

Byron Jones: happy bmo push day!

the following changes have been pushed to bugzilla.mozilla.org:

  • [1039198] Field name is still “Bug #” for the Bug Created field
  • [922482] Change all bugs with general@js.bugs assignee to nobody@mozilla.org
  • [713307] Please add FlagTypeComments for tracking/approval flags
  • [1037571] Change Several Bugs at Once Does Not Allow Modification of the QA Whiteboard
  • [1032883] update FlagDefaultRequestee extension to use object hooks
  • [1040580] Bugzilla detects Firefox OS device as Hardware:Other, OS:Other
  • [1026416] “blocks” field is present as empty string when empty, rather than null or []
  • [1040841] Provide good error message when people can’t use form.legal
  • [1041538] A few more “Bugmail filtering” fields need to be excluded from the prefs UI
  • [1041559] “Please wait while your bugs are retrieved” shown above menu header for search error pages
  • [936468] Move OS: Windows 8 Metro to Windows 8.1

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla

Jennie Rose Halperin: Numbers are not enough: Why I will only attend conferences with explicitly enforceable Codes of Conduct and a commitment to accessibility

I recently had a bad experience at a programming workshop where I was the only woman in attendance and eventually had to leave early out of concern for my safety.

Having to repeatedly explain the situation to a group of men who promised me that “they were working on fixing this community” was not only degrading, but also unnecessary. I was shuttled to three separate people, eventually receiving some of my money back approximately a month later (which was all I asked for) along with promises and placating statements about “improvement.”

What happened could have been prevented: each participant signed a “Code of Conduct” that was buried in the payment for the workshop, but there was no method of enforcement and nowhere to turn when issues arose.

At one point while I was attempting to resolve the issue, this community’s Project Manager told me, “Three other women signed up, but they dropped out at the last minute because they had to work. It was very strange and unexpected that you were the only woman.” I felt immediately silenced. The issue is not numbers, but instead inviting people to safe spaces and building supportive structures where people feel welcomed and not marginalized. Increasing the variety of people involved in an event is certainly a step, but it is only part of the picture. I realize now that the board members of this organization were largely embarrassed, but they could have handled my feelings in a way where I didn’t feel like their “future improvements” were silencing my very real current concerns.

Similarly, I’ve been thinking a lot about a conversation I had with some members of the German Python community a few months ago. Someone told me that Codes of Conduct are an American hegemonic device and that introducing the idea of abuse opens the community up for it, particularly in places that do not define “diversity” in the same way as Americans. This was my first exposure to this argument, and it definitely gave me a lot of food for thought, though I adamantly disagree.

In my opinion, the open-source tech community is a multicultural community and organizers and contributors have the responsibility to set their rules for participation. Mainstream Western society, which unfortunately dictates many of the social rules on the Internet, does a bad job teaching people how to interact with one another in a positive and genuine way, and going beyond the “be excellent to one another, we’re all friends here!” argument helps us participate in a way in which people feel safe both on and off the Web.

At a session at the Open Knowledge Festival this week, we were discussing accessibility and realized that the Code of Conduct (called a “User Guide”) was not easily located and many participants were probably not aware of its existence. The User Guide is quite good: it points to other codes of conduct, provides clear enforcement, and emphasizes collaboration and diversity.

At the festival, accessibility was not addressed in any kind of cohesive manner: the one gender-neutral bathroom in the huge space was difficult to find, sessions were loud and noisy and often up stairs, making it impossible for anyone with any kind of hearing or mobility issue to participate, and finally, the conference organizers did not inform participants that food would not be free, causing the conference’s ticket price to increase dramatically in an expensive neighborhood in Berlin.

In many ways, I’m conflating two separate issues here (accessibility and behavior of participants at an event). I would counter that creating a safe space is not only about behavior on the part of the participants, but also on the part of the conference organizers. Thinking about how participants interact at your event not only has to do with how people interact with one another, but also how people interact with the space. A commitment to accessibility and “diversity” hinges upon more than words and takes concerted and long-term action. It may mean choosing a smaller venue or limiting the size of the conference, but it’s not impossible, and it’s incredibly important. It also doesn’t have to be expensive! A small hack that I appreciated at Ada Camp and Open Source Bridge was a quiet chill out room. Being able to escape from the hectic buzz was super appreciated.

Ashe Dryden writes compellingly about the need for better Codes of Conduct and the impetus to not only have events be a reflection of what a community looks like, but also where they want to see them go. As she writes,

I worry about the conferences that are adopting codes of conduct without understanding that their responsibility doesn’t end after copy/pasting it onto their site. Organizers and volunteers need to be trained about how to respond, need to educate themselves about the issues facing marginalized people attending their events, and need to more thoughtfully consider their actions when responding to reports.

Dryden’s Code of Conduct 101 and FAQ should be required reading for all event organizers and Community Managers. Codes of Conduct remove the grey areas surrounding appropriate and inappropriate behavior and allow groups to set the boundaries for what they want to see happening in their communities. In my opinion, there should not only be a Code of Conduct, but also an accessibility statement that collaboratively outlines what the organizers are doing to make the space accessible and inclusive and addresses and invites concerns and edits. In her talk at the OKFestival, Penny pointed out that accessibility and inclusion actually make things better for everyone involved in an event. As she said, “No one wants to sit in a noisy room! For you, it may be annoying, but for me it’s impossible.”

Diversity is not only about getting more women in the room, it is about thinking intersectionally and educating oneself so that all people feel welcome regardless of class, race, physicality, or level of education. I’ve had the remarkable opportunity to go to conferences all over the world this year, and the spaces that have made an obvious effort to think beyond “We have 50% women speakers!” are almost immediately obvious. I felt safe and welcomed at Open Source Bridge and Ada Camp. From food I could actually eat to lanyards that indicated comfort with photography to accessibility lanes, the conference organizers were thoughtful, available, and also kind enough that I could approach them if I needed anything or wanted to talk.

From now on, unless I’m presented a Code of Conduct that is explicit in its enforcement, defines harassment in a comprehensive manner, makes accessibility a priority, and provides trained facilitators to respond to issues, you can count me out of your event.

We can do better in protecting our friends and communities, but change can only begin internally. I am a Community Manager because we get together to educate ourselves and each other as a collaborative community of people from around the world. We should feel safe in the communities of practice that we choose, whether that community is the international Python community, or a local soccer league, or a university. We have the power to change our surroundings and, by extension, our future, but it will take a solid commitment from each of us.

Events will never be perfect, but I believe that at least in this respect, we can come damn close.

Mozilla WebDev Community: Beer and Tell July 2014

Once a month, web developers across the Mozilla community get together to share what side projects or cool stuff we’ve been working on in our spare time. This monthly tribute is known as “Beer and Tell”.

There’s a wiki page listing the presenters and links to what they’re showing off and useful side information. There’s also a recording on Air Mozilla of the meeting.

Pomax: RGBAnalyse and nrAPI

This month Pomax had two projects to show. The first, RGBAnalyse, is a JavaScript library that generates histographical data about the colors in an image. Originally created so he could sort ink colors by hue, the library not only generates the data, but also generates images (available as data-uris) of histograms using that data.
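
For readers curious what histogram data about the colors in an image can look like in practice, here is a minimal sketch (my own illustration, not RGBAnalyse’s actual API) that builds a hue histogram from canvas pixel data:

// Minimal sketch, not RGBAnalyse's API: count pixels per hue (0–359 degrees)
// from the RGBA bytes returned by ctx.getImageData(...).
function hueHistogram(imageData) {
  var bins = new Array(360).fill(0);
  var d = imageData.data;
  for (var i = 0; i < d.length; i += 4) {
    var r = d[i] / 255, g = d[i + 1] / 255, b = d[i + 2] / 255;
    var max = Math.max(r, g, b), min = Math.min(r, g, b), delta = max - min;
    if (delta === 0) continue; // greys have no hue
    var h;
    if (max === r)      h = ((g - b) / delta) % 6;
    else if (max === g) h = (b - r) / delta + 2;
    else                h = (r - g) / delta + 4;
    bins[Math.floor((h * 60 + 360) % 360)] += 1;
  }
  return bins; // bins[h] = number of pixels whose hue rounds down to h degrees
}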

The second project Pomax shared was nrAPI, a Node.js-based REST API for a website for learning Japanese: nihongoresources.com. The API lets you search for basic dictionary info, data on specific Kanji, sound effects, and Japanese names. Search input is accepted in English, Romaji, Hiragana, Katakana, or Kanji.

HTML result from nrAPI search for “tiger”.

Bill Walker: Photo Mosaic via CSS Multi-Column

Next, bwalker shared his personal birding photo site, and talked about a new photo layout he’s been playing with that uses a multi-column layout via CSS. The result is an attractive grid of photos of various sizes without awkward gaps, that can also be made responsive without the use of JavaScript. bwalker also shared the blog post that he learned the technique from.

Dean Johnson: MusicDownloader

deanj shared MusicDownloader, a Python-based program for downloading music from SoundCloud, Youtube, Rdio, Pandora, and HypeScript. The secret sauce is in the submodules, which implement the service-specific download code.

Chris Lonnen: Alonzo

Lastly, lonnen shared alonzo, a Scheme interpreter written in Haskell as part of a mad attempt to learn both languages at the same time. It uses Parsec to implement parsing, and so far implements binary numeric operations, basic conditionals, and even rudimentary error checking. The development roughly follows along with “Write Yourself a Scheme in 48 Hours” by Jonathan Tang.

Sample Runs of alonzo


Thanks to all of our presenters for sharing! If you’re interested in attending or presenting at the next Beer and Tell, subscribe to the dev-webdev mailing list! The wiki page and connection info are shared a few days before each meeting.

See you next month!

David Boswell: Community Building Stories

One of Mozilla’s goals for 2014 is to grow the number of active contributors by 10x. For the first half of the year, the Community Building team has been supporting other teams as they connect more new contributors to their projects.

Everyone on the team recently blogged about their experience supporting projects. The stories below show different stages in the lifecycle of communities and show how we’re helping projects progress through the phases of starting, learning, scaling and then sustaining communities.

We’ve learned a lot from these experiences that will help us complete the goal in the second half of the year. For example, the Geolocation pilot event in Bangalore will be a template for more events that will connect more people to the Location Services project.

Photo courtesy of Galaxy Kadiyala

These are just a few of the stories of community building though. There are many other blog posts to check out and even a video Dia made about how contributors made the Web We Want video available in 29 different languages.


I’d love to hear what you’ve been doing to connect with more contributors and to hear about what you’ve learned. Feel free to leave links to your stories in the comments below.


Selena Deckelmann: My recent op-ed published about Portland and startups

I was featured in the Portland Business Journal last Friday! I wrote an essay on startups and the experiences of women in the Portland tech community that have caused me to not refer women into startups for jobs unless the startups are run by fellow PyLadies.

Some excerpts:

It takes more than one CEO’s alleged behavior to cause 56 percent of women to leave technology related fields by mid-career, according to a Harvard Business Review study. That’s twice the rate that men leave the tech industry.

After all, 63 percent of women in STEM industries (science, technology engineering and math) have experienced sexual harassment, according to a 2008 study.

I can’t recommend that women work for startups in Portland.

Startup funders should keep holding executives accountable. Company cultures grow from the seeds planted by their leaders.

These companies need [qualified HR, skilled with workforce diversity issues], and our tech leaders should demand it.

Read the whole thing at the Portland Business Journal’s site!

Doug Belshaw: FirefoxOS v2.0 is possibly the easiest-to-use smartphone operating system I’ve experienced

My father’s always been a fairly early adopter of technology. He happily uses a device that wirelessly connects his golf club to his iPhone, for example. My mother? Not so much. Until this weekend she was still sporting an old Nokia feature phone. She kind of wanted a smartphone, but didn’t want the complexity, nor the expense.

FirefoxOS v2.0

Meanwhile, I’ve been using a Geeksphone Peak smartphone recently. It’s not the latest FirefoxOS device (that would be the Flame), but it’s a significant step up from last year’s Geeksphone Keon. I’ve been using the pre-release channel of v2.0 of FirefoxOS, which is a departure from previous versions. Whereas they were similar in look and feel to Android, FirefoxOS v2.0 is different.

Every weekend, we go over to my parents’ house for Sunday lunch. Yesterday, we got talking about technology and I showed my parents my FirefoxOS device. One thing led to another, and (because all of my stuff was backed up) I wiped the phone, transferred my mother’s contacts, and swapped SIM cards. My wife gave her some tips, and then we drove off into the sunset with her Nokia phone.

I don’t think I would have felt comfortable leaving her without her old phone to revert to if I was giving her an Android device or iPhone. There’s something so simple yet so powerful about FirefoxOS v2.0; I’m happy to use it myself and hand it over to other, more technophobic people. Yes, I understand that I’m a Mozilla employee fully invested in the mission, but those who know me understand I also don’t say positive things about specific technologies without good reason.

According to the roadmap, today’s the day that FirefoxOS v2.0 becomes feature complete. There are some really nice features in there too, like WebRTC (imagine Skype/FaceTime, but just using web technologies), edge gestures (something I really missed from my old Nokia N9) and Sync. If you haven’t had a chance to try one out, I’d take a look at a FirefoxOS device over the coming months. The operating system is currently being tested on tablets and TVs, which means, of course, smart devices without the usual vendor lock-in.


Note: the screenshots are from this post as I forgot to take some before wiping the phone and lending it to my mother!

Mike Ratcliffe: View DOM Events in Firefox Developer Tools

I recently realized that support for inspection of DOM events is very poor in pretty much all developer tools. Having seen Opera Dragonfly's implementation some time ago, I liked the way you could very easily see the scope of an event.

I have used a similar design to add DOM event inspection to Firefox Developer Tools. The event icons are visible in the markup view and if you click on them you can see information about the event, including its handler. If you click the debugger icon at the top left of the popup it will take you to that handler in the debugger.
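
To make the workflow concrete, here is the kind of page code (element id and handler invented for illustration) whose listener the new event icon surfaces; clicking the debugger icon in the popup jumps straight to the function below:

// Hypothetical page code: the event icon next to <button id="save"> in the
// markup view lists this click listener and can jump to it in the debugger.
document.getElementById('save').addEventListener('click', function onSave(e) {
  console.log('saving…', e.target.id);
});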

Visual Events in Firefox Developer Tools

Whilst developing this feature I noticed that my workflow changed considerably. I found myself repeatedly looking at the event handlers attached to e.g. a button, clicking the debug icon, adding a breakpoint and clicking the button.

We hope this feature will be useful to you. If you have any idea how we can improve this feature then please let us know via our feedback channel or Bugzilla.

Michael Verdi: Localized screencasts perform better – go figure

Planet Mozilla viewers – you can watch this video on YouTube.

I created this video about bookmarks for Firefox 29. It’s in English and has closed captions for a few languages, including German. But you can see from this audience retention data that German speakers don’t watch the video as much as English speakers.
[Audience retention graph: English version]

So, with Kadir‘s help, I made a German version (above). You can see that this video performs much better in German speaking locales. Of course this is what we expected but it’s cool to see how plainly it shows up.
[Audience retention graph: German version]

Note: Rewinding and re-watching can result in values higher than 100%.

Hub Figuière: Going to Guadec

For the first time since 2008, when it was in Istanbul, I'm coming to Guadec. This time it is in Strasbourg, France, thanks to a work week scheduled just before in Paris.

I won't present anything this year, but I hope to be able to catch up a bit more with the Gnome community. I was already at the summit last fall, as it was being held in Montréal, but Guadec participation is usually broader and wider.

Hannah Kane: Maker Party Engagement: Week 1

Maker Party is here!

Last week Geoffrey sent out the Maker Party Marketing Plan and outlined the four strategies we’re using to engage the community in our annual campaign to teach the web.

Let’s see how we’re doing in each of those four areas.

First, some overall stats:

  • Events: 541 as of this writing (with more to be uploaded soon)
  • Hosts: 217 as of this writing
  • Expected attendees: 25,930 as of this writing
  • Contributors: See Adam’s post
  • Traffic: see the image below, which shows traffic to Webmaker.org during the last month. The big spike at the end of June/early July corresponds to the launch of the snippet. You can see another smaller spike at the launch of Maker Party itself.

[Screenshot: traffic to Webmaker.org during the last month]

——————————————————————–

Engagement Strategy #1: PARTNER OUTREACH

  • # of confirmed event partners: 200 as of this writing
  • # of confirmed promotional partners: 61 as of this writing

We can see from analytics on the RIDs that Web 2.0 Labs/Learning Revolution and National 4H are the leading partners in terms of generating traffic to Webmaker.org. Links attributed to them generated 140 and 68 sessions, respectively.

Additionally, we saw blog posts from these partners:

——————————————————————–

Engagement Strategy #2: ACTIVE MOZILLIANS

  • Appmaker trainings happened at Cantinas in MozSpaces around the world last Thursday. Waiting to hear a tally of how many Mozillians were engaged through those events.
  • You’ve probably seen the event reports on the Webmaker listserve from Reps and Mentors around the world who are throwing Maker Parties.
  • Hives are in full effect! Lots of event uploads this week from the Hive networks.

Note re: metrics—though there’s evidence of a lot of movement within this strategy, I’m not quite sure how to effectively measure it. Would love to brainstorm with others.

——————————————————————–

Engagement Strategy #3: OWNED MEDIA

  • Snippet: The snippet has generated nearly 300M impressions, ~610K clicks, and ~33,500 email sign-ups to date. We now have a solid set of baseline data for the initial click-through rate, and will shift our focus to learning as much as we can about what happens after the initial click. We are working on creating several variants of the most successful icon/copy combination to avoid snippet fatigue. Captured email addresses will be a part of an engagement email campaign moving forward.
  • Mozilla.org: The Maker Party banner went live on July 16 in EN, FR, DE, and es-ES. So far there’s been no correlative spike in traffic, but it’s too early to draw any conclusions about its effectiveness.

——————————————————————–

Engagement Strategy #4: EARNED MEDIA

Our partners at Turner4D have set up several interviews for Mark and Chris as well as Mozillians in Uganda and Kenya.

Radio

Print

English:

Indonesian:

German:

Spanish:

Importantly, Maker Party was included in a Dear Colleague Letter to 435 members of the U.S. Congress this week.

What are the results of earned media efforts?

None of the press we’ve received so far can be directly correlated with a bump in traffic. Because press, when combined with social media and word of mouth, can increase general brand awareness of Mozilla and Maker Parties, one of the data points we are tracking is traffic coming from searches for brand terms like “webmaker” and “maker party.” The graph below shows a spike in that kind of searching the day before the launch, followed by a return to more average levels.

[Screenshot: brand-term search traffic around the launch]
SOCIAL:

We do not consider social media to be a key part of our strategy to draw in contributors, but it is a valuable supplement to our other efforts, as it allows us to amplify and respond to the community voice.

You can see a big spike in mentions on this #MakerParty trendline:

See #MakerParty tweets here: https://twitter.com/search?q=%23makerparty&src=typd

Some highlights:

[Embedded #MakerParty tweet highlights]

That’s all for this week. Stay tuned. The analysis will get deeper as we collect more data.


Nigel Babu: OKFestival - Berlin, 2014

For the first time, I actually attended the OKFestival. I didn’t get to attend many sessions, but the conversations I’ve had are spectacular.

The first surprise was meeting malev. A couple of years ago, we both worked together on the Ubuntu project. Now, he’s an Open News Fellow and I work at Open Knowledge. The FOSS world is truly small :-)

I finally got to meet Christie! I’ve heard of Christie since right before she started at Mozilla, when I first heard of Open Source Bridge, and later she started at Mozilla Webdev, where I was closely involved back then.

Georg came over to say hi on Tuesday. When I realized that he was in Uganda for the Mozfest East Africa, I introduced him to Ketty who was also there, leading to an interesting conversation and great connection.

George Sattler works for XVT solutions in Australia and is our partner. He is fairly certain that I don’t sleep ;) We’ve been having conversations over email for quite a long time and it was great to meet George in person.

The Venue

It’s been a long time since I’ve met Adam Green, the editor of Public Domain Review. It was nice catching up with him. Also, Joris! I hadn’t seen him since he moved on from OKF :-)

I haven’t met Riju since he’s moved to Delhi and I met him in Berlin! Totally random and great running into him :)

The last time I met Kaustubh was at Pranesh’s farewell party in October (?). We had a good time catching up.

I also met folks from local groups across OKF. As a part-time sysadmin, I talk to most of the OKF community folks at some point through RT. Additionally, I was going around asking for feedback for the sysadmin team. It was great for me to put a face to names and I suspect vice versa as well.

The usual suspects who were great to meet, are of course, my lovely teammates. It’s nice to meet in person, grab a drink, and talk.

Congratulations again to Bea, Megan, Lou, and Naomi for making OKFestival happen!

Cutting the Cake

Will Kahn-Greene: Input status: July 20th, 2014

Summary

This is the status report for development on Input. I publish a status report to the input-dev mailing list every couple of weeks or so covering what was accomplished and by whom and also what I'm focusing on over the next couple of weeks. I sometimes ruminate on some of my concerns. I think one time I told a joke.

Last status report was at the end of June. This status report covers the last few things we landed in 2014q2 as well as everything we've done so far in 2014q3.

Development

Landed and deployed:

  • 6ecd0ce [bug 1027108] Change default doc theme to mozilla sphinx (Anna Philips)
  • 070f992 [bug 1030526] Add cors; add api feedback get view
  • f6f5bc9 [bug 1030526] Explicitly declare publicly-visible fields
  • c243b5d [bug 1027280] Add GengoHumanTranslater.translate; cleanup
  • 3c9cdd1 [bug 1027280] Add human tests; overhaul Gengo tests
  • ff39543 [bug 1027280] Add support for the Gengo sandbox
  • 258c0b5 [bug 1027280] Add test for get_balance
  • 44dd8e5 [bug 1027280] Implement Gengo Human push_translations
  • 35ae6ec [bug 1027280] Clean up API code
  • a7bf90a [bug 1027280] Finish pull_translations and tests
  • c9db147 [bug 1027286] Gengo translation system status
  • f975f3f [bug 1027291] Implement spot Gengo human translation
  • f864b6b [bug 1027295] Add translation_sync cron job
  • c58fd44 [bug 1032226] en-GB should copyover, too
  • 7480f87 [bug 1032226] Tweak the code to be more defensive
  • 7ac1114 [bug 1032571] CSRF exempt the API
  • ac856eb [bug 1032571] Fix tests to catch csrf issues in the api
  • 74e8e09 [bug 1032967] Handle unsupported language pairs
  • 74a409e [bug 1026503] First pass at vagrantification
  • a7a440f Continued working on docs; ditched hacking howto
  • 44e702b [bug 1018727] Backfill translations
  • 69f9b5b Fix date_end issue
  • e59d4f6 [bug 1033852] Better handle unsupported src languages
  • cc3c4d7 Add list of unsupported languages to admin
  • 32e7434 [bug 1014874] Fix translate ux
  • 672abba [bug 1038774] Hide responses from hidden products
  • e23eca5 Fix a goof in the last commit
  • 6f78e2e [bug 947767] Nix authentication for API stuff
  • a9f2179 Fix response view re: non-existent products
  • e4c7c6c [Bug 1030905] fjord feedback api tests for dates (Ian Kronquist)
  • 0d8e024 [bug 935731] Add FactoryBoy
  • 646156f Minor fixes to the existing API docs
  • f69b58b [bug 1033419] Heartbeat backend prototype
  • f557433 [bug 1033419] Add docs for heartbeat posting

Landed, but not deployed:

  • 7c7009b [bug 935731] Switch all tests to use FactoryBoy
  • 2351fb5 Generate locales so ubuntu will quite whining (Ian Kronquist)

Current head: 7ea9fc3

High-level

At a high level, this is:

  1. Landed automated Gengo human translation and a bunch of minor fixes to make it work more smoothly.
  2. Reworked how we build development environments to use vagrant. This radically simplifies the instructions and should make it a lot easier for contributors to build a development environment. This in turn should lead to more people working on Input.
  3. Fixed a bug where products marked as "hidden" were still showing up in the dashboard.
  4. Implemented a GET API for Input responses. (https://wiki.mozilla.org/Firefox/Input/Dashboards_for_Everyone)
  5. Implemented the backend for the Heartbeat prototype. (https://wiki.mozilla.org/Firefox/Input/Heartbeat)
  6. Also, I'm fleshing out the Input section in the wiki complete with project plans. (https://wiki.mozilla.org/Firefox/Input)

Over the next two weeks

  1. Continue fleshing out project plans for in-progress projects on the wiki.
  2. Gradient sentiment and product picker work.

What I need help with

  1. We have a new system for setting up development environments. I've tested it on Linux. Ian has, too (pretty sure he's using Linux). We could use some help testing it on Windows and Mac OSX.

Do the instructions work on Windows? Do the instructions work on Mac OSX? Are there important things the instructions don't cover? Is there anything confusing?

http://fjord.readthedocs.org/en/latest/getting_started.html

  2. I'm changing the way I'm managing Fjord development. All project plans will be codified in the wiki. A rough roadmap of which projects are on the drawing board, in-progress, completed, etc is also on the wiki. I threw together a structure for all of this that I think is good, but it could use some review.

Do these project plans provide useful information? Are there important questions that need answering that the plans do not answer?

https://wiki.mozilla.org/Firefox/Input

If you're interested in helping, let me know! We hang out on #input on irc.mozilla.org and there's the input-dev mailing list.

I think that covers it!

Soledad Penades: “Just turn it into a node module”, and other mantras Edna taught me

Here are the screencast and the write up on the talk I gave today at One-Shot London NodeConf. As usual I diverge a bit from the initial narrative and I forgot to mention a couple of topics I wanted to highlight, but I have had a horrible week and considering that, this has turned out pretty well!

It was great to meet so many interesting people at the conference and seeing old friends again! Also now I’m quite excited to hack with a few things. Damn (or yay!).

Slides and code for the slides.

Creative technologist


A little more than a year ago I started working at an agency in London as a creative technologist. It was fun to be trying out all the new things and building cool experiences with them. So whatever new APIs were out there, we would come up with some idea to use them. Sometimes that would be because the client wanted to be “first” in using that browser functionality so theirs would be the first website that did X and that would drive a ton of traffic to them, and other times we just wanted to do some cool experiment for our own website so we would master that technology and also attract traffic and get new clients–that’s how it works in advertising.

We mostly used JavaScript on the front-end (except some old remains from the past built in Flash). On the server, we used a mix of Python and node.js. Most of the Python was actually for setting up websites and authentication in Google App Engine which is what they used to host the websites (so the server wouldn’t go down when the site got popular), but for all the real time communication we used Socket.io running in node.js because it was way easier than Python alternatives.

I had been toying with node.js on and off for a while already, but this was when I started to actually believe that I could just use JS for everything… even build scripts!

The worst habits

But I had the worst habits after a life of many other programming languages, and although I wasn’t consciously aware of them, I knew that something didn’t feel quite right. But what? It was hard to say when everyone else in your environment was OK with whatever solution you came up with as long as it worked well enough and you delivered on time.

This was also part of why I joined Mozilla–I was feeling super guilty that we were building things that were not using standards and they would break in the future, or even worse, set bad precedents and habits. I didn’t want to contribute to a web where the cool experiences only worked on one browser, but I wanted to contribute to make the web more powerful and expressive.

My buddy Jen

cult leader

A couple of months later, I was in Mountain View for my onboarding week. I was disoriented and jetlagged, but also very excited and scared! And more people from my future team were in Mountain View for that week. One of them, Jen, sent me a message pretty much just as I checked into the hotel: “hey, ready for dinner?”

I hadn’t even met her or spoken to her during the interviews, so I wasn’t even sure what she looked like in real life. I washed my face, told myself in the mirror that it was 6pm, NOT 2 AM as my body was trying to protest, and that everything was OK, and then went downstairs to meet her.

She was waiting in the parking lot. I had only seen a tiny picture of her with shorter hair in her (very prolific) github profile, but honestly, who waits alone in the parking lot of a hotel in Mountain View at 6pm on a Sunday? We said “hi” and she said: “it’s my birthday today, and I want to have a nice dinner”. Would I oppose such a thing? No!

Jen’s a philosopher, therefore she philosophises

best github account ever

We walked to Castro street, spending some time admiring the peculiarities of the businesses on either side of El Camino Real. You might have a chiropractor, a massage parlour, a beauty salon, and a gun seller, all side by side. It was all very amusing. Then we went into a Moroccan restaurant, and we had to prove our age by showing our respective passports, which was amusing again.

So we started talking about the food first… how it didn’t taste like anything Moroccan I had had before, and whether the Moroccan food I had had either in London or Paris could be considered authentic, or more authentic than this–based on closeness to the “source”. But you can only analyse food so much, so we switched to other topics and at the end of dinner she was telling me how she was going to build a distributed blog system, but she would turn it into a module so she could then reuse it for other things because it would be generic enough and… with the wine and the jetlag I was really finding it hard to follow.

She continued discussing and elaborating on these ideas during the week. She was hacking on a module that would be called meatspace. And she excitedly told me: “to empty it you would just call the flush method!”. Not being a native English speaker, I didn’t understand what ‘meatspace’ meant initially, so the whole naming seemed disgusting to me. Flushing meat down the drain to empty the stored messages! GROSS.

rtcamera

My first task to get acquainted with the Mozilla process was to port or rewrite one of my existing Android apps to the web. WebRTC support was coming up soon in Firefox, so I opted to build something similar to Nerdstalgia. And I built something, and then I had Jen code review it. I didn’t know it initially, but she had been appointed my “Moz buddy”, to guide me around and introduce me to the usual processes.

She would keep mentioning this notion of “making things into modules” but I still didn’t quite get it. I regularly extracted my code into libraries, right? So why all this insistence on modules?

Modules

Intrigued (or sparkled) by her insistence, I started extracting modules out of rtcamera. The first one was the Animated_GIF library, and then gumHelper. This was quite providential because a while later she was exploring this idea of a distributed multiuser chat that could use your webcam to capture your facial expression, and because we had these libraries handy, adding them to the stack was very easy. You might, or might not, have heard of a thing called Meatspace Chat.

Frida is my muse
Frida is one of Jen’s cats. This is Frida after seeing comma-first JS, according to Potch.

Something that really helped me “get the node way” were her comments on how to pass parameters, and callbacks. This was one of the reasons why my node.js code didn’t feel ‘right’: I was using my own ad hoc style, the result of having programmed in many languages but not being proficient in node.js – so I wasn’t using its idioms, and my code felt weird when using other people’s code–even system modules such as fs.

She insisted a lot on using a standard “callback signature” — the function(err, result) style which honestly drove me a bit nuts at the beginning. But if you’re using the same style throughout the code you can exchange node modules or even send the callback to another function, and it’s easier than if you have different signatures on each library.
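
A minimal sketch of that convention (file name invented), using the core fs module that had felt so foreign before:

var fs = require('fs');

// Node-style, error-first callback: function (err, result)
function readConfig(path, callback) {
  fs.readFile(path, 'utf8', function (err, contents) {
    if (err) return callback(err);        // errors always travel first…
    callback(null, JSON.parse(contents)); // …and the result second
  });
}

// Because every async function shares this shape, callbacks can be handed
// around or swapped between modules without any glue code.
readConfig('./settings.json', function (err, settings) {
  if (err) throw err;
  console.log(settings);
});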

Simplify

Another of her lessons was that if you were trapped in callback hell, maybe you were doing it conceptually wrong. Maybe you should be simplifying your code so you do calls in a different way. I am not totally sure of what I like most yet–promises or just plain callbacks, but I see her point, and oftentimes I would bring a Promises library to my project, then after refactoring the code so it would be suitable for promises I find that I don’t really need them at all.
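
A tiny sketch of what that simplification can look like (file names invented): the same read–transform–write flow, shown first as a nesting pyramid and then as flat, named steps that still use plain callbacks:

var fs = require('fs');

// Nested version: every step pushes the next one further to the right.
fs.readFile('in.txt', 'utf8', function (err, text) {
  if (err) return console.error(err);
  fs.writeFile('out.txt', text.toUpperCase(), function (err) {
    if (err) return console.error(err);
    console.log('done');
  });
});

// Alternative version of the same flow: each step is a named function,
// the pyramid is gone, and no promise library was needed.
function read()          { fs.readFile('in.txt', 'utf8', shout); }
function shout(err, txt) { if (err) return fail(err); fs.writeFile('out.txt', txt.toUpperCase(), report); }
function report(err)     { if (err) return fail(err); console.log('done'); }
function fail(err)       { console.error(err); }

read();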

Likewise for user interfaces–most of the time we agonise over how pretty something has to look but the fact is that the site might not provide any value to a user and so they leave. We should focus on the experience first, and then maybe we can make the things prettier.

npm

Another important lesson: it’s totally OK to use other people’s modules! Believe it or not, my initial node code almost didn’t use anyone’s modules, and if I used external code that was because I downloaded the code and copied it to the src folder AND loaded it with a local require path. npm? What was that thing?

Funny fact: I was watching RealtimeConf’s live stream because Jen was doing a talk on all the experiments she had been working on and was going to present Meatspace chat for the first time, and so I stayed for a while. And then I learnt a nice lesson, not from her directly but from Max Ogden in his RealtimeConf talk: you don’t need to care about the code style in a node module, as long as it works. If it doesn’t, either you replace that module with another one, or you fix it. And again, if you’re using the same signature, this is easier to accomplish–you’re just replacing “boxes”.

Having tests is incredibly useful for this. Jen often writes the module tests first and then the module code itself–so she defines the API that way and sees if it feels natural and intuitive.

At this point there’s no UI code yet, just tests that make sure the logic works. If you can run the same test input data through two different modules you can make sure they do what they are supposed to do, and can also compare their performance or whatever it is that makes you want to switch modules. This again is way easier if your function signatures are “standard node.js style”.
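
As a sketch of that habit (module name and API invented for illustration), a test file like this can exist before the module it describes, using nothing but node’s built-in assert:

var assert = require('assert');

// Written before the implementation: this file pins down the API we want.
var slugify = require('./slugify'); // hypothetical module under test

assert.strictEqual(slugify('Hello, World!'), 'hello-world');
assert.strictEqual(slugify('  spaces  '), 'spaces');
console.log('slugify: all tests passed');

// A matching ./slugify.js, written second and shaped by the tests above,
// could be as small as:
// module.exports = function slugify(str) {
//   return str.trim().toLowerCase().replace(/[^a-z0-9]+/g, '-').replace(/^-+|-+$/g, '');
// };

Swapping in a different slugify module later only requires that it keep passing this file.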

Browserify

I actually heard about this one the same week I started at Mozilla. But I was unable to appreciate its awesomeness–I was just used to applying the Google Closure compiler to my concatenated code and calling it a day. I either concatenated the code with a nice

cat *.js > all.js

or maybe I built Python scripts that would read certain code files and join them together before either invoking a local version of the closure compiler (which was Java) or sending the uncompressed code to the Google Closure service, hopefully getting it back if there weren’t any errors.

But I quickly forgot about it. Some time later, I was looking into building a somewhat complex thing for my JSConf.EU project, and somehow Jen reminded me about it.

This project was a fun one, because I was using everything on it: server side node with Express.js serving the slides, which were advanced according to the music player with virtual Web Audio based instruments that was running on the browser, plus I had Socket.io sending data to and from a hardware MIDI device through OSC. So there were lots of data to pass back and forth and lots of modules to orchestrate, and including script tags in the browser wasn’t going to work well if I wanted to stay sane. So all the front-end code was launched using Browserify.
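
As a rough sketch of that setup (module and file names invented), the browser entry point is written like any other node module and Browserify folds the whole dependency graph into a single script:

// main.js — browser entry point, written with ordinary require() calls.
var synth = require('./synth');       // hypothetical local Web Audio module
var io = require('socket.io-client'); // the client half of Socket.io, from npm

var socket = io();
socket.on('midi', function (msg) {
  synth.play(msg.note, msg.velocity); // hypothetical API on the synth module
});

// Bundled for the browser with:  browserify main.js -o bundle.js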

Another favourite anecdote in this project is that I in fact extracted several modules out of it, with tests and all, that I then reused in other projects. So I was taking advantage of my own work later on, and I like to think that when this happens, more people might find it useful too.

Multiplatform

Finally–and this is not a thing that only Jen taught me– one of the reasons why we like node a lot in Mozilla is because it makes it so much easier to write code that works everywhere. And with that I mean different platforms. As long as node can run in that system, the code can run.

This is very important because most of the time developers assume that everyone else is running the same system they are developing on, and in rich countries this often means the assumption that people use Macs, but that’s not the case everywhere, and certainly not in poorer countries. They use Windows or Linux, and setting up a toolchain to get a working make tool with which to run a Makefile is either harder or not for the faint of heart.

So in this context, distributing build scripts written for node.js is way more democratic and helps us get our code to more people than if we used Make or Bash scripts.

And here comes one of my favourite stories–when one of the meatspacers sent me a PR to improve the build system of one of the libraries I had extracted and make it use node.js with uglify instead of the bash script I was using. That simple gesture enabled all the Windows developers to contribute to my module!
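
A sketch of what such a node-based build script can look like (file names invented; the uglify-js call shown follows the current minify(code) form, which differs slightly between major versions):

// build.js — concatenate and minify the sources; no bash or make required,
// so it runs the same on Windows, Linux and OS X.
var fs = require('fs');
var UglifyJS = require('uglify-js');

var sources = ['src/core.js', 'src/helpers.js']; // hypothetical file list

var code = sources.map(function (file) {
  return fs.readFileSync(file, 'utf8');
}).join('\n');

var result = UglifyJS.minify(code);
if (result.error) throw result.error;
fs.writeFileSync('library.min.js', result.code);
console.log('wrote library.min.js');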

Conclusions

  • node modularity is awesome, but it takes time to ‘get it’. It’s OK not to get things on the first try.
  • If you can find a mentor, it will help you ‘get it’ faster.
  • Otherwise maybe hang out on the proper channels (irc, user groups, blogs, confs), study other people’s code and BE A SPONGE (a nodesponge?)
  • Don’t be afraid to experiment but also use the safety harness: tests!
  • And don’t be afraid to publish your code – maybe someone else will find it useful OR give you advice to improve it!


Adam Lofting: 2014 Contributor Goals: Half-time check-in

We’re a little over halfway through the year now, and our dashboard is now good enough to tell us how we’re doing.

TL;DR:

  • The existing trend lines won’t get us to our 2014 goals
    • but knowing this is helpful
    • and getting there is possible
  • Ask less: How do we count our contributors?
  • Ask more: What are we doing to grow the contributor community? And, are we on track?

Changing the question

Our dashboard now needs to move from being a project to being a tool that helps us do better. After all, Mozilla’s unique strength is that we’re a community of contributors and this dashboard, and the 2014 contributor goal, exist to help us focus our workflows, decisions and investments in ways that empower the community. Not just for the fun of counting things.

The first half of the year focused us on the question “How do we count contributors?”. By and large, this has now been answered.

We need to switch our focus to:

  1. Are we on track?
  2. What are we doing to grow the contributor community?

Then repeating these two questions regularly throughout the year, and adjusting our strategy as we go.

Are we on track?

Wearing my cold-dispassionate-metrics hat, and not my “I know how hard you’re all working already” hat, I have to say no (or, not yet).

I’m going to look at this team by team and then look at the All Mozilla Foundation view at the end.

Your task, for each graph below, is to take an imaginary marker pen and draw the line for the rest of the year based on the data you can see to date. And only on the data you can see to date.

  • What does your trend line look like?
  • Is it going to cross the dotted target line in 2014?

OpenNews

[Graph: OpenNews active contributors]

Based on the data to-date, I’d draw a flat line here. Although there are new contributors joining pretty regularly, the overall trend is flat. In marketing terms there is ‘churn’; not a nice term, but a useful one to talk about the data. To use other crass marketing terms, ‘retention’ is as important as ‘acquisition’ in changing the shape of this graph.

Science Lab

[Graph: Science Lab active contributors]

Dispassionately here, I’d have to draw a trend line that’s pointing slightly down. One thing to note in this view is that the Science Lab team have good historic data, so what we’re seeing here is the result of the size of the community in early 2013, and some drop-off from those people.

Appmaker

[Graph: Appmaker active contributors]

This graph is closest to what we want to see generally, i.e. pointing up. But I’ll caveat that with a couple of points. First, taking the imaginary marker pen, this isn’t going to cross the 2014 target line at the current rate. Second, unlike the Science Lab and OpenNews data above, much of this Appmaker counting is new. And when you count things for the first time, a 12 month rolling active total has a cumulative effect in the first year, which increases the appearance of growth, but might not be a long term trend. This is because Appmaker community churn won’t be a visible thing until next year when people could first drop out of the twelve month active time-frame.

Webmaker

[Graph: Webmaker active contributors]

This graph is the hardest to extend with our imaginary marker pen, especially with the positive incline we can see as Maker Party kicks off. The Webmaker plan expects much of the contributor community growth to come from the Maker Party campaign, so a steady incline was not the expectation across the year. But, we can still play with the imaginary marker pen.

I’d do the following exercise: In the first six months, active contributors grew by ~800 (~130 per month), so assuming that’s a general trend (big assumption) and you work back from 10k in December you would need to be at ~9,500 by the end of September. Mark a point at 9,500 contributors above the October tick and look at the angle of growth required throughout Maker Party to get there. That’s not impossible, but it’s a big challenge and I don’t have any historic data to make an informed call here.
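
Written out, that back-of-the-envelope exercise (using only the approximate figures above) looks like this:

// Rough arithmetic for the marker-pen exercise, using the figures above.
var trendPerMonth = 130;      // ~800 new active contributors over six months
var decemberTarget = 10000;
var monthsAfterSeptember = 3; // October, November, December at the current trend
var neededByEndOfSeptember = decemberTarget - trendPerMonth * monthsAfterSeptember;
console.log(neededByEndOfSeptember); // ≈ 9,600, i.e. the ~9,500 ballpark above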

Note: the Appmaker/Webmaker separation here is a legacy thing from the beginning of the year when we started this project. The de-duped datastore we’re working on next will allow us to graph: Webmaker Total > Webmaker Tools > Appmaker as separate graphs with separate goals, but which get de-duped and roll-up into the total numbers above, and in turn roll-up into the Mozilla wide total at areweamillionyet.org – this will better reflect the actual overlaps.

Metrics

[ 0 contributors ]

The MoFo metrics team currently has zero active volunteer contributors, and based on the data available to date is trending absolutely flat. Action is required here, or this isn’t going to change. I also need to set a target. Growing 0 by 10X doesn’t really work. So I’ll aim for 10 volunteer contributors in 2014.

All Mozilla Foundation

[Graph: All Mozilla Foundation active contributors]

Here we’re adding up the other graphs and also adding in ~900 people who contributed to MozFest in October 2013. That MozFest number isn’t counted towards a particular team and simply lifts the total for the year. There is no trend for the MozFest data because all the activity happened at once, but if there wasn’t a MozFest this year (don’t worry, there is!) in October the total line would drop by 900 in a single week. Beyond that, the shape of this line is the cumulative result of the team graphs above.

In Q3, we’ll be able to de-dupe this combined number as there are certainly contributors working across MoFo teams. In a good way, our total will be less than the sum of our parts.

Where do we go from here?

First, don’t panic. Influencing these trend lines is not like trying to shift a nation’s voting trends in the next election. Much of this is directly under our control, or if not ‘control’, then it’s something we can strongly influence. So long as we work on it.

Next, it’s important to note that this is the first time we’ve been able to see these trends, and the first time we can measure the impact of decisions we make around community building. Growing a community beyond a certain scale is not a passive thing. I’ve found David Boswell’s use of the term ‘intentional’ community building really helpful here. And much more tasteful than my marketing vocabulary!

These graphs show where we’re heading based on what we’re currently doing, and until now we didn’t know if we were doing well, or even improving at all. We didn’t have any feedback mechanism on decisions we’d make relating to community growth. Now we do.

Trend setting

Here are some initial steps that can help with the ‘measuring’ part of this community building task.

Going back to the marker pen exercise, take another imaginary color and rather than extrapolate the current trend, draw a positive line that gets you to your target by the end of the year. This doesn’t have to be a straight line; allow your planned activity to shape the growth you want to see. Then ask:

  • Where do you need to be in Aug, Sep, Oct, Nov, Dec?
  • How are you going to reach each of these smaller steps?

Schedule a regular check-in that focuses on growing your contributor community and check your dashboard:

  • Are your current actions getting you to your goals?
  • What are the next actions you’re going to take?

The first rule of fundraising is ‘Ask for money’. People often overlook this. By the same measure, are you asking for contributions?

  • How many people are you asking this week or month to get involved?
  • What percentage of them do you expect to say yes and do something?

Multiply those numbers together and see if that prediction can get you to your next step towards your goal.
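
For instance (placeholder numbers only), the arithmetic is as simple as:

// Placeholder numbers: does this month's asking reach the next milestone?
var asksThisMonth = 200;    // people explicitly invited to contribute
var expectedYesRate = 0.05; // fraction expected to say yes and do something
console.log(asksThisMonth * expectedYesRate); // 10 predicted new contributors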

Asking these questions alone won’t get us to our goals, but it helps us to know if our current approach has the capacity to get there. If it doesn’t we need to adjust the approach.

Those are just the numbers

I could probably wrap up this check-in from a metrics point of view here, but this is not a numbers game. The Total Active Contributor number is a tool to help us understand scale beyond the face-to-face relationships we can store in our personal memories.

We’re lucky at Mozilla that so many people already care about the mission and want to get involved, but sitting and waiting for contributors to show up is not going to get us to our goals in 2014. Community building is an intentional act.

Here’s to setting new trends.

Kevin Ngo: Poker Sessions #19 to #27 - Downswing

Just keep Hulking through the drops.

Since returning from a three-week personal/work/family trip in Florida, things have not gone too hot. Busted out of a $5K (-$80), out of a freeroll (-$60), a $3K (-$90), a couple of small $300s (-$90), a couple of $1500s (-$100), and a couple more $5Ks (-$160). That totals for a -$480 dip. Though I try not to be results-oriented.

The first couple of tournaments were rust. I chalk the rest up to "that's tournament poker". MTTs are naturally swingy, despite playing pretty solid. Most bust outs were failed steals in the higher all-in-or-fold blind levels, a couple were suckouts. But I won't recite every bust-out hand.

Though I have been doing pretty solid live, I have been getting undisciplined in my online poker play. It's time to hit the books and tighten up. Harrington has a solid guide for preflop play that I need to freshen up on.

After doing some bookkeeping, my poker bankroll after 27 sessions is +$3272.

Sessions Conclusions

  • Went Well: improving on hand-reading, taking less marginal lines, super patience
  • Mistakes: some loose push-fold play, thinking limpers always have marginal hands
  • Get Better At: studying on late push-fold play, whether it needs to tighten up
  • Profit: -$480

Kim Moir: Reminder: Release Engineering Special Issue submission deadline is August 1, 2014

Just a friendly reminder that the deadline for the Release Engineering Special Issue is August 1, 2014. If you have any questions about the submission process or a topic that you'd like to write about, the guest editors, including myself, are happy to help you!

Kim Moir: Mozilla pushes - June 2014

Here's June 2014's analysis of the pushes to our Mozilla development trees. You can load the data as an HTML page or as a JSON file.

Trends
This was another record-breaking month with a total of 12534 pushes.  As a note of interest, this is over double the number of pushes we had in June 2013. So big kudos to everyone who helped us scale our infrastructure and tooling.  (Actually, we had 6,433 pushes in April 2013, which would make this slightly less than double, because June 2013 was a bit of a dip.  But still impressive :-)

Highlights
  • 12534 pushes
    • new record
  • 418 pushes/day (average)
    • new record
  • Highest number of pushes/day: 662 pushes on June 5, 2014
    • new record
  • Highest 23.17 pushes/hour (average)
    • new record

General Remarks
The introduction of Gaia-try in April has been very popular and comprised around 30% of pushes in June compared to 29% last month.
The Try branch itself consisted of around 38% of pushes.
The three integration repositories (fx-team, mozilla-inbound and b2g-inbound) account for around 21% of all the pushes, compared to 22% in the previous month.

Records
June 2014 was the month with the most pushes (12534 pushes)
June 2014 has the highest pushes/day average, with 418 pushes/day
June 2014 has the highest average of "pushes-per-hour", with 23.17 pushes/hour
June 4th, 2014 had the highest number of pushes in one day, with 662 pushes





Mark Surman: Quick thoughts from Kenya

Going anywhere in Africa always energizes me. It surprises me. Challenges my assumptions. Gives me new ideas. And makes me smile. The week I just spent in Nairobi did all these things.

Airtel top up agenda in Nairobi

The main goal of my trip was to talk to people about the local content and simple appmaking work Mozilla is doing. I spent an evening talking with Mozilla community members, a day and a bit with people at Equity Bank and a bunch of time with people from iHub. Here are three of the many thoughts I had while reflecting on the flight home:

Microbusiness is our biggest opportunity for AppMaker

I talked to A LOT of people about the idea of non-techie smartphone users being able to make their own apps.

My main question was: who would want to make their own app rather than just use Facebook? Most of the good answers had to do with someone running a very small business. A person selling juice to office workers who wastes a lot of travel time taking orders. An up and coming musician who wants a way to pre-sell tickets to loyal fans using mobile money. A chicken farmer outside Nairobi who is always on the phone with the hotels she sells to (pic below, met her and her husband while on a trip with Equity Bank folks). The common thread: simple to make and remix apps could be very useful to very small real world businesses that would benefit from better communications, record keeping and transaction processing via mobile phone.

[Photo: the chicken farmer mentioned above, with her husband]

Our main priority with AppMaker (or whatever we call it) right now is to get a first cut at on-device authoring out there. In the background, we also really need to be pushing on use cases like these — and the kind of app templates that would enable them. Some people at the iHub in Nairobi have offered to help with prototyping template apps specific to Kenya over the next few months, which will help with figuring this out.

Even online is offline in much of Africa

As I was reminded at MozFest East Africa, even online is offline in much of Africa (and many other parts of the world). In the city, the cost of data for high bandwidth applications like media streaming — or running a Webmaker workshop — is high. And, outside the city, huge areas have connections that are spotty or non-existent.

[Photo: BRCK in use]

It was great to meet the BRCK people who are building products to address issues like this. Specifically: BRCK is a ruggedized wifi router with a SIM card, useful I/O ports and local storage. Brainstorming with Juliana and Erik from iHub, it quickly became clear that it could be useful for things like Webmaker workshops in places where connectivity is expensive, slow or even non-existent. If you popped a Raspberry Pi on the side, you might even be able to create a working version of Webmaker tools like Thimble and Appmaker that people could use locally — with published web pages and apps trickling back or syncing once the BRCK had a connection. The Kenyan Mozillians I talked to were very excited about this idea. Worth exploring.

People buy brands

During a dinner with local Mozillians, a question was raised: ‘what will it take for Firefox OS to succeed in Kenya?’ A debate ensued. “Price,” said one person, “you can’t get a $30 smartphone like the one Mozilla is going to sell.” “Yes you can!”, said another. “But those are China phones,” said someone else. “People want real phones backed by a real brand. If people believe Firefox phones are authentic, they will buy them.”


Essentially, they were talking about the tension between brand / authenticity / price in commodity markets like smartphones. The contention was: young Kenyans are aspiring to move up in the world. An affordable phone backed by a global brand like Mozilla stands for this. Of course, we know this. But it’s a good reminder from the people who care most about Mozilla (our community, pic below of Mozillians from Kenya) that the Firefox brand really needs to shine through on our devices and in the product experience as we roll out phones in more parts of the world.

Mozillians from Nairobi

I’ve got a lot more than this rumbling around in my head, of course. My week in Uganda and Kenya really has my mind spinning. In a good way. It’s all a good reminder that the diverse perspectives of our community and our partners are one of our greatest strengths. As an organization, we need to tap into that even more than we already do. I truly believe that the big brain that is the Mozilla Community will be a key factor in winning the next round in our efforts to stand up for the web.


Filed under: mozilla, webmakers

Doug BelshawHOWTO: apply for Webmaker badges

super-mentor-02.jpg

We’re in full swing with Webmaker contribution and web literacy badges at the moment, so I wanted to take a moment to give some advice to people thinking of applying. We already have a couple of pages on the Webmaker blog for the Mentor and Super Mentor badges.

However, I wanted to give some general advice and fill in the gaps.

First of all, it’s worth sharing the guidance page for the people reviewing your application. In the case of a Webmaker Super Mentor badge, this will be a Mozilla paid contributor (i.e. staff member), but for all other badges it may be a community member who has unlocked the necessary privileges.

To be clear:


The best applications we’ve seen for the Webmaker badges so far explain how the applicant meets each one of the criteria on the badge detail page.

For example, this was Stefan Bohacek’s successful application for the Sharing ‘maker’ badge:

1) Sharing a resource using an appropriate tool and format for the audience: I wrote tutorials for people learning to make websites and web apps and shared them on my blog: http://blog.fourtonfish.com/tagged/tutorial. These also exist as a teaching kit on Webmaker – see my blogpost with a link here: http://blog.fourtonfish.com/post/89157427285/mozilla-webmaker-featured-teaching-kit. Furthermore I created resources for web developers such as http://simplesharingbuttons.com (also see: http://badges.p2pu.org/en/project/477) and some other (mini-)projects here: https://github.com/fourtonfish

2) Tracking changes made to co-created Web resources: I use GitHub for some of my personal projects (although I only received a handful of opened issues) and GitLab with clients I worked with/for.

3) Using synchronous and asynchronous tools to communicate with web communities, networks and groups https://twitter.com/fourtonfish – I follow some of the members of Webmaker (and seldomly frustrate Doug Belshaw with questions) https://plus.google.com/+StefanBohacek/posts – I am a member of the Webmaker community http://webdevrefinery.com/forums/user/18887-ftfish/ – I (infrequently) post here, share ideas, comment on ideas of others etc. stefan@fourtonfish.com – I wouldn’t be able to finish my teaching kit without the help of other webmakers and my email account to communicate with them

Note that Stefan earned his badge for numbers 1) and 3) in the above example. This was enough to meet the requirements as the badge is awarded for meeting any two of the criteria listed on the badge detail page. He did not provide any evidence for using GitHub, as mentioned in 2), so this was not used as evidence by the person reviewing his application.


Applying for a badge is just like applying for anything in life:

  • Make the reviewer’s job easy – they’re looking at lots of applications!
  • Tell the reviewer which of the criteria you think you have met.
  • Include a link for each of the criteria – more than one if you can.
  • If you are stuck, ask for help. A good place to start is the Webmaker discussion forum, or if you know someone who’s already got that badge, ask them to help you!

Questions? Comments? I’m @dajbelshaw or you can email me at doug@mozillafoundation.org. Note that specific badge questions should go in the discussion forum.


Image CC Mozilla in Europe

Botond BalloTrip Report: C++ Standards Committee Meeting in Rapperswil, June 2014

Summary / TL;DR

Project Status
  • C++14: On track to be published late 2014
  • C++17: A few minor features so far, including for (elem : range)
  • Networking TS: Ambitious proposal to standardize sockets based on Boost.ASIO
  • Filesystems TS: On track to be published late 2014
  • Library Fundamentals TS: Contains optional, any, string_view and more. Progressing well; expected early 2015
  • Library Fundamentals TS II: Follow-up to Library Fundamentals TS; will contain array_view and more. In an early stage
  • Array Extensions TS: Completely stalled. No proposal related to runtime-sized arrays/objects currently has consensus
  • Parallelism TS: Progressing well; expected 2015
  • Concurrency TS: Executors and resumable functions need more work
  • Transactional Memory TS: Progressing well; expected 2015-ish
  • Concepts (“Lite”) TS: Progressing well; expected 2015
  • Reflection: A relatively full-featured compile-time introspection proposal was favourably reviewed. Might target a TS or C++17
  • Graphics: Moving forward with a cairo-based API, to be published in the form of a TS
  • Modules: Clang has a complete implementation for C++, with a plan to push it for C++17

Introduction

Last week I attended another meeting of the ISO C++ Standards Committee in Rapperswil, Switzerland (near Zurich). This is the third Committee meeting I have attended; you can find my reports about the previous two here (September 2013, Chicago) and here (February 2014, Issaquah). These reports, particularly the Issaquah one, provide useful context for this post.

With C++14’s final ballot still in progress, the focus of this meeting was on the various language and library Technical Specifications (TS) that are planned as follow-ups to C++14, and on C++17.

C++14

C++14 is currently out for its “DIS” (Draft International Standard) ballot (see my Issaquah report for a description of the procedure for publishing a new language standard). This ballot was sent out at the end of the Issaquah meeting, and will close mid-August. If no national standards body poses an objection by then – an outcome considered very likely – then the standard will be published before the end of the year.

Since a ballot was in progress, no changes to the C++14 draft were made during the Rapperswil meeting.

C++17, and what’s up with all these TS’s?

ISO procedure allows the Committee to publish two types of documents:

  • International Standards (IS). These are official standards with stringent backwards-compatibility requirements.
  • Technical Specifications (TS) (formerly called Technical Reports (TR)). These are for things that are not quite ready to be enshrined into an official standard yet, and have no backwards-compatibility requirements. Specifications contained in a TS may or may not be added, possibly with modifications, into a future IS.

C++98 and C++11 are IS’s, as will be C++14 and C++17. The TS/TR form factor has, up until recently, only been used once by the Committee: for TR1, the 2005 library spec that gave us std::tr1::shared_ptr and other library enhancements that were then added into C++11.

Since C++11, in an attempt to make the standardization process more agile, the Committee has been aiming to place significant new language and library features into TS’s, published on a schedule independent of the IS’s. The idea is that being in a TS allows the feature to gain user and implementation experience, which the committee can then use to re-evaluate and possibly revise the feature before putting it into an IS.

As such, much of the standardization work taking place concurrently with and immediately after C++14 is in the form of TS’s, the first wave of which will be published over the next year or so, and the contents of which may then go into C++17 or a subsequent IS, as schedules permit.

Therefore, at this stage, only some fairly minor features have been added directly to C++17.

The most notable among them is the ability to write a range-based for loop of the form for (elem : range), i.e. with the type of the element omitted altogether. As explained in detail in my Issaquah report, this is a shorthand for for (auto&& elem : range) which is almost always what you want. The Evolution Working Group (EWG) approved this proposal in Issaquah; in Rapperswil it was also approved by the Core Working Group (CWG) and voted into C++17.
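
For readers who have not used range-based for loops much, here is a small compilable sketch (plain C++11/14; the container and names are arbitrary) of the loop the new shorthand expands to. The terse form itself appears only in a comment, since it was only just voted in and shipping compilers may not implement it yet.

#include <iostream>
#include <vector>

int main() {
    std::vector<int> range{1, 2, 3};

    // Today's spelling: auto&& binds to each element without copying.
    for (auto&& elem : range) {
        std::cout << elem << '\n';
    }

    // The newly voted-in shorthand would let you drop the type entirely:
    //     for (elem : range) { ... }
    // and is defined to mean exactly the loop above.
}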

Other minor things voted into C++17 include:

  • static_assert(condition), i.e. with the message omitted. An implementation-defined message is displayed. (A combined sketch of this and the next two items appears after this list.)
  • auto var{expr}; is now valid and equivalent to T var{expr}; (where T is the deduced type)
  • A template template parameter can now be written as template <...> typename Name in addition to template <...> class Name, to mirror the way a type template parameter can be written as typename Name in addition to class Name
  • Trigraphs (an obscure feature that allowed certain characters, such as #, which are not present on some ancient keyboards, to be written as a three-character sequence, such as ??=) were removed from the language
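
To make the first three items above concrete, here is a combined sketch that should compile in C++17 mode; the function and variable names are invented for illustration.

#include <vector>

// A template template parameter spelled with 'typename' instead of 'class'.
template <template <typename...> typename Container>
Container<int> make_one(int value) {
    return Container<int>{value};
}

int main() {
    static_assert(sizeof(int) >= 2);    // static_assert with the message omitted

    auto x{42};                         // deduced as int, equivalent to int x{42};

    auto v = make_one<std::vector>(x);  // instantiates std::vector<int>
    return v.size() == 1 ? 0 : 1;
}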

Evolution Working Group (EWG)

As with previous meetings, I spent most of my time in the Evolution Working Group, which spends its time looking at proposals for new language features that either do not fall into the scope of any Study Group, or have already been approved by a Study Group. There was certainly no lack of proposals at this meeting; to EWG’s credit, it got through all of them, at least the ones which had papers in the official pre-Rapperswil mailing.

Incoming proposals were categorized into three rough categories:

  • Approved. The proposal is approved without design changes. It is sent on to CWG, which revises it at the wording level and then puts it in front of the committee at large to be voted into whatever IS or TS it is targeting.
  • Further Work. The proposal’s direction is promising, but it is either not fleshed out well enough, or there are specific concerns with one or more design points. The author is encouraged to come back with a modified proposal that is more fleshed out and/or addresses the stated concerns.
  • Rejected. The proposal is unlikely to be accepted even with design changes.

Accepted proposals:

  • Opening two nested namespaces with namespace A::B { in addition to namespace A { namespace B { (a sketch of this appears after this list)
  • “Making return explicit”. This means that if a class A has an explicit constructor which takes a parameter of type B, then, in a function whose return type is A, it is valid to write return b; where b has type B. (Currently, one has to write return A(b);.) The idea is to avoid repetition; a very common use case is A being std::unique_ptr<T> for some T, and B being T*. This proposal was relatively controversial; it passed with a weak consensus in EWG, and was also discussed in the Library Evolution Working Group (LEWG), where there was no consensus for it. I was surprised that EWG passed this to CWG, given the state of the consensus; in fact, CWG indicated that they would like additional discussion of it in a forum that includes both EWG and LEWG members, before looking at it in CWG.
  • A preprocessor feature for testing for the presence of a (C++11-style) attribute: __has_cpp_attribute(attribute_name)
  • A couple of proposals that I already mentioned above, because they were also passed by CWG and voted into C++17 at the same meeting.
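
Here is a small sketch of the first and third accepted items (the nested namespace syntax and the attribute-detection macro). The namespace, function, and macro names are invented, and the #ifdef guard is only there so the snippet still compiles on a preprocessor that does not provide __has_cpp_attribute.

// Nested namespace definition: one line instead of two nested blocks.
namespace app::net {
    inline int open_sockets() { return 0; }   // arbitrary example function
}

// Detect a C++11-style attribute; fall back to nothing if the macro is absent.
#ifdef __has_cpp_attribute
#  if __has_cpp_attribute(deprecated)
#    define EXAMPLE_DEPRECATED [[deprecated("use app::net::open_sockets")]]
#  endif
#endif
#ifndef EXAMPLE_DEPRECATED
#  define EXAMPLE_DEPRECATED
#endif

EXAMPLE_DEPRECATED int legacy_open_sockets() { return app::net::open_sockets(); }

int main() { return legacy_open_sockets(); }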

Proposals for which further work is encouraged:

  • A proposal to make C++ more friendly for embedded systems development by reducing the overhead of exception handling, and further expanding the expressiveness of constexpr. EWG encouraged the author to gather people interested in this topic and form a Study Group to explore it in more detail.
  • A proposal to convey information to the compiler about aliasing, via attributes. This is intended to be an improvement on C99’s restrict.
  • A way to get the compiler to, on an opt-in basis, generate equality and comparison operators for a structure/class. Everyone wanted this feature, but there were disagreements about how terse the syntax should be, whether complementary operators should be generated automatically (e.g. != based on ==), how exactly the compiler should implement the operators (particularly for comparison – questions of total order vs. weaker orders came up), and how mutable members should be handled.
  • A proposal for a per-platform portable C++ ABI. I will talk about this in more detail below.
  • A way to force the compiler to omit padding between two structure fields
  • A way to specify that a class should be converted to another type in auto initialization. That is, for a class C, to specify that in auto var = c; (with c having type C), the type of var should actually be some other type D. The motivating use here is expression templates; in Matrix X, Y; auto Z = X * Y; we want the type of Z to be Matrix even if the type of X * Y is some expression template type. EWG liked the motivation, but the proposal tried to modify the semantics of template parameter deduction for by-value parameters so as to remain consistent with auto, and EWG was concerned that this was starting to encroach on too many areas of the language. The author was encouraged to come back with a more limited-scope proposal that concerned auto initialization only.
  • Fixed-size template parameter packs (typename...[K]), and packs where all parameters must be of the same type (T...[N]). EWG liked the idea, but had some concerns about syntactic ambiguities. The proposal also generated renewed interest in the idea of subscripting parameter packs (e.g. Pack[0] gives you the first parameter), to avoid having to use recursion to iterate over the parameters in many cases.
  • Expanding parameter packs as expressions. Currently, if T is a parameter pack bound to parameters A, B, and C, then T... expands to A, B, C; this expansion is allowed in various contexts where a comma-separated list of things (types or expressions, as the parameters may be) is allowed. The proposal here is to allow things like T +... which would expand to A + B + C, which would be allowed in an expression context.
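
The last item above is essentially the idea that later shipped in C++17 as fold expressions, albeit with a slightly different spelling: `(args + ...)` rather than `T +...`. A compilable sketch of that later form, just to show the flavor:

#include <iostream>

// C++17 fold expression: (args + ...) expands to a + (b + c) for a pack a, b, c.
template <typename... Ts>
auto sum(Ts... args) {
    return (args + ...);
}

int main() {
    std::cout << sum(1, 2, 3) << '\n';    // prints 6
    std::cout << sum(1.5, 2.5) << '\n';   // prints 4
}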

Rejected proposals:

  • Objects of runtime size. This would have allowed a pure library implementation of something like std::dynarray (and allowed users to write similar classes of their own), but it unfortunately failed to gain consensus. More about this in the Array Extensions TS section.
  • Additional flow control mechanisms like break label;, continue label; and goto case value;. EWG thought these encouraged hard-to-follow control flow.
  • Allowing specifiers such as virtual, static, override, and some others, to apply to a group of members the way access specifiers (private: etc.) currently do. The basis for rejection here was that separating these specifiers from the members they apply to can make class definitions less readable.
  • Specializing an entity in a different namespace without closing the namespaces you are currently in. Rejected because it’s not clear what would be in scope inside the specialization (names from the entity’s namespace, the current namespace, or both).
  • <<< and >>> operators for circular bit-shifts. EWG felt these would be more appropriate as library functions.
  • A rather complicated proposal for annotating template parameter packs that claimed to be a generalization of the proposal for fixed-size template parameter packs. While no polls were taken on this proposal, the general feeling of the room was that it would make the language much more complicated, while the benefit would mostly be for template metaprogramming; also, several of the use cases can be satisfied with Concepts instead.
  • Throwing an exception on stack exhaustion. The implementers in the room felt this was not implementable.

I should also mention the proposal for named arguments that Ehsan and I have been working on. We did not prepare this proposal in time to get it into the pre-Rapperswil mailing, and as such, EWG did not look at it in Rapperswil. However, I did have some informal discussions with people about it. The main concerns were:

  • consideration for constructor calls with {...} syntax and, by extension, aggregate initialization
  • the relationship to C99 designated initializers (if we are covering aggregate initialization, then these can be viewed as competing syntaxes)
  • most significantly: parameter names becoming part of a library’s interface that library authors then have to be careful not to break

Assuming we are able to address these concerns, we will likely write an updated proposal, get it into the pre-Urbana mailing (Urbana-Champaign, Illinois is the location of the next Committee meeting in November), and present it at the Urbana meeting.

Portable C++ ABI

One of the most exciting proposals at this meeting, in my opinion, was Herb Sutter’s proposal for a per-platform portable C++ ABI.

A per-platform portable ABI means that, on a given platform (where “platform” is generally understood to mean the combination of an operating system, processor family, and bitness), binary components can be linked together even if they were compiled with different compilers, or different versions of the same compiler, or different compiler options. The current lack of this in C++ is, I think, one of C++’s greatest weaknesses compared to other languages like C or Java.

More specifically, there are two aspects to ABI portability: language and library. On the language side, portability means that binary components can be linked together as long as, for any interface between two components (for example, for a function that one component defines and the other calls, the interface would consist of the function’s declaration, and the definitions of any types used as parameters or return type), the two components are compiled from identical standard-conforming source code for that interface. On the library side, portability additionally means that interfaces between components can make use of standard library types (this does not follow solely from the language part, because different compilers may not have identical source code for their standard library types).
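
As a concrete (and entirely invented) illustration of what an “interface between two components” consists of on the language side, consider the following; in practice the declaration would live in a shared header, the definition in a component built with compiler A, and main() in a component built with compiler B, with everything shown in one file here for brevity.

struct Point {                        // definition of a type used in the interface
    double x;
    double y;
};

Point midpoint(Point a, Point b);     // the shared function declaration

// "Component A": the definition.
Point midpoint(Point a, Point b) {
    return Point{(a.x + b.x) / 2, (a.y + b.y) / 2};
}

// "Component B": a caller that only needs the declaration and the Point type.
int main() {
    Point m = midpoint(Point{0.0, 0.0}, Point{2.0, 4.0});
    return (m.x == 1.0 && m.y == 2.0) ? 0 : 1;
}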

It has long been established that it is out of scope for the C++ Standard to prescribe an ABI that vendors should use (among other reasons, because parts of an ABI are inherently platform-specific, and the standard cannot enumerate every platform and prescribe something for each one). Instead, Herb’s proposal is that the standard codify the notions of a platform and a platform owner (an organization/entity who controls the platform); require that platform owners document an ABI (in the case of the standard library, this means making available the source code of a standard library implementation) which is then considered the platform ABI; and require compiler and standard library vendors to support the platform ABI to be conformant on any given platform.

In order to ease transitioning from the current world where, on a given platform, the ABI can be highly dependent on the compiler, the compiler version, or even compiler options, Herb also proposes some mechanisms for delineating a portion of one’s code which should be ABI-portable, and therefore compiled using the platform ABI. These mechanisms are a new linkage (extern "abi") on the language side, and a new namespace (std::abi, containing the same members as std) on the library side. The idea is that one can restrict the use of these mechanisms to code that constitutes component interfaces, thereby achieving ABI portability without affecting other code.

This proposal was generally well-received, and certainly people agreed that a portable ABI is something C++ needs badly, but some people had concerns about the specific approach. In particular, implementers were uncomfortable with the idea of potentially having to support two different ABI’s side-by-side in the same program (the platform ABI for extern "abi" entities, and the existing ABI for other entities), and, especially, with having two copies of every library entity (one in std and one in std::abi). Other concerns about std::abi were raised as well, such as the performance penalty arising from having to convert between std and std::abi types in some places, and the duplication being difficult to teach. It seemed that a modified proposal that concerned the language only and dropped std::abi would have greater consensus.

Array Extensions TS

The Array Extensions TS was initially formed at the Chicago meeting (September 2013) when the committee decided to pull arrays of runtime bound (ARBs, the C++ version of C’s VLAs) and dynarray, the standard library class for encapsulating them, out of C++14 and into a TS. This was done mostly because people were concerned that dynarray required too much compiler magic to implement. People expressed a desire for a language feature that would allow them to implement a class like dynarray themselves, without any compiler magic.

In Issaquah a couple of proposals for such a language feature were presented, but they were relatively early-stage proposals, and had various issues such as having quirky syntax and not being sufficiently general. Nonetheless there was consensus that a library component is necessary, and we’d rather not have ARBs at all than get them without a library component to wrap them into a C++ interface.

At this meeting, a relatively fully fleshed-out proposal was presented that gave programmers a fairly general/flexible way to define classes of runtime size. Unfortunately, it was considered a complex change that touches many parts of the language, and there was no consensus for going forward with it.

As a result, the Array Extensions TS is completely stalled: ARBs themselves are ready, but we don’t want them without a library wrapper, and no proposal for a library wrapper (or mechanism that would enable one to be written) has consensus. This means that the status quo of not being able to use VLAs in C++ (unless a vendor enables C-style VLAs in C++ as an extension) will remain for now.

Library / Library Evolution Working Groups (LWG and LEWG)

Library work at this meeting included the Library Fundamentals TS (and its planned follow-up, Library Fundamentals II), the Filesystems TS and Networking TS (about which I’ll talk in the SG 3 and SG 4 sections below), and reviewing library components of other projects like the Concurrency TS.

The Library Fundamentals TS was in the wording review stage at this meeting, with no new proposals being added to it. It contains general library utilities such as optional, any, string_view, and more; see my Issaquah report for a full list. The current draft of the TS can be found here. At the end of the meeting, the Committee voted to send out the TS for its first national body ballot, the PDTS (Preliminary Draft TS) ballot. This ballot concludes before the Urbana meeting in November; if the comments can be addressed during that meeting and the TS sent out for its second and final ballot, the DTS (Draft TS) ballot, it could be published in early 2015.

The Committee is also planning a follow-up to the Library Fundamentals TS, called the Library Fundamentals TS II, which will contain general utilities that did not make it into the first one. Currently, it contains one proposal, a generalized callable negator; another proposal, containing library facilities for contract programming, got rejected for several reasons, one of them being that it is expected to be obsoleted in large part by reflection. Several more proposals are under consideration to be added.

Study Groups

SG 1 (Concurrency)

SG 1 focuses on two areas, concurrency and parallelism, and there is one TS in the works for each.

I don’t know much about the Parallelism TS other than that it’s in good shape and was sent out for its PDTS ballot at the end of the meeting, which could lead to publication in 2015.

The status of the Concurrency TS is less certain. Coming into the Rapperswil meeting, the Concurrency TS contained two things: improvements to std::future (notably a then() method for chaining a second operation to it), and executors and schedulers, with resumable functions being slated for addition.

However, the pre-Rapperswil mailing contained several papers arguing against the existing designs for executors and resumable functions, and proposing alternative designs instead. These papers led to executors and schedulers being removed from the Concurrency TS, and resumable functions not being added, until people come to a consensus regarding the alternative designs. I’m not sure whether publication of the Concurrency TS (which now contains only the std::future improvements) will proceed, leaving executors and resumable functions for a follow-up TS, or be stalled until consensus on the latter topics is reached.

For resumable functions, I was in the room during the technical discussion, and found it quite interesting. The alternative proposal is a coroutines library based on Boost.Coroutine. The two proposals differ both in syntax (new keywords async and await vs. the magic being hidden entirely behind a library interface), and implementation technique for storing the local variables of a resumable function (heap-allocated “activation frames” vs. side stacks). The feedback from SG 1 was to disentangle these two aspects, possibly yielding a proposal where either syntax could be matched with either implementation technique.

There are also other concurrency-related proposals before SG 1, such as ostream buffers, latches and barriers, shared_mutex, atomic operations on non-atomic data, and a synchronized wrapper. I assume these will go into either the current Concurrency TS, or a follow-up TS, depending on how they progress through the committee.

SG 2 (Modules)

Modules is, in my opinion, one of the most sorely needed features in C++. They have the potential to decrease compile times by an order of magnitude or more, thus bringing compile-time performance more in line with more modern languages, and to solve the combinatorial explosion problem, caused by macros, that hampers the development of powerful tooling such as automated refactoring.

The standardization of modules has been a slow process for two reasons. First, it’s an objectively difficult problem to solve. Second, the solution shares some of the implementation difficulties of the export keyword, a poorly thought-out feature in C++98 that sought to allow separate compilation of templates; export was only ever implemented by one compiler (EDG), and the implementation process revealed flaws that led not only to other compilers not even bothering to implement it, but also to the feature being removed from the language in C++11. This bad experience with export led people to be uncertain about whether modules are even implementable. As a result, while some papers have been written proposing designs for modules (notably, one by Daveed Vandevoorde a couple of years back, and one by Gabriel Dos Reis very recently), what everyone has really been holding their breath for was an implementation (of any variation/design), to see that one was possible.

Google and others have been working on such an implementation in Clang, and I was very excited to hear Chandler Carruth (head of Google’s team working on Clang) report that they have now completed it! As this work was completed very recently, just prior to the meeting, they did not get a chance to write a paper to present at this meeting, but Chandler said one will be forthcoming for the next meeting.

EWG held a session on Modules, where Gaby presented his paper, and the Clang folks discussed their implementation. There were definitely some differences between the two. Gaby’s proposal came across as more idealistic: this is what a module system should look like if you’re writing new code from scratch with modules. Clang’s implementation is more practical: this is how we can start using modules right away in existing codebases. For example, in Gaby’s proposal, a macro defined in a module is not visible to an importing module; in Clang’s implementation, it is, reflecting the reality that today’s codebases still use macros heavily. As another example, in Gaby’s proposal, a module writer must say which declarations in the module file are visible to importing modules by surrounding them in an export block (not to be confused with the failed language feature I talked about above); in Clang’s implementation, a header file can be used as a module without any changes, using an external “module map” file to tell the compiler it is a module. Another interesting design question that came up was whether private members of a class exported from a module are “visible” to an importing module (in the sense that importing modules need to be recompiled if such a private member is added or modified); in Clang’s implementation, this is the case, but there would certainly be value in avoiding this (among other things, it would obsolete the laborious “Pimpl” design pattern).
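
For readers who have never seen a module, the following sketch shows the basic shape of a module interface unit and an importer. The spelling is illustrative only, assumed for this example; it is not taken from Gaby's paper or from Clang's module-map approach, both of which differ in detail.

// math.cppm - a module interface unit.
export module math;

export int square(int x) { return x * x; }   // visible to importing code

int helper() { return 42; }                  // not exported: importers cannot see it

// main.cpp - an importing translation unit. There is no textual inclusion here;
// whether macros leak across the module boundary is one of the open design
// points discussed above.
import math;

int main() {
    return square(3) == 9 ? 0 : 1;
}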

The takeaway was that, while everyone wants this feature, and everyone is excited about there finally being an implementation, several design points still need to be decided. EWG deemed that it was too early to take any polls on this topic, but instead encouraged the two parties (the Clang folks, and Gaby, who works for Microsoft and hinted at a possible Microsoft implementation effort as well) to collaborate on future work. Specifically, EWG encourages that the following papers be written for Urbana: one about what is common to the various proposals, and one or more about the remaining deep technical issues. I eagerly await such future work and papers.

SG 3 (Filesystems)

At this meeting, the Library Working Group finished addressing the ballot comments for the Filesystem TS’s PDTS ballot, and sent out the TS for the final “DTS” ballot. If this ballot is successful, the Filesystems TS will be published by the end of 2014.

Beman (the SG 3 chair) stated that SG 3 will entertain new filesystem-related proposals that build upon the Filesystems TS, targeting a follow-up Filesystems TS II. To my knowledge no such proposals have been submitted so far.

SG 4 (Networking)

SG 4 had been working on standardizing basic building blocks related to networking, such as IP addresses and URIs. However, these efforts are stalled.

As a result, the LEWG decided at this meeting to take it upon itself to advance networking-related proposals, and they set their sights on something much more ambitious than IP addresses and URIs: a full-blown sockets library, based on Boost.ASIO. The plan is basically to pick up Chris Kohlhoff’s (the author of ASIO) 2007 paper proposing ASIO for standardization, incorporating changes to update the library for C++11, as well as C++14 (forthcoming). This idea received very widespread support in LEWG; the group decided to give people another meeting to digest the new direction, and then propose adopting these papers as the working paper for the Networking TS in Urbana.

This change in pace and direction might seem radical, but it’s in line with the committee’s philosophy for moving more rapidly with TS’s. Adopting the ASIO spec as the initial Networking TS working paper does not mean that the committee agrees with every design decision in ASIO; on the contrary, people are likely to propose numerous changes to it before it gets standardized. However, having a working paper will give people something to propose changes against, and thus facilitate progress.

SG 5 (Transactional Memory)

The Transactional Memory TS is progressing well through the committee. CWG began reviewing its wording at this meeting, and referred one design issue to EWG. (The issue concerned functions that were declared to be neither transaction-safe nor transaction-unsafe, and defined out of line (so the compiler cannot compute the transaction safety from the definition). The state of the proposal coming into the discussion was that for such functions, the compiler must assume that they can be either transaction-safe or transaction-unsafe; this resulted in the compiler sometimes needing to generate two versions of some functions, with the linker stripping out the unused version if you’re lucky. EWG preferred avoiding this, and instead assuming that such functions are transaction-unsafe.) CWG will continue reviewing the wording in Urbana, and hopefully send out the TS for its PDTS ballot then.

SG 6 (Numerics)

Did not meet in Rapperswil, but plans to meet in Urbana.

SG 7 (Reflection)

SG 7 met for an evening session and looked at three papers:

  • The latest version of the source code information capture proposal, which aims to replace the __LINE__, __FILE__, and __FUNCTION__ macros with a first-class language feature. There was a lot of enthusiasm for this idea in Issaquah, and now that it’s been written up as a paper, SG 7 is moving on it quickly, deciding to send it right on to LEWG with only minor changes. The publication vehicle – with possible choices being Library Fundamentals TS II, a hypothetical Reflection TS, or C++17 – will be decided by LEWG.
  • The type member property queries proposal by Andrew Tomazos. This is an evolution of an earlier proposal which concerned enumerations only, and which was favourably reviewed in Issaquah; the updated proposal extends the approach taken for enumerations, to all types. The result is already a quite featureful compile-time introspection facility, on top of which facilities such as serialization can be built. It does have one significant limitation: it relies on forming pointers to members, and thus cannot be used to introspect members to which pointers cannot be formed – namely, references and bitfields. The author acknowledged this, and pointed out that supporting such members with the current approach would require language changes. SG 7 did not deem this a deal-breaker problem, possibly out of optimism that such language changes would be forthcoming if this facility created a demand for them. Overall, the general direction of the proposal had basically unanimous support, and the author was encouraged to come back with a revised proposal that splits out the included compile-time string facility (used to represent names of members for introspection) into a separate, non-reflection-specific proposal, possibly targeted at Library Fundamentals II. The question of extending this facility to introspect things other than types (notably, namespaces, although there was some opposition to being able to introspect namespaces) also came up; the consensus here was that such extensions can be proposed separately when desired.
  • A more comprehensive static reflection proposal was looked at very, very briefly (the author was not present to speak about it in detail). This was a higher-level and more featureful proposal than Tomazos’ one; the prevailing opinion was that it is best to standardize something lower-level like Tomazos’ proposal first, and then consider standardizing higher-level libraries that build on it if appropriate.

SG 8 (Concepts)

The Concepts TS (formerly called “Concepts Lite”, but then people thought “Lite” was too informal to be in the title of a published standard) is still in the CWG review stage. Even though CWG looked at it in Issaquah, and the author and project editor, Andrew Sutton, revised the draft TS significantly for Rapperswil, the feature touches many areas of the language, and as such more review of the wording was required; in fact, CWG spent almost three full days looking at it this time.

The purpose of a CWG review of a new language feature is twofold: first, to make sure the feature meshes well with all areas of the language, including interactions that the author and EWG may have glossed over; and second, to make sure that the wording reflects the author’s intention accurately. In fulfilling the first objective, CWG often ends up making minor changes to a feature, while staying away from making fundamental changes to the design (sometimes, recommendations for more significant changes do come up during a CWG review – these are run by EWG before being acted on).

In the case of the Concepts TS, CWG made numerous minor changes over the course of the review. It was initially hoped that there would be time to revise the wording to reflect these changes, and put the revised wording out for a PDTS ballot by the end of the meeting, but the changes were too numerous to make this feasible. Therefore, the PDTS ballot proposal was postponed until Urbana, and Andrew has until then to implement the wording changes.

SG 9 (Ranges)

SG 9 did not meet in Rapperswil, but does plan to meet in Urbana, and I anticipate some exciting developments in Urbana.

First, I learned that Eric Niebler, who in Issaquah talked about an idea for a Ranges proposal that I thought was very exciting (I describe it in my Issaquah report), plans to write up his idea as a proposal and present it in Urbana.

Second, one of the attendees at Rapperswil, Fabio Fracassi, told me that he is also working on a (different) Ranges proposal that he plans to present in Urbana as well. I’m not familiar with his proposal, but I look forward to it. Competition is always healthy when it comes to early-stage standards proposals and choosing an approach to solving a problem.

SG 10 (Feature Test)

I didn’t follow the work of SG 10 very closely. I assume that, in addition to the __has_cpp_attribute() preprocessor feature that I mentioned above in the EWG section, they are kept busy by the flow of new features being added into working papers, for each of which they have to decide whether the feature deserves a feature-test macro, and if so standardize a name for one.

Clark (the SG 10 chair) did mention that the existence of TS’s complicates matters for SG 10, but I suppose that’s a small price to pay for the benefits of TS’s.

SG 12 (Undefined Behaviour)

Did not meet in Rapperswil, but plans to meet in Urbana.

SG 13 (Human Interaction, formerly “Graphics”)

SG 13 met for a quarter-day session, during which Herb presented an updated version of the proposal for a cairo-based 2D drawing API. A few interesting points came up in the discussion:

  • The interface being standardized is a C++ wrapper interface around cairo that was generated using a set of mechanical transformation rules applied to cairo’s interface. The transformation rules are not being standardized, only their result (so the standard interface can potentially diverge from cairo in the future, though presumably this wouldn’t be done without a very good reason).
  • I noted that Mozilla is moving away from cairo, in part due to inefficiencies caused by cairo being a stateful API (as explained here). It was pointed out that this inefficiency is an issue of implementation (due to cairo using a stateless layer internally), not one of interface. This is a good point, although I’m not sure how much it matters in practice, as standard library vendors are much more likely to ship cairo’s implementation than write their own. (Jonathan Wakely said so about libstdc++, but I think it’s very likely the case for other vendors as well.)
  • Regarding the tradeoff between a nice interface and high performance, Herb said the goal was to provide a nice interface while providing as good a performance as we can get, without necessarily squeezing every last ounce of performance.
  • The library has the necessary extension points in place to allow for uses such as hooking into drawing onto a drawing surface of an existing library, such as a Qt canvas (with the cooperation of the existing library, cairo, and the platform, of course).

The proposal is moving forward: the authors are encouraged to come back with wording.

TS Content Guidelines

One mildly controversial issue that came to a vote in the plenary meeting at the end of the week is the treatment of modifications/extensions to standard library types in a Technical Specification. One group held that the simplest thing to do for users is to have the TS specify modifications to the types in std:: themselves. Another group was of the view that, in order to make life easier for a third-party library vendor to implement a TS, as well as to ensure that it remains practical for a subsequent IS to break the TS if it needs to, the types that are modified should be cloned into an std::experimental:: namespace, and the modifications applied there. This second view prevailed.

Next Meeting

The next Committee meeting (“Urbana”) will be at the University of Illinois at Urbana-Champaign, the week of November 3rd.

Conclusion

The highlights of the meeting for me, personally, were:

  • The revelation that Clang has completed their modules implementation, that they will be pushing it for C++17, and that they are fairly confident that they will be able to get it in. The adoption of a proper modules system has the potential to revolutionize compilation speeds and the tooling landscape – revolutions that C++ needs badly.
  • Herb’s proposal for a portable C++ ABI. It is very encouraging to see the committee, which has long held this issue to be out of its scope, looking at a concrete proposal for solving a problem which, in my opinion, plays a significant role in hampering the use of C++ interfaces in libraries.
  • LEWG looking at bringing the entire Boost.ASIO proposal into the Networking TS. This dramatically brings forward the expected timeframe of having a standard sockets library, compared to the previous approach of standardizing first URIs and IP addresses, and then who knows what before finally getting to sockets.

I eagerly await further developments on these fronts and others, and continue to be very excited about the progress of C++ standardization.


Ryan KellyAn Experiment in Improving Compressibility

Henrik SkupinFirefox Automation report – week 25/26 2014

In this post you can find an overview about the work happened in the Firefox Automation team during week 25 and 26.

Highlights

June the 11th was actually the last Automation Training day for our team in Q3. You can read about the results here. We will implement some changes for the next quarter, when we most likely want to host two of them.

Henrik finally got the time to upgrade our Mozmill-CI systems to the latest LTS version of Jenkins. A few changes were necessary, but in general everything went fine this time, and we can see some great improvements. Especially the long delays when sending out job results seem to be gone.

Further, Henrik investigated the slow behavior of the mozmill-ci production master when it is under load, e.g. when QA runs on-demand update tests for releases of Firefox. The main problem lies with Java, which takes up about 100% of the CPU. Because of this the integrated web server cannot serve pages in a timely manner. Adding a 2nd CPU to this node gave us much better response times.

Given that the new version of Ubuntu came out already in April, we want to have our Mozmill tests also run on that platform version. So we got a new VM spun up by IT, which we now have to puppetize and bring online. But this may still take a bit, given the remaining blockers for using PuppetAgain.

While talking about Puppet we got the next big change reviewed and landed. With bug 1021230 we now have our own user account, which can be customized to our needs. And that’s what we totally need, given that our infrastructure is so different from the Releng one.

Also for TPS we made progress, and the new TPS-CI production machine came online. It cannot yet replace the current CI due to a fair number of open blockers, but hopefully by the end of July we should be able to flip the switch.

Individual Updates

For more granular updates of each individual team member please visit our weekly team etherpad for week 25 and week 26.

Meeting Details

If you are interested in further details and discussions you might also want to have a look at the meeting agenda, the video recording, and notes from the Firefox Automation meetings of week 25 and week 26.

David Rajchenbach TellerThe Battle of Session Restore – Season 1 Episode 3 – All With Measure

Plot: For the second time, our heroes prepared for battle. The startup of Firefox was too slow and Session Restore was one of the battle fields.

When Firefox starts, Session Restore is in charge of restoring the browser to its previous state, in case of a crash, a restart, or for the users who have configured Firefox to resume from its previous state. This entails numerous activities during startup:

  1. read sessionstore.js from disk, decode it and parse it (recall that the file is potentially several Mb large), handling errors;
  2. back up sessionstore.js in case of a startup crash;
  3. create windows, tabs, frames;
  4. populate history, scroll position, forms, session cookies, session storage, etc.

It is common wisdom that Session Restore must have a large impact on Firefox startup. But before we could minimize this impact, we needed to measure it.

Benchmarking is not easy

When we first set foot on Session Restore territory, the contribution of that module to startup duration was uncharted. This was unsurprising, as this aspect of the Firefox performance effort was still quite young. To this day, we have not finished charting startup, or even Session Restore’s startup.

So how do we measure the impact of Session Restore on startup?

A first tool we use is Timeline Events, which let us determine how long it takes to reach a specific point of startup. Session Restore has had events `sessionRestoreInitialized` and `sessionRestored` for years. Unfortunately, these events did not tell us much about Session Restore itself.

The first serious attempt at measuring the impact of Session Restore on startup Performance was actually not due to the Performance team but rather to the metrics team. Indeed, data obtained through Firefox Health Report participants indicated that something wrong had happened.

Oops, something is going wrong

Indicator `d2` in the graph measures the duration between `firstPaint` (which is the instant at which we start displaying content in our windows) and `sessionRestored` (which is the instant at which we are satisfied that Session Restore has opened its first tab). While this measure is imperfect, the dip was worrying – indeed, it represented startups that lasted several seconds longer than usual.

Upon further investigation, we concluded that the performance regression was indeed due to Session Restore. While we had not planned to start optimizing the startup component of Session Restore, this battle was forced upon us. We had to recover from that regression and we had to start monitoring startup much better.

A second tool is Telemetry Histograms for measuring duration of individual operations, such as reading sessionstore.js or parsing it. We progressively added measures for most of the operations of Session Restore. While these measures are quite helpful, they are also unfortunately very unstable in real-world conditions, as they are affected both by scheduling (the operations are asynchronous), by the work load of the machine, by the actual contents of sessionstore.js, etc.

The following graph displays the average duration of reading and decoding sessionstore.js among Telemetry participants.

Differences in color represent successive versions of Firefox. As we can see, this graph is quite noisy, certainly due to the factors mentioned above (the spikes don’t correspond to any meaningful change in Firefox or Session Restore). Also, we can see a considerable increase in the duration of the read operation. This was quite surprising for us, given that this increase corresponds to the introduction of a much faster, off the main thread, reading and decoding primitive. At the time, we were stymied by this change, which did not correspond to our experience. We have now concluded that by changing the asynchronous operation used to read the file, we have simply changed the scheduling, which makes the operation appear longer, while in practice it simply does not block the rest of the startup from taking place on another thread.

One major tool was missing from our arsenal: a stable benchmark, always executed on the same machine, with the same contents of sessionstore.js, and that would let us determine more exactly (almost daily, actually) the impact of our patches upon Session Restore.

This test, based on our Talos benchmark suite, has proved both to be very stable, and to react quickly to patches that affected its performance. It measures the duration between the instant at which we start initializing Session Restore (a new event `sessionRestoreInit`) and the instant at which we start displaying the results (event `sessionRestored`).

With these measures at hand, we are now in a much better position to detect performance regressions (or improvements) to Session Restore startup, and to start actually working on optimizing it – we are now preparing to use this suite to experiment with “what if” situations to determine which levers would be most useful for such an optimization work.

Evolution of startup duration

Our first benchmark measures the time elapsed between start and stop of Session Restore if the user has requested all windows to be reopened automatically.

As we can see, the performance on Linux 32 bits, Windows XP and Mac OS 10.6 is rather decreasing, while the performance on Linux 64 bits, Windows 7 and 8 and MacOS 10.8 is improving. Since the algorithm used by Session Restore upon startup is exactly the same for all platforms, and since “modern” platforms are speeding up while “old” platforms are slowing down, this suggests that the performance changes are not due to changes inside Session Restore. The origin of these changes is unclear. I suspect the influence of newer versions of the compilers or some of the external libraries we use, or perhaps new and improved (for some platforms) gfx.

Still, seeing the modern platforms speed up is good news. As of Firefox 31, any change we make that causes a slowdown of Session Restore will cause an immediate alert so that we can react immediately.

Our second benchmark measures the time elapsed if the user does not wish windows to be reopened automatically. We still need to read and parse sessionstore.js to find whether it is valid, so as to decide whether we can show the “Restore” button on about:home.

We see peaks in Firefox 27 and Firefox 28, as well as a slight decrease of performance on Windows XP and Linux. Again, in the future, we will be able to react better to such regressions.

The influence of factors upon startup

With the help of our benchmarks, we were able to run “what if” scenarios to find out which of the data manipulated by Session Restore contributed to startup duration. We did this both in a setting in which we restore windows and in a setting in which we do not.

Interestingly, increasing the size of sessionstore.js has apparently no influence on startup duration. Therefore, we do not need to optimize reading and parsing sessionstore.js. Similarly, optimizing history, cookies or form data would not gain us anything.

The single largest most expensive piece of data is the set of open windows – interestingly, this is the case even when we do not restore windows. More precisely, any optimization should target, by order of priority:

  1. the cost of opening/restoring windows;
  2. the cost of opening/restoring tabs;
  3. the cost of dealing with windows data, even when we do not restore them.

What’s next?

Now that we have information on which parts of Session Restore startup need to be optimized, the next step is to actually optimize them. Stay tuned!


Nick CameronRust for C++ programmers - part 9: destructuring pt2 - match and borrowing

(Continuing from part 8, destructuring).

(Note this was significantly edited as I was wildly wrong the first time around. Thanks to /u/dpx-infinity for pointing out my mistakes.)

When destructuring there are some surprises in store where borrowing is concerned. Hopefully, nothing surprising once you understand borrowed references really well, but worth discussing (it took me a while to figure out, that's for sure. Longer than I realised, in fact, since I screwed up the first version of this blog post).

Imagine you have some `&Enum` variable `x` (where `Enum` is some enum type). You have two choices: you can match `*x` and list all the variants (`Variant1 => ...`, etc.) or you can match `x` and list reference-to-variant patterns (`&Variant1 => ...`, etc.). (As a matter of style, prefer the first form where possible since there is less syntactic noise). `x` is a borrowed reference and there are strict rules for how a borrowed reference can be dereferenced; these interact with match expressions in surprising ways (at least surprising to me), especially when you are modifying an existing enum in a seemingly innocuous way and then the compiler explodes on a match somewhere.

Before we get into the details of the match expression, let's recap Rust's rules for value passing. In C++, when assigning a value into a variable or passing it to a function there are two choices - pass-by-value and pass-by-reference. The former is the default case and means a value is copied either using a copy constructor or a bitwise copy. If you annotate the destination of the parameter pass or assignment with `&`, then the value is passed by reference - only a pointer to the value is copied and when you operate on the new variable, you are also operating on the old value.

Rust has the pass-by-reference option, although in Rust the source as well as the destination must be annotated with `&`. For pass-by-value in Rust, there are two further choices - copy or move. A copy is the same as C++'s semantics (except that there are no copy constructors in Rust). A move copies the value but destroys the old value - Rust's type system ensures you can no longer access the old value. As examples, `int` has copy semantics and `Box<int>` has move semantics:

fn foo() {
    let x = 7i;
    let y = x;                // x is copied
    println!("x is {}", x);   // OK

    let x = box 7i;
    let y = x;                // x is moved
    //println!("x is {}", x); // error: use of moved value: `x`
}

Rust determines if an object has move or copy semantics by looking for destructors. Destructors probably need a post of their own, but for now, an object in Rust has a destructor if it implements the `Drop` trait. Just like C++, the destructor is executed just before an object is destroyed. If an object has a destructor then it has move semantics. If it does not, then all of its fields are examined and if any of those do then the whole object has move semantics. And so on down the object structure. If no destructors are found anywhere in an object, then it has copy semantics.

Now, it is important that a borrowed object is not moved, otherwise you would have a reference to the old object which is no longer valid. This is equivalent to holding a reference to an object which has been destroyed after going out of scope - it is a kind of dangling pointer. If you have a pointer to an object, there could be other references to it. So if an object has move semantics and you have a pointer to it, it is unsafe to dereference that pointer. (If the object has copy semantics, dereferencing creates a copy and the old object will still exist, so other references will be fine).

OK, back to match expressions. As I said earlier, if you want to match some `x` with type `&T` you can dereference once in the match clause or match the reference in every arm of the match expression. Example:

enum Enum1 {
    Var1,
    Var2,
    Var3
}

fn foo(x: &Enum1) {
    match *x {  // Option 1: deref here.
        Var1 => {}
        Var2 => {}
        Var3 => {}
    }

    match x {
        // Option 2: 'deref' in every arm.
        &Var1 => {}
        &Var2 => {}
        &Var3 => {}
    }
}

In this case you can take either approach because `Enum1` has copy semantics. Let's take a closer look at each approach: in the first approach we dereference `x` to a temporary variable with type `Enum1` (which copies the value in `x`) and then do a match against the three variants of `Enum1`. This is a 'one level' match because we don't go deep into the value's type. In the second approach there is no dereferencing. We match a value with type `&Enum1` against a reference to each variant. This match goes two levels deep - it matches the type (always a reference) and looks inside the type to match the referred type (which is `Enum1`).

Either way, we (that is, the compiler) must ensure we respect Rust's invariants around moves and references - we must not move any part of an object while it is referenced. If the value being matched has copy semantics, that is trivial. If it has move semantics, then we must make sure that moves don't happen in any match arm. This is accomplished either by ignoring data which would move, or by taking references to it (so we get pass-by-reference rather than pass-by-move).

enum Enum2 {
    // Box has a destructor so Enum2 has move semantics.
    Var1(Box<int>),
    Var2,
    Var3
}

fn foo(x: &Enum2) {
    match *x {
        // We're ignoring nested data, so this is OK
        Var1(..) => {}
        // No change to the other arms.
        Var2 => {}
        Var3 => {}
    }

    match x {
        // We're ignoring nested data, so this is OK
        &Var1(..) => {}
        // No change to the other arms.
        &Var2 => {}
        &Var3 => {}
    }
}

In either approach we don't refer to any of the nested data, so none of it is moved. In the first approach, even though `x` is referenced, we don't touch its innards in the scope of the dereference (i.e., the match expression) so nothing can escape. We also don't bind the whole value (i.e., bind `*x` to a variable), so we can't move the whole object either.

We can take a reference to any variant in the second match, but not in the dereferenced version. So, in the second approach replacing the second arm with `a @ &Var2 => {}` is OK (`a` is a reference), but under the first approach we couldn't write `a @ Var2 => {}` since that would mean moving `*x` into `a`. We could write `ref a @ Var2 => {}` (in which `a` is also a reference), although it's not a construct you see very often.
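
Here is a small sketch of those two binding forms using `Enum2` (the function name `binding_demo` is mine); per the rules just described, both arms shown are accepted:

fn binding_demo(x: &Enum2) {
    match x {
        // OK: `a` is just another copy of the borrowed reference (type &Enum2).
        a @ &Var2 => {}
        _ => {}
    }

    match *x {
        // Writing `a @ Var2` here would try to move `*x` into `a` and be
        // rejected; a `ref` binding borrows instead of moving.
        ref a @ Var2 => {}
        _ => {}
    }
}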

But what about if we want to use the data nested inside `Var1`? We can't write:

    match *x {
        Var1(y) => {}
        _ => {}
    }

or

    match x {
        &Var1(y) => {}
        _ => {}
    }

because in both cases it means moving part of `x` into `y`. We can use the `ref` keyword to get a reference to the data in `Var1`: `&Var1(ref y) => {}`. That is OK, because now we are not dereferencing anywhere and thus not moving any part of `x`. Instead we are creating a pointer which points into the interior of `x`.
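
For example, a sketch using `Enum2` from above (the function name is mine); `y` ends up with type `&Box<int>`:

fn read_nested(x: &Enum2) {
    match x {
        // `y` is a pointer into the interior of `x`; nothing is moved.
        &Var1(ref y) => { println!("contents: {}", **y); }
        _ => {}
    }
}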

Alternatively, we could destructure the Box (this match is going three levels deep): `&Var1(box y) => {}`. This is OK because `int` has copy semantics and `y` is a copy of the `int` inside the `Box` inside `Var1` (which is 'inside' a borrowed reference). Since `int` has copy semantics, we don't need to move any part of `x`. We could also create a reference to the int rather than copy it: `&Var1(box ref y) => {}`. Again, this is OK, because we don't do any dereferencing and thus don't need to move any part of `x`. If the contents of the Box had move semantics, then we could not write `&Var1(box y) => {}`, we would be forced to use the reference version. We could also use similar techniques with the first approach to matching, which look the same but without the first `&`. For example, `Var1(box ref y) => {}`.
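
Putting those variations together, a sketch (the function name is mine) of the box-destructuring forms described above:

fn unbox_demo(x: &Enum2) {
    match x {
        // Three levels deep: reference, variant, box. `y` is a copy of the
        // int inside the Box, so no part of `x` is moved.
        &Var1(box y) => { println!("copied: {}", y); }
        _ => {}
    }

    match *x {
        // The same idea with the dereferencing approach, this time binding
        // by reference: `y` has type `&int`.
        Var1(box ref y) => { println!("borrowed: {}", *y); }
        _ => {}
    }
}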

Now let's get more complex. Let's say you want to match against a pair of reference-to-enum values. Now we can't use the first approach at all:

fn bar(x: &Enum2, y: &Enum2) {
    // Error: x and y are being moved.
    // match (*x, *y) {
    //     (Var2, _) => {}
    //     _ => {}
    // }

    // OK.
    match (x, y) {
        (&Var2, _) => {}
        _ => {}
    }
}

The first approach is illegal because the value being matched is created by dereferencing `x` and `y` and then moving them both into a new tuple object. So in this circumstance, only the second approach works. And of course, you still have to follow the rules above for avoiding moving parts of `x` and `y`.

If you do end up only being able to get a reference to some data and you need the value itself, you have no option except to copy that data. Usually that means using `clone()`. If the data doesn't implement `Clone`, you're going to have to destructure further to make a manual copy, or implement `Clone` yourself.
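
As a sketch (assuming `Enum2` from above; the function name is mine), cloning lets you return an owned value from a match that only ever sees a reference:

fn get_owned(x: &Enum2) -> Box<int> {
    match *x {
        // `y` is only a reference to the Box, so clone it to get an owned value.
        Var1(ref y) => y.clone(),
        _ => box 0i
    }
}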

What if we don't have a reference to a value with move semantics, but the value itself? Now moves are OK, because we know no one else has a reference to the value (the compiler ensures that if they do, we can't use the value). For example:

fn baz(x: Enum2) {
    match x {
        Var1(y) => {}
        _ => {}
    }
}

There are still a few things to be aware of. Firstly, you can only move to one place. In the above example we are moving part of `x` into `y` and we'll forget about the rest. If we wrote `a @ Var1(y) => {}` we would be attempting to move all of `x` into `a` and part of `x` into `y`. That is not allowed; an arm like that is illegal. Making one of `a` or `y` a reference (using `ref a`, etc.) is not an option either, since then we'd have the problem described above where we move whilst holding a reference. We can make both `a` and `y` references and then we're OK - neither is moving, so `x` remains intact and we have pointers to the whole and to a part of it.
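
A sketch of those combinations (the function name is mine), following the rules described above; the both-references arm is the form the paragraph above describes as allowed:

fn consume(x: Enum2) {
    match x {
        // Both bindings are references, so nothing moves: `a` points at the
        // whole value and `y` at the Box nested inside it.
        ref a @ Var1(ref y) => {}
        _ => {}
    }

    // Nothing was moved above, so we can still match by value and move the
    // nested Box out. An arm like `a @ Var1(y)` would be rejected here, since
    // it tries to move the whole of `x` and part of `x` at the same time.
    match x {
        Var1(y) => {}
        _ => {}
    }
}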

Similarly (and more commonly), if we have a variant with multiple pieces of nested data, we can't take a reference to one datum and move another. For example, if we had a `Var4` declared as `Var4(Box<int>, Box<int>)`, we can have a match arm which references both (`Var4(ref y, ref z) => {}`) or a match arm which moves both (`Var4(y, z) => {}`), but we cannot have a match arm which moves one and references the other (`Var4(ref y, z) => {}`). This is because a partial move still destroys the whole object, so the reference would be invalid.
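
As a sketch of that rule (the enum `Enum3` and the function name are mine, in the same style as the examples above):

enum Enum3 {
    // Both fields have move semantics.
    Var4(Box<int>, Box<int>)
}

fn pairs(x: Enum3) {
    match x {
        // OK: reference both pieces; nothing is moved.
        Var4(ref y, ref z) => {}
    }

    match x {
        // Also OK: move both pieces (this consumes x).
        Var4(y, z) => {}
    }

    // A mixed arm such as `Var4(ref y, z) => {}` is rejected, because moving
    // `z` would destroy the whole value that `y` still points into.
}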

Marco ZeheQuick tip: Add someone to circles on Google Plus using a screen reader

In my “WAI-ARIA for screen reader users” post in early May, I was asked by Donna to talk a bit about Google Plus. In particular, she asked how to add someone to circles. Google Plus has learned a thing or two about screen reader accessibility recently, but the fact that there is still no official documentation on the Google Accessibility entry page suggests that people inside Google are not yet satisfied with the quality of Google Plus accessibility, or are not placing a high enough priority on it. That quality, however, has improved, so adding someone to one or more circles using a screen reader is not that difficult any more.

Note that I tested the below steps with Firefox 31 Beta (out July 22) and NVDA 2014.2. Other screen reader/browser combos may vary in the way they output stuff or switch between their virtual cursor and focus/forms modes.

Here are the steps:

  1. Log into Google Plus. If you already have a profile, just go ahead and find someone. If not, create a profile and add people.
  2. The easiest way to find people is to go to the People tab. Note that these currently have no “selected” state yet, but they do have the word “active” as part of the link text.
  3. Once you found someone in the list of suggestions, find the “Add to circles” menu button, and press the Space Bar. Note that it is very important that you press Space here, not Enter!
  4. NVDA now automatically switches to focus mode. What happened is that a popup menu opened that has a list of current circles, and an item at the bottom that allows you to create a new circle on the fly. The circles themselves are checkable menu items. Use the up and down arrows to select a circle, for example Friends or Acquaintances, and press the Space Bar to add the person. The number of people in that circle will dynamically increase by one, and the state will be changing to “checked”. Likewise, if you want to remove a person from a particular circle, press Space Bar just the same. These all act like regular check boxes, and the menu stays active so you can shuffle that person around your circles as you please.
  5. At the bottom, there is a non-checkable menu item called “Add new circle”. Here, you have to press Enter. If you do this, a panel opens inside the menu, and focus lands on a text field where you can enter the name of a new circle, for example Web Developers. Press Tab to reach the Create Circle button and press Space Bar. The new circle will be added, the person you’re adding to circles will automatically be added to that circle, and you’re back in the menu of circle checkboxes.
  6. Once you’re done, press Escape twice. The first will end NVDA’s focus mode, the second will close the Add to Circles menu. Focus will land back on the button for that person, but the label will change to the name of a single circle, if you added the person to only one circle, or the label “x Circles”, where x is the number of circles you just put that person into.

The above steps also work on the menu button that you find if you opened the profile page of an individual person, not just in the list of suggested people, or any other list of people. The interaction is exactly the same.

Hope this helps you get around in Google Plus a bit more efficiently!

Kent JamesFollowing Wikipedia, Thunderbird Could Raise $1,600,000 in annual donations

What will it take to keep Thunderbird stable and vibrant? Although there is a dedicated, hard-working team of volunteers trying hard to keep Thunderbird alive, there has been very little progress on improvements since Mozilla drastically reduced their funding. I’ve been an advocate for some time that Thunderbird needs income to fulfill its potential, and that the best way to generate that income would be to appeal directly to its users for donations.

One internet organization that has done this successfully has been Wikipedia. How much income could Thunderbird generate if they received the same income per user as Wikipedia? Surely our users, who rely on Thunderbird for critical daily communications, are at least as willing to donate as Wikipedia users.

Estimates of income from Wikipedia’s annual fundraising drive to users are around $20,000,000 per year. Wikipedia recently reported 11,824 million pageviews per month and 5 pageviews per user per day, which works out to roughly 78 million daily users. Thunderbird, by contrast, has about 6 million daily users (measured by daily hits to its update checks), or about 8% of the daily users of Wikipedia.

If Thunderbird were willing to directly engage users asking for donations, at the same rate per user as Wikipedia, there is a potential to raise $1,600,000 per year. That would certainly be enough income to maintain a serious team to move forward.
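
A rough back-of-the-envelope version of that estimate, using the figures quoted above (approximate, and only as good as those inputs):

\[
\text{Wikipedia daily users} \approx \frac{11{,}824 \times 10^{6}\ \text{pageviews/month}}{30\ \text{days/month} \times 5\ \text{pageviews/user}} \approx 78 \times 10^{6},
\qquad
\frac{6 \times 10^{6}}{78 \times 10^{6}} \approx 8\%,
\qquad
0.08 \times \$20{,}000{,}000 \approx \$1{,}600{,}000
\]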

Wikipedia’s donation requests were fairly intrusive, with large banners at the top of all Wikipedia pages. When Firefox did a direct appeal to users early this year, the appeal was very subtle (did you even notice it?). I tried to scale the Firefox results to Thunderbird, and estimated that a similar subtle appeal might raise $50,000 – $100,000 per year for Thunderbird. That is not sufficient to make a significant impact. We would have to be willing to be a little intrusive, like Wikipedia, if we are going to be successful. This will generate pushback, as Wikipedia’s campaign has, so we would have to be willing to live with the pushback.

But is it really in the best interest of our users to spare them an annual, slightly intrusive appeal for donations, while letting the product that they depend on each day slowly wither away? I believe that if we truly care about our users, we will take the necessary steps to ensure that we give them the best product possible, including undertaking fundraising to keep the product stable and vibrant.

Kevin NgoMore Happiness for Your Buck

Disney is the happiest place on Earth, but also one of the most expensive. Still, it might be well worth the wallet hit.

With increasing assets, I have been thinking lately about what to purchase next: home purchasing, vacation planning, investment. You know, personal finances. Then I wonder how we spend in order to make ourselves happier. How can we use our money most efficiently to make ourselves happiest?

We have fine choices between 65" 3D plasma TVs, media-integrated BMWs and Audis, and Tudor-style houses on tree-lined avenues. Although we're all aware of the American Dream, and although we might even consciously scoff at it, is it really ingrained in our heads enough to affect our purchases? Despite being aware of materialism, we still spend on items such as Apple product upgrades or matching furniture sets. But compared to what we could potentially be allocating our money towards, are they really worth it? Buck by buck, there are happier things to spend money on and happier ways to spend it.

Experiences Trump Stuff

The happiness attained from a new toy is fleeting. When I buy a gadget, I get really excited about it for a couple weeks, and then it's just another item on the shelf. Once in freshman year, I dropped $70 on an HD camcorder. "Think about all the cool life experiences I could record!", I thought. After playing around with it for a bit, it got stowed away, just as Woody had when Buzz came to town. It wasn't the actual camcorder that I really wanted, it was thinking about the future experiences I could have.

Thinking back, the best things I have ever spent my money on were experiences. Trips around the world, places like the cultural streets of Beijing, the serenity of Oahu, or the cold isolation of Alaska. They bring back warm (or perhaps cold) memories and instill a rush of nostalgia. They brought about happiness in a way that those $100 beat-up sneakers or that now-stolen iPod touch never did.

It's changed my thoughts on getting a nice house or car. Why spend to be stuck at a mundane home or spend to be stuck in traffic (just in cushier seats)? I'd rather use the money saved from not splurging $400K on a house to see the world. Spend money to be with people, go to places, attend shows, try new things. You won't forget it.

Instant Gratification is a Drag

It's not only what we spend on that makes us happy, it's how we spend. When we spend in a way that brings instant gratification, such as an in-store purchase on credit of that new fridge or getting that candy bar now, it destroys the whole fun of the waiting game. Have you ever eagerly awaited a package for weeks? Thinking about all the future possibilities, all the things you can do, all the fun you will have once that package comes. We are happier when we await something eagerly in anticipation. It's about the journey and not the destination.

Just yesterday, I booked my flight and hotel to Florida to visit my girlfriend working at Disney. It's almost two months out. But every day, I'll be thinking about how much fun we'll have watching the Fantasmic fireworks, how relaxing it will be staying at a 1940s Atlantic-city themed Disney inn, all the delicious food at the Flying Fish. With the date marked on my calendar, it makes me happier every day just eagerly anticipating it.

When you spend on something now and defer the actual consumption or experience for later, you will be much more gratified. Try pre-ordering something you enjoy, planning trips months ahead, or purchasing online. By practicing patience, you'll probably even save a bit of cash.

Make It Scarce

Experiencing something too frequently makes it less of an experience. If you drink a frothy mocha cappuccino every day, you become more and more desensitized to its creamy joys. By making something scarce, by not buying or experiencing it too often, it becomes more of a treat. So if you're eating out for lunch every day at nice restaurants, you might want to think about only eating out once a week. Or only get expensive coffees on Fridays. It'll make those times you do go out that much more satisfying, and your wallet will thank you.

Time Trumps Money

Don't dwell too much on wasting your time to pinch some money. So Starbucks is giving out free 12oz coffees today? Free sounds enticing but is it really worth the gas, time in dreadful traffic, and waiting in line? View time as happiness. If you have more time, you can do more of the things you want to do. If you just feel like you have a lot of time, you feel much more free.

With that in mind, you should consider how purchases will affect your future time. Ask "will this really make me happier next week?". If you are contemplating a new TV, you might think it'll make you happier - having so many friends over to play FIFA in so-much-HD. But television doesn't make you happier or any less stressed; it's a numbing time-sink. Or think about when you are debating between two similar products, such as a Nexus 5 or an HTC One. Sure, when placed side by side, those extra megapixels and megahertz might seem like a huge advantage. But think about the product in isolation and see if it will really benefit your future time.

Give it Away

Warren Buffett pledged to give away 99% of his wealth, whether in his lifetime or posthumously. Giving away, paying it forward, being charitable makes people happy - even happier than if they had splurged on themselves.

Helping others in need makes it feel like you have a lot of extra free time to give away, and feeling like you have a lot of free time is less of a boulder on your back. So invest in others and invest in relationships. We're inherently social creatures, although sometimes our selfishness works against us. Donate to a charity where you know exactly where your money is going, or buy something nice for a family member or friend without pressure. It's money happily spent.

Mark SurmanHow do we get depth *and* scale?

We want millions of people learning about the web everyday with Mozilla. The ‘why’ is simple: web literacy is quickly becoming just as important as reading, writing and math. By 2024, there will be more than 5 billion people on the web. And, by then, the web will shape our everyday lives even more than it does today. Understanding how it works, how to build it and how to make it your own will be essential for nearly everyone.

Maker Party Uganda

The tougher question is ‘how’ — how do we teach the web with both the depth *and* scale that’s needed? Most people who tackle a big learning challenge pick one path or the other. For example, the educators in our Hive Learning Networks are focused on depth of learning. Everything they do is high touch, hands-on and focused on innovating so learning happens in a deep way. On the flip side, MOOCs have quickly shown what scale looks like, but they almost universally have high drop-out rates and limited learning impact for all but the most motivated learners. We rarely see depth and scale go together. Yet, as the web grows, we need both. Urgently.

I’m actually quite hopeful. I’m hopeful because the Mozilla community is deeply focused on tackling this challenge head on, with people rolling up their sleeves to help people learn by making and organizing themselves in new ways that could massively grow the number of people teaching the web. We’re seeing the seeds of both depth and scale emerge.

This snapped into focus for me at MozFest East Africa in Kampala a few days ago. Borrowing from the MozFest London model, the event showcased a variety of open tech efforts by Mozilla and others: FirefoxOS app development; open data tools from a local org called Mountabatten; Mozilla localization; Firefox Desktop engineering; the work of the Ugandan National Information Technology Agency. It also included a huge Maker Party, with 200 young Ugandans showing up to learn and hack with Webmaker tools.

Maker Party Uganda

The Maker Party itself was impressive — pulled off well despite rain and limited connectivity. But what was more impressive was seeing how the Mozilla community is stepping up to plant the seeds of teaching the web at depth and scale, which I’d call out as:

Mentors: IMHO, a key to depth is humans connecting face to face to learn. We’ve set up a Webmaker Mentors program in the last year to encourage this kind of learning. The question has been: will people step up to do this kind of teaching and mentoring, and do it well? MozFest EA was a promising start: 30 motivated mentors showed up prepared, enthusiastic and ready to help the 200 young people at the event learn the web.

Curriculum: one of the hard parts of scaling a volunteer-based mentor program is getting people to focus their teaching on the most important web literacy skills. We released a new collection of open source web literacy curriculum over the past couple of months designed to solve this problem. We weren’t sure how things would work out, but I’d say MozFestEA is early evidence that curriculum can do a good job of helping people quickly understand what and how to teach. Here, each of the mentors was confidently and articulately teaching a piece of the web literacy framework using Webmaker tools.

Making as learning: another challenge is getting people to teach / learn deeply based on written curriculum. Mozilla focuses on ‘making as learning’ as a way past this — putting hands-on, project-based learning at the heart of most of our Webmaker teaching kits. For example, the basic remix teaching kit gets learners quickly hacking and personalizing their favourite big brand web site, which almost always gets people excited and curious. More importantly: this ‘making as learning’ approach lets mentors adapt the experience to a learner’s interests and local context in real time. It was exciting to see the Ugandan mentors having students work on web pages focused on local school tasks and local music stars, which worked well in making the standard teaching kits come to life.

Clubs: mentors + curriculum + making can likely get us to our 2014 goal of 10,000 people around the world teaching web literacy with Mozilla. But the bigger question is: how do we keep the depth while scaling to a much bigger level? One answer is to create more ‘nodes’ in the Webmaker network and get them teaching all year round. At MozFest EA, there was a session on Webmaker Clubs — after school web literacy clubs run by students and teachers. This is an idea that floated up from the Mozilla community in Uganda and Canada. In Uganda, the clubs are starting to form. For me, this is exciting. Right now we have 30 contributors working on Webmaker in Uganda. If we opened up clubs in schools, we could imagine 100s or even 1000s. I think clubs like this are a key next step towards scale.

Community leadership: the thing that most impressed me at MozFestEA was the leadership from the community. San Emmanuel James and Lawrence Kisuuki have grown the Mozilla community in Uganda in a major way over the last couple of years. More importantly, they have invested in building more community leaders. As one example, they organized a Webmaker train-the-trainer event a few weeks before MozFestEA. The result was what I described above: confident mentors showing up ready to teach, including people other than San and Lawrence taking leadership within the Maker Party side of the event. I was impressed. This is key to both depth and scale: building more and better Mozilla community leaders around the world.

Of course, MozFestEA was just one event for one weekend. But, as I said, it gave me hope: it made me feel that the Mozilla community is taking the core building blocks of Webmaker and shaping them into something that could have a big impact.


With Maker Party kicking off this week, I suspect we’ll see more of this in coming months. We’ll see more people rolling up their sleeves to help people learn by making. And more people organizing themselves in new ways that could massively grow the number of people teaching the web. If we can make this happen this summer, much bigger things lie on the path ahead.


Filed under: education, mozilla, webmakers

Gregory SzorcUpdates to firefoxtree Mercurial extension

My Please Stop Using MQ post has been generating a lot of interest in bookmark-based workflows at Mozilla. To make adoption easier, I quickly authored an extension to add remote refs of Firefox repositories to Mercurial.

There was still a bit of confusion, and some gripes about workflows, so I thought it would be best to update the extension to make things more pleasant.

Automatic tree names

People wanted the ability to easily pull/aggregate the various Firefox trees without adding extra configuration to an hgrc file.

With firefoxtree, you can now hg pull central or hg pull inbound or hg pull aurora and it just works.

Pushing with aliases doesn't yet work. It is slightly harder to do in the Mercurial API. I have a solution, but I'm validating some code paths to ensure it is safe. This feature will likely appear soon.

fxheads command

Once people adopted unified repositories with heads from multiple repositories, they asked how they could quickly identify the heads of the pulled Firefox repositories.

firefoxtree now provides a hg fxheads command that prints a concise output of the commits constituting the heads of the Firefox repos. e.g.

$ hg fxheads
224969:0ec0b9ac39f0 aurora (sort of) bug 898554 - raise expected hazard count for b2g to 4 until they are fixed, a=bustage+hazbuild-only
224290:6befadcaa685 beta Tagging /src/mdauto/build/mozilla-beta 1772e55568e4 with FIREFOX_RELEASE_31_BASE a=release CLOSED TREE
224848:8e8f3ba64655 central Merge inbound to m-c a=merge
225035:ec7f2245280c fx-team fx-team/default Merge m-c to fx-team
224877:63c52b7ddc28 inbound Bug 1039197 - Always build js engine with zlib. r=luke
225044:1560f67f4f93 release release/default tip Automated checkin: version bump for firefox 31.0 release. DONTBUILD CLOSED TREE a=release

Please note that the output is based upon local-only knowledge.

Reject pushing multiple heads

People were complaining that bookmark-based workflows resulted in Mercurial trying to push multiple heads to a remote. This complaint stems from the fact that Mercurial's default push behavior is to find all commits missing from the remote and push them. This behavior is extremely frustrating for Firefox development because the Firefox repos only have a single head and pushing multiple heads will only result in a server hook rejecting the push (after wasting a lot of time transferring that commit data).

firefoxtree now will refuse to push multiple heads to a known Firefox repo before any commit data is sent. In other words, we fail fast so your time is saved.

firefoxtree also changes the default behavior of hg push when pushing to a Firefox repo. If no -r argument is specified, hg push to a Firefox repo will automatically remap to hg push -r .. In other words, we attempt to push the working copy's commit by default. This change establishes a sensible default and likely working behavior when typing just hg push.

Installing firefoxtree

Within the next 48 hours, mach mercurial-setup should prompt to install firefoxtree. Until then, clone https://hg.mozilla.org/hgcustom/version-control-tools and ensure your ~/hgrc file has the following:

[extensions]
firefoxtree = /path/to/version-control-tools/hgext/firefoxtree

You likely already have a copy of version-control-tools in ~/.mozbuild/version-control-tools.

It is completely safe to install firefoxtree globally: the extension will only modify behavior of repositories that are clones of Firefox repositories.

Pete MooreWeekly review 2014-07-16

Highlights

Last week was build duty, therefore there is much less to report this week. I think we’ll have plenty to talk about though (wink wink).

l10n vcs sync was done by aki, and I posted my responses, and am writing up a patch which I hope to land in the next 24 hours. That will be l10n done.

I’ve been busy triaging queues too, inviting people to meetings that I don’t attend myself, and cleaning up a lot of bugs (not just the triaging, but in general).

Today’s major incident was fallout from the panda train 3 move - finally resolved now (yay). Basically, devices.json was out-of-date on the foopies. Disappointingly, I thought to check devices.json, but did not consider that it would be out-of-date on the foopies, as I knew we’d been having lots of reconfigs every day. But for other reasons, the foopy updates were not performed (hanging ssh sessions when updating them) - so it took a while until this was discovered (by dustin!). In the meantime I had to disable and re-enable > 250 pandas.

Other than that, working ferociously on finishing off vcs sync.

I think I probably updated 200 bugs this week! Was quite a clean up.

Frédéric HarperCommunity Evangelist: Firefox OS developer outreach at MozCamp India


Copyright Ratnadeep Debnath
http://j.mp/1jIYxWb (click to enlarge)

At the end of June, I was in India to do a train-the-trainer session at MozCamp India. The purpose of the session Janet Swisher and I delivered (the first time we worked together, and I think we made a winning combo) was to help Mozillians become Community Evangelists. Our goal was to help them become part of our Technical Evangelist team: helping us inspire and enable developers in India to be successful with Firefox OS (we are starting with this technology because of the upcoming launch).

We could have filled a full day or more on developer outreach, but we only had three hours, in which we showed the attendees how they can contribute, ran a fun speaker idol, and worked on their project plans. Contribution can happen at many levels, like public speaking, helping developers build Firefox OS applications, answering questions on StackOverflow, and more.

Since we had parallel tracks during our session, we gave it twice to give attendees the chance to attend more than one track. For those who were there for the Saturday session, the following slides are the ones we used:

I also recorded the session for those of you that would like to refresh your memory:

For the session on Sunday, we fixed some slides and adapted our session to give us more time for the speaker idol as well as the project plan. Here are the slides:

If you were not there, I would suggest you follow the slides and the video from the second day, as it’s an improved version of the first one (not that the first one was not good, but it was the first time we gave this session).

From the feedback we got, it was a pretty good session, and we were happy to see the excitement of the Indian community about this community evangelist role. I can’t wait to see what the Mozilla community in India will do! If you too, Mozillian or not, have any interest in evangelizing the open web, you should join the Mozilla Evangelism mailing list.

 


--
Community Evangelist: Firefox OS developer outreach at MozCamp India is a post on Out of Comfort Zone from Frédéric Harper

Luis VillaDesigners and Creative Commons: Learning Through Wikipedia Redesigns

tl;dr: Wikipedia redesigns mostly ignore attribution of Wikipedia authors, and none approach the problem creatively. This probably says as much or more about Creative Commons as it does about the designers.

disclaimer-y thing: so far, this is for fun, not work; haven’t discussed it at the office and have no particular plans to. Yes, I have a weird idea of fun.

A mild refresh from interfacesketch.com.

It is no longer surprising when a new day brings a new redesign of Wikipedia. After seeing one this weekend with no licensing information, I started going back through seventeen of them (most of the ones listed on-wiki) to see how (if at all) they dealt with licensing, attribution, and history. Here’s a summary of what I found.

Completely missing

Perhaps not surprisingly, many designers completely remove attribution (i.e., history) and licensing information in their designs. Seven of the seventeen redesigns I surveyed were in this camp. Some of them were in response to a particular, non-licensing-related challenge, so it may not be fair to lump them into this camp, but good designers still deal with real design constraints, and licensing is one of them.

History survives – sometimes

The history link is important, because it is how we honor the people who wrote the article, and comply with our attribution obligations. Five of the seventeen redesigns lacked any licensing information, but at least kept a history link.

Several of this group included some legal information, such as links to the privacy policy, or in one case, to the Wikimedia Foundation trademark page. This suggests that our current licensing information may be presented in a worse way than some of our other legal information, since it seems to be getting cut out even by designers who are tolerant of some of our other legalese?

Same old, same old

Four of the seventeen designs keep the same old legalese, though one fails to comply by making it impossible to get to the attribution (history) page. Nothing wrong with keeping the existing language, but it could reflect a sad conclusion that licensing information isn’t worth the attention of designers; or (more generously) that they don’t understand the meaning/utility of the language, so it just gets cargo-culted around. (Credit to Hamza Erdoglu, who was the only mockup designer who specifically went out of his way to show the page footer in one of his mockups.)

A winner, sort of!

Of the seventeen sites I looked at, exactly one did something different: Wikiwand. It is pretty minimal, but it is something. The one thing: as part of the redesign, it adds a big header/splash image to the page, and then adds a new credit specifically for the author of the header/splash image down at the bottom of the page with the standard licensing information. Arguably it isn’t that creative, just complying with their obligations from adding a new image, but it’s at least a sign that not everyone is asleep at the wheel.

Observations

This is surely not a large or representative sample, so all my observations from this exercise should be taken with a grain of salt. (They’re also speculative since I haven’t talked to the designers.) That said, some thoughts besides the ones above:

  • Virtually all of the designers who wrote about why they did the redesign mentioned our public-edit-nature as one of their motivators. Given that, I expected history to be more frequently/consistently addressed. It is not clear whether this should be chalked up to designers not caring about attribution, or to the attribution role of history being very unclear to anyone who isn’t an expert. I suspect the latter.
  • It was evident that some of these designers had spent a great deal of time thinking about the site, and yet were unaware of licensing/attribution. This suggests that people who spend less time with the site (i.e., 99.9% of readers) are going to be even more ignorant.
  • None of the designers felt attribution and licensing was even important enough to experiment on or mention in their writeups. As I said above, this is understandable but sort of sad, and I wonder how to change it.

Postscript, added next morning:

I think it’s important to stress that I didn’t link to the individual sites here, because I don’t want to call out particular designers or focus on their failures/oversights. The important (and as I said, sad) thing to me is that designers are, historically, a culture concerned with licensing and attribution. If we can’t interest them in applying their design talents to our problem, in the context of the world’s most famously collaborative project, we (lawyers and other Commoners) need to look hard at what we’re doing, and how we can educate and engage designers to be on our side.

I should also add that the WMF design team has been a real pleasure to work with on this problem, and I look forward to doing more of it. Some stuff still hasn’t made it off the drawing board, but they’re engaged and interested in this challenge. Here is one example.

Byron Joneshappy bmo push day!

switching the default monospace font on bmo yesterday highlighted a few issues with the fira-mono typeface that we’d like to see addressed before we use it.  as a result comments are now displayed using their old font.

the following changes have been pushed to bugzilla.mozilla.org:

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla

Jess KleinThe first 6 weeks of Hive Labs

Six weeks ago, Atul Varma, Chris Lawrence, Kat Baybrooke and I embarked on an experiment we call Hive Labs. Let me tell you about it: here's a little slideshow I made about our first 6 weeks, to the tune of Josh Gad singing In Summer from the movie Frozen.





So, in summary (or if you aren't the musical slideshare type): the first 6 weeks have been great. We did a bunch of listening and research, including attending events and hackjams run by and for Hive members. Here's a neat worksheet from a Mouse-run Webmaker training in New York.

We did some research and design on tools and resources to support prototyping:




Sherpa is a codename for a tool that helps prototypers define a design opportunity and openly work through the process of designing a solution. We designed some mockups to see if this is a direction that we should pursue. Sherpa could be a back-end for the "Cupcake dashboard" or be a stand-alone tool. We spun up an instance of the "Cupcakes" dashboard designed by the Firefox UX team to help figure out if it is a useful tool to surface prototypes.

We also prototyped a snippet for Firefox to promote Maker Party, worked on an idea for self guided Webmaking and began work on a Net Neutrality Teaching Kit.

Finally, we've shipped some things:

The No-Fi Lo-Fi Teaching Kit asks participants the question: how can we empower educators to teach the web in settings where connectivity isn't guaranteed?

With the Mobile Design Teaching Kit, participants play with, break apart and modify mobile apps in order to understand how they work as systems. This teaching kit is designed to explore a few activities that can be mixed and mashed into workshops for teens or adults who want to design mobile apps. Participants will tinker with paper prototyping, design mindmaps and program apps while learning basic design and webmaking concepts.

A local and a global Hive Learning Network directory

... and a section on Webmaker.org to help guide mentors through making Teaching Kits and Activities:



The first 6 weeks have been great, and we are going to continue to listen, create and deliver based on needs from the community. We have lots more to build. We want to do this incrementally, partly to release sooner, and partly to build momentum through repeated releases.


Armen Zambrano GasparnianDeveloping with GitHub and remote branches

I have recently started contributing using Git by using GitHub for the Firefox OS certification suite.

It has been interesting switching from Mercurial to Git. I honestly believed it would be more straightforward, but I have had to re-read things again and again until the new ways sank in.

jgraham shared some notes with me (thanks!) about what his workflow looks like, and I want to document it for my own sake and perhaps yours:
git clone git@github.com:mozilla-b2g/fxos-certsuite.git

# Time passes

# To develop something on master
# Pull in all the new commits from master

git fetch origin

# Create a new branch (this will track master from origin,
# which we don't really want, but that will be fixed later)

git checkout -b my_new_thing origin/master

# Edit some stuff

# Stage it and then commit the work

git add -p
git commit -m "New awesomeness"

# Push the work to a remote branch
git push --set-upstream origin HEAD:jgraham/my_new_thing

# Go to the GH UI and start a pull request

# Fix some review issues
git add -p
git commit -m "Fix review issues" # or use --fixup

# Push the new commits
git push

# Finally, the review is accepted
# We could rebase at this point, however,
# we tend to use the Merge button in the GH UI
# Working off a different branch is basically the same,
# but you replace "master" with the name of the branch you are working off.


Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Alon ZakaiMassive, a new work-in-progress asm.js benchmark - feedback is welcome!

Massive is a new benchmark for asm.js. While many JavaScript benchmarks already exist, asm.js - a strict subset of JavaScript, designed to be easy to optimize - poses some new challenges. In particular, asm.js is typically generated by compiling from another language, like C++, and people are using that approach to run large asm.js codebases, by porting existing large C++ codebases (for example, game engines like Unity and Unreal).

Very large codebases can be challenging to optimize for several reasons: Often they contain very large functions, for example, which stress register allocation and other compiler optimizations. Total code size can also cause pauses while the browser parses and prepares to execute a very large script. Existing JavaScript benchmarks typically focus on small programs, and tend to focus on throughput, ignoring things like how responsive the browser is (which matters a lot for the user experience). Massive does focus on those things, by running several large real-world codebases compiled to asm.js, and testing them on throughput, responsiveness, preparation time and variance. For more details, see the FAQ at the bottom of the benchmark page.

Massive is not finished yet, it is a work in progress - the results should not be taken seriously yet (bugs might cause some things to not be measured accurately, etc.). Massive is being developed as an open source project, so please test it and report your feedback. Any issues you find or suggestions for improvements are very welcome!

Gregory SzorcPython Packaging Do's and Don'ts

Are you someone who casually interacts with Python but doesn't know its inner workings? Then this post is for you. Read on to learn why some things are the way they are and how to avoid making some common mistakes.

Always use Virtualenvs

It is an easy trap to view virtualenvs as an obstacle, a distraction from accomplishing something. People see me adding virtualenvs to build instructions and they say, "I don't use virtualenvs, they aren't necessary, why are you doing that?"

A virtualenv is effectively an overlay on top of your system Python install. Creating a virtualenv can be thought of as copying your system Python environment into a local location. When you modify virtualenvs, you are modifying an isolated container. Modifying virtualenvs has no impact on your system Python.

A goal of a virtualenv is to isolate your system/global Python install from unwanted changes. When you accidentally make a change to a virtualenv, you can just delete the virtualenv and start over from scratch. When you accidentally make a change to your system Python, it can be much, much harder to recover from that.

Another goal of virtualenvs is to allow different versions of packages to exist. Say you are working on two different projects and each requires a specific version of Django. With virtualenvs, you install one version in one virtualenv and a different version in another virtualenv. Things happily coexist because the virtualenvs are independent. Contrast with trying to manage both versions of Django in your system Python installation. Trust me, it's not fun.

Casual Python users may not encounter scenarios where virtualenvs make their lives better... until they do, at which point they realize their system Python install is beyond saving. People who eat, breathe, and die Python run into these scenarios all the time. We've learned how bad life without virtualenvs can be, and so we use them everywhere.

Use of virtualenvs is a best practice. Not using virtualenvs will result in something unexpected happening. It's only a matter of time.

Please use virtualenvs.

Never use sudo

Do you use sudo to install a Python package? You are doing it wrong.

If you need to use sudo to install a Python package, that almost certainly means you are installing a Python package to your system/global Python install. And this means you are modifying your system Python instead of isolating it and keeping it pristine.

Instead of using sudo to install packages, create a virtualenv and install things into the virtualenv. There should never be permissions issues with virtualenvs - the user that creates a virtualenv has full realm over it.

Never modify the system Python environment

On some systems, such as OS X with Homebrew, you don't need sudo to install Python packages because the user has write access to the Python directory (/usr/local in Homebrew).

For the reasons given above, don't muck around with the system Python environment. Instead, use a virtualenv.

Beware of the package manager

Your system's package manager (apt, yum, etc) is likely using root and/or installing Python packages into the system Python.

For the reasons given above, this is bad. Try to use a virtualenv, if possible. Try to not use the system package manager for installing Python packages.

Use pip for installing packages

Python packaging has historically been a mess. There are a handful of tools and APIs for installing Python packages. As a casual Python user, you only need to know of one of them: pip.

If someone says install a package, you should be thinking create a virtualenv, activate a virtualenv, pip install <package>. You should never run pip install outside of a virtualenv. (The exception is to install virtualenv and pip itself, which you almost certainly want in your system/global Python.)

Running pip install will install packages from PyPI, the Python Packaging Index by default. It's Python's official package repository.

There are a lot of old and outdated tutorials online about Python packaging. Beware of bad content. For example, if you see documentation that says use easy_install, you should be thinking, easy_install is a legacy package installer that has largely been replaced by pip, I should use pip instead. When in doubt, consult the Python packaging user guide and do what it recommends.

Don't trust the Python in your package manager

The more Python programming you do, the more you learn to not trust the Python package provided by your system / package manager.

Linux distributions such as Ubuntu that sit on the forward edge of versions are better than others. But I've run into enough problems with the OS or package manager maintained Python (especially on OS X), that I've learned to distrust them.

I use pyenv for installing and managing Python distributions from source. pyenv also installs virtualenv and pip for me, packages that I believe should be in all Python installs by default. As a more experienced Python programmer, I find pyenv just works.

If you are just a beginner with Python, it is probably safe to ignore this section. Just know that as soon as something weird happens, start suspecting your default Python install, especially if you are on OS X. If you suspect trouble, use something like pyenv to enforce a buffer so the system can have its Python and you can have yours.

Recovering from the past

Now that you know the preferred way to interact with Python, you are probably thinking oh crap, I've been wrong all these years - how do I fix it?

The goal is to get a Python install somewhere that is as pristine as possible. You have two approaches here: cleaning your existing Python or creating a new Python install.

To clean your existing Python, you'll want to purge it of pretty much all packages not installed by the core Python distribution. The exception is virtualenv, pip, and setuptools - you almost certainly want those installed globally. On Homebrew, you can uninstall everything related to Python and blow away your Python directory, typically /usr/local/lib/python*. Then, brew install python. On Linux distros, this is a bit harder, especially since most Linux distros rely on Python for OS features and thus they may have installed extra packages. You could try a similar approach on Linux, but I don't think it's worth it.

Cleaning your system Python and attempting to keep it pure are ongoing tasks that are very difficult to keep up with. All it takes is one dependency to get pulled in that trashes your system Python. Therefore, I shy away from this approach.

Instead, I install and run Python from my user directory. I use pyenv. I've also heard great things about Miniconda. With either solution, you get a Python in your home directory that starts clean and pure. Even better, it is completely independent from your system Python. So if your package manager does something funky, there is a buffer. And, if things go wrong with your userland Python install, you can always nuke it without fear of breaking something in system land. This seems to be the best of both worlds.

Please note that installing packages in the system Python shouldn't be evil. When you create virtualenvs, you can - and should - tell virtualenv to not use the system site-packages (i.e. don't use non-core packages from the system installation). This is the default behavior in virtualenv. It should provide an adequate buffer. But from my experience, things still manage to bleed through. My userland Python install is extra safety. If something wrong happens, I can only blame myself.

Conclusion

Python's long and complicated history of package management makes it very easy for you to shoot yourself in the foot. The long list of outdated tutorials on The Internet make this a near certainty for casual Python users. Using the guidelines in this post, you can adhere to best practices that will cut down on surprises and rage and keep your Python running smoothly.

Darrin HeneinSide Tabs: Prototyping An Unexpected Productivity Hack

A few months ago, I came across an interesting Github repo authored by my (highly esteemed!) colleague Vlad Vukicevic called VerticalTabs. This is a Firefox add-on which moves your tabs, normally organized horizontally along the top of your browser, to a vertical list docked to either the left or right side of the window. Vlad’s add-on worked great, but I saw a few areas where a small amount of UX and visual design love could make this something I’d be willing to trial. So, I forked.

 

After cloning the repo, I spent a couple days modifying some of the layout, adding a new dark theme to the CSS, and replaced a handful of the images and icons with my own. Ultimately, it was probably a single-digit number of hours spent to get my code to a place that I was happy with. Certainly, there are some issues on certain operating systems, and things like Firefox’s pinned tabs don’t get the treatment I would love them to have, but that was not the point. The point of my experiment was to learn.

Learn? Learn what?

Let’s step back for a moment. Here at Mozilla, we like to experiment. Hack, Play, Make… whatever you’d like to call it. But we don’t like to waste time: we do things with purpose. If we build something, we try to make sure it’s worth the time and effort involved. As a Design Engineer on the UX team, I (along with others) work hard to make clear the value of prototyping to my colleagues. What is the minimal thing we can make to test our assumptions? The reality is that when designing digital products, how it works is equally as important as (arguably more important than) how it looks. Steve Jobs said it best:

Design is how it works.

 

Let’s bring it back to Side Tabs now (I’ll be using Side Tabs and VerticalTabs interchangeably). The hypothesis I was hoping to validate was that there was a subset of the Firefox user base that would find value in the layout that Side Tabs enabled. I wanted to bring this add-on to a level where users would find it delightful and usable enough to at least give it a fair shot.

It’s critically important that before you unleash your experiment and start learning from it, you mitigate (as much as possible) any sources of bias or false-negatives. Make (or fake) it to a point where the data you collect is not influenced by poor design, conflated features, or poor performance/usability. It turned out that this delta, from Vlad’s work to my own version, was a small amount of work. I went for it, pushed it a few steps in the right direction, and shared it with as many people as I could.

I want to restate: the majority of the credit for my particular version of VerticalTabs goes to those who published the work on top of which I built, namely Vlad and Philipp von Weitershausen. Furthermore, the incredibly talented Stephen Horlander has explored the idea of Side Tabs as well, and you will notice his work helped inspire the visual language I used in my implementation. This is how great things are built; rarely from scratch, but more commonly on the shoulders and brilliance of those who came before you.

My Github repo (at time of writing) has 13 stars and is part of a graph with 19 forks. Similarly, I’ve had colleagues fork my repo and take it further, adding improvements and refinements as they go (see my colleague Hayden’s work for one promising effort). I’ve had great response on Twitter from developers and users who love the add-on and who can’t wait to share their ideas and thoughts. It’s awesome to see ideas take shape and grow so organically like this. This is collaboration.

I’ve been using Side Tabs full-time in my default browser (Firefox Nightly) for 5 or 6 months now, and I’ve learned a ton. Aside from now preferring a horizontal layout (made possible by stacking tabs vertically) on a screen pressed for vertical space, I’ve discovered a use case that I never would have imagined had I simply mocked this idea up in Photoshop.

I use productivity tools heavily, from calendars to to-do lists and beyond. One common scenario is this: I click on a link, and it’s something I find interesting or valuable, but I don’t want to address it right now. I’ve experimented with Pocket (I still use this for longer form writing I wish to read later) but find that most of my Read Later lists are Should-but-Never-Actually-Read-Later lists. Out of sight, out of mind, right?

Saving for Later

The Firefox UX Team has actually done some great research on Save for Later behaviour. There is a great blog post here as well as a more detailed research summary here.

By a quite pleasant surprise, the vertical layout of Side Tabs surfaced a solution to me. I found myself appropriating my tab list into a priority-stack, always giving my focus to the tab at the bottom of the list. When I open something I want to keep around, I simply drag it in the list to a spot based on its relative importance; right to the top for ‘Someday’ items, 2nd or 3rd from the bottom if I want to take a peek once I’m done my task at hand (which is always the bottom tab). I’ve even moved to having two Firefox windows open, which essentially gives me two separate task lists: one for personal tabs/to-dos and one for work.

 

So where does this leave us? Quite clearly, it’s shown the immediate value of investing in an interactive prototype versus using only static mockups: people can use the design, see how it works, and in this case, expose a usage pattern I hadn’t seen before. The most common argument against prototyping is the cost involved (time, chiefly), and in my experience the value of building your designs (to varying levels of fidelity, based on the project/hypothesis) always outweighs the cost. Building the design sheds light on design problems and solutions that traditional, static mockups often fail to illuminate.

With regards to Side Tabs itself, I learned that in some cases, users treat their tabs as tasks to accomplish, and when a task is completed, its tab is closed. Increasingly, our work and personal tasks exist online (email, banking, shopping, etc.), and are accessed through the browser. Some tasks (tabs) have higher priority or urgency than others, and whether visible or not, there is an implicit order in which a user will attend to their tabs. Helping users better organize their tabs made using the browser a more productive, delightful experience. And anything that can make an experience more delightful or useful is not only of great value and importance to the product or team I work with, but also to me as a designer.

Get the Add-on Side Tabs on Github