Tantek Çelik IndieWebCampUK 2014 Hack Day Demos: HTTPS, #webactions, new & improved #indieweb sites

One weekend ago, 18 IndieWebCampUK participants (including 2 remote) showed 25 demos in just under 75 minutes of what they designed and built that weekend in 19 different interoperable projects. Every single demo exemplified an indieweb community member scratching their own personal site itch(es), helping each other do so, and together advancing the state of the indieweb. We can all say:

I'm building Indie Web Camp.

During the demos I took realtime notes in IRC, with some help from Barnaby Walters. Archived on the IndieWebCamp wiki, here's a summary of what each of us got working.

Glenn Jones

Glenn Jones built improvements to Transmat. (IRC notes)

He built a map view that shows the venues nearest to his current location (via the Geolocation API).

He also found an open source HTML5 JS pedometer and repurposed it for Transmat, so that when running on his Android phone as a web app it can detect when he's walking and only do GPS lookups then, saving battery.

Now he has an HTML5 JS app that can auto-checkin for him while he's walking.

Barnaby and Pelle

Barnaby Walters and Pelle Wessman built cross-site reply webactions that work purely via their websites - no browser extension needed! This is the first time this has been done. (IRC notes)

Barnaby has set up registerProtocolHandler on Taproot: loading a particular page on his website registers a handler for the "web+indie:" protocol (since updated to "web+action:"), so that his website is registered to handle webactions via the <indie-action> tag.

Barnaby demonstrates loading the page that calls registerProtocolHandler. The browser asks to confirm that he wants waterpigs.co.uk to handle "web+indie" URLs.

Then Barnaby goes to Pelle's website home page, where Pelle has a list of posts he's written, now with "Reply", "Like", and "Tip" webactions next to each post, each webaction represented by and wrapped in an <indie-action> tag in the markup.

Pelle's site also has a web component (open sourced on GitHub at https://github.com/voxpelli/indie-action-component) to handle his <indie-action> tags. It creates an iframe that uses that same protocol handler via a Promise, connecting the iframe to the handler that Taproot registered.

Thus without anything installed in the browser, Barnaby can go to Pelle's site and click the "Reply" button next to a post, which automatically takes him to his own site's Taproot UI to post a reply!
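
To make the moving parts concrete, here is a minimal sketch in TypeScript of how the two halves might fit together. It is illustrative only: the handler URL, the "do"/"with" attribute names, and the use of window.open are assumptions (Pelle's actual component uses an iframe and a Promise), and it uses today's customElements API rather than the 2014-era document.registerElement.

// Handler side (e.g. Taproot): offer to handle "web+action:" URLs.
// The endpoint path is hypothetical; the browser substitutes the encoded
// web+action: URL for "%s".
navigator.registerProtocolHandler(
  "web+action",
  "https://waterpigs.co.uk/actions?do=%s"
);

// Publisher side (e.g. Pelle's site): a bare-bones <indie-action> element.
// Clicking the generated button hands the target URL to whichever site the
// visitor registered for the web+action: scheme.
class IndieAction extends HTMLElement {
  connectedCallback() {
    const button = document.createElement("button");
    button.textContent = this.getAttribute("do") ?? "Reply";
    button.addEventListener("click", () => {
      const target = this.getAttribute("with") ?? location.href;
      window.open(`web+action:${encodeURIComponent(target)}`);
    });
    this.appendChild(button);
  }
}
customElements.define("indie-action", IndieAction);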

Barnaby Walters

Barnaby Walters also built a map-view post aggregator that shows icons for people at the locations embedded in their recent posts. (IRC notes)

The map-view aggregator is at a self-standing demo URL for now, but Barnaby plans to include this view as another column type in Shrewdness, so you can have a map view of recent posts from people you're following.

Grant Richmond

Grant Richmond got a fancy new domain (grant.codes) and set up Glenn Jones's Transmat on it - making it the second installation of Transmat! (IRC notes)

Grant also built a contact page at grant.codes/contact that has links for various methods of communication.

All of the links are text links for now, no icons yet.

Grant has implemented a people-focused communication UI on his site!

Jeremy Keith

Jeremy Keith added https on adactio.com, and implemented <indie-action> tag webactions. (IRC notes)

adactio https

Jeremy took his site adactio.com from no https support to https Level 4. All adactio.com URLs redirect to https. However, subdomains (e.g. austin.adactio.com) are still http.

adactio webactions

Jeremy's also implemented the new <indie-action> tag for webactions around his existing Tweet action links, both on his post permalinks, and on his posts in-stream (e.g. on his home page or when paginated).

Shane Hudson

Shane Hudson went from no SSL and no comments yesterday to https Level 5! He also imported all of his old comments from his WordPress blog into his Craft install (the CMS he's selfdogfooding and contributing plugins to). (IRC notes)

He was able to get SSL set up on his site with an A rating and forward secrecy, and is thus at https Level 5.

Shane also wrote a script to do the import of comments from WordPress to Craft. It's "a bit crude, dealing with XML to CSV a few times".

Nat Welch

Nat Welch (AKA icco on IRC) got his blog (running his own software, written in Go) hosted on AppEngine with SSL, achieving https Level 4! (IRC notes)

AppEngine does SSL for free if you're ok with SNI.

So now Nat has an SSL Labs rating of A- on writing.natwelch.com! Automatic redirect from http to https works as well. Thus he has achieved https Level 4!

Right now he's using AppEngine's default auth with his Google account. Eventually he wants to use IndieAuth to log into his site.

Tim Retout

Tim Retout got pump.io running on his site and added support to it for POSSEing to Twitter. (IRC notes)

His goal is to add support for all the indieweb features too, like webmentions, microformats, etc. He had to run off to catch a train.

He is also too humble to mention that he helped numerous people in person at the camp get set up with SSL, at https Level 4 or 5 no less. A round of applause for Tim!

Tom Morris

Tom Morris added https to his site, made it responsive, and setup mf2py as a service. (IRC notes)

responsive tommorris.org

Tom showed his current site tommorris.org at different window sizes. His CSS is now "less sucky" and he has made his site more responsive on mobile, small displays, etc.

mf2py as a service

Tom also got the Python microformats2 parser (mf2py) running as a service that you can submit your URLs to and get back pretty-printed JSON.

tommorris https

Tom got his main site tommorris.org up to https Level 4 with an A- rating, but has not yet done so with *.tommorris.org (e.g. wiki.tommorris.org).

During the next demo, Tom got his SSL Labs rating from A- to A with some help from Aral. And during the demo after that, he took his rating up to A+ thanks to this blog post.

Kevin Beynon

Kevin Beynon got IndieAuth login to his own site working! (IRC notes)

Kevin started by showing us his site home page kevinbeynon.com using a tablet. We projected it by holding it up to the Talky HD camera.

He pointed out that there is no admin link on the home page, then went to his "secret" URL at /admin/, which has an IndieAuth login screen. He entered his own URL and chose to authenticate via RelMeAuth using Twitter, which redirected there and back and came back with the message "Log-in Successful".

Kevin went to his home page again and showed that it now has visible links to "admin" and "log out". Next he plans to bring his post creation and editing interface into his home page front end, so that he can do inline editing and post notes from his home page.

Joschi Kuphal

Joschi Kuphal got his site's https support to SSL rating A+, fixed his webmention implementation, and implemented webactions on permalinks. (IRC notes)

jkphl https A+

Joschi noted that his site was running with SSL before but had some flaws. He worked on it and improved his site's rating from F to A+.

jkphl webmentions fixed

He also fixed some flaws with his webmention implementation thanks to feedback from Ryan Barrett online.

jkphl permalinks webactions

Third, Joschi implemented webactions on permalinks; in particular, he added <indie-action> markup around his default Twitter, G+, and Facebook "share" links. He then demonstrated his site working with Barnaby Walters's Web Action Hero Toolkit browser extension.

Chris Asteriou

Chris Asteriou is fairly new to the IndieWeb and started by going through IndieMark, adding h-entry and h-card markup and a notes section to his site. (IRC notes)

digitalbliss microformats

Chris showed digitalbliss.uk.com and noted that he had added h-entry on his page with entries, clicking the "Play" link at the top to show this. He then marked up the info at the bottom of his home page with h-card.

digitalbliss notes

Chris added a notes section and used the verification tools on indiewebify.me to check it and verify that he reached IndieMark Level 2.

Tantek

Tantek Çelik switched his permalink webactions from <action> tags to <indie-action> tags and researched the UX of webactions on posts in a stream (e.g. a home page).

tantek indie-action

Based on the webactions discussion session on the first day with Tantek, Jeremy, and Pelle, they concluded that the <indie-action> tag was more appropriate than the <action> tag.

Tantek initially proposed the <action> tag publicly in a session on Web Actions at Open Source Bridge 2012, and later implemented it at last year's IndieWebCampUK 2013, where it was demonstrated working with Barnaby Walters's browser extension.

Changing from <action> to <indie-action> at a minimum fits better with the web component model. Jeremy Keith pointed out that the <indie-action> tag in particular would be a good example of a web component, worthy of a case study.

Tantek updated his permalink webactions to use <indie-action> tags and Barnaby updated his browser extension to support them as well.

in-stream webactions

Tantek analyzed the UI of various silos, in particular Instagram and Twitter.

Instagram has a very minimal webaction UI, with just "Like", "Comment", and "..." (more) buttons, the first two with both icon and text labels, which makes sense since their primary content is large (relative to the UI) images/video (visual media). Instagram's webactions are identical on photos viewed on their own screen and on photos in a stream of media. Deliberately designed consistency.

Twitter, on the other hand, is horribly inconsistent between different views of tweets, and even between different streams. Sometimes their webactions are:

  • on the right with text labels
  • on the left with text labels
  • on the left without text labels

Their trend seems to be icon only, likely because the text label distracts from the tweet text content around it, especially in a stream of tweets that are primarily (nearly all) just text.

Tantek walked through comparisons of Twitter's different webaction button icon/text usage and placement with Aral, who came to the same conclusions from the data.

It may be OK to use both icon and text labels on note/post permalink pages, as there is more distinction between the (single) content area and the footer of webactions.

However, the conclusion is that in-stream webactions should use just icons (clear ones at that) when among posts that are primarily, mostly, or perhaps even often just text.

Next Tantek is working on implementing icon-only webactions on his home page posts stream. He made some progress but realized it will require him to rework some storage code first.

Aral Balkan

Aral Balkan upgraded his site's https support to SSL rating A+ and https Level 5, and updated his how-to blog post about it! (IRC notes)

Aral already supported https on his site aralbalkan.com beforehand. On IndieWebCampUK hack day he added support for forward secrecy, which raised its SSL rating from A- to A+ and thus he achieved https Level 5!

Apparently it took him only 2 lines of code to implement that change on nginx, and he noted that it's a bit harder on Apache.

After his demo, Aral also updated his blog post about SSL setup with nginx with what he learned and how to get to SSL rating A+.

Rosa Fox

Rosa Fox created a UI on her site for CRUD posting of projects. (IRC notes)

Rosa wanted to make her own CMS with support for posting images and tags. She demonstrated her local dev install of her new CMS with the following new features she built at Hack Day:

  • a UI for creating a new project
  • CRUD posting interface for projects
  • using Postgres to store data

Aaron Parecki

Aaron Parecki participated remotely, added support for posting bookmarks to his site, and added bookmarks posting via micropub to his Quill app! (IRC notes)

Aaron had been publishing bookmarks separately for a long time, in a WordPress install at aaron.pk/bookmarks, and he wanted to integrate them into his main site aaronparecki.com.

Once Aaron got the bookmark post type implemented in his publishing software p3k and deployed to his site, he did a mass import from the aaron.pk/bookmarks WordPress XML export.

That was the last thing aaronpk was using WordPress for, so he's no longer using WordPress to publish any of his own content.

Now all of Aaron's bookmarks are at aaronparecki.com/bookmarks all marked up with microformats. Each bookmark is an h-entry, and embedded inside is an h-cite of the bookmark itself.

This also means you can comment on, bookmark, and like his bookmarks themselves!

During later demos, Aaron also updated his Quill app with a bookmark posting interface, as well as a bookmarklet so you can quickly open the Quill UI to make a bookmark.

Kevin Marks

Kevin Marks built a feed converter that takes legacy RSS/Atom feeds and produces a modern, readable, and usable h-entry page, including such niceties as inline playable audio elements in converted podcasts. (IRC notes)

Kevin noticed that people are building h-feed readers, so he built a tool that takes legacy RSS/Atom feeds, unmunges them, and produces nice clean h-entry feeds.

The converter is at feed.unmung.com/. Unmung.com is a domain he bought ages ago, and he set the service up on Google AppEngine.

E.g. if you put xkcd.com/rss.xml into it, it generates a nice readable HTML page with h-entry, which you can then subscribe to in an indie reader like Barnaby's Shrewdness.

Kevin demonstrated using unmung to convert a podcast feed feeds.wnyc.org/onthemedia into an h-feed with embedded playable HTML5 <audio> elements, providing an actual useful interface, much better than the original feed.

Kevin made the point that no one wants to parse RSS or Atom any more. Now, by parsing the microformats JSON representation instead, you can consume any existing RSS or Atom feed.

You can now subscribe to iTunes podcasts etc. in your indieweb reader!

Robin Taylor

Robin Taylor added support for https (including forward secrecy, getting an SSL "A" rating) to his site robintaylor.uk and automatic redirects from http to https, achieving https Level 5! (IRC notes)

UK Homebrew Website Clubs

As we were wrapping up, Tom Morris asked openly if anyone would be interested in coming to a Homebrew Website Club in London. Jeremy Keith similarly asked the group for interest in a Homebrew Website Club Brighton.

Both had quite a bit of interest, so we can expect to start seeing more Homebrew Website Club meetups in more locations!

See also

Join Us At The Next IndieWebCamp In Cambridge

IndieWebCamp Cambridge is next month on the East Coast.

Join us. Share ideas. Come work on your personal web site. Help grow and evolve the independent web. Be the change you want to see in the world wide web.

"The people I met at @indiewebcamp are the A-Team of the Internet. Give them some tape and an oxy-acetalyne torch and they'll fix the web."

Adam Lofting: “Conclusions”

Image: Mile long string of balloons (6034077499)

  • Removing the second sentence increases conversion rate (hypothesis = simplicity is good).
  • The button text ‘Go!’ increased the conversion rate.
  • Both variations on the headline increased conversion rate, but ‘Welcome to Webmaker’ performed the best.
  • We should remove the bullet points on this landing page.
  • The log-in option is useful on the page, even for a cold audience who we assume do not have accounts already.
  • Repeating the ask ‘Sign-up for Webmaker’ at the end of the copy, even when it duplicates the heading immediately above, is useful. Even at the expense of making the copy longer.
  • The button text ‘Create an account’ works better than ‘Sign up for Webmaker’ even when the headline and CTA in the copy are ‘Sign up for Webmaker’.
  • These two headlines are equivalent. In the absence of other data we should keep the version which includes the brand name, as it adds one further ‘brand impression’ to the user journey.
  • The existing blue background color is the best variant, given the rest of the page right now.

The Webmaker Testing Hub

If any of those “conclusions” sound interesting to you, you’ll probably want to read more about them on the Webmaker Testing Hub (it’s a fancy name for a list on a wiki).

This is where we’ll try and share the results of any test we run, and document the tests currently running.

And why that image for this blog post?

Because blog posts need an image, and this song came on as I was writing it. And I’m sure it’s a song about statistical significance, or counting, or something…

Lucas Rocha: Introducing Probe

We’ve all heard of the best practices regarding layouts on Android: keep your view tree as simple as possible, avoid multi-pass layouts high up in the hierarchy, etc. But the truth is, it’s pretty hard to see what’s actually going on in your view tree in each platform traversal (measure → layout → draw).

We’re well served with developer options for tracking graphics performance—debug GPU overdraw, show hardware layers updates, profile GPU rendering, and others. However, there is a big gap in terms of development tools for tracking layout traversals and figuring out how your layouts actually behave. This is why I created Probe.

Probe is a small library that allows you to intercept view method calls during Android’s layout traversals, e.g. onMeasure(), onLayout(), onDraw(), etc. Once a method call is intercepted, you can either do extra things on top of the view’s original implementation or completely override the method on the fly.

Using Probe is super simple. All you have to do is implement an Interceptor. Here’s an interceptor that completely overrides a view’s onDraw(). Calling super.onDraw() would call the view’s original implementation.

public class DrawGreen extends Interceptor {
    private final Paint mPaint;

    public DrawGreen() {
        mPaint = new Paint();
        mPaint.setColor(Color.GREEN);
    }

    @Override
    public void onDraw(View view, Canvas canvas) {
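        // Fill the entire canvas with green, fully replacing the view's own drawing.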
        canvas.drawPaint(mPaint);
    }
}

Then deploy your Interceptor by inflating your layout with a Probe:

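// The ViewId filter restricts interception to the view with id "view2".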
Probe probe = new Probe(this, new DrawGreen(), new Filter.ViewId(R.id.view2));
View root = probe.inflate(R.layout.main_activity, null);

Just to give you an idea of the kind of things you can do with Probe, I’ve already implemented a couple of built-in interceptors. OvermeasureInterceptor tints views according to the number of times they got measured in a single traversal, i.e. the equivalent of overdraw, but for measurement.

LayoutBoundsInterceptor is equivalent to Android’s “Show layout bounds” developer option. The main difference is that you can show bounds only for specific views.

Under the hood, Probe uses Google’s DexMaker to generate dynamic View proxies during layout inflation. The stock ProxyBuilder implementation was not good enough for Probe because I wanted to avoid using reflection entirely after the proxy classes were generated. So I created a specialized View proxy builder that generates proxy classes tailored for Probe’s use case.

This means Probe takes longer than your usual LayoutInflater to inflate layout resources. There’s no use of reflection after layout inflation though. Your views should perform the same. For now, Probe is meant to be a developer tool only and I don’t recommend using it in production.

The code is available on Github. As usual, contributions are very welcome.

Arky: Firefox OS: Designing Khmer Keyboards and Fonts

Back in Cambodia this week to participate in Barcamp Phnom Penh 2014. It is great to experience the energy and openness of Phnom Penh and the Cambodian youth's insatiable zeal to learn all things tech. Over the past few years, the barcamps helped us build the Mozilla community in Cambodia.

Cambodia is a fast growing economy in the region. One survey notes a significant increase in smartphone ownership since last year, and also an increase in smartphones and feature phones with Khmer support in the market. At Barcamp Phnom Penh I presented a Firefox OS talk about the ongoing Khmer internationalization (i18n) work and invited the audience to contribute to Firefox OS. I'm planning to organize hackathons with the Mozilla community here to work on Khmer keyboards.

After my talk, Vannak of the Mozilla Cambodia community talked briefly to the audience about the Mozilla community, and we did a presentation about Mozilla Webmaker tools. I hope we'll organize more web literacy events in the future. Keep watching this space for more news from Cambodia, the kingdom of wonder.

Byron Jones: happy bmo push day!

the following changes have been pushed to bugzilla.mozilla.org:

  • [1064878] Use of uninitialized value in pattern match (m//) at /loader/0x7ffa9dedc498/Bugzilla/Extension/BugmailFilter/Filter.pm line 172
  • [1020558] Add Involved with Bugs and Never Visited Query to MyDashboard
  • [1062944] Product::Component autocomplete when filing new bug shows disabled components.
  • [1046213] datetime_from() generates wrong dates if year < 1901
  • [1053513] remove last-visited entries when a user removes involvement from a bug
  • [1021902] UI to view a user’s review history
  • [1064678] searching for tracking flag “is empty” is generating incorrect sql
  • [1064329] splinter displays patches that remove lines starting with hyphens incorrectly
  • [1065594] Enable ‘due date’ field in ‘Community Building’ product (all components)
  • [1052851] add the ability to search by “assignee last login date”
  • [1066777] The kick-off form isn’t creating dependent bugs
  • [1039940] serialisation of objects for webservice responses is extremely slow
  • [1058615] New Custom Bugzilla Form Needed For PR Team

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla

William Lachance: mozregression 0.24

I just released mozregression 0.24. This would be a good time to note some of the user-visible fixes / additions that have gone in recently:

  1. Thanks to Sam Garrett, you can now specify a branch other than inbound to get finer-grained regression ranges from. E.g. if you’re pretty sure a regression occurred on fx-team, you can do something like:

    mozregression --inbound-branch fx-team -g 2014-09-13 -b 2014-09-14

  2. Fixed a bug where we could get an incorrect regression range (bug 1059856). Unfortunately the root cause of the bug is still open (it’s a bit tricky to match mozilla-central commits to that of other branches) but I think this most recent fix should make things work in 99.9% of cases. Let me know if I’m wrong.
  3. Thanks to Julien Pagès, we now download the inbound build metadata in parallel, which speeds up inbound bisection quite significantly.

If you know a bit of python, contributing to mozregression is a great way to have a high impact on Mozilla. Many platform developers use this project in their day-to-day work, but there’s still lots of room for improvement.

Mozilla Open Policy & Advocacy Blog: FCC Reply Comments on Net Neutrality

Today is the final deadline to file comments as part of the Federal Communications Commission’s open proceeding on net neutrality in the United States. The show of support for real net neutrality over the past six months has been tremendous – so much so that this issue has now received more public comments than any other in FCC history, nearly 1.5 million in total.

Mozilla has been pulling out all the stops as well. In May, we submitted an original petition to the FCC to propose a new path forward on the difficult question of authority, and to shake up a debate that had not seen many new ideas. We’ve also launched global teach-ins, filed comments, and joined last week’s Day of Action, among other activities.

Our reply comments filed today build on these past actions, summarize the state of the debate, and respond to net neutrality opponents. Our comments are structured around four points:

  1. Most parties agree on most issues. The FCC should adopt enforceable rules, including some form of a no blocking and no unreasonable discrimination rule, with an exception for reasonable network management.
  2. Mozilla’s classification theory is a viable and strong path forward. Mozilla’s approach ensures real net neutrality while bypassing the political conversation over reclassification, by articulating a new, not yet classified Title II service offered to remote end points.
  3. The FCC must adopt a presumption against paid prioritization. Allowing prioritization would degrade other uses of the Internet, and thus cause harm to user choice, innovation, and competition.
  4. The same rules should apply to mobile and fixed services. There is one Internet and it must remain open for all. Technical requirements for mobile networks can be protected through reasonable network management.

This week, the FCC will conduct roundtables on net neutrality, with varying focuses including technical, legal, and enforcement aspects. I’ll be participating in one of the Friday sessions, focusing on the topic of enforcement. The roundtables will be held in DC, and will include a moderated discussion among a diversity of viewpoints. At this stage of the process, I don’t expect much in the way of agreement – but at least a range of options will be presented and defended for the agency’s consideration. The public has been invited to submit questions in advance over email or Twitter – roundtables@fcc.gov or #FCCRoundtables – though a caveat from the session description: Your questions and identifying information will be made public and included in the official record.

We’ve seen comments, petitions, roundtables, protests, and events for months now. The main thing left is for the agency to make some decisions – and as we note in our comments, the outcome will set the course for the future of the industry, for better or for worse.

There’s still time for you to make your voice heard. You can contact the FCC and members of Congress, and ask them to protect net neutrality, and the choice, innovation, and freedom enjoyed today by all Internet users and developers.

Eric Shepherd: The Sheppy Report: September 12, 2014

You may notice that I’m posting this on Monday instead of Friday (despite what the title says). Last week was a bit of a mess; I had a little bit of a surprise on Wednesday that involved my being taken to the hospital by ambulance (no lights and siren — it wasn’t that big a surprise), and that kept me away from work for most of the rest of the week.

I’m feeling fine now, although there’s some follow-up medical testing to do to check things out.

Anyway! On to the important stuff!

What I did this week

  • Deleted an inappropriate article from the MDN inbox and emailed the author explaining why.
  • Succeeded in first deployment of a test of my new live sample server-side component server project.
  • Discovered the native GitHub application for Mac, which makes my life enormously easier.
  • Added more information to the comments on bug 1063580, about how shift-refreshing of macros no longer lets you pick up changes to macros and submacros.
  • Updated “How to create an MDN account” to consider that you can use either Persona or GitHub now as your authentication service. This fixed bug 1049972.
  • Added the dev-doc-needed keyword to bug 1064843, which is about implementing ::backdrop in Gecko.
  • Updated the draft of the State of MDN for August to include everything I could think of, so Ali can get it posted (which has happened!).
  • Created the soapbox message to appear at the top of MDN pages about the net neutrality petition. This has since been removed once again, since the event in question has passed. This resolved bug 1060483.
  • Finished debugging the SubpageMenuByCategories and MakeColumnsForDL macros. The former is new and the latter has been updated to support multiple <dl> lists with interspersed headings.

Meetings attended this week

Monday

  • #mdndev planning meeting.
  • #mdndev triage meeting.

Last week started off on a high note, with some very productive activity, and then went sideways on Wednesday and never really recovered. I have high hopes for this week though!

Daniel Stenberg: Daladevelop hackathon

On Saturday the 13th of September, I took part in a hackathon in Falun Sweden organized by Daladevelop.

20-something hacker enthusiasts gathered in a rather large and comfortable room in this place, an almost three hour drive from my home. A number of talks and lectures were held through the day and the difficulty level ranged from newbie to more advanced. My own contribution was a talk about curl followed by one about HTTP/2. Blabbermouth as I am, I exhausted the friendly audience by talking a good total of almost 90 minutes straight. I got a whole range of clever and educated questions and I think and hope we all had a good time as a result.

The organizers ran a quiz for two-person teams. I teamed up with Andreas Olsson in team Emacs, and after having identified x86 assembly, written binary, spotted perl, named Ada Lovelace, used the term lightfoot and provided about 15 more answers, we managed to get first prize and the honor of having beaten the others. Great fun!

Daniel Glazman: Molly needs you, again!

There are bad mondays. This is a bad monday. And this is a bad monday because I just discovered two messages - among others - posted by our friend Molly Holzschlag (ANC is Absolute Neutrophil Count):

First message

Second message

If you care about our friend Molly and value all she has given to Web Standards and CSS across all these years, please consider donating again to the fund some of her friends set up a while ago to support her health and daily life expenses. There are no little donations, there are only love messages. Send Molly a love message. Please.

Thank you.

Marco Zehe: Your must read post for this week

This goes out to all my readers who are web developers, or who work with web developers closely enough to hand this to them.

It’s Monday morning, and for this week, I have a must read post for you which you will now bookmark and reference and use with every single web component you build! No, this is not a suggestion, it’s an order which you will follow. Because if you don’t, you’ll miss out on a lot of fun and gratitude! I’m serious! So here goes:

Web Components punch list by Steve Faulkner of the Paciello Group

Read. Read again. Begin to understand. Read again. Understand more. Read yet another time. Get the tools referenced in the post. Check your web component(s) against this list top to bottom. If even a single point is answered “no”, fix it, or get on Twitter and ask for help in the accessibility community on how to fix it. Listen and learn. And repeat for every future web component you build!

And don’t be shy! Tell the world that your web component is accessible from the start, usable by at least twenty percent more people than it would be otherwise! I kid you not!

Happy Monday, and happy coding!

Andy McKay: Working on Open Source

When hiring at Mozilla, having potential candidates who know open source software is almost a requirement. But there's a huge difference between people that work with open source software and those who work on open source.

About 14 years ago, when I started interviewing candidates for jobs working with open source software, even seeing candidates who knew what open source was could be unusual and seen as an advantage. That's not enough now. These days working with open source software is seen as a base requirement. But that's still not enough.

In fact it's almost staggering these days to understand how anyone can build any systems, especially web sites, without using a large amount of open source. So go ahead, fill resumes with how you've used Linux, MySQL, PostgreSQL, Python, JavaScript, Ruby and so on. Show me your github, bitbucket, whatever account. Those are buzzwords that keep recruiters happy.

What I really want to see is that you've worked on open source. Have you:

  • contributed to an open source project?
  • published an open source project that is used by someone other than yourself (or your company)?
  • given a talk at an open source conference?
  • helped out an open source foundation?
  • written some documentation on open source?
  • participated on mailing lists with other developers about an open source project?
  • dealt with those awesome people who want to help and those trolls who don't?

There's a reason we look for open source developers at Mozilla. It's partly because Mozilla is basically a collection of open source projects with some funding behind it. But also because developers on open source are great at developing code and at working with other people.

Working on open source separates you from those who just use it.

Jeff Walden: Racism from a United States judge. You’ll never guess which one!

A couple days ago I found this ugly passage in a United States legal opinion:

The white race deems itself to be the dominant race in this country. And so it is in prestige, in achievements, in education, in wealth and in power. So, I doubt not, it will continue to be for all time if it remains true to its great heritage and holds fast to the principles of constitutional liberty.

Take a guess who wrote it, and in what context.

A hint

The same person who wrote this immediately continued with these further words, some of which might sound familiar (if improbable):

But in view of the Constitution, in the eye of the law, there is in this country no superior, dominant, ruling class of citizens. There is no caste here. Our Constitution is color-blind, and neither knows nor tolerates classes among citizens. In respect of civil rights, all citizens are equal before the law. The humblest is the peer of the most powerful. The law regards man as man, and takes no account of his surroundings or of his color when his civil rights as guaranteed by the supreme law of the land are involved.

I’ll give you a little space to try to come up with the name and context, if you haven’t already gotten it.

The answer

These passages were written by the first Justice Harlan, dissenting in the notorious Plessy v. Ferguson case. It’s interesting how we now remember Justice Harlan for this solo dissent and for his statement that, “Our Constitution is color-blind, and neither knows nor tolerates classes among citizens.” Yet I’d never heard before, anywhere, that in the exact same paragraph he validated the idea of a dominant race and basically asserted that whites would always be so in the United States.

Justice Harlan certainly deserves credit as the only one of eight justices to hold in favor of Homer Plessy, the New Orleans Comité des Citoyens, and the railroad company that ejected him from a whites-only car (all of whom conspired in a test case to overturn the law). (The ninth justice, David Josiah Brewer, didn’t participate in the case because of the abrupt death of his daughter. It’s unclear how he would have voted had he participated, with his personal history and voting record pointing in somewhat different directions.) As the only Southerner on the Court, and a former slave owner at that, it’s far from what one might have expected of Harlan, or of his colleagues.

Yet at the same time, Justice Harlan adhered to some of the beliefs and prejudices of his time. It is an unfortunate gloss on history that we are less aware of this, than we are of his better-known, more admirable words. We should be aware of both: to correctly understand history, to not fall prey to knowing only that which we want to be true, and to place a historical figure in full context.

Tantek Çelik Happy 8-bit day 2014! #8bitday

8-bit day is the 256th day of the year. This year (and most years) that happens to be Gregorian September 13th. Five years ago I proposed making today an (un)official holiday in honor of all things 8-bit: art, music, video, games, and sure, programmers too.

The Math

If you start the year with day 0, then in 2014, 2014-09-13 (ordinal date 2014-256) is day number 255; there's a quick check in code after the list below.

  • 255 decimal = FF hex
  • FF hex = 11111111 binary
  • 11111111 binary = 8 bits.
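
As a quick sanity check, here's a tiny sketch in TypeScript (the dayOfYear helper is just for illustration) that reproduces the arithmetic:

// 0-based day of the year for a date given in UTC.
const dayOfYear = (d: Date): number =>
  Math.floor((d.getTime() - Date.UTC(d.getUTCFullYear(), 0, 1)) / 86400000);

console.log(dayOfYear(new Date(Date.UTC(2014, 8, 13)))); // 255 (2014-09-13)
console.log((255).toString(16)); // "ff"
console.log((255).toString(2));  // "11111111" -> 8 bits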

Enjoy some 8-bit stuff

Music
Videos

See Also

Related

Previously

Previously I kept this on my wiki, which is unfortunately still on pbworks.com, so starting this year, I'm retaking that content and blogging it here on my site, until I've implemented my own wiki pages. I'll write a new post once a year, like I have in past years.

Post your favorite 8-bit stuff

Take a moment today to post and celebrate the 8-bit things that you've found and enjoy, and hashtag it #8bitday (e.g. on your own site, Twitter, Instagram, etc.)

Tim Taubert: Talk: Keeping secrets with JavaScript - An Introduction to the WebCrypto API

With the web slowly maturing as a platform, the demand for cryptography in the browser has risen, especially in a post-Snowden era. Many of us have heard about the upcoming Web Cryptography API but at the time of writing there seem to be no good introductions available. We will take a look at the proposed W3C spec and its current state of implementation.

Slides

Code

https://github.com/ttaubert/secret-notes

Mozilla WebDev Community: Webdev Extravaganza – September 2014

Once a month, web developers from across Mozilla gather to continue work on our doomsday robot that will force the governments of the world to relinquish control of the internet to us. Crafting robotic monsters is hard work, so we take frequent breaks to avoid burnout, and we find these breaks are a convenient time to talk about the work that we’ve shipped, share the libraries we’re working on, meet new folks, and talk about whatever else is on our minds. It’s the Webdev Extravaganza! The meeting is open to the public; you should stop by!

You can check out the wiki page that we use to organize the meeting, view a recording of the meeting in Air Mozilla, or attempt to decipher the aimless scrawls that are the meeting notes. Or just read on for a summary!

Shipping Celebration

The shipping celebration is for anything we finished and deployed in the past month, whether it be a brand new site, an upgrade to an existing one, or even a release of a library.

Marketplace Redesign

clouserw stopped by to tell us that Firefox Marketplace shipped a redesign! The front page now has a set of modules that can be customized using a set of admin tools, including changing what apps are shown, setting colors and features, and more. Of particular note is the fact that the admin interface for the modules was given a lot of UX attention as well (as opposed to our standard practice of using the default Django admin design), and includes a live preview of what the modules will look like.

Socorro: Out of Memory Crashes and new ADI source

lonnen informs us that Socorro has landed support for logging out-of-memory crashes, meaning that crashes that are suspected of relating to memory now include about:memory logs in the crash data, to help us diagnose those problems. In addition, Socorro is now fetching data about the number of active daily instances of Firefox instead of depending on the data being sent to Socorro in bulk. Socorro uses this data to normalize crash data, and the new source reduces the time spent pulling in the data to under ten minutes.

Air Mozilla now supports pop-out videos

peterbe shared the news that Air Mozilla now supports pop-out videos, meaning you can now launch a new window with the video you want to watch. This gives the viewer more options in how to watch a video while working on something else, as previously you were limited to in-page viewing or full-screen viewing.

Open-source Citizenship

Here we talk about libraries we’re maintaining and what, if anything, we need help with for them.

contribute.json

peterbe had a few pieces of news about contribute.json. First, Air Mozilla and Peekaboo both have live contribute.json files, and Socorro is deploying one soon. Second,  seanbolton and espressive are working on a redesign of the contribute.json webpage. And finally, the validator now supports text and file upload as well as URLs.

New Hires / Interns / Volunteers / Contributors

Here we introduce any newcomers to the Webdev group, including new employees, interns, volunteers, or any other form of contributor. Unfortunately we had no one new to introduce this month.

Roundtable

The Roundtable is the home for discussions that don’t fit anywhere else.

Marketplace in multiple datacenters

clouserw shared an “exploration” he’s working on for moving Marketplace into being hosted in multiple datacenters. While the primary goals are redundancy (if a datacenter goes down) and performance (geographically close to users who normally have to reach servers in the US), one major issue that was raised was handling differing privacy laws between countries that we have datacenters in. Feedback is welcome!

Bedrock running on Cloud9

jgmize wanted to let everyone know that Bedrock can now be set up on Cloud9, allowing developers and contributors to get a running instance of Bedrock with almost no interaction or software installed on their own machine. There’s a quickstart guide for setting it up, and he’s looking for people to try it out and also to consider trying out the model on their own projects as a way of helping on-board new contributors.


If you’re curious, the robot is coming along nicely. Once we’re able to get the imported railgun to clear customs, we should be good to go!

If you’re interested in web development at Mozilla, or want to attend next month’s Extravaganza, subscribe to the dev-webdev@lists.mozilla.org mailing list to be notified of the next meeting, and maybe send a message introducing yourself. We’d love to meet you!

See you next month!

 

Nick Cameron: A gotcha with raw pointers and unsafe code

This bit me today. It's not actually a bug and it only happens in unsafe code, but it is non-obvious and something to be aware of.

There are a few components to the issue. First off, we must look at `&expr` where `expr` is an rvalue, that is, a temporary value. Rust allows you to write (for example) `&42`, and through some magic, `42` will be allocated on the stack and you get a reference to it with an inferred lifetime that doesn't outlive the value. For example, `let x = &42i;` works, as does

struct Foo<'a> {
    f: &'a int,
}
fn main() {
    let x = Foo { f: &42 };
}

Next, we must know that borrowed pointers (`&T`) can be implicitly coerced to raw pointers (`*T`). So if you write `let x: *const int = &42;`, `x` is a raw pointer produced by coercing the borrowed pointer. Once this happens, you have no safety guarantees - a raw pointer can point at memory that has already been freed. This is fine, since you see the raw pointer type and must be aware, but if the type comes from a struct field, what looks like a borrowed pointer could actually be a raw pointer:

struct Bar {
    f: *const int,
}
fn main() {
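    // `&42` below silently coerces to *const int; the temporary may be freed
    // as soon as the statement ends, leaving x.f dangling.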
    let x = Bar { f: &42 };
}

Imagine that `Bar` is from some other module or crate; then you might assume that `main` is OK here. But it is not. Since the borrowed pointer has a narrow scope and is not stored (the raw pointer does not count for this analysis), the compiler can choose to delete the `42` allocated on the stack and reuse that memory straight after (or even during, probably) the `let` statement. So `x.f` is potentially a dangling pointer as soon as `x` is available, and accessing it will give you bugs. That is OK: you can only do so in an `unsafe` block, and thus you (as a programmer) should check that you can't get a dangling pointer. You must do this whenever you dereference a raw pointer, and the fact that it must happen in unsafe code is your cue to do so.

The final part of this gotcha was that I was already in unsafe code and was transmuting. Of course transmuting is awful and you should never do it, but sometimes you have to. If you do `unsafe { transmute(x) }` then there is no cue in the code that you have a raw pointer. You have no cue to check the dereference, because there is no dereference! You just get a weird bug that only appears on some platforms and depends on the optimisation level of compilation.

Unfortunately, there is nothing we can really do from the language point of view - you just have to be super-careful around unsafe code, and especially transmutes.

Hat-tip to eddyb for figuring out what was going on here.

Mark Surman: Snapping the puzzle together

I’ve had a picture in mind for a while: a vision of FirefoxOS + Appmaker + Webmaker mentor programs coming together to drive a new wave of creativity and content on the web. I believe this would be a way to really show what Mozilla stands for right now: putting access to the Internet in more hands and then helping people unlock the full potential of the web as a part of their lives and their livelihoods.

Puzzle pieces

The thing is: this picture has felt a bit like a puzzle until recently — I can see where it’s going, but we don’t have all the pieces. It’s like a vision or a theory more than a plan. However, over the past few months, things are getting clearer — feels like the puzzle pieces are becoming real and snapping together.

Bangladesh

Dinner w/ Mozilla Bangladesh

I had this ‘it’s coming together’ feeling in spades the other day as I had dinner w/ 20 members of the Mozilla community in Bangladesh. Across from me was a college student named Ani who was telling me about the Bengali keyboard he’d written for FirefoxOS. To his right was a woman named Maliha who was explaining how she’d helped the Mozilla Bangladesh community organize nearly 50 Webmaker workshops in the last two months. And then beside me, Mak was enthusiastically — and accurately — describing Mozilla’s new Mobile Webmaker to the rest of the group. I was rapt. And energized.

More importantly, I was struck by how the people around the table had nearly all the pieces of the puzzle amongst them. At a practical level, they are all actively working on the practicalities of localizing FirefoxOS and making it work on the ground in Bangladesh. They are finding people and places to teach Webmaker workshops. They have offered to help develop and test Appmaker to see if it can really work for users in Bangladesh. And, they see how these things fit together: people around the table talked about how all these things combined have the potential for huge impact. In particular, they talked about how phones, skills, and publishing tools built with Mozilla values could unleash a huge wave of Bengali language content onto the mobile internet. In a country where less than 10% of people speak English, this is a big deal.

The overall theory behind this puzzle is: open platforms + digital skills + local content = an opportunity to disrupt and open up the mobile Internet.


Well, at least, that’s my theory. I see open platforms like Firefox OS — and HTML5 in general — as the baseline. They make it possible for anyone to create apps and content for the mobile web on their own terms — and they are easy to learn. In order to unlock the potential of these platforms, we also need large numbers of people to have the skills to create their own apps and content. Which is what we’re trying to tee up with our Webmaker program. Finally, we need a huge wave of local content that smartphone users make for each other — which both Webmaker and Appmaker are meant to fuel. These are the puzzle pieces I think we need.

On this last point: the content needn’t be local per se — but it does need to be something of value to users that the web / HTML5 can provide better than existing mobile app stores and social networks. Local apps and content — and especially local language content — are a very likely sweet spot here. The Android Play Store and Facebook are bad — or at least limited — in how they support people creating content and apps. In languages like Bengali, the web — and Mozilla — have historically been much better.

But it’s a theory with enough promise — with enough pieces of the puzzle coming together — that we should get out there and test it out in practice. Doing this will require both discipline and people on the ground. Luckily, the Mozilla community has these things in spades.

India Community

Mozillians at Webmaker event in Pune

Talking with a bunch of people from the Mozilla India community underlined this part of things for me — and helped my thinking on how to test the local content theory. Vineel, Sayak and others told me about the recent launch of low cost Firefox OS smartphones in India — including a $33/R1999 phone from a company called Intex. As with Firefox releases in many other countries, the core launch team behind this effort were volunteer Mozilla contributors.
Working with Mozilla marketing staff from Taiwan, members of the Mozilla India community made a plan, trained Intex sales staff and promoted the phone. Early results: Intex sold 15,000 units in the first three days. And things have been picking up from there.

It’s exactly this kind of community driven plan and discipline that we will need to test out the Firefox OS + Appmaker + Webmaker theory. What we need is something like:

  1. Pick a couple of places to test out our theory — India and Bangladesh are likely options, maybe also Brazil and Kenya.
  2. Work with the community to test out the ‘everyone can author an app’ software first — find out what regular users want, adapt the software with them, test again.
  3. Make sure this test includes a strong Webmaker / training component — we should be testing how to teach skills at the same time as testing the software idea.
  4. Make sure we have both phones and a v1 of Mobile Webmaker in local languages
  5. Also, work with community to develop a set of basic app templates in local language — it’s important not to have an ‘empty shelf’ and also to build around things people actually want to make.
  6. Move from research to ‘market’ testing — put Mobile Webmaker on FirefoxOS phones and do a campaign of related Webmaker training sessions.
  7. Step back. See what worked. What didn’t. Iterate. In the market.

This sort of thing is doable in the next six months — but only if we get the right community teams behind us. I’m going to work on doing just that at ReMoCamp in Berlin this weekend. If there is interest and traction, we’ll start moving ahead quickly.

In the meantime, I’d be interested in comments on my theory above. We’re going to do something like this — we need everybody’s feedback and ideas to increase the likelihood of getting it right.


Filed under: mozilla, webmakers

Doug Belshaw: Weeknote 37/2014

This week I’ve been:

  • Interviewing more people about Web Literacy Map 2.0.
  • Agreeing (with Lyndsey Britton & Lauren Summers) on 8th November 2014 as the date for a re-arranged Maker Party North East.
  • Selling my iPad Mini. I’ve bought a 6.4″ Sony Xperia Z Ultra phablet to replace both it and my Moto G (the 3G version). I’m still waiting for delivery as it was about £50 cheaper to order it from Amazon Germany instead of Amazon UK!
  • Giving feedback on designs for a (much needed) Webmaker badge landing page.
  • Attending GitClub, led by Ricardo Vazquez.
  • Connecting with Gordon Gow about some potentially overlapping interests in web literacy-related mobile learning projects in Sri Lanka.
  • Speaking at (and attending) a Code Acts in Education seminar at the University of Stirling, organised by Ben Williamson. It was a great event, at which I met great people and learned loads! My slides are here.
  • Recording videos as part of the guidance for people to earn Mozilla’s ‘remix the web’ badge on the forthcoming iDEA award platform.

Next week I’m at home all week, interviewing more people about the Web Literacy Map, and starting to think about synthesizing what I’ve been hearing so far. I should also start thinking about my Mozilla Festival sessions and deliverable for the Badge Alliance Digital & Web Literacy working group….

Image CC BY Michael Himbeault

Will Kahn-Greene: Input status: September 12th, 2014

Development

High-level summary:

  • Updated to ElasticUtils v0.10, which will allow us to upgrade our cluster to Elasticsearch 1.1. I'm working on a fix that'll let us go to Elasticsearch 1.2, but that hasn't been released yet.
  • Integrated the spicedham library prototype and set it up to classify abusive Input feedback. It's not working great, but that's entirely to be expected. I'm hoping to spend more time on spicedham and classification in Input in 2014q4. Ian did a great job with laying the foundation! Thank you, Ian!
  • Implemented a data retention policy and automated data purging.
  • Made some changes to the Input feedback GET and POST APIs to clarify things in the docs, fix some edge cases and make it work better for Firefox for Android and Loop.
  • Fixed the date picker in Chrome. Thank you, Ruben!

Landed and deployed:

  • c4e8e34 [bug 1055520] Update to ElasticUtils v0.10
  • e023fa4 [bug 1055520] Fix two reshape issues post EU 0.10 update
  • f9ba829 [bug 1055785] Codify data retention policy
  • 91396a8 Generalize About page text so it works for all products
  • 6fc03bf [bug 1053863] Update django to 1.5.9
  • 85709b2 [bug 1055788] Implement data purging
  • c0677a1 [bug 965796] Add a products update page
  • 121588d [bug 1057353] Update django-statsd and pystatsd
  • fe1c740 Add PII-related notes to the API fields
  • f77ecfa [bug 799562] Clarify API field documentation
  • c5eec03 [bug 1055789] Restrict front page dashboard and api to 6 months
  • 0892546 [bug 1059826] Add max_length to url field in API
  • f192f84 [bug 1057617] Fix url data validation
  • aad961d [bug 1030901] Document Input GET API
  • 2f212c5 [bug 1015788] Add flake8 linting
  • d673947 Update coding conventions
  • 27a1b6b Add "maximum" arg to GET API
  • 4f671e4 [bug 1062436] Add flags app and Flag model
  • 9d03d4b Fix flake8_lint issues
  • 0411d91 [bug 1062453] Add flagged view
  • 56f7e24 [bug 1062439] Celery task for classification
  • 7aa2930 [bug 1062455] Add spicedham to vendor (Ian Kronquist)
  • 0d90df3 We don't need spicedham under vendor/packages (Ian Kronquist)
  • a2a491d fix bug 1012965 - Date picker looks broken in chrome (Ruben Vereecken)
  • 0c42213 [bug 1063825] Integrate spicedham into fjord
  • 78a2d63 [bug 1062444] Initial training data
  • 5ca816e [bug 1020307] Prepare for adding gradient to generic form

Current head: 5ca816e

Rough plan for the next two weeks

  1. Working on Dashboards-for-everyone bits. Documenting the GET API. Making it a bit more functional. Writing up some more examples. (https://wiki.mozilla.org/Firefox/Input/Dashboards_for_Everyone)
  2. Gradients (https://wiki.mozilla.org/Firefox/Input/Gradient_Sentiment)

What I need help with

  1. (django) Update to django-rest-framework 2.3.14 (bug #934979) -- I think this is straight-forward. We'll know if it isn't if the tests fail.
  2. (django, cookies, debugging) API response shouldn't create anoncsrf cookie (bug #910691) -- I have no idea what's going on here because I haven't looked into it much.

For details, see our GetInvolved page:

https://wiki.mozilla.org/Webdev/GetInvolved/input.mozilla.org

If you're interested in helping, let me know! We hang out on #input on irc.mozilla.org and there's the input-dev mailing list.

Additional thoughts

I've been codifying project plan details on the wiki:

https://wiki.mozilla.org/Firefox/Input

I have no idea who's going to use that information or whether it helps. If you see things that are missing, let me know. It'll help me hone the project management templates I'm using and know which information is important to keep up to date and which information I can let slide until rainy days.

That's it!

Matěj Cepl: On bibshare

(this is originally a comment on the post about “scientific Markdown”)

In my previous life I used TeX and BibTeX heavily for writing scholarly articles while working on my PhD in sociology. When building a large BibTeX database of bibliography there is a certain moment when one needs to establish some order in creating new keys for the individual references. When I hit that moment, I started to look around to see whether somebody had already done some thinking about the design of bibliography keys. I found almost nothing on the Web, perhaps apart from a file called bibshare (originally in $TEXMF/doc/bibtex/base/bibshare; now I cannot find it anywhere, so I have downloaded a version from an older tetex RPM to my website). It describes a pretty nice standard, which really should be rewritten as an RFC or something of that sort. The two biggest advantages are stable keys (so bibliographies can be exchanged) and more memorable ones. So, whenever I now see granovetter:AJS-1973-1360 I still remember (and it has been a couple of years since I last used BibTeX) that it is the awesome article "The Strength of Weak Ties" by Mark Granovetter.

Mozilla Release Management Team: Firefox 33 beta2 to beta3

  • 21 changesets
  • 55 files changed
  • 696 insertions
  • 410 deletions

Extension: Occurrences
  • list: 14
  • js: 11
  • h: 5
  • cpp: 5
  • html: 4
  • jsm: 2
  • css: 2
  • cc: 2
  • xul: 1
  • webidl: 1
  • java: 1
  • ini: 1
  • inc: 1
  • in: 1
  • build: 1

Module: Occurrences
  • layout: 15
  • dom: 9
  • toolkit: 8
  • mobile: 5
  • js: 5
  • widget: 3
  • media: 3
  • services: 2
  • gfx: 1
  • content: 1

List of changesets:

Mark Finkle: Bug 1042715 - Add support for Restricted Profiles r=rnewman a=lmandel - 080344c7c80b
Ryan VanderMeulen: Backed out changeset 080344c7c80b (Bug 1042715) for Android bustage. - fcf16a67fed4
Tim Taubert: Bug 1046645 - Mark moz-page-thumb:// as local resources to prevent mixed content warnings f=Mardak r=gavin a=lmandel - e0b583b1210e
Jason Orendorff: Follow-up 2 to Bug 1041631, part 1 - Make one last test work when Symbol is not defined. Backported from rev 74637aa07226. a=testonly. - 337d96ca1194
Jason Orendorff: Bug 1041631, part 2 - Make ES6 Symbols Nightly-only for now. r=Waldo, a=sledru. - eee93220473c
Martijn Wargers: Bug 1058797 - Intermittent test_303567.xul | Result logged after SimpleTest.finish(). r=mak, a=test-only - d2d97af8ecdd
Hiroyuki Ikezoe: Bug 1041262 - Disable autofilling of search engines to avoid failures in unified complete tests when searchengines is in the platform directory. r=mak, a=test-only - 320e081cac62
Landry Breuil: Bug 1014375 - Properly define JS_PUNBOX64 or JS_NUNBOX32 depending on the CPU arch r=nbp a=lmandel - 31a06334affd
Landry Breuil: Bug 1014375 followup - add missing ;; to unbreak the tree a=ryanvm. - f89c8ca38b12
Ryan VanderMeulen: Bug 1023323 - Mark 413361-1.html as fuzzy on Android 4.0. a=test-only - 8338468a7588
Benoit Jacob: Bug 1063048 - Backout 35ff4bfb198f because on DriverVersionMismatch our blacklisting logic is fooled and doesn't protect us against real crashes. r=Bas, a=lmandel - d0885f177e37
Wes Johnston: Bug 763671 - Remove gradient from form elements. r=mfinkle, a=lmandel - 7b4d4b3b7598
Mark Capella: Bug 1057685 - Regression: Tweak Browser:Quit to maintain existing support for add-ons - part deux. r=wesj, a=lmandel - dbfd31597299
Matt Woodrow: Bug 1059807 - Mark OSX printing surfaces as being write-only. r=roc, a=lmandel - 7b689c3657e4
Gian-Carlo Pascutto: Bug 1053264 - Do not use CAPTUREBLT when Desktop Composition is enabled. r=jimm, a=lmandel - 5638564e0d94
Gian-Carlo Pascutto: Bug 1060796 - Limit screen capture FPS. r=jesup, a=lmandel - 18ba9aece9bd
Benjamin Smedberg: Bug 1053745 - Add GMP plugin data to FHR. r=gfritzsche, a=lmandel - 02474d192901
Jan-Ivar Bruaroey: Bug 1062981 - Disable bfcache for pages active MediaManager. r=smaug, r=jesup, a=lmandel - 46abad0899f9
Xidorn Quan: Bug 1063856 - Add more counter styles from the Predefined Counter Styles document, for better interop and web-compat. r=jfkthame, a=lmandel - 8e9b139e30b9
Nils Ohlmeier [:drno]: Bug 1021220 - Verify absence of loopback in SDP offer. r=bwc, a=test-only - fdf2f580b665
Jan-Ivar Bruaroey: Bug 1063808 - Support old constraint-like RTCOfferOptions for a bit. r=smaug, r=abr, a=lmandel - d4082d3a082c

Mozilla Open Policy & Advocacy Blog: Reflections on the 9th Internet Governance Forum

I recently returned from Istanbul, Turkey where I attended the 9th annual Internet Governance Forum. This was my third IGF in a row, and my second with Mozilla. Like the others I’ve attended, it was a vibrant event, with over 3000 registrants from very different regions and interests culminating in an energizing, inspiring forum.

This year’s event reinforced my positive position on the IGF. It has a crucial role to play at the core of the Internet governance ecosystem, and it continues to fulfill that role far, far better than any other event. The IGF brings people from all walks of life into the same venue and it gets them to interact with each other and talk about difficult issues, face to face and in real time. This year, even remote participation worked fairly smoothly, as I attended a couple sessions that included speakers on video-conference connections.

Some viewed Turkey as an odd choice for a host, given the country’s history of social media blocking and other interference with free expression and activity online (including a law adopted just after the conclusion of IGF to make it even easier to block Web pages). The sentiment was strong enough to inspire the creation of a competing “Internet Ungovernance Forum” focused on promoting an open, secure, and free-as-in-speech Internet. Despite the undercurrents, both forums were well attended, and featured a broad range of interesting and expert speakers (and even some who were both!).

There is always a spotlight on IGF in the international Internet policy world. This year’s spotlight comes from NETmundial in Brazil and, looking ahead a bit, this October’s ITU Plenipotentiary Conference in Korea, a once-every-four-years convening for high-level intergovernmental activity at the core of the ITU’s mission.

So, what did that spotlight illuminate? As always, there were many broad-ranging discussions on Internet policy issues, and no structural mechanisms to move from policy development to any formalized decision-making. (But for the IGF, this is a feature, not a bug.)

Topically, if last year was the Snowden/surveillance IGF, this year was the net neutrality IGF, with at least three feeder sessions and a three-hour “main session” focused on the topic. I spoke at two of the net neutrality sessions, and attended the others. One of my sessions examined “network enhancement” and its relationship to net neutrality – a timely topic here in the United States, where opponents of strong net neutrality rules often indicate that excessive regulation will discourage investment in infrastructure. The other was the annual working session of the Dynamic Coalition on Network Neutrality, which was praised by conference organizers as one of the most effective examples of the ad-hoc IGF working coalitions. I also contributed a paper to the Coalition’s second annual report, drawing from Mozilla’s petition to the FCC and our July comments.

Surveillance had its moments in the spotlight as well, though it was less emphasized than last year. I spoke on two surveillance-related panels. A session organized by CIGI went straight to one of our core policy themes, trust, and how revelations of expansive surveillance have harmed trust, and what we can do to restore it. A separate session, co-organized by the Internet Society and CDT, focused on responses to surveillance, such as proposals to build additional IXPs and undersea cables, and new laws to mandate localization of data within a country. The group collectively opposed localization mandates as both unhelpful for protecting Internet users from surveillance and potentially disastrous to the global free and open Internet.

The IGF isn’t perfect. But it deserves the role it has as the first stop for collaborative discussion of issues related to governance “on” the Internet. Its mandate from the UN runs for one more year, through the 10th IGF in 2015, and then unless renewed the events will stop. But with massive support from many stakeholder groups in many regions of the world – and a host country for 2016 already lined up, by some accounts – I think the IGF will, and should, continue for many years to come.

Benjamin Kerensa: Off to Berlin

Right now, as this post is published, I’m probably settling into my seat for the next ten hours headed to Berlin, Germany as part of a group of leaders at Mozilla who will be meeting for ReMo Camp. This is my first transatlantic trip ever and perhaps my longest flight so far, so I’m both […]

William Lachance: Hacking on the Treeherder front end: refreshingly easy

Over the past two weeks, I’ve been working a bit on the Treeherder front end (our interface for managing build and test jobs from Mercurial changesets), trying to help get things in shape so that the sheriffs can feel comfortable transitioning to it from tbpl by the end of the quarter.

One thing that has pleasantly surprised me is just how easy it’s been to get going and be productive. The process looks like this on Linux or Mac:


git clone https://github.com/mozilla/treeherder-ui.git
cd treeherder-ui/webapp
./scripts/web-server.js

Then just load http://localhost:8000 in your favorite web browser (Firefox) and you should be good to go (it will load data from the actual Treeherder site). If you want to make modifications to the HTML, JavaScript, or CSS, just go ahead and do so with your favorite editor and the changes will be immediately reflected.

We have a fair backlog of issues to get through, many of them related to the front end. If you’re interested in helping out, please have a look:

https://wiki.mozilla.org/Auto-tools/Projects/Treeherder#Bugs_.26_Project_Tracking

If nothing jumps out at you, please drop by irc.mozilla.org #treeherder and we can probably find something for you to work on. We’re most active during Pacific Time working hours.

Liz Henry: How to test new features in Firefox 34 Aurora

If you’re a fan of free and open source software and would like to contribute to Firefox, join me for some Firefox feature testing!

There are some nifty features under development right now for Firefox 34 including translation in the browser, making voice or video calls (a feature called “Hello” or “Loop”), debugging information for web developers in the Dev Tools Inspector, and recent improvements to HTML5 gaming.

I’ve written step-by-step instructions on these ways to test Firefox 34. If you would like to see what it’s like to improve a popular open source project, trying out these tasks is a good introduction.

Aurora

First, install the Aurora version of Firefox. It is best to set it up to use multiple profiles. That ensures you don’t use your everyday version of Firefox for testing, so you won’t risk losing your usual profile information. It also makes it easy to restart Firefox with a new, clean profile with all the default settings, which is very useful for testing. Sometimes I realize I’m running 5 different versions of Firefox at once!

To test “Hello”, try making some voice or video calls from Firefox Aurora. You will need a friend to test with. Or, use two computers that you control. This is a good task to try while joining our chat channels, #qa or #testday on irc.mozilla.org; ask if anyone there wants to test Hello with you. The goal here is mostly to find and report new bugs.

If you test the translation infobar in Aurora you may find some new bugs. This is a fun feature to test. I like trying it on Wikipedia in many different languages, and also looking at newspapers!

If you’re a web developer, you may use Developer Tools in Firefox. I’m asking Aurora users to go through some unconfirmed bug reports, to help improve the Developer Tools Inspector.

If you like games you can test HTML5 web-based games in Firefox Aurora. This helps us improve Firefox and also helps the independent game developers. We have a list of demo games so you can play them, report glitches, and feel like a virtuous open source citizen all at once. Along the way you have opportunities to learn some interesting stuff about how graphics on the web can work (or not work).

Monster madness

These testing tasks are all set up in One and Done, Mozilla QA’s site to start people along the path to joining our open source community. This site was developed with a lot of community contribution including the design and concept by long-time community member Parul and a lot of code by two interns this summer, Pankaj and Maja.

Testing gives a great view into the development process for people who may not (yet) be programmers. I especially love how transparent Mozilla’s process can be. Anyone can report a bug, visible to the entire world in bugzilla.mozilla.org. There are many people watching that incoming stream of bug reports, confirming them and routing them to developer teams, sometimes tagging them as good first bugs for new contributors. Developers who may or may not be Mozilla employees show up in the bugs, like magic . . . if you think of bugmail notifications as magic . . .

It is amazing to see this very public and somewhat anarchic collaboration process at work. Of course, it can also be extremely satisfying to see a bug you discovered and reported, your pet bug, finally get fixed.

Gervase Markham: Praise and Criticism

Praise and criticism are not opposites; in many ways, they are very similar. Both are primarily forms of attention, and are most effective when specific rather than generic. Both should be deployed with concrete goals in mind. Both can be diluted by inflation: praise too much or too often and you will devalue your praise; the same is true for criticism, though in practice, criticism is usually reactive and therefore a bit more resistant to devaluation.

– Karl Fogel, Producing Open Source Software

Wounds from a friend can be trusted, but an enemy multiplies kisses.

Proverbs 27:6

Armen Zambrano: Run tbpl jobs locally with HTTP authentication (developer_config.py) - take 2

Back in July we deployed the first version of HTTP authentication for mozharness; however, under some circumstances the initial version could fail and affect production jobs.

This time around we have:

  • Removed the need for _dev.py config files
    • Each production config used to have an associated _dev.py config file
  • Prevented the developer mode from running in the production environment
    • The only way to enable developer mode is by appending --cfg developer_config.py

If you read How to run Mozharness as a developer you should see the new changes.

As a quick reminder, it only takes 3 steps:

  1. Find the command from the log. Copy/paste it.
  2. Append --cfg developer_config.py
  3. Append --installer-url/--test-url with the right values
To see a real example, visit this.

Niko Matsakis: Attribute and macro syntax

A few weeks back pcwalton introduced a PR that aimed to move the attribute and macro syntax to use a leading @ sigil. This means that one would write macros like:

@format("SomeString: {}", 22)

or

@vec[1, 2, 3]

One would write attributes in the same way:

@deriving(Eq)
struct SomeStruct {
}

@inline
fn foo() { ... }

This proposal was controversial. This debate has been sitting for a week or so. I spent some time last week reading every single comment and I wanted to lay out my current thoughts.

Why change it?

There were basically two motivations for introducing the change.

Free the bang. The first was to “free up” the ! sign. The initial motivation was aturon’s error-handling RFC, but I think that even if we decide not to act on that specific proposal, it’s still worth trying to reserve ! and ? for something related to error-handling. We are very limited in the set of characters we can realistically use for syntactic sugar, and ! and ? are valuable “ASCII real-estate”.

Part of the reason for this is that ! has a long history of being the sigil one uses to indicate something dangerous or surprising. Basically, something you should pay extra attention to. This is partly why we chose it for macros, but in truth macros are not dangerous. They can be mildly surprising, in that they don’t necessarily act like regular syntax, but having a distinguished macro invocation syntax already serves the job of alerting you to that possibility. Once you know what a macro does, it ought to just fade into the background.

Decorators and macros. Another strong motivation for me is that I think attributes and macros are two sides of the same coin and thus should use similar syntax. Perhaps the most popular attribute – deriving – is literally nothing more than a macro. The only difference is that its “input” is the type definition to which it is attached (there are some differences on the implementation side presently – e.g., deriving is based off the AST – but as I discuss below I’d like to erase that distinction eventually). That said, right now attributes and macros live in rather distinct worlds, so I think a lot of people view this claim with skepticism. So allow me to expand on what I mean.

How attributes and macros ought to move closer together

Right now attributes and macros are quite distinct, but looking forward I see them moving much closer together over time. Here are some of the various ways.

Attributes taking token trees. Right now attribute syntax is kind of specialized. Eventually I think we’ll want to generalize it so that attributes can take arbitrary token trees as arguments, much like macros operate on token trees (if you’re not familiar with token trees, see the appendix). Using token trees would allow more complex arguments to deriving and other decorators. For example, it’d be great to be able to say:

@deriving(Encodable(EncoderTypeName<foo>))

where EncoderTypeName<foo> is the name of the specific encoder that you wish to derive an impl for, vs today, where deriving always creates an encodable impl that works for all encoders. (See Issue #3740 for more details.) Token trees seem like the obvious syntax to permit here.

Macros in decorator position. Eventually, I’d like it to be possible for any macro to be attached to an item definition as a decorator. The basic idea is that @foo(abc) struct Bar { ... } would be syntactic sugar for (something like) @foo((abc), (struct Bar { ... })) (presuming foo is a macro).

An aside: it occurs to me that to make this possible before 1.0 as I envisioned it, we’ll need to at least reserve macro names so they cannot be used as attributes. It might also be better to have macros declare whether or not they want to be usable as decorators, just so we can give better error messages. This has some bearing on the “disadvantages” of the @ syntax discussed below, as well.

Using macros in decorator position would be useful for those cases where the macro is conceptually “modifying” a base fn definition. There are numerous examples: memoization, some kind of generator expansion, more complex variations on deriving or pretty-printing, and so on. A specific example from the past was the externfn! wrapper that would both declare an extern "C" function and some sort of Rust wrapper (I don’t recall precisely why). It was used roughly like so:

externfn! {
    fn foo(...) { ... }
}

Clearly, this would be nicer if one wrote it as:

@extern
fn foo(...) { ... }

Token trees as the interface to rule them all. Although the idea of permitting macros to appear in attribute position seems to largely erase the distinction between today’s “decorators”, “syntax extensions”, and “macros”, there remains the niggly detail of the implementation. Let’s just look at deriving as an example: today, deriving is a transform from one AST node to some number of AST nodes. Basically it takes the AST node for a type definition and emits that same node back along with various nodes for auto-generated impls. This is completely different from a macro-rules macro, which operates only on token trees. The plan has always been to move deriving out of the compiler proper and make it “just another” syntax extension that happens to be defined in the standard library (the same applies to other standard macros like format and so on).

In order to move deriving out of the compiler, though, the interface will have to change from ASTs to token trees. There are two reasons for this. The first is that we are simply not prepared to standardize the Rust compiler’s AST in any public way (and have no near term plans to do so). The second is that ASTs are insufficiently general: we want syntax extensions to be able to accept all kinds of inputs, not just Rust ASTs.

Note that syntax extensions, like deriving, that wish to accept Rust ASTs can easily use a Rust parser to parse the token tree they are given as input. This could be a cleaned up version of the libsyntax library that rustc itself uses, or a third-party parser module (think Esprima for JS). Using separate libraries is advantageous for many reasons. For one thing, it allows other styles of parser libraries to be created (including, for example, versions that support an extensible grammar). It also allows syntax extensions to pin to an older version of the library if necessary, allowing for more independent evolution of all the components involved.
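
(To make the “token trees in, token trees out” idea concrete, here is a purely illustrative sketch. The TokenTree type below is a local stand-in, not the real libsyntax type, and expand_derive_encodable is an invented name; the eventual interface would be worked out as part of moving deriving out of the compiler.)

#[derive(Debug)]
enum TokenTree {
    Token(String),                   // an uninterpreted token, e.g. "struct" or "Foo"
    Delimited(char, Vec<TokenTree>), // a (), [] or {} group together with its children
}

// A decorator-style extension: it receives the token trees of the item it is
// attached to and returns that item plus any generated items, all as tokens.
fn expand_derive_encodable(item: Vec<TokenTree>) -> Vec<Vec<TokenTree>> {
    // A real implementation would hand `item` to a parser library, inspect the
    // struct's fields, and emit an impl; here we simply echo the item back.
    vec![item]
}

fn main() {
    let item = vec![
        TokenTree::Token("struct".to_string()),
        TokenTree::Token("Foo".to_string()),
        TokenTree::Delimited('{', vec![]),
    ];
    println!("{:?}", expand_derive_encodable(item));
}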

What are the objections?

There were two big objections to the proposal:

  1. Macros using ! feels very lightweight, whereas @ feels more intrusive.
  2. There is an inherent ambiguity since @id() can serve as both an attribute and a macro.

The first point seems to be a matter of taste. I don’t find @ particularly heavyweight, and I think that choosing a suitable color for the emacs/vim modes will probably help quite a bit in making it unobtrusive. In contrast, I think that ! has a strong connotation of “dangerous” which seems inappropriate for most macros. But neither syntax seems particularly egregious: I think we’ll quickly get used to either one.

The second point regarding potential ambiguities is more interesting. The ambiguities are easy to resolve from a technical perspective, but that does not mean that they won’t be confusing to users.

Parenthesized macro invocations

The first ambiguity is that @foo() can be interpreted as either an attribute or a macro invocation. The observation is that @foo() as a macro invocation should behave like existing syntax, which means that either it should behave like a method call (in a fn body) or a tuple struct (at the top level). In both cases, it would have to be followed by a “terminator” token: either a ; or a closing delimiter, i.e. ), ], or }. Therefore, we can simply peek at the next token to decide how to interpret @foo() when we see it.

I believe that, using this disambiguation rule, almost all existing code would continue to parse correctly if it were mass-converted to use @foo in place of the older syntax. The one exception is top-level macro invocations. Today it is common to write something like:

declaremethods!(foo, bar)

struct SomeUnrelatedStruct { ... }

where declaremethods! expands out to a set of method declarations or something similar.

If you just transliterate this to @, then the macro would be parsed as a decorator:

@declaremethods(foo, bar)

struct SomeUnrelatedStruct { ... }

Hence a semicolon would be required, or else {}:

@declaremethods(foo, bar);
struct SomeUnrelatedStruct { ... }

@declaremethods { foo, bar }
struct SomeUnrelatedStruct { ... }

Note that both of these are more consistent with our syntax in general: tuple structs, for example, are always followed by a ; to terminate them. (If you replace @declaremethods(foo, bar) with struct Struct1(foo, bar), then you can see what I mean.) However, today if you fail to include the semicolon, you get a parser error, whereas here you might get a surprising misapplication of the macro.

Macro invocations with braces, square or curly

Until recently, attributes could only be applied to items. However, recent RFCs have proposed extending attributes so that they can be applied to blocks and expressions. These RFCs introduce additional ambiguities for macro invocations based on [] and {}:

  • @foo{...} could be a macro invocation or an annotation @foo applied to the block {...},
  • @foo[...] could be a macro invocation or an annotation @foo applied to the expression [...].

These ambiguities can be resolved by requiring inner attributes for blocks and expressions. Hence, rather than @cold x + y, one would write (@!cold x) + y. I actually prefer this in general, because it makes the precedence clear.

OK, so what are the options?

Using @ for attributes is popular. It is the use with macros that is controversial. Therefore, as I see it, there are three things on the table:

  1. Use @foo for attributes, keep foo! for macros (status quo-ish).
  2. Use @foo for both attributes and macros (the proposal).
  3. Use @[foo] for attributes and @foo for macros (a compromise).

Option 1 is roughly the status quo, but moving from #[foo] to @foo for attributes (this seemed to be universally popular). The obvious downside is that we lose ! forever and we also miss an opportunity to unify attribute and macro syntax. We can still adopt the model where decorators and macros are interoperable, but it will be a little more strange, since they look very different.

The advantages of Option 2 are what I’ve been talking about this whole time. The most significant disadvantage is that adding a semicolon can change the interpretation of @foo() in a surprising way, particularly at the top-level.

Option 3 offers most of the advantages of Option 2, while retaining a clear syntactic distinction between attributes and macro usage. The main downside is that Option 2’s @deriving(Eq) and @inline follow the precedent of other languages more closely and arguably look cleaner than @[deriving(Eq)] and @[inline].

What to do?

Currently I personally lean towards options 2 or 3. I am not happy with Option 1 both because I think we should reserve ! and because I think we should move attributes and macros closer together, both in syntax and in deeper semantics.

Choosing between options 2 and 3 is difficult. It seems to boil down to whether you feel the potential ambiguities of @foo() outweigh the attractiveness of @inline vs @[inline]. I don’t personally have a strong feeling on this particular question. It’s hard to say how confusing the ambiguities will be in practice. I would be happier if placing or failing to place a semicolon at the right spot yielded a hard error.

So I guess I would summarize my current feeling as being happy with either Option 2, but with the proviso that it is an error to use a macro in decorator position unless it explicitly opts in, or Option 3, without that proviso. This seems to retain all the upsides and avoid the confusing ambiguities.

Appendix: A brief explanation of token trees

Token trees are the basis for our macro-rules macros. They are a variation on token streams in which tokens are basically uninterpreted except that matching delimiters ((), [], {}) are paired up. A macro-rules macro is then “just” a translation from a token tree to another token tree. This output token tree is then parsed as normal. Similarly, our parser is actually not defined over a stream of tokens but rather a token tree.
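
(As a small, concrete illustration of “uninterpreted except for paired delimiters”, here is a minimal macro-rules example, written with today’s ! invocation syntax and not taken from the post, whose matcher works purely on token trees; a parenthesized, bracketed, or braced group counts as a single tree.)

macro_rules! count_tts {
    // Base case: no token trees left.
    () => { 0usize };
    // Consume one token tree ($head:tt) and recurse on the rest.
    ($head:tt $($tail:tt)*) => { 1usize + count_tts!($($tail)*) };
}

fn main() {
    // `a`, `(b c)`, `[d e f]` and `{ g }` are four token trees in total,
    // because each delimited group is paired up into a single tree.
    let n = count_tts!(a (b c) [d e f] { g });
    assert_eq!(n, 4);
}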

Our current implementation deviates from this ideal model in some respects. For one thing, macros take as input token trees with embedded ASTs, and the parser parses a stream of tokens with embedded token trees, rather than token trees themselves, but these details are not particularly relevant to this post. I also suspect we ought to move the implementation closer to the ideal model over time, but that’s the subject of another post.

Ricky Rosario: SUMO Development Update 2012.1

SUMO Dev goes agile

Inspired by the MDN Dev team, the SUMO Dev team decided to try an agile-style planning process in 2012.

To be fair, we have always been pretty agile, but perhaps we were more on the cowboy side than the waterfall side. We planned our big features for the quarter and worked towards that. Along the way, we picked up (or were thrown) lots of other bugs based on the hot issue of the day or week, contributor requests, scratching our own itch, etc. These bugs ended up taking time away from the major features we set as goals and, in some cases, ended up delaying them. This new process should help us become more predictable.

Starting out by copying what MDN has been doing for some time now, we are doing two-week sprints. We will continue to push out new code weekly for now, so it is kind of weird in that each sprint has two milestones within it. We will continue to name the milestones by the date of the push (i.e., "2012-01-24" for today's push) and we are naming sprints as YEAR.sprint_number (i.e., "2012.1" was our first sprint). We hope to be doing continuous deployment soon. At that point we will only have to track one milestone (the sprint) at a time. For more details on our process, check out our Support/SUMOdev Sprints wiki page.

2012.1 sprint

We just pushed the second half of our first sprint to production. Some data:

  • Closed Stories: 26
  • Closed Points: 34
  • Developer Days: 36
  • Velocity: .94 pts/day

Our major focus of this sprint was getting our Elastic Search implementation (we are in the process of switching from Sphinx) to the point where we can index and start rolling it out to users. After today's push, we will find out whether this is working properly. *fingers crossed* (UPDATE: we did hit an issue with the indexing.)

Other stuff we landed:

  • Initial support for the apps marketplace. Basically, a landing page and a question workflow that integrates with zendesk for 1:1 help.
  • KPI (Key Performance Indicator) Dashboard. We landed the first chart which displays % of solved questions (it has a math bug in it that will get fixed in the next push).
  • Some minor UI fixes and improvements.

2012.2 sprint

We are currently halfway through our second sprint. Our main goals with this sprint are to get Elastic Search out to 15% of our users and to add a bunch of new metrics charts to the KPI Dashboard.

In my opinion, this new planning process is going well so far. The product team has better insight into what the dev team is up to day to day. And the dev team has a better sense of what the short-term priorities are. Probably the most awesome thing about it is that we are collecting lots of great data. The part I have liked the least so far has been the actual planning sessions; I end up pretty tired after those. I think it just needs a little getting used to, and it is only 1-2 hours every two weeks.

:-)

Ricky Rosario: SUMO Development: 2012.3 and 2012.4 Update

Oops, I procrastinated and forgot to post an update for 2012.3, and now we are done with 2012.4 too.

2012.3 sprint

  • Closed Stories: 26
  • Closed Points: 37 (3 aren't used in the velocity calculation as they were fixed by James and Kadir - Thanks!)
  • Developer Days: 28
  • Velocity: 1.21 pts/day

The 2012.3 sprint went very well. We accomplished most of the goals we set out to do. We rolled out Elastic Search to 50% of our users and had it going for several days. We fixed some of the blocker bugs and came up with a plan for reindexing without downtime. Everything was great until we decided to add some timers to the search view in order to compare times of the Elastic Search vs the Sphinx code path. As soon as we saw some data, we decided to shut down Elastic Search. Basically, the ES path was taking about 4X more time than the Sphinx path. Yikes! We got on that right away and started looking for improvements.

On the KPI Dashboard side, we landed 4 new charts as well as some other enhancements. The new charts show metrics for:

  • Search click-through rate
  • Number of active contributors to the English KB
  • Number of active contributors to the non-English KB
  • Number of active forum contributors

We did miss the goal of adding a chart for active Army of Awesome contributors, as it turned out to be more complicated than we initially thought. So that slipped to 2012.4.

2012.4 sprint

  • Closed Stories: 20
  • Closed Points: 24
  • Developer Days: 19
  • Velocity: 1.26 pts/day

The 2012.4 sprint was sad. It was the first sprint without ErikRose :-(. We initially planned to have TimW help us part time, but he ended up getting too busy with his other projects. We did miss some of our initial goals, but we did as well as we could.

The good news is that we improved the search performance with ES a bunch. It still isn't on par with Sphinx, but it is good enough that we went back to using it for 50% of the users. We have plans to make it faster, but for now it looks like the click-through rates on results are already higher than what we get with Sphinx. That makes us very happy :-D.

We added two new KPI dashboard charts: daily unique visitors and active Army of Awesome contributors. We also landed new themes for the new Aurora community discussion forums.

2012.5 sprint

This week we started working on the 2012.5 sprint. Our goals are:

  • Elastic Search: refactor search view to make it easier to do ES-specific changes.
  • Elastic Search: improve search view performance (get us closer to Sphinx).
  • Hide unanswered questions that are over 3 months old. They don't add any value, so there is no reason to show them to anybody or have them indexed by google and other search engines.
  • Branding and styling updates for Marketplace pages
  • KPI Dashboard: l10n chart
  • KPI Dashboard: Combine solved and responded charts

We are really hoping to be ready to start dialing up the Elastic Search flag to 100% by the time we are done with this sprint.

Ricky Rosario: SUMO Development: 2012.2 Update

Yesterday we shipped the second half of the 2012.2 sprint. We ended up accomplishing most of our goals:

  • [Elastic Search] Perform full index in prod - DONE
  • [Elastic Search] Roll out to 15% of users - DONE
  • Add more metrics to KPI dashboard - INCOMPLETE (We landed 3 out of the 4 new graphs we wanted).

Not too bad. In addition to this, we made other nice improvements to the site:

Great progress for two weeks of work! Some data from the sprint:

  • Closed Stories: 30
  • Closed Points: 38
  • Developer Days: 35
  • Velocity: 1.08 pts/day

Onward to 2012.3

We are now a little over halfway into the 2012.3 sprint. Our goals are to roll out Elastic Search to 50% of users, be ready to roll out to 100% (fix all blockers) and add 5 new KPI metrics to the KPI dashboard. So far so good, although we keep finding new issues as we continue to roll out Elastic Search to more users. That deserves its own blog post though.

Ricky Rosario: Joined the Mozilla Web Team

After 3 great years at Razorfish, I decided to move on and joined Mozilla 2 weeks ago. I will be working remotely, but I spent the first week in Mountain View doing new hire orientation, setting up my shiny new MBP i7, setting up development environments for zamboni (the new addons site) and kitsune (the new support site), and fixing some easy bugs to start getting familiar with the codebase.

So far, I am loving it. Some of my initial observations:

  • My coworkers are super smart and awesome.
  • The main communication channel is through IRC (even when people are sitting nearby in the office). This works out great for the remote peeps like myself.
  • We use git/github for our branch -> work on bug/feature -> review -> commit workflow. I am loving the process, and github helps a ton with their UI for commenting on code.
  • Continuous Integration is the nuts.
  • Automated functional testing ^^.
  • Writing open source software full-time, and getting paid? Unreal!

I am working on SUMO (support.mozilla.com). It is currently going through a rewrite from tiki wiki to django (kitsune project). Working full time with django is like a dream come true for me (a very nerdy dream :).

Anyway, it is very exciting to work for Mozilla serving over 400 million Firefox users. I am looking forward to this new chapter in my career!

Ricky Rosario: dotjs: My first Firefox Add-on

Inspired by defunkt's dotjs Chrome extension, I finally decided to play with the new add-on sdk to port the concept to Firefox. dotjs executes JavaScript files in ~/.js based on their filename and the domain of the site you are visiting. For example, if you navigate to http://www.twitter.com, dotjs will execute ~/.js/twitter.com.js. It also loads in jQuery so you can use jQuery in your scripts even if the site doesn't use jQuery (it is loaded with .noConflict so it doesn't interfere with any existing jQuery on the page).

You can get the add-on for Firefox 4 on AMO and it doesn't require a browser restart (woot!). The code is on github. Feedback and patches welcome!

Ricky Rosario: support.mozilla.org (SUMO) +dev in 2013

This is my first and last blog post for 2013!

Whewww, 2013 has been another splendid year for SUMO and the SUMO/INPUT Engineering team. We did lose our manager, James Socol, early in the year (and missed him a ton), and I took over the managerial duties for the team, but the core dev team stayed intact.

Some metrics

Here are some metrics about what our platform, team and community was up to in 2013:

  • Page views: 502,812,271
  • Visits: 255,122,331
  • Unique visits: 190,633,959
  • Questions asked: 33,482
  • Questions replied to: 31,746 (94.8%)
  • Questions solved: 9,048 (27%)
  • Replies to questions: 119,440
  • Support Forum contributors:
    1+ answers: 8,723
    2+ answers: 3,436
    3+ answers: 1,764
    5+ answers: 742
    10+ answers: 247
    25+ answers: 97
    50+ answers: 63
    100+ answers: 42
    250+ answers: 22
    500+ answers: 17
    1000+ answers: 11
    2500+ answers: 7
    5000+ answers: 3
    10000+ answers: 1 (20,057 answers by cor-el)
  • Army of Awesome tweets handled: 46,030
  • Army of Awesome contributors: 911
  • Knowledge Base (KB) Revisions: 16,561
    en-US KB Revisions: 2,975
    L10n KB Revisions: 13,586
  • Locales with activity: 55
  • en-US KB Contributors: 165
  • L10n KB Contributors: 607
  • KB Helpful votes: 4,214,528 (72.6%)
  • KB Unhelpful votes: 1,587,416 (27.4%)

More metrics

Willkg wrote a blog post that contains a lot more metrics specific to our development (bugs filed, bugs resolved, commits, major projects, etc.). Go check it out!

I wanted to highlight a few things he mentioned:

In 2011, we had 19 people who contributed code changes.
In 2012, we had 23 people.
In 2013, we had 32 people.

YAY!

Like 2011 and 2012, we resolved more bugs than we created in 2013. That's three years in a row! I've never seen that happen on a project I work on.

WOOT!

Input also had a great year in 2013. Check out willkg's blog post about it.

Onward

2013 was a great year for the SUMO platform. We fine-tuned the KB information architecture work we began in 2012 and simplified all of the landing pages (home, product, topic). In 2014, I am hoping we can make the Support Forum as awesome as the KB is today.

In addition to making the KB awesomer... The Support Forums now support more locales than just English. We now send HTML and localized emails! We added Open Badges! We switched to YouTube for videos. We improved search for locales. We made deployments better. We implemented Persona (not enabled yet). We implemented escalation of questions to the helpdesk. We added lots of new and improved dashboards and tools for contributors and community managers. At the same time, we made lots of backend and infrastructure improvements that make the site more stable and resilient and our code more awesome.

As a testament to the awesomeness of the platform, new products have come to us asking us to be their support platform. We are now the support site for Webmaker and will be adding Open Badges and Thunderbird early in 2014.

Thanks to the amazing awesome splendid dev team, the SUMO staff and the community for an awesome 2013!

Christie Koehler: An Update from the MozillaWiki Team, including a report from Wikimania London

Last week we pushed a major upgrade to MozillaWiki, one that was months in the making. This post discusses the process of that upgrade and also talks about work the MozillaWiki Team did while together in London for Wikimania.

Who is the MozillaWiki Team?

The MozillaWiki team (formerly called the Wiki Working Group) is a mix of paid and volunteer contributors working to improve MozillaWiki. It is facilitated by MozillaWiki module owner (myself) and peers Gordon P. Hemsley and Lyre Calliope (both volunteer contributors).

Results from MozillaWiki user survey informs current roadmap

This summer, OPW (GNOME Outreach Program for Women) intern Joelle conducted a survey of MozillaWiki users. Much of our current roadmap is informed by the results of this survey, including re-organizing the Main Page, making information easier to find, improving the mobile experience and making editing easier.

If you’re interested in the results of that survey, watch her presentation Improving the Gateway: Mozilla Wiki User Research.

Why upgrade Mozilla Wiki now?

The primary motivation for this upgrade was to bring MediaWiki, the software that runs MozillaWiki, up to date. Running a relatively old version of MediaWiki (1.19) prevented us from utilizing newer, beneficial features as well as useful extensions that require current versions of MediaWiki.

The Mozilla Wiki now utilizes MediaWiki version 1.23, and you can read about key features and improvements here: https://wiki.mozilla.org/MozillaWiki:News/2014-08/Upgrade_to_MediaWiki_1.23#MediaWiki_changes

This upgrade was carried out in two steps. The first was to change the default skin to Vector, which we did at the beginning of August. The second was to upgrade the software and require all users to use the new skin. This work we did last week.

Why did we choose Vector and drop support for all other skins?

Creating and maintaining MediaWiki skins is a complex and time-consuming process.

The two previous custom skins used on MozillaWiki were Cavendish and GMO. Already these themes, particularly GMO, were missing features available to users in officially supported skins. Our planned upgrade would make this disparity in user experience even greater. While planning the upgrade, we determined it didn’t make sense to expend resources keeping these skins tested and up to date, nor did it make sense to continue to offer a broken user experience just to maintain familiarity.

We selected Vector as the default skin because it is the one supported by MediaWiki itself and is thereby guaranteed to be stable and fully-featured. MonoBook is another theme supported by MediaWiki and we have left that enabled and available to use for those users who want an alternative look and feel. (You can make this change on your preferences page.)

Report from Wikimania London

As I mentioned, the MozillaWiki team has been preparing for and planning this upgrade for several months. A small group of us gathered in London this August to have dedicated time to work together and learn about MediaWiki and how to best utilize it at Mozilla by attending Wikimania, the annual Wikimedia community conference.

The group included an even mix of paid and volunteer contributors who had been regularly participating in MozillaWiki team activities: Lyre Calliope, Jennie Halperin, Joelle F, Gordon P. Hemsley, C Liang and myself.

We spent the first two days hacking on MozillaWiki and the other three attending conference sessions and hacking together in between.

Having this rare time together in one place allowed us to get a lot done in a relatively short period of time.

Tasks we accomplished include:

  • updated sidebar (only visible in Vector and MonoBook)
  • created and deployed a new Main Page
  • roadmap planning through 2015 q1
  • planned and tested an upgrade to MediaWiki 1.23
  • continued to work on category planning

During the Wikimania conference, we accomplished the following:

  • learned about upcoming changes in MediaWiki, such as the new search extension (elastic search)  and visual editor
  • generated ideas for engaging new contributors across Mozilla projects, via targeted campaigns and directed play
  • generated ideas for recognizing different kinds of contributions leveraging badges and other projects at Mozilla
  • increased awareness of the Mozilla Wiki in the larger wiki community
  • learned about ways to enable real-time collaboration on the wiki
  • invited a number of Wikimedians to join Mozilla via the Wiki Working Group, CBT, and other areas

All of this information and collaboration helped us create our current roadmap.

Improvements planned for rest of 2014

We’re really proud of the work we’ve done on the Mozilla Wiki so far, but we’ve no intention of slowing down yet. Improvements we’re planning to roll out this year include:

  • Bug 1051201 – Audit and adjust user rights (to restore important feature to users and make wiki easier to use)
  • Bug 1051189 – Install MobileFrontend extension (to provide a mobile-friendly interface)
  • Bug 915187 – Improve search
  • Bug 1051204 – Implement real-time collaborative editing
  • Bug 1051206 – Improve discussion and collaboration
  • Bug 1064994 – Improve page categorization

An invitation to Participate

We hope you’re liking our work on MozillaWiki so far! We invite all those who would like to contribute to the wiki to join our regular MozillaWiki team meetings which are every other Tuesday at 8:30am PT (15:30 UTC). Our next meeting is 16 September. Participation details.

Monica Chew: Making decisions with limited data

It is challenging but possible to make decisions with limited data. For example, take the rollout saga of public key pinning.

The first implementation of public key pinning included enforcing pinning on addons.mozilla.org. In retrospect, this was a bad decision because it broke the Addons Panel and generated pinning warnings 86% of the time. As it turns out, the pinset was missing some Verisign certificates used by services.addons.mozilla.org, and the pinning enforcement on addons.mozilla.org included subdomains. Having more data lets us avoid bad decisions.

To enable safer rollouts, we implemented a test mode for pinning. In test mode, pinning violations are counted but not enforced. With sufficient telemetry, it is possible to measure how badly sites would break without actually breaking the site.
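
(To make the test-mode idea concrete, here is a heavily simplified, hypothetical sketch; Firefox's actual pinning implementation lives in C++ inside PSM and uses none of these names. The only point is the difference between counting a violation and enforcing it.)

#[derive(PartialEq)]
enum PinningMode {
    Test,       // count violations via telemetry, but do not enforce
    Production, // count violations and reject the certificate chain
}

struct Pinset {
    allowed_key_hashes: Vec<String>, // hashes of the public keys pinned for a host
    mode: PinningMode,
}

// Returns true if the connection should be allowed to proceed.
fn check_pins(pinset: &Pinset, chain_key_hashes: &[String], violations: &mut u64) -> bool {
    let ok = chain_key_hashes.iter().any(|h| pinset.allowed_key_hashes.contains(h));
    if !ok {
        *violations += 1; // the violation is counted in both modes
    }
    ok || pinset.mode == PinningMode::Test // only Production mode actually blocks
}

fn main() {
    let mut violations = 0u64;
    let pinset = Pinset {
        allowed_key_hashes: vec!["hash-of-expected-key".to_string()],
        mode: PinningMode::Test,
    };
    let allowed = check_pins(&pinset, &["hash-of-unexpected-key".to_string()], &mut violations);
    // In test mode the handshake proceeds even though a violation was recorded.
    println!("allowed={} violations={}", allowed, violations);
}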

Due to privacy restrictions in telemetry, we do not collect per-organization pinning violations except for Mozilla sites that are operationally critical to Firefox. This means that it is not possible to distinguish pinning violations for Google domains from Twitter domains, for example. I do not believe that collecting the aggregated number of pinning violations for sites on the Alexa top 10 list constitutes a privacy violation, but I look forward to the day when technologies such as RAPPOR make it easier to collect actionable data in a privacy-preserving way.

Fortunately for us, Chrome has already implemented pinning on many high-traffic sites. This is fantastic news, because it means we can import Chrome’s pin list in test mode with relatively high assurance that the pin list won’t break Firefox, since it is already in production in Chrome.

Given sufficient test mode telemetry, we can decide whether to enforce pins instead of just counting violations. If the pinning violation rate is sufficiently low, it is probably safe to promote the pinned domain from test mode to production mode. The screenshot below shows a 3-week period where we promoted cdn.mozilla.com, media.mozilla.com and Google domains to production, as well as expanded coverage on Twitter to include all subdomains.



Because the current implementation of pinning in Firefox relies on built-in static pinsets and we are unable to count violations per-pinset, it is important to track changes to the pinset file in the dashboard. Fortunately HighStock supports event markers, which somewhat alleviates this problem, and David Keeler also contributed some tooltip code to roughly associate dates with Mercurial revisions. Armed with the timeseries of pinning violation rates and event markers for the dates when we promoted organizations to production mode (or when high-traffic organizations like Dropbox were added in test mode due to a new import from Chromium), we can see whether pinning is working or not.

Telemetry is useful for forensics, but in our case, it is not useful for catching problems as they occur. This limitation is due to several difficulties, which I hope will be overcome by more generalized, comprehensive SSL error-reporting and HPKP:
  • Because pinsets are static and built-in, there is sometimes a 24-hour lag between making a change to a pinset and reaching the next Nightly build.
  • Telemetry information is only sent back once per day, so we are looking at a 2-day delay between making a change and receiving any data back at all.
  • Telemetry dashboards (as accessible from telemetry.js and telemetry.mozilla.org) need about a day to aggregate, which adds another day.
  • Update uptake rates are slow. The median time to update Nightly is around 3 days, getting to 80% takes 10 days or longer.
Due to these latency issues, pinning violation rates take at least a week to stabilize. Thankfully, telemetry is on by default in all pre-release channels as of Firefox 31, which gives us a lot more confidence that the pinning violation rates are representative.

Despite all the caveats and limitations, using these simple tools we were able to successfully roll out pinning to pretty much all the sites that we’ve attempted (including AMO, our unlucky canary) as of Firefox 34, and we look forward to expanding coverage.

Thanks for reading, and don’t forget to update your Nightly if you love Mozilla! :)

Kim Moir: Mozilla pushes - August 2014

Here's August 2014's monthly analysis of the pushes to our Mozilla development trees.  You can load the data as an HTML page or as a json file.



Trends
It was another record-breaking month.  No surprise here!

Highlights
  • 13090 pushes
    • new record
  • 422 pushes/day (average)
    • new record
  • Highest number of pushes/day: 690 pushes on August 20.  This same day corresponded with our first day where we ran over 100,000 test jobs.
    • new record
  • 23.12 pushes/hour (average)

General Remarks
Both Try and Gaia-Try have about 36% each of the pushes.  The three integration repositories (fx-team, mozilla-inbound and b2g-inbound) account for around 21% of all the pushes.


Records
August 2014 was the month with most pushes (13,090  pushes)
August 2014 has the highest pushes/day average with 620 pushes/day
July 2014 has the highest average of "pushes-per-hour" with 23.51 pushes/hour
August 20, 2014 had the highest number of pushes in one day with 690 pushes






Prashish Rajbhandari: The Plan – After MozDrive

“You should install a GPS tracking app in your phone so that we can track your location anytime.”

“If we don’t see your social media update every hour, we’re going to call and find you.”

“When you go through this route, don’t go wandering around except the highway. Just drive, don’t stop.”

“You better be in one piece at the end of the 25 days. We need you here!”

It was my last day as an intern in Silicon Valley. My colleagues and supervisor jokingly threw comments and suggestions at me while I was debugging my final piece of production code before I left. All my luggage and equipment were packed and stationed in the office so that I could go directly to the airport to catch my flight. My schedule was so tight that I had to drive out the day I landed in Cincinnati. There was no time to relax or meet friends after I got home. I had to immediately check in with my friend who was supposed to drive out with me, pack my remaining things, make sure we had everything for the journey (food, medicines and utilities) and head out. The fatigue from the 12-hour flight along with the timezone difference was the last thing on my mind. To understand the scenario, let me take you back a bit.

It had been 23 hours since I had taken even a short nap, let alone gotten good sleep. I was participating in an intern hackathon at LinkedIn HQ (probably one of the best hackathons that I’ve been to). It was 3 am and I was caffeinated to the point that I was lost in my own code base. As I was working to fix a nagging problem, I received a notification in my inbox. You know those situations when you are feeling so helpless that you wander off at the slightest opportunity in front of you, treating it as a legitimate excuse? Yep, this was one of them. I even started opening some unread ‘Deal of the Week’ emails to reset my brain.

The message read -

“This is approved by the council. We really can’t wait to see the first report from this. : D Good luck : )”

As my brain was still trying to process the email because of how sleep deprived I was, I got another notification.

“Hi Prashish, Please document your trip thoroughly. We are very excited and waiting to see all your videos, pictures, blogposts and reports. : D”

Wow!

It had been a little over a week since I sent my proposal to the Mozilla Reps Council and, to be honest, I didn’t have much hope for my mega-drive to get approved. I had to stay calm, control my emotions and send out a ‘Thanks!’ email sounding happy and excited. I did that. Before telling this to all my friends and Mozillians who had been constantly supporting me, I had to finish my project at the hackathon. It was a test of control. I shut down the emotions and continued working on the project without sleep for another 17 hours. Even after the presentations were done and the event was officially over, I didn’t want to think with my super-tired head. It was a test of patience. I wanted a sound sleep before thinking about the super exciting journey that I would be taking from the month of August. I couch surfed at a nearby friend’s house in Santa Clara and woke up fresh after a full 12-hour uninterrupted sleep. I passed all my tests. It was the beginning of couch surfing and of what was to come in the next month.

The next few days kept me super busy as I planned and launched the official website, social networking profiles (Twitter, Facebook) and a Q/A page. As I was working on the MozDrive website, I asked several Mozillians for suggestions and testimonials.

William Reynolds, Product Manager at Mozilla – “I’m excited about the Mozilla Awareness Drive. This is one of the most ambitious campaigns organized by a Mozillian. There’s nothing like visiting Mozilla and Firefox fans and having casual chats with them.”

Sayak  Sarkar, Mozilla Reps Super Mentor  – “I think that this is perhaps one of the most ambitious yet promising initiatives towards spreading the Mozilla mission and awareness about the open web since the Firefox Crop Circle initiative. This initiative speaks out a great lot about how passionate Mozillians are towards the project and how much they are inspired towards contributing towards a common goal of a Free and Open Web.”

The testimonials by Sayak and William really caught my eye as both of them used the phrase – ‘one of the most ambitious’. To be honest, I didn’t realize the scale of this project until the very last moment. It isn’t that I didn’t understand the project, but the desire to do something meaningful for the Mozilla community made the whole planning process look very straightforward. You see, it had been a little less than a year since I came to the United States as a graduate student. Back in Kathmandu, Nepal, I would be attending or organizing Mozilla-related events on a regular basis, be it orientations, hackathons or meetups. That drastically changed after I stepped into the United States, as I was adapting to the new environment and getting caught up in the new world around me. To be fair, I did attend the Mozilla Festival in London and the Mozilla Reps Meetup in Portland the same year. But I felt I didn’t make the kind of impact that I would have liked to. In Nepal, there was such a huge movement around Mozilla and Open Source that you could actually see the community growing and getting more active. That was something that I wanted to do here too.

The Mozilla Community is very close to my heart because everyone really cares about their work and how it impacts lives around the world. Every Mozillian that I meet is passionate about their work and the community. There is no ‘I’ but ‘We’ in our community. You don’t see a lot of that in the world we live in. And being part of this always makes me a proud Mozillian. I could have easily spent my 25-day break completing full seasons of TV series that I’ve always wanted to watch. Or, if I wanted to be productive, working on a hobby project. Both of them sounded fun, as I had spent months working on many products in a very competitive startup in the heart of Silicon Valley. But that’s not something Mozillians do. A Mozillian would spend their free time taking action on how they could build communities together. A Mozillian would plan and work to make the web free and open. A Mozillian would create a movement. That’s what I wanted to do. I wanted to inspire thousands of Mozillians around the world to take action on their dreams to make them a reality. That’s the reason I set out on this incredible journey to travel around the United States to spread the love about Mozilla and the Open Web.

To tell you the truth, I’m freaking scared of driving. But who isn’t? When there are cars zooming in from every direction, the only thought in my head is reaching my destination safely. I had never driven a car for more than 10 hours total in my life (which includes me sitting in the driver’s seat and being amazed by all the buttons in front of me). I never had a driving license in Nepal, and I barely passed the maneuvering exam on the same day that I was set to fly to San Francisco for my internship. That left me with a learner’s permit to drive with a licensed driver next to me.

But that didn’t stop me from driving almost half the United States in less than a month. It didn’t stop me from gathering the courage to say ‘YES!’ to the most amazing adventure even though I had no prior experience. It didn’t stop me from taking that risk that would drastically change my life for good.

You might think I’m crazy.

Ask John – the guy we found on Craigslist to rideshare with us to Los Angeles. He had traveled to almost all of the US states, and when I told him about our drive, he immediately responded: ‘You(‘re) crazy man!’.

Or ask Laura – the lady I met at the Nelson-Atkins Museum of Art in Kansas City, whom I had to convince by showing her MozDrive’s Facebook page after she rejected my approach with ‘I don’t buy this sh*t’.

Or ask my mom, whom I had to convince four times every day that everything was alright and under control.

Because driving 13,000 miles in 25 days, around 8-10 hours every day, is not a joke in any sense. The body and mind can only take so much, and you need a lot of self-control and motivation throughout the journey not to burn out. Yes, there were times when I questioned the entire journey and why I was doing this. Yes, there were times when I wanted to chicken out halfway through, thinking people would forget about this. But when you are on a journey that carries such a powerful mission and values, that becomes your driving force. When you truly believe in a cause, your body will somehow find a way to make it happen and keep you moving forward.

The journey itself was immense; I had opportunities to meet people from all walks of life, cultures and countries. I have so many stories to share that I don’t even know where to start. But I promise, I will. That’s why you are reading this. I want you to know what’s in store for everyone over the next 3-4 months. I’m not a writer by any means, nor do I have any experience in professional writing. It took me two days just to think through and come up with this amateur 2,000-word chapter. But I’m a strong believer in the Growth Mindset, and I believe that I can eventually learn the art of expressing my thoughts and ideas through words. My final goal is to write at least 20 chapters about my experience during MozDrive. And to take it one step further – publish it as an ebook in the future. That’s the dream!

It is impossible to accomplish a goal without taking action on it. And this is my first step towards that goal. I know it will take a long time, but I feel it will be worth it in the end. And I do hope that you see positive progress in my writing over time. By taking action, I simply aim to inspire people and awaken their hearts to do something they believe in.

If you are reading this – thank you for taking the time and interest in my next journey for MozDrive. Since I am no writer, I’m looking for people who would be interested in proofreading and editing my future articles for MozDrive. Please send me a message or tweet if you have any suggestions or feedback, or if you are interested in being part of this journey with me.

‘Til then.


Filed under: MozDrive, Mozilla Tagged: mozdrive, mozilla, mozrep

Jorge Villalobos: Firefox OS Workshop in Panama

I was invited to give a talk on app development for Firefox OS this past weekend in Panama City. The event was organized by CascoStation, a coworking space located in a very interesting part of the city. Harold from CascoStation did an exceptional job of making sure everything went well and that we were all very comfortable.

The workshop was similar to ones I've given in the past, with some improvements thanks to lessons learned. The introductory talk can be found here: Introducción a Firefox OS. Some of the slides don't make much sense without the talk, but the links are useful for getting started with Firefox OS.

Attendance was good, around 20 people. Most importantly, the majority were interested enough to spend some time playing with Firefox OS during the workshop. We took a group photo, but several people had already left.

Group photo at the end

El Espectador de Panamá also ran a piece about us, which gives a bit of a feel for the workshop's atmosphere.

A very pleasant surprise is that CascoStation is home to a very skilled 3D artist, who also applies his talents to creating 3D prints with a spectacular level of detail. He made us a Firefox OS figure that could not have turned out better.

Model of the Firefox OS figure / The Firefox OS figure

I hope we'll see more of these in the future :).

Finally, I got to meet some of the members of the new Mozilla Panamá community (also on Facebook). We talked about the launches in Central America and the challenges ahead. We hope to have news from Costa Rica very soon.

The experience was excellent, and we hope to see our Panamanian friends again in a few weeks at the Encuentro Centroamericano de Software Libre 2014.

Mike Shal: Build System Partial Updates

There is a fairly long dev-platform thread about partial updates - specifically, running './mach build subdirectory'. In this post, we'll compare how this is handled in make-based systems, as well as in tup.

Michael Kaply: Changes to CCK2 Support

When I originally came up with my CCK2 support options, I thought that folks would use the basic support option as a way to simply show their support for the CCK2. It hasn't really turned out that way, so effective immediately, I will no longer offer the CCK2 basic support option. I'm simply not getting enough business at that level to warrant the overhead.

Anyone who has already purchased basic support, or is in the process of purchasing it, will still receive the rest of their one-year term. After that expires, they will have to choose the free or premium support option.

As far as premium support goes, I'm not planning any changes to that right now, but honestly it hasn't been as successful as I thought it would be. I know there are hundreds of companies using the CCK and CCK2, so I'm surprised how few are willing to pay for support. If anyone has suggestions on things I can do to make paid support more appealing, I would appreciate them.

I'm continuing to update the CCK2, so make sure you grab the latest version (2.0.12). There were some update issues, so not everybody was updated.

Christian Heilmann: How to draft a speaker information email

I just had a really happy moment when I got an email from a conference organiser telling me everything they need from me and everything I need to know, all in one simple email:


Hi Christian.
I hope you are doing fine.
Your talk “$title” is scheduled for $date at 9:45am (it’s a 40 min talk, plus 5 for Q&A). This is the link to your slot.
$conference will be hosted at $place, $address (map).
Please, send me some options of flights and I will book one for you. I may need your passport number.
We will organize a (free) dinner for all speakers the night before, so you should arrive at least on $dinnerdate.
We will book a room for you for the following nights: $dates. The hotel is the $hotel **** .
Please remember to bring your laptop, charger and A/C adapter. In Spain we use Plug Type C, and you shouldn’t need any current transformer for your laptop.
There will be reinforced WiFi at the event and a separate segment for speakers, but please be prepared to deliver your presentation without access to the Internet, just in case. Remember to include any fonts or alternatively use a PDF version.
We are providing our speakers with a template that can be used for your talk, but feel free to use your own format if you have one.
Your talk may be recorded and shared later on our YouTube channel. We understand that we are authorized to do so unless you say otherwise.
Looking forward to hearing from you!

This is excellent, and a great blueprint to re-use. Well done, codemotion.

I have a similar way to tell conference organisers all I expect and give them the things they need with my conference organiser cheatsheet.

Nick Thomas: ZNC and Mozilla IRC

ZNC is great for having a persistent IRC connection, but it’s not so great when the IRC server or network has a blip. Then you can end up failing to rejoin with

nthomas (…) has joined #releng
nthomas has left … (Max SendQ exceeded)

over and over again.

The way to fix this is to limit the number of channels ZNC tries to join at once. In the Web UI, change the ‘Max Joins’ preference to something like 5. In the config file, use ‘MaxJoins = 5’ in a <User foo> block.
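
For reference, here's a minimal sketch of what that setting looks like in znc.conf; the username and the rest of the block are illustrative and omitted:

<User foo>
	MaxJoins = 5
</User>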

Gregory Szorc: On Monolithic Repositories

When companies or organizations deploy version control, they have to make many choices. One of them is how many repositories to create. Your choices are essentially a) a single, monolithic repository that holds everything b) many separate, smaller repositories that hold all the individual parts c) something in between.

The prevailing convention today (especially in the open source realm) is to create many separate and loosely coupled repositories, each repository mapping to a specific product or service. That does seem reasonable: if you were organizing files on your filesystem, you would group them by functionality or role (photos, music, documents, etc). And, version control tools are functionally filesystems. So it makes sense to draw repository boundaries at directory/role levels.

Further reinforcing the separate repository convention is the scaling behavior of our version control tools. Git, the popular tool in open source these days, doesn't scale well to very large repositories due to - among other things - not having narrow clones (fetching a subset of files). It scales well enough to the overwhelming majority of projects. But if you are a large organization generating lots of data (read: gigabytes of data over hundreds of thousands of files and commits) for version control, Git is unsuitable in its current form. Other tools (like Mercurial) don't currently fare that much better (although Mercurial has plans to tackle these scaling vectors).

Despite popular convention and even limitations in tools, companies like Google and Facebook opt to run large, monolithic repositories. Google runs Perforce. Facebook is on Mercurial, or at least is in the process of migrating to Mercurial.

Why do these companies run monolithic repositories? In Google's words:

We have a single large depot with almost all of Google's projects on it. This aids agile development and is much loved by our users, since it allows almost anyone to easily view almost any code, allows projects to share code, and allows engineers to move freely from project to project. Documentation and data is stored on the server as well as code.

So, monolithic repositories are all about moving fast and getting things done more efficiently. In other words, monolithic repositories increase developer productivity.

Furthermore, monolithic repositories are also more compatible with the ebb and flow of large organizations and large software projects. Components, features, products, and teams come and go, merge and split. The only constant is change. And if you are maintaining separate repositories that attempt to map to this ever-changing organizational topology, you are going to have a bad time. Either you'll be constantly copying, moving, merging, splitting, etc data and repositories. Or your repositories will be organized in a very non-logical and non-intuitive manner. That translates to overhead and lost productivity. I think that monolithic repositories handle the realities of large organizations much better. Big change or reorganization you want to reflect? You can make a single, atomic, history-preserving commit to move things around. I think that's much more manageable, especially when you consider the difficulty and annoyance of history-preserving changes across repositories.

Naysayers will decry monolithic repositories on principled and practical grounds.

The principled camp will say that separate repositories constitute a loosely coupled (dare I say service oriented) architecture that maps better to how software is consumed, assembled, and deployed and that erecting barriers in the form of separate repositories deliberately enforces this architecture. I agree. However, you can still maintain a loosely coupled architecture with monolithic repositories. The Subversion model of checking out a single tree from a larger repository proves this. Furthermore, I would say architecture decisions should be enforced by people (via code review, etc), not via version control repository topology. I believe this principled argument against monolithic repositories to be rather weak.

The principled camp living in the open source realm may also decry monolithic repositories as an affront to the spirit of open source. They would say that a monolithic repository creates unfairly strong ties to the organization that operates it and creates barriers to forking, etc. This may be true. But monolithic repositories don't intrinsically infringe on the basic software freedoms, organizations do. Therefore, I find this principled argument rather weak.

The practical camp will say that monolithic repositories just don't scale or aren't suitable for general audiences. These concerns are real.

Fully distributed version control systems (every commit on every machine) definitely don't scale past certain limits. Depending on your repository and user base, your scaling limits include disk space (repository data terabytes in size), bandwidth (repository data terabytes in size), filesystem (repository hundreds of thousands or millions of files), CPU and memory (operations on large repositories take too many system resources), and many heads/branches (tools like Git and Mercurial don't scale well to tens of thousands of heads/branches). These limitations with fully distributed version control are why distributed version control tools like Git and Mercurial support a partially-distributed mode that behaves more like your classical server-client model, like those employed by Subversion, Perforce, etc. Git supports shallow clone and sparse checkout. Mercurial supports shallow clone (via remotefilelog) and has planned support for narrow clone and sparse checkout in the next release or two. Of course, you can avoid the scaling limitations of distributed version control by employing a non-distributed tool, such as Subversion. Many companies continue to reach this conclusion today. However, users adapted to the distributed workflow would likely be up in arms (they would probably use tools like hg-subversion or git-svn to maintain their workflows). So, while scaling of version control can be a real concern, there are solutions and workarounds. However, they do involve falling back to a partially-distributed model.
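
To make the partially-distributed workarounds above a bit more concrete, here is a rough sketch of what shallow clone and sparse checkout look like with Git; the repository URL and directory names are placeholders, and the exact flags vary by Git version:

# Shallow clone: fetch only recent history instead of every commit.
git clone --depth 100 https://example.com/monorepo.git
cd monorepo

# Sparse checkout: only materialize the parts of the tree you work on.
git config core.sparseCheckout true
echo "browser/" >> .git/info/sparse-checkout
git read-tree -mu HEAD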

Another concern with monolithic repositories is user access control. You inevitably have code or data that is more sensitive and want to limit who can change or even access it. Separate repositories seem to facilitate a simpler model: per-repository access control. With monolithic repositories, you have to worry about per-directory/subtree permissions, an increased risk of data leaking, etc. This concern is more real with distributed version control, as distributed data and access control aren't naturally compatible. But these issues can be resolved. And if the tooling supports it, there is only a semantic difference between managing access control between repositories versus components of a single repository.

When it comes to repository hosting conversations, I agree with Google and Facebook: I prefer monolithic repositories. When I am interacting with version control, I just want to get stuff done. I don't want to waste time dealing with multiple commands to manage multiple repositories. I don't want to waste time or expend cognitive load dealing with submodule, subrepository, or big files management. I don't want to waste time trying to find and reuse code, data, or documentation. I want everything at my fingertips, where it can be easily discovered, inspected, and used. Monolithic repositories facilitate these workflows more than separate repositories and make me more productive as a result.

Now, if only all the tools and processes we use and love would work with monolithic repositories...

Mozilla Release Management Team: Firefox 33 beta1 to beta2

  • 31 changesets
  • 72 files changed
  • 2046 insertions
  • 532 deletions

Extension    Occurrences
js           23
cpp          12
html         5
h            5
mn           4
jsm          4
ini          4
xhtml        3
java         3
jsx          2
xml          1
sh           1
list         1
css          1

Module       Occurrences
browser      36
security     10
mobile       5
gfx          4
content      4
layout       3
toolkit      2
services     2
dom          2
netwerk      1

List of changesets:

Jeff MuizelaarBug 1057716 - d3d11: Properly copy the background. r=bas, a=sledru - 9eb4dff42df0
Richard NewmanBug 993885 - Refactor SendTabActivity to avoid a race condition. r=mcomella, a=sledru - 764591e4e7f3
Lucas RochaBug 1050780 - Avoid disabled items in GeckoMenu's adapter. r=margaret, a=sledru - 7cf512b6b64c
Tim NguyenBug 891258 - Use Australis styling for the findbar buttons. r=Unfocused, a=sledru - 4815ff146c57
Dave TownsendBacking out Bug 891258 due to broken styling issues on OSX. r=backout - e7d6edff44d3
Cosmin MalutanBug 1062224 - [tps] Fix test_tabs.js for non-existent testcase pages. r=hskupin a=testonly DONTBUILD - 292839cc6594
David KeelerBug 1057128 - Add --clobber to generate_certs.sh, disabled by default (don't unnecessarily regenerate all certificates). r=rbarnes, a=sledru - 3f1e228fac54
David KeelerBug 1009161 - mozilla::pkix: Allow the Netscape certificate type extension if more standardized information is present. r=briansmith, a=sledru - 03029d16e697
JW WangBug 1034957 - Don't spin decode task queue waiting for audio frames since it hangs with gstreamer 1.0. r=cpearce, a=sledru - 46ffe60377d9
Neil RashbrookBug 1054289 - Scroll to the current ref, not the original one. r=smaug, a=sledru - 8865201cd18e
Neil RashbrookBug 1054289 - Add testcase. r=smaug, a=sledru - e47ff024eec1
Jan-Ivar BruaroeyBug 1060708 - Detect user and environment cameras on Android. r=gcp, r=blassey, r=snorp, a=sledru - fbc322c42d06
Mark FinkleBug 1063893 - Enable casting on beta and release. r=rnewman a=mfinkle - 32560f800b2e
Ed LeeBug 1062683 - Remove urls from new tab pings [r=adw a=lmandel] - c81810e5f3a5
Bas SchoutenBug 1040187 - Combine update regions properly when upload hasn't executed yet. r=nical, a=lmandel - 872fe12f9214
Matt WoodrowBug 1060114 - Fix partial surface uploading through BufferTextureClient. r=Bas, a=lmandel - 09d840603713
Chenxia LiuBug 1060678 - Notify Gecko when browser history is cleared from HistoryPanel. r=margaret, a=lmandel - 957e1ef7f769
Gijs KruitboschBug 1035536 - Add blank theme file for net error pages. r=Unfocused, a=lmandel - f9e4f36ba116
Ryan VanderMeulenBacked out changeset 09d840603713 (Bug 1060114) for bustage. - c3ecb4c952ec
Matt WoodrowBug 1060114 - Fix partial surface uploading through BufferTextureClient. r=Bas, a=lmandel - bca701646487
Chris KarlofBug 1056523 - Ensure sync credentials are reset during reauth flow. r=markh, a=lmandel - 8b409f2dfcb1
Steve WorkmanBug 1058099 - Cancel CacheStorageService::mPurgeTimer if it's still set during shutdown. r=mayhemer, a=lmandel - ede2300e8733
Michael ComellaBug 1046017 - Backed out changesets 1c213218173f & 8588817f7f86 (bugs 1017427 & 1006797). a=lmandel - 7984a6ceffb8
Randell JesupBug 1063971 - Allow SetRemoteDescription to omit callbacks again. r=jib, a=lmandel - 880228a5208a
Richard NewmanBug 1045085 - Remove main Product Announcements code. r=mcomella, a=lmandel - 776ddfd41f21
Ryan VanderMeulenBacked out changeset 776ddfd41f21 (Bug 1045085) for Android bustage. - 70930f30da0e
Benjamin SmedbergBug 1012924 - Experiments should cancel their XMLHttpRequest on shutdown and should also set a reasonable timeout on them. r=gfritzsche, a=lmandel - db5539e42eb5
Mark BannerBug 1022594 - Part 1: Change Loop's incoming call handling to get the call details before displaying the incoming call UI. r=nperriault, a=lmandel - e0ad01b2e26e
Mark BannerBug 1022594 - Part 2: Desktop client needs ability to decline an incoming call - set up a basic websocket protocol and use for both desktop and standalone UI. r=dmose, a=lmandel - 062929c9ff5d
Mark BannerBug 1045643 - Part 1: Notify the Loop server when the desktop client accepts the call, so that it can update the call status. r=nperriault, a=lmandel - be539410c211
Mark BannerBug 1045643 - Part 2: Notify the Loop server when the client has local media up and remote media being received, so that it can update the call connection status. r=nperriault, a=lmandel - d820ef3b256d

Byron Jones: happy bmo push day!

the following changes have been pushed to bugzilla.mozilla.org:

  • [913647] Deploy YUI 3.17.2 for BMO
  • [1054138] add the ability to filter on “fields containing the string”
  • [1062344] contrib/reorg-tools/sync* do not clear memcached
  • [1051058] Auto-CC Erica Choe into Finance Review and Master Kick-Off Bugs

discuss these changes on mozilla.tools.bmo.

 

the new bugmail filtering ability allows you to filter on specific flags:

bugmail filtering with substrings

these two rules will prevent bugzilla from emailing you the changes to the “qa whiteboard” field or the “qe-verify” flag for bugs where you aren’t the assignee.


Filed under: bmo, mozilla

Matt Brubeck: Let's build a browser engine! Part 5: Boxes

This is the latest in a series of articles about writing a simple HTML rendering engine:

This article will begin the layout module, which takes the style tree and translates it into a bunch of rectangles in a two-dimensional space. This is a big module, so I’m going to split it into several articles. Also, some of the code I share in this article may need to change as I write the code for the later parts.

The layout module’s input is the style tree from Part 4, and its output is yet another tree, the layout tree. This takes us one step further in our mini rendering pipeline:

I’ll start by talking about the basic HTML/CSS layout model. If you’ve ever learned to develop web pages you might be familiar with this already—but it may look a bit different from the implementer’s point of view.

The Box Model

Layout is all about boxes. A box is a rectangular section of a web page. It has a width, a height, and a position on the page. This rectangle is called the content area because it’s where the box’s content is drawn. The content may be text, image, video, or other boxes.

A box may also have padding, borders, and margins surrounding its content area. The CSS spec has a diagram showing how all these layers fit together.

Robinson stores a box’s content area and surrounding areas in the following structure. [Rust note: f32 is a 32-bit floating point type.]

// CSS box model. All sizes are in px.
struct Dimensions {
    // Top left corner of the content area, relative to the document origin:
    x: f32,
    y: f32,

    // Content area size:
    width: f32,
    height: f32,

    // Surrounding edges:
    padding: EdgeSizes,
    border: EdgeSizes,
    margin: EdgeSizes,
}

struct EdgeSizes {
    left: f32,
    right: f32,
    top: f32,
    bottom: f32,
}
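
To make the relationship between these layers concrete, here is a small hypothetical helper (not part of the article's code) that computes the total width a box occupies once padding, borders, and margins are added around the content area:

impl Dimensions {
    // Total horizontal space taken up by the box: the content width plus
    // the left and right padding, border, and margin edges.
    fn margin_box_width(&self) -> f32 {
        self.width
            + self.padding.left + self.padding.right
            + self.border.left + self.border.right
            + self.margin.left + self.margin.right
    }
}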

Block and Inline Layout

Note: This section contains diagrams that won't make sense if you are reading them without the associated visual styles. If you are reading this in a feed reader, try opening the original page in a regular browser tab. I also included text descriptions for those of you using screen readers or other assistive technologies.

The CSS display property determines which type of box an element generates. CSS defines several box types, each with its own layout rules. I’m only going to talk about two of them: block and inline.

I’ll use this bit of pseudo-HTML to illustrate the difference:

<container>
  <a></a>
  <b></b>
  <c></c>
  <d></d>
</container>

Block boxes are placed vertically within their container, from top to bottom.

a, b, c, d { display: block; }

Description: The diagram below shows four rectangles in a vertical stack.

a
b
c
d

Inline boxes are placed horizontally within their container, from left to right. If they reach the right edge of the container, they will wrap around and continue on a new line below.

a, b, c, d { display: inline; }

Description: The diagram below shows boxes `a`, `b`, and `c` in a horizontal line from left to right, and box `d` in the next line.

a
b
c
d

Each box must contain only block children, or only inline children. When a DOM element contains a mix of block and inline children, the layout engine inserts anonymous boxes to separate the two types. (These boxes are “anonymous” because they aren’t associated with nodes in the DOM tree.)

In this example, the inline boxes b and c are surrounded by an anonymous block box, shown in pink:

a    { display: block; }
b, c { display: inline; }
d    { display: block; }

Description: The diagram below shows three boxes in a vertical stack. The first is labeled `a`; the second contains two boxes in a horizontal row labeled `b` and `c`; the third box in the stack is labeled `d`.

a
b
c
d

Note that content grows vertically by default. That is, adding children to a container generally makes it taller, not wider. Another way to say this is that, by default, the width of a block or line depends on its container’s width, while the height of a container depends on its children’s heights.

This gets more complicated if you override the default values for properties like width and height, and way more complicated if you want to support features like vertical writing.

The Layout Tree

The layout tree is a collection of boxes. A box has dimensions, and it may contain child boxes.

struct LayoutBox<'a> {
    dimensions: Dimensions,
    box_type: BoxType<'a>,
    children: Vec<LayoutBox<'a>>,
}

A box can be a block node, an inline node, or an anonymous block box. (This will need to change when I implement text layout, because line wrapping can cause a single inline node to split into multiple boxes. But it will do for now.)

enum BoxType<'a> {
    BlockNode(&'a StyledNode<'a>),
    InlineNode(&'a StyledNode<'a>),
    AnonymousBlock,
}

To build the layout tree, we need to look at the display property for each DOM node. I added some code to the style module to get the display value for a node. If there’s no specified value it returns the initial value, 'inline'.

enum Display {
    Inline,
    Block,
    DisplayNone,
}

impl StyledNode {
    /// Return the specified value of a property if it exists, otherwise `None`.
    fn value(&self, name: &str) -> Option<Value> {
        self.specified_values.find_equiv(&name).map(|v| v.clone())
    }

    /// The value of the `display` property (defaults to inline).
    fn display(&self) -> Display {
        match self.value("display") {
            Some(Keyword(s)) => match s.as_slice() {
                "block" => Block,
                "none" => DisplayNone,
                _ => Inline
            },
            _ => Inline
        }
    }
}

Now we can walk through the style tree, build a LayoutBox for each node, and then insert boxes for the node’s children. If a node’s display property is set to 'none' then it is not included in the layout tree.

/// Build the tree of LayoutBoxes, but don't perform any layout calculations yet.
fn build_layout_tree<'a>(style_node: &'a StyledNode<'a>) -> LayoutBox<'a> {
    // Create the root box.
    let mut root = LayoutBox::new(match style_node.display() {
        Block => BlockNode(style_node),
        Inline => InlineNode(style_node),
        DisplayNone => fail!("Root node has display: none.")
    });

    // Create the descendant boxes.
    for child in style_node.children.iter() {
        match child.display() {
            Block => root.children.push(build_layout_tree(child)),
            Inline => root.get_inline_container().children.push(build_layout_tree(child)),
            DisplayNone => {} // Skip nodes with `display: none;`
        }
    }
    return root;
}

impl LayoutBox {
    /// Constructor function
    fn new(box_type: BoxType) -> LayoutBox {
        LayoutBox {
            box_type: box_type,
            dimensions: Default::default(), // initially set all fields to 0.0
            children: Vec::new(),
        }
    }
}

If a block node contains an inline child, create an anonymous block box to contain it. If there are several inline children in a row, put them all in the same anonymous container.

impl LayoutBox {
    /// Where a new inline child should go.
    fn get_inline_container(&mut self) -> &mut LayoutBox {
        match self.box_type {
            InlineNode(_) | AnonymousBlock => self,
            BlockNode(_) => {
                // If we've just generated an anonymous block box, keep using it.
                // Otherwise, create a new one.
                match self.children.last() {
                    Some(&LayoutBox { box_type: AnonymousBlock,..}) => {}
                    _ => self.children.push(LayoutBox::new(AnonymousBlock))
                }
                self.children.mut_last().unwrap()
            }
        }
    }
}

This is intentionally simplified in a number of ways from the standard CSS box generation algorithm. For example, it doesn’t handle the case where an inline box contains a block-level child. Also, it generates an unnecessary anonymous box if a block-level node has only inline children.

To Be Continued…

Whew, that took longer than I expected. I think I’ll stop here for now, but don’t worry: Part 6 is coming soon, and will cover block-level layout.

Once block layout is finished, we could jump ahead to the next stage of the pipeline: painting! I think I might do that, because then we can finally see the rendering engine’s output as pretty pictures instead of just numbers.

However, the pictures will just be a bunch of colored rectangles, unless we finish the layout module by implementing inline layout and text layout. If I don’t implement those before moving on to painting, I hope to come back to them afterward.

Jeff Walden: Quote of the day

Snipped from irrelevant context:

<jorendorff> In this case I see nearby code asserting that IsCompiled() is true, so I think I have it right

Assertions do more than point out mistakes in code. They also document that code’s intended behavior, permitting faster iteration and modification to that code by future users. Assertions are often more valuable as documentation, than they are as a means to detect bugs. (Although not always. *eyes fuzzers beadily*)

So don’t just assert the tricky requirements: assert the more-obvious ones, too. You may save the next person changing the code (and the person reviewing it, who could be you!) a lot of time.

David Boswell: Creating community contribution challenges

There is something magical about how anyone anywhere can contribute to Mozilla—people show up and help you with something you’re doing or offer you something completely new and unexpected.

The Code Rush documentary has a great example of this from the time when the Mozilla project first launched. Netscape opened its code to the world in the hope that people would contribute, but there was no guarantee that anyone would help.

One of the first signs they had that this was working was when Stuart Parmenter started contributing by rewriting a key part of the code and this accelerated development work by months. (This is about 27 minutes into the documentary.)


It is hard to plan and schedule around magic though. This year we’ve been building up a participation system that will help make contributions more reliable and predictable, so that teams can plan and schedule around leveraging the Mozilla community.

Pathways, tools and education are part of that system. Something else we’re trying is contribution challenges. These will identify unmet needs where scale and asynchronous activities can provide impact in the short-term and where there is strong interest within the volunteer community.

The challenges will also specify the when, where, who and how of the idea, so that we can intentionally design for participation at the beginning and have a prepared way that we’re rallying people to take action.

For next steps, leadership of the Mozilla Reps program is meeting in Berlin from September 12-14 and they’ll be working on this concept as well as on some specific challenge ideas. There will be more to share after that.


If you’re interested in helping with this and want to get involved, take a look at the contribution challenges etherpad for more background and a list of challenge ideas. Then join the community building mailing list and share your thoughts, comments and questions.


Nathan Froyd: xpcom and move constructors

Benjamin Smedberg recently announced that he was handing over XPCOM module ownership duties to me.  XPCOM contains basic data structures used throughout the Mozilla codebase, so changes to its code can have wide-ranging effects.  I’m honored to have been given responsibility for a core piece of the Gecko platform.

One issue that’s come up recently and I’m sure will continue to come up is changing XPCOM data structures to support two new C++11 features, rvalue references and their killer app, move constructors.  If you aren’t familiar with C++11’s new rvalue references feature, I highly recommend C++ Rvalue References Explained.  Move constructors are already being put to good use elsewhere in the codebase, notably mozilla::UniquePtr, which can be used to replace XPCOM’s nsAutoPtr and nsAutoRef (bug 1055035).  And some XPCOM data structures have received the move constructor treatment, notably nsRefPtr (bug 980753) and nsTArray (bug 982212).
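
For readers who haven't met the feature yet, here is a deliberately simplified sketch (not Mozilla's actual code; the class and member names are made up for illustration) of what a move constructor looks like on a ref-counted smart pointer:

// Illustrative only: a stripped-down ref-counted smart pointer.
template<typename T>
class RefPtrSketch {
public:
  explicit RefPtrSketch(T* aPtr) : mPtr(aPtr) { if (mPtr) mPtr->AddRef(); }
  ~RefPtrSketch() { if (mPtr) mPtr->Release(); }

  // Copy constructor: takes an extra reference.
  RefPtrSketch(const RefPtrSketch& aOther) : mPtr(aOther.mPtr) {
    if (mPtr) mPtr->AddRef();
  }

  // Move constructor: steals the reference from a temporary,
  // avoiding an AddRef/Release pair entirely.
  RefPtrSketch(RefPtrSketch&& aOther) : mPtr(aOther.mPtr) {
    aOther.mPtr = nullptr;
  }

private:
  T* mPtr;
};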

A recent discussion and the associated bug, however, decided that the core referenced-counted smart pointer class in XPCOM, nsCOMPtr, shouldn’t support move constructors.  While move constructors could have replaced the already_AddRefed usage associated with nsCOMPtr, such as:

already_AddRefed<nsIMyInterface>
NS_DoSomething(...)
{
  nsCOMPtr<nsIMyInterface> interface = ...;
  // do some initialization stuff
  return interface.forget();
}

with the slightly shorter:

nsCOMPtr<nsIMyInterface>
NS_DoSomething(...)
{
  nsCOMPtr<nsIMyInterface> interface = ...;
  // do some initialization stuff
  return interface;
}

There were two primary arguments against move constructor support.  The first argument was that the explicitness of having to call .forget() on an nsCOMPtr (along with the explicitness of the already_AddRefed type), rather than returning it, is valuable for the code author, the patch reviewer, and subsequent readers of the code.  When dealing with ownership issues in C++, it pays to be more explicit, rather than less.  The second argument was that due to the implicit conversion of nsCOMPtr<T> to a bare T* pointer (a common pattern in smart pointer classes), returning nsCOMPtr<T> from functions makes it potentially easy to write buggy code:

// What really happens in the below piece of code is something like:
//
// nsIMyInterface* p;
// {
//   nsCOMPtr<nsIMyInterface> tmp(NS_DoSomething(...));
//   p = tmp.get();
// }
//
// which is bad if NS_DoSomething is returning the only ref to the object.
// p now points to deleted memory, which is a security risk.
nsIMyInterface* p = NS_DoSomething(...);

(I should note that we can return nsCOMPtr<T> from functions today, and in most cases, thanks to compiler optimizations, it will be as efficient as returning already_AddRefed.  But Gecko culture is such that a function returning nsCOMPtr<T> would be quite unusual, and therefore unlikely to pass code review.)

The changes to add move constructors to nsRefPtr and nsTArray?  They were reviewed by me.  And the nixing of move constructors for nsCOMPtr?  That was also done by me (with a lot of feedback from other people).

I accept the charge of inconsistency.  However, I offer the following defense.  In the case of nsTArray, there are no ownership issues like there are with nsCOMPtr: you either own the array, or you don’t, so many of the issues raised about nsCOMPtr don’t apply in that case.

For the case of nsRefPtr, it is true that I didn’t seek out as much input from other people before approving the patch.  But the nsRefPtr patch was also done without the explicit goal of removing already_AddRefed from the code base, which made it both smaller in scope and more palatable.  Also, my hunch is that nsRefPtr is used somewhat less than nsCOMPtr (although this may be changing somewhat given other improvements in the codebase, like WebIDL), and so it provides an interesting testbed for whether move constructors and/or less explicit transfers of ownership are as much of a problem as argued above.

Henrik Skupin: Firefox Automation report – week 29/30 2014

In this post you can find an overview about the work happened in the Firefox Automation team during week 29 and 30.

Highlights

During week 29 it was time again to merge the mozmill-tests branches to support the upcoming release of Firefox 31.0. All necessary work has been handled on bug 1036881, which also included the creation of the new esr31 branch. Accordingly we also had to update our mozmill-ci system, and got the support landed on production.

The RelEng team asked us if we could help in setting up Mozmill update tests for testing the new update server aka Balrog. Henrik investigated the necessary tasks, and implemented the override-update-url feature in our tests and the mozmill-automation update script. Finally he was able to release mozmill-automation 2.6.0.2 two hours before heading out for 2 weeks of vacation. That means Mozmill CI could be used to test updates for the new update server.

Individual Updates

For more granular updates of each individual team member please visit our weekly team etherpad for week 29 and week 30.

Meeting Details

If you are interested in further details and discussions you might also want to have a look at the meeting agenda, the video recording, and notes from the Firefox Automation meetings of week 29 and week 30.

Benoit Girard: B2G Performance Polish: Unlock Animation (Part 2)

Our destination

Our starting point is a 200ms unlock delay and a uniform ~25 FPS animation. Our aim should be a 0ms unlock delay and a uniform 60 FPS (or whatever vsync is). The former we can only minimize as much as possible, but the latter is entirely achievable.

Let’s talk about how we would design a lock screen animation in the optimal case. When we go to apply it in practice we often hit requirements and constraints that make it impossible to behave the way we want, but let’s ignore that for a second and discuss where we want to get to.

In the ideal case we would have the lockscreen rendered offscreen to a set of GPU textures. We would have the background app ready in another set of GPU textures. These are ‘Layers’. We place the background app behind the lockscreen. When the transition begins we notify the compositor to start fading out the lockscreen. Keeping these around costs memory, but if we keep the right things around we can reduce or eliminate repaints entirely.

Our memory requirement is what's needed by the background app plus about one fullscreen layer for the lockscreen. This should be fine even for low-end B2G phones. Our overdraw should be about 200%-300%, again low enough for mobile GPUs to keep up at 60 FPS/vsync.

Ideal Lockscreen Layer Tree

Now let’s look at what we hope our timeline for our Main Thread and our Compositor Thread to look like:

Ideal Unlock Timeline

We want to use Off-Main-Thread-Animation to perform the fade entirely on the Compositor. This will be initiated on the main thread and will require a style flush to set a CSS transform transition. If done right we don’t expect to have to reflow or repaint any part of the page, provided we built the layer tree properly as shown in the first figure. Note that the style flush will contribute to the unlock delay (and so will the first composite time, as incorrectly shown in the diagram). If we can keep that style flush + first composite under, say, 50ms and each composite at 16ms or less, then we should have a smooth unlock animation.
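
As a rough illustration of the kind of style we'd flush, the fade can be expressed as an opacity transition that the compositor can then run on its own; the selector and class names here are hypothetical, not Gaia's actual markup:

/* Hypothetical lockscreen element. */
#lockscreen {
  transition: opacity 300ms linear;
}

/* Class added from JavaScript when the unlock gesture completes;
   animating only opacity avoids reflow and repaint on the main thread. */
#lockscreen.unlocking {
  opacity: 0;
}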

Next up let’s look at what’s actually happening in the unlock animation in practice…


Adam Lofting: Something special within ‘Hack the snippet’

Here are a couple of notes about ‘Hack the snippet‘ that I wanted to make sure got documented.

  1. It significantly changed people’s predisposition to Webmaker before they arrived on the site
  2. Its ‘post-interaction’ click-through-rate was equivalent to most one-click snippets

Behind these observations, something special was happening in ‘Hack the snippet’. I can’t tell you exactly what it was that produced the end effect, but the effect itself is worth remembering.

1. It ‘warmed people up’ to Webmaker

  • The ‘Hack the snippet’ snippet
    • was shown to the same audience (Firefox users) as eight other snippet variations we ran during the campaign
    • had the same % of users click through to the landing page
    • had the same on-site experience on webmaker.org as all the other snippet variations we tested (the same landing page, sign-up ask etc)
  • But when people who had interacted with ‘Hack the snippet’ landed on the website, they were more than three times as likely to signup for a webmaker account

Same audience, same engagement rate, same ask… but triple the conversion rate (most regular snippet traffic converted ~2%, ‘Hack the snippet’ traffic converted ~7%).

Something within that experience (and likely the overall quality of it) makes the Webmaker proposition more appealing to people who ‘hacked the snippet’. It could be one of many things: the simplicity, the guided learning, the feeling of power from editing the Firefox start page, the particular phrasing of the copy or many of the subtle design decisions. But whatever it was, it worked.

We need to keep looking for ways to recreate this.

Not everything we do going forwards needs to be a ‘Hack the snippet’ snippet (you can see how much time and effort went into that in the bug).

But when we think about these new-user experiences, we have a benchmark to compare things to. We know how much impact these things can have when all the parts align.

2. The ‘post-interaction’ CTR was as good as most one-click snippets

This is a quicker note:

  • Despite the steps involved in completing the ‘Hack the snippet’ on-page activity, the same total number of people clicked through when compared to a standard ‘one-click’ snippet.
  • We got the same % of the audience to engage with a learning activity and then click through to the webmaker site, as we usually get just giving them a link directly to Webmaker
    • This defies most “best practice” about minimizing the number of clicks

Again, this doesn’t give us an immediate thing we can repeat, but it gives us a benchmark to build on.

Lucas Rocha: Introducing dspec

With all the recent focus on baseline grids, keylines, and spacing markers from Android’s material design, I found myself wondering how I could make it easier to check the correctness of my Android UI implementation against the intended spec.

Wouldn’t it be nice if you could easily provide the spec values as input and get it rendered on top of your UI for comparison? Enter dspec, a super simple way to define UI specs that can be rendered on top of Android UIs.

Design specs can be defined either programmatically through a simple API or via JSON files. Specs can define various aspects of the baseline grid, keylines, and spacing markers such as visibility, offset, size, color, etc.

Baseline grid, keylines, and spacing markers in action.

Given the responsive nature of Android UIs, the keylines and spacing markers are positioned in relation to predefined reference points (e.g. left, right, vertical center, etc) instead of absolute offsets.

The JSON files are Android resources which means you can easily adapt the spec according to different form factors e.g. different specs for phones and tablets. The JSON specs provide a simple way for designers to communicate their intent in a computer-readable way.

You can integrate a DesignSpec with your custom views by drawing it in your View’s onDraw(Canvas) method. But the simplest way to draw a spec on top of a view is to enclose it in a DesignSpecFrameLayout, which can take a designSpec XML attribute pointing to the spec resource. For example:

<DesignSpecFrameLayout
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    app:designSpec="@raw/my_spec">
    ...
</DesignSpecFrameLayout>

I can’t wait to start using dspec in some of the new UI work we’re doing in Firefox for Android now. I hope you find it useful too. The code is available on Github. As usual, testing and fixes are very welcome. Enjoy!

Doug Belshaw: Web Literacy: More than just coding; an enabling education for our times [EdTech Digest]


Last week, my colleague Lainie Decoursy got in touch wondering if I could write a piece about web literacy. It was a pretty tight turnaround, but given that pretty much all I think about during my working hours is web literacy, it wasn’t too big an ask!

The result is a piece in EdTech Digest entitled Web Literacy: More than just coding; an enabling education for our times. It’s an overview of Mozilla’s work around Webmaker and, although most of the words are mine, I have to credit my colleagues for some useful edits.

Click here to read the post

I’ve closed comments here to encourage you to add your thoughts on the original post.

Robert O'Callahan: rr 2.0 Released

Thanks to the hard work of our contributors, rr 2.0 has been released. It has many improvements over our 1.0 release:

  • gdb's checkpoint, restart and delete checkpoint commands are supported.
    These are implemented using new infrastructure in rr 2.0 for fast cloning of replay sessions.
  • You can now run debuggee functions from gdb during replay.
    This is a big feature for rr, since normally a record-and-replay debugger will only replay what happened during recording --- and of course, function calls from gdb did not happen during recording. So under the hood, rr 2.0 introduces "diversion sessions", which run arbitrary code instead of following a replay. When you run a debuggee function from gdb, we clone the current replay session to a diversion session, run your requested function, then destroy the diversion and resume the replay.
  • Issues involving Haswell have been fixed. rr now runs reliably on Intel CPU families from Westmere to Haswell.
  • Support for running rr in a VM has been improved. Due to a VMWare bug, rr is not as reliable in VMWare guests as in other configurations, but in practice it still works well.
  • Trace compression has been implemented, with compression ratios of 5-40x depending on workload, dramatically reducing rr's storage and I/O usage.
  • Many many bugs have been fixed to improve reliability and enable rr to handle more diverse workloads.

All the features normally available from gdb now work with rr, making this an important milestone.

The ability to run debuggee functions makes it much easier to use rr to debug Firefox. For example you can dump DOM, frame and layer trees at any point during replay. You can debug Javascript to some extent by calling JS engine helpers such as DumpJSStack(). Some Mozilla developers have successfully used rr to fix real bugs. I use it for most of my Gecko debugging --- the first of my research projects that I've actually wanted to use :-).
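
To give a flavour of how this fits together, here is a hypothetical session sketch; the binary name and breakpoint are placeholders, and the gdb commands shown are the ones discussed above:

rr record ./my-firefox-test        # record an execution once
rr replay                          # replay the recording under gdb
(gdb) break nsLayoutUtils::PaintFrame
(gdb) continue
(gdb) checkpoint                   # gdb checkpoints, supported as of rr 2.0
(gdb) call DumpJSStack()           # run a debuggee function during replay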

Stephen Kitt has packaged rr for Debian.

Considerable progress has been made towards x86-64 support, but it's not ready yet. We expect x86-64 support to be the next milestone.

I recorded a screencast showing a quick demo of rr on Firefox:

Robert O'Callahan: VMWare CPUID Conditional Branch Performance Counter Bug

This post will be uninteresting to almost everyone. I'm putting it out as a matter of record; maybe someone will find it useful.

While getting rr working in VMWare guests, we ran into a tricky little bug. Typical usage of CPUID, e.g. to detect SSE2 support, looks like this pseudocode:

CPUID(0); // get maximum supported CPUID subfunction M
if (S <= M) {
    CPUID(S); // execute subfunction S
}
Thus, CPUID calls often occur in pairs with a conditional branch between them. The bug is that in a VMWare guest, when we count the number of conditional branches executed, the conditional branch between those two CPUIDs is usually (but not always) omitted from the count. We assume this is a VMWare bug because it does not happen on the same hardware outside of a VM, and it does not happen in a KVM-based VM.

Experiments show that some code sequences trigger the bug and other equivalent sequences don't. Single-stepping and other kinds of interference suppress the bug. My best guess is that VMWare optimizes some forms of the above code, perhaps to reduce the number of VM exits, and in so doing skips execution of the conditional branch, without taking into account that this might perturb performance counter values. Admittedly, it's unusual for software to rely on precise performance counter values the way rr does.

This sucks for rr because rr relies on these counts being accurate. We sometimes find that replay diverges because one of these conditional branches was not counted during recording but is counted during replay. (The other way around is possible too, but less frequently observed.) We have some heuristics and workarounds, but it's difficult to fully work around without adding significant complexity and/or slowdown.

The bug is easily reproduced: just use rr to record and replay anything simple. When replaying, rr automatically detects the presence of the bug and prints a warning on the console:

rr: Warning: You appear to be running in a VMWare guest with a bug
where a conditional branch instruction between two CPUID instructions
sometimes fails to be counted by the conditional branch performance
counter. Partial workarounds have been enabled but replay may diverge.
Consider running rr not in a VMWare guest.

Steps forward:

  • Find a way to report this bug to VMWare.
  • Linux hosts can run rr in KVM-based VMs or directly on the host. Xen VMs might work too.
  • Parallels apparently supports PMU virtualization now; if Parallels doesn't have this bug, it might be the best way to run rr on a Mac or Windows host.
  • We can add a "careful mode" that would probably almost always replay successfully, albeit with additional overhead.
  • The bug is less likely to show up once rr supports x86-64. At least in Firefox, CPUID instructions are most commonly used to detect the presence of SSE2, which is unnecessary on x86-64.
  • In practice, recording Firefox in VMWare generally works well without hitting this bug, so maybe we don't need to invest a lot in fixing it.

Daniel Stenberg: Video perhaps?

I decided to try making a short video about my current work and making it available to you all. I try to keep it short (5-7 minutes) and I’m certainly no pro at it, but I will try to make a weekly one for a while and see if it turns out to be any fun. I’m going to read your comments and responses to this very eagerly, and that will help me decide how to proceed with this experiment.

Enjoy.

Jordan Lund: This Week In Releng - Sept 1st, 2014

Major Highlights:

Completed work (resolution is 'FIXED'):


In progress work (unresolved and not assigned to nobody):

Brian Birtles: Animations on Fire @ Graphical Web 2014

Just recently I had the chance to talk about authoring animations of CSS/SVG for better performance at The Graphical Web 2014. I thought I’d put up the slides here in case they’re useful to others.

In the rare chance that you’re reading this blog directly or the syndicator didn’t eat this bit, you can view the slides right here:

James Long: Taming the Asynchronous Beast with CSP in JavaScript

This is an entry in a series about rebuilding my custom blog with react, CSP, and other modern tech. Read more in the blog rebuild series.

Every piece of software deals with complex control flow mechanisms like callbacks, promises, events, and streams. Some require simple asynchronous coordination, others processing of event or stream-based data, and many deal with both. Your solution to this has a deep impact on your code.

It's not surprising that a multitude of solutions exist. Callbacks are a dumb simple way for passing single values around asynchronously, and promises are a more refined solution to the same problem. Event emitters and streams allow asynchronous handling of multiple values. FRP is a different approach which tackles streams and events more elegantly, but isn't as good at asynchronous coordination. It can be overwhelming just to know where to start in all of this.

I think things can be simplified to a single abstraction since the underlying problem to all of this is the same. I present to you CSP and the concept of channels. CSP has been highly influential in Go and recently Clojure embraced it as well with core.async. There's even a C version. It's safe to say that it's becoming quite popular (and validated) and I think we need to try it out in JavaScript. I'm not going to spend time comparing it with every other solution (promises, FRP) because it would take too long and only incite remarks about how I wasn't using it right. I hope my examples do a good enough job convincing you themselves.

Typically channels are useful for coordinating truly concurrent tasks that might run at the same time on separate threads. They are actually just as useful in a single-threaded environment because they solve a more general problem of coordinating anything asynchronous, which is everything in JavaScript.

Two posts you should read in addition to this are David Nolen's exploration of core.async and the core.async announcement. You will find the rationale behind CSP and clear examples of how powerful channels are.

In this post, I will dive deeply into how we can use this in JavaScript, and illustrate many key points about it. CSP is enabled by js-csp which I will explain more soon. Here is a quick peek:

var ch = chan();

go(function*() {
  var val;
  while((val = yield take(ch)) !== csp.CLOSED) {
    console.log(val);
  }
});

go(function*() {
  yield put(ch, 1);
  yield take(timeout(1000));
  yield put(ch, 2);
  ch.close();
});

Note: these interactive example assume a very modern browser and have only been heavily tested in Firefox and Chrome.

We get synchronous-style code with generators by default, and a sophisticated mechanism for coordinating tasks that is simple for basic async workflows but also scales to complex scenarios.

Let's Talk About Promises

Before we dig in, we should talk about promises. Promises are cool. I am forever grateful that they have mostly moved the JavaScript community off of the terrible callback endemic. I really do like them a lot. Unlike some other advocates of CSP, I think they actually have a good error handling story because JavaScript does a good job of tracking the location from wherever an Error object was created (even so, find the "icing on the cake" later in this article about debugging errors from channels). The way promises simulate try/catch for asynchronous code is neat.

I do have one issue with how errors are handled in promises: because it captures any error from a handler, you need to mark the end of a promise chain (with something like done()) or else it will suppress errors. It's all too easy during development to make a simple typo and have the error gobbled up by promises because you forgot to attach an error handler.

I know that is a critical design decision for promises so that you get try/catch for async code, but I've been bitten too often by it. I have to wonder if it's really worth the ability to apply try/catch to async to ignore everything like TypeError and ReferenceError, or if there's a more controlled way to handle errors.

Error handling in CSP is definitely more manual, as you will see. But I also think it makes it clearer where errors are handled and makes it easier to rationalize about them. Additionally, by default syntax/null/etc errors are simply thrown and not gobbled up. This has drawbacks too, but I'm liking it so far.

I lied. I have a second complaint about promises: generators are an after-thought. In my opinion, anything that deals with asynchronous behavior and doesn't natively embrace generators is broken (though understandable considering you need to cross-compile them until they are fully implemented).

Lastly, when it comes down to it, using a channel is not that different from using a promise. Compare the following code that takes a value and returns a different one:

Promise

promiseReturningFunction().then(function(value) {
  return value * 2;
});

// Or with generators:

spawn(function*() {
  return (yield promiseReturningFunction()) * 2;
});

Channels

go(function*() {
  return (yield take(channelReturningFunction())) * 2;
});

The similarity is striking, especially when using generators with promises. This is a trivial example too, and when you start doing more async work the latter 2 approaches look far better than raw promises.

Channels are marginally better than promises with generators for single-value asynchronous coordination, but the best part is that you can do all sorts of more complex workflows that also largely remove the need for streams and event-based systems.

Using CSP in JavaScript

The fundamental idea of CSP is an old one: handle coordination between processes via message passing. The unique ideas of modern CSP are that processes can be simple light-weight cooperative threads, use channels to pass messages, and block execution when taking or putting from channels. This tends to make it very easy to express complex asynchronous flows.

Generators are coming to JavaScript and allow us to suspend and resume functions. This lets us program in a synchronous style, using everything from while loops to try/catch statements, but "halt" execution at any point. As I said above, anything dealing with asynchronous behavior that doesn't completely embrace generators natively is busted.

CSP channels do exactly that. Using generators, the js-csp project has been able to faithfully port Clojure's core.async to JavaScript. We will use all the same terms and function names as core.async. I eventually forked the project to add a few things:

  • The go block which spawns a lightweight process always returns a channel that holds the final value from the process
  • sleep was a special operation that you could yield, but if you wanted an actual channel that timed out you had to use timeout instead. I removed sleep so you always use timeout which makes it more consistent.
  • I added a takem instruction which stands for "take maybe". If an Error object is passed through the channel it will be thrown automatically at the place where takem was yielded.

This project is early in development so things may change, but it should be relatively stable. You will need to cross-compile generators to run it in all browsers; I recommend the ridiculously awesome regenerator project.
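As a quick illustration of the first change above, the channel returned by go() yields whatever the generator returns, so one process can wait on another's final value. A minimal sketch using only the functions already shown:

go(function*() {
  // go() returns a channel; the process's return value is put onto it
  var resultCh = go(function*() {
    yield take(timeout(100));
    return 42;
  });
  console.log(yield take(resultCh)); // logs 42 once the inner process finishes
});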

If you don't know much about generators, Kyle Simpson posted a great 4-part series about them. He even explores CSP in the last post but misses some critical points which have serious consequences like breaking composition and the ease of transforming values.

Basic Principles

Let's study the basic principles of CSP:

  • Processes are spawned with go, and channels are created with chan. Processes are completely unaware of each other but talk through channels.
  • Use take and put to operate on channels within a process. take gets a value and blocks if one isn't available. put puts a value on a channel and blocks if a process isn't available to take it.

Wow, that's it! Pretty simple, right? There are more advanced usages of CSP, but even with just those four functions (go, chan, take, and put) we have a powerful way to express asynchronous coordination.

Here's an example. We create 3 processes that put values on a channel and sleep for various times, and a 4th process that takes values off the channel and logs them. If you run the code below, you will see that these processes are running as if they are separate threads! Each process has its own while loop that loops forever, which is an amazingly powerful way to express asynchronous interaction. The 4th process closes the channel after 10 values come through, which stops the other processes because a put on a closed channel returns false.

var ch = chan();

go(function*() {
  while(yield put(ch, 1)) {
    yield take(timeout(250));
  }
});

go(function*() {
  while(yield put(ch, 2)) {
    yield take(timeout(300));
  }
});

go(function*() {
  while(yield put(ch, 3)) {
    yield take(timeout(1000));
  }
});

go(function*() {
  for(var i=0; i<10; i++) {
    console.log(yield take(ch));
  }
  ch.close();
});

Run the code to see a visualization that shows you what actually happened. If you hover over the arrows you will see details of how values moved across the program. The 3 processes all put a value on the channel at the start of the program, but then slept for different times. Note that the first 3 processes were almost always sleeping, and the 4th was almost always blocking. Since the 4th process was always available to take a value, the other processes never had to block.

timeout returns a channel that closes after a specific amount of time. When a channel closes, all blocked takes on it are resumed with the value of csp.CLOSED, and all blocked puts are resumed with false.

Each process also ended at different times because they woke up at different times. You don't always have to explicitly close channels; do it only when you want to send that specific signal to other parts of the program. Otherwise, a channel that you don't use anymore (and any processes blocked on it) will simply be garbage collected.

Here's another example. This program creates 2 processes that both take and put from/onto the same channel. Again, they contain their own event loops that run until the channel is closed. The second process kicks off the interaction by putting a value onto the channel, and you can see how they interact in the visualization below. The 3rd process just closes the channel after 5 seconds.

var ch = chan();

go(function*() {
  var v;
  while((v = yield take(ch)) !== csp.CLOSED) {
    console.log(v);
    yield take(timeout(300));
    yield put(ch, 2);
  }
});

go(function*() {
  var v;
  yield put(ch, 1);
  while((v = yield take(ch)) !== csp.CLOSED) {
    console.log(v);
    yield take(timeout(200));
    yield put(ch, 3);
  }
});

go(function*() {
  yield take(timeout(5000));
  ch.close();
});

You can see how values bounce back and forth between the processes. This kind of interaction would be extremely difficult with many other asynchronous solutions out there.

These while loops have to check if the channel is closed when taking a value off the channel. You can do this by checking to see if the value is the special csp.CLOSED value. In Clojure, they pass nil to indicate closed and can use it simply in a conditional (like if((v = take(ch))) {}). We don't have that luxury in JavaScript because several things evaluate to false, even 0.

One more example. It's really important to understand that both take and put will block until both sides are there to actually pass the value. In the above examples it's clear that a take would block a process, but here's one where put obviously blocks until a take is performed.

var ch = chan();

go(function*() {
  yield put(ch, 5);
  ch.close();
});

go(function*() {
  yield take(timeout(1000));
  console.log(yield take(ch));
});

The first process tried to put 5 on the channel, but nobody was there to take it, so it waited. This simple behavior turns out to be extremely powerful and adaptable to all sorts of complex asynchronous flows, from simple rendezvous to complex flows with timeouts.

Channels as Promises

We've got a lot more cool stuff to look at, but let's get this out of the way. How do processes map to promises, exactly? Honestly, this isn't really that interesting of a use case for channels, but it's necessary because we do this kind of thing all the time in JavaScript.

Treating a channel as a promise is as simple as spawning a process and putting a single value onto it. That means that every single async operation is its own process that will "fulfill" a value by putting it onto its channel. The key is that these are lightweight processes, and you are able to create hundreds or even thousands of them. I am still tuning the performance of js-csp, but creating many channels should be perfectly fine.

Here's an example that shows how many of the promise behaviors map to channels. httpRequest gives us a channel interface for doing AJAX, wrapping a callback just like a promise would. jsonRequest transforms the value from httpRequest into a JSON object, and errors are handled throughout all of this.

function httpRequest(url) {
  var ch = chan();
  var req = new XMLHttpRequest();
  req.onload = function() {
    if(req.status === 200) {
      csp.putAsync(ch, this.responseText);
    } else {
      csp.putAsync(ch, new Error(this.responseText));
    }
  };
  req.open('get', url, true);
  req.send();
  return ch;
}

function jsonRequest(url) {
  return go(function*() {
    var value = yield take(httpRequest(url));
    if(!(value instanceof Error)) {
      value = JSON.parse(value);
    }
    return value;
  });
}

go(function*() {
  var data = yield takem(jsonRequest('sample.json'));
  console.log(JSON.stringify(data));
});

You can see how this is very similar to code that uses promises with generators. The go function by default returns a channel that will have the value returned from the generator, so it's easy to create one-shot promise-like processes like jsonRequest. This also introduces putAsync (there's also takeAsync). These functions allow you to put values on channels outside of a go block, and can take callbacks which run when completed.
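For completeness, here is a small sketch of using putAsync and takeAsync from plain callback code, outside any go block (assuming, as described above, that the optional callbacks run when the operation completes and that takeAsync's callback receives the taken value):

var ch = chan();

// Somewhere in ordinary callback-land: wait for a value without a go block.
csp.takeAsync(ch, function(value) {
  console.log('took', value);
});

// Elsewhere: deliver a value, with an optional completion callback.
csp.putAsync(ch, 42, function() {
  console.log('put completed');
});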

One of the most interesting aspects here is error handling. It's very different from promises, and more explicit. But in a good way, not like the awkward juggling of callbacks. Errors are simply sent through channels like everything else. Transformative functions like jsonRequest only need to operate on the value if it's not an error. In my code, I've noticed that really only a few channels send errors, and most of them (usually higher-level ones) don't need to worry because errors are handled at the lower level. The benefit over promises is that when I know I don't need to worry about errors, I don't have to worry about ending the promise chain or anything. That overhead simply doesn't exist.

You probably noticed I said yield takem(jsonRequest('sample.json')) instead of using take. takem is another operation like take, except that when an Error comes off the channel, it is thrown. Try changing the URL and checking your devtools console. Generators allow you to throw errors from wherever they are paused, so the process will be aborted if it doesn't handle the error. How does it handle the error? With the native try/catch of course! This is so cool because it's a very terse way to handle errors and lets us use the synchronous form we are used to. There's icing on the cake, too: in your debugger, you can set "pause on exceptions" and it should pause where it was thrown, giving you additional context and letting you inspect the local variables in your process (while the stack of the Error will tell you where the error actually happened). This doesn't work from the above editors because of eval and web worker complications.
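For example, a process can guard a takem with an ordinary try/catch. This is just a sketch of the pattern described above, reusing the jsonRequest function from earlier:

go(function*() {
  try {
    var data = yield takem(jsonRequest('sample.json'));
    console.log(JSON.stringify(data));
  } catch(e) {
    // The Error that came off the channel is thrown right here, where the
    // generator was paused, so we can handle it synchronously.
    console.log('request failed: ' + e.message);
  }
});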

Another option for error handling is to create separate channels where errors are sent. This is appropriate in certain (more complicated) scenarios. It's up to you.

Taming User Interfaces

We've seen a few abstract programs using channels and also how we can do typical asynchronous coordination with them. Now let's look at something much more interesting: completely reinventing how we interact with user interfaces.

The Clojure community has blown this door wide open, and I'm going to steal one of David Nolen's examples from his post to start with. (you'll also want to check out his other post). Here we make a simple listen function which gives us a channel interface for listening to DOM events, and we start a process which handles a mouseover event and prints the coordinates.

function listen(el, type) {
  var ch = chan();
  el.addEventListener(type, function(e) {
    csp.putAsync(ch, e);
  });
  return ch;
}

go(function*() {
  var el = document.querySelector('#ui1');
  var ch = listen(el, 'mousemove');
  while(true) {
    var e = yield take(ch);
    el.innerHTML = ((e.layerX || e.clientX) + ', ' + (e.layerY || e.clientY));
  }
});

Go ahead, move the mouse over the area above and you'll see it respond. We have essentially created a local event loop for our own purposes. You'll see with more complex examples that this is an extraordinary way to deal with user interfaces, bringing simplicity to complex workflows.

Let's also track where the user clicks the element. Here's where channels begin to shine, if they didn't already. Our local event loop handles both the mousemove and click events, and everything is nicely scoped into a single function. There are no callbacks or event handlers anywhere. If you've ever tried to keep track of state across event handlers, this should look like heaven.

function listen(el, type) {
  var ch = chan();
  el.addEventListener(type, function(e) {
    csp.putAsync(ch, e);
  });
  return ch;
}

go(function*() {
  var el = document.querySelector('#ui2');
  var mousech = listen(el, 'mousemove');
  var clickch = listen(el, 'click');
  var mousePos = [0, 0];
  var clickPos = [0, 0];
  while(true) {
    var v = yield alts([mousech, clickch]);
    var e = v.value;
    if(v.channel === mousech) {
      mousePos = [e.layerX || e.clientX, e.layerY || e.clientY];
    } else {
      clickPos = [e.layerX || e.clientX, e.layerY || e.clientY];
    }
    el.innerHTML = (mousePos[0] + ', ' + mousePos[1] + ' — ' +
                    clickPos[0] + ', ' + clickPos[1]);
  }
});

Mouse over the above area, and click on it. This is possible because of a new operation alts, which takes multiple channels and blocks until one of them sends a value. The return value is an object of the form { value, channel }, where value is the value returned and channel is the channel that completed the operation. We can compare which channel sent the value and conditionally respond to the specific event.

alts isn't constrained to performing a take on each channel. It blocks until an operation completes on any of the channels, and by default that operation is a take. But you can tell it to perform a put by specifying an array with a channel and a value instead of just a channel; for example, alts([ch1, ch2, [ch3, 5]]) performs a put on ch3 with the value 5 and a take on ch1 and ch2.
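Here is a sketch of that mixed form (ch1, ch2, and ch3 are hypothetical channels, and I'm assuming that, as in core.async, a completed put reports true or false as its value):

go(function*() {
  var r = yield alts([ch1, ch2, [ch3, 5]]);
  if(r.channel === ch3) {
    // The put won: r.value should be true (or false if ch3 was closed).
    console.log('put 5 onto ch3:', r.value);
  } else {
    // One of the takes won: r.value is the value taken off ch1 or ch2.
    console.log('took', r.value);
  }
});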

Expressing UI interactions with alts maps extremely well to how we intuitively think about them. It allows us to wrap events together into a single event, and respond accordingly. No callbacks, no event handlers, no tracking state across functions. We think about UI interactions like this all the time, why not express your code the same way?

If you've ever developed UI controls, you know how complex they quickly get. You need to delay actions by a certain amount, but cancel that action altogether if something else happens, and coordinate all sorts of behaviors. Let's look at a slightly more complex example: a tooltip.

Our tooltip appears if you hover over an item for 500ms. The complete interaction (waiting that long, cancelling if you mouse out, and adding/removing the DOM nodes) is implemented below. This is the complete code; it relies on nothing other than the CSP library.

function listen(el, type, ch) {
  ch = ch || chan();
  el.addEventListener(type, function(e) {
    csp.putAsync(ch, e);
  });
  return ch;
}

function listenQuery(parent, query, type) {
  var ch = chan();
  var els = Array.prototype.slice.call(parent.querySelectorAll(query));
  els.forEach(function(el) {
    listen(el, type, ch);
  });
  return ch;
}

function tooltip(el, content, cancel) {
  return go(function*() {
    var r = yield alts([cancel, timeout(500)]);
    if(r.channel !== cancel) {
      var tip = document.createElement('div');
      tip.innerHTML = content;
      tip.className = 'tip-up';
      tip.style.left = el.offsetLeft - 110 + 'px';
      tip.style.top = el.offsetTop + 75 + 'px';
      el.parentNode.appendChild(tip);

      yield take(cancel);
      el.parentNode.removeChild(tip);
    }
  });
}

function menu(hoverch, outch) {
  go(function*() {
    while(true) {
      var e = yield take(hoverch);
      tooltip(e.target, 'a tip for ' + e.target.innerHTML, outch);
    }
  });
}

var el = document.querySelector('#ui3');
el.innerHTML = '<span>one</span> <span>two</span> <span>three</span>';
menu(listenQuery(el, 'span', 'mouseover'),
     listenQuery(el, 'span', 'mouseout'));

Hover over the words above for a little bit and a tooltip should appear. Most of our code is either DOM management or a few utility functions for translating DOM events into channels. We made a new utility function listenQuery that attaches event listeners to a set of DOM elements and streams all those events through a single channel.

We already get a hint of how well you can abstract UI code with channels. There are essentially two components: the menu and the tooltip. The menu is a process with its local event loop that waits for something to come from hoverch and creates a tooltip for the target.

The tooltip is its own process that waits 500ms to appear, and if nothing came from the cancel channel it adds the DOM node, waits for a signal from cancel and removes itself. It's extraordinarily straightforward to code all kinds of interactions.

Note that I never said "wait for a hover event", but rather "wait for a signal from hoverch". We have no idea what is on the other end of hoverch actually sending the signals. In our code, it is a real mouseover event, but it could be anything else. We've achieved a fantastic separation of concerns. David Nolen talks more about this in his post.

These have been somewhat simple examples to keep the code short, but if you are intrigued by this you should also check out David's walkthrough where he creates a real autocompleter. All of these ideas come even more to life when things get more complex.

Buffering

There's another feature of channels which is necessary when doing certain kinds of work: buffering. Channels can be buffered, which frees up both sides to process things at their own pace and not worry about someone blocking the whole thing.

When a channel is buffered, a put will happen immediately if room is available in the buffer, and a take will return if there's something in the buffer and otherwise block until there's something available.

Take a look below. You can buffer a channel by passing an integer to the constructor, which is the buffer size. We create a channel with a buffer size of 13, a process that puts 15 values onto the channel, and another process that takes 5 values off it every 200ms. Run the code and you'll see how buffering makes a difference.

var start = Date.now();
var ch = chan(13);

go(function*() {
  for(var x=0; x<15; x++) {
    yield put(ch, x);
    console.log('put ' + x);
  }
});

go(function*() {
  while(!ch.closed) {
    yield take(timeout(200));
    for(var i=0; i<5; i++) {
      console.log(yield take(ch));
    }
  }
});

go(function*() {
  yield take(timeout(1000));
  ch.close();
});

The first 13 puts happen immediately, but then the producer is blocked because the buffer is full. When a take happens, it's able to put another value into the buffer, and so on. Try removing the 13 from the chan constructor and see the difference.

There are 3 types of buffers: fixed, dropping, and sliding. When a put is performed on a fixed buffer that is full, it will block as normal. However, dropping and sliding buffers never block: if the buffer is full when a put is performed, a dropping buffer will simply drop the new value (it's lost forever), while a sliding buffer will remove the oldest value to make room for the new one.

Try it out above. Change chan(13) to chan(csp.buffers.dropping(5)) and you'll see that all the puts happen immediately, but only the first 5 values are taken and logged. The last 10 puts just dropped the values. You may see 5 nulls printed as well because the second process ran one last time but nothing was left in the buffer.

Try it with chan(csp.buffers.sliding(5)) and you'll see that you get the last 5 values instead.

You can implement all sorts of performance strategies with this, like backpressure. If you were handling server requests, you could use a dropping buffer of a fixed size that starts dropping requests at a certain point. Or if you were doing some heavy processing from a frequent DOM event, you could use a sliding buffer to only process the latest values as fast as possible.
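For instance, here is a minimal sketch of the DOM case, reusing the listen pattern from earlier but with a sliding buffer of one so that only the most recent event is kept (expensiveProcessing is a hypothetical stand-in for your heavy work):

function listenLatest(el, type) {
  // A sliding buffer of 1 keeps only the newest event; older ones are discarded.
  var ch = chan(csp.buffers.sliding(1));
  el.addEventListener(type, function(e) { csp.putAsync(ch, e); });
  return ch;
}

go(function*() {
  var moves = listenLatest(document.body, 'mousemove');
  while(true) {
    var e = yield take(moves);
    expensiveProcessing(e); // always works on the latest position, never a backlog
  }
});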

Transducers — Transformation of Values

Channels are a form of streams, and as with anything stream-like, you will want to frequently transform the data as it comes through. Our examples were simple enough to avoid this, but you will want to use map on channels just as frequently as you use map on arrays.

js-csp comes with a bunch of builtin transformations which provide a powerful set of tools for managing channels. However, you'll notice that a lot of them are duplications of ordinary transformers for arrays (map, filter, etc).
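In the meantime, you can hand-roll many of these transformations with nothing but the primitives we've already seen. Here's a sketch of a map over a channel (this is not the library's builtin helper, just an illustration; someChannel is a hypothetical source channel):

function mapChan(f, input) {
  var output = chan();
  go(function*() {
    var v;
    while((v = yield take(input)) !== csp.CLOSED) {
      yield put(output, f(v));
    }
    output.close(); // propagate closing downstream
  });
  return output;
}

// Usage: every value flowing through someChannel comes out doubled.
var doubled = mapChan(function(x) { return x * 2; }, someChannel);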

Within the past month Clojure has actually solved this with something called transducers. Even better, while I was writing this post, another post about CSP and transducers in JS came out. The author's channel implementation is extremely primitive, but he mostly focuses on transducers, and it's a great walkthrough of how we can apply them to channels.

I ran out of time to fully research transducers and show off good examples here. Most likely I will be posting more about js-csp, so expect to see more about that soon.

The Beginning of Something New

From now on I will always be using js-csp in my projects. I sincerely believe that this is a better way to express asynchronous communication, and it has a wide impact on everything from server management to user interfaces. I hope that the JS community learns from it, and I will be posting more articles as I write more code with it.

I also ran out of time to explore using sweet.js macros to implement native syntax for this. Imagine if you could just use var v = <-ch to take from a channel, or something like it? I'm definitely going to do this soon, so expect another post. Oh the power!

js-csp itself is somewhat new so I wouldn't go and write a production app quite yet, but it will get there soon. I give my gratitude to ubolonton for the fantastic initial implementation. It's up to you whether to use my fork or his project, but we will hopefully merge them soon.

Yunier José Sosa Vázquez: New Firefox version improves the cache and increases HTML5 support

Almost in unison with the start of the school year, and ready for us to enjoy, Mozilla has released a new version of Firefox. Let's take a look at its new features:

A new HTTP cache provides improved performance, including recovery from unexpected shutdowns. This translates into faster loading of content and faster display in the browser window. At the same time, we can enjoy the integration of generational garbage collection, which improves the browser's memory consumption.

We can now see the usage history of the accounts stored in the Password Manager, the Add-ons Manager has been improved, and the way we go back, go forward, reload, and bookmark from the context menu (right-click) has been changed.

When we search within a page (Ctrl+F), the number of matches found is now displayed. HTML5 support has also been improved.

For Android

  • Ability to switch between supported languages without having to close the browser.
  • A Clear History button has been added to the History panel.
  • Support for Android 2.2 and ARMv6 processors has ended.
  • Gamepad API finalized and enabled.
  • Added the Armenian [hy-AM], Basque [eu], Fulah [ff], Icelandic [is], Scottish Gaelic [gd], and Welsh [cy] locales.

Other new features

  • drawFocusIfNeeded enabled by default.
  • The CSS3 property position:sticky is now enabled and active by default.
  • mix-blend-mode enabled by default.
  • The Vibration API has been updated to the latest W3C specification.
  • box-decoration-break enabled by default.
  • The Inspect button has been moved to the top left.
  • New Web Audio editor.
  • HiDPI support in the developer tools.
  • Code completion and inline documentation added to Scratchpad (Shift+F4).

If you want to know more, you can read the release notes.

You can get this version from our Downloads section, in Spanish and English, for Linux, Mac, Windows, and Android. Remember that in order to browse through proxy servers you must set the preference network.negotiate-auth.allow-insecure-ntlm-v1 to true in about:config.

Lukas Blakk: About to do some major learning

Tomorrow morning the first ever Ascend Project kicks off in Portland, OR.  I just completed a month-long vacation where we drove from San Francisco out to the Georgian Bay, Ontario (with a few stops along the way including playing hockey in the Cleveland Gay Games) and back again through the top of the US until we arrived here in Portland.  I’m staying in this city for 6 weeks, will be going in to the office *every* day, and doing everything I can to guide & mentor 20 people in their learning on becoming open source contributors.

Going to do my best to write about the experience as this one is all about learning what works and what doesn’t in order to iterate and improve the next pilot which will take place in New Orleans in 2015. It’s been almost a year since I first proposed this plan and got the OK to go for it.  See http://ascendproject.org for posts on the process so far and for updates by the participants.

Hannah Kane: On Retrospectives

Last week I convened a small, cross-functional team for a half hour debrief of the work we’d done together on last month’s Net Neutrality trainings and tweetchat. The trainings and tweetchat were largely successful efforts, but this debrief was to discuss the process of working together.

Here’s how we did it:

  • First I sent around an etherpad with some questions. There was a section for populating a timeline of the entire process from conception to completion. And there were sections for capturing what worked well, and what people felt could be improved upon.
  • As people added their thoughts to the etherpad, it became clear to me that a Vidyo chat would be useful. There were differences of opinion and indications of tension that I felt ought to be surfaced and discussed.
  • Everyone took 30 minutes out of their busy schedules to meet over Vidyo, which I totally appreciated! I started the meeting by stating my goal which was to reach a shared agreement about two or three concrete things we would try to do more of or less of in the future.
  • I would have loved to have had a full hour, as I felt we were just starting to surface the real issues near the end of the call. It felt a little strange to have to cut off the conversation right when we were getting into it.
  • In the short time we had, we were able to touch on what I think were probably the most salient points from the pad, and everyone had a chance to speak. We also identified four concrete things to do differently in the future. By those measures, I think the debrief was successful.
  • Some additional takeaways were shared via email after the call, and I think everyone is committed to making this the start of an ongoing process of continuous improvement.

I called this a “debrief” because it was a relatively unstructured conversation looking back at the end of a project. In my mind, a debrief is one flavor of a larger category of what I’d call “retrospecting behaviors.”

Here are some thoughts about what makes a good retrospective:

You don’t need to save retrospecting for the end. Retrospectives are different from post-mortems in this way. You can retrospect at any point during a project, and, in fact, for teams that work together consistently, retrospectives can be baked into your regular working rhythm.

First things first: start with a neutral timeline. It’s amazing how much we can forget. Spend a couple of minutes re-creating an objective timeline of what happened leading up to the retrospective. Use calendars, emails, blog posts, etc. to re-create the major milestones that occurred.

Bring data. If possible, the facilitator should bring data or solicit data from the team. Data can include so many things! Here are just a few examples:

  • Quantitative and qualitative measures of success.
  • Data about how long things took to finish.
  • Subjective experiences: each team member’s high point and low point. One word or phrase from each team member describing their experience.

Be ready for the awkward. For a breakthrough to happen, you often have to go through something uncomfortable first. No one should feel unsafe or attacked, of course, but transformation happens when people have the courage to speak and hear painful truths. Not every retrospective will feel like a group therapy session, but surfacing tensions in productive, solution-oriented ways is good for teams.

Despite their name, retrospectives are about the future. The outcome of any retrospective (whether it’s a team meeting, or 5 minutes of solo thinking time at your desk) should be at least one specific thing you’d like to do differently in the future. Make it visible to you and your teammates.

A “Do Differently” is a specific and immediately actionable experiment. Commit to trying something different just for a week. Because the risk is low (it’s just a week!), you can try something pretty dramatic. Choose something you can start right away.  “Let’s try using Trello for a week” or “Let’s see if having a 10-minute check-in each morning reduces confusion.”

Retrospectives often also inspire one-time actions and new rules. One-time actions are things like, “We need to do a CRM training for the team” or “We should update our list of vendors because no one knew who to call when we ran into trouble.” New rules are things like, “We should start every project with a kick-off meeting, no matter how small the project is.”

Both one-time actions and new rules are important, and should be captured and assigned a responsible person. But they are not the same as “Do Differentlys” which are meant to create a culture of experimentation that is necessary for continuous improvement.

It’s not about how well you followed a process; it’s about how well the process is serving the goals. This is another difference between retrospectives and post-mortems. Whereas in a post-mortem, you might be discussing what you did “right” and “wrong” (i.e. how well you adhered to some agreed upon rules or norms), in a retrospective you discuss what “worked” and “didn’t work” (which might lead to changing those norms).

Celebrate. Retrospectives are occasions to recognize the good as well as the bad. I won’t lie. Some of my favorite retrospectives involved cake.

What would you add to or change about the above list?


Hal Wine: New Hg Server Status Page


Just a quick note to let folks know that the Developer Services team continues to make improvements on Mozilla’s Mercurial server. We’ve set up a status page to make it easier to check on current status.

As we continue to improve monitoring and status displays, you’ll always find the “latest and greatest” on this page. And we’ll keep the page updated with recent improvements to the system. We hope this page will become your first stop whenever you have questions about our Mercurial server.

Wil Clouser: Retiring AMO’s Landfill

A few years ago we deployed a landfill for AMO – a place where people could play around with the site without affecting our production install and developers could easily get some data to import into their local development environment.

I think the idea was sound (it was modeled after Bugzilla’s landfill) and it was moderately successful but it never grew like I had hoped and even after being available for so long, it had very little usable data in it. It could help you get a site running, but if you wanted to, say, test pagination you’d still need to create enough objects to actually have more than one page.

A broader and more scalable alternative is a script which can make as many or as few objects in the database as you’d like. Want 500 apps? No problem. Want 10000 apps in one category so you can test how categories scale? Sure. Want those 10000 apps owned by a single user? That should be as easy as running a command line script.

That’s the theory anyway. The idea is being explored on the wiki for the Marketplace (and AMO developers are also in the discussion). If you’re interested in being involved or seeing our progress watch that wiki page or join #marketplace on irc.mozilla.org.

Eric Shepherd: The Sheppy Report: September 5, 2014

First, a personal note:

Holy frickity-frak! It’s September!?

Okay, back to business. My work this week was all over the place. Got tons done but, of course, not what I meant to do. That said, I did actually make progress on the stuff I’d planned to do this week, so that’s something, anyway.

I love this job. The fact that I start my week expecting one awesome thing, and find myself doing two totally different awesome things instead, is pretty freaking cool.

What I did this week

  • Filed bug 1061624 about the new page editing window lacking a link to the Tagging Guide next to the tag edit area.
  • Followed up on some tweets reporting problems with MDN content; made sure the people working on that material knew about the issues at hand, and shared reassurances that we’re on the problem.
  • Tweaked the Toolbox page to mention where full-page screenshots are captured in both locations where the feature is described (instead of just the first place). Also added additional tags to the page.
  • Had a lot of discussions, both by video and by email and IRC, about planning and procedures for documentation work. A new effort is underway to come up with a standard process.
  • Submitted my proposal for changes to our documentation process to Ali, who will be collating this input from all the staff writers and producing a full proposal.
  • Checked the MDN Inbox: it was empty.
  • Experimentation with existing WebRTC examples.
  • Moved some WebRTC content to its new home on MDN.
  • Filed bug 1062538, which suggests that there be a way to close the expanded title/TOC editor on MDN, once it’s been expanded.
  • Fixed the parent page links for the older WebAPI docs; somehow they all believed their parent to be in the Polish part of MDN.
  • Corrected grammar in the article about HTMLMediaElement, and updated the page’s tags.
  • Filed a bug about search behavior in the MDN header, but it was a duplicate.
  • Discovered a privacy issue bug and filed it. A fix is already forthcoming.
  • bz told me that previewing changes to docs in the API reference results in an internal service error; I did some experimenting, then filed bug 1062856 for him. I also pinged mdn-drivers since it seems potentially serious.
  • Discovered an extant, known bug in media streaming which prevents me from determining the dimensions of the video correctly from script. This is breaking many samples for WebRTC.
  • Went through all pages with KumaScript errors (there were only 10). All but one were fixed with a shift-refresh. The last one had a typo in a macro call and worked fine after I fixed the error.
  • Expanded on Florian’s Glossary entry about endianness by adding info on common platforms and processors for each endianness.
  • Filed bug 1063560 about search results claiming to be for English only when your search was for locale=*.
  • Discovered and filed bug 1063582 about MDN edits not showing up until you refresh after saving. This had been fixed at one point but has broken again very recently.
  • Started designing a service to run on Mozilla’s PaaS platform to host the server side of MDN samples. My plan is nifty and I’ll share more about it when I’m done putting rough drafts together.
  • Extended discussions with MDN dev team about various issues and bugs.
  • Helped with the debugging of a Firefox bug I filed earlier in the week.

Meetings attended this week

Tuesday

  • #mdndev planning meeting
  • 1:1 with Jean-Yves

Wednesday

  • 1:1 meeting with Ali

Thursday

  • Writers’ staff meeting
  • Compatibility Data monthly meeting

Friday

  • #mdndev weekly review meeting
  • Web API documentation meeting; only myself, Jean-Yves, and Florian attended but it was still a viable conversation.

A good, productive week, even if it didn’t involve the stuff I expected to do. That may be my motto: I did a lot of things I didn’t expect to do.

Sean Bolton: Mozilla CBT Build Principles

Making implicit information explicit allows us to grow. We are able to recognize and add to something that works well, while focusing less on what doesn’t work well. Being explicit allows us to talk about something we do and/or experience – it allows this information to be shared and understood by others. When we focus on value and impact, we must be explicit in order to understand what is happening.

During my work on the Community Building Team (CBT) at Mozilla, I have been exposed to several themes of how the team works when success happens.  Intrinsically, these are the agreed upon ways by which we do our work. Extrinsically, these are the principles by which we do our work.

I cannot claim to be the single voice for these principles on our team – that would be not Mozilla-like. However, these are things I have been exposed to by working with and reading about the work of all members of the team.

  1. Build Understanding – Demonstrate competence. Seek first to understand. Every engagement is different. We care about people and doing the right thing for them. In order to best help them, we are curious.
  2. Build Connections – Be a catalyst for connection. Our team has a broad reach in the organization. Sometimes the best way we can build is by connecting what is already there.
  3. Build Clarity – This is important when bringing more people into a project. We seek to navigate through the confusion to create clarity for us, our partners and the community.
  4. Build Trust – This is about having someone’s back. It’s important that the people we work with know that we are in this with them, together.
  5. Build Pilots – Our work is not a one size fits all. We care about the best solution so we test our assumptions to see what works and build from there.
  6. Build Win-Win – Focus on mutual benefit. We engage in win-win partnerships because our success is dependent on others. More people can only sustainably come into a project when it’s mutually beneficial. We want to make our partners look good.

Having these principles allows other people and teams to understand how the CBT works and what things are valued when doing that work. It allows members of the team to have a toolkit to reference when entering into a new engagement and builds a level of consistency in how we interact – creating clear expectations for others. All of this leads to the sustainable success of the CBT.

I’ve placed these into a nice PDF format below.

CBT principles

 


Benoit Girard: B2G Performance Polish: Unlock Animation (Part 1)

I’ve decided to start a blog series documenting my workflow for performance investigation. Let me know if you find this useful and I’ll try to make this a regular thing. I’ll update this blog to track the progress made by myself, and anyone who wants to jump in and help.

I wanted to start with the b2g unlock animation. The animation is O.K. but not great, and it's core to the phone experience. I can notice a delay from the touch up to the start of the animation. I can notice that the animation isn't smooth. Where do we go from here? First we need to quantify how things stand.

Measuring The Starting Point

The first thing is to grab a profile. From the profile we can extract a lot of information (we will look at it again in future parts). I run the following command:

./profile.sh start -p b2g -t GeckoMain,Compositor
*unlock the screen, wait until the end of the animation*
./profile.sh capture
*open profile_captured.sym in http://people.mozilla.org/~bgirard/cleopatra/*

This results in the following profile. I recommend that you open it and follow along. The lock animation starts from t=7673 and runs until 8656. That’s about 1 second. We can also note that we don’t see any CPU idle time so we’re working really hard during the unlock animation. Things aren’t looking great from a first glance.

I said that there was a long delay at the start of the animation. We can see a large transaction at the start near t=7673. The first composition completes at t=7873. That means that our unlock delay is about 200ms.

Now let’s look at how the frame rate is for this animation. In the profile open the ‘Frames’ tab. You should see this (minus my overlay):

Lockscreen Frame Uniformity

Alright so our starting point is:

Unlock delay: 200ms

Frame Uniformity: ~25 FPS, poor uniformity

Next step

In part 2 we’re going to discuss the ideal implementation for a lock screen. This is useful because we established a starting point in part 1, part 2 will establish a destination.

 


Gregory Szorc: Reproducing Mozilla's Mercurial Server

One of my first tasks in my new role as a Developer Productivity Engineer is to help make Mozilla's Mercurial server better. Many of the awesome things we have planned rely on features in newer versions of Mercurial. It's therefore important for us to upgrade our Mercurial server to a modern version (we are currently running 2.5.4) and to keep our Mercurial server upgraded as time passes.

There are a few reasons why we haven't historically upgraded our Mercurial server. First, as anyone who has maintained high-availability systems will tell you, there is the attitude of if it isn't broken, don't fix it. In other words, Mercurial 2.5.4 is working fine, so why mess with a good thing. This was all fine and dandy - until Mercurial started falling over in the last few weeks.

But the blocker towards upgrading that I want to talk about today is systems verification. There has been extreme caution around upgrading Mercurial at Mozilla because it is a critical piece of Mozilla's infrastructure and if the upgrade were to not go well, the outage would be disastrous for developer productivity and could even jeopardize an emergency Firefox release.

As much as I'd like to say that a modern version of Mercurial on the server would be a drop-in replacement (Mercurial has a great commitment to backwards compatibility and has loose coupling between clients and servers such that upgrading servers should not impact clients), there is always a risk that something will change. And that risk is compounded by the amount of custom code we have running on our server.

The way you protect against unexpected changes is testing. In the ideal world, you have a robust test suite that you run against a staging instance of a service to validate that any changes have no impact. In the absence of testing, you are left with fear, uncertainty, and doubt. FUD is an especially horrible philosophy when it comes to managing servers.

Unfortunately, we don't really have a great testing infrastructure for Mozilla's Mercurial server. And I want to change that.

Reproducing the Server Environment

When writing tests, it is important for the thing being tested to be as similar as possible to the real thing. This is why so many people have an aversion to mocking: every time you alter the test environment, you run the risk that those differences from reality will mask changes seen in the real environment.

So, it makes sense that a good first goal for creating a test suite against our Mercurial server should be to reproduce the production server and environment as closely as possible.

I'm currently working on a Vagrant environment that attempts to reproduce the official environment as closely as possible. It starts one virtual machine for the SSH/master server. It starts a separate virtual machine for the hgweb/slave servers. The virtual machines are booting CentOS. This is different than production, where we run RHEL. But they are similar enough (and can share the same packages) that the differences shouldn't matter too much, at least for now.

Using Puppet

In production, Mozilla is using Puppet to manage the Mercurial servers. Unfortunately, the actual Puppet configs that Mozilla is running are behind a firewall, mainly for security reasons. This is potentially a huge setback for my reproducibility effort, as I'd like to have my virtual machines use the exact same Puppet configs as what's used in production so the environments match as closely as possible. This would also save me a lot of work from having to reinvent the wheel.

Fortunately, Ben Kero has extracted the Mercurial-relevant Puppet config files into a standalone repository. Apparently that repository gets rolled into the production Puppet configs periodically. So, my virtual machines and production can share the same Mercurial Puppet files. Nice!

It wasn't long after starting to use the standalone Puppet configs that I realized this would be a rabbit hole. This first manifested itself as the standalone Puppet code referencing things that only exist in the hidden Mozilla Puppet files. So the liberation was only partially successful. Sad panda.

So, I'm now in the process of creating a fake Mozilla Puppet environment that mimics the base Mozilla environment (from the closed repo) and am modifying the shared Puppet Mercurial code to work with both versions. This is a royal pain, but it needs to be done if we want to reproduce production and maintain peace of mind that test results reflect reality.

Because reproducing runtime environments is important for reproducing and solving bugs and for testing, I call on the maintainers of Mozilla's closed Puppet repository to liberate it from behind its firewall. I'd like to see a public Puppet configuration tree available for all to use so that anyone anywhere can reproduce the state of a server or service operated by Mozilla to within reasonable approximation. Had this already been done, it would have saved me hours of work. As it stands, I'm reverse engineering systems and trying to cobble together understanding of how the Mozilla Puppet configs work and what parts of them can safely be ignored to reproduce an approximate testing environment.

Along that vein, I finally got access to Mozilla's internal Puppet repository. This took a few meetings, and apparently a lot of backroom chatter was generated - "developers don't normally get access, oh my!" All I wanted was to see how systems are configured so I can help improve them. Instead, getting access felt like pulling teeth. This feels like a major roadblock towards productivity, reproducibility, and testing.

Facebook gives its developers access to most production machines and trusts them to not be stupid. I know we (Mozilla) like to hold ourselves to a high standard of security and privacy. But not giving developers access to the configurations for the systems their code runs on feels like a very silly policy. I hope Mozilla invests in opening up this important code and data, if not to the world, at least to its trusted employees.

Anyway, hopefully I'll soon have a Vagrant environment that allows people to build a standalone instance of Mozilla's Mercurial server. And once that's in place, I can start writing tests that basic services and workflows (including repository syncing) work as expected. Stay tuned.

Christian Heilmann: Coldfrontconf is one to watch

I’ve said it before and I stick by it: conferences stand and fall with the enthusiasm of the organisers. And it is a joy for someone like me who does spend a lot of time at conferences to see a new one be a massive success from the get-go.

Yesterday was the Coldfront conference in Copenhagen, Denmark. A one day conference organised by Kenneth Auchenberg, @Danielovich (and of course a well-chosen team of people). It was very rewarding to work with him to give the closing keynote of the inaugural edition of this event.

The slides of my closing keynotes are available on Slideshare.

And, amazingly enough, the video is out, too:


Chris at Coldfrontconf
(Notice the fan behind me giving me that wind-swept look that so fitted my physical state going directly from the plane to the venue)

I am sad that because of other commitments I had to miss the first talks, but here are my main impressions of the event:

  • I love the pragmatism of it – one track, good break times, a very simple and straight-forward web site and no push to “download the app of this event”.
  • The location – a program cinema – had great seating, working WiFi (with a few hiccups, but the hotel next door also had available WiFi that worked in the first rows) and very adequate facilities.
  • The projector and audio set up was great and the switch from speaker to speaker worked flawlessly.
  • All talks were streamed on the web
  • Even a last minute speaker cancellation didn’t quite disturb the event (thanks for the reminder Steen H. Rasmussen)
  • Instead of keeping people perched up inside, the breaks had coffee available for self-service and the food and branded ice cream was served outside the building in the street. This was also the spot for the beers and cupcakes after the event and the final venue was just down the road.
  • The after party was in a beer place that has over 40 beers on tap, and the open bar lasted well till after midnight. Nobody got blindly drunk or misbehaved – it actually felt more like a beer tasting experience than a drink-up. There was a lot of seating and no loud music to discourage or hinder communication at the after party.
  • All the videos of the talks were already available on the day or the day after. I managed to see myself whilst my head was still hurting from the party (and my lack of sleep) the night before.
  • Elisabeth Irgens did a great job doing live sketch notes of each talk and uploading them immediately to Twitter.
  • The audience was very well behaved and it was a very inviting and inspiring environment to share information in. Good mix of people with various backgrounds.
  • Whilst there was a bit of sponsorship being shown on the big screen and there were sponsor booths in the foyer all of it was very low-key and appeared utterly in context. No sales weasels or booth babes there. The sponsors sent their geeks to talk to geeks.
  • I felt very well looked after – the organisers paid my flights and hotel and the communication with the speakers as to where to be when was only a handful of emails. Things just fell in place and there was no hesitance to make sure everybody gets there in time.
  • It is very worthwhile to watch the recordings of the talks. All of them were very high quality. Personally, I was most impressed with Guillermo Rauch’s “How to build the modern, optimistic and reactive user interface we all want”.

All in all, this was a conference that was as pragmatic and spot-on as Kenneth is when you talk to him. It felt very good and I was very much reminded of the first Fronteers event. This is one to watch, let’s see what happens next.

Gregory Szorc: New Job Role

As of today, I have a new role and title at Mozilla: Developer Productivity Engineer. I'll be reporting to Laura Thomson as a member of the Developer Services team.

I have an immediate goal to make our version control work better. This includes making Try scale and helping out with the deployment of ReviewBoard. After that, I'm not entirely sure. But Autoland and Firefox build system improvements have been discussed.

I'm really excited to be in this new role. If someone were to give me a clean slate and tell me to design my own job role, I think I'd answer with something very similar to the role I am now in. I am passionate about tools and enabling people to become more productive. I have little doubt I'll thrive in this new role.

Christian Heilmann: Firefox OS at MobileTechCon Berlin 2014

Two days ago I was in Berlin at the MobileTechCon where, in addition to the opening keynote on the second day, I also gave a talk on the current state of Firefox OS.

On business in Berlin

Since the audience wanted to have the talk in German, I switched at short notice and then delivered it in something resembling German.

Here are the slides and the screencasts. The first one covers just the talk; the second also includes the questions and answers, with a few examples of how you can, for instance, use the Developer Tools in Firefox, what together.js is and what it is good for, and a few more treats of the open web.

All of this is very much unedited and was changed more or less on the spot, so there may be some naughty words in there. The slides are available on Slideshare.

The half-hour talk can be seen here as a screencast:

If you want to hear the whole talk with questions and answers, the full hour is available here as a screencast.