
Air Mozilla: January Cantina Speaker - Nico Sell, CEO r00tz and Wickr

January Cantina Speaker - Nico Sell, CEO r00tz and Wickr Nico Sell is a professional artist, athlete and entrepreneur based in California. She is cofounder and CEO of r00tz and Wickr. r00tz is a nonprofit...

Allison Naaktgeboren: Applying Privacy Series: The 4th meeting

Engineer goes off and adjusts the plan. At a high level, it looks like this:

    Client side code

        Set up

            snippet lets users know there is a new option in preferences

            if the app locale does not map to a supported language, the pref panel is greyed out with a message that their locale is not supported by the EU service

            otherwise, user clicks ToS, privacy policy checkbox, confirms language

            app contacts server

            if server not available, throw up offline error

            if server available, upload device id, language, url list

            server sends back the guid assigned to this device id

            notify user setup is complete

            enable upload service

        When new tab is opened or refreshed

            send msg to server with guid + url

        Turning off feature

            prompt user ‘are you sure’ & confirm

            notify server of deletion

            delete local translated pages

Server side code

    Set up

        poked by client,

        generate guid

        insert into high risk table: guid+device id

        adds rows for tabs list (med table)

        adds rows for the urls (low table)

    Periodic background translation job:

        finds urls of rows where the translated blob is missing

        for each url, submits untranslated blob to EU’s service

        sticks the resulting translated text blob back into the table

    Periodic background deletion jobs:

        finds rows older than 2 days and evicts them in low risk table & medium risk tables

        find rows in sensitive table older than 90 days and evict, using secure destruction

        user triggered deletion

            delete from sensitive table. secure destruction

            delete from medium table

    Database layout

        sensitive data/high risk table columns: user guid, device id

            maps guid to device id

        medium risk table columns: user guid, url

            maps guid to tabs list

        low risk table columns: url, timestamp, language, blob of translated text

            maps urls to translated text
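
To make the table layout concrete, here is a minimal, hypothetical sketch of the three tables and the user-triggered deletion path using Python and SQLite. This is not the actual implementation; the table and column names are illustrative only:

import sqlite3

db = sqlite3.connect("translation.db")
db.executescript("""
CREATE TABLE IF NOT EXISTS high_risk   (guid TEXT PRIMARY KEY, device_id TEXT NOT NULL);
CREATE TABLE IF NOT EXISTS medium_risk (guid TEXT NOT NULL, url TEXT NOT NULL);
CREATE TABLE IF NOT EXISTS low_risk    (url TEXT PRIMARY KEY, timestamp INTEGER NOT NULL,
                                        language TEXT NOT NULL, translated_blob BLOB);
""")

def user_triggered_delete(guid):
    # Mirrors the plan above: remove the guid/device id mapping from the sensitive
    # table (secure destruction would happen at the storage layer), then drop the
    # user's tab list from the medium risk table. The low risk table holds no
    # per-user data and is evicted by the periodic 2-day job instead.
    db.execute("DELETE FROM high_risk WHERE guid = ?", (guid,))
    db.execute("DELETE FROM medium_risk WHERE guid = ?", (guid,))
    db.commit()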

The 4th Meeting

Engineer: Hey, so what do you guys think of the new plan? The feedback on the mailing list was pretty positive. People seem pretty excited about the upcoming feature.

Engineering Manager: indeed.

DBA: much better.

Operations Engineer: I agree. I’ll see about getting you a server in stage to start testing the new plan on.

Engineer: cool, thanks!

Engineer: Privacy Rep, one of the questions that came up on the mailing list was about research access to the data. Some phd students at the Sorbonne want to study the language data.

Privacy Rep: Did they say which bits of data they might be interested in?

Engineer: the most popular pages and languages they were translated into. I think it would really be just the low risk table to start.

Privacy Rep: I think that’d be fine, there’s no personal data in that table. Make sure we send them the basic disclosure & good behavior form.

Engineering Manager: A question also came to me about exporting data. I don’t think we have anything like that right now.

Engineer: No, we don’t.

Privacy Rep: well, can we slate that for after we get the 1.0 landed?

Engineering Manager: sounds like a good thing to work on while it’s baking on the alpha-beta channels.

Who brought up user data safety & privacy concerns in this conversation?

Engineer, Engineering Manager, & Privacy Rep.

Michelle Thorne: Clubs: First test kicks off!


A few weeks ago we posted an overview of a new initiative with Mozilla, “Webmaker Clubs.” While details (including the name!) are still pending, we’ve made great progress already on the program and are kicking off our first local tests this week.

Joined by over 40 organizations and individuals around the world, we’ll test the first section of our web literacy basics curriculum, based on our community-created Web Literacy Map.

We anticipate having a community-created and tested Web Literacy Basics curriculum ready by the end of March, consisting of three sections:

  • Reading the Web
  • Writing the Web
  • Participating on the Web

In addition, there will be extra guides and goodies packaged with the curriculum to help people start their own local clubs or to inject this kind of web literacy learning into their existing programs. These will be bolstered by an online “club house” and leadership development for club mentors.

If you’re interested in trying out the club curriculum or just learning more, drop us a line on this discussion thread.


Testing 1. Reading the Web.

The first section consists of two 45min. activities, designed by the ever-pioneering MOUSE, to introduce learners to “Reading the Web.”

We selected these activities because we’re looking for lessons that:

  • are production-centered and about learning socially.
  • are readily adapted to a local context.
  • work as standalone lessons or strung together for a larger arc.
  • require little or no prior web literacy skills for the mentor.
  • can be done offline, without internet or computers, or, at the very most, with only a modern browser.


The testing process

Testers are looking at the effectiveness and compatibility of the activities. In particular, we’re interested in how people adapt the curriculum to their learners. One example could be swapping out the mythical creature, The Kraken, for your local variety, like Loch Ness, Knecht Ruprecht, etc.

We’d love to see greater remixes and alternatives to the activities themselves, hopefully uncovering more compelling and context-sensitive ways to teach credibility and web mechanics.

And most importantly, we’re looking at whether the activities meet our learning objectives. They should not only be fun and engaging, but instill real skill and a deeper understanding of the web.

The testing process invites our first cohort to:

  1. complete a pre-activity questionnaire
  2. do the activity first on their own
  3. do the activity with their learners
  4. complete a post-activity questionnaire
  5. share a reflection

where the questionnaires and reflection will unpack how the activities played out with learners and whether they taught what we think they do.


Co-creating 2. Writing the Web

In parallel to testing the first section, we’re co-developing the second section with our fellow club creators. Here we hope to up-level two existing activities from the community and to prepare them for testing in the next round, starting Feb. 10.

If you have ideas for how to teach “Writing on the Web”, particularly the competencies of remix and composing, chime in!

There are some great activities identified so far, including the Gendered Advertising Remixer and Life in the Slowww Lane.


Getting involved

There are also other groups emerging to hack on other aspects of clubs. These include:

  • online platform
  • leadership and professional development
  • localization

If you’re interested in any of the above topics, or would like to test and co-create the curriculum, please get in touch! We’d love to have your help to #teachtheweb.

Photos by Mozilla Europe available under a Creative Commons Attribution-ShareAlike 2.0 license.

Dave Townsend: hgchanges is back up

The offending changeset that broke hgchanges yesterday turns out to be a merge from an ancient branch to current tip. That makes the diff insanely huge, which is why things like hgweb were tripping over it. Kwierso pointed out that just ignoring those changesets would solve the problem. It’s not ideal, but since in this case they aren’t useful changesets I’ve gone ahead and done that, and so hgchanges is now updating again.

Air Mozilla: Reps weekly

Reps weekly Weekly Mozilla Reps call

Gervase Markham: Samuel David Markham

I am overjoyed to announce the birth of our third son, Samuel David Markham, at 9.01am on the morning of 28th January 2015, weighing 8lb 0oz. Mother, father, baby and older brothers are all well :-)

He is called Samuel after:

  • The prophet and leader Samuel, who was called into God’s service at an early age, as recorded in the book of 1 Samuel;
  • Samuel Rutherford (1600 – 1661), a Scottish minister and representative at the Westminster Assembly, whose book Lex, Rex contains arguments foundational to a Christian understanding of good government;
  • Samuel Davies (1723 – 1761), American non-conformist preacher, evangelist and hymn writer, who showed we are all equal in God’s sight by welcoming black and white, slave and free to the same Communion table;
  • Samuel Crowther (1809 – 1891), the first black Anglican bishop in Africa, who persevered against unjust opposition and translated the Bible into Yoruba.

He is called David primarily after the King David in the Bible, who was “a man after God’s own heart” (a fact recorded in the book of 1 Samuel, 13:14).

Doug Belshaw: Is grunt 'n' click getting in the way of web literacy?

I’ve just been listening to a fascinating episode of the 99% Invisible podcast. It’s Episode 149: Of Mice and Men, and its subject is my much-more-talented namesake Doug Engelbart.

Cat and cursor

The episode covered The Mother of All Demos, after which Steve Jobs took some of Engelbart’s ideas and ran with them. However, instead of the three-buttoned mouse and ‘keyset’ originally envisioned, we got the single-button Apple mouse. The MacBook Pro I’m typing on retains this legacy: if I want to 'right-click’ I have to hold down the Option key.

Christina Engelbart explained that her father believed simplicity only gets you so far. We may be in an age where toddlers can intuitively use iPads and smartphones but a relentless focus on this has led to a ceiling on the average user’s technical skills. The analogy used was the difference between a tricycle and a bicycle. Anyone can immediately get on a tricycle and use it. That’s great. But sometimes, and especially when you’re going up a hill, you need gears and the features that bicycles afford.

Essentially what we’ve got now are not faster horses but pimped-out tricycles: shiny terminals to web-based services. Cory Doctorow calls this the civil war over general purpose computing:

We don’t know how to make a computer that can run all the programs we can compile except for whichever one pisses off a regulator, or disrupts a business model, or abets a criminal.

The closest approximation we have for such a device is a computer with spyware on it— a computer that, if you do the wrong thing, can intercede and say, “I can’t let you do that, Dave.”

This is paternalism at its worst. The experience of a whole generation of young people is digital surveillance baked into the tools they’re encouraged to use to learn and be 'social’. Meanwhile, their physical boundaries are more constrained than ever before 'for their own safety’.

Extremely simple user interfaces are great, but not when they’re the whole story. I don’t want a device that’s hard to use. But nor do I want one that’s so locked down that I have to bend my hopes, wishes and dreams to the will of a company providing software with shareholders. To extend the tricycle analogy, we grow up and stop using Fisher-Price toys at some point. We learn to use the real deal. This may take time and effort to learn, but the payoff is empowerment, human flourishing, and innovation.

I’ve had this experience learning Git recently. Ostensibly it’s a method of version control for software but the whole experience allows you to think in different ways about what’s possible in the (digital) world. While not quite grunt 'n’ click, GitHub makes this easier for those new to the whole process. It’s got a relatively low floor but a pretty high ceiling. That, I think, is what we should be aiming for.


Questions? Comments? Tweet me: @dajbelshaw or email me: doug@mozillafoundation.org

Christian Heilmann: On towards my next challenge…

One of the things I like where I live in London is my doctor. He is this crazy German who lived for 30 years in Australia before coming to England. During my checkup he speaks a mixture of German, Yiddish, Latin and English and has a dark sense of humour.

He also does one thing that I have massive respect for: he never just treats my symptoms and gives me quick-acting remedies. Instead he digs and analyses until he has found the cause of them and treats that instead. Often this means I suffer longer from ill effects, but it also means that I leave with the knowledge of what caused the problem. I don’t just get products pushed in my face to make things easier and promises of a quick recovery. Instead, I get pulled in to become part of the healing process and to own my own health in the long run. This is a bad business decision for him, as giving me more short-acting medication would mean I come back more often. What it means to me though is that I have massive respect for him, as he has principles.

Professor X to Magneto: If you know you can deflect it, you are not really challenging yourself

As you know if you read this blog, I left Mozilla and I am looking for a new job. As you can guess, I didn’t have to look long and hard to find one. We are superbly, almost perversely lucky as people in IT right now. We can get jobs easily compared to other experts and markets.

Hard to replace: Mozilla

Leaving Mozilla was ridiculously hard for me. I came to Mozilla to stay. I loved meeting people during my interviews who’ve been there for more than eight years – an eternity in our market. I loved the products, I am still madly in love with the manifesto and the work that is going on. A lot changed in the company though and the people I came for left one by one and got replaced by people with other ideals and ideas. This can be a good thing. Maybe this is the change the company needs.
I didn’t want to give up on these ideals. I didn’t want to take a different job where I have to promote a product by all means. I didn’t want to be in a place that does a lot of research, builds impressive looking tools and solutions but also discards them when they aren’t an immediate success. I wanted to find a place where I can serve the web and make a difference in the lives of those who build things for the web. In day to day products. Not in a monolithic product that tries to be the web.

Boredom on the bleeding edge

My presentations in the last year had a recurring theme. We cannot improve the web if we don’t make it easy and reliable for everybody building things on it to take part in our innovations. Only a tiny fraction of the people working on the web can use alpha or beta solutions. Even fewer can rely on setting switches in browsers or use prefixed functionality that might change on a whim. Many of us have to deliver products that are much more strict. Products that have to adhere to nonsensical legal requirements.

These people have a busy job, and they want to make it work. Often they have to cut corners. In many other cases they rely on massive libraries and frameworks that promise magical solutions. This leads to products that are in a working state, and not in an enjoyable state.

To make the web a continuous success, we need to clean it up. No innovation and no framework will replace the web; its very nature makes that impossible. What we need to do now is bring our bleeding-edge knowledge of what well-performing and maintainable code means down the line to those who do not attend every conference or spend days on Hacker News and Stack Overflow.

And the winner is…

This is why I answered a job offer I got and I will start working on the 2nd of February for a new company. The company is *drumroll*:

Microsoft. Yes, the bane of my existence as a standards protagonist during the dark days of the first browser wars. Just like my doctor, I am going to the source of a lot of our annoyances and will do my best to change the causes instead of fighting the symptoms.

My new title is “Senior Program Manager” in the Developer experience and evangelism org of Microsoft. My focus is developer outreach. I will do the same things I did at Mozilla: JavaScript, Open Web Technologies and cross-browser support.

Professor X to Magneto: you know, I've always found the real focus to be in between rage and serenity

Frankly, I am tired of people using “But what about Internet Explorer” as an excuse. An excuse to not embrace or even look into newer, sensible and necessary technology. At the same time I am bored of people praising experimental technology in “modern browsers” as a must. Best practices have to prove themselves in real products with real users. What’s great for building Facebook or GMail doesn’t automatically apply to any product on the web. If we want a standards-based web to survive and be the go-to solution for new developers we need to change. We can not only invent, we also need to clean up and move outdated technology forward. Loving and supporting the web should be unconditional. Warts and all.

A larger audience who needs change…

I’ve been asking for more outreach from the web people “in the know” to enterprise users. I talked about a lack of compassion for people who have to build “boring” web solutions. Now I am taking a step and will do my best to try to crack that problem. I want to have a beautiful web in all products I use, not only new ones. I’ve used enough terrible ticketing systems and airline web sites. It is time to help brush up the web people have to use rather than want to use.

This is one thing many get wrong. People don’t use Chrome over Firefox or other browsers because of technology features. These are more or less on par with one another. They choose Chrome as it gives them a better experience with Google’s services. Browsers in 2015 are not only about what they can do for developers. It is more important how smooth they are for end users and how well they interact with the OS.

I’ve been annoyed for quite a while about the Mac/Chrome centric focus of the web development community. Yes, I use them both and I am as much to blame as the next person. Fact is though, that there are millions of users of Windows out there. There are a lot of developers who use Windows, too. The reason is that that’s what their company provides them with and that is what their clients use. It is arrogant and elitist to say we change the web and make the lives of every developer better when our tools are only available for a certain platform. That’s not the world I see when I travel outside London or the Silicon Valley.

We’re in a time of change as Alex Wilhelm put it on Twitter:

Microsoft is kinda cool again, Apple is boring, Google is going after Yahoo on Firefox, and calculator watches are back. Wtf is going on.

What made me choose

In addition to me pushing myself to fix one of the adoption problems of new web technologies from within, there were a few more factors that made this decision the right one for me:

  • The people – my manager is my good friend and former Ajaxian co-writer Rey Bango. You might remember us from the FoxIE HTML5 training video series. My first colleague in this team is also someone who I worked with on several open projects and have massive respect for.
  • The flexibility – I am a remote worker. I work from London, Stockholm, or wherever I need to be. This is becoming a rare occasion and many offers I got started with “you need to move to the Silicon Valley”. Nope, I work on the web. We all could. I have my own travel budget. I propose where I am going to present. And I will be in charge of defining, delivering and measuring the developer outreach program.
  • The respect – every person who interviewed me was on time, prepared and had real, job related problems for me to solve. There was no “let me impress you with my knowledge and ask you a thing you don’t know”. There was no “Oh, I forgot I needed to interview you right now” and there was no confusion about who I am speaking to and about what. I loved this, and in comparison to other offers it was refreshing. We all are open about our actions and achievements as developers. If you interview someone you have no clue about then you didn’t do your research as an interviewer. I interviewed people as much as they interviewed me and the answers I got were enlightening. There were no exaggerated promises, and they scrutinised everything I said and discussed it. When I made a mistake, I got a question about it instead of letting me show off or talk myself into a corner.
  • The organisation – there is no question about how much I earn, what the career prospects are, how my travels and expenses will work and what benefits I get. These things are products in use and not a Wiki page where you could contact people if you want to know more.
  • The challenge – the Windows 10 announcement was interesting. Microsoft is jumping over its own shadow with the recent changes, and I am intrigued about making this work.

I know there might be a lot of questions, so feel free to contact me if you have any concerns, or if you want to congratulate me.

FAQ:

  • Does this mean you will not be participating in open communication channels and open source any longer?

    On the contrary. I am hired to help Microsoft understand the open source world and play a bigger part in it. I will have to keep some secrets until a certain time. That will also be a recurring thing. But that is much the same for anyone working at Google, Samsung, Apple or any other company. Even Mozilla has secrets now; this is just how some markets work. I will keep writing for other channels, I will write MDN docs, be active on GitHub and applaud in public when other people do great work. I will also be available to promote your work. All I publish will be open source or Creative Commons. This blog will not be a Microsoft blog. This is still me and will always remain me.

  • Will you now move to Windows with all your computing needs?

    No, like every other person who defends open technology and freedom I will keep using my Macintosh. (There might be sarcasm in this sentence, and a bit of guilt). I will also keep my Android and Firefox OS phones. I will of course get Windows based machinery as I am working on making those better for the web.

  • Will you now help me fix all my Internet Explorer problems when I complain loud enough on Twitter?

    No.

  • What does this mean to your speaking engagements and other public work?

    Not much. I keep picking where I think my work is needed the most and my terms and conditions stay the same. I will keep a strict separation of sponsoring events and presenting there (no pay to play). I am not sure how sponsorship requests work, but I will find out soon and forward requests to the right people.

  • What about your evangelism coaching work like the Mozilla Evangelism Reps and various other groups on Facebook and LinkedIn?

    I will keep my work up there. Except that the Evangelism Reps should have a Mozilla owner. I am not sure what should happen to this group given the changes in the company structure. I’ve tried for years to keep this independent of me and running smoothly without my guidance and interference. Let’s hope it works out.

  • Will you tell us now to switch to Internet Explorer / Spartan?

    No. I will keep telling you to support standards and write code that is independent of browser and environment. This is not about peddling a product. This is about helping Microsoft to do the right thing and giving them a voice that has a good track record in that regard.

  • Can we now get endless free Windows hardware from you to test on?

    No. Most likely not. Again, though, sensible requests I could probably forward to the right people.

So that’s that. Now let’s fix the web from within and move a bulk of developers forward instead of preaching from an ivory tower of “here is how things should be”.

Dave Townsend: hgchanges is down for now

My handy tool for tracking changes to directories in the mozilla mercurial repositories is going to be broken for a little while. Unfortunately a particular changeset seems to be breaking things in ways I don’t have the time to fix right now. Specifically, trying to download the raw patch for the changeset is causing hgweb to time out. Short of finding time to debug and fix the problem, my only solution is to wait until that patch is old enough that the tool no longer attempts to index it. That could take a week or so.

Obviously I’ll happily accept patches to fix this problem sooner.

Karl Dubost: Hired in Taipei? Cool Intros.

Mozilla has Weekly Updates, a public meeting, giving news about projects and events. It's also the time for introducing the newly hired employees across the organization. The meeting is also available as a video.

The Taipei office has the same issue as those of us working from Japan. The meeting is scheduled at 11am San Francisco time. It means 4am in Japan and 3am in Taiwan. Luckily enough, there is a recording. So when the Mozilla Taipei office in Taiwan needs to introduce a new employee, instead of being awake at 3am, they record a video of their new hires. Go to 40m55s into this week's update. For example, Ceci Chang, a Firefox OS Visual Designer, is explaining how to grow more mint.

Ceci Chang in front of Mint

Introductions of newly hired employees to the rest of our distributed organization do not have to be boring. The Taipei office is making it a very « feel good » thing. Thanks.

Otsukare.

Mozilla Fundraising: Year-End Campaign in Review: What We Learned

Ending a campaign of this magnitude is like stepping off a roller coaster. I’m mildly disoriented, exhausted, but also ecstatic that we far exceeded the goals we set. In retrospect, we learned a lot. What we do next with what … Continue reading

Air Mozilla: Privacy Lab - a meetup for privacy minded people in San Francisco

Privacy Lab - a meetup for privacy minded people in San Francisco Brings together privacy professionals and others interested in privacy at for-profits, non-profits, and NGOs in an effort to contribute to the state of the ecosystem...

Ben Kero: Working around flaky internet connections

In many parts of the world a reliable internet connection is hard to come by. Ethernet is essentially non-existent and WiFi is king. As any digital nomad can testify, this is ‘good enough’ for doing productive work.

Unfortunately not all WiFi connections work perfectly all the time. They’re fraught with unexpected problems including dropping out entirely, abruptly killing connections, and running into connection limits.

Thankfully with a little knowledge it is possible to regain productivity that would otherwise be lost to a flaky internet connection. These techniques are applicable to coffee shops, hotels, and other places with semi-public WiFi.

Always have a backup connection

Depending on a WiFi connection as your sole source of connectivity is a losing proposition. If all you’re doing are optional tasks it can work, but critical tasks demand a backup source should the primary fail.

This usually takes the shape of a cellular data connection. I USB or WiFi tether my laptop to my cell phone. This is straightforward in your home country, where you have a reliable data connection already. If working from another country it is advisable to get a local prepaid SIM card with data plan. These are usually inexpensive and never require a contract. Almost all Android devices support this behavior already.

If you’re too lazy to get a local SIM card, or are not in a country long enough to benefit from one (I usually use 1 full week as the cutoff), T-Mobile US’s post-paid plans offer roaming data in other countries. This is only EDGE (2.5G) connectivity, but is still entirely usable if you’re careful and patient with the connection.

Reducing background data

Some of the major applications that you’re using do updates in the background, including Firefox and Chrome. They can detect that your computer is connected to an internet connection, and will attempt to do updates anytime. Obviously if you’re using a connection with limited bandwidth, this can ruin the experience for everybody (including yourself).

You can disable this feature in Firefox by navigating to Edit -> Preferences -> Advanced -> Update, and switching Firefox Updates to Never check for updates.
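
If you prefer to flip the preference directly, the same can be done from about:config or a user.js file. This is a sketch assuming the pre-Quantum app.update.* preference names; verify them in about:config on your version:

// Disable automatic update checks and downloads (assumed preference names).
user_pref("app.update.enabled", false);
user_pref("app.update.auto", false);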

Your operating system might do this as well, so it is worth investigating so you can disable it.

Mosh: The Mobile Shell

If you’re a command-line junkie or a keyboard cowboy, you’ll usually spend a lot of time SSHing into other servers. Mosh is an application like SSH that is specifically designed for unreliable connections. It allows some conveniences like resume-after-sleep even if your connection changes, and local echo so that you can see/revise your input even if the other side is non-responsive. There are some known security concerns with using Mosh, so I’ll leave it as an exercise to the reader if they feel safe using it.

It should be noted that with proper configuration, OpenSSH can also gain some of this resiliency.
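
For example, a few keepalive and connection-reuse settings in ~/.ssh/config help sessions survive a flaky link (a sketch; tune the numbers to taste):

Host *
    # Send an application-level keepalive every 30 seconds...
    ServerAliveInterval 30
    # ...and give up only after 6 missed replies.
    ServerAliveCountMax 6
    # Reuse a single TCP connection for repeated sessions to the same host.
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 10m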

Tunneling

Often the small wireless routers you’re connecting to are not configured to handle the load of several people. One symptom of this is the TCP connection limit. The result of the router hitting this limit is that you will no longer be able to establish new TCP connections until one is closed. The way around this is to use a tunnel.

The simplest method to do this is a SOCKS proxy. A SOCKS proxy is a small piece of software that runs on your laptop. Its purpose is to tunnel new connections through an existing connection. The way I use it is by establishing a connection to my colocated server in Portland, OR, then tunneling all my connections through that. The server is easy to set up.

The simplest way to do this is with SSH. To use it, simply open up a terminal and type the following command (replacing my host name with your own)

$ ssh -v -N -D1080 bke.ro

This will open a tunnel between your laptop and the remote host. You’re not done yet though. The next part is telling your software to use the tunnel. In Firefox this can be done in Edit -> Preferences -> Advanced -> Network -> Connection Settings -> Manual Proxy Configuration -> SOCKS Host. You’ll also want to check “Remote DNS” below. You can test this is working by visiting a web site such as whatismyip.com.

Command-line applications can use a SOCKS proxy by using the program called tsocks. Tsocks will transparently tunnel the connections of your command-line applications through your proxy. It is invoked like this:

$ tsocks curl http://bke.ro/

Some other methods of tunneling that have been used successfully include real VPN software such as OpenVPN. There is an entire market of OpenVPN providers available that will give you access to endpoints in many countries. You can also just run this yourself.

An alternative to that is sshuttle. This uses iptables on Linux (and the built-in firewall on OS X) to transparently tunnel connections over an SSH session. All system connections will transparently be routed through it. One cool feature of this approach is that no special software needs to be installed on the remote side. This means that it’s easy to use with several different hosts.
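
Usage is a one-liner; for example (reusing the host name from earlier, and assuming you can SSH into it), this routes all IPv4 traffic through the remote host; you may be prompted for local admin rights so it can set up the firewall rules:

$ sshuttle -r bke.ro 0/0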

Local caching

Some content can be cached and reused without having to hit the Internet. This isn’t perfect, but reducing the amount of network traffic should result in less burden on the network and faster page-load times. There are a couple pieces of software that can help achieve this.

Unbound is a local DNS caching daemon. It runs on your computer and listens for applications to make DNS requests. It then asks the internet for the answer, and caches that. This results in fewer DNS queries hitting the internet, which reduces network burden and theoretically loads pages faster. I’ve been subjecting Unbound to constant daily use for 6 months, and have not attributed a single problem to it. Available in a distro near you.
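
A minimal unbound.conf for this use case might look something like the following (a sketch only, not a complete or hardened configuration); you would then point your resolver (e.g. /etc/resolv.conf) at 127.0.0.1:

server:
    # Only answer queries from this machine.
    interface: 127.0.0.1
    access-control: 127.0.0.0/8 allow
    # Keep answers cached for up to a day and refresh popular names before expiry.
    cache-max-ttl: 86400
    prefetch: yes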

Polipo is a local caching web proxy. This is a small daemon that runs on your computer and transparently caches web content. This can speed up page load times and reduce the amount of network traffic. It has a knob to tune the cache size, and you can empty the cache whenever you want. Again, this should be available in any major package manager.

Ad blocking software

Privoxy is a web proxy that filters out unwanted content, such as tracking cookies, advertisements, social-media iframes, and other “obnoxious internet junk”. It can be used in conjunction with polipo, and there is even a mention in the docs about how to layer them.

SomeoneWhoCares Hosts File is an alternative /etc/hosts file that promises “to make the internet not suck (as much)”. This replaces your /etc/hosts file, which is used before DNS queries are made. This particular /etc/hosts file simply resolves many bad domains to ‘127.0.0.1’ instead of their real address. This blocks many joke sites (goatse, etc) as well as ad servers. I’ve used this for a long time and have never encountered a problem associated with it.

AdBlock Plus might be a Firefox extension you’re familiar with. It is a popular extension that removes ads from web pages, which should save you bandwidth, improve page load speed, and extend battery life. AdBlock Plus is a heavy memory user, so if you’re on a device with limited memory (< 4GB) it might be worth considering an alternate ad blocking extension.

Second browser instance (that doesn’t use any of the aforementioned)

As great as these pieces are, sometimes you’ll encounter a problem. At that point it could be advantageous to have a separate browser instance to access the internet “unadulterated”. This will let you know if the problem is on your side, the remote host, or your connection.

I hope that using these techniques will help you have a better experience while using questionable connections. It’s an ongoing struggle, but the state of connectivity is getting better. Hopefully one day these measures will be unnecessary.

Please leave a comment if you learned something from reading this, or notice anything I missed.

Benoit Girard: Testing a JS WebApp

Test Requirements

I’ve been putting off testing my cleopatra project (https://github.com/bgirard/cleopatra) for a while now because I wanted to take the time to find a solution that would satisfy the following:

  1. The tests can be executed by loading a particular URL.
  2. The tests can be executed headless using a script.
  3. No server side component or proxy is required.
  4. Stretch goal: Continuous integration tests.

After a bit of research I came up with a solution that addressed my requirements. I’m sharing here in case this helps others.

First I found that the easiest way to achieve this is to find a Test Framework to get 1) and a solution for running a headless browser for 2).

Picking a test framework

For the Test Framework I picked QUnit. I didn’t have any strong requirements there so you may want to review your options if you do. With QUnit I load my page in an iframe and inspect the resulting document after performing operations. Here’s an example:

QUnit.test("Select Filter", function(assert) {
  loadCleopatra({
    query: "?report=4c013822c9b91ffdebfbe6b9ef300adec6d5a99f&select=200,400",
    assert: assert,
    testFunc: function(cleopatraObj) {
    },
    profileLoadFunc: function(cleopatraObj) {
    },
    updatedFiltersFunc: function(cleopatraObj) {
      var samples = shownSamples(cleopatraObj);

      // Sample count for one of the two threads in the profile are both 150
      assert.ok(samples === 150, "Loaded profile");
    }
  });
});

Here I just load a profile, and once the document fires an updateFilters event I check that the right number of samples are selected.

You can run the latest cleopatra test here: http://people.mozilla.org/~bgirard/cleopatra/test.html

Picking a browser (test) driver

Now that we have a page that can run our test suite we just need a way to automate the execution. It turns out that PhantomJS, for WebKit, and SlimerJS, for Gecko, provide exactly this. With a small driver script we can load our test.html page and set the process return code based on the result of our test framework, QUnit in this case.

Stretch goal: Continuous integration

If you hooked up the browser driver to run via a simple test.sh script, adding continuous integration should be simple. Thanks to Travis-CI and GitHub it’s easy to set up your test script to run per check-in and to set up notifications.

All you need is to configure Travis-CI to look at your repo and to check in an appropriate .travis.yml config file. Your .travis.yml should configure the environment. PhantomJS is pre-installed and should just work. SlimerJS requires a Firefox binary and a virtual display, so it just requires a few more configuration lines. Here’s the final configuration:

env:
  - SLIMERJSLAUNCHER=$(which firefox) DISPLAY=:99.0 PATH=$TRAVIS_BUILD_DIR/slimerjs:$PATH
addons:
  firefox: "33.1"
before_script:
  - "sh -e /etc/init.d/xvfb start"
  - "echo 'Installing Slimer'"
  - "wget http://download.slimerjs.org/releases/0.9.4/slimerjs-0.9.4.zip"
  - "unzip slimerjs-0.9.4.zip"
  - "mv slimerjs-0.9.4 ./slimerjs"

notifications:
  irc:
    channels:
      - "irc.mozilla.org#perf"
    template:
     - "BenWa: %{repository} (%{commit}) : %{message} %{build_url}"
    on_success: change
    on_failure: change

script: phantomjs js/tests/run_qunit.js test.html && ./slimerjs/slimerjs js/tests/run_qunit.js $PWD/test.html

Happy testing!


Gregory Szorc: Branch Cleanup in Firefox Repositories

Mozilla has historically done some funky things with the Firefox Mercurial repositories. One of the things we've done is create a bunch of named branches to track the Firefox release process. These are branch names like GECKO20b12_2011022218_RELBRANCH.

Over in bug 927219, we started the process of cleaning up some cruft left over from many of these old branches.

For starters, the old named branches in the Firefox repositories are being actively closed. When you hg commit --close-branch, Mercurial creates a special commit that says this branch is closed. Branches that are closed are automatically hidden from the output of hg branches and hg heads. As a result, the output of these commands is now much more usable.
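Closing and inspecting branches is just a couple of commands (illustrative only; the branch name here is one of the old release branches mentioned above):

$ hg up GECKO20b12_2011022218_RELBRANCH
$ hg commit --close-branch -m "Close obsolete release branch"
$ hg branches            # closed branches are no longer listed
$ hg branches --closed   # unless you ask for them explicitly
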

Closed branches still constitute heads on the DAG. And several heads lead to degraded performance in some situations (notably push and pull times - the same thing happens in Git). I'd like to eventually merge these old heads so that repositories only have 1 or a small number of DAG heads. However, extra care must be taken before that step. Stay tuned.

Anyway, for the average person reading, you probably won't be impacted by these changes at all. The greatest impact will be on the person who lands the first change on top of any repository whose last commit was a branch close. If you commit on top of the tip commit, you'll be committing on top of a previously closed branch! You'll instead want to hg up default after you pull to ensure you are on the proper DAG head! And even then, if you have local commits, you may not be based on top of the appropriate commit! A simple run of hg log --graph should help you decipher the state of the world. (Please note that the usability problems around discovering the appropriate head to land on are a result of our poor branching strategy for the Firefox repositories. We probably should have named branches tracking the active Gecko releases. But that ship sailed years ago and fixing that is pretty far down the priority list. Wallpapering over things with the firefoxtree extension is my recommended solution until matters are fixed.)

Air Mozilla: Product Coordination Meeting

Product Coordination Meeting Weekly coordination meeting for Firefox Desktop & Android product planning between Marketing/PR, Engineering, Release Scheduling, and Support.

Yunier José Sosa Vázquez: 6 years bringing Mozilla's voice to the Cuban intranet

Six years ago, coinciding with the birthday of our National Hero José Martí, a group of enthusiastic students led by Erick made history at our university by founding Firefoxmanía, at the time the Firefox community of the UCI.

Community photo

Today we are the Mozilla Community of Cuba, a name we have earned thanks to everyone who visits us, takes part in our events, and follows us across the country. It sounds easy, but the truth is that we have the task of giving our best and our greatest effort.

In all this time we have held all kinds of events where we enjoyed sharing the same interests, activities that brought us closer together and helped us grow. Who will forget the giant logo painted between the Mella plaza and teaching buildings 1 and 3, or the logo in the grass next to building 1? We have also published more than 950 news articles and 450 add-ons; add-on installs number over half a thousand and downloads over 200,000.

These years have also let me meet new people I have exchanged with; some are no longer around, but they are missed and remembered: Robert, my namesake Yunier, Manuel and his add-ons, Gustavo, Jesús, Lachy and his themes, among many others.

I also want to recognize those who remain active, those of us who keep moving forward, inventing new and great things for the community. Thanks to Erick (the driving force), Pochy, the lovely Ody, Nilmar, Abraham, Eliecer, and Artillero for staying active, and I hope they do so for many more years.

Honestly, I cannot see myself without Firefoxmanía; these 5 years in it have made me feel like part of a family, and thanks to it I have learned many things that have helped me grow toward the engineer I will become.

Today is also Data Privacy Day, and Mozilla launched a campaign that aims to make people aware of the importance of data privacy on the Internet. On this site you can discover tools and get tips on how to take control of your privacy online. Share the page with your friends so everyone can take better care of their data.

Some photos from these 6 years.

(Photo captions: Left to right: Erick, Yunier J, Antonio, Yunier, Roberto and Pedro Yaisel; the contest winners with us; everyone ready to cut the cakes next to the poster.)

Ben Hearsum: Signing Software at Scale

Mozilla produces a lot of builds. We build Firefox for somewhere between 5 to 10 platforms (depending how you count). We release Nightly and Aurora every single day, Beta twice a week, and Release and ESR every 6 weeks (at least). Each release contains an en-US build and nearly a hundred localized repacks. In the past the only builds we signed were Betas (which were once a week at the time), Releases, and ESRs. We had a pretty well established manual process for it, but because it was manual it was still error prone and impractical to use for Nightly and Aurora. Signing of Nightly and Aurora became an important issue when background updates were implemented, because one of the new security requirements with background updates was signed installers and MARs.

Enter: Signing Server

At this point it was clear that the only practical way to sign all the builds that we need to is to automate it. It sounded crazy to me at first. How can you automate something that depends on secret keys, passphrases, and very unfriendly tools? Well, there are some tricks you need to know, and throughout the development and improvement of our "signing server", we've learned a lot. In this post I'll talk about those tricks and show you how you can use them (or even our entire signing server!) to make your signing process faster and easier.

Credit where credit is due: Chris AtLee wrote the core of the signing server and support for some of the signature types. Over time Erick Dransch, Justin Wood, Dustin Mitchell, and I have made some improvements and added support for additional types of signatures.


Tip #1: Collect passphrases at startup

This should be obvious to most, but it's very important not to store the passphrases to your private keys unencrypted. However, because they're needed to unlock the private keys when doing any signing, the server needs to have access to them somehow. We've dealt with this by asking for them when launching a signing server instance:

$ bin/python tools/release/signing/signing-server.py signing.ini
gpg passphrase: 
signcode passphrase: 
mar passphrase: 

Because instances are started manually by someone in the small set of people with access to passphrases we're able to ensure that keys are never left unencrypted at rest.
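
For reference, the prompt loop itself can be as small as a few lines of Python (a minimal sketch, not the actual signing-server code) using getpass, so the passphrases only ever live in the process's memory:

import getpass

# Prompt once per key type at startup; nothing is echoed or written to disk.
passphrases = {}
for name in ("gpg", "signcode", "mar"):
    passphrases[name] = getpass.getpass("%s passphrase: " % name)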

Tip #2: Don't let just any machine request signed files

One of the first problems you run into when you have an API for signing files is how to make sure you don't accidentally sign malicious files. We've dealt with this in a few ways:

  • You need a special token in order to request any type of signing. These tokens are time limited and only a small subset of segregated machines may request them (on behalf of the build machines). Since build jobs can only be created if you're able to push to hg.mozilla.org, random people are unable to submit anything for signing.
  • Only our build machines are allowed to make signing requests. Even if you managed to get hold of a valid signing token, you wouldn't be able to do anything with it without also having access to a build machine. This is a layer of security that helps us protect against a situation where an evil doer may gain access to a loaner machine or other less restricted part of our infrastructure.

We have other layers of security built in too (HTTPS, firewalls, access control, etc.), but these are the key ones built into the signing server itself.

Tip #3: Use input redirection and other tricks to work around unfriendly command line tools

One of the trickiest parts about automating signing is getting all the necessary command line tools to accept input that's not coming from a console. Some of them are relatively easy and accept passphrases via stdin:

from subprocess import PIPE, Popen, STDOUT

# Write the passphrase to the tool's stdin, then close it so the tool proceeds.
proc = Popen(command, stdout=stdout, stderr=STDOUT, stdin=PIPE)
proc.stdin.write(passphrase)
proc.stdin.close()

Others, like OpenSSL, are fussier and require the use of pexpect:

import pexpect

# Drive the interactive prompt: wait for the passphrase prompt, then answer it.
proc = pexpect.spawn("openssl", args)
proc.logfile_read = stdout
proc.expect('Enter pass phrase')
proc.sendline(passphrase)

And it's no surprise at all that OS X is the fussiest of them all. In order to sign you have to unlock the keychain by hand, run the signing command, and relock the keychain yourself:

import pexpect
from subprocess import check_call, STDOUT

# Unlock the keychain, run the signing command, then lock the keychain again.
child = pexpect.spawn("security unlock-keychain " + keychain)
child.expect('password to unlock .*')
child.sendline(passphrase)
check_call(sign_command + [f], cwd=dir_, stdout=stdout, stderr=STDOUT)
check_call(["security", "lock-keychain", keychain])

Although the code is simple in the end, a lot of trial, error, and frustration was necessary to arrive at it.

Tip #4: Sign everything you can on Linux (including Windows binaries!)

As fussy as automating tools like openssl can be on Linux, it pales in comparison to trying to automate anything on Windows. In the days before the signing server we had a scripted signing method that ran on Windows. Instead of providing the passphrase directly to the signing tool, it had to be typed into a modal window. It was "automated" with an AutoIt script that typed in the password whenever the window popped up. This was hacky, and sometimes led to issues if someone moved the mouse or pressed a key at the wrong time and changed window focus.

Thankfully there's tools available for Linux that are capable of signing Windows binaries. We started off by using Mono's signcode - a more or less drop in replacement for Microsoft's:

$ signcode -spc MozAuthenticode.spc -v MozAuthenticode.pvk -t http://timestamp.verisign.com/scripts/timestamp.dll -i http://www.mozilla.com -a sha1 -tr 5 -tw 60 /tmp/test.exe
Mono SignCode - version 2.4.3.1
Sign assemblies and PE files using Authenticode(tm).
Copyright 2002, 2003 Motus Technologies. Copyright 2004-2008 Novell. BSD licensed.

Enter password for MozAuthenticode.pvk: 
Success

This works great for 32-bit binaries - we've been shipping binaries signed with it for years. For some reason that we haven't figured out though, it doesn't sign 64-bit binaries properly. For those we're using "osslsigncode", which is an OpenSSL based tool to do Authenticode signing:

$ osslsigncode -certs MozAuthenticode.spc -key MozAuthenticode.pvk -i http://www.mozilla.com -h sha1 -in /tmp/test64.exe -out /tmp/test64-signed.exe
Enter PEM pass phrase:
Succeeded

$ osslsigncode verify /tmp/test64-signed.exe 
Signature verification: ok

Number of signers: 1
    Signer #0:
        Subject: /C=US/ST=CA/L=Mountain View/O=Mozilla Corporation/CN=Mozilla Corporation
        Issuer : /C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert Assured ID Code Signing CA-1

Number of certificates: 3
    Cert #0:
        Subject: /C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert Assured ID Root CA
        Issuer : /C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert Assured ID Root CA
    Cert #1:
        Subject: /C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert Assured ID Code Signing CA-1
        Issuer : /C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert Assured ID Root CA
    Cert #2:
        Subject: /C=US/ST=CA/L=Mountain View/O=Mozilla Corporation/CN=Mozilla Corporation
        Issuer : /C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert Assured ID Code Signing CA-1

In addition to Authenticode signing we also do GPG, APK, and a couple of Mozilla-specific types of signing (MAR, EME Voucher) on Linux. We also sign our Mac builds with the signing server. Unfortunately, the tools needed for that are only available on OS X, so we have to run separate signing servers for these.

Tip #5: Run multiple signing servers

Nobody likes a single point of failure, so we've built support into our signing client for retrying against multiple instances. Even if we lose part of our signing server pool, our infrastructure stays up:
$ python signtool.py --cachedir cache -t token -n nonce -c host.cert -H dmgv2:mac-v2-signing1.srv.releng.scl3.mozilla.com:9120 -H dmgv2:mac-v2-signing2.srv.releng.scl3.mozilla.com:9120 -H dmgv2:mac-v2-signing3.srv.releng.scl3.mozilla.com:9120 -H dmgv2:mac-v2-signing4.srv.releng.scl3.mozilla.com:9120 --formats dmgv2 Firefox.app
2015-01-23 06:17:59,112 - ed40176524e7c197f4e23f6065a64dc3c9a62e71: processing Firefox.app.tar.gz on https://mac-v2-signing3.srv.releng.scl3.mozilla.com:9120
2015-01-23 06:17:59,118 - ed40176524e7c197f4e23f6065a64dc3c9a62e71: connection error; trying again soon
2015-01-23 06:18:00,119 - ed40176524e7c197f4e23f6065a64dc3c9a62e71: processing Firefox.app.tar.gz on https://mac-v2-signing4.srv.releng.scl3.mozilla.com:9120
2015-01-23 06:18:00,141 - ed40176524e7c197f4e23f6065a64dc3c9a62e71: uploading for signing
2015-01-23 06:18:10,748 - ed40176524e7c197f4e23f6065a64dc3c9a62e71: processing Firefox.app.tar.gz on https://mac-v2-signing4.srv.releng.scl3.mozilla.com:9120
2015-01-23 06:19:11,848 - ed40176524e7c197f4e23f6065a64dc3c9a62e71: processing Firefox.app.tar.gz on https://mac-v2-signing4.srv.releng.scl3.mozilla.com:9120
2015-01-23 06:19:40,480 - ed40176524e7c197f4e23f6065a64dc3c9a62e71: OK

Running your own signing server

It's easy! All of the code you need to run your own signing server is in our tools repository. You'll need to set up a virtualenv and create your own config file, but once you're ready you can attempt to start it with the following command:

python signing-server.py signing.ini

You'll be prompted for the passphrases to your private keys. If there are any problems with your config file or the passphrases, the server will fail to start. Once you've got it up and running you can try signing! get_token.py has an example of how to generate a signing token, and signtool.py will take your unsigned files and give you back signed versions. Happy signing!

Robert Longson: New SVG/CSS Filter support in Firefox

There’s a new specification for filters that replaces the filters module in SVG 1.1. Firefox and Chrome are both implementing new features from this specification.

Firefox 30 was the first version to support feDropShadow. As well as being simpler to write, feDropShadow will be faster than the equivalent individual filters as it skips some unnecessary colour conversions that we’d otherwise perform.

Firefox 35 has support for all CSS filters, so for simple cases you no longer need any SVG markup to create a filter. We have examples on MDN showing how to use CSS filters.

We’ve also implemented filter chaining, that is, we support multiple filters, either via URLs or CSS filter functions, on a single element.
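
For example, chaining two CSS filter functions on a single element is as simple as listing them in the filter property (the selector and values here are just for illustration):

.thumbnail {
  /* Applied left to right: desaturate first, then add a drop shadow. */
  filter: grayscale(50%) drop-shadow(2px 2px 3px rgba(0, 0, 0, 0.4));
}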

As with earlier versions of Firefox you can apply SVG and CSS filters to both SVG and HTML elements.

As part of the rewrite to support SVG filters we’ve improved their performance: on Windows we use D2D to render them, taking advantage of any hardware acceleration possibilities on that platform, and on other platforms we use SIMD and SSE2 to accelerate rendering, so you can now use more filters without slowing your site down.


Pete Moore: Weekly review 2015-01-28

Task Cluster Go Client

This week I got the TaskCluster Go client talking to the TaskCluster API service endpoints.

See: https://github.com/petemoore/taskcluster-client-go/blob/master/README.md

I also now have part of the client library auto-generating, e.g. see: https://github.com/petemoore/taskcluster-client-go/blob/master/client/generated-code.go

Shouldn’t be too far from auto-generating the entire client library soon and having it working, tested, documented and published.

Mozilla Privacy Blog: How Mozilla Addresses the Privacy Paradox

Earlier this month, a 20 year old NBA Clippers fan held up a sign in a crowded Washington DC arena with her phone number on it. Seasoned privacy professionals have long lamented the old adage that if you give someone … Continue reading

Ryan Kelly: Are we Python yet?

Andy McKay: Iron Workers Memorial Bridge

I pretty much hate the Iron Workers Memorial Bridge (or Second Narrows). I have to cross it each time I cycle into work and it's miserable.

It's asymmetric: the climb up from north to south is demoralising. The ride is often windy. Often wet. Usually cold. Everything that nature can throw at you, you'll encounter on the bridge.

And currently it's only got one sidewalk, which means everyone has to stop and let each other pass. The east sidewalk is currently being renovated and will hopefully be slightly wider. This means it's harder to get over, and there's a brutally dangerous off-ramp on the north side. I'm surprised no-one has been killed on that yet.

Once the east sidewalk renovation gets completed, they'll start on the west side. The other part of the renovation is adding in a suicide fence. That will obstruct the view. But just occasionally the bridge gives you a stunning view... and just occasionally there's a break in the bike traffic and I get a photo like this:

Looking west to downtown and Lions Gate from the Second Narrows

I just realised that I might miss that view.

William Lachance: mozregression updates

Lots of movement in mozregression (a tool for automatically determining when a regression was introduced in Firefox by bisecting builds on ftp.mozilla.org) in the last few months. Here’s some highlights:

  • Support for win64 nightly and inbound builds (Kapil Singh, Vaibhav Agarwal)
  • Support for using an http cache to reduce time spent downloading builds (Sam Garrett)
  • Way better logging and printing of remaining time to finish bisection (Julien Pagès)
  • Much improved performance when bisecting inbound (Julien)
  • Support for automatic determination on whether a build is good/bad via a custom script (Julien)
  • Tons of bug fixes and other robustness improvements (me, Sam, Julien, others…)

Also thanks to Julien, we have a spiffy new website which documents many of these features. If it’s been a while, be sure to update your copy of mozregression to the latest version and check out the site for documentation on how to use the new features described above!
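
If you haven't tried it yet, a typical nightly bisection is a single command (a sketch; flag names may differ between versions, so check the documentation above):

$ mozregression --good 2015-01-01 --bad 2015-01-28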

Thanks to everyone involved (especially Julien) for all the hard work. Hopefully the payoff will be a tool that’s just that much more useful to Firefox contributors everywhere. :)

Justin Wood: Release Engineering does a lot…

Hey Everyone,

I spent a few minutes a week over the last month or two working on compiling a list of Release Engineering work areas. Included in that list is identifying which repositories we “own” and work in, as well as where these repositories are mirrored. (We have copies in hg.m.o, git.m.o and GitHub; some live exclusively in their home location.)

This is all happening while we transition to a more uniform and modern design style and philosophy.

My major takeaway here is that we have A LOT of things that we do. (This list explicitly excludes repositories that are obsolete and unused.)

So without further ado, I present our page ReleaseEngineering/Repositories

You’ll notice a few things about this: we have columns for Mirrors and RoR (Repository of Record). “Committable Location” was requested by Hal and is explicitly for cases where, although we consider the RoR our important location, it may not necessarily be where we allow commits.

The other interesting thing is that we have automatic population of Travis and Coveralls URLs/status icons. This comes for free using some magic wiki templates I wrote.

The other piece of note here is that the table is generated from a list of pages using “SemanticMediaWiki”, so the links to the repositories can be populated with things like “where are the docs”, “what applications use this repo”, “who are suitable reviewers”, etc. (all of those are TODO on the releng side so far).

I’m hoping to put together a blog post at some point about how I chose to do much of this with MediaWiki. In the meantime, should any team at Mozilla find this enticing and wish to have one for themselves, much of the work I did here can be easily replicated for your team, even if you don’t need/like the multiple repo location magic of our table. I can help get you set up to add your own repos to the mix.

Remember, the only fields that are necessary are a repo name, the repo location, and owner(s). The last field can even be automatically filled in by a form on your page (see the end of Release Engineering’s page for an example of that form).

Reach out to me on IRC or E-mail (information is on my mozillians profile) if you desire this for your team and we can talk. If you don’t have a need for your team, you can stare at all the stuff Releng is doing and remember to thank one of us next time you see us. (or inquire about what we do, point contributors our way, we’re a friendly group, I promise.)

Hannah Kane: A new online home for those who #teachtheweb

We’ve recently begun work on a new website that will serve the mentors in our Webmaker community—a gathering place for anyone who is teaching the Web. They’ll find activity kits, trainings, badges, the Web Literacy Map, and more. It will also be an online clubhouse for Webmaker Clubs, and will showcase the work of Hives to the broader network.

Our vision for the site is that it will provide pathways for sustained involvement in teaching the Web. Imagine a scenario where, after hosting a Maker Party, a college student in Pune wants to build on the momentum, but doesn’t know how. Or imagine a librarian in Seattle who is looking for activities for her weekly teen drop-in hours. Or a teacher in Buenos Aires who is looking to level up his own digital literacy skills. In each of these scenarios, we hope the person will look to this new site to find what they need.

We’re in the very early stages of building out the site. One of our first challenges is to figure out the best way to organize all of the content.

Fortunately, we were able to find 14 members of the community who were willing to participate in a “virtual card-sorting” activity. We gave each of the volunteers a list of 22 content areas (e.g. “Find a Teaching Kit,” “Join a Webmaker Club,” “Participate in a community discussion”), and asked them to organize the items into groups that made sense to them.

The results were fascinating. Some grouped the content by specific programs, concepts, or offerings. Others grouped by function (e.g. “Participate,” “Learn,” “Lead”). Others organized by identity (e.g. “Learner” or “Mentor”). Still others grouped by level of expertise needed.

We owe a debt of gratitude to those who participated in the research. We were able to better understand the variety of mental models, and we’re currently using those insights to build out some wireframes to test in the next heartbeat.

Once we firm up the information architecture, we’ll build and launch v1 of the site (our goal is to launch it by the end of Q1). From there, we’ll continue to iterate, adding more functionality and resources to meet the needs of our mentor community.

Future iterations will likely include:

  • Improving the way we share and discover curriculum modules
  • Enhancing our online training platform
  • Providing tools for groups to self-organize
  • Making improvements to our badging platform
  • Incorporating the next version of the Web Literacy Map

Stay tuned for more updates and opportunities to provide feedback throughout the process. We’ve also started a Discourse thread for continuing discussion of the platform.


Christian Heilmann: Where would people like to see me – some interesting answers

Will code for Bananas

For pure Shits and Giggles™ I put up a form yesterday asking people where I should try to work now that I’ve left Mozilla. By no means have I approached all the companies I listed (hence an “other” option). I just wanted to see what people see me as and where I could do some good work. Of course, some of the answers disagreed and made a lot of assumptions:

Your ego knows no bounds. Putting companies that have already turned you down is very special behavior.

This is utterly true. I applied at Yahoo in 1997 and didn’t get the job. I then worked at Yahoo for almost five years a few years later. I should not have done that. Companies don’t change and once you have a certain skillset there is no way you could ever learn something different that might make yourself appealing to others. Know your place, and all that.

Sarcasm aside, I am always amazed how lucky we are to have choices in our market. There is not a single day I am not both baffled and very, very thankful for being able to do what I like and make a living with it. I feel like a fraud many a time, and I know many other people who seemingly have a “big ego” doing the same. The trick is to not let that stop you but understand that it makes you a better person, colleague and employee. We should strive to get better all the time, and this means reaching beyond what you think you can achieve.

I’m especially flattered that people thought I had already been contacted by all the companies I listed and asked for people to pick for me. I love working in the open, but that’s a bit too open, even for my taste. I am not that lucky – I don’t think anybody is.

The answers were pretty funny, and of course skewed as I gave a few options rather than leaving it completely open. The final “wish of the people list” is:

  • W3C (108 votes)
  • Canonical (39 votes)
  • Microsoft (38 votes)
  • Google (37 votes)
  • Facebook (14 votes)
  • Twitter (9 votes)
  • Mozilla (7 votes)
  • PubNub (26 votes)

PubNub’s entries had more and more exclamation points the more of them were submitted – I don’t know what happened there.

Other options with multiple votes were Apple, Adobe, CozyCloud, EFF, Futurice, Khan Academy, Opera, Spotify (I know who did that!) and the very charming “Retirement”.

Options labeled “fascinating” were:

  • A Circus
  • Burger King (that might still be a problem as I used to work on McDonalds.co.uk – might be a conflict of interest)
  • BangBros (no idea what that might be – a Nintendo Game?)
  • Catholic Church
  • Derick[SIC] Zoolander’s School for kids who can’t read good
  • Kevin Smith’s Movie
  • “Pizza chef at my local restaurant” (I was Pizza delivery guy for a while, might be possible to level up)
  • Playboy (they did publish Fahrenheit 451, let’s not forget that)
  • Taco Bell (this would have to be remote, or a hell of a commute)
  • The Avengers (I could be persuaded, but it probably will be an advisory role, Tony Stark style)
  • UKIP (erm, no, thanks)
  • Zombocom and
  • Starbucks barista. (this would mean I couldn’t go to Sweden any longer – they really like their coffee and Starbucks is seen as the Antichrist by many)

Some of the answers gave me super powers I don’t have, but they show that people would love to have people like me talk to others outside the bubble more:

  • “NASA” (I really, really think I have nothing they need)
  • “A book publisher (they need help to move into the 21st century)”
  • “Data.gov or another country’s open data platform.” (did work with that, might re-visit it)
  • “GCHQ; Be the bridge between our worlds”
  • “Spanish Government – Podemos CTO” (they might realise I am not Spanish)
  • “any bank for a11y online banking :-/”

Some answers showed a need to vent:

  • “Anything but Google or Facebook for God sake!”
  • “OK, and option 2: perhaps Twitter? You might improve their horrible JS code in the website! ;)”

The most confusing answers were “My butthole” which sounds cramped and not a creative working environment and “Who are you?” which begs the answer “Why did you fill this form?”.

Many of the answers showed a lot of trust in me and made me feel all warm and fuzzy and I want to thank whoever gave those:

  • be CTO of awesome startup
  • Enjoy life Chris!
  • Start something of your own. You rock too hard, anyway!
  • you were doing just fine. choose the one where your presence can be heard the loudest. cheers!
  • you’ve left Mozilla for something else, so you are jobless for a week or so! :-)
  • Yourself then hire me and let me tap your Dev knowledge :D

I have a new job. I am starting on Monday and will announce it in probably too much detail here on Thursday. Thanks to everyone who took part in this little exercise. I have an idea of what I need to do in my new job, and the ideas listed here and the results showed me that I am on the right track.

Stormy Peters: Can or Can’t?


Can read or can’t eat books?

What I love about open source is that it’s a “can” world by default. You can do anything you think needs doing and nobody will tell you that you can’t. (They may not take your patch but they won’t tell you that you can’t create it!)

It’s often easier to define things by what they are not or what we can’t do. And the danger of that is you create a culture of “can’t”. Anyone who has raised kids or animals knows this. “No, don’t jump.” You can’t jump on people. “No, off the sofa.” You can’t be on the furniture. “No, don’t lick!” You can’t slobber on me. And hopefully when you realize it, you can fix it. “You can have this stuffed animal (instead of my favorite shoe). Good dog!”

Often when we aren’t sure how to do something, we fill the world with can’ts. “I don’t know how we should do this, but I know you can’t do that on a proprietary mailing list.” “I don’t know how I should lose weight, but I know you can’t have dessert.” I don’t know. Can’t. Don’t know. Can’t. Unsure. Can’t.

Watch the world around you. Is your world full of can’ts or full of “can do”s? Can you change it for the better?

Nathan Froyd: examples of poor API design, 1/N – pldhash functions

The other day in the #content IRC channel:

<bz> I have learned so many things about how to not define APIs in my work with Mozilla code ;)
<bz> (probably lots more to learn, though)

I, too, am still learning a lot about what makes a good API. Like a lot of other things, it’s easier to point out poor API design than to describe examples of good API design, and that’s what this blog post is about. In particular, the venerable XPCOM datastructure PLDHashTable has been undergoing a number of changes lately, all aimed at bringing it up to date. (The question of why we have our own implementations of things that exist in the C++ standard library is for a separate blog post.)

The whole effort started with noticing that PL_DHashTableOperate is not a well-structured API. It’s necessary to quote some more of the API surface to fully understand what’s going on here:

typedef enum PLDHashOperator {
    PL_DHASH_LOOKUP = 0,        /* lookup entry */
    PL_DHASH_ADD = 1,           /* add entry */
    PL_DHASH_REMOVE = 2,        /* remove entry, or enumerator says remove */
    PL_DHASH_NEXT = 0,          /* enumerator says continue */
    PL_DHASH_STOP = 1           /* enumerator says stop */
} PLDHashOperator;

typedef PLDHashOperator
(* PLDHashEnumerator)(PLDHashTable *table, PLDHashEntryHdr *hdr, uint32_t number,
                      void *arg);

uint32_t
PL_DHashTableEnumerate(PLDHashTable *table, PLDHashEnumerator etor, void *arg);

PLDHashEntryHdr*
PL_DHashTableOperate(PLDHashTable* table, const void* key, PLDHashOperator op);

(PL_DHashTableOperate no longer exists in the tree due to other cleanup bugs; the above is approximately what it looked like at the end of 2014.)

There are several problems with the above slice of the API:

  • PL_DHashTableOperate(table, key, PL_DHASH_ADD) is a long way to spell what should have been named PL_DHashTableAdd(table, key)
  • There’s another problem with the above: it’s making a runtime decision (based on the value of op) about what should have been a compile-time decision: this particular call will always and forever be an add operation. We shouldn’t have the (admittedly small) runtime overhead of dispatching on op. It’s worth noting that compiling with LTO and a quality inliner will remove that runtime overhead, but we might as well structure the code so non-LTO compiles benefit and the code at callsites reads better.
  • Given the above definitions, you can say PL_DHashTableOperate(table, key, PL_DHASH_STOP) and nothing will complain. The PL_DHASH_NEXT and PL_DHASH_STOP values are really only for a function of type PLDHashEnumerator to return, but nothing about the above definition enforces that in any way. Similarly, you can return PL_DHASH_LOOKUP from a PLDHashEnumerator function, which is nonsensical.
  • The requirement to always return a PLDHashEntryHdr* from PL_DHashTableOperate means doing a PL_DHASH_REMOVE has to return something; it happens to return nullptr always, but it really should return void. In a similar fashion, PL_DHASH_LOOKUP always returns a non-nullptr pointer (!); one has to check PL_DHASH_ENTRY_IS_{FREE,BUSY} on the returned value. The typical style for an API like this would be to return nullptr if an entry for the given key didn’t exist, and a non-nullptr pointer if such an entry did. The free-ness or busy-ness of a given entry should be a property entirely internal to the hashtable implementation (it’s possible that some scenarios could be slightly more efficient with direct access to the busy-ness of an entry).

We might infer corresponding properties of a good API from each of the above issues:

  • Entry points for the API produce readable code.
  • The API doesn’t enforce unnecessary overhead.
  • The API makes it impossible to talk about nonsensical things.
  • It is always reasonably clear what return values from API functions describe.

Fixing the first two bulleted issues, above, was the subject of bug 1118024, done by Michael Pruett. Once that was done, we really didn’t need PL_DHashTableOperate, and removing PL_DHashTableOperate and related code was done in bug 1121202 and bug 1124920 by Michael Pruett and Nicholas Nethercote, respectively. Fixing the unusual return convention of PL_DHashTableLookup is being done in bug 1124973 by Nicholas Nethercote. Maybe once all this gets done, we can move away from C-style PL_DHashTable* functions to C++ methods on PLDHashTable itself!
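To make those properties concrete, here is a rough sketch of what per-operation C++ methods with self-describing signatures might look like. To be clear, this is only my illustration of the direction, not the interface that actually landed in the tree:

// Illustration only -- not the real PLDHashTable interface in the tree.
#include <cstdint>

struct PLDHashEntryHdr;  // entry header, as in the existing C API

class PLDHashTable {
public:
  // Returns the entry for aKey, or nullptr if no such entry exists;
  // free/busy bookkeeping stays internal to the table.
  PLDHashEntryHdr* Lookup(const void* aKey);

  // Finds or creates the entry for aKey; returns nullptr only on failure
  // (e.g. out of memory), so the return value is always meaningful.
  PLDHashEntryHdr* Add(const void* aKey);

  // Removal has nothing useful to report, so it returns void.
  void Remove(const void* aKey);

  // Enumeration gets its own callback type, so continue/stop values can no
  // longer be confused with operation selectors like PL_DHASH_ADD.
  enum class IterationControl { Continue, Stop };
  using Enumerator = IterationControl (*)(PLDHashEntryHdr* aEntry,
                                          uint32_t aNumber, void* aArg);
  uint32_t Enumerate(Enumerator aEtor, void* aArg);
};

Each operation is a separate entry point, the dispatch happens at compile time, and none of the nonsensical combinations allowed by the old PL_DHashTableOperate signature can even be written down.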

Next time we’ll talk about the actual contents of a PL_DHashTable and how improvements have been made there, too.

Gregory Szorc: Commit Part Numbers and MozReview

It is common for commit messages in Firefox to contain strings like Part 1, Part 2, etc. See this push for bug 784841 for an extreme multi-part example.

When code review is conducted in Bugzilla, these identifiers are necessary because Bugzilla orders attachments/patches in the order they were updated or their patch title (I'm not actually sure!). If part numbers were omitted, it could be very confusing trying to figure out which order patches should be applied in.

However, when code review is conducted in MozReview, there is no need for explicit part numbers to convey ordering because the ordering of commits is implicitly defined by the repository history that you pushed to MozReview!

I argue that if you are using MozReview, you should stop writing Part N in your commit messages, as it provides little to no benefit.

I, for one, welcome this new world order: I've previously wasted a lot of time rewriting commit messages to reflect new part ordering after doing history rewriting. With MozReview, that overhead is gone and I barely pay a penalty for rewriting history, something that often produces a more reviewable series of commits and makes reviewing and landing a complex patch series significantly easier.

Cameron Kaiser: And now for something completely different: the Pono Player review and Power Macs (plus: who's really to blame for Dropbox?)

Regular business first: this is now a syndicated blog on Planet Mozilla. I consider this an honour that should also go a long way toward reminding folks that not only are there well-supported community tier-3 ports, but lots of people still use them. In return I promise not to bore the punters too much with vintage technology.

IonPower crossed phase 2 (compilation) yesterday -- it builds and links, and nearly immediately asserts after some brief codegen, but at this phase that's entirely expected. Next, phase 3 is to get it to build a trivial script in Baseline mode ("var i=0") and run to completion without crashing or assertions, and phase 4 is to get it to pass the test suite in Baseline-only mode, which will make it as functional as PPCBC. Phase 5 and 6 are the same, but this time for Ion. IonPower really repays most of our technical debt -- no more fragile glue code trying to keep the JaegerMonkey code generator working, substantially fewer compiler warnings, and a lot less hacks to the JIT to work around oddities of branching and branch optimization. Plus, many of the optimizations I wrote for PPCBC will transfer to IonPower, so it should still be nearly as fast in Baseline-only mode. We'll talk more about the changes required in a future blog post.

Now to the Power Mac scene. I haven't commented on Dropbox dropping PowerPC support (and 10.4/10.5) because that's been repeatedly reported by others in the blogscene and personally I rarely use Dropbox at all, having my own server infrastructure for file exchange. That said, there are many people who rely on it heavily, even a petition (which you can sign) to bring support back. But let's be clear here: do you really want to blame someone? Do you really want to blame the right someone? Then blame Apple. Apple dropped PowerPC compilation from Xcode 4; Apple dropped Rosetta. Unless you keep a 10.6 machine around running Xcode 3, you can't build (true) Universal binaries anymore -- let alone one that compiles against the 10.4 SDK -- and it's doubtful Apple would let such an app (even if you did build it) into the App Store because it's predicated on deprecated technology. Except for wackos like me who spend time building PowerPC-specific applications and/or don't give a flying cancerous pancreas whether Apple finds such work acceptable, this approach already isn't viable for a commercial business and it's becoming even less viable as Apple actively retires 10.6-capable models. So, sure, make your voices heard. But don't forget who screwed us first, and keep your vintage hardware running.

That said, I am personally aware of someone™ who is working on getting the supported Python interconnect running on OS X Power Macs, and it might be possible to rebuild Finder integration on top of that. (It's not me. Don't ask.) I'll let this individual comment if he or she wants to.

Onto the main article. As many of you may or may not know, my undergraduate degree was actually in general linguistics, and all linguists must have (obviously) some working knowledge of acoustics. I've been a bit of a poseur audiophile too, and while I enjoy good music I especially enjoy good music that's well engineered (Alan Parsons is a demi-god).

The Por Pono Player, thus, gives me pause. In acoustics I lived and died by the Nyquist-Shannon sampling theorem, and my day job today is so heavily science and research-oriented that I really need to deal with claims in a scientific, reproducible manner. That doesn't mean I don't have an open mind or won't make unusual decisions on a music format for non-auditory reasons. For example, I prefer to keep my tracks uncompressed, even though I freely admit that I'm hard pressed to find any difference in a 256kbit/s MP3 (let alone 320), because I'd like to keep a bitwise exact copy for archival purposes and playback; in fact, I use AIFF as my preferred format simply because OS X rips directly to it, everything plays it, and everything plays it with minimum CPU overhead despite FLAC being lossless and smaller. And hard disks are cheap, and I can convert it to FLAC for my Sansa Fuze if I needed to.

So thus it is with the Por Pono Player. For $400, you can get a player that directly pumps uncompressed, high-quality remastered 24-bit audio at up to 192kHz into your ears with no downsampling and allegedly no funny business. Immediately my acoustics professor cries foul. "Cameron," she says as she writes a big fat F on this blog post, "you know perfectly well that a CD using 44.1kHz as its sampling rate will accurately reproduce sounds up to 22.05kHz without aliasing, and 16-bit audio has indistinguishable quantization error in multiple blinded studies." Yes, I know, I say sheepishly, having tried to create high-bit rate digital playback algorithms on the Commodore 64 and failed because the 6510's clock speed isn't fast enough to pump samples through the SID chip at anything much above telephone call frequencies. But I figured that if there was a chance, if there was anything, that could demonstrate a difference in audio quality that I could uncover it with a Pono Player and a set of good headphones (I own a set of Grado SR125e cans, which are outstanding for the price). So I preordered one and yesterday it arrived, in a fun wooden box:

It includes a MicroUSB charger (and cable), an SDXC MicroSD card (64GB, plus the 64GB internal storage), a fawning missive from Neil Young, the instigator of the original Kickstarter, the yellow triangular unit itself (available now in other colours), and no headphones (it's BYO headset):

My original plan was to do an A-B comparison with Pink Floyd's Dark Side of the Moon because it was originally mastered by the godlike Alan Parsons, I have the SACD 30th Anniversary master, and the album is generally considered high quality in all its forms. When I tried to do that, though, several problems rapidly became apparent:

First, the included card is SDXC, and SDXC support (and exFAT) wasn't added to OS X until 10.6.4. Although you can get exFAT support on 10.5 with OSXFUSE, I don't know how good their support is on PowerPC and it definitely doesn't work on Tiger (and I'm not aware of a module for the older MacFUSE that does run on Tiger). That limits you to SDHC cards up to 32GB at least on 10.4, which really hurts on FLAC or ALAC and especially on AIFF.

Second, the internal storage is not accessible directly to the OS. I plugged in the Pono Player to my iMac G4 and it showed up in System Profiler, but I couldn't do anything with it. The 64GB of internal storage is only accessible to the music store app, which brings us to the third problem:

Third, the Pono Music World app (a skinned version of JRiver Media Center) is Intel-only, 10.6+. You can't download tracks any other way right now, which also means you're currently screwed if you use Linux, even on an Intel Mac. And all they had was Dark Side in 44.1kHz/16 bit ... exactly the same as CD!

So I looked around for other options. HDTracks didn't have Dark Side, though they did have The (weaksauce) Endless River and The Division Bell in 96kHz/24 bit. I own both of these, but 96kHz wasn't really what I had in mind, and when I signed up to try a track it turned out they need a downloader too, which is also a reskinned JRiver! And their reasoning for this in the FAQ is total crap.

Eventually I was able to find two sites that offer sample tracks I could download in TenFourFox (I had to downsample one for comparison). The first offers multiple formats in WAV, which your Power Mac actually can play, even in 24-bit (but it may be downsampled for your audio chip; if you go to /Applications/Utilities/Audio MIDI Setup.app you can see the sample rate and quantization for your audio output -- my quad G5 offers up to 24/96kHz but my iMac only has 16/44.1). The second was in FLAC, which Audacity crashed trying to convert, MacAmp Lite X wouldn't even recognize, and XiphQT (via QuickTime) played like it was being held underwater by a chainsaw (sample size mismatch, no doubt); I had to convert this by hand. I then put them onto a SDHC card and installed it in the Pono.

Yuck. I was very disappointed in the interface and LCD. I know that display quality wasn't a major concern, but it looks clunky and ugly and has terrible angles (see for yourself!) and on a $400 device that's not acceptable. The UI is very slow sometimes, even with the hardware buttons (just volume and power, no track controls), and the touch screen is very low quality. But I duly tried the built-in Neil Young track, which being an official Por Pono track turns on a special blue light to tell you it's special, and on my Grados it sounded pretty good, actually. That was encouraging. So I turned off the display and went through a few cycles of A-B testing with a random playlist between the two sets of tracks.

And ... well ... my identification abilities were almost completely statistical chance. In fact, I was slightly worse than chance would predict on the second set of tracks. I can only conclude that Harry Nyquist triumphs. With high quality headphones, presumably high quality DSPs and presumably high quality recordings, it's absolutely bupkis difference for me between CD-quality and Pono-quality.
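To put "statistical chance" in numbers: each A/B trial is a coin flip if you can't actually hear a difference, so the probability of getting k or more right out of n trials by guessing alone is a binomial tail. Here's a tiny standalone C++ snippet that computes it (the trial count is made up for illustration, not the exact number of comparisons I ran):

#include <cmath>
#include <cstdio>

// Probability of getting at least k of n A/B identifications right by
// guessing alone (binomial tail with p = 0.5 per trial).
static double GuessingTail(int n, int k) {
  double total = 0.0;
  for (int i = k; i <= n; ++i) {
    // log of C(n, i), via lgamma, to avoid overflowing factorials
    double logChoose = std::lgamma(n + 1.0) - std::lgamma(i + 1.0) -
                       std::lgamma(n - i + 1.0);
    total += std::exp(logChoose + n * std::log(0.5));
  }
  return total;
}

int main() {
  const int trials = 10;  // hypothetical number of A/B comparisons
  for (int correct = 5; correct <= trials; ++correct) {
    std::printf("%2d/%d correct: p = %.3f by pure guessing\n",
                correct, trials, GuessingTail(trials, correct));
  }
  return 0;
}

With ten trials you'd need nine or more correct before guessing becomes an unlikely explanation (p < 0.05); hovering around five or six right is exactly what "no audible difference" looks like.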

Don't get me wrong: I am happy to hear that other people are concerned about the deficiencies in modern audio engineering -- and making it a marketable feature. We've all heard the "loudness war," for example, which dramatically compresses the dynamic range of previously luxurious tracks into a bafflingly small amplitude range which the uncultured ear, used only to quantity over quality, apparently prefers. Furthermore, early CD masters used RIAA equalization, which overdrove the treble and was completely unnecessary with digital audio, though that grave error hasn't been repeated since at least 1990 or earlier. Fortunately, assuming you get audio engineers who know what they're doing, a modern CD is every bit as good to the human ear as a DVD-Audio disc or an SACD. And if modern music makes a return to quality engineering with high quality intermediates (where 24-bit really does make a difference) and appropriate dynamic range, we'll all be better off.

But the Pono Player doesn't live up to the hype in pretty much any respect. It has line out (which does double as a headphone port to share) and it's high quality for what it does play, so it'll be nice for my hi-fi system if I can get anything on it, but the Sansa Fuze is smaller and more convenient as a portable player and the Pono's going back in the wooden box. Frankly, it feels like it was pushed out half-baked, it's problematic if you don't own a modern Mac, and the imperceptible improvements in audio mean it's definitely not worth the money over what you already own. But that's why you read this blog: I just spent $400 so you don't have to.

Tarek Ziadé: Charity Python Code Review

Raising 2500 euros for a charity is hard. That's what I am trying to do for the Berlin Marathon on Alvarum.

Mind you, this is not to get a bib - I was lucky enough to get one from the lottery. It's just that it feels right to take the opportunity of this marathon to raise money for Doctors without Borders, whatever my marathon result turns out to be. I am not getting any money out of this; I am paying all my marathon fees myself. Every penny donated goes to MSF (Doctors without Borders).

It's the first time I am doing fundraising for a foundation, and I guess that I've exhausted all the potential donors in my family, friends and colleague circles.

I guess I've reached the point where I have to give back something to the people who are willing to donate.

So here's a proposal: I have been doing Python coding for quite some time, have written some books in both English and French on the topic, and have worked on large-scale projects using Python. I have also given a lot of talks at Python conferences around the world.

I am not an expert in any specific field like scientific Python, but I am good at "general Python" and at designing stuff that scales.

I am offering one of the following services:

  • Python code review
  • Slides review
  • Documentation review or translation from English to French

The contract (gosh this is probably very incomplete):

  • Your project has to be under an open source license, and available online.
  • I am looking for small reviews, between 30 minutes and 4 hours of work, I guess.
  • You are responsible for the initial guidance, e.g. explaining what specific review you want me to do.
  • I am allowed to say no (mostly if by any chance I get tons of proposals, or if I don't feel like I am the right person to review your code).
  • This is on my free time so I can't really give deadlines - however, depending on the project and amount of work, I will be able to roughly estimate how long it is going to take and when I should be able to do it.
  • If I do the work you can't back off if you don't like the result of my reviews. If you do without a good reason, this is mean and I might cry a little.
  • I won't be responsible for any damage or liability done to your project because of my review.
  • I am not issuing any invoice or anything like that. The fundraising site will, however, issue a classical invoice when you make the donation. I am not part of that transaction nor responsible for it.
  • Once the work is done, I will tell you how long it took, and you are free to give whatever you think is fair; I will happily accept whatever you give to my fundraising. If you give 1 euro for 4 hours of work I might make a sad face, but I will just accept it.

Interested? Mail me! tarek@ziade.org

And if you just want to give to the fundraising it's here: http://www.alvarum.com/tarekziade

Michael Kaply: What About Firefox Deployment?

You might have noticed that I spend most of my resources around configuring Firefox and not around deploying Firefox. There are a couple reasons for that:

  1. There really isn’t a "one size fits all" solution for Firefox deployment because there are so many products that can be used to deploy software within different organizations.
  2. Most discussions around deployment devolve into a "I wish Mozilla would do a Firefox MSI" discussion.

That being said, there are some things I can recommend around deploying Firefox on Windows.

If you want to modify the Firefox installer, I’ve done a few posts on this in the past:

If you need to integrate add-ons into that install, I've posted about that as well:

You could also consider asking on the Enterprise Working Group mailing list. There's probably someone that's already figured it out for your software deployment solution.

If you really need an MSI, check out FrontMotion. They've been doing MSI work for quite a while.

And if you really want Firefox to have an official MSI, consider working on bug 598647. That's where an MSI implementation got started but never finished.

Byron Jones: happy bmo push day!

the following changes have been pushed to bugzilla.mozilla.org:

  • [1122269] no longer have access to https://bugzilla.mozilla.org/cvs-update.log
  • [1119184] Securemail incorrectly displays ” You will have to contact bugzilla-admin@foo to reset your password.” for whines
  • [1122565] editversions.cgi should focus the version field on page load to cut down on need for mouse
  • [1124254] form.dev-engagement-event: More changes to default NEEDINFO
  • [1119988] form.dev-engagement-event: disabled accounts causes invalid/incomplete bugs to be created
  • [616197] Wrap long bug summaries in dependency graphs, to avoid horizontal scrolling
  • [1117345] Can’t choose a resolution when trying to resolve a bug (with canconfirm rights)
  • [1125320] form.dev-engagement-event: Two new questions
  • [1121594] Mozilla Recruiting Requisition Opening Process Template
  • [1124437] Backport upstream bug 1090275 to bmo/4.2 to whitelist webservice api methods
  • [1124432] Backport upstream bug 1079065 to bmo/4.2 to fix improper use of open() calls

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla

Jess Klein: Quality Assurance reviews for Design, Functionality and Communications


This week a few of the features that I have been writing about will be shipping on webmaker.org - the work for Privacy Day and the new on-boarding experience. You might be wondering what we've been up to in the period after the project gets coded and before it goes live. Two magical words: quality assurance (QA). We are still refining the process, and I am very open to suggestions as to how to improve and streamline it. For the time being, let me walk you through this round of QA on the Privacy Day content.


It all starts out with a github issue

... and a kickoff meeting

The same team who worked on the prototyping phase of the Privacy Day campaign are responsible for the quality assurance. We met to kick off and map out our plan for going live. This project required three kinds of reviews that more or less had to happen simultaneously. We broke down the responsibilities like this:

Aki - (lead engineer) - responsible for preparing the docs and leading a functionality review
Paul - (communication/marketing) - responsible for preparing the docs and leading a marketing review
Jess - (lead designer) - responsible for preparing docs and leading design review
Bobby - (product manager) - responsible for recruiting participants to do the reviews and to wrangle  bug triage.
Cassie - (quality) - responsible for final look and thumbs up to say if the feature is acceptable to ship

Each of us who was responsible for docs wrote up instructions for QA reviewers to follow:

We recruited staff and community to user test on a variety of different devices:


This was done in a few different ways. I did both one on one and asynchronous review sessions with my colleagues and the community. It helps to have both kinds of user tests so that you can get honest feedback. Allowing for asynchronous or independent testing is particularly beneficial because it signals to the reviewer that this is an ongoing process and that bugs can be filed at any point during the review period specified. 


The process is completely open to the community. At any given point the github issues are public, the calls for help are public and the iteration is done openly. 


and if there were any problems, they were logged in github as issues:

The most effective issues have a screenshot with the problem and a recommended solution. Additionally, it's important to note if this problem is blocking the feature from shipping or not.

We acknowledge when user testers found something useful:


and identified when a problem was out of scope to fix before shipping:

We quickly iterated on fixing bugs and closing issues as a team:

and gave each other some indication when we thought that the problem was fixed sufficiently:

When we are all happy and have the final thumbs up regarding quality, we then....


Close the github issue and celebrate:

Then we start to make preparations to push the feature live (and snoopy dance a little):

Doug Belshaw: Considerations when creating a Privacy badge pathway

Between June and October 2014 I chaired the Badge Alliance working group for Digital and Web Literacies. This was an obvious fit for me, having previously been on the Open Badges team at Mozilla, and currently being Web Literacy Lead.

Running

We used a Google Group to organise our meetings. Our Badge Alliance liaison was my former colleague Carla Casilli. The group contained 208 people, although only around 10% of that number were active at any given time.

The deliverable we decided upon was a document detailing considerations individuals/organisations should take into account when creating a Privacy badge pathway.

Access the document here

We used Mozilla’s Web Literacy Map as a starting point for this work, mainly because many of us had been part of the conversations that led to the creation of it. Our discussions moved from monthly, to fortnightly, to weekly. They were wide-ranging and included many options. However, the guidance we ended up providing is as simple and as straightforward as possible.

For example, we advocated the creation of five badges:

  1. Identifying rights retained and removed through user agreements
  2. Taking steps to secure non-encrypted connections
  3. Explaining ways in which computer criminals are able to gain access to user information
  4. Managing the digital footprint of an online persona
  5. Identifying and taking steps to keep important elements of identity private

We presented options for how learners would level-up using these badges:

  • Trivial Pursuit approach
  • Majority approach
  • Cluster approach

More details on the badges and approaches can be found in the document. We also included more speculative material around federation. This involved exploring the difference between pathways, systems and ecosystems.

The deliverable from this working group is currently still on Google Docs, but if there’s enough interest we’ll port it to GitHub pages so it looks a bit like the existing Webmaker whitepaper. This work is helping inform an upcoming whitepaper around Learning Pathways which should be ready by the end of Q1 2015.

Karen Smith, co-author of the new whitepaper and part of the Badge Alliance working group, is also heading up a project (that I’m involved with in a small way) for the Office of the Privacy Commissioner of Canada. This is also informed in many ways by this work.


Comments? Questions? Comment directly on the document, tweet me (@dajbelshaw) or email me: doug@mozillafoundation.org

Stormy Peters: Your app is not a lottery ticket

Many app developers are secretly hoping to win the lottery. You know all those horrible free apps full of ads? I bet most of them were hoping to be the next Flappy Bird app. (The Flappy Bird author was making $50K/day from ads for a while.)

The problem is that when you are that focused on making millions, you are not focused on making a good app that people actually want. When you add ads before you add value, you’ll end up with no users no matter how strategically placed your ads are.

So, the secret to making millions with your app?

  • Find a need or problem that people have that you can solve.
  • Solve the problem.
  • Make your users awesome. Luke first sent me a pointer to Kathy Sierra’s idea of making your users awesome.  Instagram let people create awesome pictures. Then their friends asked them how they did it …
  • Then monetize. (You can think about this earlier but don’t focus on it until you are doing well.)

If you are a good app developer or web developer, you’ll probably find it easier to do well financially helping small businesses around you create the apps and web pages they need than you will trying to randomly guess what game people might like. (If you have a good idea for a game, that you are sure you and your friends and then perhaps others would like to play, go for it!)

Alistair Laing: Right tool for the Job

I’m still as keen as ever and enjoying the experience of developing a browser extension. Last week was the first time I hung out in Google Hangouts with Jan and Florent. On first impressions Google Hangouts is pretty sweet. It was smooth and clear (I’m not sure how much of that was down to broadband speeds and connection quality). I learnt so much in that first one-hour session and enjoyed chatting to them face-to-face (in digital terms).

TOO COOL to Minify & TOO SASS’y for tools

One of the things I learnt was how to approach JS/CSS now. My front-end developer head tells me to always minify and concatenate files to reduce HTTP requests and, from a maintenance side, to use a CSS pre-processor for variables etc. When it comes to developing browser extensions, though, you do not have the same issues, for the following reasons:

  1. No HTTP requests are made, because the files are actually packaged with the extension and therefore installed on the client machine anyway. There’s also NO network latency because of this.
  2. File sizes aren’t that important for browser extensions (for Firefox at least). Extensions are packaged up in such an effective way that it’s basically zipping all the contents together, “reducing” the file sizes anyway.
  3. Whilst attempting to fix an issue I came across Mozilla’s implementation of CSS variables, which sort of solves the issue around CSS variables and modularising the code.

Later today I’m scheduled to hang out with Jan again, and I’m thinking about writing another post about XUL.


Mozilla Release Management Team: Firefox 36 beta3 to beta4

In this beta release, for both Desktop & Mobile, we fixed some issues in JavaScript, made some stability fixes, etc. We also increased the memory size of some components to decrease the number of crashes (for example, bugs 869208 & 1124892).

  • 40 changesets
  • 121 files changed
  • 1528 insertions
  • 1107 deletions

Extension    Occurrences
cpp          28
h            21
c            16
(none)       11
java         9
html         7
py           4
js           4
mn           3
ini          3
mk           2
cc           2
xml          1
xhtml        1
svg          1
sh           1
in           1
idl          1
dep          1
css          1
build        1

Module       Occurrences
security     46
js           15
dom          15
mobile       10
browser      10
editor       6
gfx          5
testing      4
toolkit      3
ipc          2
xpcom        1
services     1
layout       1
image        1

List of changesets:

Cameron McCormack: Bug 1092363 - Disable Bug 931668 optimizations for the time being. r=dbaron a=abillings - 126d92ac00e9
Tim Taubert: Bug 1085369 - Move key wrapping/unwrapping tests to their own test file. r=rbarnes, a=test-only - afab84ec4e34
Tim Taubert: Bug 1085369 - Move other long-running tests to separate test files. r=keeler, a=test-only - d0660bbc79a1
Tim Taubert: Bug 1093655 - Fix intermittent browser_crashedTabs.js failures. a=test-only - 957b4a673416
Benjamin Smedberg: Bug 869208 - Increase the buffer size we're using to deliver network streams to OOPP plugins. r=aklotz, a=sledru - cb0fd5d9a263
Nicholas Nethercote: Bug 1122322 (follow-up) - Fix busted paths in worker memory reporter. r=bent, a=sledru - a99eabe5e8ea
Bobby Holley: Bug 1123983 - Don't reset request status in MediaDecoderStateMachine::FlushDecoding. r=cpearce, a=sledru - e17127e00300
Jean-Yves Avenard: Bug 1124172 - Abort read if there's nothing to read. r=bholley, a=sledru - cb103a939041
Jean-Yves Avenard: Bug 1123198 - Run reset parser state algorithm when aborting. r=cajbir, a=sledru - 17830430e6be
Martyn Haigh: Bug 1122074 - Normal Tabs tray has an empty state. r=mcomella, a=sledru - c1e9f11144a5
Michael Comella: Bug 1096958 - Move TilesRecorder instance into TopSitesPanel. r=bnicholson, a=sledru - d6baa06d52b4
Michael Comella: Bug 1110555 - Use real device dimensions when calculating LWT bitmap sizes. r=mhaigh, a=sledru - 2745f66dac6f
Michael Comella: Bug 1107386 - Set internal container height as height of MenuPopup. r=mhaigh, a=sledru - e4e2855e992c
Ehsan Akhgari: Bug 1120233 - Ensure that the delete command will stay enabled for password fields. r=roc, ba=sledru - 34330baf2af6
Philipp Kewisch: Bug 1084066 - plugins and extensions moved to wrong directory by mozharness. r=ted,a=sledru - 64fb35ee1af6
Bob Owen: Bug 1123245 Part 1: Enable an open sandbox on Windows NPAPI processes. r=josh, r=tabraldes, a=sledru - 2ab5add95717
Bob Owen: Bug 1123245 Part 2: Use the USER_NON_ADMIN access token level for Windows NPAPI processes. r=tabraldes, a=sledru - f7b5148c84a1
Bob Owen: Bug 1123245 Part 3: Add prefs for the Windows NPAPI process sandbox. r=bsmedberg, a=sledru - 9bfc57be3f2c
Makoto Kato: Bug 1121829 - Support redirection of kernel32.dll for hooking function. r=dmajor, a=sylvestre - d340f3d3439d
Ting-Yu Chou: Bug 989048 - Clean up emulator temporary files and do not overwrite userdata image. r=ahal, a=test-only - 89ea80802586
Richard Newman: Bug 951480 - Disable test_tokenserverclient on Android. a=test-only - 775b46e5b648
Jean-Yves Avenard: Bug 1116007 - Disable inconsistent test. a=test-only - 5d7d74f94d6a
Kai Engert: Bug 1107731 - Upgrade Mozilla 36 to use NSS 3.17.4. a=sledru - f4e1d64f9ab9
Gijs Kruitbosch: Bug 1098371 - Create localized version of sslv3 error page. r=mconley, a=sledru - e6cefc687439
Masatoshi Kimura: Bug 1113780 - Use SSL_ERROR_UNSUPPORTED_VERSION for SSLv3 error page. r=gijs, a=sylvestre (see Bug 1098371) - ea3b10634381
Jon Coppeard: Bug 1108007 - Don't allow GC to observe uninitialized elements in cloned array. r=nbp, a=sledru - a160dd7b5dda
Byron Campen [:bwc]: Bug 1123882 - Fix case where offset != 0. r=derf, a=abillings - 228ee06444b5
Mats Palmgren: Bug 1099110 - Add a runtime check before the downcast in BreakSink::SetCapitalization. r=jfkthame, a=sledru - 12972395700a
Mats Palmgren: Bug 1110557. r=mak, r=gavin, a=abillings - 3f71dcaa9396
Glenn Randers-Pehrson: Bug 1117406 - Fix handling of out-of-range PNG tRNS values. r=jmuizelaar, a=abillings - a532a2852b2f
Tom Schuster: Bug 1111248. r=Waldo, a=sledru - 7f44816c0449
Tom Schuster: Bug 1111243 - Implement ES6 proxy behavior for IsArray. r=efaust, a=sledru - bf8644a5c52a
Ben Turner: Bug 1122750 - Remove unnecessary destroy calls. r=khuey, a=sledru - 508190797a80
Mark Capella: Bug 851861 - Intermittent testFlingCorrectness, etc al. dragSync() consumers. r=mfinkle, a=sledru - 3aca4622bfd5
Jan de Mooij: Bug 1115776 - Fix LApplyArgsGeneric to always emit the has-script check. r=shu, a=sledru - 9ac8ce8d36ef
Nicolas B. Pierron: Bug 1105187 - Uplift the harness changes to fix jit-test failures. a=test-only - b17339648b55
Nicolas Silva: Bug 1119019 - Avoid destroying a SharedSurface before its TextureClient/Host pair. r=sotaro, a=abillings - 6601b8da1750
Markus Stange: Bug 1117304 - Also do the checks at the start of CopyRect in release builds. r=Bas, a=sledru - 4417d345698a
Markus Stange: Bug 1117304 - Make sure the tile filter doesn't call CopyRect on surfaces with different formats. r=Bas, a=sledru - bc7489448a98
David Major: Bug 1124892 - Adjust Breakpad reservation for xul.dll inflation. r=bsmedberg, a=sledru - 59aa16cfd49f

Ian Bicking: A Product Journal: To MVP Or Not To MVP

I’m going to try to journal the process of a new product that I’m developing in Mozilla Cloud Services. My previous post was The Tech Demo, and the first in the series is Conception.

The Minimal Viable Product

The Minimal Viable Product is a popular product development approach at Mozilla, and judging from Hacker News it is popular everywhere (but that is a wildly inaccurate way to judge common practice).

The idea is that you build the smallest thing that could be useful, and you ship it. The idea isn’t to make a great product, but to make something so you can learn in the field. A couple definitions:

The Minimum Viable Product (MVP) is a key lean startup concept popularized by Eric Ries. The basic idea is to maximize validated learning for the least amount of effort. After all, why waste effort building out a product without first testing if it’s worth it.

– from How I built my Minimum Viable Product (emphasis in original)

I like this phrase “validated learning.” Another definition:

A core component of Lean Startup methodology is the build-measure-learn feedback loop. The first step is figuring out the problem that needs to be solved and then developing a minimum viable product (MVP) to begin the process of learning as quickly as possible. Once the MVP is established, a startup can work on tuning the engine. This will involve measurement and learning and must include actionable metrics that can demonstrate cause and effect question.

– Lean Startup Methodology (emphasis added)

I don’t like this model at all: “once the MVP is established, a startup can work on tuning the engine.” You tune something that works the way you want it to, but isn’t powerful or efficient or fast enough. You’ve established almost nothing when you’ve created an MVP, no aspect of the product is validated, it would be premature to tune. But I see this antipattern happen frequently: get an MVP out quickly, often shutting down critically engaged deliberation in order to Just Get It Shipped, then use that product as the model for further incremental improvements. Just Get It Shipped is okay, incrementally improving products is okay, but together they are boring and uncreative.

There’s another broad discussion to be had another time about how to enable positive and constructive critical engagement around a project. It’s not easy, but that’s where learning happens, and the purpose of the MVP is to learn, not to produce. In contrast I find myself impressed by the sheer willfulness of the Half-Life development process which apparently involved months of six-hour design meetings, four days a week, producing large and detailed design documents. Maybe I’m impressed because it sounds so exhausting, a feat of endurance. And perhaps it implies that waterfall can work if you invest in it properly.

Plan plan plan

I have a certain respect for this development pattern that Dijkstra describes:

Q: In practice it often appears that pressures of production reward clever programming over good programming: how are we progressing in making the case that good programming is also cost effective?

A: Well, it has been said over and over again that the tremendous cost of programming is caused by the fact that it is done by cheap labor, which makes it very expensive, and secondly that people rush into coding. One of the things people learn in colleges nowadays is to think first; that makes the development more cost effective. I know of at least one software house in France, and there may be more because this story is already a number of years old, where it is a firm rule of the house, that for whatever software they are committed to deliver, coding is not allowed to start before seventy percent of the scheduled time has elapsed. So if after nine months a project team reports to their boss that they want to start coding, he will ask: “Are you sure there is nothing else to do?” If they say yes, they will be told that the product will ship in three months. That company is highly successful.

– from Interview Prof. Dr. Edsger W. Dijkstra, Austin, 04–03–1985

Or, a warning from a page full of these kind of quotes: “Weeks of programming can save you hours of planning.” The planning process Dijkstra describes is intriguing, it says something like: if you spend two weeks making a plan for how you’ll complete a project in two weeks then it is an appropriate investment to spend another week of planning to save half a week of programming. Or, if you spend a month planning for a month of programming, then you haven’t invested enough in planning to justify that programming work – to ensure the quality, to plan the order of approach, to understand the pieces that fit together, to ensure the foundation is correct, ensure the staffing is appropriate, and so on.

I believe “Waterfall Design” gets much of its negative connotation from a lack of good design. A Waterfall process requires the design to be very very good. With Waterfall the design is too important to leave it to the experts, to let the architect arrange technical components, the program manager to arrange schedules, the database architect to design the storage, and so on. It’s anti-collaborative, disengaged. It relies on intuition and common sense, and those are not powerful enough. I’ll quote Dijkstra again:

The usual way in which we plan today for tomorrow is in yesterday’s vocabulary. We do so, because we try to get away with the concepts we are familiar with and that have acquired their meanings in our past experience. Of course, the words and the concepts don’t quite fit because our future differs from our past, but then we stretch them a little bit. Linguists are quite familiar with the phenomenon that the meanings of words evolve over time, but also know that this is a slow and gradual process.

It is the most common way of trying to cope with novelty: by means of metaphors and analogies we try to link the new to the old, the novel to the familiar. Under sufficiently slow and gradual change, it works reasonably well; in the case of a sharp discontinuity, however, the method breaks down: though we may glorify it with the name “common sense”, our past experience is no longer relevant, the analogies become too shallow, and the metaphors become more misleading than illuminating. This is the situation that is characteristic for the “radical” novelty.

Coping with radical novelty requires an orthogonal method. One must consider one’s own past, the experiences collected, and the habits formed in it as an unfortunate accident of history, and one has to approach the radical novelty with a blank mind, consciously refusing to try to link it with what is already familiar, because the familiar is hopelessly inadequate. One has, with initially a kind of split personality, to come to grips with a radical novelty as a dissociated topic in its own right. Coming to grips with a radical novelty amounts to creating and learning a new foreign language that can not be translated into one’s mother tongue. (Any one who has learned quantum mechanics knows what I am talking about.) Needless to say, adjusting to radical novelties is not a very popular activity, for it requires hard work. For the same reason, the radical novelties themselves are unwelcome.

– from EWD 1036, On the cruelty of really teaching computing science

Research

All this praise of planning implies you know what you are trying to make. Unlikely!

Coding can be a form of planning. You can’t research how interactions feel without having an actual interaction to look at. You can’t figure out how feasible some techniques are without trying them. Planning without collaborative creativity is dull, planning without research is just documenting someone’s intuition.

The danger is that when you are planning with code, it feels like execution. You can plan to throw one away to put yourself in the right state of mind, but I think it is better to simply be clear and transparent about why you are writing the code you are writing. Transparent because the danger isn’t just that you confuse your coding with execution, but that anyone else is likely to confuse the two as well.

So code up a storm to learn, code up something usable so people will use it and then you can learn from that too.

My own conclusion…

I’m not making an MVP. I’m not going to make a maximum viable product either – rather, the next step in the project is not to make a viable product. The next stage is research and learning. Code is going to be part of that. Dogfooding will be part of it too, because I believe that’s important for learning. I fear thinking in terms of “MVP” would let us lose sight of the why behind this iteration – it is a dangerous abstraction during a period of product definition.

Also, if you’ve gotten this far, you’ll see I’m not creating minimal viable blog posts. Sorry about that.

Stormy Peters: 7 reasons asynchronous communication is better than synchronous communication in open source

Traditionally, open source software has relied primarily on asynchronous communication. While there are probably quite a few synchronous conversations on irc, most project discussions and decisions will happen on asynchronous channels like mailing lists, bug tracking tools and blogs.

I think there’s another reason for this: synchronous communication is difficult for an open source project, or for any project where people are distributed. Synchronous conversations are:

  • Inconvenient. It’s hard to schedule synchronous meetings across time zones. Just try to pick a good time for Australia, Europe and California.
  • Logistically difficult. It’s hard to schedule a meeting for people that are working on a project at odd hours that might vary every day depending on when they can fit in their hobby or volunteer job.
  • Slower. If you have more than 2-3 people you need to get together every time you make a decision, things will move more slowly. I have a project right now that we are kicking off, and the team wants to do everything in meetings. We had several meetings last week and one this week. Asynchronously we could have had several rounds of discussion by now.
  • Expensive for many people. When I first started at GNOME, it was hard to get some of our board members on a phone call. They couldn’t call international numbers, or couldn’t afford an international call and they didn’t have enough bandwidth for an internet voice call. We ended up using a conference call line from one of our sponsor companies. Now it’s video.
  • Technically difficult. Mozilla does most of our meetings as video meetings. Video is still really hard for many people. Even with my pretty expensive, supposedly high-end internet in a developed country, I often have bandwidth problems when participating in video calls. Now imagine I’m a volunteer from Nigeria. My electricity might not work all the time, much less my high speed internet.
  • Language. Open source software projects work primarily in English and most of the world does not speak English as their first language. Asynchronous communication gives them a chance to compose their messages, look up words and communicate more effectively.
  • Confusing. Discussions and decisions are often made by a subset of the project and unless the team members are very diligent the decisions and rationale are often not communicated out broadly or effectively. You lose the history behind decisions that way too.

There are some major benefits to synchronous conversation:

  • Relationships. You build relationships faster. It’s much easier to get to know the person.
  • Understanding. Questions and answers happen much faster, especially if the question is hard to formulate or understand. You can quickly go back and forth and get clarity on both sides. They are also really good for difficult topics that might be easily misinterpreted or misunderstood over email where you don’t have tone and body language to help convey the message.
  • Quicker. If you only have 2-3 people, it’s faster to talk to them then to type it all out. Once you have more than 2-3, you lose that advantage.

I think as new technologies, both synchronous and asynchronous, become mainstream, open source software projects will have to figure out how to incorporate them. For example, at Mozilla, we’ve been working on how video can be a part of our projects. Unfortunately, video usually just adds more synchronous conversations that are hard to share widely, but we work on taking notes, sending notes to mailing lists and recording meetings to try to get the relationship and communication benefits of video meetings while maintaining good open source software project practices. I personally would like to see us use more asynchronous tools, as I think video and synchronous tools benefit full-time employees at the expense of volunteer involvement.

How does your open source software project use asynchronous and synchronous communication tools? How’s the balance working for you?

Darrin HeneinRapid Prototyping with Gulp, Framer.js and Sketch: Part One

When I save my Sketch file, my Framer.js prototype updates and reloads instantly.


Rationale

The process of design is often thought of as being entirely generative–people who design things study a particular problem, pull out their sketchbooks, markers and laptops, and produce artifacts which slowly but surely progress towards some end result which then becomes “The Design” of “The Thing”. It is seen as an additive process, whereby each step builds upon the previous, sometimes with changes or modifications which solve issues brought to light by the earlier work.

Early in my career, I would sit at my desk and look with disdain at all the crumpled paper that filled my trash bin and cherish that one special solution that made the cut. The bin was filled with all of my “bad ideas”. It was overflowing with “failed” attempts before I finally “got it right”. It took me some time, but I’ve slowly learned that the core of my design work is defined not by that shiny mockup or design spec I deliver, but more truly by the myriad of sketches and ideas that got me there. If your waste bin isn’t full by the end of a project, you may want to ask yourself if you’ve spent enough time exploring the solution space.

I really love how Facebook’s Product Design Director Julie Zhuo put it in her essay “Junior Designers vs. Senior Designers”, where she illustrates (in a very non-scientific, but effective way) the difference in process that experience begets. The key delta to me is the singularity of the Junior Designer’s process, compared to the exploratory, branching, subtractive process of the more seasoned designer. Note all the dead ends and occasions where the senior designer just abandons an idea or concept. They clearly have a full trash bin by the end of this journey. Through the process of evaluation and subtraction, a final result is reached. The breadth of ideas explored and abandoned is what defines the process, rather than the evolution of a single idea. It is important to achieve this breadth of ideation to ensure that the solution you commit to was not just a lucky one, but a solution that was vetted against a variety of alternatives.

The unfortunate part of this realization is that often it is just that – an idealized process which faces little conceptual opposition but (in my experience) is often sacrificed in the name of speed or deadlines. Generating multiple sketches is not a huge cost, and is one of the primary reasons so much exploration should take place at that fidelity. Interactions, behavioural design and animations, however, are much more costly to generate, and so the temptation there is to iterate on an idea until it feels right. While this is not inherently a bad thing, wouldn’t it be nice if we could iterate and explore things like animations with the same efficiency we experience with sketching?

As a designer with the ability to write some code, my first goal with any project is to eliminate any inefficiencies – let me focus on the design and not waste time elsewhere. I’m going to walk through a framework I’ve developed during a recent project, but the principle is universal – eliminate or automate the things you can, and maximize the time you spend actually problem-solving and designing.

Designing an Animation Using Framer.js and Sketch

Get the Boilerplate Project on Github

User experience design has become a much more complex field as hardware and software have evolved to allow increasingly fluid, animated, and dynamic interfaces. When designing native applications (especially on mobile platforms such as Android or iOS) there is both an expectation and great value to leverage animation in our UI. Whether to bring attention to an element, educate the user about the hierarchy of the screens in an app, or just to add a moment of delight, animation can be a powerful tool when used correctly. As designers, we must now look beyond Photoshop and static PNG files to define our products, and leverage tools like Keynote or HTML to articulate how these interfaces should behave.

While I prefer to build tools and workflows with open-source software, it seems that the best design tools available are paid applications. Thankfully, Sketch is a fantastic application and easily worth its price.

My current tool of choice is a library called framer.js, which is an open-source framework for prototyping UI. For visual design I use Sketch. I’m going to show you how I combine these two tools to provide me with a fast, automated, and iterative process for designing animations.

I am also aware that Framer Studio exists, as well as Framer Generator. These are both amazing tools. However, I am looking for something as automated and low-friction as possible; both of these tools require some steps between modifying the design and seeing the results. Let’s look at how I achieved a fully automated solution to this problem.

Automating Everything With Gulp

Here is the goal: let me work in my Sketch and/or CoffeeScript file, and just by saving, update my animated prototype with the new code and images without me having to do anything. Lofty, I know, but let’s see how it’s done.

Gulp is a JavaScript-based build tool, the latest in a series of incredible node-powered command line build tools.

Some familiarity with build tools such as Gulp or Grunt will help here, but is not mandatory. Also, this will explain the mechanics of the tool, but you can still use this framework without understanding every line!

The gulpfile is just a list of tasks, or commands, that we can run in different orders or at different times. Let’s break down my gulpfile.js:


var gulp        = require('gulp');
var coffee      = require('gulp-coffee');
var gutil       = require('gulp-util');
var watch       = require('gulp-watch');
var sketch      = require('gulp-sketch');
var browserSync = require('browser-sync');

This section at the top just requires (imports) the external libraries I’m going to use. These include Gulp itself, CoffeeScript support (which for me is faster than writing JavaScript), a watch utility to run code whenever a file changes, a plugin which lets me parse and export from Sketch files, and BrowserSync to push changes to the browser.


gulp.task('build', ['copy', 'coffee', 'sketch']);
gulp.task('default', ['build', 'watch']);

Next, I set up the tasks I’d like to be able to run. Notice that the build and default tasks are just sets of other tasks. This lets me maintain a separation of concerns and have tasks that do only one thing.


gulp.task('watch', function(){
  gulp.watch('./src/*.coffee', ['coffee']);
  gulp.watch('./src/*.sketch', ['sketch']);
  browserSync({
    server: {
      baseDir: 'build'
    },
    browser: 'google chrome',
    injectChanges: false,
    files: ['build/**/*.*'],
    notify: false
  });
});

This is the watch task. I tell Gulp to watch my src folder for CoffeeScript files and Sketch files; these are the only source files that define my prototype and will be the ones I change often. When a CoffeeScript or Sketch file changes, the coffee or sketch tasks are run, respectively.

Next, I set up browserSync to push any changed files within the build directory to my browser, which in this case is Chrome. This keeps my prototype in the browser up-to-date without having to hit refresh. Notice I’m also specifying a server: key, which essentially spins up a web server with the files in my build directory.


gulp.task('coffee', function(){
  gulp.src('src/*.coffee')
    .pipe(coffee({bare: true}).on('error', gutil.log))
    .pipe(gulp.dest('build/'))
});

The second major task is coffee. This, as you may have guessed, simply transcompiles any *.coffee files in my src folder to JavaScript, and places the resulting JS file in my build folder. Because we are containing our prototype in one app.coffee file, there is no need for concatenation or minification.


gulp.task('sketch', function(){
  gulp.src('src/*.sketch')
    .pipe(sketch({
      export: 'slices',
      format: 'png',
      saveForWeb: true,
      scales: 1.0,
      trimmed: false
    }))
    .pipe(gulp.dest('build/images'))
});

The sketch task is also aptly named, as it is responsible for exporting the slices I have defined in my Sketch file to pngs, which can then be used in the prototype. In Sketch, you can mark a layer or group as “exportable”, and this task only looks for those assets.


gulp.task('copy', function(){
  // Copy the HTML entry point, JS libraries and static images into build/.
  gulp.src('src/index.html')
    .pipe(gulp.dest('build'));
  gulp.src('src/lib/**/*.*')
    .pipe(gulp.dest('build/lib'));
  gulp.src('src/images/**/*.{png,jpg,svg}') // no spaces inside the braces, or the glob won't match
    .pipe(gulp.dest('build/images'));
});

The last task is simply housekeeping. It is only run once, when you first start the Gulp process on the command line. It copies any HTML files, JS libraries, or other images I want available to my prototype. This lets me keep everything in my src folder, which is a best practice. As a general rule of thumb for build systems, avoid placing anything in your output directory (in this case, build), as you jeopardize your ability to have repeatable builds.

Recall my default task was defined above, as:


gulp.task('default', ['build', 'watch']);

This means that by running $ gulp in this directory from the command line, my default task is kicked off. It won’t exit without ctrl-C, as watch will run indefinitely. This lets me run this command only once, and get to work.


$ gulp

So where are we now? If everything worked, you should see your prototype available at http://localhost:3000. Saving either app.coffee or app.sketch should trigger the watch we set up, and compile the appropriate assets to our build directory. This change of files in the build directory should trigger BrowserSync, which will then update our prototype in the browser. Voila! We can now work in either of 2 files (app.coffee or app.sketch), and just by saving them have our shareable, web-based prototype updated in place. And the best part is, I only had to set this up once! I can now use this framework with my next project and immediately begin designing, with a hyper-fast iteration loop to facilitate that work.

The next step is to actually design the animation using Sketch and framer.js, which deserves its own post altogether and will be covered in Part Two of this series.

Follow me on twitter @darrinhenein to be notified when part two is available.

Adam OkoyeTests, Feedback Results, and a New Thank You

As I’ve said in previous posts, my internship primarily revolves around creating a new “thank you” page that will be shown to people who leave negative (or “sad”) feedback on input.mozilla.org.

The current thank you page which the user gets directed to after giving any type of feedback, good or bad, looks like this:

current thank you page

As you can see it’s pretty basic. It does include a link to Support Mozilla (SUMO), which I think is very useful. It also has links to a page that shows you how to download different builds of Firefox (beta, nightly, etc.), a page with a lot of useful information on how to get involved with contributing to Mozilla, and Mozilla’s social networking profiles. While the links are interesting in their own right, they don’t do a lot in terms of quickly guiding someone to a solution if they’re having a problem with Firefox. We want to change that in order to hopefully make the page more useful to people who are having trouble using Firefox. The new thank you page will end up being a different Django template that people will be redirected to.

Part of making the new page more useful will be including links to SUMO articles that are related to the feedback that people have given. Last week I wrote the code that redirects a specific segment of people to the new thank you page as well as a test for that code. The new thank you page will be rolled out via a Waffle flag, which I made some weeks ago and which made writing the test a tad more complex. Right now there are a few finishing touches that need to be added to the test in order to close out the bug, but I’m hoping to finish those by the end of Tuesday, the 27th.
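To make the mechanism concrete, here is a rough sketch of what a waffle-gated redirect in a Django view could look like. This is not the actual Input code; the flag name, URL names, and feedback object are hypothetical, though waffle.flag_is_active and override_flag are real django-waffle APIs.

# Hypothetical sketch of a waffle-gated redirect in a Django view.
# The flag name, URL names, and feedback object are illustrative only,
# not the actual Input implementation.
from django.shortcuts import redirect
import waffle

def feedback_submitted(request, feedback):
    # Only "sad" feedback from users behind the flag gets the new page.
    if not feedback.happy and waffle.flag_is_active(request, 'new-thank-you'):
        return redirect('new_thank_you')
    return redirect('thank_you')

# In tests, django-waffle can force the flag on:
#     from waffle.testutils import override_flag
#     with override_flag('new-thank-you', active=True):
#         ...  # request the feedback view and assert on the redirect target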

We’ll be using one of the three SUMO API endpoints to take the text from the feedback, search the knowledge base and questions, and return results. To figure out which endpoint to use, I used a script that Will Kahn-Greene wrote to look at feedback taken from Input and the results returned via SUMO’s endpoints, and then rank which endpoint’s results were the best. I did that for 120 pieces of feedback.

Tomorrow I’m going to start sketching and mocking up the new thank you page, which I’m really looking forward to. I’ll be using a whiteboard for the sketching, which will be a first for me; I’m hoping that it’ll be easier for me than pencils/pen and paper. I’ll also be able to quickly and easily upload all of the pictures I take of the whiteboard to my computer, which I think will be useful.

Mark SurmanMozilla Participation Plan (draft)

Mozilla needs a more creative and radical approach to participation in order to succeed. That is clear. And, I think, pretty widely agreed upon across Mozilla at this stage. What’s less clear: what practical steps do we take to supercharge participation at Mozilla? And what does this more creative and radical approach to participation look like in the everyday work and lives of people involved in Mozilla?

Mozilla and participation

This post outlines what we’ve done to begin answering these questions and, importantly, it’s a call to action for your involvement. So read on.

Over the past two months, we’ve written a first draft Mozilla Participation Plan. This plan is focused on increasing the impact of participation efforts already underway across Mozilla and on building new methods for involving people in Mozilla’s mission. It also calls for the creation of new infrastructure and ways of working that will help Mozilla scale its participation efforts. Importantly, this plan is meant to amplify, accelerate and complement the many great community-driven initiatives that already exist at Mozilla (e.g. SuMo, MDN, Webmaker, community marketing, etc.) — it’s not a replacement for any of these efforts.

At the core of the plan is the assumption that we need to build a virtuous circle between 1) participation that helps our products and programs succeed and 2) people getting value from participating in Mozilla. Something like this:

Virtuous circle of participation

This is a key point for me: we have to simultaneously pay attention to the value participation brings to our core work and to the value that participating provides to our community. Over the last couple of years, many of our efforts have looked at just one side or the other of this circle. We can only succeed if we’re constantly looking in both directions.

With this in mind, the first steps we will take in 2015 include: 1) investing in the ReMo platform and the success of our regional communities and 2) better connecting our volunteer communities to the goals and needs of product teams. At the same time, we will: 3) start a Task Force, with broad involvement from the community, to identify and test new approaches to participation for Mozilla.

Participation Plan

The belief is that these activities will inject the energy needed to strengthen the virtuous circle reasonably quickly. We’ll know we’re succeeding if a) participation activities are helping teams across Mozilla measurably advance product and program goals and b) volunteers are getting more value out of their participation in Mozilla. These are the key metrics we’re looking at for 2015.

Over the longer run, there are bigger ambitions: an approach to participation that is at once massive and diverse, local and global. There will be many more people working effectively and creatively on Mozilla activities than we can imagine today, without the need for centralized control. This will result in a different and better, more diverse and resilient Mozilla — an organization that can consistently have massive positive impact on the web and on people’s lives over the long haul.

Making this happen means involvement and creativity from people across Mozilla and our community. However, a core team is needed to drive this work. In order to get things rolling, we are creating a small set of dedicated Participation Teams:

  1. A newly formed Community Development Team that will focus on strengthening ReMo and tying regional communities into the work of product and program groups.
  2. A participation ‘task force’ that will drive a broad conversation and set of experiments on what new approaches could look like.
  3. And, eventually, a Participation Systems Team will build out new infrastructure and business processes that support these new approaches across the organization.

For the time being, these teams will report to Mitchell and me. We will likely create an executive level position later in the year to lead these teams.

As you’ll see in the plan itself, we’re taking very practical and action oriented steps, while also focusing on and experimenting with longer-term questions. The Community Development Team is working on initiatives that are concrete and can have impact soon. But overall we’re just at the beginning of figuring out ‘radical participation’.

This means there is still a great deal of scope for you to get involved — the plans are still evolving and your insights will improve our process and the plan. We’ll come out with information soon on more structured ways to engage with what we’re calling the ‘task force’. In the meantime, we strongly encourage your ideas right away on ways the participation teams could be working with products and programs. Just comment here on this post or reach out to Mitchell or me.

PS. I promised a follow up on my What is radical participation? post, drawing on comments people made. This is not that. Follow up post on that topic still coming.



Mozilla Reps CommunityRep of the month: January 2015

Irvin Chen was an inspiring contributor last month, and we want to recognize his great work as a Rep.

Irvin has been organizing the weekly MozTW Lab as well as other events to spread Mozilla in the local community space in Taiwan, such as a Spark meetup, a d3.js meetup, and a Wikimedia mozcafe.

He also helped run an l10n sprint for video subtitles, Mozilla links, SUMO, and Webmaker on Transifex.

Congratulations, Irvin, on your awesome work!

Don’t forget to congratulate him on Discourse!

Ben KeroAttempting to source large E-Ink screens for a laptop-like device

One idea that’s been bouncing around in my head for the last few years has been a laptop with an E-Ink display. I would have thought this would be a niche that had been carved out already, but it doesn’t seem that any companies are interested in exploring it.

I use my laptop in some non-traditional environments, such as outdoors in direct sunlight. Almost all laptops are abysmal in a scenario like this. E-Ink screens are a natural response to this requirement. Unlike traditional TFT-LCD screens, E-Ink panels are meant to be viewed with an abundance of natural light. As a human, I too enjoy natural light.

Besides my fantasies of hacking on the beach, these would be very useful to combat the raster burn that seems to be so common among regular computer users. Since TFT-LCDs act as an artificial sunlight, they can have very negative side-effects on the eyes, and indirectly on the brain. Since E-Ink screens work without a backlight they are not susceptible to these problems. This has the potential to help me reclaim some of the time that I spend without a device before bedtime for health reasons.

The limitations of E-Ink panels are well known to anybody who has used one. The refresh rate is not nearly as good, the color saturation varies from abysmal to non-existent, and the available sizes are much more limited (smaller) than LCD panels. Despite all these reasons, the panels do have advantages. They do not give the user raster burn like backlit panels do. They are cheap, standardized, and easy to replace. They are also usable in direct sunlight. Until recently they offered competitive DPI compared to laptop panels as well.

As a computer professional many of these downsides of LCD panels concern me. I spend a large amount of my work day staring at the displays. I fear this will have a lasting effect on me and many others who do the same.

The E-Ink manufacturer offerings are surprisingly sparse, with no devices that I can find targeted towards consumers or hobbyists. Traditional LCDs are available over a USB interface, able to be used as external displays on any embedded or workstation system. Interfaces for E-Ink displays are decidedly less advanced. The panels that Amazon sources use an undocumented DTO protocol/connector. The panels that everybody else seems to use also have a specific protocol/connector, but some controllers are available.

The one panel I’ve been able to source to try to integrate into a laptop-like object is PervasiveDisplay’s 9.7″ panel with SPI controller. This would allow a computer to speak SPI to the controller board, which would then translate the calls into operations to manage drawing to the panel. Although this is useful, availability is limited to a few component wholesale sites and Digikey. Likewise it’s not exactly cheap. Although the SPI controller board is only $28, the set of controller and 9.7″ panel is $310. Similar replacement Kindle DX panels cost around $85 elsewhere on the internet.

It would be cheaper to buy an entire Kindle DX, scrap the computer and salvage the panel than to buy the PervasiveDisplays evaluation kit on Digikey. To be fair this is comparing a used consumer device to a niche evaluation kit, so of course the former device is going to be cheaper.

To their credit, they’re also trying to be active in the Open Hardware community. They’ve launched RePaper.org, a site advocating moving ePaper technology out of the hands of a few companies and into the hands of open hardware enthusiasts and low-run product manufacturers.

From their site:

We recognize ePaper is a new technology and we’re asking your help in making it better known. Up till now, all industry players have kept the core technologies closed. We want to change this. If the history of the Internet has proven anything, it is that open technologies lead to unbounded innovation and unprecedented value added to the entire economy.

There are some panels listed up on SparkFun and Adafruit, although those are limited to 1.44 inch to 2.0 inch displays, which are useless for my use case. Likewise, these are geared towards Arduino compatibility, while I need something that is performant through a (relatively) fast and high bandwidth interface like exists on my laptop mainboard.

Bunnie/Xobs of the Kosagi Novena open laptop project clued me in to the fact that the iMX6 SoC present in the aforementioned device contains an EPD (Electronic Paper Display) controller. Although the pins on the chip likely aren’t broken out to the board, it gives me hope. My hope is that in the future devices such as the Raspberry Pi, CubieBoard, or other single-board computers will break out the controller to a header on the main board.

I think that by making this literal stockpile of panels available to open hardware enthusiasts, we can empower them to create anything from innovations in the eBook reader market to an entirely new class of device.

Adam LoftingThe week ahead: 26 Jan 2015


I should have started the week by writing this, but I’ll do it quickly now anyway.

My current todo list.
List status: Pretty good. Mostly organized near the top. Less so further down. Fine for now.

Objectives to call out for this week.

  • Bugzilla and Github clean-out / triage
  • Move my home office out to the shed (depending on a few things)

+ some things that carry over from last week

  • Write a daily working process
  • Work out a plan for aligning metrics work with dev team heartbeats
  • Don’t let the immediate todo list get in the way of planning long term processes
  • Invest time in working open
  • Wrestle with multiple todo list systems until they (or I) work together nicely

Mozilla FundraisingShould we put payment provider options directly on the snippet?

While our End of Year (EOY) fundraising campaign is finished, we still have a few updates to share with you. This post documents one of the A/B tests we ran during the campaign. Should we put payment provider options directly …

Nick DesaulniersWriting my first technical book chapter

It’s a feeling of immense satisfaction when we complete a major achievement. Being able to say “it’s done” is such a great stress relief. Recently, I completed work on my first publication, a chapter about Emscripten for the upcoming book WebGL Insights to be published by CRC Press in time for SIGGRAPH 2015.

One of the life goals I’ve had for a while is writing a book. It seems a romantic idea to have your ideas transcribed to a medium that will outlast your bones. It’s enamoring to hold books from long dead authors, and see that their ideas are still valid and powerful. Being able to write a book, in my eyes, provides some form of life after death. Though, one could imagine descendants reading blog posts from long dead relatives via utilities like the Internet Archive’s WayBack Machine.

Writing about highly technical content places an upper limit on the usefulness of the content, and it shows its age quickly. A book I recently ordered was Scott Meyers’ Effective Modern C++. This title strikes me, because what exactly do we consider modern or contemporary? Those adjectives only make sense in a time-limited context. When C++ undergoes another revolution, Scott’s book may become irrelevant, at which point the adjective modern becomes incorrect. Not that I think Scott’s book or my own is time-limited in usefulness; more that technical books’ duration of usefulness is significantly shorter than that of philosophical works like 1984 or Brave New World. It’s almost like having a record in a sport: a feather in one’s cap, until the next best thing comes along and you’re forgotten to time.

Somewhat short of my goal of writing an entire book, I only wrote a single chapter for a book. It’s interesting to see that a lot of graphics programming books seem to follow the format of one author per chapter or at least multiple authors. Such book series as GPU Gems, Shader X, and GPU Pro follow this pattern, which is interesting. After seeing how much work goes into one chapter, I think I’m content with not writing an entire book, though I may revisit that decision later in life.

How did this all get started? I had followed Graham Sellers on Twitter and saw a tweet from him about a call for authors for WebGL Insights. The linked page explicitly listed interest in proposals about Emscripten and asm.js.

At the time, I was headlong into a project helping Disney port Where’s My Water from C++ to JavaScript using Emscripten. I was intimately familiar with Emscripten, having been trained by one of its most prolific contributors, Jukka Jylänki. Also, Emscripten’s creator, Alon Zakai, sat on the other side of the office from me, so I was constantly pestering him about how to do different things with Emscripten. The #emscripten irc channel on irc.mozilla.org is very active, but there’s no substitute for being able to have a second pair of eyes look over your shoulder when something is going wrong.

Knowing Emscripten’s strengths and limitations, seeing interest in the subject I knew a bit about (but wouldn’t consider myself an expert in), and having the goal of writing something to be published in book form, this was my opportunity to seize.

I wrote up a quick proposal with a few figures about why Emscripten was important and how it worked, and sent it off with fingers crossed. Initially, I was overjoyed to learn when my proposal was accepted, but then there was a slow realization that I had a lot of work to do. The editor, Patrick Cozzi, set up a GitHub repo for our additional code and figures, a mailing list, and sent us a chapter template document detailing the process. We had 6 weeks to write the rough draft, then 6 weeks to work with reviewers to get the chapter done. The chapter was written as a Google Doc, so that we could have explicit control over who we shared the document with, and what kinds of editing power they had over the document. I think this approach worked well.

I had most of the content written by week 2. This was surprising to me, because I’m a heavy procrastinator. The only issue was that the number of pages I wrote was double the allowed amount; way over page count. I was worried about the amount of content, but told myself to try not to be attached to the content, just as you shouldn’t stay attached to your code.

I took the additional 4 weeks I had left to finish the rough draft to invite some of my friends and coworkers to provide feedback. It’s useful to have a short list of people who have ever offered to help in this regard or owe you one. You’ll also want a diverse team of reviewers that are either close to the subject matter, or approaching it as new information. This allows you to stay technically correct, while not presuming your readers know everything that you do.

The strategy worked out well; some of the content I had initially written about how JavaScript VMs and JITs speculate types was straight up wrong. While it played nicely into the narrative I was weaving, someone more well versed in JavaScript virtual machines would be able to call BS on my work. The reviewers who weren’t as close to subject matter were able to point out when logical progressions did not follow.

Fear of being publicly corrected prevents a lot of people from blogging or contributing to open source. It’s important to not stay attached to your work, especially when you need to make cuts. When push came to shove, I did have difficulty removing sections.

Let’s say you have three sequential sections: A, B, & C. If section A and section B both set up section C, and someone tells you section B has to go, it can be difficult to cut section B because as the author you may think it’s really important to include B for the lead into C. My recommendation is to sum up the most important idea from section B and add it to the end of section A.

For the last six weeks, the editor, some invited third parties, and other authors reviewed my chapter. It was great that others even followed along and pointed out when I was making assumptions based on specific compiler or browser. Eric Haines even reviewed my chapter! That was definitely a highlight for me.

We used a Google Sheet to keep track of the state of reviews. Reviewers were able to comment on sections of the chapter. What was nice was that you were able to use the comment as a thread and respond directly to a criticism. What didn’t work so well was that once you edited that line, the comment, and thus the thread, was lost.

Once everything was done, we zipped up the assets to be used as figures, submitted bios, and wrote a tips and tricks section. Now, it’s just a long waiting game until the book is published.

As far as dealing with the publisher, I didn’t have much interaction. Since the book was assembled by a dedicated editor, Patrick did most of the leg work. I only asked that what royalties I would receive be donated to Mozilla, which the publisher said would be too small (est $250) to be worth the paperwork. It would be against my advice if you were thinking of writing a technical book for the sole reason of monetary benefit. I’m excited to be receiving a hard cover copy of the book when it’s published. I’ll also have to see if I can find my way to SIGGRAPH this year; I’d love to meet my fellow authors in person and potential readers. Just seeing the list of authors was really a who’s-who of folks doing cool WebGL stuff.

If you’re interested in learning more about working with Emscripten, asm.js, and WebGL, I suggest you pick up a copy of WebGL Insights in August when it’s published. A big thank you to my reviewers: Eric Haines, Havi Hoffman, Jukka Jylänki, Chris Mills, Traian Stanev, Luke Wagner, and Alon Zakai.

So that was a little bit about my first experience with authorship. I’d be happy to follow up with any further questions you might have for me. Let me know in the comments below, on Twitter, HN, or wherever and I’ll probably find it!

Gregory SzorcAutomatic Python Static Analysis on MozReview

A bunch of us were in Toronto last week hacking on MozReview.

One of the cool things we did was deploy a bot for performing Python static analysis. If you submit some .py files to MozReview, the bot should leave a review. If it finds violations (it uses flake8 internally), it will open an issue for each violation. It also leaves a comment that should hopefully give enough detail on how to fix the problem.

While we haven't done much in the way of performance optimizations, the bot typically submits results less than 10 seconds after the review is posted! So, a human should never be reviewing Python that the bot hasn't seen. This means you can stop thinking about style nits and start thinking about what the code does.

This bot should be considered an alpha feature. The code for the bot isn't even checked in yet. We're running the bot against production to get a feel for how it behaves. If things don't go well, we'll turn it off until the problems are fixed.

We'd like to eventually deploy C++, JavaScript, etc bots. Python won out because it was the easiest to integrate (it has sane and efficient tooling that is compatible with Mozilla's code bases - most existing JavaScript tools won't work with Gecko-flavored JavaScript, sadly).

I'd also like to eventually make it easier to locally run the same static analysis we run in MozReview. Addressing problems locally before pushing is a no-brainer since it avoids needless context switching from other people and is thus better for productivity. This will come in time.
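As a rough approximation of that idea (this is not the bot's code), a small script can run flake8 over just the Python files you've touched before you push, assuming hg and flake8 are on your PATH:

# Rough local approximation of the MozReview bot: run flake8 over the
# Python files modified or added in the working directory before pushing.
import subprocess
import sys

def changed_python_files():
    # 'hg status -m -a -n' prints modified/added file names, one per line.
    out = subprocess.check_output(['hg', 'status', '-m', '-a', '-n'])
    return [f for f in out.decode().splitlines() if f.endswith('.py')]

def main():
    files = changed_python_files()
    if not files:
        return 0
    # flake8 exits non-zero when it finds violations.
    return subprocess.call(['flake8'] + files)

if __name__ == '__main__':
    sys.exit(main())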

Report issues in #mozreview or in the Developer Services :: MozReview Bugzilla component.

Karl DubostWorking With Transparency Habits. Something To Learn.

I posted the following text as a comment three days ago on Mark Surman's blog post on transparency habits, but it is still in the moderation queue. So instead of taking the chance of losing it, I'm reposting that comment here. This might need to be developed in a follow-up post.

Mark says:

I encourage everyone at Mozilla to ask themselves: how can we all build up our transparency habits in 2015? If you already have good habits, how can you help others? If, like me, you’re a bit rusty, what small things can you do to make your work more open?

The mistake we often make with transparency is that we think it is obvious to most people. But working in a transparent way requires a lot of education and mentoring. It’s one thing we should try to improve at Mozilla when onboarding new employees: teaching what it means to be transparent. I’m not even sure everyone has the same notion of what transparency means in the first place.

For example, too many times, I receive emails in private. That’s unfortunate because it creates information silos and it becomes a lot harder to open up a conversation which started in private. Because I was kind of tired of this, I created a set of slides and explanation for learning how to work with emails. Available in French and English.

Some people are afraid of working in the open for many reasons. They may come from a company where secrecy was very strong, or they had a bad experience by being too open. It takes then time to re-learn the benefits of working in the open.

So because you asked an open question :) Some items.

  • Each time you send an email, it probably belongs to a project. Send the email to the person (To: field) and always copy the relevant mailing list (Cc: field). You’ll get an archive, a URL pointer for the future, etc.
  • Each time you are explaining something (such as a process, a how-to, etc.) to someone, make it a blog post, then send the URL to that person. It will benefit more people in the long term.
  • Each time you have a meeting, choose one scribe to take notes, and publish the minutes of the meeting. There are many techniques associated with this. See for example the record of the Web Compat team meeting on January 13, 2015 and the index of all meetings (I could explain how we manage that in a blog post).
  • Each time you have a F2F meeting with someone or a group, take notes and publish these notes online to a stable URI. It will help other people to participate.

Let's learn together how to work in a transparent way or in the open.

Otsukare.

Gregory SzorcEnd to End Testing with Docker

I've written an extensive testing framework for Mozilla's version control tools. Despite it being a little rough around the edges, I'm a bit proud of it.

When you run tests for MozReview, Mozilla's heavily modified Review Board code review tool, the following things happen:

  • A MySQL server is started in a Docker container.
  • A Bugzilla server (running the same code as bugzilla.mozilla.org) is started on an Apache httpd server with mod_perl inside a Docker container.
  • A RabbitMQ server mimicking pulse.mozilla.org is started in a Docker container.
  • A Review Board Django development server is started.
  • A Mercurial HTTP server is started.

In the future, we'll likely also need to add support for various other services to support MozReview and other components of version control tools:

  • The Autoland HTTP service will be started in a Docker container, along with any other requirements it may have.
  • An IRC server will be started in a Docker container.
  • Zookeeper and Kafka will be started on multiple Docker containers.

The entire setup is pretty cool. You have actual services running on your local machine. Mike Conley and Steven MacLeod even did some pair coding of MozReview while on a plane last week. I think it's pretty cool this is even possible.

There is very little mocking in the tests. If we need an external service, we try to spin up an instance inside a local container. This way, we can't have unexpected test successes or failures due to bugs in mocking. We have very high confidence that if something works against local containers, it will work in production.

I currently have each test file owning its own set of Docker containers and processes. This way, we get full test isolation and can run tests concurrently without race conditions. This drastically reduces overall test execution time and makes individual tests easier to reason about.

As cool as the test setup is, there's a bunch I wish were better.

Spinning up and shutting down all those containers and processes takes a lot of time. We're currently sitting around 8s startup time and 2s shutdown time. 10s overhead per test is unacceptable. When I make a one-line change, I want the tests to be instantaneous. 10s is too long for me to sit idly by. Unfortunately, I've already gone to great pains to make test overhead as short as possible. Fig wasn't good enough for me for various reasons. I've reimplemented my own orchestration directly on top of the docker-py package to achieve some significant performance wins. Using concurrent.futures to perform operations against multiple containers concurrently was a big win. Bootstrapping containers (running their first-run entrypoint scripts and committing the result to be used later by tests) was a bigger win (first run of Bugzilla is 20-25 seconds).
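To make the concurrency idea concrete, here is a minimal sketch using concurrent.futures with the current Docker SDK for Python. It is not the MozReview harness (which was built directly on the lower-level docker-py client of the time), and the image names are placeholders.

# Minimal illustration of starting and stopping several containers
# concurrently. Not the MozReview harness; image names are placeholders.
from concurrent.futures import ThreadPoolExecutor
import docker

client = docker.from_env()
IMAGES = ['mysql:5.7', 'rabbitmq:3', 'httpd:2.4']

def start(image):
    # detach=True returns a Container object without waiting for the service
    # inside to finish initializing. Real images may need extra options
    # (environment variables, ports, volumes).
    return client.containers.run(image, detach=True)

def stop(container):
    container.stop(timeout=2)
    container.remove()

with ThreadPoolExecutor() as pool:
    containers = list(pool.map(start, IMAGES))

# ... run the test against the running services ...

with ThreadPoolExecutor() as pool:
    list(pool.map(stop, containers))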

I'm at the point of optimizing startup where the longest pole is the initialization of the services inside Docker containers themselves. MySQL takes a few seconds to start accepting connections. Apache + Bugzilla has a semi-involved initialization process. RabbitMQ takes about 4 seconds to initialize. There are some cascading dependencies in there, so the majority of startup time is waiting for processes to finish their startup routine.
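A chunk of that waiting is simply polling until each service accepts connections. A generic helper for that (nothing MozReview-specific) could look like this:

# Generic helper: block until a TCP port accepts connections, so a test
# doesn't talk to MySQL or RabbitMQ before the service is actually ready.
import socket
import time

def wait_for_port(host, port, timeout=30.0):
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1):
                return
        except OSError:
            time.sleep(0.1)
    raise RuntimeError('%s:%d did not become ready in time' % (host, port))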

Another concern with running all these containers is memory usage. When you start running 6+ instances of MySQL + Apache, RabbitMQ, + ..., it becomes really easy to exhaust system memory, incur swapping, and have performance fall off a cliff. I've spent a non-trivial amount of time figuring out the minimal amount of memory I can make services consume while still not sacrificing too much performance.

It is quite an experience having the problem of trying to minimize resource usage and startup time for various applications. Searching the internet will happily give you recommended settings for applications. You can find out how to make a service start in 10s instead of 60s or consume 100 MB of RSS instead of 1 GB. But what the internet won't tell you is how to make the service start in 2s instead of 3s or consume as little memory as possible. I reckon I'm past the point of diminishing returns where most people don't care about any further performance wins. But, because of how I'm using containers for end-to-end testing and have a surplus of short-lived containers, it is clearly a problem I need to solve.

I might be able to squeeze out a few more seconds of reduction by further optimizing startup and shutdown. But, I doubt I'll reduce things below 5s. If you ask me, that's still not good enough. I want no more than 2s overhead per test. And I don't think I'm going to get that unless I start utilizing containers across multiple tests. And I really don't want to do that because it sacrifices test purity. Engineering is full of trade-offs.

Another takeaway from implementing this test harness is that the pre-built Docker images available from the Docker Registry almost always become useless. I eventually make a customization that can't be shoehorned into the readily-available image and I find myself having to reinvent the wheel. I'm not a fan of the download and run a binary model, especially given Docker's less-than-stellar history on the security and cryptography fronts (I'll trust Linux distributions to get package distribution right, but I'm not going to be trusting the Docker Registry quite yet), so it's not a huge loss. I'm at the point where I've lost faith in Docker Registry images and my default position is to implement my own builder. Containers are supposed to do one thing, so it usually isn't that difficult to roll my own images.

There's a lot to love about Docker and containerized test execution. But I feel like I'm foraging into new territory and solving problems like startup time minimization that I shouldn't really have to be solving. I think I can justify it given the increased accuracy from the tests and the increased confidence that brings. I just wish the cost weren't so high. Hopefully as others start leaning on containers and Docker more for test execution, people start figuring out how to make some of these problems disappear.

Stormy PetersWorking in the open is hard

A recent conversation on a Mozilla mailing list about whether IRC channels should be archived or not shows what a commitment it is to remain open. It’s hard work and not always comfortable to work completely in the open.

Most of us in open source are used to “working in the open”. Everything we send to a mailing list is not only public, but archived and searchable via Google or Yahoo! forever. Five years later, you can go back and see how I did my job as Executive Director of GNOME. Not only did I blog about it, but many of the conversations I had were on open mailing lists and irc channels.

There are many benefits to being open.

Being open means that anybody can participate, so you get much more help and diversity.

Being open means that you are transparent and accountable.

Being open means you have history. You can also go back and see exactly why a decision was made, what the pros and cons were and see if any of the circumstances have changed.

But it’s not easy.

Being open means that when you have a disagreement, the world can follow along. We warn teenagers about not putting too much on social media, but can you imagine every disagreement you’ve ever had at work being visible to all your future employers? (And spouses!)

But those of us working in open source have made a commitment to be open and we try hard.

Many of us get used to working in the open, and we think it feels comfortable and we think we’re good at it. And then something will remind us that it is a lot of work and it’s not always comfortable. Like a conversation about whether irc conversations should be archived or not. IRC conversations are public but not always archived. So people treat them as a place where anyone can drop in but the conversation is bounded in time and limited to the people you can see in the room. The fact that these informal conversations might be archived and read by anyone and everyone later means that you now have to think a lot more about what you are saying. It’s less of a chat and more of a weighed conversation.

The fact that people steeped in open source are having a heated debate about whether Mozilla IRC channels should be archived or not shows that it’s not easy being open. It takes a lot of work and a lot of commitment.

 

Doug BelshawWeeknote 04/2015

This week I’ve been:

I wasn’t at BETT this week. It’s a great place to meet people I haven’t seen for a while and last year I even gave a couple of presentations and a masterclass. However, this time around, my son’s birthday and party gave me a convenient excuse to miss it.

Next week I’m working from home as usual. In fact, I don’t think I’m away again until our family holiday to Dubai in February half-term!

Image CC BY Dave Fayram

Mike HommeyExplicit rename/copy tracking vs. detection after the fact

One of the main differences in how mercurial and git track files is that mercurial does rename and copy tracking and git doesn’t. So in the case of mercurial, users are expected to explicitly rename or copy the files through the mercurial command line so that mercurial knows what happened. Git simply doesn’t care, and will try to detect after the fact when you ask it to.

The consequence is that my git-remote-hg, being currently a limited prototype, doesn’t make the effort to inform mercurial of renames or copies.

This week, Ehsan, as a user of that tool, pushed some file moves, and subsequently opened an issue, because some people didn’t like it.

It was a conscious choice on my part to make git-remote-hg public without rename/copy detection, because file renames/copies don’t happen often, and mercurial users can just as easily fail to register them anyway.

In fact, they haven’t all been registered for as long as Mozilla has been using mercurial (see below, I didn’t actually know I was so spot on when I wrote this sentence), and people haven’t been pointed at for using broken tools (and I’ll skip the actual language that was used when talking about Ehsan’s push).

And since I’d rather not make unsubstantiated claims, I dug into all of mozilla-central and related repositories (inbound, b2g-inbound, fx-team, aurora, beta, release, esr*), and here is what I found, only accounting for files that have been copied or renamed without being further modified (so, using git diff-tree -r -C100% and eliminating empty files) and correlating with the mercurial rename/copy metadata (a rough sketch of this kind of check appears after the first list below):

  • There have been 45069 file renames or copies in 1546 changesets.
  • Mercurial doesn’t know 5482 (12.1%) of them, from 419 (27.1%) changesets.
  • 72 of those changesets were backouts.
  • 19 of those backouts were of changesets that didn’t have rename/copy information, so 53 of those backouts didn’t actually undo what mercurial knew of those backed out changesets.
  • Those 419 changesets were from 144 distinct authors (assuming I didn’t miss some duplicates from people who changed email).
  • Fun fact, the person with colorful language, and that doesn’t like git-remote-hg, is part of them. I am too, and that was with mercurial.
  • The most recent occurrence of renames/copies unknown to mercurial is already not Ehsan’s anymore.
  • The oldest occurrence is in the 19th (!) mercurial changeset.

And that’s not counting all the copies and renames with additional modifications.
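For illustration, here is a rough sketch of this kind of check for a single change. It is not the script used to produce the numbers above, and it assumes you already know which git commit corresponds to which mercurial changeset (e.g. via git-remote-hg metadata); that mapping is not shown.

# Rough sketch: compare renames/copies git detects after the fact with
# the renames/copies mercurial actually has recorded for one change.
import subprocess

def git_renames_and_copies(git_rev):
    # -C100% reports only exact (unmodified) copies and renames, as in the
    # analysis above.
    out = subprocess.check_output(
        ['git', 'diff-tree', '-r', '-C100%', '--no-commit-id',
         '--name-status', git_rev]).decode()
    moves = []
    for line in out.splitlines():
        parts = line.split('\t')
        if parts and parts[0][:1] in ('R', 'C'):
            moves.append((parts[1], parts[2]))  # (source, destination)
    return moves

def hg_recorded_copies(hg_rev):
    # {file_copies} lists the "destination (source)" pairs recorded by
    # 'hg cp' / 'hg mv' for that changeset.
    out = subprocess.check_output(
        ['hg', 'log', '-r', hg_rev, '--template', '{file_copies}'])
    return out.decode()

# Any pair reported by git_renames_and_copies() with no counterpart in
# hg_recorded_copies() is a move mercurial doesn't know about.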

Fun fact, this is what I found in the Mercurial mercurial repository:

  • There have been 255 file renames or copies in 41 changesets.
  • Mercurial doesn’t know about 38 (14.9%) of them, from 4 (9.7%) changesets.
  • One of those changesets was from Matt Mackall himself (creator and lead developer of mercurial).

There are 1061 files in mercurial, versus 115845 in mozilla-central, so there is less occasion for renames/copies there; still, even they forget to use “hg move” and break their history as a result.

I think this shows how requiring explicit user input simply doesn’t pan out.

Meanwhile, I have prototype copy/rename detection for git-remote-hg working, but I need to tweak it a little bit more before publishing.

Mozilla Release Management TeamFirefox 36 beta2 to beta3

In this beta, as in beta 2, we have bug fixes for MSE. We also have fixes for a few bugs found with the release of Firefox 35. As usual, beta 3 is a desktop-only version.

  • 43 changesets
  • 118 files changed
  • 1261 insertions
  • 476 deletions

Extension   Occurrences
cpp         17
js          13
h           7
ini         6
html        6
jsm         3
xml         1
webidl      1
txt         1
svg         1
sjs         1
py          1
mpd         1
list        1

Module      Occurrences
dom         20
browser     18
toolkit     6
dom         4
dom         4
testing     3
gfx         3
netwerk     2
layout      2
mobile      1
media       1
docshell    1
build       1

List of changesets:

Steven Michaud: Bug 1118615 - Flash hangs in HiDPI mode on OS X running peopleroulette app. r=mstange a=sledru - 430bff48811d
Mark Goodwin: Bug 1096197 - Ensure SSL Error reports work when there is no failed certificate chain. r=keeler, a=sledru - a7f164f7c32d
Seth Fowler: Bug 1121802 - Only add #-moz-resolution to favicon URIs that end in ".ico". r=Unfocused, a=sledru - d00b4a85897c
David Major: Bug 1122367 - Null check the result of D2DFactory(). r=Bas, a=sledru - 57cb206153af
Michael Comella: Bug 1116910 - Add new share icons in the action bar for tablet. r=capella, a=sledru - f6b2623900f1
Jonathan Watt: Bug 1122578 - Part 1: Make DrawTargetCG::StrokeRect stroke from the same corner and in the same direction as older OS X and other Moz2D backends. r=Bas, a=sledru - d3c92eebdf3e
Jonathan Watt: Bug 1122578 - Part 2: Test start point and direction of dashed stroking on SVG rect. r=longsonr, a=sledru - 7850d99485e6
Mats Palmgren: Bug 1091709 - Make Transform() do calculations using gfxFloat (double) to avoid losing precision. r=mattwoodrow, a=sledru - 0b22d12d0736
Hiroyuki Ikezoe: Bug 1118749 - Need promiseAsyncUpdates() before frecencyForUrl. r=mak, a=test-only - 4501fcac9e0b
Jordan Lund: Bug 1121599 - remove android-api-9-constrained and android-api-10 mozconfigs from all trees, r=rnewman a=npotb DONTBUILD - 787968dadb44
Tooru Fujisawa: Bug 1115616 - Commit composition string forcibly when search suggestion list is clicked. r=gavin,adw a=sylvestre - 2d629038c57b
Ryan VanderMeulen: Bug 1075573 - Disable test_videocontrols_standalone.html on Android 2.3 due to frequent failures. a=test-only - a666c5c8d0ba
Ryan VanderMeulen: Bug 1078267 - Skip netwerk/test/mochitests/ on Android due to frequent failures. a=test-only - 0c36034999bb
Christoph Kerschbaumer: Bug 1121857 - CSP: document.baseURI should not get blocked if baseURI is null. r=sstamm, a=sledru - a9b183f77f8d
Christoph Kerschbaumer: Bug 1122445 - CSP: don't normalize path for CSP checks. r=sstamm, a=sledru - 7f32601dd394
Christoph Kerschbaumer: Bug 1122445 - CSP: don't normalize path for CSP checks - test updates. r=sstamm, a=sledru - a41c84bee024
Karl Tomlinson: Bug 1085247 enable remaining mediasource-duration subtests a=sledru - d918f7ea93fe
Sotaro Ikeda: Bug 1121658 - Remove DestroyDecodedStream() from MediaDecoder::SetDormantIfNecessary() r=roc a=sledru - 731843c58e0d
Jean-Yves Avenard: Bug 1123189: Use sourceended instead of loadeddata to check durationchanged count r=karlt a=sledru - 09df37258699
Karl Tomlinson: Bug 1123189 Queue "durationchange" instead of dispatching synchronously r=cpearce a=sledru - 677c75e4d519
Jean-Yves Avenard: Bug 1123269: Better fix for Bug 1121876 r=cpearce a=sledru - 56b7a3953db2
Jean-Yves Avenard: Bug 1123054: Don't check VDA reference count. r=rillian a=sledru - a48f8c55a98c
Andreas Pehrson: Bug 1106963 - Resync media stream clock before destroying decoded stream. r=roc, a=sledru - cdffc642c9b9
Ben Turner: Bug 1113340 - Make sure blob urls can load same-prcess PBackground blobs. r=khuey, a=sledru - c16ed656a43b
Paul Adenot: Bug 1113925 - Don't return null in AudioContext.decodeAudioData. r=bz, a=sledru - 46ece3ef808e
Masatoshi Kimura: Bug 1112399 - Treat NS_ERROR_NET_INTERRUPT and NS_ERROR_NET_RESET as SSL errors on https URLs. r=bz, a=sledru - ba67c22c1427
Hector Zhao: Bug 1035400 - 'restart to update' button not working. r=rstrong, a=sledru - 8a2a86c11f7c
Ryan VanderMeulen: Backed out the code changes from changeset c16ed656a43b (Bug 1113340) since Bug 701634 didn't land on Gecko 36. - e8effa80da5b
Ben Turner: Bug 1120336 - Land the test-only changes on beta. r=khuey, a=test-only - a6e5dedbd0c0
Sami Jaktholm: Bug 1001821 - Wait for eyedropper to be destroyed before ending tests and checking for leaks. r=pbrosset, a=test-only - 4036f72a0b10
Mark Hammond: Bug 1117979 - Fix orange by not relying on DNS lookup failure in the 'error' test. r=gavin, a=test-only - e7d732bf6091
Honza Bambas: Bug 1123732 - Null-check uri before trying to use it. r=mcmanus, a=sledru - 3096b7b44265
Florian Quèze: Bug 1103692 - ReferenceError: bundle is not defined in webrtcUI.jsm. r=felipe, a=sledru - 9b565733c680
Bobby Holley: Bug 1120266 - Factor some machinery out of test_BufferingWait into mediasource.js and make it Promise-friendly. r=jya, a=sledru - ff1b74ec9f19
Jean-Yves Avenard: Bug 1120266 - Add fragmented mp4 sample videos. r=cajbir, a=sledru - 53f55825252a
Paul Adenot: Bug 698079 - When using the WASAPI backend, always output audio to the default audio device. r=kinetik, a=sledru - 20f7d44346da
Paul Adenot: Bug 698079 - Synthetize the clock when using WASAPI to prevent A/V desynchronization issues when switching the default audio output device. r=kinetik, a=sledru - 0411d20465b4
Matthew Noorenberghe: Bug 1079554 - Ignore most UITour messages from pages that aren't visible. r=Unfocused, a=sledru - e35e98044772
Markus Stange: Bug 1106906 - Always return false from nsFocusManager::IsParentActivated in the parent process. r=smaug, a=sledru - 0d51214654ad
Bobby Holley: Bug 1121148 - Move constants that we should not be using directly into a namespace. r=cpearce, a=sledru - 1237ddff18be
Bobby Holley: Bug 1121148 - Make QUICK_BUFFERING_LOW_DATA_USECS a member variable and adjust it appropriately. r=cpearce, a=sledru - 62f7b8ea571f
Chris AtLee: Bug 1113606 - Use app-specific API keys. r=mshal, r=nalexander, a=gavin - b3836e49ae7f
Ryan VanderMeulen: Bug 1121148 - Add missing detail:: to fix bustage. a=bustage - b3792d13df24

Henri SivonenIf You Want Software Freedom on Phones, You Should Work on Firefox OS, Custom Hardware and Web App Self-Hostability

TL;DR

To achieve full-stack Software Freedom on mobile phones, I think it makes sense to

  • Focus on Firefox OS, which is already Free Software above the driver layer, instead of focusing on removing proprietary stuff from Android whose functionality is increasingly moving into proprietary components including Google Play Services.
  • Commission custom hardware whose components have been chosen such that the foremost goal is achieving Software Freedom on the driver layer.
  • Develop self-hostable Free Software Web apps for the on-phone software to connect to and a system that makes installing them on a home server as easy as installing desktop or mobile apps and connecting the home server to the Internet as easy as connecting a desktop.

Inspiration

Back in August, I listened to an episode of the Free as in Freedom oggcast that included a FOSDEM 2013 talk by Aaron Williamson titled “Why the free software phone doesn’t exist”. The talk actually didn’t include much discussion of the driver situation and instead devoted a lot of time to talking about services that phones connect to and the interaction of the DMCA with locked bootloaders.

Also, I stumbled upon the Indie Phone project. More on that later.

Software Above the Driver Layer: Firefox OS—Not Replicant

Looking at existing systems, it seems that software close to the hardware on mobile phones tends to be more proprietary than the rest of the operating system. Things like baseband software, GPU drivers, touch sensor drivers and drivers for hardware-accelerated video decoding (and video DRM) tend to be proprietary even when the Linux kernel is used and substantial parts of other system software are Free Software. Moreover, most of the mobile operating systems built on the Linux kernel are actually these days built on the Android flavor of the Linux kernel in order to be able to use drivers developed for Android. Therefore, the driver situation is the same for many of the different mobile operating systems. For these reasons, I think it makes sense to separate the discussion of Software Freedom on the driver layer (code closest to hardware) and the rest of the operating system.

Why Not Replicant?

For software above the driver layer, there seems to be something of a default assumption in the Free Software circles that Replicant is the answer for achieving Software Freedom on phones. This perception of mine probably comes from Replicant being the contender closest to the Free Software Foundation with the FSF having done fundraising and PR for Replicant.

I think betting on Replicant is not a good strategy for the Free Software community if the goal is to deliver Software Freedom on phones to many people (and, therefore, have more of a positive impact on society) instead of just making sure that a Free phone OS exists in a niche somewhere. (I acknowledge that hardline FSF types keep saying negative things about projects that e.g. choose permissive licenses in order to prioritize popularity over copyleft, but the “Free Software, Free Society” thing only works if many people actually run Free Software on the end-user devices, so in that sense, I think it makes sense to think of what has a chance to be run by many people instead of just the existence of a Free phone OS.)

Android is often called an Open Source system, but when someone buys a typical Android phone, they get a system with substantial proprietary parts. Initially, the main proprietary parts above the driver layer were the Google applications (Gmail, Maps, etc.) but the non-app, non-driver parts of the system were developed as Open Source / Free Software in the Android Open Source Project (AOSP). Over time, as Google has realized that OEMs don’t care to deliver updates for the base system, Google has moved more and more stuff to the proprietary Google application package. Some apps that were originally developed as part of AOSP no longer are. Also, Google has introduced Google Play Services, which is a set of proprietary APIs that keeps updating even when the base system doesn’t.

Replicant takes Android and omits the proprietary parts. This means that many of the applications that users expect to see on an Android phone aren’t actually part of Replicant. But more importantly, Replicant doesn’t provide the same APIs as a normal Android system does, because Google Play Services are missing. As more and more applications start relying on Google Play Services, Replicant and Android-as-usually-shipped diverge as development platforms. If Replicant was supposed to benefit from the network effects of being compatible with Android, these benefits will be realized less and less over time.

Also, Android isn’t developed in the open. The developers of Replicant don’t really get to contribute to the next version of AOSP. Instead, Google develops something and then periodically throws a bundle of code over the wall. Therefore, Replicant has the choice of either having no say over how the platform evolves or diverging even more from Android.

Instead of the evolution of the platform being controlled behind closed doors and the Free Software community having to work with a subset of the mass-market version of the platform, I think it would be healthier to focus efforts on a platform that doesn’t require removing or replacing (non-driver) system components as the first step and whose development happens in public repositories where the Free Software community can contribute to the evolution of the platform.

What Else Is There?

Let’s look at the options. What at least somewhat-Free mobile operating systems are there?

First, there’s software from the OpenMoko era. However, the systems have no appeal to people who don’t care that much about the Free Software aspect. I think it would be strategically wise for the Free Software community to work on a system that has appeal beyond the Free Software community in order to be able to benefit from contributions and network effects beyond the core Free Software community.

Open webOS is not on an upwards trajectory (on phones, despite there having been a watch announcement at CES). (Addition 2015-01-24: There exists a project called LuneOS to port Open webOS to Nexus phones, though.) Tizen (on phones) has been delayed again and again and became available just a few days ago, so it’s not (at least quite yet) a system with demonstrated appeal (on phones) beyond the Free Software community, and it seems that Tizen has substantial non-Free parts. Jolla’s Sailfish OS is actually shipping on a real phone, but Jolla keeps some components proprietary, so the platform fails the criterion of not having to remove or replace (non-driver) system components as the first step (see Nemo). I don’t actually know if Ubuntu Touch has proprietary non-driver system components. However, it does appear to have central components to which you cannot contribute on an “inbound=outbound” licensing basis, because you have to sign a CLA that gives Canonical rights to your code beyond the Free Software license of the project as a condition of your patch getting accepted. In any case, Ubuntu Touch is not shipping yet on real phones, so it is not yet demonstrably a system that has appeal beyond the Free Software community.

Firefox OS, in contrast, is already shipping on multiple real phones (albeit maybe not in your country) demonstrating appeal beyond the Free Software community. Also, Mozilla’s leverage is the control of the trademark—not keeping some key Mozilla-developed code proprietary. The (non-trademark) licensing of the project works on the “inbound=outbound” basis. And, importantly, the development repositories are visible and open to contribution in real time as opposed to code getting thrown over the wall from time to time. Sure, there is code landing such that the motivation of the changes is confidential or obscured with codenames, but if you want to contribute based on your motivations, you can work on the same repositories that the developers who see the confidential requirements work on.

As far as I can tell, Firefox OS has the best combination of not being vaporware, having appeal beyond the Free Software community and being run closest to the manner a Free Software project is supposed to be run. So if you want to advance Software Freedom on mobile phones, I think it makes the most sense to put your effort into Firefox OS.

Software Freedom on the Driver Layer: Custom Hardware Needed

Replicant, Firefox OS, Ubuntu Touch, Sailfish OS and Open webOS all use an Android-flavored Linux kernel in order to be able to benefit from the driver availability for Android. Therefore, the considerations for achieving Software Freedom on the driver layer apply equally to all these systems. The foremost problems are controlling the various radios—the GSM/UMTS radio in particular—and the GPU.

If you consider the Firefox OS reference device for 2014 and 2015, Flame, you’ll notice that Mozilla doesn’t have the freedom to deliver updates to all software on the device. Firefox OS is split into three layers: Gonk, Gecko and Gaia. Gonk contains the kernel, drivers and low-level helper processes. Gecko is the browser engine and runs on top of Gonk. Gaia is the system UI and set of base apps running on top of Gecko. You can get Gecko and Gaia builds from Mozilla, but you have to get Gonk builds from the device vendor.

If Software Freedom extended to the whole stack—including drivers—Mozilla (or anyone else) could give you Gonk builds, too. That is, to get full-stack Software Freedom with Firefox OS, the challenge is to come up with hardware whose driver situation allows for a Free-as-in-Freedom Gonk.

As noted, Flame is not that kind of hardware. When this is lamented, it is typically pointed out that “not even the mighty Google” can get the vendors of all the hardware components going into the Nexus devices to provide Free Software drivers and, therefore, a Free Gonk is unrealistic at this point in time.

That observation is correct, but I think it lacks some subtlety. Both Flame and the Nexus devices are reference devices on which the software platform is developed with the assumption that the software platform will then be shipped on other devices that are sufficiently similar that the reference devices can indeed serve as reference. This means that the hardware on the reference devices needs to be reasonably close to the kind of hardware that is going to be available with mass-market price/performance/battery life/weight/size characteristics. Similarity to mass-market hardware trumps Free Software driver availability for these reference devices. (Disclaimer: I don’t participate in the specification of these reference devices, so this paragraph is my educated guess about what’s going on—not any sort of inside knowledge.)

I theorize that building a phone that puts the availability of Free Software drivers first is not impossible but would involve sacrificing on the current mass-market price/performance/battery life/weight/size characteristics and be different enough from the dominant mass-market designs not to make sense as a reference device. Let’s consider how one might go about designing such a phone.

In the radio case, there is proprietary software running on a baseband processor to control the GSM/UMTS radio and some regulatory authorities, such as the FCC, require this software to be certified for regulatory purposes. As a result, the chances of gaining Software Freedom relative to this radio control software in the near term seem slim. From the privacy perspective, it is problematic that this mystery software can have DMA access to the memory of the application processor, i.e. the processor that runs the Linux kernel and the apps. Addition 2015-01-24: There seems to exist a project, OsmocomBB, that is trying to produce GSM-level baseband software as Free Software. (Unlike the project page, the Git repository shows recent signs of activity.) For smart phones, you really need 3G, though.

Technically, data transfer between the application processor and various radios does not need to be fast enough to require DMA access or other low-level coupling. Indeed, for desktop computers, you can get UMTS, Wi-Fi, Bluetooth and GPS radios as external USB devices. It should be possible to document the serial protocol these devices use over USB such that Free drivers can be written on the Linux side while the proprietary radio control software is embedded on the USB device.

This would solve the problem of kernel coupling with non-free drivers in a way that hinders the exercise of Software Freedom relative to the kernel. But wouldn’t the radio control software embedded on the USB device still be non-free? Well, yes it would, but in the current regulatory environment it’s unrealistic to fix that. Moreover, if the software on the USB devices is truly embedded to the point where no one can update it, the Free Software Foundation considers the bundle of hardware and un-updatable software running on the hardware as “hardware” as a whole for Software Freedom purposes. So even if you can’t get the freedom to modify the radio control software, if you make sure that no one can modify it and put it behind a well-defined serial interface, you can both solve the problem of non-free drivers holding back Software Freedom relative to the kernel and get the ideological blessing.

So I think the way to solve the radio side of the problem is to license circuit designs for UMTS, Wi-Fi, Bluetooth and GPS USB dongles and build those devices as hard-wired USB devices onto the main board of the phone inside the phone’s enclosure. (Building hard-wired USB devices into the device enclosure is a common practice in the case of laptops.) This would likely result in something more expensive, more battery draining, heavier and larger than the usual more integrated designs. How much more expensive, heavier, etc.? I don’t know. I hope within bounds that would be acceptable for people willing to pay some extra and accept some extra weight and somewhat worse battery life and performance in order to get Software Freedom.

As for the GPU, there are a couple of Free drivers: There’s Freedreno for Adreno GPUs. There is the Lima driver for Mali-200 and Mali-400, but a Replicant developer says it’s not good enough yet. Intel has Free drivers for their desktop GPUs and Intel is trying to compete in the mobile space so, who knows, maybe in the reasonably near future Intel manages to integrate a GPU design of their own (with a Free driver) with one of their mobile CPUs. Correction 2015-01-24: It appears that after I initially wrote that sentence in August 2014 but before I got around to publishing in January 2015, Intel announced such a CPU/GPU combination.

The current Replicant way to address the GPU driver situation is not to have hardware-accelerated OpenGL ES. I think that’s just not going to be good enough. For Firefox OS (or Ubuntu Touch or Sailfish OS or a more recent version of Android) to work reasonably, you have to have hardware-accelerated OpenGL ES. So I think the hardware design of a Free Software phone needs to grow around a mobile GPU that has a Free driver. Maybe that means using a non-phone (to put radios behind USB) QUALCOMM SoC with Adreno. Maybe that means pushing Lima to good enough a state and then licensing Mali-200 or Mali-400. Maybe that means using x86 and waiting for Intel to come up with a mobile GPU. But it seems clear that the GPU is the big constraint and the CPU choice will have to follow from the GPU solution.

For the encumbered codecs that everyone unfortunately needs to have in practice, it would be best to have true hardware implementations that are so complete that the drivers wouldn’t contain parts of the codec but would just push bits to the hardware. This way, the encumbrance would be limited to the hardware. (Aside: Similarly, it would be possible to design a hardware CDM for EME. In that case, you could have video DRM without it being a Software Freedom problem.)

So I think that in order to achieve Software Freedom on the driver layer, it is necessary to commission hardware that fits Free Software instead of trying to just write software that fits the hardware that’s out there. This is significantly different from how software freedom has been achieved on desktop. Also, the notion of making a big upfront capital investment in order to achieve Software Freedom is rather different from the notion that you only need capital for a PC and then skill and time.

I think it could be possible to raise the necessary capital through crowdfunding. (Purism is trying it with the Librem laptop, but, unfortunately, the rate of donations looks bad as of the start of January 2015. Addition 2015-01-24: They have actually reached and exceeded their funding target! Awesome!) I’m not going to try to organize anything like that myself—I’m just theorizing. However, it seems that developing a phone by crowdfunding in order to get characteristics that the market isn’t delivering is something that is being attempted. The Indie Phone project expresses intent to crowdfund the development of a phone designed to allow users to own their own data. Which brings us to the topic of the services that the phone connects to.

Freedom on the Service Side: Easy Self-Hostability Needed

Unfortunately, Indie Phone is not about building hardware to run Firefox OS. The project’s Web site talks about an Indie OS but intentionally tries to make the OS seem uninteresting and doesn’t explain what existing software the small team is intending to build upon. (It seems implausible that such a small team could develop an operating system from scratch.) Also, the hardware intentions are vague. The site doesn’t explain if the project is serious about isolating the baseband processor from the application processor out of privacy concerns, for example. But enough about the vagueness of what the project is going to do. Let’s look at the reasons the FAQ gave against Firefox OS (linking to version control, since the FAQ appears to have been removed from the site between the time I started writing this post and the time I got around to publishing):

“As an operating system that runs web applications but without any applications of its own, Firefox OS actually incentivises the use of closed silos like Google. If your platform can only run web apps and the best web apps in town are made by closed silos like Google, your users are going to end up using those apps and their data will end up in these closed silos.”

The FAQ then goes on to express angst about Mozilla’s relationship with Google (the Indie Phone FAQ was published before Mozilla’s search deal with Yahoo! was announced) and Telefónica and to talk about how Mozilla doesn’t control the hardware but Indie will.

I think there is truth to Web technology naturally having the effect of users gravitating towards whatever centralized service provides the best user experience. However, I think the answer is not to shun Firefox OS but to make de-centralized services easy to self-host and use with Firefox OS.

In particular, it doesn’t seem realistic that anyone would ship a smart phone without a Web browser. In that sense, any smartphone is susceptible to the lure of centralized Web-based services. On the other hand, Google Play and the iOS App Store contain plenty of applications whose user interface is not based on HTML, CSS and JavaScript but still those applications put the users’ data into centralized services. On the flip side, it’s not actually true that Firefox OS only runs Web apps hosted on a central server somewhere. Firefox OS allows you to use HTML, CSS and JavaScript to build apps that are distributed as a zip file and run entirely on the phone without a server component.

But the thing is that, these days, people don’t want even notes or calendar entries that are intended for their own eyes only to stay on the phone only. Instead, even for data meant for the user’s own eyes only, there is a need to have the data show up on multiple devices. I very much doubt that any underdog effort has the muscle to develop a non-Web decentralized network application platform that allows users to interact with their data from all the devices that they want to use to interact with their data. (That is, I wouldn’t bet on e.g. Indienet, which is going to launch “with a limited release on OS X Yosemite”.)

I think the answer isn’t fighting the Web Platform but using the only platform that already has clients for all the devices that users want to use—in addition to their phone—to interact with their data: the Web Platform. To use the Web Platform as the application platform such that multiple devices can access the apps but also such that users have Software Freedom, the users need to host the Web apps themselves. Currently, this is way too difficult. Hosting Web apps at home needs to become at least as easy as maintaining a desktop computer at home, preferably easier.

For this to happen, we need:

  • Small home server hardware that is powerful enough to host Web apps for family, that consumes negligible energy (maybe in part by taking the place of the always-on home router that people already have consuming electricity today), that is silent and that can boot a vanilla kernel that gets security updates.
  • A Free operating system that runs in such hardware, makes it easy to install Web apps and makes it easy for the apps to become securely reachable over the network.
  • High-quality apps for such a platform.

(Having Software Freedom on the server doesn’t strictly require the server to be placed in your home, but if that’s not a realistic option, there’s clearly a practical freedom deficit even if not under the definition of Free Software. Also, many times the interest in Software Freedom in this area is motivated by data privacy reasons and in the case of Web apps, the server of the apps can see the private data. For these reasons, it makes sense to consider home-hostability.)

Hardware

In this case, the hardware and driver side seems like the smallest problem. At least if you ignore the massive and creepy non-Free firmware and the price of the hardware, and don’t try to minimize energy consumption particularly aggressively, suitable x86/x86_64 hardware already exists e.g. from CompuLab. To get the price and energy consumption minimized, it seems that ARM-based solutions would be better, but the need for per-board kernel builds and, most often, proprietary blobs that don’t get updated makes the 32-bit ARM situation so bad that it doesn’t make sense to use 32-bit ARM hardware for this. (At FOSDEM 2013, it sounded like a lot of the time of the FreedomBox project has been sucked into dealing with the badness of the Linux on 32-bit ARM situation.) It remains to be seen whether x86/x86_64 SoCs that boot with generic kernels reach ARM-style price and energy consumption levels first or whether the ARM side gets their generic kernel bootability and Free driver act together (including shipping) with 64-bit ARM first. Either way, the hardware side is getting better.

Apps

As for the apps, PHP-based apps that are supposed to be easy-ish to deploy as long as you have an Apache plus PHP server from a service provider are plentiful, but e.g. Roundcube is no match for Gmail in terms of user experience, and even though it’s theoretically possible to write quality software in PHP, the execution paradigm and culture of PHP don’t really guide things in that direction.

Instead of relying on the PHP-based apps that are out there and that are woefully uncompetitive with the centralized proprietary offerings, there is a need for better apps written on better foundations (e.g. Python and Node.js). As an example, Mailpile (Python on the server) looks very promising in terms of Gmail-competitive usability aspirations. Unfortunately, as of December 2014, it’s not ready for use yet. (I tried and, yes, filed bugs.) Ethercalc and Etherpad (Node.js on the server) are other important apps.

With apps, the question doesn’t seem to be whether people know how to write them. The question seems to be how to fund the development of the apps so that the people who know how to write them can devote a lot of time to these projects. I, for one, hope that e.g. Mailpile’s user-funded development is sustainable, but it remains to be seen. (Yes, I donated.)

Putting the Apps Together

A crucial missing piece is a system that can be trivially installed on suitable hardware (or, perhaps in the future, come pre-installed on suitable hardware), that lets users get started without exercising their freedom to modify the software but provides the freedom to install modified apps if the user so chooses, and, perhaps most importantly, that makes the networking part very easy.

There are a number of projects that try to aggregate self-hostable apps into a (supposedly at least) easy to install and manage system. However, it seems to me that they tend to be of the PHP flavor, which I think fundamentally disadvantages them in terms of becoming competitive with proprietary centralized Web apps. I think the most promising project in the space that deals with making the better (Python and Node.js-based among others) apps installable with ease is Sandstorm.io, which unfortunately, like Mailpile, doesn’t seem quite ready yet. (Also, in common with Mailpile: a key developer is an ex-Googler. Looks like people who’ve worked there know what it takes to compete with GApps…)

Looking at Sandstorm.io is instructive in terms of seeing what’s hard about putting it all together. On the server, Sandstorm.io runs each Web app in a Linux container that’s walled off from the other apps. All the requests go through a reverse proxy that also provides additional browser-side UI for switching between the apps. Instead of exposing the usual URL structure of each app, Sandstorm.io exposes “grain” URLs, which are unintelligible random-looking character sequences. This design isn’t without problems.
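
As a toy sketch of the routing just described (this is not Sandstorm.io’s actual implementation; the grain IDs and backend ports are made up for illustration), a single front-end process could map opaque grain identifiers to per-app backends like this:

```python
# Toy illustration only (not Sandstorm.io's implementation): route requests
# for opaque "grain" URLs to per-app backends running on localhost.
# The grain IDs and backend ports below are made up for illustration.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.error import URLError
from urllib.request import urlopen

GRAINS = {
    "FcTdrgjttPbhAzzKSv6ESD": 7001,  # e.g. an Etherpad-style app
    "o96ouPLKQMEMZkFxNKf2Dr": 7002,  # e.g. an Ethercalc-style app
}

class GrainProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        parts = self.path.split("/", 3)  # ["", "grain", "<id>", "rest"]
        if len(parts) < 3 or parts[1] != "grain" or parts[2] not in GRAINS:
            self.send_error(404, "unknown grain")
            return
        rest = parts[3] if len(parts) > 3 else ""
        try:
            upstream = urlopen(f"http://127.0.0.1:{GRAINS[parts[2]]}/{rest}")
        except URLError:
            self.send_error(502, "backend unavailable")
            return
        body = upstream.read()
        self.send_response(upstream.status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # A real front end would also terminate TLS and check who is logged in.
    HTTPServer(("127.0.0.1", 8080), GrainProxy).serve_forever()
```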

The first problem is that the apps you want to run, like Mailpile, Etherpad and Ethercalc, have been developed to be deployed on a vanilla Linux server using application-specific manual steps that put hosting these apps on a server out of the reach of normal users. (Mailpile is designed to be run on localhost by normal users, but that doesn’t make it reachable from multiple devices, which is what you want from a Web app.) This means that each app needs to be ported to Sandstorm.io. This in turn means that, compared to going to upstream, you get stale software, because except for Ethercalc, the maintainer of the Sandstorm.io port isn’t the upstream developer of the app. In fairness, though, the software doesn’t seem to be as stale as it would be if you installed a package from Debian Stable… Also, as the platform and the apps mature, it’s possible that various app developers will start to publish for Sandstorm.io directly, and with more mature apps it’s less necessary to have the latest version (except for security fixes).

Unlike in the case of getting a Web app as a Debian package, the URL structure and, it appears, in some cases the storage structure are different in a Sandstorm.io port of an app and in a vanilla upstream version of the app. Therefore, even though avoiding lock-in is one of the things the user is supposed to be able to accomplish by using Sandstorm.io, it’s non-trivial to migrate between the Sandstorm.io version and a non-Sandstorm.io version of a given app. It particularly bothers me that Sandstorm.io completely hides the original URL structure of the app.

Networking

And that leads to the last issue of self-hosting with the ease of just plugging a box into home Ethernet: Web security and Web addressing are rather unfriendly to easy self-hosting.

First of all, there is the problem of getting basic incoming IPv4 connectivity to work. After all, you must be able to reach port 443 (https) of your self-hosting box from all your devices, including reaching the box that’s on your wired home Internet connection from the mobile connection of your phone. Maybe your own router imposes a NAT between your server and the Internet and you’d need to set up port forwarding, which makes things significantly harder than just instructing people to plug stuff in. This might be partially alleviated by making the self-hosting box contain NAT functionality itself so that it could take the place of the NATting home router, but even then you might have to configure something like a cable modem to a bridging mode or, worse, you might be dealing with an ISP who doesn’t actually sell you neutral end-to-end Internet routing and blocks incoming traffic to port 443 (or detects incoming traffic to port 443 and complains to you about running a server, even if it’s actually for your personal use so you aren’t violating any clause that prohibits you from using a home connection to offer a service to the public).

One way to solve this would be standardizing a simple service where a service provider takes your credit card number and an ssh public key and gives you an IP address. The self-hosting system you run at home would then have a configuration interface that gives you an ssh public key and takes an IP address. The self-hosting box would then establish an ssh reverse tunnel to the IP address with 443 as the local target port, and the provider would send port 443 of that IP address to this tunnel. You’d still own your data and your server, and you’d terminate TLS on your server even though you’d rent an IP address from a data center.
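
A minimal sketch of that tunnel from the home server’s side, assuming the rented address, user name and key path below are placeholders and that the provider’s sshd is willing to bind port 443 for remote forwarding:

```python
# Minimal sketch of the reverse tunnel described above. The address, user
# and key path are placeholders; it assumes the provider's sshd is willing
# to bind port 443 for remote forwarding.
import subprocess
import time

RENTED_IP = "203.0.113.10"                 # hypothetical rented address
SSH_KEY = "/etc/homeserver/id_ed25519"     # hypothetical key path

CMD = [
    "ssh", "-N",
    "-i", SSH_KEY,
    "-o", "ExitOnForwardFailure=yes",
    "-o", "ServerAliveInterval=30",
    # Expose port 443 on the rented IP and forward it to the local HTTPS
    # port, so TLS still terminates on the home server.
    "-R", "0.0.0.0:443:localhost:443",
    f"tunnel@{RENTED_IP}",
]

if __name__ == "__main__":
    while True:
        subprocess.run(CMD)
        time.sleep(5)  # back off a little before re-establishing the tunnel
```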

(There are efforts to solve this by giving the user-hosted devices host names under the domain of a service that handles the naming, such as OPI giving each user a hostname under the op-i.me domain, but then the naming service—not the user—is presumptively the one eligible to get the necessary certificates signed, and delegating away the control of the crypto defeats an important aspect of self-hosting. As a side note, one of the reasons I migrated from hsivonen.iki.fi to hsivonen.fi was that even though I was able to get the board of IKI to submit iki.fi to the Public Suffix List, CAs still seem to think that IKI, not me, is the party eligible for getting certificates signed for hsivonen.iki.fi.)

But even if you solved IPv4-level reachability of the home server from the public Internet as a turn-key service, there are still more hurdles on the way of making this easy. Next, instead of the user having to use an IP address, the user should be able to use a memorable name. So you need to tell the user to go register a domain name, get DNS hosting and point an A record to the IP address. And then you need a certificate for the name you chose for the A record, which at the moment (before Let’s Encrypt is operational) is another thing that makes things too hard.

And that brings us back to Sandstorm.io obscuring the URLs. Correction 2015-01-24: Rather paradoxically, even though Sandstorm.io is really serious about isolating apps from each other on the server, Sandstorm.io gives up the browser-side isolation of the apps that you’d get with a typical deployment of the upstream apps. The only true way to have browser-enforced privilege separation of the client-side JavaScript parts of the apps is for different apps to have different Origins. An Origin is a triple of URL scheme, host name and port. For the apps not to be ridiculously insecure, the scheme has to be https. This means that you either have to give each app a distinct port number or a distinct host name. On the surface, it seems that it would be easy to mint port numbers, but users are not used to typing URLs with non-default port numbers, and if you depend on port forwarding in a NATting home router or port forwarding through an ssh reverse tunnel, minting port numbers on demand isn’t that convenient anymore.

So you really want a distinct host name for each app to have a distinct Origin for browser-enforced privilege separation of JavaScript on the client. But the idea was that you could install new apps easily. This means that you have to be able to generate a new working host name at the time of app installation. So unless you have a programmatic way to configure DNS on the fly and have certificates minted on the fly, neither of which you can currently realistically have for a home server, you need a wildcard in the DNS zone and you need a wildcard TLS certificate. Correction 2015-01-24: Sandstorm.io instead uses one hostname and obscure URLs, which is understandable. Despite being understandable, it is sad, since it loses both human-facing semantics of the URLs and browser-enforced privilege separation between the apps. To provide Origin-based privilege separation on the browser side, Sandstorm.io generates hostnames that do not look meaningful to the user and hides them in an iframe, but the URLs shown for the top-level origin in the URL bar are equally obscure. I find it unfortunate that Sandstorm.io does not mint human-friendly URL bar origins with the app names when it is capable of minting origins. (Instead of https://etherpad.example.org/example-document-title and https://ethercalc.example.org/example-spreadsheet-title, you get https://example.org/grain/FcTdrgjttPbhAzzKSv6ESD and https://example.org/grain/o96ouPLKQMEMZkFxNKf2Dr.) Fortunately, Let’s Encrypt seems to be on track to solving the certificate side of this problem by making it easy to get a cert for a newly-minted hostname signed automatically. Even so, the DNS part needs to be made easy enough that it doesn’t remain a blocker for self-hosting a box that allows on-demand Web app installation with browser-side app privilege separation.
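
To make the Origin point concrete, here is a small Python illustration using the example URLs above; an Origin is just the (scheme, host, port) triple, so the grain-style URLs share one origin while the per-app host names do not:

```python
# Small illustration of the Origin triple (scheme, host, port) using the
# example URLs from the text.
from urllib.parse import urlsplit

def origin(url):
    parts = urlsplit(url)
    port = parts.port or (443 if parts.scheme == "https" else 80)
    return (parts.scheme, parts.hostname, port)

# Grain-style URLs under a single host share one origin, so the browser
# cannot isolate the two apps from each other:
assert origin("https://example.org/grain/FcTdrgjttPbhAzzKSv6ESD") == \
       origin("https://example.org/grain/o96ouPLKQMEMZkFxNKf2Dr")

# Per-app host names yield distinct origins, so it can:
assert origin("https://etherpad.example.org/example-document-title") != \
       origin("https://ethercalc.example.org/example-spreadsheet-title")
```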

Conclusion

There are lots of subproblems to work on, but, fortunately, things don’t seem fundamentally impossible. Interestingly, the problem with software that resides on the phone may be the relatively easy part to solve. That is not to say that it is easy to solve, but once solved, it can scale to a lot of users without the users having to do special things to get started in the role of a user who does not exercise the freedom to modify the system. However, since users these days are not satisfied by merely device-resident software but want things to work across multiple devices, the server-side part is relevant and harder to scale. Somewhat paradoxically, the hardest thing to scale in a usable way seems like a triviality on the surface: the addressing of the server-side part in a way that gives sovereignty to users.

Pascal FinetteWeekend Link Pack (Jan 24)

What I was reading this week:

Benjamin KerensaGet a free U2F Yubikey to test on Firefox Nightly

Passwords are always going to be vulnerable to being cracked. Fortunately, there are solutions out there that are making it safer for users to interact with services on the web. The new standard in protecting users is Universal 2nd Factor (U2F) authentication, which is already available in browsers like Google Chrome.

Mozilla currently has a bug open to start the work necessary to deliver U2F support to people around the globe and bring Firefox into parity with Chrome by offering this excellent new feature to users.

I recently reached out to the folks at Yubico who are very eager to see Universal 2nd Factor (U2F) support in Firefox. So much so that they have offered me the ability to give out up to two hundred Yubikeys with U2F support to testers and will ship them directly to Mozillians regardless of what country you live in so you can follow along with the bug we have open and begin testing U2F in Firefox the minute it becomes available in Firefox Nightly.

If you are a Firefox Nightly user and are interested in testing U2F, please use this form (offer now closed) and apply for a code to receive one of these Yubikeys for testing. (This is only available to Mozillians who use Nightly and are willing to help report bugs and test the patch when it lands)

Thanks again to the folks at Yubico for supporting U2F in Firefox!

Update: This offer is now closed. Check your email for a code or a request to verify you are a vouched Mozillian! We also got more requests than we had available, so only the first two hundred will be fulfilled!

Mozilla WebDev CommunityBeer and Tell – January 2015

Once a month, web developers from across the Mozilla Project get together to trade and battle Pokémon. While we discover the power of friendship, we also find time to talk about our side projects and drink, an occurrence we like to call “Beer and Tell”.

There’s a wiki page available with a list of the presenters, as well as links to their presentation materials. There’s also a recording available courtesy of Air Mozilla.

Michael Kelly: gamegirl

Our first presenter, Osmose (that’s me!), shared a Gameboy emulator, written in Python, called gamegirl. The emulator itself is still very early in development and only has a few hundred CPU instructions implemented. It also includes a console-based debugger for inspecting the Gameboy state while executing instructions, powered by urwid.
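
For readers curious about the general shape of such a project, here is a generic sketch of an opcode dispatch table for a Game Boy CPU core in Python; it is not gamegirl’s actual code, just an illustration of the pattern:

```python
# Not gamegirl's actual code: a generic sketch of how a Game Boy CPU core's
# opcode dispatch might be organized in Python.
class CPU:
    def __init__(self, memory):
        self.memory = memory
        self.pc = 0x0100          # Game Boy entry point after the boot ROM
        self.a = 0                # accumulator
        self.opcodes = {
            0x00: self.op_nop,
            0x3E: self.op_ld_a_d8,
        }

    def op_nop(self):
        return 4                  # cycles consumed

    def op_ld_a_d8(self):
        self.a = self.memory[self.pc]
        self.pc += 1
        return 8

    def step(self):
        opcode = self.memory[self.pc]
        self.pc += 1
        handler = self.opcodes.get(opcode)
        if handler is None:
            raise NotImplementedError(f"opcode {opcode:#04x}")
        return handler()

# Example: run two instructions from a tiny fake ROM image.
rom = bytearray(0x8000)
rom[0x0100:0x0103] = bytes([0x00, 0x3E, 0x42])   # NOP; LD A, 0x42
cpu = CPU(rom)
cpu.step(); cpu.step()
assert cpu.a == 0x42
```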

Luke Crouch: Automatic Deployment to Heroku using TravisCI and Github

Next, groovecoder shared some wisdom about his new favorite continuous deployment setup. The setup involves hosting your code on Github, running continuous integration using Travis CI, and hosting the site on Heroku. Travis supports deploying your app to Heroku after a successful build, and groovecoder uses this to deploy his master branch to a staging server.

Once the code is ready to go to production, you can make a pull request to a production branch on the repo. Travis can be configured to deploy to a different app for each branch, so once that pull request is merged, the site is deployed to production. In addition, the pull request view gives a good overview of what’s being deployed. Neat!

This system is in use on codesy, and you can check out the codesy Github repo to see how they’ve configured their project to deploy using this pattern.

Peter Bengtsson: django-screencapper

Friend of the blog peterbe showed off django-screencapper, a microservice that generates screencaps from video files using ffmpeg. Developed as a test to see if generating AirMozilla icons via an external service was viable, it queues incoming requests using Alligator and POSTs the screencaps to a callback URL once they’ve been generated.

A live example of the app is available at http://screencapper.peterbe.com/receiver/.
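
The core pattern described above is roughly “extract a frame with ffmpeg, then POST it to the callback URL”. Below is a hand-wavy standalone sketch of that pattern (not django-screencapper’s code, and omitting the Alligator queueing); it assumes ffmpeg is installed and uses placeholder paths and URLs:

```python
# Generic sketch of the screencap-and-callback pattern described above,
# not django-screencapper's code. Assumes ffmpeg is installed; the paths
# and URL below are placeholders.
import subprocess
import urllib.request

def screencap_and_post(video_path, callback_url, at_seconds=5):
    out_path = "/tmp/screencap.jpg"
    # Seek into the video and write exactly one frame as a JPEG.
    subprocess.run(
        ["ffmpeg", "-y", "-ss", str(at_seconds), "-i", video_path,
         "-frames:v", "1", out_path],
        check=True,
    )
    with open(out_path, "rb") as f:
        req = urllib.request.Request(
            callback_url,
            data=f.read(),
            headers={"Content-Type": "image/jpeg"},
            method="POST",
        )
    return urllib.request.urlopen(req).status

# Example (placeholder arguments):
# screencap_and_post("/videos/talk.mp4", "https://example.com/receiver/")
```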

tofumatt: i-dont-like-open-source AKA GitHub Contribution Hider

Motorcycle enthusiast tofumatt hates the Github contributor streak graph. To be specific, he hates the one on his own profile; it’s distracting and leads to bad behavior and imposter syndrome. To save himself and others from this terror, he created a Firefox add-on called the GitHub Contribution Hider that hides only the contribution graph on your own profile. You can install the add-on by visiting its addons.mozilla.org page. Versions of the add-on for other browsers are in the works.


Fun fact: The power of friendship cannot, in fact, overcome type weaknesses.

If you’re interested in attending the next Beer and Tell, sign up for the dev-webdev@lists.mozilla.org mailing list. An email is sent out a week beforehand with connection details. You could even add yourself to the wiki and show off your side-project!

See you next month!

Adam LoftingWeeknotes: 23 Jan 2015

unrelated photo

I managed to complete roughly five of my eleven goals for the week.

  • Made progress on (but have not cracked) daily task management for the newly evolving systems
  • Caught up on some email from time off, but still a chunk left to work through
  • Spent more time writing code than expected
  • Illness this week slowed me down
  • These aren’t very good weeknotes, but perhaps better than none.

 

Jess KleinDino Dribbble

The newly created Mozilla Foundation design team started out with a bang (or maybe I should say rawr) with our very first collaboration: a team debut on dribbble. Dribbble describes itself as a show and tell community for designers. I have not participated in this community yet but this seemed like a good moment to join in. For our debut shot, we decided to have some fun and plan out our design presence. We ultimately decided to go in a direction designed by Cassie McDaniel.

The concept was for us to break apart the famed Shepard Fairey Mozilla dinosaur into quilt-like tiles.

 
Each member of the design team was assigned a tile or two and given a shape. This is the one I was assigned:
I turned that file into this:

We all met together in a video chat to upload our images on to the site.

Anticipation was building as we uploaded each shot one by one:
But the final reveal made it worth all the effort! 

Check out our new team page on dribbble. rawr!

Cassie also wrote about the exercise on her blog and discussed the open position for a designer to join the team.



Mozilla Reps CommunityReps Weekly Call – January 22nd 2015

Last Thursday we had our weekly call about the Reps program, where we talk about what’s going on in the program and what Reps have been doing during the last week.


Summary

  • Dashboard QA and UI.
  • Community Education.
  • Feedback on reporting.
  • Participation plan and Grow meeting.
  • Womoz Badges.

Detailed notes

AirMozilla video

Don’t forget to comment about this call on Discourse and we hope to see you next week!

Rubén MartínWe have to fight again for the web

Spanish version: “Debemos volver a pelear por la web”

It’s interesting to see how history repeats itself and how we repeat the same mistakes over and over again.

I started contributing to Mozilla in 2004. At that time Internet Explorer had 95% of the market share. That meant there was absolutely no way you could create a website not “adapted” to this browser, and no way you could surf the full web with other browsers because a lot of sites used ActiveX and other IE-only non-standard technologies.

The web was Internet Explorer, Internet Explorer was the web, you had no choice.

We fought hard (really hard) from Mozilla and other organizations to bring user choice and open other ways to understand the web. People understood this and the web changed to an open diverse ecosystem where standards were the way to go and where everyone was able to be part of it, both users and developers.

Today everything we fought for is at risk.

It’s not the first time we have seen sites decide to offer their content for just one browser, sometimes using technologies that are not standards and only work there, but sometimes using technologies that are standards and blocking other browsers for no reason.

Business is business.

If we don’t want a web controlled again by a few, driven by stockholders’ interests and not by users, we have to stand up. We have to call out sites that try to hijack user choice by asking users to use one browser to access their content.


The web should run everywhere and users should be free to choose the browser they think best serve their interests/values.

I truly believe that Mozilla, as a non-profit organization, is still the only one that can provide an independent choice to users and balance the market to avoid the walled gardens some dream of building.

Don’t lower your guard, let’s not repeat the same mistakes again, let’s fight again for the web.

PS: If you want a deeper analysis of the Whatsapp web fiasco, I recommend the post by my friend André: “Whatsapp doesn’t understand the web”.

Ehsan AkhgariRunning Microsoft Visual C++ 2013 under Wine on Linux

The Wine project lets you run Windows programs on other operating systems, such as Linux.  I spent some time recently trying to see what it would take to run Visual C++ 2013 Update 4 under Linux using Wine.

The first thing that I tried to do was to run the installer, but that unfortunately hits a bug and doesn’t work.  After spending some time looking into other solutions, I came up with a relatively decent solution which seems to work very well.  I put the instructions up on github if you’re interested, but the gist is that I used a Chromium depot_tools script to extract the necessary files for the toolchain and the Windows SDK, which you can copy to a Linux machine and with some DLL loading hackery you will get a working toolchain.  (Note that I didn’t try to run the IDE, and I strongly suspect that will not work out of the box.)

This should be the entire toolchain that is necessary to build Firefox for Windows under Linux.  I already have some local hacks which help us get past the configure script, hopefully this will enable us to experiment with using Linux to be able to build Firefox for Windows more efficiently.  But there is of course a lot of work yet to be done.
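
As a rough illustration of the wrapper idea only (the actual instructions live in the GitHub repository mentioned above; the extraction paths below are placeholders), invoking the extracted cl.exe through Wine could look something like this:

```python
# Rough sketch of the general idea only; the real instructions are on GitHub.
# Paths to the extracted toolchain and SDK are hypothetical placeholders.
import os
import subprocess
import sys

TOOLCHAIN = "/opt/vs2013/VC/bin/amd64"          # hypothetical extraction path
INCLUDES = ["/opt/vs2013/VC/include", "/opt/vs2013/sdk/Include"]
LIBS = ["/opt/vs2013/VC/lib/amd64", "/opt/vs2013/sdk/Lib/x64"]

def run_cl(args):
    env = dict(os.environ)
    # cl.exe and link.exe read their search paths from INCLUDE and LIB.
    env["INCLUDE"] = ";".join(INCLUDES)
    env["LIB"] = ";".join(LIBS)
    return subprocess.call(["wine", os.path.join(TOOLCHAIN, "cl.exe")] + args,
                           env=env)

if __name__ == "__main__":
    sys.exit(run_cl(sys.argv[1:]))
```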

Armen ZambranoBacked out - Pinning for Mozharness is enabled for the fx-team integration tree

EDIT: We had to back out this change since it caused issues for PGO talos jobs. We will try again after further testing.

Pinning for Mozharness [1] has been enabled for the fx-team integration tree.
Nothing should be changing. This is a no-op change.

We're still using the default mozharness repository and the "production" branch is what is being checked out. This has been enabled on Try and Ash for almost two months and all issues have been ironed out. You can tell if a job is using pinning of Mozharness if you see "repostory_manifest.py" in its log.

If you notice anything odd please let me know in bug 1110286.

If by Monday we don't see anything odd happening, I would like to enable it for mozilla-central for a few days before enabling it on all trunk trees.

Again, this is a no-op change, however, I want people to be aware of it.


Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Doug BelshawOntology, mentorship and web literacy

This week’s Web Literacy Map community call was fascinating. They’re usually pretty interesting, but today’s was particularly good. I’m always humbled by the brainpower that comes together and concentrates on something I spend a good chunk of my life thinking about!

Brain

I’ll post an overview of the entire call on the Web Literacy blog tomorrow, but I wanted to quickly zoom out and focus on things that Marc Lesser and Jess Klein were discussing during the call. Others mentioned really useful stuff too, but I don’t want to turn this into an epic post!


Marc

Marc reminded us of Clay Shirky’s post entitled Ontology is Overrated: Categories, Links, and Tags. It’s a great read, but the point Marc wanted to extract is that pre-defined ontologies (i.e. ways of classifying things) are kind of outdated now that we have the Internet:

No filesystem

In the Web 2.0 era (only 10 years ago!) this was called a folksonomic approach. I remembered that I actually used Shirky’s post in one of my own posts for DMLcentral a couple of years ago. To quote myself:

The important thing here is that we – Mozilla and the community – are creating a map of the territory. There may be others and we don’t pretend we’re not making judgements. We hope it’s “useful in the way of belief” (as William James would put it) but realise that there are other ways to understand and represent the skills and competencies required to read, write and participate on the Web.

Given that we were gently chastised at the LRA conference for having an outdated approach to representing literacies, we should probably think more about this.


Jess

Meanwhile, Jess was talking about the Web Literacy Map as an ‘API’ upon which other things could be built. I reminded her of the WebLitMapper, a prototype I suggested and Atul Varma built last year. The WebLitMapper allows users to tag resources they find around the web using competencies from the Web Literacy Map.

This, however, was only part of what Jess meant (if I understood her correctly). She was interested in multiple representations of the map, kind of like these examples she put together around learning pathways. This would allow for the kind of re-visualisations of the Web Literacy Map that came out of the MozFest Remotee Challenge:

Re-visualising the Web Literacy Map

Capturing the complexity of literacy skill acquisition and development is particularly difficult given the constraints of two dimensions. It’s doubly difficult if the representation has to be static.

Finally from Jess (for the purposes of this post, at least) she reminded us of some work she’d done around matching mentors and learners:

MentorN00b


Conclusion

The overwhelming feeling on the call was that we should retain the competency view of the Web Literacy Map for v1.5. It’s familiar, and helps with adoption:

Web Literacy Map v1.1

However, this decision doesn’t prevent us from exploring other avenues combining learning pathways, badges, and alternative ways to representing the overall skill/competency ecosystem. Perhaps diy.org/skills can teach us a thing or two?


Questions? Comments? Tweet me (@dajbelshaw) or email me (doug@mozillafoundation.org).

Florian QuèzeProject ideas wanted for Summer of Code 2015

Google is running Summer of Code again in 2015. Mozilla has had the pleasure of participating every year so far, and we are hoping to participate again this year. In the next few weeks, we need to prepare a list of suitable projects to support our application.

Can you think of a 3-month coding project you would love to guide a student through? This is your chance to get a student focusing on it for 3 months! Summer of Code is a great opportunity to introduce new people to your team and have them work on projects you care about but that aren't on the critical path to shipping your next release.

Here are the conditions for the projects:

  • completing the project should take roughly 3 months of effort for a student;
  • any part of the Mozilla project (Firefox, Firefox OS, Thunderbird, Instantbird, SeaMonkey, Bugzilla, L10n, NSS, IT, and many more) can submit ideas, as long as they require coding work;
  • there is a clearly identified mentor who can guide the student through the project.


If you have an idea, please put it on the Brainstorming page, which is our idea development scratchpad. Please read the instructions at the top – following them vastly increases the chances of your idea getting added to the formal Ideas page.

The deadline to submit project ideas and help us be selected by Google is February 20th.

Note for students: the student application period starts on March 16th, but the sooner you start discussing project ideas with potential mentors, the better.

Please feel free to discuss with me any question you may have related to Mozilla's participation in Summer of Code. Generic Summer of Code questions are likely already answered in the FAQ.

Benoit GirardGecko Bootcamp Talks

Last summer we held a short bootcamp crash course for Gecko. The talks have been posted to air.mozilla.org and collected under the TorontoBootcamp tag. The talks are about an hour each but will be very informative to some. They are aimed at people wanting a deeper understanding of Gecko.

View the talks here: https://air.mozilla.org/search/?q=tag%3A+TorontoBootcamp

Gecko Pipeline

In the talks you’ll find my first talk covering an overall discussion of the pipeline, what stages run when and how to skip stages for better performance. Kannan’s talk discusses Baseline, our first tier JIT. Boris’ talk discusses Restyle and Reflow. Benoit Jacob’s talk discusses the graphics stack (Rasterization + Compositing + IPC layer) but sadly the camera is off center for the first half. Jeff’s talk goes into depth into Rasterization, particularly path drawing. My second talk discusses performance analysis in Gecko using the gecko profiler where we look at real profiles of real performance problems.

I’m trying to locate two more videos about layout and graphics that were given at another session: one elaborating more on the DisplayList/Layer Tree/Invalidation phase and another on Compositing.


Matt ThompsonMozilla Learning in 2015: our vision and plan

This post is a shortened, web page version of the 2015 Mozilla Learning plan we shared back in December. Over the next few weeks, we’ll be blogging and encouraging team and community members to post their reflections and details on specific pieces of work in 2015 and Q1. Please post your comments and questions here — or get more involved.

Surman Keynote for Day One - Edit.002

Within ten years, there will be five billion citizens of the web.

Mozilla wants all of these people to know what the web can do. What’s possible. We want them to have the agency, skills and know-how they need to unlock the full power of the web. We want them to use the web to make their lives better. We want them to know they are citizens of the web.

Mozilla Learning is a portfolio of products and programs that helps people learn how to read, write and participate in the digital world.

Building on Webmaker, Hive and our fellowship programs, Mozilla Learning is a portfolio of products and programs that help these citizens of the web learn the most important skills of our age: the ability to read, write and participate in the digital world. These programs also help people become mentors and leaders: people committed to teaching others and to shaping the future of the web.

Mark Surman presents the Mozilla Learning vision and plan in Portland, Dec 2014

Three-year vision

By 2017, Mozilla will have established itself as the best place to learn the skills and know-how people need to use the web in their lives, careers and organizations. We will have:

  • Educated and empowered users by creating tools and curriculum for learning how to read, write and participate on the web. Gone mainstream.
  • Built leaders, everywhere by growing a global cadre of educators, researchers, coders, etc. who do this work with us. We’ve helped them lead and innovate.
  • Established the community as the classroom by improving and explaining our experiential learning model: learn by doing and innovating with Mozilla.

At the end of these three years, we may have established something like a “Mozilla University” — a learning side of Mozilla that can sustain us for many decades. Or, we may simply have a number of successful learning programs. Either way, we’ll be having impact.

We may establish something like a “Mozilla University” — a learning side of Mozilla that can sustain us for many decades.

Surman Keynote for Day One - Edit.008
2015 Focus

1) Learning Networks 2) Learning Products 3) Leadership Development

Our focus in 2015 will be to consolidate, improve and focus what we’ve been building for the last few years. In particular we will:

  • Improve and grow our local Learning Networks (Hive, Maker Party, etc).
  • Build up an engaged user base for our Webmaker Learning Products on mobile and desktop.
  • Prototype a Leadership Development program, and test it with fellows and ReMo.

The short-term goal is to make each of our products and programs succeed in its own right in 2015. However, we also plan to craft a bigger Mozilla Learning vision that these products and programs can feed into over time.

A note on brand

Mozilla Learning is notional at this point. It’s a stake in the ground that says:

Mozilla is in the learning and empowerment business for the long haul.

In the short term, the plan is to use “Mozilla Learning” as an umbrella term for our community-driven learning and leadership development initiatives — especially those run by the Mozilla Foundation, like Webmaker and Hive. It may also grow over time to encompass other initiatives, like the Mozilla Developer Network and leadership development programs within the Mozilla Reps program. In the long term: we may want to a) build out a lasting Mozilla learning brand (“Mozilla University?”), or b) build making and learning into the Firefox brand (e.g., “Firefox for Making”). Developing a long-term Mozilla Learning plan is an explicit goal for 2015.

What we’re building

Practically, the first iteration of Mozilla Learning will be a portfolio of products and programs we’ve been working on for a number of years: Webmaker, Hive, Maker Party, Fellowship programs, community labs. Pulled together, these things make up a three-layered strategy we can build more learning offerings around over time.

  1. The Learning Networks layer is the most developed piece of this picture, with Hives and Maker Party hosts already in hundreds of cities around the world.
  2. The Learning Products layer involves many elements of the Webmaker.org work, but will be relaunched in 2015 to focus on a mass audience.
  3. The Leadership Development piece has strong foundations, but a formal training element still needs to be developed.

Scope and scale

One of our goals with Mozilla Learning is to grow the scope and scale of Mozilla’s education and empowerment efforts. The working theory is that we will create an interconnected set of offerings that range from basic learning for large numbers of people, to deep learning for key leaders who will help shape the future of the web (and the future of Mozilla).

We want to increase the scope and diversity of how people learn with Mozilla.

We’ll do that by building opportunities for people to get together to learn, hack and invent in cities on every corner of the planet, and by creating communities that help people working in fields like science, news and government figure out how to tap into the technology and culture of the web in their own lives, organizations and careers. The plan is to elaborate and test out this theory in 2015 as part of the Mozilla Learning strategy process. (Additional context on this here: http://mzl.la/depth_and_scale.)

Contributing to Mozilla’s overall 2015 KPIs

How will we contribute to Mozilla’s top-line goals? In 2015, we’ll measure success through two key performance indicators: relationships and reach.

  • Relationships: 250K active Webmaker users
  • Reach: 500 cities with ongoing Learning Network activity

Learning Networks

In 2015, we will continue to grow and improve the impact of our local Learning Networks.

  • Build on the successful ground game we’ve established with teachers and mentors under the Webmaker, Hive and Maker Party banners.
  • Evolve Maker Party into year-round activity through Webmaker Clubs.
  • Establish deeper presence in new regions, including South Asia and East Africa.
  • Improve the websites we use to support teachers, partners, clubs and networks.
  • Sharpen and consolidate teaching tools and curriculum built in 2014. Package them on their own site, “teach.webmaker.org.”
  • Roll out large-scale, extensible community-building software to run Webmaker Clubs.
  • Empower more people to start Hive Learning Networks by improving documentation and support.
  • Expand scale, rigour and usability of curriculum and materials to help people better mentor and teach.
  • Expand and improve trainings online and in-person for mentors.
  • Recruit more partners to increase reach and scope of networks.

Learning Products

Grow a base of engaged desktop and mobile users for Webmaker.

  • Expand our platform to reach a broad market of learners directly.
  • Mobile & Desktop: Evolve current tools into a unified Webmaker making and learning platform for desktop, Firefox OS and Android.
  • Tablet: Build on our existing web property to address tablet browser users and ensure viability in classrooms.
  • Firefox: Experiment with ways to integrate Webmaker directly into Firefox.
  • Prioritize mobile. Few competitors here, and the key to emerging markets growth.
  • Lower the bar. Build user onboarding that gets people making and learning quickly.
  • Engagement. Create sticky engagement. Build mentorship, online mentoring and social into the product.

Leadership Development

Develop a leadership development program, building off our existing Fellows programs.

  • Develop a strategy and plan. Document the opportunity, strategy and scope. Figure out how this leadership development layer could fit into a larger Mozilla Learning / Mozilla University vision.
  • Build a shared definition of what it means to be a ‘fellow’ at Mozilla. Empower emerging leaders to use Mozilla values and methods in their own work.
  • Figure out the “community as labs” piece. How we innovate and create open tech along the way.
  • Hire leadership. Create an executive-level role to lead the strategy process and build out the program.
  • Test pilot programs. Develop a handbook / short course for new fellows.
  • Test with fellows and ReMo. Consider expanding fellows programs for science, web literacy and computer science research.

Get involved
  • Learn more. There’s much more detail on the Learning Networks, Learning Products and Leadership Development pieces in the complete Mozilla Learning plan.
  • Get involved. There are plenty of easy ways to get involved with Webmaker and our local Learning Networks today.
  • Get more hands-on. Want to go deeper? Get hands-on with code, curriculum, planning and more through build.webmaker.org