
Mozilla Fundraising: Should we put payment provider options directly on the snippet?

While our End of Year (EOY) fundraising campaign is finished, we still have a few updates to share with you. This post documents one of the A/B tests we ran during the campaign.

Nick Desaulniers: Writing my first book (chapter)

There’s a feeling of immense satisfaction when we complete a major achievement. Being able to say “it’s done” is such a great stress relief. Recently, I completed work on my first publication: a chapter about Emscripten for the upcoming book WebGL Insights, to be published by CRC Press in time for SIGGRAPH 2015.

One of the life goals I’ve had for a while is writing a book. It seems a romantic idea to have your thoughts transcribed to a medium that will outlast your bones. It’s enamoring to hold books from long dead authors and see that their ideas are still valid and powerful. Being able to write a book, in my eyes, provides some form of life after death. Then again, one could imagine descendants reading blog posts from long dead relatives via utilities like the Internet Archive’s Wayback Machine.

Writing highly technical content places an upper limit on its useful lifespan; it shows as “dated” quickly. A book I recently ordered was Scott Meyers’ Effective Modern C++. The title strikes me, because what exactly do we consider modern or contemporary? Those adjectives only make sense in a time-limited context. When C++ undergoes another revolution, Scott’s book may become irrelevant, at which point the adjective modern becomes incorrect. Not that I think Scott’s book or my own is time-limited in usefulness; more that technical books’ duration of usefulness is significantly shorter than that of philosophical works like 1984 or Brave New World. It’s almost like holding a record in a sport: a feather in one’s cap until the next best thing comes along and you’re forgotten to time.

Somewhat short of my goal of writing an entire book, I only wrote a single chapter for one. Interestingly, a lot of graphics programming books follow the format of one author per chapter, or at least multiple authors; book series such as GPU Gems, ShaderX, and GPU Pro all follow this pattern. After seeing how much work goes into one chapter, I think I’m content with not writing an entire book, though I may revisit that decision later in life.

How did this all get started? I followed Graham Sellers on Twitter and saw a tweet from him about a call for authors for WebGL Insights. The linked page explicitly listed interest in proposals about Emscripten and asm.js.

At the time, I was headlong into a project helping Disney port Where’s My Water from C++ to JavaScript using Emscripten. I was intimately familiar with Emscripten, having been trained by one of its most prolific contributors, Jukka Jylänki. Also, Emscripten’s creator, Alon Zakai, sat on the other side of the office from me, so I was constantly pestering him about how to do different things with Emscripten. The #emscripten IRC channel on irc.mozilla.org is very active, but there’s no substitute for a second pair of eyes looking over your shoulder when something is going wrong.

Knowing Emscripten’s strengths and limitations, seeing interest in a subject I knew a bit about (but wouldn’t consider myself an expert in), and having the goal of writing something to be published in book form, this was my opportunity to seize.

I wrote up a quick proposal with a few figures about why Emscripten was important and how it worked, and sent it off with fingers crossed. I was overjoyed to learn that my proposal was accepted, but then came the slow realization that I had a lot of work to do. The editor, Patrick Cozzi, set up a GitHub repo for our additional code and figures, a mailing list, and a chapter template document detailing the process. We had six weeks to write the rough draft, then six weeks to work with reviewers to get the chapter done. The chapter was written as a Google Doc, so that we could have explicit control over who we shared the document with and what kinds of editing power they had over it. I think this approach worked well.

I had most of the content written by week two. This was surprising to me, because I’m a heavy procrastinator. The only issue was that I had written double the allowed page count. I was worried about the amount of content, but told myself not to get attached to it, just as you shouldn’t stay attached to your code.

I used the four weeks left before the rough draft was due to invite some of my friends and coworkers to provide feedback. It’s useful to have a short list of people who have offered to help in this regard or who owe you a favor. You’ll also want a diverse set of reviewers who are either close to the subject matter or approaching it as new information. This allows you to stay technically correct while not presuming your readers know everything you do.

The strategy worked out well; some of the content I had initially written about how JavaScript VMs and JITs speculate on types was flat out wrong. While it played nicely into the narrative I was weaving, someone better versed in JavaScript virtual machines would have been able to call BS on my work. The reviewers who weren’t as close to the subject matter were able to point out when logical progressions did not follow.

Fear of being publicly corrected prevents a lot of people from blogging or contributing to open source. It’s important not to stay attached to your work, especially when you need to make cuts. Even so, when push came to shove, I did have difficulty removing sections.

Let’s say you have three sequential sections: A, B, and C. If sections A and B both set up section C, and someone tells you section B has to go, it can be difficult to cut B because, as the author, you may think it’s really important as a lead-in to C. My recommendation: sum up the most important idea from section B and add it to the end of section A.

For the last six weeks, the editor, some invited third parties, and the other authors reviewed my chapter. It was great that others followed along and pointed out when I was making assumptions based on a specific compiler or browser. Eric Haines even reviewed my chapter! That was definitely a highlight for me.

We used a Google Sheet to keep track of the state of reviews. Reviewers were able to comment on sections of the chapter. What was nice was that each comment became a thread, so you could respond directly to a criticism. What didn’t work so well was that once you edited the commented line, the comment, and thus the thread, was lost.

Once everything was done, we zipped up the assets to be used as figures, submitted bios, and wrote a tips and tricks section. Now, it’s just a long waiting game until the book is published.

As far as dealing with the publisher, I didn’t have much interaction. Since the book was assembled by a dedicated editor, Patrick did most of the legwork. I only asked that any royalties I would receive be donated to Mozilla, which the publisher said would be too small (an estimated $250) to be worth the paperwork. I’d advise against writing a technical book solely for monetary benefit. I’m excited to be receiving a hardcover copy of the book when it’s published. I’ll also have to see if I can find my way to SIGGRAPH this year; I’d love to meet my fellow authors and potential readers in person. The list of authors really is a who’s-who of folks doing cool WebGL stuff.

If you’re interested in learning more about working with Emscripten, asm.js, and WebGL, I suggest you pick up a copy of WebGL Insights in August when it’s published. A big thank you to my reviewers: Eric Haines, Havi Hoffman, Jukka Jylänki, Chris Mills, Traian Stanev, Luke Wagner, and Alon Zakai.

So that was a little bit about my first experience with authorship. I’d be happy to follow up with any further questions you might have for me. Let me know in the comments below, on Twitter, HN, or wherever and I’ll probably find it!

Gregory Szorc: Automatic Python Static Analysis on MozReview

A bunch of us were in Toronto last week hacking on MozReview.

One of the cool things we did was deploy a bot for performing Python static analysis. If you submit some .py files to MozReview, the bot should leave a review. If it finds violations (it uses flake8 internally), it will open an issue for each violation. It also leaves a comment that should hopefully give enough detail on how to fix the problem.

While we haven't done much in the way of performance optimizations, the bot typically submits results less than 10 seconds after the review is posted! So, a human should never be reviewing Python that the bot hasn't seen. This means you can stop thinking about style nits and start thinking about what the code does.

This bot should be considered an alpha feature. The code for the bot isn't even checked in yet. We're running the bot against production to get a feel for how it behaves. If things don't go well, we'll turn it off until the problems are fixed.

We'd like to eventually deploy C++, JavaScript, etc bots. Python won out because it was the easiest to integrate (it has sane and efficient tooling that is compatible with Mozilla's code bases - most existing JavaScript tools won't work with Gecko-flavored JavaScript, sadly).

I'd also like to eventually make it easier to locally run the same static analysis we run in MozReview. Addressing problems locally before pushing is a no-brainer since it avoids needless context switching from other people and is thus better for productivity. This will come in time.
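In the meantime, here is a rough sketch of what an equivalent local check could look like; it assumes flake8 is installed (pip install flake8) and uses flake8's default rules, since the bot's code and configuration aren't checked in yet:

    import subprocess
    import sys

    def lint(paths):
        """Run flake8 over the given paths, approximating the bot's check."""
        result = subprocess.run(
            ["flake8", *paths],
            capture_output=True,
            text=True,
        )
        # flake8 prints one line per violation: path:line:col: code message
        if result.stdout:
            print(result.stdout, end="")
        return result.returncode  # non-zero when violations were found

    if __name__ == "__main__":
        sys.exit(lint(sys.argv[1:] or ["."]))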

Report issues in #mozreview or in the Developer Services :: MozReview Bugzilla component.

Gregory Szorc: End to End Testing with Docker

I've written an extensive testing framework for Mozilla's version control tools. Despite it being a little rough around the edges, I'm a bit proud of it.

When you run tests for MozReview, Mozilla's heavily modified Review Board code review tool, the following things happen:

  • A MySQL server is started in a Docker container.
  • A Bugzilla server (running the same code as bugzilla.mozilla.org) is started on an Apache httpd server with mod_perl inside a Docker container.
  • A RabbitMQ server mimicking pulse.mozilla.org is started in a Docker container.
  • A Review Board Django development server is started.
  • A Mercurial HTTP server is started.

In the future, we'll likely also need to add support for various other services to support MozReview and other components of version control tools:

  • The Autoland HTTP service will be started in a Docker container, along with any other requirements it may have.
  • An IRC server will be started in a Docker container.
  • ZooKeeper and Kafka will be started in multiple Docker containers.

The entire setup is pretty cool. You have actual services running on your local machine. Mike Conley and Steven MacLeod even did some pair coding of MozReview while on a plane last week. I think it's pretty cool this is even possible.

There is very little mocking in the tests. If we need an external service, we try to spin up an instance inside a local container. This way, we can't have unexpected test successes or failures due to bugs in mocking. We have very high confidence that if something works against local containers, it will work in production.

I currently have each test file owning its own set of Docker containers and processes. This way, we get full test isolation and can run tests concurrently without race conditions. This drastically reduces overall test execution time and makes individual tests easier to reason about.

As cool as the test setup is, there's a bunch I wish were better.

Spinning up and shutting down all those containers and processes takes a lot of time. We're currently sitting at around 8s startup time and 2s shutdown time. 10s of overhead per test is unacceptable. When I make a one line change, I want the tests to be instantaneous. 10s is too long for me to sit idly by. Unfortunately, I've already gone to great pains to make test overhead as short as possible. Fig wasn't good enough for me for various reasons. I've reimplemented my own orchestration directly on top of the docker-py package to achieve some significant performance wins. Using concurrent.futures to perform operations against multiple containers concurrently was a big win. Bootstrapping containers (running their first-run entrypoint scripts and committing the result to be used later by tests) was a bigger win (the first run of Bugzilla takes 20-25 seconds).
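As an illustration of that pattern, here is a minimal sketch written against today's docker SDK for Python (the successor to the docker-py API mentioned above); the images are placeholders, and the real services need configuration that is elided here:

    import concurrent.futures

    import docker

    client = docker.from_env()

    # Placeholder images; the real services need env vars, volumes and
    # pre-committed first-run state, all elided here.
    IMAGES = ["mysql:5.7", "rabbitmq:3", "httpd:2.4"]

    def start(image):
        # detach=True returns immediately with a Container handle.
        return client.containers.run(image, detach=True)

    def teardown(container):
        container.stop(timeout=2)
        container.remove()

    # Start one test's containers concurrently instead of serially.
    with concurrent.futures.ThreadPoolExecutor() as pool:
        containers = list(pool.map(start, IMAGES))

    # ... run the test against the running services here ...

    # Tear down concurrently as well.
    with concurrent.futures.ThreadPoolExecutor() as pool:
        list(pool.map(teardown, containers))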

I'm at the point of optimizing startup where the longest pole is the initialization of the services inside Docker containers themselves. MySQL takes a few seconds to start accepting connections. Apache + Bugzilla has a semi-involved initialization process. RabbitMQ takes about 4 seconds to initialize. There are some cascading dependencies in there, so the majority of startup time is waiting for processes to finish their startup routine.
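That waiting generally amounts to polling each service until it accepts connections. A simple readiness check, shown here as a sketch rather than the harness's actual code, could look like this:

    import socket
    import time

    def wait_for_port(host, port, timeout=30.0):
        """Poll until a TCP service accepts connections, or raise."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            try:
                with socket.create_connection((host, port), timeout=1.0):
                    return
            except OSError:
                time.sleep(0.1)
        raise RuntimeError("%s:%d not ready after %.0fs" % (host, port, timeout))

    # e.g. block until the MySQL container is accepting connections.
    wait_for_port("127.0.0.1", 3306)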

Another concern with running all these containers is memory usage. When you start running 6+ instances of MySQL + Apache, RabbitMQ, + ..., it becomes really easy to exhaust system memory, incur swapping, and have performance fall off a cliff. I've spent a non-trivial amount of time figuring out the minimal amount of memory I can make services consume while still not sacrificing too much performance.

It is quite an experience having the problem of trying to minimize resource usage and startup time for various applications. Searching the internet will happily give you recommended settings for applications. You can find out how to make a service start in 10s instead of 60s, or consume 100 MB of RSS instead of 1 GB. But the internet won't tell you how to make a service start in 2s instead of 3s, or consume as little memory as possible. I reckon I'm past the point of diminishing returns, where most people don't care about further performance wins. But because of how I'm using containers for end-to-end testing, with a surplus of short-lived containers, it is clearly a problem I need to solve.

I might be able to squeeze out a few more seconds of reduction by further optimizing startup and shutdown. But, I doubt I'll reduce things below 5s. If you ask me, that's still not good enough. I want no more than 2s overhead per test. And I don't think I'm going to get that unless I start utilizing containers across multiple tests. And I really don't want to do that because it sacrifices test purity. Engineering is full of trade-offs.

Another takeaway from implementing this test harness is that the pre-built Docker images available from the Docker Registry almost always become useless. I eventually make a customization that can't be shoehorned into the readily-available image and I find myself having to reinvent the wheel. I'm not a fan of the download-and-run-a-binary model, especially given Docker's less-than-stellar history on the security and cryptography fronts (I'll trust Linux distributions to get package distribution right, but I'm not going to be trusting the Docker Registry quite yet), so it's not a huge loss. I'm at the point where I've lost faith in Docker Registry images and my default position is to implement my own builder. Containers are supposed to do one thing, so it usually isn't that difficult to roll my own images.
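As a sketch of that roll-your-own position, assuming the modern docker SDK for Python and a hypothetical images/mysql/ directory containing a Dockerfile under your own control:

    import docker

    client = docker.from_env()

    # Build a local image from a Dockerfile we control instead of pulling
    # a pre-built image from the Docker Registry.
    image, build_log = client.images.build(path="images/mysql/", tag="local/mysql")
    for chunk in build_log:
        if "stream" in chunk:
            print(chunk["stream"], end="")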

There's a lot to love about Docker and containerized test execution. But I feel like I'm forging into new territory and solving problems, like startup time minimization, that I shouldn't really have to be solving. I think I can justify it given the increased accuracy of the tests and the increased confidence that brings. I just wish the cost weren't so high. Hopefully, as others start leaning on containers and Docker more for test execution, people will figure out how to make some of these problems disappear.

Karl Dubost: Working With Transparency Habits. Something To Learn.

I posted the following text as a comment three days ago on Mark Surman's blog post on transparency habits, but it is still in the moderation queue. So, rather than take the chance of losing it, I'm reposting that comment here. It might need to be developed further in a followup post.

Mark says:

I encourage everyone at Mozilla to ask themselves: how can we all build up our transparency habits in 2015? If you already have good habits, how can you help others? If, like me, you’re a bit rusty, what small things can you do to make your work more open?

The mistake we often make with transparency is thinking it is obvious to most people. But working in a transparent way requires a lot of education and mentoring. It's one thing we should try to improve at Mozilla when onboarding new employees: teaching what it means to be transparent. I'm not even sure everyone has the same notion of what transparency means in the first place.

For example, too many times I receive emails in private. That's unfortunate, because it creates information silos, and it becomes a lot harder to open up a conversation that started in private. Because I was getting tired of this, I created a set of slides and explanations on how to work with email, available in French and English.

Some people are afraid of working in the open for many reasons. They may come from a company where secrecy was very strong, or they had a bad experience by being too open. It takes then time to re-learn the benefits of working in the open.

So, since you asked an open question :) here are some items.

  • Each time you send an email, it probably belongs to a project. Send the email to the person (To: field) and always copy the relevant mailing list (Cc: field). You'll get an archive, a URL pointer for the future, etc.
  • Each time you explain something (such as a process, a howto, etc.) to someone, make it a blog post, then send that person the URL. It will benefit more people in the long term.
  • Each time you have a meeting, choose a scribe to take notes and publish the minutes. There are many techniques for this. See for example the record of the Web Compat team meeting on January 13, 2015 and the index of all meetings (I could explain how we manage that in a blog post).
  • Each time you have a F2F meeting with someone or a group, take notes and publish them online at a stable URI. It will help other people to participate.

Let's learn together how to work in a transparent way or in the open.

Otsukare.

Stormy Peters: Working in the open is hard

A recent conversation on a Mozilla mailing list about whether IRC channels should be archived or not shows what a commitment it is to remain open. It’s hard work and not always comfortable to work completely in the open.

Most of us in open source are used to “working in the open”. Everything we send to a mailing list is not only public, but archived and searchable via Google or Yahoo! forever. Five years later, you can go back and see how I did my job as Executive Director of GNOME. Not only did I blog about it, but many of the conversations I had were on open mailing lists and IRC channels.

There are many benefits to being open.

Being open means that anybody can participate, so you get much more help and diversity.

Being open means that you are transparent and accountable.

Being open means you have history. You can also go back and see exactly why a decision was made, what the pros and cons were and see if any of the circumstances have changed.

But it’s not easy.

Being open means that when you have a disagreement, the world can follow along. We warn teenagers not to put too much on social media, but can you imagine every disagreement you’ve ever had at work being visible to all your future employers? (And spouses!)

But those of us working in open source have made a commitment to be open and we try hard.

Many of us get used to working in the open, and we think it feels comfortable and we think we’re good at it. And then something reminds us that it is a lot of work and it’s not always comfortable. Like a conversation about whether IRC conversations should be archived or not. IRC conversations are public but not always archived, so people treat them as a place where anyone can drop in, but where the conversation is bounded in time and limited to the people you can see in the room. The fact that these informal conversations might be archived and read by anyone and everyone later means that you now have to think a lot more about what you are saying. It’s less of a chat and more of a carefully weighed conversation.

The fact that people steeped in open source are having a heated debate about whether Mozilla IRC channels should be archived or not shows that it’s not easy being open. It takes a lot of work and a lot of commitment.


Doug Belshaw: Weeknote 04/2015

This week I’ve been:

I wasn’t at BETT this week. It’s a great place to meet people I haven’t seen for a while and last year I even gave a couple of presentations and a masterclass. However, this time around, my son’s birthday and party gave me a convenient excuse to miss it.

Next week I’m working from home as usual. In fact, I don’t think I’m away again until our family holiday to Dubai in February half-term!

Image CC BY Dave Fayram

Mike Hommey: Explicit rename/copy tracking vs. detection after the fact

One of the main differences in how mercurial and git track files is that mercurial does rename and copy tracking and git doesn’t. So in the case of mercurial, users are expected to explicitly rename or copy the files through the mercurial command line so that mercurial knows what happened. Git simply doesn’t care, and will try to detect after the fact when you ask it to.

The consequence is that my git-remote-hg, being currently a limited prototype, doesn’t make the effort to inform mercurial of renames or copies.

This week, Ehsan, as a user of that tool, pushed some file moves, and subsequently opened an issue, because some people didn’t like it.

It was a conscious choice on my part to make git-remote-hg public without rename/copy detection, because file renames and copies don’t happen often, and can just as easily go unregistered by mercurial users.

In fact, they haven’t all been registered for as long as Mozilla has been using mercurial (see below; I didn’t actually know I was so spot on when I wrote this sentence), and people weren’t pointed at back then for using broken tools (and I’ll skip the actual language that was used when talking about Ehsan’s push).

And since I’d rather not make unsubstantiated claims, I dug through all of mozilla-central and related repositories (inbound, b2g-inbound, fx-team, aurora, beta, release, esr*), and here is what I found, only accounting for files that have been copied or renamed without being further modified (so, using git diff-tree -r -C100%, and eliminating empty files), and correlating with the mercurial rename/copy metadata (a rough sketch of the git side of this detection appears after the list):

  • There have been 45069 file renames or copies in 1546 changesets.
  • Mercurial doesn’t know 5482 (12.1%) of them, from 419 (27.1%) changesets.
  • 72 of those changesets were backouts.
  • 19 of those backouts were of changesets that didn’t have rename/copy information, so 53 of those backouts didn’t actually undo what mercurial knew of those backed out changesets.
  • Those 419 changesets were from 144 distinct authors (assuming I didn’t miss some duplicates from people who changed email).
  • Fun fact, the person with colorful language, and that doesn’t like git-remote-hg, is part of them. I am too, and that was with mercurial.
  • The most recent occurrence of renames/copies unknown to mercurial is already not Ehsan’s anymore.
  • The oldest occurrence is in the 19th (!) mercurial changeset.

And that’s not counting all the copies and renames with additional modifications.
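Here is roughly what the git side of that detection can look like; this sketch omits the empty-file elimination and the correlation with mercurial’s rename/copy metadata described above:

    import subprocess

    def exact_moves(commit):
        """Return (kind, source, dest) tuples for files renamed or copied
        without further modification in `commit`, using git's
        after-the-fact detection at a 100% similarity threshold."""
        out = subprocess.check_output(
            ["git", "diff-tree", "-r", "-C100%", "--no-commit-id",
             "--name-status", commit],
            text=True,
        )
        moves = []
        for line in out.splitlines():
            status, *paths = line.split("\t")
            # Statuses look like R100 (rename) or C100 (copy).
            if status.startswith(("R", "C")) and len(paths) == 2:
                moves.append((status[0], paths[0], paths[1]))
        return moves

    # Example: list exact renames/copies in the current HEAD commit.
    print(exact_moves("HEAD"))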

Fun fact, this is what I found in the Mercurial mercurial repository:

  • There have been 255 file renames or copies in 41 changesets.
  • Mercurial doesn’t know about 38 (14.9%) of them, from 4 (9.7%) changesets.
  • One of those changesets was from Matt Mackall himself (creator and lead developer of mercurial).

There are 1061 files in mercurial, versus 115845 in mozilla-central, so there is less occasion for renames/copies there; still, even they forget to use “hg move” and break their history as a result.

I think this shows how requiring explicit user input simply doesn’t pan out.

Meanwhile, I have prototype copy/rename detection for git-remote-hg working, but I need to tweak it a little bit more before publishing.

Mozilla Release Management Team: Firefox 36 beta2 to beta3

In this beta, as in beta 2, we have bug fixes for MSE. We also have fixes for a few bugs found with the release of Firefox 35. As usual, beta 3 is a desktop-only version.

  • 43 changesets
  • 118 files changed
  • 1261 insertions
  • 476 deletions

Extension  Occurrences
cpp        17
js         13
h          7
ini        6
html       6
jsm        3
xml        1
webidl     1
txt        1
svg        1
sjs        1
py         1
mpd        1
list       1

Module     Occurrences
dom        20
browser    18
toolkit    6
dom        4
dom        4
testing    3
gfx        3
netwerk    2
layout     2
mobile     1
media      1
docshell   1
build      1

List of changesets:

Steven Michaud: Bug 1118615 - Flash hangs in HiDPI mode on OS X running peopleroulette app. r=mstange a=sledru - 430bff48811d
Mark Goodwin: Bug 1096197 - Ensure SSL Error reports work when there is no failed certificate chain. r=keeler, a=sledru - a7f164f7c32d
Seth Fowler: Bug 1121802 - Only add #-moz-resolution to favicon URIs that end in ".ico". r=Unfocused, a=sledru - d00b4a85897c
David Major: Bug 1122367 - Null check the result of D2DFactory(). r=Bas, a=sledru - 57cb206153af
Michael Comella: Bug 1116910 - Add new share icons in the action bar for tablet. r=capella, a=sledru - f6b2623900f1
Jonathan Watt: Bug 1122578 - Part 1: Make DrawTargetCG::StrokeRect stroke from the same corner and in the same direction as older OS X and other Moz2D backends. r=Bas, a=sledru - d3c92eebdf3e
Jonathan Watt: Bug 1122578 - Part 2: Test start point and direction of dashed stroking on SVG rect. r=longsonr, a=sledru - 7850d99485e6
Mats Palmgren: Bug 1091709 - Make Transform() do calculations using gfxFloat (double) to avoid losing precision. r=mattwoodrow, a=sledru - 0b22d12d0736
Hiroyuki Ikezoe: Bug 1118749 - Need promiseAsyncUpdates() before frecencyForUrl. r=mak, a=test-only - 4501fcac9e0b
Jordan Lund: Bug 1121599 - remove android-api-9-constrained and android-api-10 mozconfigs from all trees, r=rnewman a=npotb DONTBUILD - 787968dadb44
Tooru Fujisawa: Bug 1115616 - Commit composition string forcibly when search suggestion list is clicked. r=gavin,adw a=sylvestre - 2d629038c57b
Ryan VanderMeulen: Bug 1075573 - Disable test_videocontrols_standalone.html on Android 2.3 due to frequent failures. a=test-only - a666c5c8d0ba
Ryan VanderMeulen: Bug 1078267 - Skip netwerk/test/mochitests/ on Android due to frequent failures. a=test-only - 0c36034999bb
Christoph Kerschbaumer: Bug 1121857 - CSP: document.baseURI should not get blocked if baseURI is null. r=sstamm, a=sledru - a9b183f77f8d
Christoph Kerschbaumer: Bug 1122445 - CSP: don't normalize path for CSP checks. r=sstamm, a=sledru - 7f32601dd394
Christoph Kerschbaumer: Bug 1122445 - CSP: don't normalize path for CSP checks - test updates. r=sstamm, a=sledru - a41c84bee024
Karl Tomlinson: Bug 1085247 enable remaining mediasource-duration subtests a=sledru - d918f7ea93fe
Sotaro Ikeda: Bug 1121658 - Remove DestroyDecodedStream() from MediaDecoder::SetDormantIfNecessary() r=roc a=sledru - 731843c58e0d
Jean-Yves Avenard: Bug 1123189: Use sourceended instead of loadeddata to check durationchanged count r=karlt a=sledru - 09df37258699
Karl Tomlinson: Bug 1123189 Queue "durationchange" instead of dispatching synchronously r=cpearce a=sledru - 677c75e4d519
Jean-Yves Avenard: Bug 1123269: Better fix for Bug 1121876 r=cpearce a=sledru - 56b7a3953db2
Jean-Yves Avenard: Bug 1123054: Don't check VDA reference count. r=rillian a=sledru - a48f8c55a98c
Andreas Pehrson: Bug 1106963 - Resync media stream clock before destroying decoded stream. r=roc, a=sledru - cdffc642c9b9
Ben Turner: Bug 1113340 - Make sure blob urls can load same-process PBackground blobs. r=khuey, a=sledru - c16ed656a43b
Paul Adenot: Bug 1113925 - Don't return null in AudioContext.decodeAudioData. r=bz, a=sledru - 46ece3ef808e
Masatoshi Kimura: Bug 1112399 - Treat NS_ERROR_NET_INTERRUPT and NS_ERROR_NET_RESET as SSL errors on https URLs. r=bz, a=sledru - ba67c22c1427
Hector Zhao: Bug 1035400 - 'restart to update' button not working. r=rstrong, a=sledru - 8a2a86c11f7c
Ryan VanderMeulen: Backed out the code changes from changeset c16ed656a43b (Bug 1113340) since Bug 701634 didn't land on Gecko 36. - e8effa80da5b
Ben Turner: Bug 1120336 - Land the test-only changes on beta. r=khuey, a=test-only - a6e5dedbd0c0
Sami Jaktholm: Bug 1001821 - Wait for eyedropper to be destroyed before ending tests and checking for leaks. r=pbrosset, a=test-only - 4036f72a0b10
Mark Hammond: Bug 1117979 - Fix orange by not relying on DNS lookup failure in the 'error' test. r=gavin, a=test-only - e7d732bf6091
Honza Bambas: Bug 1123732 - Null-check uri before trying to use it. r=mcmanus, a=sledru - 3096b7b44265
Florian Quèze: Bug 1103692 - ReferenceError: bundle is not defined in webrtcUI.jsm. r=felipe, a=sledru - 9b565733c680
Bobby Holley: Bug 1120266 - Factor some machinery out of test_BufferingWait into mediasource.js and make it Promise-friendly. r=jya, a=sledru - ff1b74ec9f19
Jean-Yves Avenard: Bug 1120266 - Add fragmented mp4 sample videos. r=cajbir, a=sledru - 53f55825252a
Paul Adenot: Bug 698079 - When using the WASAPI backend, always output audio to the default audio device. r=kinetik, a=sledru - 20f7d44346da
Paul Adenot: Bug 698079 - Synthetize the clock when using WASAPI to prevent A/V desynchronization issues when switching the default audio output device. r=kinetik, a=sledru - 0411d20465b4
Matthew Noorenberghe: Bug 1079554 - Ignore most UITour messages from pages that aren't visible. r=Unfocused, a=sledru - e35e98044772
Markus Stange: Bug 1106906 - Always return false from nsFocusManager::IsParentActivated in the parent process. r=smaug, a=sledru - 0d51214654ad
Bobby Holley: Bug 1121148 - Move constants that we should not be using directly into a namespace. r=cpearce, a=sledru - 1237ddff18be
Bobby Holley: Bug 1121148 - Make QUICK_BUFFERING_LOW_DATA_USECS a member variable and adjust it appropriately. r=cpearce, a=sledru - 62f7b8ea571f
Chris AtLee: Bug 1113606 - Use app-specific API keys. r=mshal, r=nalexander, a=gavin - b3836e49ae7f
Ryan VanderMeulen: Bug 1121148 - Add missing detail:: to fix bustage. a=bustage - b3792d13df24

Henri Sivonen: If You Want Software Freedom on Phones, You Should Work on Firefox OS, Custom Hardware and Web App Self-Hostability

TL;DR

To achieve full-stack Software Freedom on mobile phones, I think it makes sense to

  • Focus on Firefox OS, which is already Free Software above the driver layer, instead of focusing on removing proprietary stuff from Android, whose functionality is increasingly moving into proprietary components such as Google Play Services.
  • Commission custom hardware whose components have been chosen such that the foremost goal is achieving Software Freedom on the driver layer.
  • Develop self-hostable Free Software Web apps for the on-phone software to connect to, and a system that makes installing them on a home server as easy as installing desktop or mobile apps, and connecting the home server to the Internet as easy as connecting a desktop.

Inspiration

Back in August, I listened to an episode of the Free as in Freedom oggcast that included a FOSDEM 2013 talk by Aaron Williamson titled “Why the free software phone doesn’t exist”. The talk actually didn’t include much discussion of the driver situation and instead devoted a lot of time to talking about services that phones connect to and the interaction of the DMCA with locked bootloaders.

Also, I stumbled upon the Indie Phone project. More on that later.

Software Above the Driver Layer: Firefox OS—Not Replicant

Looking at existing systems, it seems that software close to the hardware on mobile phones tends to be more proprietary than the rest of the operating system. Things like baseband software, GPU drivers, touch sensor drivers and drivers for hardware-accelerated video decoding (and video DRM) tend to be proprietary even when the Linux kernel is used and substantial parts of other system software are Free Software. Moreover, most of the mobile operating systems built on the Linux kernel are actually these days built on the Android flavor of the Linux kernel in order to be able to use drivers developed for Android. Therefore, the driver situation is the same for many of the different mobile operating systems. For these reasons, I think it makes sense to separate the discussion of Software Freedom on the driver layer (code closest to hardware) and the rest of the operating system.

Why Not Replicant?

For software above the driver layer, there seems to be something of a default assumption in the Free Software circles that Replicant is the answer for achieving Software Freedom on phones. This perception of mine probably comes from Replicant being the contender closest to the Free Software Foundation with the FSF having done fundraising and PR for Replicant.

I think betting on Replicant is not a good strategy for the Free Software community if the goal is to deliver Software Freedom on phones to many people (and, therefore, have more of a positive impact on society) instead of just making sure that a Free phone OS exists in a niche somewhere. (I acknowledge that hardline FSF types keep saying negative things about projects that e.g. choose permissive licenses in order to prioritize popularity over copyleft, but the “Free Software, Free Society” thing only works if many people actually run Free Software on the end-user devices, so in that sense, I think it makes sense to think of what has a chance to be run by many people instead of just the existence of a Free phone OS.)

Android is often called an Open Source system, but when someone buys a typical Android phone, they get a system with substantial proprietary parts. Initially, the main proprietary parts above the driver layer were the Google applications (Gmail, Maps, etc.) but the non-app, non-driver parts of the system were developed as Open Source / Free Software in the Android Open Source Project (AOSP). Over time, as Google has realized that OEMs don’t care to deliver updates for the base system, Google has moved more and more stuff to the proprietary Google application package. Some apps that were originally developed as part of AOSP no longer are. Also, Google has introduced Google Play Services, which is a set of proprietary APIs that keeps updating even when the base system doesn’t.

Replicant takes Android and omits the proprietary parts. This means that many of the applications that users expect to see on an Android phone aren’t actually part of Replicant. But more importantly, Replicant doesn’t provide the same APIs as a normal Android system does, because Google Play Services are missing. As more and more applications start relying on Google Play Services, Replicant and Android-as-usually-shipped diverge as development platforms. If Replicant was supposed to benefit from the network effects of being compatible with Android, these benefits will be realized less and less over time.

Also, Android isn’t developed in the open. The developers of Replicant don’t really get to contribute to the next version of AOSP. Instead, Google develops something and then periodically throws a bundle of code over the wall. Therefore, Replicant either has no say over how the platform evolves, or has to diverge even more from Android.

Instead of the evolution of the platform being controlled behind closed doors and the Free Software community having to work with a subset of the mass-market version of the platform, I think it would be healthier to focus efforts on a platform that doesn’t require removing or replacing (non-driver) system components as the first step and whose development happens in public repositories where the Free Software community can contribute to the evolution of the platform.

What Else Is There?

Let’s look at the options. What at least somewhat-Free mobile operating systems are there?

First, there’s software from the OpenMoko era. However, the systems have no appeal to people who don’t care that much about the Free Software aspect. I think it would be strategically wise for the Free Software community to work on a system that has appeal beyond the Free Software community in order to be able to benefit from contributions and network effects beyond the core Free Software community.

Open webOS is not on an upwards trajectory on phones (despite there having been a watch announcement at CES). (Addition 2015-01-24: There exists a project called LuneOS to port Open webOS to Nexus phones, though.) Tizen (on phones) has been delayed again and again and became available just a few days ago, so it’s not (at least quite yet) a system with demonstrated appeal (on phones) beyond the Free Software community, and it seems that Tizen has substantial non-Free parts. Jolla’s Sailfish OS is actually shipping on a real phone, but Jolla keeps some components proprietary, so the platform fails the criterion of not having to remove or replace (non-driver) system components as the first step (see Nemo). I don’t actually know if Ubuntu Touch has proprietary non-driver system components. However, it does appear to have central components to which you cannot contribute on an “inbound=outbound” licensing basis, because you have to sign a CLA that gives Canonical rights to your code beyond the Free Software license of the project as a condition of your patch getting accepted. In any case, Ubuntu Touch is not shipping yet on real phones, so it is not yet demonstrably a system that has appeal beyond the Free Software community.

Firefox OS, in contrast, is already shipping on multiple real phones (albeit maybe not in your country) demonstrating appeal beyond the Free Software community. Also, Mozilla’s leverage is the control of the trademark—not keeping some key Mozilla-developed code proprietary. The (non-trademark) licensing of the project works on the “inbound=outbound” basis. And, importantly, the development repositories are visible and open to contribution in real time as opposed to code getting thrown over the wall from time to time. Sure, there is code landing such that the motivation of the changes is confidential or obscured with codenames, but if you want to contribute based on your motivations, you can work on the same repositories that the developers who see the confidential requirements work on.

As far as I can tell, Firefox OS has the best combination of not being vaporware, having appeal beyond the Free Software community, and being run closest to the manner a Free Software project is supposed to be run. So if you want to advance Software Freedom on mobile phones, I think it makes the most sense to put your effort into Firefox OS.

Software Freedom on the Driver Layer: Custom Hardware Needed

Replicant, Firefox OS, Ubuntu Touch, Sailfish OS and Open webOS all use an Android-flavored Linux kernel in order to be able to benefit from the driver availability for Android. Therefore, the considerations for achieving Software Freedom on the driver layer apply equally to all these systems. The foremost problems are controlling the various radios—the GSM/UMTS radio in particular—and the GPU.

If you consider the Firefox OS reference device for 2014 and 2015, Flame, you’ll notice that Mozilla doesn’t have the freedom to deliver updates to all software on the device. Firefox OS is split into three layers: Gonk, Gecko and Gaia. Gonk contains the kernel, drivers and low-level helper processes. Gecko is the browser engine and runs on top of Gonk. Gaia is the system UI and set of base apps running on top of Gecko. You can get Gecko and Gaia builds from Mozilla, but you have to get Gonk builds from the device vendor.

If Software Freedom extended to the whole stack, including drivers, Mozilla (or anyone else) could give you Gonk builds, too. That is, to get full-stack Software Freedom with Firefox OS, the challenge is to come up with hardware whose driver situation allows for a Free-as-in-Freedom Gonk.

As noted, Flame is not that kind of hardware. When this is lamented, it is typically pointed out that “not even the mighty Google” can get the vendors of all the hardware components going into the Nexus devices to provide Free Software drivers and, therefore, a Free Gonk is unrealistic at this point in time.

That observation is correct, but I think it lacks some subtlety. Both Flame and the Nexus devices are reference devices on which the software platform is developed with the assumption that the software platform will then be shipped on other devices that are sufficiently similar that the reference devices can indeed serve as reference. This means that the hardware on the reference devices needs to be reasonably close to the kind of hardware that is going to be available with mass-market price/performance/battery life/weight/size characteristics. Similarity to mass-market hardware trumps Free Software driver availability for these reference devices. (Disclaimer: I don’t participate in the specification of these reference devices, so this paragraph is my educated guess about what’s going on—not any sort of inside knowledge.)

I theorize that building a phone that puts the availability of Free Software drivers first is not impossible but would involve sacrificing on the current mass-market price/performance/battery life/weight/size characteristics and be different enough from the dominant mass-market designs not to make sense as a reference device. Let’s consider how one might go about designing such a phone.

In the radio case, there is proprietary software running on a baseband processor to control the GSM/UMTS radio, and some regulatory authorities, such as the FCC, require this software to be certified for regulatory purposes. As a result, the chances of gaining Software Freedom relative to this radio control software in the near term seem slim. From the privacy perspective, it is problematic that this mystery software can have DMA access to the memory of the application processor, i.e. the processor that runs the Linux kernel and the apps. Addition 2015-01-24: There seems to exist a project, OsmocomBB, that is trying to produce GSM-level baseband software as Free Software. (Unlike the project page, the Git repository shows recent signs of activity.) For smart phones, you really need 3G, though.

Technically, data transfer between the application processor and various radios does not need to be fast enough to require DMA access or other low-level coupling. Indeed, for desktop computers, you can get UMTS, Wi-Fi, Bluetooth and GPS radios as external USB devices. It should be possible to document the serial protocol these devices use over USB such that Free drivers can be written on the Linux side while the proprietary radio control software is embedded on the USB device.

This would solve the problem of kernel coupling with non-free drivers in a way that hinders the exercise of Software Freedom relative to the kernel. But wouldn’t the radio control software embedded on the USB device still be non-free? Well, yes it would, but in the current regulatory environment it’s unrealistic to fix that. Moreover, if the software on the USB devices is truly embedded to the point where no one can update it, the Free Software Foundation considers the bundle of hardware and un-updatable software running on the hardware as “hardware” as a whole for Software Freedom purposes. So even if you can’t get the freedom to modify the radio control software, if you make sure that no one can modify it and put it behind a well-defined serial interface, you can both solve the problem of non-free drivers holding back Software Freedom relative to the kernel and get the ideological blessing.

So I think the way to solve the radio side of the problem is to license circuit designs for UMTS, Wi-Fi, Bluetooth and GPS USB dongles and build those devices as hard-wired USB devices onto the main board of the phone inside the phone’s enclosure. (Building hard-wired USB devices into the device enclosure is a common practice in the case of laptops.) This would likely result in something more expensive, more battery draining, heavier and larger than the usual more integrated designs. How much more expensive, heavier, etc.? I don’t know. I hope within bounds that would be acceptable for people willing to pay some extra and accept some extra weight and somewhat worse battery life and performance in order to get Software Freedom.

As for the GPU, there are a couple of Free drivers: There’s Freedreno for Adreno GPUs. There is the Lima driver for Mali-200 and Mali-400, but a Replicant developer says it’s not good enough yet. Intel has Free drivers for their desktop GPUs and Intel is trying to compete in the mobile space so, who knows, maybe in the reasonably near future Intel manages to integrate GPU design of their own (with a Free driver) with one of their mobile CPUs. Correction 2015-01-24: It appears that after I initially wrote that sentence in August 2014 but before I got around to publishing in January 2015, Intel announced such a CPU/GPU combination.

The current Replicant way to address the GPU driver situation is not to have hardware-accelerated OpenGL ES. I think that’s just not going to be good enough. For Firefox OS (or Ubuntu Touch or Sailfish OS or a more recent version of Android) to work reasonably, you have to have hardware-accelerated OpenGL ES. So I think the hardware design of a Free Software phone needs to grow around a mobile GPU that has a Free driver. Maybe that means using a non-phone (to put radios behind USB) QUALCOMM SoC with Adreno. Maybe that means pushing Lima to good enough a state and then licensing Mali-200 or Mali-400. Maybe that means using x86 and waiting for Intel to come up with a mobile GPU. But it seems clear that the GPU is the big constraint and the CPU choice will have to follow from the GPU solution.

For the encumbered codecs that everyone unfortunately needs to have in practice, it would be best to have true hardware implementations that are so complete that the drivers wouldn’t contain parts of the codec but would just push bits to the hardware. This way, the encumberance would be limited to the hardware. (Aside: Similarly, it would be possible to design a hardware CDM for EME. In that case, you could have video DRM without it being a Software Freedom problem.)

So I think that in order to achieve Software Freedom on the driver layer, it is necessary to commission hardware that fits Free Software instead of trying to just write software that fits the hardware that’s out there. This is significantly different from how software freedom has been achieved on desktop. Also, the notion of making a big upfront capital investment in order to achieve Software Freedom is rather different from the notion that you only need capital for a PC and then skill and time.

I think it could be possible to raise the necessary capital through crowdfunding. (Purism is trying it with the Librem laptop, but, unfortunately, the rate of donations looks bad as of the start of January 2015. Addition 2015-01-24: They have actually reached and exceeded their funding target! Awesome!) I’m not going to try to organize anything like that myself—I’m just theorizing. However, it seems that developing a phone by crowdfunding in order to get characteristics that the market isn’t delivering is something that is being attempted. The Indie Phone project expresses intent to crowdfund the development of a phone designed to allow users to own their own data. Which brings us to the topic of the services that the phone connects to.

Freedom on the Service Side: Easy Self-Hostability Needed

Unfortunately, Indie Phone is not about building hardware to run Firefox OS. The project’s Web site talks about an Indie OS but intentionally tries to make the OS seem uninteresting and doesn’t explain what existing software the small team is intending to build upon. (It seems implausible that such a small team could develop an operating system from scratch.) Also, the hardware intentions are vague. The site doesn’t explain if the project is serious about isolating the baseband processor from the application processor out of privacy concerns, for example. But enough about the vagueness of what the project is going to do. Let’s look at the reasons the FAQ gave against Firefox OS (linking to version control, since the FAQ appears to have been removed from the site between the time I started writing this post and the time I got around to publishing):

“As an operating system that runs web applications but without any applications of its own, Firefox OS actually incentivises the use of closed silos like Google. If your platform can only run web apps and the best web apps in town are made by closed silos like Google, your users are going to end up using those apps and their data will end up in these closed silos.”

The FAQ then goes on to express angst about Mozilla’s relationship with Google (the Indie Phone FAQ was published before Mozilla’s search deal with Yahoo! was announced) and Telefónica, and to talk about how Mozilla doesn’t control the hardware but Indie will.

I think there is truth to Web technology naturally having the effect of users gravitating towards whatever centralized service provides the best user experience. However, I think the answer is not to shun Firefox OS but to make de-centralized services easy to self-host and use with Firefox OS.

In particular, it doesn’t seem realistic that anyone would ship a smart phone without a Web browser. In that sense, any smartphone is susceptible to the lure of centralized Web-based services. On the other hand, Google Play and the iOS App Store contain plenty of applications whose user interface is not based on HTML, CSS and JavaScript but still those applications put the users’ data into centralized services. On the flip side, it’s not actually true that Firefox OS only runs Web apps hosted on a central server somewhere. Firefox OS allows you to use HTML, CSS and JavaScript to build apps that are distributed as a zip file and run entirely on the phone without a server component.

But the thing is that, these days, people don’t want even notes or calendar entries that are intended for their own eyes only to stay on the phone only. Instead, even for data meant for the user’s own eyes only, there is a need to have the data show up on multiple devices. I very much doubt that any underdog effort has the muscle to develop a non-Web decentralized network application platform that allows users to interact with their data from all the devices they want to use. (That is, I wouldn’t bet on e.g. Indienet, which is going to launch "with a limited release on OS X Yosemite".)

I think the answer isn’t fighting the Web Platform but using the only platform that already has clients for all the devices that users want to use—in addition to their phone—to interact with their data: the Web Platform. To use the Web Platform as the application platform such that multiple devices can access the apps, but also such that users have Software Freedom, the users need to host the Web apps themselves. Currently, this is way too difficult. Hosting Web apps at home needs to become at least as easy as maintaining a desktop computer at home, preferably easier.

For this to happen, we need:

  • Small home server hardware that is powerful enough to host Web apps for family, that consumes negligible energy (maybe in part by taking the place of the home router that people have always-on consuming electricity today), that is silent and that can boot a vanilla kernel that gets security updates.
  • A Free operating system that runs in such hardware, makes it easy to install Web apps and makes it easy for the apps to become securely reachable over the network.
  • High-quality apps for such a platform.

(Having Software Freedom on the server doesn’t strictly require the server to be placed in your home, but if that’s not a realistic option, there’s clearly a practical freedom deficit even if not under the definition of Free Software. Also, many times the interest in Software Freedom in this area is motivated by data privacy reasons and in the case of Web apps, the server of the apps can see the private data. For these reasons, it makes sense to consider home-hostability.)

Hardware

In this case, the hardware and driver side seems like the smallest problem. At least if you ignore the massive and creepy non-Free firmware, the price of the hardware and don’t try to minimize energy consumption particularly aggressively, suitable x86/x86_64 hardware already exists e.g. from CompuLab. To get the price and energy consumption minimized, it seems that ARM-based solutions would be better, but the situation with 32-bit ARM boards requiring per-board kernel builds and most often proprietary blobs that don’t get updated makes the 32-bit ARM situation so bad that it doesn’t make sense to use 32-bit ARM hardware for this. (At FOSDEM 2013, it sounded like a lot of the time of the FreedomBox project has been sucked into dealing with the badness of the Linux on 32-bit ARM situation.) It remains to be seen whether x86/x86_64 SoCs that boot with generic kernels reach ARM-style price and energy consumption levels first or whether the ARM side gets their generic kernel bootability and Free driver act together (including shipping) with 64-bit ARM first. Either way, the hardware side is getting better.

Apps

As for the apps, PHP-based apps that are supposed to be easy-ish to deploy as long as you have an Apache plus PHP server from a service provider are plentiful, but e.g. Roundcube is no match for Gmail in terms of user experience and even though it’s theoretically possible to write quality software in PHP, the execution paradigm of PHP and the culture of PHP don’t really guide things to that direction.

Instead of relying on the PHP-based apps that are out there and that are woefully uncompetitive with the centralized proprietary offerings, there is a need for better apps written on better foundations (e.g. Python and Node.js). As an example, Mailpile (Python on the server) looks very promising in terms of Gmail-competitive usability aspirations. Unfortunately, as of December 2014, it’s not ready for use yet. (I tried and, yes, filed bugs.) Ethercalc and Etherpad (Node.js on the server) are other important apps.

With apps, the question doesn’t seem to be whether people know how to write them. The question seems to be how to fund the development of the apps so that the people who know how to write them can devote a lot of time to these projects. I, for one, hope that e.g. Mailpile’s user-funded development is sustainable, but it remains to be seen. (Yes, I donated.)

Putting the Apps Together

A crucial missing piece is having a system that can be trivially installed on suitable hardware (or, perhaps in the future, comes pre-installed on suitable hardware), that lets users get started without exercising their freedom to modify the software but provides the freedom to install modified apps if the user so chooses, and, perhaps most importantly, makes the networking part very easy.

There are a number of projects that try to aggregate self-hostable apps into a (supposedly at least) easy to install and manage system. However, it seems to me that they tend to be of the PHP flavor, which I think fundamentally disadvantages them in terms of becoming competitive with proprietary centralized Web apps. I think the most promising project in the space that deals with making the better (Python and Node.js-based among others) apps installable with ease is Sandstorm.io, which unfortunately, like Mailpile, doesn’t seem quite ready yet. (Also, in common with Mailpile: a key developer is an ex-Googler. Looks like people who’ve worked there know what it takes to compete with GApps…)

Looking at Sandstorm.io is instructive in terms of seeing what’s hard about putting it all together. On the server, Sandstorm.io runs each Web app in a Linux container that’s walled off from the other apps. All the requests go through a reverse proxy that also provides additional browser-side UI for switching between the apps. Instead of exposing the usual URL structure of each app, Sandstorm.io exposes “grain” URLs, which are unintelligible random-looking character sequences. This design isn’t without problems.

The first problem is that the apps you want to run, like Mailpile, Etherpad and Ethercalc, have been developed to be deployed on a vanilla Linux server using application-specific manual steps that put hosting these apps on a server out of the reach of normal users. (Mailpile is designed to be run on localhost by normal users, but that doesn’t make it reachable from multiple devices, which is what you want from a Web app.) This means that each app needs to be ported to Sandstorm.io. This in turn means that, compared to getting the app from upstream, you get stale software, because except for Ethercalc, the maintainer of the Sandstorm.io port isn’t the upstream developer of the app. In fairness, though, the software doesn’t seem to be as stale as it would be if you installed a package from Debian Stable… Also, as the platform and the apps mature, it’s possible that various app developers will start to publish for Sandstorm.io directly, and with more mature apps it’s less necessary to have the latest version (except for security fixes).

Unlike in the case of getting a Web app as a Debian package, the URL structure and, it appears, in some cases the storage structure differ between a Sandstorm.io port of an app and a vanilla upstream version of the app. Therefore, even though avoiding lock-in is one of the things the user is supposed to accomplish by using Sandstorm.io, it’s non-trivial to migrate between the Sandstorm.io version and a non-Sandstorm.io version of a given app. It particularly bothers me that Sandstorm.io completely hides the original URL structure of the app.

Networking

And that leads to the last issue of self-hosting with the ease of just plugging a box into home Ethernet: Web security and Web addressing are rather unfriendly to easy self-hosting.

First of all, there is the problem of getting basic incoming IPv4 connectivity to work. After all, you must be able to reach port 443 (https) of your self-hosting box from all your devices, including reaching the box that’s on your wired home Internet connection from the mobile connection of your phone. Maybe your own router imposes a NAT between your server and the Internet, so you’d need to set up port forwarding, which makes things significantly harder than just instructing people to plug stuff in. This might be partially alleviated by making the self-hosting box contain NAT functionality itself, so that it could take the place of the NATting home router, but even then you might have to configure something like a cable modem into a bridging mode. Worse, you might be dealing with an ISP that doesn’t actually sell you neutral end-to-end Internet routing and blocks incoming traffic to port 443 (or detects incoming traffic to port 443 and complains to you about running a server, even if it’s actually for your personal use and you therefore aren’t violating any clause that prohibits using a home connection to offer a service to the public).

One way to solve this would be standardizing a simple service where a service provider takes your credit card number and an ssh public key and gives you an IP address. The self-hosting system you run at home would then have a configuration interface that gives you an ssh public key and takes an IP address. The self-hosting box would establish an ssh reverse tunnel to the IP address with 443 as the local target port, and the provider would forward port 443 of the IP address into this tunnel. You’d still own your data and your server, and you’d terminate TLS on your server even though you’d rent an IP address from a data center.
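To make the tunnel part concrete, here is a minimal sketch of what the box could run (the rented IP and account name are hypothetical; the flags are standard OpenSSH, and binding remote port 443 assumes the provider’s sshd permits it):

    import subprocess

    # Hypothetical IP address rented from the turn-key tunnel provider.
    RENTED_IP = "203.0.113.7"

    # -N: no remote command, just forwarding.
    # -R 443:localhost:443: connections arriving at port 443 of the rented
    # IP are carried back to port 443 on this box, so TLS is still
    # terminated locally and the provider never sees plaintext.
    subprocess.run(
        ["ssh", "-N", "-R", "443:localhost:443", "tunnel@" + RENTED_IP],
        check=True,
    )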

(There are efforts to solve this by giving the user-hosted devices host names under the domain of a service that handles the naming, such as OPI giving each user a hostname under the op-i.me domain, but then the naming service—not the user—is presumptively the one eligible to get the necessary certificates signed, and delegating away the control of the crypto defeats an important aspect of self-hosting. As a side note, one of the reasons I migrated from hsivonen.iki.fi to hsivonen.fi was that even though I was able to get the board of IKI to submit iki.fi to the Public Suffix List, CAs still seem to think that IKI, not me, is the party eligible for getting certificates signed for hsivonen.iki.fi.)

But even if you solved IPv4-level reachability of the home server from the public Internet as a turn-key service, there are still more hurdles in the way of making this easy. Next, instead of the user having to use an IP address, the user should be able to use a memorable name. So you need to tell the user to go register a domain name, get DNS hosting and point an A record to the IP address. And then you need a certificate for the name you chose for the A record, which at the moment (before Let’s Encrypt is operational) is yet another thing that makes this too hard.

And that brings us back to Sandstorm.io obscuring the URLs. Correction 2015-01-24: Rather paradoxically, even though Sandstorm.io is really serious about isolating apps from each other on the server, Sandstorm.io gives up the browser-side isolation of the apps that you’d get with a typical deployment of the upstream apps. The only true way to have browser-enforced privilege separation of the client-side JavaScript parts of the apps is for different apps to have different Origins. An Origin is a triple of URL scheme, host name and port. For the apps not to be ridiculously insecure, the scheme has to be https. This means that you either have to give each app a distinct port number or a distinct host name. On the surface, it seems that it would be easy to mint port numbers, but users are not used to typing URLs with non-default port numbers, and if you depend on port forwarding in a NATting home router or port forwarding through an ssh reverse tunnel, minting port numbers on demand isn’t that convenient anymore.
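To make the Origin triple concrete, here is a small illustration (mine, not from any spec) of why all URLs on one hostname share a single origin while per-app host names do not:

    from urllib.parse import urlsplit

    def origin(url):
        """The (scheme, host, port) triple browsers use for privilege separation."""
        parts = urlsplit(url)
        default_port = {"http": 80, "https": 443}[parts.scheme]
        return (parts.scheme, parts.hostname, parts.port or default_port)

    # Same host, same implied port: one origin, so the apps' client-side
    # JavaScript is not walled off by the browser.
    assert origin("https://example.org/grain/FcTd") == origin("https://example.org/grain/o96o")

    # Distinct host names: distinct origins, so the browser enforces separation.
    assert origin("https://etherpad.example.org/doc") != origin("https://ethercalc.example.org/sheet")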

So you really want a distinct host name for each app to have a distinct Origin for browser-enforced privilege separation of JavaScript on the client. But the idea was that you could install new apps easily. This means that you have to be able to generate a new working host name at the time of app installation. So unless you have a programmatic way to configure DNS on the fly and have certificates minted on the fly, neither of which you can currently realistically have for a home server, you need a wildcard in the DNS zone and you need a wildcard TLS certificate. Correction 2015-01-24: Sandstorm.io instead uses one hostname and obscure URLs, which is understandable. Despite being understandable, it is sad, since it loses both the human-facing semantics of the URLs and browser-enforced privilege separation between the apps. To provide Origin-based privilege separation on the browser side, Sandstorm.io generates hostnames that do not look meaningful to the user and hides them in an iframe, but the URLs shown for the top-level origin in the URL bar are equally obscure. I find it unfortunate that Sandstorm.io does not mint human-friendly URL bar origins with the app names when it is capable of minting origins. (Instead of https://etherpad.example.org/example-document-title and https://ethercalc.example.org/example-spreadsheet-title, you get https://example.org/grain/FcTdrgjttPbhAzzKSv6ESD and https://example.org/grain/o96ouPLKQMEMZkFxNKf2Dr.) Fortunately, Let’s Encrypt seems to be on track to solving the certificate side of this problem by making it easy to get a cert for a newly-minted hostname signed automatically. Even so, the DNS part needs to be made easy enough that it doesn’t remain a blocker for self-hosting a box that allows on-demand Web app installation with browser-side app privilege separation.

Conclusion

There are lots of subproblems to work on, but, fortunately, things don’t seem fundamentally impossible. Interestingly, the problem with software that resides on the phone may be the relatively easy part to solve. That is not to say that it is easy to solve, but once solved, it can scale to a lot of users without the users having to do special things to get started in the role of a user who does not exercise the freedom to modify the system. However, since users these days are not satisfied by merely device-resident software but want things to work across multiple devices, the server-side part is relevant and harder to scale. Somewhat paradoxically, the hardest thing to scale in a usable way seems like a triviality on the surface: the addressing of the server-side part in a way that gives sovereignty to users.

Pascal FinetteWeekend Link Pack (Jan 24)

What I was reading this week:

Benjamin KerensaGet a free U2F Yubikey to test on Firefox Nightly

Passwords are always going to be vulnerable to being cracked. Fortunately, there are solutions out there that are making it safer for users to interact with services on the web. The new standard in protecting users is Universal 2nd Factor (U2F) authentication, which is already available in browsers like Google Chrome.

Mozilla currently has a bug open to start the work necessary to deliver U2F support to people around the globe and bring Firefox to parity with Chrome by offering this excellent new feature to users.

I recently reached out to the folks at Yubico, who are very eager to see Universal 2nd Factor (U2F) support in Firefox. So much so that they have offered to let me give out up to two hundred Yubikeys with U2F support to testers, and will ship them directly to Mozillians regardless of what country you live in, so you can follow along with the bug we have open and begin testing U2F the minute it becomes available in Firefox Nightly.

If you are a Firefox Nightly user and are interested in testing U2F, please use this form (offer now closed) to apply for a code to receive one of these Yubikeys for testing. (This is only available to Mozillians who use Nightly and are willing to help report bugs and test the patch when it lands.)

Thanks again to the folks at Yubico for supporting U2F in Firefox!

Update: This offer is now closed. Check your email for a code or a request to verify you are a vouched Mozillian! We also got more requests than we had available, so only the first two hundred will be fulfilled!

Mozilla WebDev CommunityBeer and Tell – January 2015

Once a month, web developers from across the Mozilla Project get together to trade and battle Pokémon. While we discover the power of friendship, we also find time to talk about our side projects and drink, an occurrence we like to call “Beer and Tell”.

There’s a wiki page available with a list of the presenters, as well as links to their presentation materials. There’s also a recording available courtesy of Air Mozilla.

Michael Kelly: gamegirl

Our first presenter, Osmose (that’s me!), shared a Gameboy emulator, written in Python, called gamegirl. The emulator itself is still very early in development and only has a few hundred CPU instructions implemented. It also includes a console-based debugger for inspecting the Gameboy state while executing instructions, powered by urwid.
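To give a flavour of what such an emulator core looks like, here is a hypothetical sketch of an opcode dispatch table in Python; gamegirl’s real implementation will differ:

    # Illustrative only; 0x00 (NOP) and 0x3C (INC A) are real Game Boy opcodes.
    class CPU:
        def __init__(self):
            self.a = 0                        # accumulator register
            self.pc = 0                       # program counter
            self.memory = bytearray(0x10000)  # 64 KiB address space

        def step(self):
            opcode = self.memory[self.pc]
            self.pc += 1
            OPCODES[opcode](self)

    def nop(cpu):
        pass

    def inc_a(cpu):
        cpu.a = (cpu.a + 1) & 0xFF  # wrap at 8 bits

    OPCODES = {0x00: nop, 0x3C: inc_a}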

Luke Crouch: Automatic Deployment to Heroku using TravisCI and Github

Next, groovecoder shared some wisdom about his new favorite continuous deployment setup. The setup involves hosting your code on Github, running continuous integration using Travis CI, and hosting the site on Heroku. Travis supports deploying your app to Heroku after a successful build, and groovecoder uses this to deploy his master branch to a staging server.

Once the code is ready to go to production, you can make a pull request to a production branch on the repo. Travis can be configured to deploy to a different app for each branch, so once that pull request is merged, the site is deployed to production. In addition, the pull request view gives a good overview of what’s being deployed. Neat!
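The branch-to-app mapping lives in the repository’s .travis.yml. A minimal sketch (app names hypothetical) along the lines of Travis’s documented Heroku deployment support:

    deploy:
      provider: heroku
      api_key:
        secure: "<encrypted Heroku API key>"
      app:
        master: myapp-staging         # builds of master deploy the staging app
        production: myapp-production  # merges to production deploy the live app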

This system is in use on codesy, and you can check out the codesy Github repo to see how they’ve configured their project to deploy using this pattern.

Peter Bengtsson: django-screencapper

Friend of the blog peterbe showed off django-screencapper, a microservice that generates screencaps from video files using ffmpeg. Developed as a test to see if generating AirMozilla icons via an external service was viable, it queues incoming requests using Alligator and POSTs the screencaps to a callback URL once they’ve been generated.

A live example of the app is available at http://screencapper.peterbe.com/receiver/.
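At its core, producing a screencap is a single ffmpeg invocation. Here is a hedged sketch (not peterbe’s actual code) of the kind of call such a service makes:

    import subprocess

    def screencap(video_path, out_path, at_seconds=5.0):
        """Grab a single frame from a video as an image using ffmpeg."""
        subprocess.run([
            "ffmpeg",
            "-ss", str(at_seconds),  # seek to the timestamp before decoding
            "-i", video_path,
            "-frames:v", "1",        # emit exactly one frame
            "-y",                    # overwrite the output file if present
            out_path,
        ], check=True)

    screencap("talk.mp4", "icon.png")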

tofumatt: i-dont-like-open-source AKA GitHub Contribution Hider

Motorcycle enthusiast tofumatt hates the GitHub contributor streak graph. To be specific, he hates the one on his own profile; it’s distracting and leads to bad behavior and imposter syndrome. To save himself and others from this terror, he created a Firefox add-on called the GitHub Contribution Hider that hides only the contribution graph on your own profile. You can install the add-on by visiting its addons.mozilla.org page. Versions of the add-on for other browsers are in the works.


Fun fact: The power of friendship cannot, in fact, overcome type weaknesses.

If you’re interested in attending the next Beer and Tell, sign up for the dev-webdev@lists.mozilla.org mailing list. An email is sent out a week beforehand with connection details. You could even add yourself to the wiki and show off your side-project!

See you next month!

Air MozillaWebmaker Demos January 23 2015

Webmaker Demos January 23 2015

Adam LoftingWeeknotes: 23 Jan 2015

unrelated photo

I managed to complete roughly five of my eleven goals for the week.

  • Made progress on (but have not cracked) daily task management for the newly evolving systems
  • Caught up on some email from time off, but still a chunk left to work through
  • Spent more time writing code than expected
  • Illness this week slowed me down
  • These aren’t very good weeknotes, but perhaps better than none.

 

Jess KleinDino Dribbble

The newly created Mozilla Foundation design team started out with a bang (or maybe I should say rawr) with our very first collaboration: a team debut on dribbble. Dribbble describes itself as a show and tell community for designers. I have not participated in this community yet but this seemed like a good moment to join in. For our debut shot, we decided to have some fun and plan out our design presence. We ultimately decided to go in a direction designed by Cassie McDaniel.

The concept was for us to break apart the famed Shepard Fairey Mozilla dinosaur into quilt-like tiles.

 
Each member of the design team was assigned a tile or two and given a shape. This is the one I was assigned:
I turned that file into this:

We all met together in a video chat to upload our images on to the site.

Anticipation was building as we uploaded each shot one by one:
But the final reveal made it worth all the effort! 

Check out our new team page on dribbble. rawr!

Cassie also wrote about the exercise on her blog and discussed the open position for a designer to join the team.



Mozilla Reps CommunityReps Weekly Call – January 22nd 2015

Last Thursday we had our weekly call about the Reps program, where we talk about what’s going on in the program and what Reps have been doing during the last week.

this-is-mozilla

Summary

  • Dashboard QA and UI.
  • Community Education.
  • Feedback on reporting.
  • Participation plan and Grow meeting.
  • Womoz Badges.

Detailed notes

AirMozilla video

Don’t forget to comment about this call on Discourse and we hope to see you next week!

Rubén MartínWe have to fight again for the web

Spanish version: “Debemos volver a pelear por la web”

It’s interesting to see how history repeats itself and we make the same mistakes over and over again.

I started contributing to Mozilla in 2004. At that time, Internet Explorer had 95% of the market share. That meant there was absolutely no way you could create a site that wasn’t “adapted” to this browser, and there was no way you could surf the full web with other browsers, because a lot of sites used ActiveX and other IE-only non-standard technologies.

The web was Internet Explorer, Internet Explorer was the web, you had no choice.

We at Mozilla, along with other organizations, fought hard (really hard) to bring user choice and open up other ways to understand the web. People understood this, and the web changed into an open, diverse ecosystem where standards were the way to go and everyone was able to be part of it, both users and developers.

Today everything we fought for is at risk.

It’s not the first time we’ve seen sites decide to offer their content for just one browser, sometimes using non-standard technologies that only work there, and sometimes using standard technologies while blocking other browsers for no reason.

Business is business.

If we don’t want a web controlled again by a few, driven by stockholders’ interests and not by users, we have to stand up. We have to call out sites that try to hijack user choice by demanding that people use one particular browser to access their content.

whatsapp-hats

The web should run everywhere, and users should be free to choose the browser they think best serves their interests and values.

I truly believe that Mozilla, as a non-profit organization, is still the only one that can provide an independent choice to users and balance the market to avoid the walled gardens some dream of building.

Let’s not lower our guard, let’s not repeat the same mistakes again, let’s fight again for the web.

PS: If you want a deep analysis of the Whatsapp web fiasco, I recommend the post by my friend André: “Whatsapp doesn’t understand the web”.

Ehsan AkhgariRunning Microsoft Visual C++ 2013 under Wine on Linux

The Wine project lets you run Windows programs on other operating systems, such as Linux.  I spent some time recently trying to see what it would take to run Visual C++ 2013 Update 4 under Linux using Wine.

The first thing that I tried to do was to run the installer, but that unfortunately hits a bug and doesn’t work.  After spending some time looking into other solutions, I came up with a relatively decent one which seems to work very well.  I put the instructions up on github if you’re interested, but the gist is that I used a Chromium depot_tools script to extract the necessary files for the toolchain and the Windows SDK, which you can copy to a Linux machine, and with some DLL loading hackery you get a working toolchain.  (Note that I didn’t try to run the IDE, and I strongly suspect that will not work out of the box.)

This should be the entire toolchain that is necessary to build Firefox for Windows under Linux.  I already have some local hacks which help us get past the configure script; hopefully this will enable us to experiment with using Linux to build Firefox for Windows more efficiently.  But there is of course a lot of work yet to be done.

Armen ZambranoBacked out - Pinning for Mozharness is enabled for the fx-team integration tree

EDIT: We had to back out this change since it caused issues for PGO talos jobs. We will try again after further testing.

Pinning for Mozharness [1] has been enabled for the fx-team integration tree.
Nothing should be changing. This is a no-op change.

We're still using the default mozharness repository, and the "production" branch is what is being checked out. This has been enabled on Try and Ash for almost two months and all issues have been ironed out. You can tell if a job is using pinning of Mozharness if you see "repository_manifest.py" in its log.

If you notice anything odd please let me know in bug 1110286.

If by Monday we don't see anything odd happening, I would like to enable it for mozilla-central for a few days before enabling it on all trunk trees.

Again, this is a no-op change; however, I want people to be aware of it.


Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Doug BelshawOntology, mentorship and web literacy

This week’s Web Literacy Map community call was fascinating. They’re usually pretty interesting, but today’s was particularly good. I’m always humbled by the brainpower that comes together and concentrates on something I spend a good chunk of my life thinking about!

Brain

I’ll post an overview of the entire call on the Web Literacy blog tomorrow, but I wanted to quickly zoom out and focus on things that Marc Lesser and Jess Klein were discussing during the call. Others mentioned really useful stuff too, but I don’t want to turn this into an epic post!


Marc

Marc reminded us of Clay Shirky’s post entitled Ontology is Overrated: Categories, Links, and Tags. It’s a great read, but the point Marc wanted to extract is that pre-defined ontologies (i.e. ways of classifying things) are kind of outdated now that we have the Internet:

No filesystem

In the Web 2.0 era (only 10 years ago!) this was called a folksonomic approach. I remembered that I actually used Shirky’s post in one of my own for DMLcentral a couple of years ago. To quote myself:

The important thing here is that we – Mozilla and the community – are creating a map of the territory. There may be others and we don’t pretend we’re not making judgements. We hope it’s “useful in the way of belief” (as William James would put it) but realise that there are other ways to understand and represent the skills and competencies required to read, write and participate on the Web.

Given that we were gently chastised at the LRA conference for having an outdated approach to representing literacies, we should probably think more about this.


Jess

Meanwhile, Jess was talking about the Web Literacy Map as an ‘API’ upon which other things could be built. I reminded her of the WebLitMapper, a prototype I suggested and Atul Varma built last year. The WebLitMapper allows users to tag resources they find around the web using competencies from the Web Literacy Map.

This, however, was only part of what Jess meant (if I understood her correctly). She was interested in multiple representations of the map, kind of like these examples she put together around learning pathways. This would allow for the kind of re-visualisations of the Web Literacy Map that came out of the MozFest Remotee Challenge:

Re-visualising the Web Literacy Map

Capturing the complexity of literacy skill acquisition and development is particularly difficult given the constraints of two dimensions. It’s doubly difficult if the representation has to be static.

Finally from Jess (for the purposes of this post, at least): she reminded us of some work she’d done around matching mentors and learners:

MentorN00b


Conclusion

The overwhelming feeling on the call was that we should retain the competency view of the Web Literacy Map for v1.5. It’s familiar, and helps with adoption:

Web Literacy Map v1.1

However, this decision doesn’t prevent us from exploring other avenues combining learning pathways, badges, and alternative ways of representing the overall skill/competency ecosystem. Perhaps diy.org/skills can teach us a thing or two?


Questions? Comments? Tweet me (@dajbelshaw) or email me (doug@mozillafoundation.org).

Florian QuèzeProject ideas wanted for Summer of Code 2015

Google is running Summer of Code again in 2015. Mozilla has had the pleasure of participating every year so far, and we are hoping to participate again this year. In the next few weeks, we need to prepare a list of suitable projects to support our application.

Can you think of a 3-month coding project you would love to guide a student through? This is your chance to get a student focusing on it for 3 months! Summer of Code is a great opportunity to introduce new people to your team and have them work on projects you care about but that aren't on the critical path to shipping your next release.

Here are the conditions for the projects:

  • completing the project should take roughly 3 months of effort for a student;
  • any part of the Mozilla project (Firefox, Firefox OS, Thunderbird, Instantbird, SeaMonkey, Bugzilla, L10n, NSS, IT, and many more) can submit ideas, as long as they require coding work;
  • there is a clearly identified mentor who can guide the student through the project.


If you have an idea, please put it on the Brainstorming page, which is our idea development scratchpad. Please read the instructions at the top – following them vastly increases the chances of your idea getting added to the formal Ideas page.

The deadline to submit project ideas and help us be selected by Google is February 20th.

Note for students: the student application period starts on March 16th, but the sooner you start discussing project ideas with potential mentors, the better.

Please feel free to discuss with me any question you may have related to Mozilla's participation in Summer of Code. Generic Summer of Code questions are likely already answered in the FAQ.

Benoit GirardGecko Bootcamp Talks

Last summer we held a short bootcamp crash course for Gecko. The talks have been posted to air.mozilla.org and collected under the TorontoBootcamp tag. The talks are about an hour each but will be very informative to some. They are aimed at people wanting a deeper understanding of Gecko.

View the talks here: https://air.mozilla.org/search/?q=tag%3A+TorontoBootcamp

Gecko Pipeline

In the talks you’ll find my first talk covering an overall discussion of the pipeline: what stages run when, and how to skip stages for better performance. Kannan’s talk discusses Baseline, our first-tier JIT. Boris’ talk discusses Restyle and Reflow. Benoit Jacob’s talk discusses the graphics stack (Rasterization + Compositing + IPC layer), but sadly the camera is off-center for the first half. Jeff’s talk goes into depth on Rasterization, particularly path drawing. My second talk discusses performance analysis in Gecko using the Gecko Profiler, where we look at real profiles of real performance problems.

I’m trying to locate two more videos about layout and graphics that were given at another session; one elaborates more on the DisplayList/Layer Tree/Invalidation phase and the other on Compositing.


Matt ThompsonMozilla Learning in 2015: our vision and plan

This post is a shortened, web page version of the 2015 Mozilla Learning plan we shared back in December. Over the next few weeks, we’ll be blogging and encouraging team and community members to post their reflections and detail on specific pieces of work in 2015 and Q1. Please post your comments and questions here — or get more involved.

Surman Keynote for Day One - Edit.002

Within ten years, there will be five billion citizens of the web.

Mozilla wants all of these people to know what the web can do. What’s possible. We want them to have the agency, skills and know-how they need to unlock the full power of the web. We want them to use the web to make their lives better. We want them to know they are citizens of the web.

Mozilla Learning is a portfolio of products and programs that helps people learn how to read, write and participate in the digital world.

Building on Webmaker, Hive and our fellowship programs, Mozilla Learning is a portfolio of products and programs that help these citizens of the web learn the most important skills of our age: the ability to read, write and participate in the digital world. These programs also help people become mentors and leaders: people committed to teaching others and to shaping the future of the web.

Mark Surman presents the Mozilla Learning vision and plan in Portland, Dec 2014

Three-year vision

By 2017, Mozilla will have established itself as the best place to learn the skills and know-how people need to use the web in their lives, careers and organizations. We will have:

  • Educated and empowered users by creating tools and curriculum for learning how to read, write and participate on the web. Gone mainstream.
  • Built leaders, everywhere by growing a global cadre of educators, researchers, coders, etc. who do this work with us. We’ve helped them lead and innovate.
  • Established the community as the classroom by improving and explaining our experiential learning model: learn by doing and innovating with Mozilla.

At the end of these three years, we may have established something like a “Mozilla University” — a learning side of Mozilla that can sustain us for many decades. Or, we may simply have a number of successful learning programs. Either way, we’ll be having impact.

We may establish something like a “Mozilla University” — a learning side of Mozilla that can sustain us for many decades.

Surman Keynote for Day One - Edit.008
2015 Focus

1) Learning Networks 2) Learning Products 3) Leadership Development

Our focus in 2015 will be to consolidate, improve and focus what we’ve been building for the last few years. In particular we will:

  • Improve and grow our local Learning Networks (Hive, Maker Party, etc).
  • Build up an engaged user base for our Webmaker Learning Products on mobile and desktop.
  • Prototype a Leadership Development program, and test it with fellows and ReMo.

The short term goal is to make each of our products and programs succeed in their own right in 2015. However, we also plan to craft a bigger Mozilla Learning vision that these products and programs can feed into over time.

Surman Keynote for Day One - Edit.003
A note on brand

Mozilla Learning is notional at this point. It’s a stake in the ground that says:

Mozilla is in the learning and empowerment business for the long haul.

In the short term, the plan is to use “Mozilla Learning” as an umbrella term for our community-driven learning and leadership development initiatives — especially those run by the Mozilla Foundation, like Webmaker and Hive. It may also grow over time to encompass other initiatives, like the Mozilla Developer Network and leadership development programs within the Mozilla Reps program. In the long term: we may want to a) build out a lasting Mozilla learning brand (“Mozilla University?”), or b) build making and learning into the Firefox brand (e.g., “Firefox for Making”). Developing a long-term Mozilla Learning plan is an explicit goal for 2015.

What we’re building

Practically, the first iteration of Mozilla Learning will be a portfolio of products and programs we’ve been working on for a number of years: Webmaker, Hive, Maker Party, Fellowship programs, community labs. Pulled together, these things make up a three-layered strategy we can build more learning offerings around over time.

  1. The Learning Networks layer is the most developed piece of this picture, with Hives and Maker Party hosts already in 100s of cities around the world.
  2. The Learning Products layer involves many elements of the Webmaker.org work, but will be relaunched in 2015 to focus on a mass audience.
  3. The Leadership Development piece has strong foundations, but a formal training element still needs to be developed.

Scope and scale

One of our goals with Mozilla Learning is to grow the scope and scale of Mozilla’s education and empowerment efforts. The working theory is that we will create an interconnected set of offerings that range from basic learning for large numbers of people, to deep learning for key leaders who will help shape the future of the web (and the future of Mozilla).

We want to increase the scope and diversity of how people learn with Mozilla.

We’ll do that by building opportunities for people to get together to learn, hack and invent in cities on every corner of the planet. And also: creating communities that help people working in fields like science, news and government figure out how to tap into the technology and culture of the web in their own lives, organizations and careers. The plan is to elaborate and test out this theory in 2015 as a part of the Mozilla Learning strategy process. (Additional context on this here: http://mzl.la/depth_and_scale.)

Surman Keynote for Day One - Edit.016

Contributing to Mozilla’s overall 2015 KPIs

How will we contribute to Mozilla’s top-line goals? In 2015, we’ll measure success through two key performance indicators: relationships and reach.

  • Relationships: 250K active Webmaker users
  • Reach: 500 cities with ongoing Learning Network activity

Surman Keynote for Day One - Edit.006

Learning Networks

In 2015, we will continue to grow and improve the impact of our local Learning Networks.

  • Build on the successful ground game we’ve established with teachers and mentors under the Webmaker, Hive and Maker Party banners.
  • Evolve Maker Party into year-round activity through Webmaker Clubs.
  • Establish deeper presence in new regions, including South Asia and East Africa.
  • Improve the websites we use to support teachers, partners, clubs and networks.
  • Sharpen and consolidate teaching tools and curriculum built in 2014. Package them on their own site, “teach.webmaker.org.”
  • Roll out large-scale, extensible community-building software to run Webmaker clubs.
  • Empower more people to start Hive Learning Networks by improving documentation and support.
  • Expand scale, rigour and usability of curriculum and materials to help people better mentor and teach.
  • Expand and improve trainings online and in-person for mentors.
  • Recruit more partners to increase reach and scope of networks.

Surman Keynote for Day One - Edit.011

Learning Products

Grow a base of engaged desktop and mobile users for Webmaker.

  • Expand our platform to reach a broad market of learners directly.
  • Mobile & Desktop: Evolve current tools into a unified Webmaker making and learning platform for desktop, Firefox OS and Android.
  • Tablet: Build on our existing web property to address tablet browser users and ensure viability in classrooms.
  • Firefox: Experiment with ways to integrate Webmaker directly into Firefox.
  • Prioritize mobile. Few competitors here, and the key to emerging markets growth.
  • Lower the bar. Build user on-boarding that gets people making / learning quickly.
  • Engagement. Create sticky engagement. Build mentorship, online mentoring and social into the product.

Surman Keynote for Day One - Edit.014

Leadership Development

Develop a leadership development program, building off our existing Fellows programs.

  • Develop a strategy and plan. Document the opportunity, strategy and scope. Figure out how this leadership development layer could fit into a larger Mozilla Learning / Mozilla University vision.
  • Build a shared definition of what it means to be a ‘fellow’ at Mozilla. Empowering emerging leaders to use Mozilla values and methods in their own work.
  • Figure out the “community as labs” piece. How we innovate and create open tech along the way.
  • Hire leadership. Create an executive-level role to lead the strategy process and build out the program.
  • Test pilot programs. Develop a handbook / short course for new fellows.
  • Test with fellows and ReMo. Consider expanding fellows programs for science, web literacy and computer science research.

Get involved
  • Learn more. There’s much more detail on the Learning Networks, Learning Products and Leadership Development pieces in the complete Mozilla Learning plan.
  • Get involved. There’s plenty of easy ways to get involved now with Webmaker and our local Learning Networks today.
  • Get more hands-on. Want to go deeper? Get hands-on with code, curriculum, planning and more through build.webmaker.org

Schalk NeethlingResolving Error: pg_config executable not found on Mac

Every once in a while when I have to get an old project up and running or simply house cleaning a current project, I run into this error, and each time it trips me up, and I spend a ton of time yak shaving. Well, today was the last time. To future me, and whomever … Continue reading Resolving Error: pg_config executable not found on Mac

Mozilla Release Management TeamFirefox 36 beta1 to beta2

Beta 2 is a busy beta release. First, because of a holiday in the US, the go-to-build was delayed by a day (Tuesday instead of Monday). Second, a lot of fixes for MSE landed.

  • 129 changesets
  • 271 files changed
  • 5021 insertions
  • 2064 deletions

Extension   Occurrences
cpp         80
h           51
js          37
ini         21
xml         15
html        8
java        6
list        4
css         4
jsm         3
xul         2
sjs         2
nsi         2
xhtml       1
webidl      1
nsh         1
jsx         1
json        1
in          1
cc          1

Module      Occurrences
dom         105
browser     36
toolkit     19
mobile      16
media       13
layout      12
netwerk     11
testing     10
security    8
js          5
uriloader   2
gfx         2
xpcom       1
image       1
editor      1

List of changesets:

Armen Zambrano GasparnianBug 1064002 - Fix removal of --log-raw from xpcshell. r=chmanchester. a=testing - 93587eeda731
Karl TomlinsonBug 1108838 - Move stalled/progress timing from MediaDecoder to HTMLMediaElement. r=cpearce, a=sledru - 15e3be526862
Karl TomlinsonBug 1108838 - Dispatch "stalled" even when no bytes have been received. r=cpearce, a=sledru - b07f9144d190
Jeff MuizelaarBug 1090518 - Fix crash during webgl-depth-texture.html conformance test. r=jrmuizel, a=sledru - 36535f9806e6
Jan-Ivar BruaroeyBug 1098314 - Ignore and warn on turns: and stuns: urls until we support TURN/STUN TLS. r=bwc, a=sledru - 3b4908a629e8
Brad LasseyBug 1112345 - Tab streaming should scroll stream with layers and not offsets. r=snorp, a=sledru - 3956d52ad3f0
Karl TomlinsonBug 975782 - Bring media resource loads out of background while they delay the load event. r=cpearce, a=sledru - cdd335426a39
Karl TomlinsonBug 975782 - Stop delaying the load event when media fetch has stalled. r=cpearce, f=kinetik, a=sledru - 3abc61cb0abd
Dave TownsendBug 1102050 - Set consumeoutsideclicks="false" whenever the popup is opened. r=felipe, a=sledru - a33308dd5af8
Brad LasseyBug 1115802 - Scrolling no longer working when tab mirroring from fennec. r=snorp, a=sledru - f6d5f2303fea
Karl TomlinsonBug 1116676 - Ensure that AddRemoveSelfReference() is called on networkState changes. r=roc, a=sledru - ad2cfe2a92a5
Karl TomlinsonBug 1114885 - Allow media elements to be GC'd when their MediaSource is unreferenced. r=roc, a=sledru - 44e174f9d843
Dave TownsendBug 1094312 - Properly destroy browsers when switching between remote and non-remove pages and override the default destroy method in remote-browser.xml. r=mconley, a=sledru - 08f30b223076
Dave TownsendBug 1094312 - Fix browser_bug553455.js to handle the cases where the progress notification is hidden before it has fully appeared. r=Gijs, a=sledru - fc494bb31bec
Dave TownsendBug 1094312 - Fix browser_bug553455.js:test_cancel_restart by pausing the download for long enough for the progress notification to show reliably. r=Gijs, a=sledru - b71146fc0e37
Margaret LeibovicBug 1107925 - Don't launch fennec on search redirects. r=bnicholson, a=sledru - 6796cf5b59b1
Mark HammondBug 1116404 - Better timeout semantics for search service geoip lookups. r=felipe, a=sledru - 06bb4d89e2bf
Magnus MelinBug 1043310 - AutoCompletion doesn't take capitalization from address book entry, can leave angle brackets characters >> in field, when loosing focus by clicking outside (not enter/tab). r=mak, a=sledru - 1d406b3f20db
Matt WoodrowBug 1116626 - Null check mDecoder in AutoNotifyDecoded since it might have been shutdown already. r=karlt, a=sledru - e076d58d5b10
Matt WoodrowBug 1116284 - Don't run MP4Reader::Update after we've shut the reader down. r=cpearce, a=sledru - 2fd2c6de0a87
Bobby HolleyBug 1119456 - Make MP4Demuxer's blocking reads non-blocking and hoist blocking into callers with a hacky retry strategy. r=k17e, a=sledru - fa0128cdef95
Bobby HolleyBug 1119456 - Work around the fact that media cache does not quite guarantee the property we want. r=roc, a=sledru - 18f7174682d3
Andrea MarchesiniBug 1113062 - Part 1: PIFileImpl and FileImpl merged. r=smaug, a=sledru - 23f5b373f676
Andrea MarchesiniBug 1113062 - Part 2: ArchiveReaderZipFile non-CCed. r=smaug, a=sledru - f203230f49f4
Andrea MarchesiniBug 1113062 - IndexedDB FileSnapshot not CCed. r=janv, a=sledru - 962ac9efa80c
Matt WoodrowBug 1105066 - Make SeekPromise return the time we actually seeked to. r=kentuckyfriedtakahe, a=sledru - e16f64387888
Matt WoodrowBug 1105066 - Chain seeks in MediaSourceReader so that we seek audio to the same time as video. r=kentuckyfriedtakahe, a=sledru - 154dac808616
Anthony JonesBug 1105066 - Seek after switching reader. r=mattwoodrow, a=sledru - a0ffac1b2851
Matt WoodrowBug 1119033 - Don't try to evict when we don't have any initialized decoders. r=ajones, a=sledru - a78eb4dd84f0
Kai-Zhen LiBug 1119691 - Fix build bustage in dom/media/mediasource/MediaSource.cpp. r=bz, a=sledru - 7edfdc36c3cf
Bobby HolleyBug 1120014 - Initialize MediaSourceReader::mLast{Audio,Video}Time to 0 rather than -1. r=rillian, a=sledru - 201fee3158c1
Bobby HolleyBug 1120017 - Make the DispatchDecodeTasksIfNeeded path handle DECODER_STATE_DECODING_FIRSTFRAME. r=cpearce, a=sledru - aa8cdb057186
Bobby HolleyBug 1120023 - Clean up semantics of SourceBufferResource reading. r=cpearce, a=sledru - 60f6890d84cf
Bobby HolleyBug 1120023 - Fix some bugs in MockMediaResource. r=cpearce, a=sledru - e5cc2f8f3f7e
Bobby HolleyBug 1120023 - Switch SourceBufferResource::Read{,At} back to blocking. r=cpearce, a=sledru - 423cb20b5f43
Dão GottwaldBug 1115307 - Search bar alignment fixes and cleanup. r=florian, a=sledru - c7e58ab0e1f6
Dave TownsendBug 1119450 - Clicks on the search go button shouldn't open the search popup. r=felipe, a=sledru - 17b6018c53f0
David KeelerBug 1065909 - Canonicalize hostnames in nsSiteSecurityService and PublicKeyPinningService. r=mmc, a=sledru - 82cce51fb174
Abdelrhman AhmedBug 1102961 - Cannot navigate AMO without closing the Options window. r=florian, a=sledru - 5ac62d0df17e
JW WangBug 1115505 - Keep decoding to ensure the stream is initialized in the decode-to-stream case. r=roc, a=sledru - 4d3d7478ffa4
Olli PettayBug 1108721 - HTMLMediaElement.textTracks needs to be nullable in Gecko for now. r=peterv, a=sledru - 5fba52895751
Bobby HolleyBug 1120629 - Cache data directly on MP4Stream rather than relying on the media cache. r=roc, a=sledru - f7bd9ae15c9e
Julian SewardBug 1119803 - Uninitialised value use in StopPrerollingVideo. r=bobbyholley, a=sledru - 0a648dfd0459
Andrea MarchesiniBug 1111971 - A better life-time management of aListener and aContext in WebSocketChannel. r=smaug, a=abillings - 19e248751a1c
Ryan VanderMeulenBacked out changeset e91fcba59c18 (Bug 1119941) because we don't want to ship int in 36b1. a=sledru - 1d99e9a39847
Ryan VanderMeulenBug 1088708 - Disable testOSLocale on Android x86 for permafailing. r=gbrown, a=test-only - 483bad7e5e88
Ryan VanderMeulenNo bug - Adjust some Android reftest expectations now that they're passing again. r=gbrown, a=test-only - 454907933777
Mark BannerBug 1119765 - Joining and Leaving a Loop room quickly can leave the room as full. Ensure we send the leave notification if we've already sent the join. r=mikedeboer,a=sylvestre - 9b99fc7b7c20
Sotaro IkedaBug 1112410 - Handle set dormant during seeking r=cpearce a=sledru - 5d185a7d03b5
Mike ConleyBug 1117936 - If print preview throws in browser-content.js, make sure printUtils.js can handle the error. r=Mossop, a=sledru - 2fd253435fe4
Dave TownsendBug 1118135 - Clicking the magnifying glass while the suggestions are open should close the popup and not re-open it. r=felipe, a=sledru - ee9df2674663
Chris PearceBug 1112445 - Ignore the audio stream when determining whether we should skip-t-o-next-keyframe for async readers. r=mattwoodrow, a=sledru - f82a118e1064
Steve FinkBug 1117768 - Fix assertion in AutoStopVerifyingBarriers and add tests. r=terrence, a=sledru - 53ae5eeb6147
Matt WoodrowBug 1121661 - Null check mDemuxer in MP4Reader::ResetDecoder since we might not have created one yet. r=bholley, a=sledru - 28900712c87f
Chris PearceBug 1112822 - Don't allow MP4Reader to decode if someone shut it down. r=mattwoodrow, a=sledru - c8031be76a86
Ryan VanderMeulenBacked out changeset 53ae5eeb6147 (Bug 1117768) for bustage. - 96d0d77a3462
Martyn HaighBug 1117130 - URL bar border slightly covered by fading edge of title. r=mfinkle, a=sledru - ca609e2e5bea
JW WangBug 1112588 - Ignore 'stalled' events because the progress timer could time out before receiving any HTTP notifications on slow machines like B2G emulator. r=cpearce, a=test-only - d185df72bd0e
Ryan VanderMeulenBug 1111137 - Disable test_user_agent_overrides.html on Android due to frequent failures. a=test-only - cd07ffdd30c5
Steve FinkBug 1111330 - GetBacktrace needs to be able to free the results buffer. r=njn, a=lsblakk - f154bf489b34
Robert StrongBug 1120673 - Verify Firewall service is running before adding Firewall exceptions - Fx 35 installer crashes on XP x86 SP3 at the end (creating shortcuts) if the xp firewall service is stopped. r=bbondy, a=sledru - bc2de4c07f1b
Nicolas B. PierronBug 1118911 - GetPcScript should care about bailout frames. r=jandem, a=sledru - 66f61f3f9664
Bobby HolleyBug 1121841 - Clear the failed read after checking it. r=jya, a=sledru - 0f43b4df53bb
Bobby HolleyBug 1121248 - Stop logging unimplemented methods in SourceBufferResource. r=mattwoodrow, a=sledru - b8922f819a88
Michael ComellaBug 1106935 - Part 1: Replace old tablet pngs with null XML resources. r=mhaigh, a=sledru - 0b7d9ce1cdc7
Ehsan AkhgariBug 1113121 - Null check the parent node in nsHTMLEditRules::JoinNodesSmart() before passing it to MoveNode; r=roc a=sylvestre - 64d25509541e
Jean-Yves AvenardBug 1121757: Prevent out of bound memory access should AVC data be invalid. r=kinetik a=sledru - 84bf56da4a55
Jean-Yves AvenardBug 1121342: Re-Request audio or video to decode first frame after a failed attempt. r=cpearce a=sledru - 7a8d1dd9fff3
Jean-Yves AvenardBug 1121342: Re-search for Moof if an initial attempt to find it failed. r=kentuckyfriedtakahe a=sledru - dfbca180664d
John DaggettBug 1118981 - initialize mSkipDrawing correctly for already loading fonts. r=jfkthame, a=sylvestre - beb62e1ad523
Mike HommeyBug 1110760 - Followup to avoid build failure with Windows SDK v7.0 and v7.0A. r=gps a=lsblakk - fe217a0d2e9a
Paul Kerr [:pkerr]Bug 1028869 - Part 1: Add ping and ack operations to PushHandler. r=standard8, a=sledru - fc47c7a95f85
Paul Kerr [:pkerr]Bug 1028869 - Part 2: xpcshell test updated with ping/restore. r=standard8, a=sledru - b653be6b040a
Gijs KruitboschBug 1079355 - indexedDB pref should only apply for content pages, not chrome ones, r=bent,a=sylvestre - 97b34f0b9946
Jean-Yves AvenardBug 1118123: Update mediasource duration following sourcebuffer::appendBuffer. r=cajbir a=sledru - 9a4a8602e6f4
Jean-Yves AvenardBug 1118123: Mochitest to verify proper sourcebuffer behavior. r=cajbir a=sledru - 61e917f920c9
Jean-Yves AvenardBug 1118123: Update mediasource web platforms tests now passing. r=karlt a=sledru - 7cd63f89473b
Jean-Yves AvenardBug 1119119: Do not abort when calling appendBuffer with no data. r=cajbir a=sledru - 4fe580b632e5
Jean-Yves AvenardBug 1119119: Update web-platform-tests expected data. r=karlt a=sledru - a1a315b3ff6b
Jean-Yves AvenardBug 1120084: Implement MSE's AppendErrorAlgorithm. r=cajbir a=sledru - da605a71901e
Jean-Yves AvenardBug 1120086: Re-open SourceBuffer after call to appendBuffer if in ended state. r=cajbir a=sledru - 7dd701f60492
Ben HearsumBug 1120420: switch in-tree update server/certs to aus4.mozilla.org. r=rstrong, a=lmandel - 59702337a220
Kartikaya GuptaBug 1107009. r=BenWa, a=sledru - 8d886705af93
Kartikaya GuptaBug 1122408 - Fix potential deadlock codepath. r=BenWa, a=sledru - e6df6527d52e
Phil RingnaldaBug 786938 - Disable test_handlerApps.xhtml on OS X. a=test-only - d1b7588f273b
Jan de MooijBug 1115844 - Fix Baseline to emit a nop for JSOP_DEBUGLEAVEBLOCK to temporarily work around a pc lookup bug. r=shu, a=sledru - 54a53a093110
Andreas PehrsonBug 1113600 - Part 1. Send stream data right away after adding an output stream. r=roc, a=sledru - 73c3918b169f
Andreas PehrsonBug 1113600 - Part 2. Handle setting a MediaStream sync point mid-playback. r=roc, a=sledru - e30a4672f03f
Andreas PehrsonBug 1113600 - Part 3. Add mochitest for capturing media mid-playback. r=roc, a=sledru - c17e1f237ff0
Andreas PehrsonBug 1113600 - Part 4. Handle switching directly from audio clock to stream clock. r=roc, a=sledru - b269b8f5102c
Gian-Carlo PascuttoBug 1119852 - Don't forget to update _requestedCapability in Windows camera driver. r=jesup, a=sledru - ee09df3331d0
Paul Kerr [:pkerr]Bug 1108028 - Replace pushURL registered with LoopServer whenever PushServer does a re-assignment. r=dmose, a=sledru - be5eee20bba5
Daniel HolbertBug 1110950 - Trigger a reflow (as well as a repaint) for changes to 'object-fit' and 'object-position', so subdocuments can be repositioned/resized. r=roc, a=sledru - 2b2b697613eb
Chris DoubleBug 1055904 - Improve MSE eviction calculation. r=jya, a=sledru - 595835cd60a0
Martyn HaighBug 1111598 - [Tablet] Make action bar background color consistent with the new tablet tab strip background. r=mcomella, a=sledru - 3e58a43384cd
Tim TaubertBug 950399 - SessionStore shouldn't forget domain cookies. r=yoric, a=sledru - 91f8d6ca5030
Tim TaubertBug 950399 - Tests for domain cookies. r=yoric, a=sledru - 670d3f856665
Jean-Yves AvenardBug 1120075 - Use Movie Extend Header's duration as fallback when available. r=kentuckyfriedtakahe, a=sledru - 18ade4ad787e
Jean-Yves AvenardBug 1119757 - Allow seeking on media with infinite duration. r=cpearce, a=sledru - b0c42a7f0dc7
Jean-Yves AvenardBug 1119757 - MSE: handle duration of 0 in metadata as infinity. r=mattwoodrow, a=sledru - 3e5d8c21f3a2
Jean-Yves AvenardBug 1120079 - Do not call Range Removal algorithm after endOfStream. r=cajbir, a=sledru - 2a36e0243edd
Jean-Yves AvenardBug 1120282 - Do not fire durationchange during call to ReadMetadata. r=mattwoodrow, a=sledru - 9bb138f23d58
Dragana DamjanovicBug 1108971 - Fix parameter in call GetAddrInfo. r=sworkman, a=sledru - 2dbbd7362502
Sotaro IkedaBug 1110343 - Suppress redundant loadedmetadata event when dormant exit. r=cpearce, a=sledru - fae52bd681e0
Sotaro IkedaBug 1108728 - Remove dormant related state from MediaDecoder. r=cpearce, a=sledru - 9ad34e90e339
Bobby HolleyBug 1121692 - Remove unnecessary arguments to ::Seek. r=mattwoodrow, sr=cpearce, a=sledru - d7e079df1b3d
Bobby HolleyBug 1121692 - Stop honoring aEndTime in MediaSourceReader::Seek. r=mattwoodrow, a=sledru - 67f6899c6221
Bobby HolleyBug 1121692 - Fix potential race condition with mWaitingForSeekData. r=mattwoodrow, a=sledru - 871ab0d29bb8
Bobby HolleyBug 1121692 - Clean up semantics around m{Audio,Video}IsSeeking. r=mattwoodrow, a=sledru - 35f5cf685186
Bobby HolleyBug 1121692 - Move the interesting seek state logic into DecodeSeek. r=mattwoodrow, r=cpearce, a=sledru - 3e1dd9e96598
Bobby HolleyBug 1121692 - Make seeks cancelable. r=cpearce, r=mattwoodrow, a=sledru - 2195dc79a65f
Bobby HolleyBug 1121692 - Handle mid-seek Request{Audio,Video}Data calls. r=cpearce, a=sledru - 4f059ea15ecf
Bobby HolleyBug 1121692 - Tests. r=mattwoodrow, r=cpearce, a=sledru - 56744595737c
Michael ComellaBug 1116912 - Don't hide the dynamic toolbar when it was originally shown but a tab was selected. r=wesj, a=sledru - 55bd32c43abd
Valentin GosuBug 1121826 - Backout cc192030c28f - brackets shouldn't be automatically escaped in the Query. r=mcmanus, a=sledru - 12bda229bf83
Nicholas NethercoteBug 1122322 - Fix crash in worker memory reporter. r=bent, a=sledru - c5dfa7d081f4
Ryan VanderMeulenBug 1055904 - Fix non-unified bustage in TrackBuffer.cpp. a=bustage - c703f90c5b80
Patrick McManusBug 1121706 - Don't offer h2 in alpn if w/out mandatory suite. r=hurley, a=sledru - 131919c0babd
Barbara GuidaBug 1122586 - Unbreak build on platforms missing std::llabs since Bug 1073716. r=dholbert, a=sledru - 506cfb41b8f3
Jean-Yves AvenardBug 1121876 - Treat negative WMF's output sample timestamp as zero. r=cpearce, a=sledru - e017341d2486
Jean-Yves AvenardBug 1121876 - Configure WMF decoder to output PCM 16. r=cpearce, a=sledru - cd88be2b57ac
Robert LongsonBug 1119698 - Ensure image elements take pointer-events into account. r=jwatt, a=sledru - 94e7cb795a05
Chris PearceBug 1123498 - Make MP4Reader skip-to-next-keyframe less aggressively. r=mattwoodrow, a=sledru - cee6bfbbecd7
Jean-Yves AvenardBug 1123507 - Prevent out of bound memory access. r=edwin, a=sledru - 8691f7169392
Geoff BrownBug 1105388 - Avoid robocop shutdown crashes with longer wait. r=mfinkle, a=test-only - ea7deca21c27
Anthony JonesBug 1116056 - Change MOZ_ASSERT() to NS_WARNING() in Box::Read(). r=jya, a=sledru - 54386fba64a7
Jean-Yves AvenardBug 1116056 - Ensure all atoms read are valid. r=mattwoodrow, a=sledru - 1f392909ff1f
Xidorn QuanBug 1121350 - Fix extra space on right from whitespace. r=roc, a=sledru - 598cd9c2e480
Kartikaya GuptaBug 1120252 - Avoid trying to get the APZCTreeManager if APZ isn't enabled. r=mattwoodrow, a=bustage - 1b21115851ef

Advancing ContentRequest for Innovation: Content and Ad Tech

We founded the Content Services group at Mozilla in order to build user-first content experiences and services within the Firefox browser that:

  • Respect user choice
  • Respect user data
  • Provide user value
  • Where possible, create new revenue opportunities for Mozilla and our partners

We have delivered Tiles in Firefox and have successfully tested some content partnerships. Our next objectives are:

  1. To provide a better content and advertising experience for our users within Firefox.  This may include but is not limited to the creation of new units, better personalization, and a higher volume of partners for varied content.
  2. To push the industry forward.  We are sure that there are content and advertising technology companies who aspire to the same principles we do but do not have the tools to act on them today.

That’s why in the next few days we will be contacting a number of content and advertising tech companies, both large and small, to discuss an RFI (“Request for Innovation” – a partnership proposal) for providing more automation and scale in our offering.  Scale allows us to deliver content to our users across the globe so we keep the experience for users fresh and current.  Automation allows us to do this on a scale that’s significant.  We have to engage with the industry’s state-of-the-art.  That means working programmatically (and this can be a very complex space to operate in).  We know that there are many people in ad tech who welcome our involvement – many have already joined the project.

One of Mozilla’s distinct qualities is its ability to bring in champions for our cause, from advocating for open standards to sharing the vision of an open mobile ecosystem, we are at our best when we focus on our own competence and bring others into our community.

This will not be business as usual.  We have a very clear sense of who we would and would not partner with, and any relationship we enter into has to support our values.  And while there may be some areas for discussion, we will not partner with organizations that blatantly disrespect the user.

We are explicit about this in the RFI: we want to work with partners who align with the Mozilla mission and our user-centric ethos to change and evolve the industry through this engagement.  As talked about in previous posts on this blog, we’re looking for support amongst our three core principles:

  • Trust: Always architect with honesty in mind. Ask, “Do users understand why they are being presented with content? Do they understand what fragments of their data underscore advertising decisions?”
  • Transparency: Always be transparent. “Is it clear to users why advertising and content decisions are made? Is it clear how their data is being consumed and shared?  Are they aware and openly contributing to the dialog?”
  • Control: Always put the control with the user. “Do users have the ability to control their own data? Do they have the option to be completely private, completely public or somewhere in between?”

Our team is working hard to deliver against these promises to our users:

  • We believe digital advertising can respect users’ privacy choices.
  • We can build useful products and experiences that users will choose to engage with, and provide an experience that delivers value.
  • We believe publishers should respect browser signals around tracking and privacy. Our content projects will respect DNT signals.
  • We will collect and retain the minimal amount of data required to provide value to users, advertisers, and publishers.
  • We will put users in control of product feature opt-in/out.

We launched the early version of our platform in the Firefox anniversary release (33.1) last November, and we’ve been learning and tweaking it since.  2015 is a big year for us to scale and build better experiences, and we’re looking forward to sharing them with you.

Feel free to reach out to us (contentservices@mozilla.com) or join our interest list.  

Christian HeilmannBrowsers, Services and the OS – oh my…

Yesterday’s two-hour Windows 10 briefing by Microsoft had some very interesting things in it (The Verge did a great job live-blogging it). I was waiting for lots of information about the new browser, code-named Spartan, but most of it was about Windows 10 itself. This is, of course, understandable, and shows that I maybe care about browsers too much. There was interesting information about Windows 10 being a free upgrade, Cortana integration on all platforms, and streaming games from Xbox to Windows and vice versa. The big wow factor at the end of the briefing was HoloLens, which makes interactivity like Iron Man had in his lab seem not that far-fetched any longer.

HoloLens in action

For me, however, the whole thing was a bit of an epiphany about browsers. I’ve always seen browsers as my main playground and got frustrated by the lack of standards support across them. I got annoyed by users not upgrading to new ones or companies making that hard. And I was disappointed by developers supporting their pet browsers and demanding that people use the same. What I missed out on was how amazing browsers themselves have become as tools for end users.

For end users the browser is just another app. The web is no longer the thing alongside your computing interaction; it is just a part of it. Just because I spend most of my day in the browser doesn’t make it the most important thing. In essence, the interaction of the web and the hardware you have is the really interesting part.

A lot of innovation I have seen over the years that was controversial at the time or even highly improbable is now in phones and computers we use every day. And we don’t really appreciate it. Google Now, Siri and now Microsoft’s Cortana integration into the whole system are amazingly useful. Yes, it is also a bit creepy, and there should be more granular insight into what gets crawled and what isn’t. But all in all, isn’t it incredible that computers tell us about upcoming flights, traffic problems and remind us about things we didn’t even explicitly set as a reminder?

Spartan demo screenshot by The Verge

The short, 8-minute Spartan demo in the briefing showed some incredible functionality:

  • You can annotate a web page with a stylus or mouse, or add comments to any part of the text
  • You can then collect these, share them with friends or view them offline later
  • Reading mode turns a web page into a one-column, easy-to-read version. Safari and mobile browsers like Firefox Mobile have this, and third-party services like Readability did it before.
  • The equivalent of Firefox’s awesome bar and Chrome’s Google Now integration is also in Windows, with Cortana being available anywhere in the browser.

Frankly, not all of that is new, but I have never used these features. I was too bogged down in what browsers cannot do, instead of checking what is already possible for normal users.

I’ve mentioned this a few times in talks lately: a lot of the innovation of add-ons, apps and products is merging with our platforms. Where in the past it was a sensible idea to build a weather app and expect people to go there or even pay for it, we now get this kind of functionality with our platforms. This is great for end users, but it means we have to be up to speed on what platform user interfaces look like these days instead of assuming we need to invent all the time.

Looking at this functionality made me remember a lot of things promised in the past but never really used (at least by me or my surroundings):

  • Back in 2001, Microsoft introduced Smart Tags, which caused quite a stir in the writing community as it allowed third-party commenting on your web content without notifying you. Many a web site added the MSSmartTagsPreventParsing meta tag (<meta name="MSSmartTagsPreventParsing" content="TRUE">) to disallow this. The annotation feature of Spartan is now this on steroids. Thirdvoice (Wayback Machine archive) was a browser add-on that did the same, but got creepy very quickly by offering you things to buy. Weirdly enough, Awesome Screenshot, an annotation plug-in, now also gets very creepy by offering you price comparisons for your online shopping. This shows that functionality like this doesn’t seem to be viable as a stand-alone business model, but very much makes sense as a feature of the platform.
  • Back in 2006, Ray Ozzie of Microsoft at eTech introduced the idea of the Live Clipboard. It was this:
    [Live Clipboard…] allows the copy and pasting of data, including dynamic, updating data, across and between web applications and desktop applications.
    The big thing about this was that it would have been an industrial-size use case for Microformats and could have given that idea the boost it needed. However, despite me pestering Chris Wilson of – then – Microsoft at @media AJAX 2006 about it, this never took off. Until now, it seems – except that the clippings aren’t live.
  • When I worked at Yahoo, BrowserPlus came out of a hack day: an extension to browsers that allowed easier file uploads and drag-and-drop between browser and OS. It also gave you desktop notifications. One of the use cases shown at the hack day was to drag and drop products from several online stores and then check out with all of them in one step. This, still, is not possible; I’d wager that legal problems and tax reasons are the main blockers there. Drag-and-drop, uploads and desktop notifications are now a reality without add-ons (see the sketch below). So we’re getting there.
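As a minimal sketch of how far that last point has come, desktop notifications are now a one-call standard API in browsers (this uses the callback form of Notification.requestPermission(), which browsers of this era support):

if ("Notification" in window) {
  Notification.requestPermission(function (permission) {
    if (permission === "granted") {
      // What BrowserPlus needed a whole add-on for is now built in.
      new Notification("Upload complete", { body: "Your file arrived safely." });
    }
  });
}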

This year will be very exciting. Not only do HTML5 and JavaScript get new features all the time, but it seems to me that browsers are becoming much, much smoother at integrating into our daily lives. This spells doom for a lot of apps. Why use an app when the functionality is already available with a simple click or voice command?

Of course, there are still many issues to fix, mainly offline and slow-connection use cases. Privacy and security are another problem. Convenient as it is, there should be some way to know what is listening in on me right now and where the data goes. But I, for one, am very interested in the current integration of services into the browser and the browser into the OS.

Henrik SkupinFirefox Automation report – week 49/50 2014

In this post you can find an overview of the work that happened in the Firefox Automation team during weeks 49 and 50.

Highlights

During the first week of December, the all-hands work week took place in Portland. Those were some great and inspiring days, full of talks, discussions, and conversations about various things. Given that I do not see my colleagues that often in real life, I took this opportunity to talk to everyone who is partly or fully involved in our automation team’s projects. There are various big goals in front of us, so clearing up questions and finding the next steps for tackling ongoing problems was really important. In the end we came away with a long list of to-do items and more clarity about previously unclear tasks.

In week 50 we got some updates landed for Mozmill CI. Due to a regression from the blacklist landing, our l10n tests hadn’t been executed for any locale of Firefox Developer Edition. Since the fix landed, we have seen problems with access keys in nearly every locale for a new test that covers the context menu of web content.

We would also like to welcome Barbara Miller to our team. She joined us as an intern via the FOSS Outreach Program run by GNOME. She will be with us until March and will mainly work on testdaybot and the conversion of Mozmill tests to Marionette. The latter project is called m21s, and details can be found on its project page. I will post more details about it soon.

Individual Updates

For more granular updates of each individual team member please visit our weekly team etherpad for week 49 and week 50.

Meeting Details

If you are interested in further details and discussions you might also want to have a look at the meeting agenda, the video recording, and notes from the Firefox Automation meeting of week 48. Due to the Mozilla all-hands workweek there was no meeting in week 49.

Stormy PetersAmazon Echo: 7 missing features

We have an Amazon Echo. It’s been a lot of fun and a bit frustrating and a bit creepy.

  • My youngest loves walking into the room and saying “Alexa, play Alvin and the Chipmunks”.
  • I like saying “Alexa, set a 10 minute timer.”
  • And we use it for news updates and music playing.

7 features it’s missing:

  1. “Alexa, answer my phone.” The Echo can hear me across the room. When my phone rings, I’d love to be able to answer it and just talk from wherever I am.
  2. “Alexa, tell me about the State of the Union Address last night.” I asked a dozen different ways and finally gave up and said “Alexa, play iHeartRadio Eric Church.” (I also tried to use it to cheat at Trivia Crack. It didn’t get any of the five questions I asked it right.)
  3. Integration with more services. We use Pandora, not iHeartRadio. We can switch. Or not. But ultimately, the more services Echo can integrate with, the more useful it becomes. It should search my email, Evernote, recipes, …
  4. Search. Not just the State of the Union, but pretty much any search I’ve tried has failed. “Alexa, when is the post office open today?” just added the post office to my to-do list. Or questions that any 2-year-old can answer: “Alexa, what sound does a dog make?” It does do math for my eight-year-old: “Alexa, what’s 10,000 times 1 billion?” and she spits out the answer to his delight. He’s going to be a billionaire.
  5. More lists. Right now you can add items to your shopping list, your to-do list and your music playlists. That doesn’t work well for a multi-person household. Each of us wants multiple lists.
  6. Do stuff. I’d love to say “Alexa, reply to that email from Frank and say …” Or “Alexa, buy the top rated kitchen glove on Amazon.” or “Alexa, when will my package arrive?”
  7. Actually cook dinner. Or maybe just order it. :)

What do you want your Amazon Echo to do?

Mike ShalCombining Nodes in Graphviz

Graphviz is a handy tool for making graphs. The "dot" command in particular is great for drawing dependency graphs, though for many real-world scenarios there are simply too many nodes to generate a useful graph. In this post, we'll look at one strategy for automatically combining similar nodes so that a more understandable dependency structure is revealed.

Mike HommeyBuying the Fx0

Two weeks ago, I went to a local au shop to get my hands on the Fx0, KDDI’s LG-manufactured Firefox OS phone, which was released in Japan for Christmas in a few flagship shops and on the web, and everywhere else in Japan on January 6.

They had it on display, like any other phone.

They didn’t have any stock, though, so I couldn’t bring one home; I ordered one instead.

Fast forward to two days ago: the shop called to say they had received it, and I went to pick it up yesterday.

Unboxing

Since the phone is not sold without a carrier subscription, the shop staff does the unboxing for you, to place the SIM card in the phone. But let’s pretend that didn’t happen.

The Fx0 comes in a gold box with a gold Firefox logo, wrapped in a white box with the characters “Fx0” embossed.

Opening the gold box, unsurprisingly, reveals the gold transparent phone.

Reading articles about this phone, I see opinions are divided about its look. I’m on the side that thinks it looks awesome, especially the back. It does look bulky, probably because of its rather sharp edges, but it’s not much larger than a Nexus 4. Nor is it much thicker.

One side has “au” embossed, and the other has “Fx0”.

One downside of the transparent theme is that it limited the types of materials that could be used, so it sadly feels like plastic to the touch. At least, that’s why I think it is the way it is.

At the bottom of the front is a single “home” button showing the Firefox logo.

Turning it on

Well, it was already on when I first got my hands on it, but in our pretense of unboxing, let’s say it was not, and that I turned it on for the first time (which, in some sense, is true). This is what it looks like when it boots:

After unlocking, the home screen appears.

I’ll be trying to use it as my main (smart)phone, and see how that goes. I’ll also test some of its KDDI specific features. Blog posts will come along. Stay tuned.

Kim MoirReminder: Releng 2015 submissions due Friday, January 23

Just a reminder that submissions for the Releng 2015 conference are due this Friday, January 23. 

It will be held on May 19, 2015 in Florence, Italy.

If you've done recent work like
  • migrating your build or test pipeline to the cloud
  • switching to a new build system
  • migrating to a new version control system
  • optimizing your configuration management system or switching to a new one
  • implementing continuous integration for mobile devices
  • reducing end-to-end build times
  • or anything else build, release, configuration and test related
we'd love to hear from you.  Please consider submitting a talk!

In addition, if you have colleagues that work in this space that might have interesting topics to discuss at this workshop, please forward this information. I'm happy to talk to people about the submission process or possible topics if there are questions.

Il Duomo di Firenze by ©eddi_07, Creative Commons by-nc-sa 2.0


I am on the committee organizing the Releng 2015 conference, which will be held on May 19, 2015 in Florence. The deadline for submitting papers is January 23, 2015.

http://releng.polymtl.ca/RELENG2015/html/index.html

If you have experience in:
  • migrating your build or test system to the cloud
  • updating your build process
  • migrating to a new version control system
  • optimizing or updating your configuration management system
  • implementing continuous integration for mobile devices
  • reducing build times
  • any change that has improved your build/test/release system
and would like to discuss your experience, send us a talk proposal!

Please forward this request to your colleagues and to anyone interested in these topics. If there are any questions about the submission process or the discussion topics, don’t hesitate to contact me.

(Thanks Massimo for helping with the Italian translation).

More information
Releng 2015 web page
Releng 2015 CFP now open

Air MozillaPassages: Leveraging Machine Virtualization and VPNs to Isolate the Browser from the Local Desktop

Passages: Leveraging Machine Virtualization and VPNs to Isolate the Browser from the Local Desktop Lance Cottrell, chief scientist for Ntrepid, presents Passages, a secure browsing platform for business which leverages machine virtualization and VPNs to completely isolate the browser...

Ian BickingA Product Journal: The Technology Demo

I’m going to try to journal the process of a new product that I’m developing in Mozilla Cloud Services. My previous and first post was Conception.

As I finished my last post I had a product idea built around a strategy (growth through social tools and sharing) and a technology (freezing or copying the markup). But that’s not a concise product definition centered around user value. It’s not even trying. The result is a technology demo, not a product.

In my defense I’m searching for some product, I don’t know what it is, and I don’t know if it exists. I have to push this past a technology demo, but if I have to start with a technology demo then so it goes.

I’ve found a couple specific experiences that help me adapt the product:

  • I demo the product and I sense an excitement for something I didn’t expect. For example, a view that I thought was just a logical necessity might be what most appeals to someone else. To do this I have to show the tool to people, and it has to include things that I think are somewhat superfluous. And I have to be actively reading the person viewing the demo to sense their excitement.

  • Remind myself continuously of the strategy. It also helps when I remind other people, even if they don’t need reminding – it centers the discussion and my thinking around the goal. In this case there’s a lot of personal productivity use cases for the technology, and it’s easy to drift in that direction. It’s easy because the technology facilitates those use cases. And while it’s cool to make something widely useful, that won’t make this tool work the way I want as a product, or work for Mozilla. (And because I plan to build this on Mozilla’s dime it better work for Mozilla! But that’s a discussion for another post.)

  • I’ll poorly paraphrase something I’m sure someone can source in the comments: a product that people love is one that makes those people feel great about themselves. In this case, it makes them feel like a journalist and not just a crank, or makes them feel like they are successfully posing as a professional, or makes them feel like what they are doing is appreciated by other people, or makes them feel like an efficient organizer. In the product design you can exalt the product, try to impress people, try to attract compliments on your own prowess, but love comes when a person is impressed with themselves when they use your product. This advice helps keep me from valuing cleverness.

A common way to pull people out of technology-focused thinking is to ask “what problem does this solve?” While I appreciate this question more than I used to, it still makes me bristle. Why must everything be focused on problems? Why not opportunities! Why? An answer: problems are cases where a person has already articulated a tension and an openness to resolution. You have a customer in waiting. But must we confine ourselves to the partially formed conventional wisdom that makes something a “problem”? (One fair answer to this question is: yes. I remain open to other answers.) Maybe a more positive alternative to “what problem does this solve?” is “what does this let people do that they couldn’t do before?”

What I’m certain of is that you should constantly remember the people using your tool will care most about their interests, goals, and perspective; and will not care much about the interests, goals, or perspective of the tool maker.

So what should this tool do? If not technology, what defines it? A pithy byline might be share better. I don’t like pithy, but maybe a whole bag of pithy:

  • Improving on the URL
  • Own what you share
  • Share content, not pointers
  • Share what you see, anything you see
  • Every share is a message, make it your message
    Dammit, why do I feel compelled to noun “share”?
  • Share the context, the journey, not just the web destination
  • Own your perspective, don’t give it over to site owners
  • Know how and when people see what you share
  • Build better content, even if the publisher doesn’t
  • Trade in content, not promises for content
  • Copy/enhance/share

No… quantity doesn’t equal quality, I suppose. Another attempt:

When you share, you are a publisher. Your medium is the IM text input, or the Facebook status update, or the email composition window. It seems casual, it seems pithy, but that individual publishing is what the web is built on. I respect everyone as a publisher, every medium as worthy of improvement, and this project will respect your efforts. We will try to make a tool that can make every instance just a little bit better, simple when all you need is simple, polished if you want. We will defer your decisions because you should decide in context, not make decisions in the order that makes our work easier; we will be transparent to you, your audience, and your source; respect for the reader is part of our brand promise, and that adds to the quality of your shares; we believe content is a message, a relationship between you and your audience, and there is no universally appropriate representation; we believe there is order and structure in information, but only when that information is put to use; we believe our beliefs are always provisional and tomorrow it is our prerogative to rebelieve whatever we want most.

Who is we? Just me. A pretentiously royal we. It can’t stay that way for long though. More on that soon…

Gervase Markham“Interactive” Posters

Picture of an advertising poster with a sticker alongside bearing a QR code and short URL

I saw this on a First Capital Connect train here in the UK. What could possibly go wrong?

Ignoring the horrible marketing-speak “Engage with this poster” header, several things can go wrong. I didn’t have NFC, so I couldn’t try that out. But scanning the QR code took me to http://kbhengage.zpt.im/u/aCq58 which, at the time, was advertising for… Just Eat. Not villaplus.com. Oops.

Similarly, texting “11518” to 78400 produced:

Thanks for your txt, please tap the link:
http://kbhengage.zpt.im/u/b6q58

Std. msg&data rates may apply
Txt STOP to end
Txt HELP for help

which also produced content which did not match the displayed poster.

So clearly, the first risk is that the electronic interactive bits are not part of the posters themselves, and so the posters can be changed without the interactive parts being updated to match.

But also, there’s the secondary risk of QR codes – they are opaque to humans. Someone can easily make a sticker and paste a new QR code on top of the existing one, and no-one would see anything immediately amiss. But when you tried to “engage with this poster”, it would then take you to a website of the attacker’s choice.

Mozilla FundraisingShould we use the Mozilla or Firefox logo on our donation form?

Our end of year fundraising campaign has finished now, but while it’s fresh in our minds we still want to write up and share the results of some of the A/B tests we ran during the campaign that might be … Continue reading

Pete MooreWeekly review 2015-01-21

This week I’ve started work on the Go port of the taskcluster client: https://github.com/petemoore/taskcluster-client-go.

This week I learned about AMQP, goroutines and channels, Hawk authentication, and the TaskCluster architecture, and started using some Go libraries.

Other activities:

  • b2g bumper code reviews

Bogomil ShopovWhy is Bulgaria Web Summit 2015 so different from any other event?

When I talk to sponsors and even to friends about the Summit, they always ask me what makes our event different.

So here’s the secret:

We started this event 11 years ago (under a different name) as an effort to create something amazing and affordable for IT folks in Bulgaria, while never compromising on quality. The main purpose of the event is for our attendees to learn new things they can apply in their work the very next day, and to recoup the “investment” they have made in the conference.

Speakers

At most of the conferences I’ve been to in Europe, well-trained company folks talk about their success at Fakebook or Playpal and how to clone it at your company. This doesn’t work, and you will not see it at our event – and at those conferences you have to spend tons of money just to listen to the guy.

At most of the conferences I’ve been to in Europe, well-respected gurus talk about some programming art – they do that all the time; they just talk, they don’t code anymore. You will not see this at our event. We invite only professionals who share their experience with you, and the next day they will not depart for another event – they will go back to doing the thing they do best.

We have had amazing speakers over the years. Some of them became friends of the event and come back again and again, without us paying them a dime. We build relationships with our speakers, because we are Balkan people and this is what we do.

Many people still remember Monty’s Black Vodka, Richard Stallman‘s socks and many other stories that must be kept secret :)


The audience

We do have the best audience ever! I mean it. We have people who haven’t missed an event since 2004. They are honest: if you screw up they will tell you, and they will give you kudos if you do something amazing. In most years, the tickets sell out months before the event, even without a schedule and even before the speakers are known, because we have proved the event is good.

We have people who met at our event and got married, we have people who met at our event and started businesses together, we have companies that hired great professionals because of our events; we have kicked off many careers by showing people great technologies and ways to use them.


The money

Of course it’s not all about money. We do need it to make the event great, but our main goal is not to make money out of it. As you can see, the entrance fee is low – for the same event elsewhere in Europe (same speakers) you would have to pay 5-10 times more. We realize that we live in a different country and the conditions are different, but we are trying to find a way to keep the fee low and at the same time keep up the quality of the talks and emotions. We can achieve this only thanks to our sponsors. Thank you, dear sponsors!


Experiments

We do experiment a lot. We are trying to make a stress-free event, full of nice surprises, parties and interesting topics.

We are not one of those conferences where you get tons of coffee in the breaks (sometimes we don’t even have breaks, or coffee for that matter – just beer!) and a schedule 3 months in advance, or where you can sit and pretend you are listening because someone paid the fee for you. With us you are a part of the event all the time: we have games, hackathons and other stuff you can take part in. We give you the bread and butter; use your mind to make a sandwich. :)


We grow

We failed many times at many tasks, but we are learning and improving. We are not a professional team doing this for the money. We are doing this for fun and to help our great and amazing community. We count on volunteers. Thank you, dear volunteers!


Marketing?

We are one of the few events that don’t have the event’s history on their website. Duh! We believe that if you visit us once (because a friend told you about us), you don’t need a silly website to convince you to come again :) We do not spend (a lot of) money on marketing or professional services. We count on word of mouth, and on you. Thank you!

Join us and see for yourself!

Gervase MarkhamYour Top 50 DOS Problems Solved

I was clearing out some cupboards at our family home when I came across a copy of “Your Top 50 DOS Problems Solved”, a booklet published free with “PC Answers” magazine in 1992 – 23 years ago. PC Answers has sadly not survived, closing in 2010, and its domain is now a linkfarm. However, the sort of problems people had in those days make fascinating reading.

Now I’ve finished blogging quotes from “Producing Open Source Software” (the updated version of which has, sadly, yet to hit our shelves), I think I’ll blog through these on an occasional basis. Expect the first one soon.

Air MozillaBay Area useR Group Official Meetup

Bay Area useR Group Official Meetup The Bay Area R Users Group hosts Ryan Hafen, Hadley Wickham and Nick Elprin. Ryan Hafen - Tessera is a statistical computing environment that enables...

Andreas GalWebVR is coming to Firefox Nightly

In 2014 Mozilla started working on adding VR capabilities to the Web. Our VR team proposed a number of new Web APIs and made an experimental VR build of Firefox available that supports rendering Web-based VR content to Oculus Rift headsets.

Consumer VR products are still in a nascent state, but clearly there is great promise for this technology. We have enough confidence in the new APIs we have proposed that we are today taking the step of integrating them into our regular nightly Firefox builds. Head over to MozVR for all the details, and if you own an Oculus Rift headset or mobile VR-capable hardware we support, give it a spin!
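If you want to poke at it from script, here is a minimal sketch based on the experimental navigator.getVRDevices() entry point exposed in these builds; the API is explicitly experimental and subject to change, so treat the details as provisional:

if (navigator.getVRDevices) {
  navigator.getVRDevices().then(function (devices) {
    // Headsets appear as HMDVRDevice instances; position sensors are
    // separate devices in this early API.
    var hmd = devices.filter(function (d) {
      return d instanceof HMDVRDevice;
    })[0];
    if (hmd) {
      console.log("Found a VR headset: " + hmd.deviceName);
    }
  });
}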




Matt ThompsonWhat we’re working on this Heartbeat

Transparency. Agility. Radical participation. That’s how we want to work on Webmaker this year. We’ve got a long way to go, but we’re building concrete improvements and momentum — every two weeks.

We work mostly in two-week sprints or “Heartbeats.” Here are the priorities we’ve set together for the current Heartbeat, ending January 30.

Questions? Want to get involved? Ask questions in any of the tickets linked below, say hello in #webmaker IRC, or get in touch with @OpenMatt.

What we’re working on now

See it all (always up to date): http://build.webmaker.org/now 

Or see the work broken down by:

Learning Networks

  • Design & test new teach.webmaker.org wireframes
  • Get the first Webmaker Club curriculum module ready for testing
  • Finalize our documentation for Badges / Credentialing
  • Document our Q1 / Q2 plan for Training

Learning Products

Desktop / Tablet:

  • Improve user on-boarding (Phase II)
  • Improve our email communications after users sign up
  • Create better moderation functionality for webmaker.org/explore (formerly known as “the gallery”)
  • Build a unified tool prototype (Phase II)

Mobile

  • Draft demo script and plan our marketing activities for Mobile World Congress
  • Make localization improvements to the Webmaker App
  • Build and ship device integrations and a screenshot service for Webmaker App
  • Distribute the first draft of our Kenya Field Report

Engagement

  • Prep and execute Data Privacy Day campaign (Jan 28)
  • Prep for Net Neutrality Campaign (Feb 5)
  • Draft a branding plan for Learning Products and Learning Networks
  • Design a splash page for Mobile World Congress

Planning & Process

  • Design and execute a communications plan on our overall 2015 plan
  • Document all our Q1 goals and KPIs in one spot
  • Add those quarterly goals to our dashboard
  • Ship updated documentation to build.webmaker.org (including: “How we do Heartbeats” & “How to use GitHub Issues”)

Air MozillaMartes mozilleros

Martes mozilleros: a bi-weekly meeting to talk about the state of Mozilla, the community, and its projects.

Raniere SilvaMathML January Meeting


This is a report about the Mozilla MathML January IRC Meeting (see the announcement here). The topics of the meeting can be found in this PAD (local copy of the PAD) and the IRC log (local copy of the IRC log) is also available.

The next meeting will be on March 11th at 8pm UTC (check the time at your location here). Please add topics to the PAD.

Note

Our February meeting was cancelled. =(

Read more...

Cameron KaiserUpgrading the unupgradeable: video card options for the Quad G5

Now that the 2015 honeymoon and hangovers are over, it's back to business, including the annual retro-room photo spread (check out the new pictures of the iMac G3, the TAM and the PDP-11/44). And, as previously mentioned in my ripping yarn about long-life computing -- by the way, this winter the Quad G5's cores got all the way down to 30 C on the new CPU assembly, which is positively arctic -- 2015 is my year for a hard disk swap. I was toying with getting an apparently Power Mac-compatible Seagate hybrid SSHD that Martin Kukač was purchasing (perhaps he'll give his capsule review in the comments or on his blog?), but I couldn't find out whether it fails gracefully to the HD when the flash eventually dies, and since I do large amounts of disk writes for video and development I decided to stick with a spinning disk. The Quad now has two 64MB-buffer 7200rpm SATA II Western Digital drives, and the old ones went into storage as desperation backups; while 10K or 15Krpm drives were a brief consideration, their additional heat may be problematic for the Quad (especially with summers around here) and I think I'll go with what I know works. Since I'm down to only one swap left, I think I might stretch the swap interval out to six years, and that will get me through 2027.

At the same time I was thinking of what more I could do to pump the Quad up. Obviously the CPU is a dead-end, and I already have 8GB of RAM in it, which Tiger right now indicates I am only using 1.5GB of (with TenFourFox, Photoshop, Terminal, Texapp, BBEdit and a music player open) -- I'd have to replace all the 1GB sticks with 2GB sticks to max it out, and I'd probably see little if any benefit except maybe as file cache. So I left the memory alone; maybe I'll do it for giggles if G5 RAM gets really cheap.

However, I'd consolidated the USB and FireWire PCIe cards into a Sonnet combo card, so that freed up a slot and meant I could think about the video card. When I bought my Quad G5 new I dithered over the options: the 6600LE, 7800GT and 2-slot Quadro FX 4500, all NVIDIA. I prefer(red) ATIAMD in general because of their long previous solid support for the classic Mac OS, but Apple only offered NVIDIA cards as BTO options at the time. The 6600LE's relatively anaemic throughput wasn't ever in the running, and the Quadro was incredibly expensive (like, 4x the cost!) for a marginal increase in performance in typical workloads, so I bought the 7800GT. Overall, it's been a good card; other than the fan failing on me once, it's been solid, and prices on G5-compatible 7800GTs are now dropping through the floor, making it a reasonably inexpensive upgrade for people still stuck on a 6600. (Another consideration is the aftermarket ATI X1900 GT, which is nearly as fast as the 7800GT.)

However, that also means that prices on other G5-compatible video cards are also dropping through the floor. Above the 7800GT are two options: the Quadro FX 4500, and various third-party hacked video cards, most notably the 2-slot 7800GTX. The GTX is flashed with a hacked Mac 7800GT ROM but keeps the core and memory clocks at the same high speed, yielding a chimera card that's anywhere between 15-30% faster than the Quadro. I bought one of these about a year and a half ago as a test, and while it was noticeably faster in certain tasks and mostly compatible, it had some severe glitchiness with older games and that was unacceptable to me (for example, No One Lives Forever had lots of flashing polygons and bad distortion). I also didn't like that it didn't come with a support extension to safely anchor it in the G5's card guide, leaving it to dangerously flex out of the card slot, so I pulled it and it's sitting in my junk box while I figure out what to do with it. Note that it uses a different power adapter cable than the 7800 or Quadro, so you'll need to make sure it's included if you want to try this card out, and if you dislike the lack of a card guide extension as much as I do you'll need a sacrificial card to steal one from.

Since then, Quadro prices have plummeted as well, so I picked up a working-pull used Apple OEM FX 4500 on eBay for about $130. The Quadro has 512MB of GDDR3 VRAM (the same as the 7800GTX and double the 7800GT), two dual-link DVI ports and a faster core clock; although it also supports 3D glasses, something I found fascinating, that doesn't seem to work with LCD panels, so I can't evaluate it. Many things are not faster, but some things are: 1080p video playback is now much smoother because the Quadro can push more pixels, and high-end games now run more reliably at higher resolutions, as you would expect, without the glitchiness I got in older titles with the 7800GTX. Indeed, returning to the BareFacts graph, the marginal performance improvement and the additional hardware rendering support are now, at least for me, worth $130 (I just picked up a spare for $80); it's a fully kitted and certified OEM card (no hacks!), and it uses the same power adapter cable as the 7800GT. One other side benefit is that, counterintuitively, the GPU runs several degrees cooler (despite being bigger and beefier) and the fan is nearly inaudible, no doubt due to that huge honking heatsink.

It's not a big bump, but it's a step up, and I'm happy. I guess all that leaves is the RAM ...

In TenFourFox news, I'm done writing IonPower (phase 1). Phase 2 is compilation. That'll be some drudgery, but I think we're on target for release with 38ESR.

Gervase MarkhamCredit as Currency

Credit is the primary currency of the free software world. Whatever people may say about their motivations for participating in a project, I don’t know any developers who would be happy doing all their work anonymously, or under someone else’s name. There are tangible reasons for this: one’s reputation in a project roughly governs how much influence one has, and participation in an open source project can also indirectly have monetary value, because some employers now look for it on resumés. There are also intangible reasons, perhaps even more powerful: people simply want to be appreciated, and instinctively look for signs that their work was recognized by others. The promise of credit is therefore one of the best motivators the project has. When small contributions are acknowledged, people come back to do more.

— Karl Fogel, Producing Open Source Software

Christian HeilmannBe my eyes, my brain, my second pair of eyes…

(cross published on Medium, in case you want to comment on paragraphs).

In the last few days, the “Be My Eyes” App made quite a splash. And with good reason, as it is a wonderful idea.

Be my eyes

The app plans to connect non-sighted people with sighted ones when they are stuck with a certain task. You ask for a pair of eyes, connect over a smartphone, show the problem you have on video, and get a volunteer human to help you out with a video call. You literally offer to be the eyes for another person.

This is not that new; for example, there were services that allowed for annotation of inaccessible web content (WebVisum, IBM’s now-defunct Social Accessibility project) before. But Be My Eyes is very pretty and makes it much easier to take part and help people.

Only for the richer eyes…

Right now the app is iOS only, which is annoying. Whilst the accessibility features of iOS used to be exceptional, they seem to be losing quality with iOS 8. Of course, the other issue is the price. Shiny Apple things are expensive; Android devices and computers with built-in cameras less so. The source code of Be My Eyes is on GitHub, which is a great start. We might see versions of it on Android, and WebRTC-driven versions for the web and mobile, soon.

Concerns mentioned

As with any product of this ilk, concerns and criticism happen quickly:

  • This may portray people with disabilities as people who are dependent on others to function. In essence, all you need to do is remove barriers. I know many very independent blind people, and it is depressing how many prejudices are still around that people with disabilities need our help for everything. They don’t. What they need is fewer people who make assumptions about abilities when building products.
  • There is a quality concern here. We assume that people signing up want to help and have good intentions. However, nothing stops trolls from using this too and deliberately giving people wrong advice. There are people who post seizure-inducing GIFs on epilepsy forums, for example. For a sociopath who wants to hurt people, this could be “fun” to abuse. Personally, I want to believe that people are better than that, but only one incident where a blind user gets harmed “for the lulz” might be enough to discredit the whole product.

Extending the scope of this app

I don’t see why this app could not become more than it is now. We all could need a second pair of eyes from time to time. For example:

  • to help with some translation,
  • to recognise what breed a certain puppy is,
  • to help us find inspiration for a painting,
  • to learn how to fix a certain appliance in my kitchen without destroying it,
  • to have some locals show us which roads are easier to walk,
  • to have an expert eye tell me if my makeup looks good and what could be done,
  • to get fashion advice on what I could mix and match in my closet to look great.

Some of these have great potential for monetisation; others were done before and died quickly (the local-experts one was a product I was involved in at Yahoo called Yocal, which never saw the light of day and could have been Foursquare years before Foursquare).

Again, this would be nothing new: expert peer to peer systems have come and gone before. When I worked on Yahoo Answers there were discussions to allow for video upload for questions and answers. A prospect that scared the hell out of me seeing that “is my penis big enough” was one of the most asked questions in the Yahoo Answers Men’s health section (and any other, to be fair).

The defunct Google Answers had the idea to pay experts to answer your questions quickly and efficiently. Newer services like LiveNinja and AirPair do this with video chats (and Google, of course may want Hangouts to be a player in that space).

The issues that all of these services face are quality control and safety. Sooner or later, each of the original attempts at this failed because of these. Skype services where you pay for audio or video advice very quickly became camsex hangouts or phone-sex alternatives. This even happens in the offline world – my sister used to run a call centre, and they found out that one of their employees was offering phone-sex services to eligible men on the line. Yikes.

Another issue is retainability and re-use. It is no fun trying to find a certain part of a video without a timed transcript. This can be automated to a degree – YouTube’s subtitling is a good start – but that brings up the question: who else reads the private coaching session you had?

Can this be the start or will hype kill it again?

If anything, the user interface and interaction pattern of Be my Eyes is excellent, and the availability of video phones and chat abilities like WebRTC make it possible to have more of these services soon.

In the coding world, real live interaction is simple these days. JSFiddle’s collaboration button allows you to code together, JSBin allows people to watch you while you code, and Mozilla’s TogetherJS allows you to turn any web page into a live audio and video chat with multiple cursors.

We use Google Docs collaboratively, and we probably have some live chat going with our colleagues. The technology is there. Firefox now has a built-in peer-to-peer chat system called Hello. Wouldn’t it be cool to have an API for that to embed it in your products?
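To make that concrete, here is a minimal sketch of the WebRTC building block underneath all of these services, using the vendor-prefixed getUserMedia of the day. A real “be my eyes” style service would send the stream to the helper over an RTCPeerConnection rather than into a local video element:

navigator.getUserMedia = navigator.getUserMedia ||
                         navigator.mozGetUserMedia ||
                         navigator.webkitGetUserMedia;

navigator.getUserMedia(
  { video: true, audio: true },
  function (stream) {
    // Show the local camera preview.
    var video = document.querySelector("video");
    video.src = window.URL.createObjectURL(stream);
    video.play();
  },
  function (err) {
    console.error("Could not access camera/microphone: " + err.name);
  }
);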

The thing that might kill these is hype and inflated demands. Yahoo Answers was an excellent idea to let the human voice and communication patterns prevail over algorithmic results. It failed when all it was measured against was the number of users and interactions in the database. This is when quality was thrown out the window and “How is babby formed” got through without a blip on the QA radar.

Let’s hope that Be my Eyes will survive the first spike of attention and get some support of people who are OK with a small amount of users who thoroughly want to help each other. I’d like to see that.

Andrea MarchesiniRequestSync API

Last week a new API just for B2G certified apps landed in mozilla-central: the RequestSync API. The basic purpose of this API is to allow apps to schedule tasks, while also letting the user decide when these tasks have to be executed.

Consider the following example: your mail app wants to synchronize your mailbox regularly.

Here is what it does:

navigator.sync.register('mail-synchronizer',
                        { minInterval: 120 /* 2 minutes */,
                          oneShot: false,
                          data: { accountID: 123 },
                          wifiOnly: true,
                          wakeUpPage: location.href }).then(
function() {
  console.log("mail-synchronizer task has been registered");
},
function() {
  console.log("Something bad happened.");
});

Through this process, the app has registered a new task, called ‘mail-synchronizer’. It will be scheduled every 2 minutes if the device is connected to the wifi. As you can see, the second parameter of the register method is a configuration object. Here are some more details:

  • minInterval is the number of seconds between one execution of the task and the next. This is not entirely precise, but an indication for the RequestSyncService: it can happen that the device is busy doing something, and the task may be postponed for a while.

  • oneShot: boolean. If we want just 1 execution of the task, we should set it to “true”.

  • data: this can be anything. It’s something useful for the app.

  • wifiOnly: by default it is true and it informs the RequestSyncService about the fact that this task must be executed only if we are using a wifi connection.

  • wakeUpPage: this is the page that will be activated when the task is about to be executed.

The register() method returns a Promise, and if it is called more than once, the new registration overwrites the previous task configuration.

The navigator.sync object has other methods – unregister(), registrations() and registration() – but we can mostly skip them for now; a guessed-at sketch of unregister() follows.
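The post doesn’t show unregister()’s exact signature, so the following is inferred purely from its name and the register() example above – treat it as an assumption:

navigator.sync.unregister('mail-synchronizer').then(
function() {
  // Assumed: resolves once the task is removed from the RequestSyncService.
  console.log("mail-synchronizer task has been unregistered");
},
function() {
  console.log("Something bad happened.");
});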

When the task is executed, the wakeUpPage is activated and it receives a message in this way:

navigator.mozSetMessageHandler('request-sync', function(e) {
  ...
});

The message contains these attributes:

  • task - the task name. In our example this will be ‘mail-synchronizer’.

  • data - what we sent in the registration: { accountID: 123 }.

  • lastSync - This is a DOMTimeStamp containing the date/time of the last execution of this task.

  • minInterval, oneShot, wifiOnly, wakeUpPage - these attributes will be the same as what we set during the registration.

Now, back to our example: the synchronization of the mailbox may take a while. In order to help the RequestSyncService schedule tasks correctly, and to keep the device alive for the whole operation (we internally use a CPU wake lock), the mail app may do something like this:

navigator.mozSetMessageHandler('request-sync', function(e) {
  // The synchronization of the mailbox will take a while.
  // Let's set a promise object and resolve/reject it when needed.
  navigator.mozSetMessageHandlerPromise(new Promise(function(resolve, reject) {
    do_the_magic_synchronization(resolve, reject);
   }));
});

By setting a message handler promise, we know that the device will be kept alive until the promise is resolved or rejected. Furthermore, no other tasks will be executed in the meantime (this is not strictly correct: if the promise takes more than a few minutes to be resolved or rejected, the RequestSyncService will continue scheduling other tasks).

As far as the Settings app is concerned, the RequestSync API also has a task manager, available only to certified apps with a particular permission; in reality, just the Settings app will have this permission.

Using the requestSync API, the settings app is able to do:

navigator.syncManager.registrations().then(
  function(results) {
     ...
  }
);

In this piece of code, results is an array of RequestSyncTask objects. Each object is an active task with attributes such as lastSync, wakeUpPage, oneShot, minInterval, wifiOnly, data, and task (the name). From such an object the Settings app can change the policy for a task:

for (var i = 0; i < results.length; ++i) {
  if (results[i].task == 'mail-synchronizer' &&
      results[i].app.manifestURL == 'http://the/mail/app/manifest.url') {
    results[i].setPolicy('disabled');
  }
}

setPolicy() receives two parameters, the second one optional (a short sketch follows the list):

  • state, this is the new state of the task. It can be enabled, disabled or wifiOnly.

  • overwrittenMinInterval: a new minInterval value.
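For example, assuming the parameter order just described, the Settings app could restrict our mail task to wifi while stretching its interval to ten minutes (a sketch, not from the original post):

// Continues the registrations() example above: results[i] is our task.
results[i].setPolicy('wifiOnly', 600);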

This is all. Any feedback is welcome!

Axel HechtOn the process to code

gps blogs a bunch about his vision on how coding on Firefox should evolve. Sadly there’s a bunch of process debt in commenting, so I’ll put my comments in a medium where I already have an account.

A lot of his thinking is based on Code First. There’s an earlier post with that title, but with less content on it, IMHO.

In summary, code first capitalizes on the fact that any change to software involves code and therefore puts code front and center in the change process.

I strongly disagree.

Code is really just one artifact of many of creating and maintaining a software product.

Or, as Shane blogged,

First rate hackers need process too

Bugzilla is much more than a review tool, it’s a store of good and bad ideas. It’s where we document which of our ideas are bad, or just not good enough. It’s where we build out our shared understanding of where we are, and where we want to be.

I understand how gps might come to his conclusions, mostly because I find myself in a bucket that might be like his: one where your process is really just documenting that you agree with yourself, and that you’re going forward with it. And then that you did. Yeah, that’s tedious and boring. Which is why I’ve put on my module-owner hat more often lately, and landed changes without a bug, and with rs=foopy. Old guard.

Without applying my basic disagreement to several paragraphs over several posts, some random notes on a few of gps’ thinking and posts:

I agree that we need to do something about reviews. Splinter’s better than plain text, but not quite there. GitHub PRs feel pretty horrible, at least in the way we use them for Gaia. As soon as things get slightly interesting, no status in either the PR nor Bugzilla makes any sense anymore. I’m curious about MozReview/ReviewBoard, though the process to get things hooked up there is steep. I’ll give it a few more tries for the few things I do on Firefox, and improvements on code and docs. The one time I used it, I felt very new.

Much of the list gps has on process debt are things he cares about, and very few other people do. In particular, newcomers won’t even know to bother, unless they happen to come across that post by accident. The other questions are not process debt, but good things to learn, and to learn early.

Gervase MarkhamThe Zeroth Human Freedom

We who lived in concentration camps can remember those who walked through the huts comforting others, giving away their last piece of bread. They may have been few in number, but they offer sufficient proof that everything can be taken from a person but the last of the human freedoms – to choose one’s attitude to any set of circumstances – to choose our own way.

This quote is from From Death-Camp to Existentialism (a.k.a. Man’s Search for Meaning) by Viktor Frankl. Frankl was an Austrian Jew who spent four years in concentration camps, and afterwards wrote a book about his experiences which has sold over 10 million copies. This quote was part of a sermon yesterday (on contentment), but I share it here because it’s very powerful, and I think it’s also very relevant to how communities live together – with Mozilla being a case in point.

Choosing one’s attitude to a set of circumstances – of which “someone has written something I disagree with and I have become aware of it” is but a small example – is an ability we all have. If someone even in the unimaginable horror of a concentration camp can still retain it, we should all be able to exercise it too. We can choose to react with equanimity… or not. We can choose to be offended and outraged and angry… or not. To say that we cannot do this is to say that we have lost the most basic of human freedoms. No. We are all more than the sum of our circumstances.

Adam LoftingThe week ahead: 19 Jan 2015

January

If all goes to plan, I will:

  • Write a daily working process
  • Use a public todo list, and make it work
  • Catch up on more email from time off
  • Ship V1 of Webmaker Metrics retention dashboard
  • Work out a plan for aligning metrics work with dev team heartbeats
  • Don’t let the immediate todo list get in the way of planning long term processes
  • Invest time in working open
  • Wrestle with multiple todo list systems until they (or I) work together nicely
  • Survive a 5 day week (it’s been a while)
  • Write up final testing blog posts from EOY before those tests are forgotten
  • Book data practices kick-off meetings with all teams

To try and solve some of the process challenges, I’ve gone back to a tool I built a couple of years ago (Done by When) and I’m breaking it a little bit to make it useful to me again. This might end up being an evening time project to learn about some of the new tech the Webmaker team are using this year (particularly rewriting the front end with React). I find it useful to have a side-project to use as a playground for learning new things.

Anyway, have a great week. I’ll try and write up some more notes at the end.

Mark CôtéBMO 2014 Statistics

Everyone loves statistics! Right? Right? Hello?

tap tap

feedback screech

Well anyway, here are some numbers from BMO in 2014:

BMO Usage:

33 243 new users registered
45 628 users logged in
23 063 users performed an action
160 586 new bugs filed
138 127 bugs resolved
100 194 patches attached

BMO Development:

1 325 bugs filed
1 214 bugs resolved

Conclusion: there are a lot of dedicated Mozillians out there!

Chris IliasMy Installed Add-ons – Clippings

I love finding new extensions that do things I never even thought to search for. One of the best ways to find them is through word of mouth. In this case, I guess you can call it “word of blog”. I’m doing a series of blog posts about the extensions I use, and maybe you’ll see one that you want to use.

The first one is Context Search, which I’ve already blogged about.

The second is Clippings. Clippings allows you to keep pieces of text to paste on demand. If you frequently answer email messages with one of a set of replies, you can paste the reply you want via the context menu. In my case, I take part in support forums, which means I respond to frequently asked questions, typing the same answers over and over. Clippings allows me to have canned responses, so I can answer more support questions in less time.

To save a piece of text as a clipping, select it, then right-click and go to “Clippings”, then “New from Selection”. You’ll then be asked to name the clipping and choose where to save it among your list of clippings. It supports folders, too.

When you want to use that clipping just right-click on the text area, then go to “Clippings” and select the clipping you want to paste.

Clippings is also very useful in Mozilla Thunderbird.

You can install it via the Mozilla Add-ons site.

Tess JohnData Migration in Django

Changing the database schema is one side of the equation, but often a migration involves changing data as well.

Consider this case

Bug 1096431 - Able to create tasks with duplicate names. Task names should be unique.

class Task(models.Model):

    name = models.CharField(max_length=255, verbose_name='title')

    start_date = models.DateTimeField(blank=True, null=True)
Solution

A schema migration, of course – i.e., add unique=True. But if you apply this migration to production as-is, it will cause an IntegrityError, because you are making a database column unique while its contents are not unique. To solve this you first need to find all tasks with duplicate names and rename them to something unique. That is a data migration rather than a schema migration. Thanks to Giorgos Logiotatidis for the guidance.

Step 1: Create a few tasks with the same name, say ‘task’.

Step 2: This is the data migration part:

python manage.py datamigration Tasks _make_taskname_unique.py

Step 3: When you open up the newly created file, you can see the skeleton of the forwards and backwards functions. I wrote code in forwards() to rename the duplicate task names.

Step 4: Now add the unique=True keyword to the field and apply a schema migration.

Step 5: Finally, migrate the Tasks model. You can now see the duplicate tasks get renamed to ‘task 2’, ‘task 3’, etc.


K Lars LohnThe Smoothest Migration

I must say that it was the smoothest migration that I have ever witnessed. The Socorro system data has left our data center and taken up residence at Amazon.

Since 2010, HBase has been our primary storage for Firefox crash data.  Spread across something like 70 machines, we maintained a constant cache of at least six months of crash data.  It was never a pain-free system.  Thrift, the system through which Socorro communicated with HBase, seemed to develop a dislike for us from the beginning.  We fought it and it fought back.

Through the adversity that embodied our relationship with Thrift/HBase, Socorro evolved fault tolerance and self-healing.  All connections to external resources in Socorro are wrapped with our TransactionExecutor, a class that recognizes certain types of failures and executes a backing-off retry when a connection fails.  It's quite generic: it wraps our connections to HBase, PostgreSQL, RabbitMQ, ElasticSearch and now AmazonEC2.  It ensures that if an external resource fails with a temporary problem, Socorro doesn't fail, too.
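Socorro itself is written in Python, so the following is only an illustrative JavaScript sketch of the backing-off retry idea, not the actual TransactionExecutor:

function executeWithRetry(operation, retries, delayMs) {
  // operation is a function returning a Promise, e.g. a call to storage.
  return operation().catch(function (err) {
    if (retries <= 0) {
      throw err; // out of attempts: let the failure propagate
    }
    return new Promise(function (resolve) {
      setTimeout(resolve, delayMs);
    }).then(function () {
      // Double the delay each time, backing off while the resource recovers.
      return executeWithRetry(operation, retries - 1, delayMs * 2);
    });
  });
}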

Periodically, HBase would become unavailable. The Socorro system, detecting the problem, would back down, biding its time while waiting for the failed resource to recover.  Eventually, after probing the failed resource, Socorro detects recovery and picks up where it left off.

Over the years, we realized that one of the major features that originally attracted us to HBase was not giving us the payoff that we had hoped.  We just weren't using the MapReduce capabilities and found the HBase maintenance costs were not worth the expense.

Thus came the decision that we were to migrate away.  Initially, we considered moving to Ceph and began a Ceph implementation of what we call our CrashStorage API.

Every external resource in Socorro lives encapsulated in a class that implements the Crash Storage API.  Using the Python package Configman, crash storage classes can be loaded at run time, giving us a plugin interface.  Ceph turned out to be a bust when the winds of change directed us to move to AmazonS3. Because we implemented the CrashStorage API using the Boto library, we were able to reuse the code.
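As a rough sketch of what such a plugin interface can look like (simplified and hypothetical; the real CrashStorage API has more methods, and these names are guesses for illustration):

    class CrashStorageBase(object):
        # Every storage backend implements the same small contract.
        def save_raw_crash(self, raw_crash, dumps, crash_id):
            raise NotImplementedError

        def get_raw_crash(self, crash_id):
            raise NotImplementedError

    class BotoCrashStorage(CrashStorageBase):
        # Hypothetical S3-backed implementation. Because it honors the
        # same contract as the HBase class, configuration alone decides
        # where crashes are stored.
        def save_raw_crash(self, raw_crash, dumps, crash_id):
            pass  # write the crash to S3 via the Boto library

        def get_raw_crash(self, crash_id):
            pass  # read the crash back from S3 via the Boto library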

Then began the migration.  Rather than just flipping a switch, our migration was gradual.  We started 2014 with HBase as primary storage:


Then, in December, we started running HBase and AmazonS3 together.   We added the new AmazonS3 CrashStorage classes to the Configman managed Socorro INI files.  While we likely restarted the Socorro services, we could have just sent SIGHUP, prompting them to reread their config files, load the new Crash Storage modules and continue running as if nothing had happened.



After most of a month, and completing a migration of old data from HBase to  Amazon, we were ready to cut HBase loose.

I was amused by the non-event of the severing of Thrift from Socorro.  Again, it was a matter of editing HBase out of the configuration and sending a SIGHUP, causing HBase to fall silent.  Socorro didn't care.  Announced several hours later on the Socorro mailing list, it seemed more like a footnote than an announcement: "oh, by the way, HBase is gone".



Oh, the migration wasn't completely perfect; there were some glitches.  Most of those were from minor cron jobs that were used for special purposes and had been inadvertently neglected.

The primary datastore migration is not the end of the road.  We still have to move the server processes themselves to the Amazon system.  Because everything is captured in the Socorro configuration, however, we do not anticipate that this will be an onerous process.

I am quite proud of the success of Socorro's modular design.  I think we programmers only ever really just shuffle complexity around from one place to another.  In my design of Socorro's crash storage system, I have swung a pendulum far to one side, moving the complexity into the configuration.  That has disadvantages.  However, in a system that has to rapidly evolve to changing demands and changing environments, we've just demonstrated a spectacular success.

Credit where credit is due: Rob Helmer spearheaded this migration as the DevOps lead.  He pressed the buttons and reworked the configuration files.  Credit also goes to Selena Deckelmann, who led the way to Boto for Ceph, which gave us Boto for Amazon.  Her contribution in writing the Boto CrashStorage class was invaluable.  Me?  While I wrote most of the Boto CrashStorage class and I'm responsible for the overall design, I was able to mainly just be a witness to this migration.  Kind of like watching my children earn great success, I'm proud of the Socorro team and look forward to the next evolutionary steps for Socorro.

Marco ZeheBlog change: Now using encrypted connections

This is just a quick note to let you all know that this blog has switched over to using encrypted connections. The URLs (web site addresses) are now redirected to their encrypted counterparts, starting with https instead of http. For links to posts you may have bookmarked, it means that they’ll be automatically redirected to their encrypted counterparts, too, so you don’t need to do anything, and permalinks will still work.

For you, this means two main things:

First, you can check in your browser’s address bar that this is indeed my blog you’re on, and not some fraudulent site which may have copied my content.

Second, when you comment, the data you send to my blog is now encrypted in transit, so your e-mail address, which you may not want everybody to see, is no longer readable by anyone sitting on the sidelines of the internet.

This is my contribution to making encrypted communication over the internet the norm rather than the exception. The more people do it, the less likely it is that one becomes a suspect for some security agencies just because one uses encryption.

Please let me know should you run into any problems!

Hannah KaneTeach.webmaker.org: Initial Card Sorting Results

This past week I conducted a small user research project to help inform the IA of the new teach.webmaker.org site.

I chose a card sorting activity, which is a common research method for IA projects. In a card sorting activity, you give members of your target audience a stack of cards, each of which has one of the site content areas printed on it. You ask the participants to group items together and explain their thought process. In this way, you gain an understanding of the participants’ mental models. This is helpful for avoiding a common pitfall in site design: organizing content in a way that makes sense to you but not to your users.

Big Giant Caveat

This study was flawed in a couple of ways. First, Jakob Nielsen (who is generally considered to be a real smartypants when it comes to usability and user research) recommends that you do card sorting with 15 users. I’ve only been able to get 11 to do the activity so far, though I think a few more are pending.

Another flaw is that I deviated from a common best practice of running these activities in person. A lot of the insights are gained by listening to the person think aloud. There are some tools for running an online card sorting activity, but they’re largely for what’s called “closed” card sorts, where you pre-determine the categories and the person’s task is to sort cards within those categories. Since one of my goals with this activity was to generate a better understanding of what terminology to use, I wanted to do an “open” sort, where the participants name their groupings themselves.

All that’s to say that we shouldn’t take these results or my analysis as gospel. I do think the participant responses will be useful as we move forward with designing some wireframes to user test in the next heartbeat.

Participant Demographics and Background Information

There were a range of ages and locations represented in the study.

Four participants are between 18 and 24 years old, three are between 25 and 34, two between 35 and 44, one between 45 and 54, and one between 55 and 64.

Four participants are from the United States, three from India, and one each from Colombia, Bangladesh, Canada, and the United Kingdom.

Participants were asked to rate their level of familiarity with the Webmaker Mentors program on a scale of 1 to 5, with 5 being the most familiar. Again, there was a range. Four participants rated themselves a 5, two a 4 or 4.5, two a 3, one a 2, and two a 1.

Initial Findings

The participants in the study had a range of different mental models they used to organize the content. Those models were:

  1. Grouping by program offering—that is, organizing by specific programs, concepts, or offerings, typically expressed as nouns (e.g. Web Literacy, Teaching Kits, Webmaker Clubs, Trainings, Activities, Resources, Social, Learning, Philosophy, Mentoring, Research, Events, Supportive Team). Five participants used a model like this as their primary model. The average familiarity level with Webmaker Mentoring for these participants matches the average for the entire sample (3.7 on a five-point scale).
  2. Grouping by functional area—that is, actions that a user might take, typically expressed as verbs (e.g. participate, learn, market/promote, meet others, do, lead, get involved, collaborate, organize, develop yourself, teach, experiment, host, attend). Four participants used a model like this as their primary model. Notably, all of these participants are from the United States, Canada, or the United Kingdom, and their average familiarity with Webmaker Mentoring is below the average of the entire sample (2.75 as compared to 3.67).
  3. Grouping by role or identity—some study participants organized the content by the type of user who would be interested in it (e.g. Learner, Mentor). One participant used this as their primary model. Another made a distinction between Learning and Teaching, but it was framed more like the functional areas described above. One more used “Learning Geeks” as a topic area.
  4. Level of expertise—in this model, there is a pathway through the content based on level of expertise (e.g. intermediate, advanced). One participant used this as their primary model.

Other patterns, themes, and notable terminology:

  • Seven participants grouped together content related to hosting or attending events, and three participants made references to face-to-face communication. Of the seven who grouped content into the “Events” topic, five of them included the one item that referenced “Maker Party” (including two participants who rated their level of familiarity with the program at a 1), indicating a strong understanding of “Maker Party” as a type of event.
  • Five participants made references to the broader community. Three of them are from the United States, one is from Canada, and one is from India. (The specific terms used were “Meet others,” “Social,” “Webmaker Community,” “Collaborate,” and “Supportive team”.)
  • Four participants used the word “Webmaker” in their groupings, which gives us some insight into how they understand the brand. In each case, participants seem to connect the term to either teaching and teaching kits, or to the community of interested people.
  • Three participants used the term “Leading.”
  • One participant referenced a particular context (“Webmaker for Schools”).
  • One participant distinguished Mozilla-produced content (as “Mozilla Outputs”).
  • We included the term “Peer Learning Networks” in the content list to represent Hives (we assumed the meaning of “Hive” would be difficult to intuit for those unfamiliar). While we can’t draw any conclusions based on this data, it’s notable that this term was grouped into a wide variety of topics, including community (“Meet others,” “Social,” and “Collaborate”), “Get Involved,” “Intermediate,” “Mozilla Outputs,” and “Learning Geeks.” Three participants felt it didn’t fit under any category.
  • We tested both “Professional Development” and “Trainings” to see if we could understand how people interpret those terms. The results are fairly ambiguous. Both terms were associated with “Activities for teachers & mentors,” “Leading,” “Get Involved,” and “Research (things you learn on your own).” “Professional Development” was also associated with “Learning,” “Develop Yourself,” and “Learning Geeks”. “Trainings” was associated with “Intermediate,” “Mentoring,” “Organize in person events,” and “Supportive team.” Three participants could not categorize either term.

Let me know if you’re interested in seeing the raw data.


Jess KleinEYE Witness News: Promotional Content on Webmaker

On January 28th Mozilla will be celebrating Data Privacy Day. This is an international effort centered on "Respecting Privacy, Safeguarding Data and Enabling Trust." There will be content on Mozilla, Webmaker and Mozilla Advocacy. The Webmaker team had previously developed privacy content with the Private Eye activity (featuring the Lightbeam add-on), so the primary challenge here was how to promote that content via the Webmaker splash page. This is actually a two-fold design opportunity:

1. micro: how might we promote the unique Privacy Day content on the splash page for the 28th?

2. macro: how might we incorporate promotional interest-based content into the real estate on the Webmaker splash page on an ongoing basis?

Constraints: needs to be conceived, designed and implemented within 2 weeks.

Start from the beginning 



I took a look at the current splash page. The content that we are promoting is directly connected to the Mozilla mission, so I identified a sliver of space directly above the section where we state the project's values. My thinking here is that we are creating a three-tier hierarchy of values on the page: 1) we are webmaker - we are all about making - and this is what you can do right this second to get started, 2) we are deeply concerned about [privacy] - and this is what you can do right now to dive into that topic, and 3) we are more than just making + [privacy] - here are all the things that we value.

I SEE what you did there

That sliver was great, but it was below the non-existent but deeply considered fold of the page. If this were a painting, I would create a repoussoir element to bring the user's attention to the core content by framing the edge. In the painting below, you can see the tree branch that directs your attention directly into the heart of the composition.



Building off of my thinking from designing the Mozilla snippet and the onboarding UX, I wanted to make this repoussoir element something that a user might find quirky, whimsical or relatable. All of the other elements on the page were expected, kind of standard elements for a webpage. I needed to create something that would be subtle yet attention-grabbing. Looking at the subject of privacy, I immediately had associations with corporations and individuals big-brothering me as I visited web pages. I realized that the activity we were directing users to was called Private Eye, and this led me to create a small asset that features an eyeball that follows your cursor around as you explore the splash page. On hover it will flip and direct you to the activity. This worked for desktop, but for mobile we would have to simulate the action by having a simple CSS eyeball animation center-aligned on the sliver. Major props here go out to Aki, who had to invoke the Pythagorean theorem to get the eye to follow the cursor without leaving the sclera.
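The geometry behind that is easy to sketch (this is just the clamping idea, not Aki's actual code; the function name and max_offset parameter are illustrative):

    import math

    def pupil_offset(eye_x, eye_y, cursor_x, cursor_y, max_offset):
        # Vector from the center of the eye to the cursor.
        dx = cursor_x - eye_x
        dy = cursor_y - eye_y
        # Pythagorean theorem: straight-line distance to the cursor.
        distance = math.hypot(dx, dy)
        if distance <= max_offset:
            return dx, dy
        # Too far: shrink the vector so the pupil stops at the sclera's edge.
        scale = max_offset / distance
        return dx * scale, dy * scale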



I did a study of eyeballs on redpen and immediately got a ton of community and staff feedback, which told me two things: 1. it was a conversation topic, and 2. people liked the very first eyeball that I drew.




    Air MozillaWebdev Beer and Tell: January 2015

    Webdev Beer and Tell: January 2015 Web developers across the Mozilla community get together (in person and virtually) to share what side projects or cool stuff we've been working on.

    Dave TownsendWelcome the new Toolkit peers

    I have been a little lax in my duty of keeping the list of peers for Toolkit up to date and there have been a few notable exceptions. Thankfully we’re good about disregarding rules when it makes sense to and so many people who should be peers have already been doing reviews. Of course that’s no use to new contributors trying to figure out who should review what so I am grateful to someone who prodded me into updating the list.

    As I was doing so I came to the conclusion that there is a lot of overlap between Firefox code and Toolkit code. Lots of patches touch both at the same time and it often doesn’t make a lot of sense to require different reviewers there. I also couldn’t think of a reason why someone would be a trusted reviewer of Firefox code and yet not be trusted to review Toolkit code. Looking at the differences in the lists of peers confirmed that all those missing really should be Toolkit peers too.

    So going forwards I have decided that Firefox peers will automatically be considered to be Toolkit peers. That means I can announce a whole bunch of new people who are now Toolkit peers, please congratulate them in the usual way, by flooding their review queue:

    • Ehsan Akhgari
    • Mike de Boer
    • Mike Conley
    • Georg Fritzsche
    • Mark Hammond
    • Felipe Gomes
    • Gijs Kruitbosch
    • Florian Quèze
    • Tim Taubert

    You might ask if the reverse should hold true: should all Toolkit peers be Firefox peers? I.e., should we just merge the lists? I leave that to the Firefox owner to decide, but I will say that there are a few pieces of Toolkit that are very much not front-end, and so in some cases I could see a reviewer for that area not needing to be listed in the Firefox list - not because they wouldn't be trusted to turn down the patches they couldn't review, but just because there would be almost no patches in their area in Firefox. Maybe that case is so rare that it isn't worth the hassle of two lists though.

    Giorgio MaoneBoth Your Cheeks

    Pope Punch

    Dear pope Francis,

    Thank you for this chance to punch your face (both cheeks, the way you christians enjoy best) because your organization routinely defames and insults His Majesty Satan.

    Sincerely,
    Your friendly neighbourhood satanist

    P.S.: a very good article about this from The Guardian.

    P.P.S.: Yes, I think free thinking, free speech and censorship are very relevant to the Open Web.

    Mozilla Release Management TeamFirefox 36 in beta

    Firefox 36 (Desktop and Mobile) is now available on the beta channel.

    The release notes are published on the Mozilla website:

    This version introduces many new HTML5/CSS features, in particular the Media Source Extensions (MSE) API, which allows native HTML5 playback on YouTube. The new preferences implementation is also enabled for the first half of the beta cycle; please help us test this new feature!

    On the mobile version of Firefox, we are also shipping the new Tablet user interface!

    Download this new version:

    And as usual, please report any issues.

    Roberto A. VitilloNext-gen Data Analysis Framework for Telemetry

    The easier it is to get answers, the more questions will be asked

    In that spirit, Mark Reid and I have been working for a while now on a new analysis infrastructure to make it as easy as possible for engineers to get answers to data-related questions.

    Our shiny new analysis infrastructure is based primarily on IPython and Spark. I have blogged about Spark before, and I even gave a short tutorial on it at our last workweek in Portland (slides and tutorial); IPython might be something you are not familiar with unless you have a background in science. In a nutshell, it’s a browser-based notebook with support for code, text, mathematical expressions, inline plots and other rich media.

    An IPython notebook in all its glory

    The combination of IPython and Spark allows you to write data analyses interactively from a browser and seamlessly parallelize them over multiple machines, thanks to a rich API with over 80 distributed operators! It’s a huge leap forward in terms of productivity compared to traditional batch-oriented map-reduce frameworks. An IPython notebook contains both the code and the product of the execution of that code, like plots. Once executed, a notebook can simply be serialized and uploaded to Github. Then, thanks to nbviewer, it can be visualized and shared among colleagues.

    In fact, the issue with sharing just the end product of an analysis is that it’s all too easy for bugs to creep in or for wrong assumptions to be made. If your end result is a plot, how do you test it? How do you know that what you are looking at actually reflects the truth? Having the code side by side with its evaluation allows more people to inspect it and streamlines the review process.

    This is what you need to do to start your IPython backed Spark cluster with access to Telemetry data:

    1. Visit the analysis provisioning dashboard at telemetry-dash.mozilla.org and sign in using Persona with an @mozilla.com email address.
    2. Click “Launch an ad-hoc Spark cluster”.
    3. Enter some details:
      • The “Cluster Name” field should be a short descriptive name, like “chromehangs analysis”.
      • Set the number of workers for the cluster. Please keep in mind to use resources sparingly; use a single worker to write and debug your job.
      • Upload your SSH public key
    4. Click “Submit”.
    5. A cluster will be launched on AWS preconfigured with Spark, IPython and some handy data analysis libraries like pandas and matplotlib.

    Once the cluster is ready, you can tunnel IPython through SSH by following the instructions on the dashboard, e.g.:

    ssh -i my-private-key -L 8888:localhost:8888 hadoop@ec2-54-70-129-221.us-west-2.compute.amazonaws.com
    

    Finally, you can launch IPython in Firefox by visiting http://localhost:8888.

    Now what? Glad you asked. In your notebook listing you will see a Hello World notebook. It’s a very simple analysis that produces the distribution of startup times faceted by operating system for a small fraction of Telemetry submissions; let’s quickly review it here.

    We start by importing a telemetry utility to fetch pings and some commonly needed libraries for analysis: a json parser, numpy, pandas and matplotlib.

    import ujson as json
    import matplotlib.pyplot as plt
    import numpy as np
    import pandas as pd
    from moztelemetry.spark import get_pings
    

    To execute a block of code in IPython, aka a cell, press Shift-Enter. While a cell is being executed, a gray circle will appear in the upper right border of the notebook. When the circle is full, your code is being executed by the IPython kernel; when only the border of the circle is visible, the kernel is idle and waiting for commands.

    Spark exploits parallelism across all cores of your cluster. To see the degree of parallelism you have at your disposal, simply evaluate:

    sc.defaultParallelism
    
    

    Now, let’s fetch a set of telemetry submissions and load them into an RDD using the get_pings utility function from the moztelemetry library:

    pings = get_pings(sc,
                      appName="Firefox",
                      channel="nightly",
                      version="*",
                      buildid="*",
                      submission_date="20141208",
                      fraction=0.1)
    

    That’s pretty much self-documenting. The fraction parameter, which defaults to 1, selects a random subset of the submissions. This comes in handy when you first write your analysis and don’t need to load lots of data to test and debug it.

    Note that both the buildid and submission_date parameters also accept a tuple specifying, inclusively, a range of dates, e.g.:

    pings = get_pings(sc,
                      appName="Firefox",
                      channel="nightly",
                      version="*",
                      buildid=("20141201", "20141202"),
                      submission_date=("20141202", "20141208"))
    

    Let’s do something with those pings. Since we are interested in the distribution of the startup time of Firefox faceted by operating system, let’s extract the needed fields from our submissions:

    def extract(ping):
        ping = json.loads(ping)
        os = ping["info"]["OS"]
        startup = ping["simpleMeasurements"].get("firstPaint", -1)
        return (os, startup)
    
    cached = pings.map(lambda ping: extract(ping)).filter(lambda p: p[1] > 0).cache()
    

    As the Python API closely matches the Scala one, I suggest having a look at my older Spark tutorial if you are not familiar with Spark. Another good resource is the set of hands-on exercises from AMP Camp 4.

    Now, let’s collect the results back and stuff them into a pandas DataFrame. This is a very common pattern: once you reduce your dataset to a manageable size with Spark, you collect it back on your driver (aka the master machine) and finalize your analysis with statistical tests, plots and whatnot.

    grouped = cached.groupByKey().collectAsMap()
    
    frame = pd.DataFrame({x: np.log(pd.Series(list(y))) for x, y in grouped.items()})
    frame.boxplot()
    plt.ylabel("log(firstPaint)")
    plt.show()
    
    
    Startup distribution by OS

    Finally, you can save the notebook, upload it to Github or Bugzilla and visualize it on nbviewer; it’s that simple. Here is the nbviewer-powered Hello World notebook. I warmly suggest that you open a bug report on Bugzilla for your custom Telemetry analysis and ask me or Vladan Djeric to review it. Mozilla has been doing code reviews for years and with good reason; why should data analyses be different?

    Congrats, you just completed your first Spark analysis with IPython! If you need any help with your custom job feel free to drop me a line in #telemetry.


    Mark FinkleFirefox for Android: What’s New in v35

    The latest release of Firefox for Android is filled with new features, designed to work with the way you use your mobile device.

    Search

    Search is the most common reason people use a browser on mobile devices. To help make it easier to search using Firefox, we created the standalone Search application. We have put the features of Firefox’s search system into an activity that can more easily be accessed. You no longer need to launch the full browser to start a search.

    When you want to start a search, use the new Firefox Widget from the Android home screen, or use the “swipe up” gesture on the Android home button, which is available on devices with software home buttons.


    Once it’s open, just start typing your search. You’ll see your search history, and get search suggestions as you type.


    The search results are displayed in the same activity, but tapping on any of the results will load the page in Firefox.


    Your search history is shared between the Search and Firefox applications. You have access to the same search engines as in Firefox itself. Switching search engines is easy.

    Sharing

    Another cool feature is the Sharing overlay. This feature grew out of the desire to make Firefox work with the way you use mobile devices. Instead of forcing you to switch away from applications when sharing, Firefox gives you a simple overlay with some sharing actions, without leaving the current application.


    You can add the link to your bookmarks or reading list. You can also send the link to a different device, via Firefox Sync. Once the action is complete, you’re back in the application. If you want to open the link, you can tap the Firefox logo to open the link in Firefox itself.

    Synced Tabs

    Firefox Sync makes it easy to access your Firefox data across your different devices, including getting to the browser tabs you have open elsewhere. We have a new Synced Tabs panel available in the Home page that lets you easily access open tabs on other devices, making it simple to pick up where you left off.

    Long-tap an item to easily add a bookmark or share to another application. You can expand/collapse the device lists to manage the view. You can even long-tap a device and hide it so you won’t see it again!


    Improved Error Pages

    No one is happy when an error page appears, but in the latest version of Firefox the error pages try to be a bit more helpful. The page will look for WiFi problems and also allow you to quickly search for a problematic address.


    Matjaž HorvatPontoon report 2014: Get involved

    This is the last in a series of blog posts outlining Pontoon development in 2014. I’ll mostly focus on new features targeting translators. If you’re more interested in developer oriented updates, please have a look at the release notes.

    Part 1. User interface
    Part 2. Backend
    Part 3. Meet our top contributors
    Part 4. Make your translations better
    Part 5. Get involved (you are here)

    Over the past years, Pontoon has come a long way from an idea to a prototype to a working product. As of today, there are a dozen Mozilla projects available for localization in Pontoon. If you want to move it even further, there are plenty of ways to do so.

    For localizers
    Start learning how things work by looking at the new Pontoon homepage, which is also used as a demo project to be translated using Pontoon. Perhaps you can translate it into your native language. You can also learn more advanced features.

    For developers
    Making your website or web application localizable with Pontoon is quick and easy. A simple script needs to be added and you are halfway through. Follow implementation instructions for more details.

    Take action
    Do you have ideas for improvement? Are you a developer? Learn how to get your hands dirty. It has never been easier to set up a development environment and start contributing. We’re on GitHub.

    Mozilla Reps CommunityReps Weekly Call – January 15th 2015

    Last Thursday we had our weekly call about the Reps program, where we talk about what’s going on in the program and what Reps have been doing during the last week.


    Summary

    • Data Privacy Day.
    • Hello Campaign.
    • Womoz Update.
    • Event metrics challenges update.
    • Mozlandia videos.
    • How can we make reports easier?
    • Reps and schools.

    Detailed notes

    AirMozilla video

    Don’t forget to comment about this call on Discourse and we hope to see you next week!

    Christian HeilmannYou’re a spokesperson, why do you talk about things breaking?

    Every once in a while you will find someone saying something “bad” about a product of the company they work for. This could be employees or – god forbid – even official spokespeople.

    silence, please

    It happens to me, too, for example when my browser crashes on me. The inevitable direct response to this is most of the time some tweet in the style of:

    Should a spokesperson of $company talk badly about it? Think about the followers you have and what that means for the people who worked on the product!

    It is a knee-jerk reaction making a lot of assumptions:

    • that the person is not rooting for the team,
    • that the person is abusing his or her reach,
    • that the intentions are to harm with this,
    • that criticising a product means criticising the company and
    • that the person has no respect for his or her colleagues.

    Or, that they are bad at their job and cause a lot of damage without meaning to and should be chastised by some other person on Twitter.

    All these could be valid points, had the person mentioned something in a terrible way or without context. It is – of course – bad style and not professional for any employee to speak ill of their employer or its products publicly.

    However, things go wrong and things break, and whether you are a professional spokesperson or not, it is simply honest to mention that. It also raises the question of what is better: helping the team that built a product fix an obvious issue by owning the fixing process, or waiting till someone else finds it? The latter means you’ll have a much shorter time to fix it.

    It is ironic that an audience who hates sales pitches and advertisement is complaining when an official advocate of something points out a flaw.

    It all comes down to how you mention an issue. You can cause a lot of good by mentioning an issue. Or you could cause a lot of problems.

    How to report a failure and make it useful

    Things you do by mentioning a fault:

    • You admit that things go wrong for you, too. That makes you a user of your products, not a salesperson (or shill, really)
    • You mention the fault before somebody else does. This puts you in the driver’s seat. Instead of reacting to criticism, you advertise that you are aware of the issue and that you are looking into it. It is better when you find a flaw than when the competition does.
    • You show that you are a user of the product. There is nothing worse than a spokesperson who only listens to what the marketing team talks about or who starts believing exclusively in their own “feel good” messages about a product. You need to use the product to be able to talk about it. And this means that you inevitably will find problems.
    • You stay approachable and honest. Things go wrong for all of us – you are no exception.

    Of course, just complaining is bad form. To make your criticism something useful, you should do more:

    • Be detailed about your environment. Did you use a developer edition of your product? What’s your setup? When did the thing go wrong?
    • Stick to one thing that goes wrong. “Browser $x is unstable” is a bad message, “$x just crashed on me when trying to play this video/game” is an OK one.
    • You should report the problem internally. In the best case, this should happen before you mention it. You can then follow up your public criticism with a report on how the issue is being dealt with. This step is crucial, and in many cases you will already find a reason why something is broken. You can then mention the issue and the solution at the same time. This is powerful – people like solutions.
    • Investigate what happened. Other people might run into the same issue and there is nothing more powerful than a post somewhere on how to fix an issue. Don’t let the thing just lie and be broken. And don’t let people come up with quick fixes or workarounds that might prove to be harmful in the long run.
    • Deal with the feedback. People fixing the issue shouldn’t have this as an extra burden. This is where your job as a spokesperson comes in: deal with feedback in a grown-up fashion and keep people updated when things get fixed or more information is unearthed why something happens.

    It is very tempting to just vent when something goes wrong. This is not good. Count to ten and consider the steps above first. I am not saying that you shouldn’t report things that annoy you. On the contrary, it is part of your job to do that as it shows that you care about the product. It makes a lot of sense though to turn your gripes into actions.

    When not to mention an issue

    There are times though when you should not mention an issue. Not many, but there are. It mostly boils down to who will suffer by you mentioning the problem.

    • Don’t punish your users. It is a bad idea to publicly talk about a security flaw that would endanger your users. That needs immediate fixing, and any public disclosure just makes it harder to fix the problem. It also is a feast for the tech press. People love a security drama, and you and your press people will have to deal with a lot of half-truths and hyperbole by the press. You don’t want a bug to tarnish the trust in your company as a whole, and this is what happens with premature security issue reports and the inevitable spin the press is wont to give them.
    • Don’t report without knowing who can fix the issue. Investigate who is responsible and give them a heads up. Failing this will cause massive bad blood in the company, and you don’t want to have to deal with public feedback and internal grumblings and mistrust at the same time. A scorned developer is not one that will do things for you or help fix the issue. They are much more likely to join the public conversation and strongly disagree with you and other critics. Be the person who helps fix an issue by showing your colleagues in a light that makes clear they deal with problems swiftly and professionally. Don’t throw blame into the unknown.
    • Don’t report your own faults as problems. You might have a setup that is very unique and causes issues. Make sure you can reproduce the issue in several environments and not just one setting in a certain environment. Make sure you used the product correctly. If you didn’t, write about how you used it wrongly to avoid other false reports of bugs.

    Be aware about the effects you have

    Reporting bad things happening without causing internal and external issues requires good communication skills. The most important part is keeping everyone involved in the loop and being very open about the fixing process. If you can’t be sure that things will get fixed, it might not be worth your while to report them publicly. It would be a kind of blackmail or blame game you cannot turn into something useful. Instead, be prepared to respond when others find the problem – as inevitably they will.

    Stay honest and open and there is no problem with reporting flaws.

    Photo Credit: martins.nunomiguel via Compfight cc

    Gregory SzorcBugzilla and the Future of Firefox Development

    Bugzilla has played a major role in the Firefox development process for over 15 years. With upcoming changes to how code changes to Firefox are submitted and reviewed, I think it is time to revisit the central role of Bugzilla and bugs in the Firefox development process. I know this is a contentious thing to say. Please, gather your breath, and calmly read on as I explain why I believe this.

    The current Firefox change process defaults to requiring a Bugzilla bug for everything. It is rare (and from my experience frowned upon) when a commit to Firefox doesn't reference a bug number. We've essentially made Bugzilla and a bug prerequisites for changing anything in the Firefox version control repository. For the remainder of this post, I'm going to say that we require a bug for any change, even though that statement isn't technically accurate. Also, when I say Bugzilla, I mean bugzilla.mozilla.org, not the generic project.

    Before I go on, let's boil the Firefox change process down to basics.

    At the heart of any change to the Firefox source repository is a diff. The diff (a representation of the differences between a set of files) is the smallest piece of data necessary to represent a change to the Firefox code. I argue that anything more than the vanilla diff is overhead and could contribute to process debt. Now, there is some essential overhead. Version control tools supplement diffs with metadata, such as the author, commit message, and date. Mozilla has also instituted a near-mandatory code review policy, where changes need to be signed off by a set of trusted individuals. I view both of these additions to the vanilla diff as essential for Firefox development and non-negotiable. Therefore, the bare minimum requirements for changing Firefox code are a diff plus metadata (a commit/patch) and (almost always) a review/sign-off. That's it. Notably absent from this list is a Bugzilla bug. I argue that a bug is not strictly required to change Firefox. Instead, we've instituted a near-universal policy that we should have bugs. We've chosen to add overhead and process debt - interaction with Bugzilla - to our Firefox change process.

    Now, this choice to require all changes be associated with bugs has its merits. Bugs provide excellent anchor points for historical context and for additional information after the change has been committed and is forever set in stone in the repository (commits are immutable in Mercurial and Git and you can't easily attach metadata to the commit after the fact). Bugs are great to track relationships between different problems or units of work. Bugs can even be used to track progress towards a large feature. Bugzilla components also provide a decent mechanism to follow related activity. There's also a lot of tooling and familiar process standing on top of the Bugzilla platform. There's a lot to love here and I don't want diminish the importance of all these things.

    When I look to the future, I see a world where the current, central role of Bugzilla and bugs as part of the Firefox change process begin to wane. I see a world where the benefits to maintaining our current Bugzilla-centric workflow start to erode and the cost of maintaining it becomes higher and harder to justify. You actually don't have to look too far into the future: that world is already here and I've already started to feel the pains of it.

    A few days ago, I blogged about GitHub and its code first approach to change. That post was spun off from an early draft of this post (as were the posts about Firefox contribution debt and utilizing GitHub for Firefox development). I wanted to introduce the concept of code first because it is central to my justification for changing how we do things. In summary, code first capitalizes on the fact that any change to software involves code and therefore puts code front and center in the change process. (In hindsight, I probably should have used the term code centric, because that's how I want people to think about things.) So how does code first relate to Bugzilla and Firefox development?

    Historically, code review has occurred in Bugzilla: upload a patch to Bugzilla, ask for review, and someone will look at it. And, since practically every change to Firefox requires review, you need a bug in Bugzilla to contain that review. Thus, one way to view a bug is as a vehicle for code review. Not every bug is just a code review, of course. But a good number of them are.

    The only constant is change. And the way Mozilla conducts code review for changes to Firefox (and other projects) is changing. We now have MozReview, a code review tool that is not Bugzilla. If we start accepting GitHub pull requests, we may perform reviews exclusively on GitHub, another tool that is not Bugzilla.

    (Before I go on, I want to quickly point out that MozReview is nowhere close to its final form. Parts of MozReview are pretty bad right now. The maintainers all know this and we have plans to fix it. We'll be in Toronto all of next week working on it. If you don't think you'll ever use it because parts are bad today, I ask you to withhold judgement for a few more months.)

    In case you were wondering, the question of whether Bugzilla should always be used for code review for Firefox has been answered, and that answer is no. People, including maintainers of Bugzilla, realized that better-than-Splinter/Bugzilla code review tools exist and that the time spent developing Bugzilla/Splinter into a best-in-class code review tool would be better spent integrating Bugzilla with an existing tool. This is why we now have a Review Board based code review tool - MozReview - integrated with Bugzilla. If you care about code quality and more powerful workflows, you should be rejoicing at this, because the implementation of code review in Bugzilla does not produce optimal outcomes.

    The world we're moving to is one where code review occurs outside of Bugzilla. This raises an important question: if Bugzilla was being used primarily as a vehicle for code review, what benefit and/or role should Bugzilla play when code review is conducted outside of Bugzilla?

    I posit that there are a class of bugs that won't need to exist going forward because bugs will provide little to no value. Put another way, I believe that a growing number of commits to the Firefox repository won't reference bugs.

    Come with me on a journey to the future.

    MozReview is purposefully being designed in a code and repository centric way. To initiate the formal process for considering a change to code, you push to a Mercurial (or Git!) repository. This could be directly to Mozilla's review repository. If I have my way, this could even be kicked off by submitting a pull request on GitHub or Bitbucket. No Bugzilla attachment uploading here: our systems talk in terms of repositories and commits. Again, this is by design: we don't want submitting code to Mozilla to be any harder than hg push or git push so as to not introduce process debt. If you have code, you'll be able to send it to us.

    In the near future, MozReview will stop cross-posting detailed review updates to Bugzilla. Instead, we'll use Review Board's e-mail feature to send its flavor of emails. These will have rich HTML content (or plain text if you insist) and will provide a better experience than Bugzilla ever will. We'll adopt the model of tools like Phabricator and GitHub and only post summaries or links of activity, not full content, to bugs. You may be familiar with the concept as applied to the web: it's called hyperlinking.

    Work is being invested into Autoland. Autoland is an automated landing queue that pushes/lands commits semi-automatically once they are ready (have review, pass automation, etc). Think of Autoland as a bot that does all the labor intensive and menial actions around pushing that you do now. I believe Autoland will eventually handle near 100% of pushes to the Firefox repository. And, if I have my way, Autoland will result in the abolishment of integration branches and merge commits in the Firefox repository. Good riddance.

    MozReview and Autoland will be highly integrated. MozReview will be the primary user interface for interacting with Autoland. (Some of this should be in place by the end of the quarter.)

    In this world, MozReview and its underlying version control repositories essentially become a database of all submitted, pending, and discarded commits to Firefox. The metaphorical primary keys of this database are not bug numbers: they are code/commits. (Code first!) Some of the flags stored in this database tell Autoland what it should do. And the MozReview user interface (and API) provide a mechanism into controlling those flags.

    Landing a change in Firefox will be initiated by a simple action such as clicking a checkbox in MozReview. (That could even be the Grant Review checkbox.) Commits cleared for landing will be picked up by Autoland and eventually automatically pushed to the Firefox repository (assuming the build and test automation is happy, of course). Once Autoland takes control, humans are just passengers. We won't be bothered with menial tasks like updating the commit message to reflect a review was performed: this will happen automatically inside MozReview or Autoland. (Although, there's a chance we may adopt some PGP-based signing to more strongly convey review for some code changes in order to facilitate stronger auditing and trust guarantees. Stay tuned.) Likewise, if a commit becomes associated with a bug, we can add that metadata to the commit before it is landed, no human involvement necessary beyond specifying the link in the MozReview web UI (or API). Autoland/MozReview will close review requests and/or bugs automatically. (Are you excited about performing less work yet?)

    When commits are added to MozReview, MozReview will read metadata from the repository they came from to automatically determine an appropriate reviewer. (We plan to leverage moz.build files for this in the case of Firefox.) This should eliminate a lot of process debt around choosing a reviewer. Similar metadata will also be used to determine what Bugzilla component a change is related to, static analysis rules to use to critique the physical structure of the change, and even automation jobs that should be executed given the set of files that changed. The use of this metadata will erode significant process debt around the change contribution workflow.

    As commits are pushed into MozReview/Autoland, the systems will be intelligent about automatically tracking dependencies and facilitating complex development workflows that people run into on a daily basis.

    If I create a commit on top of someone else's commit that hasn't been checked in yet, MozReview will detect the dependency between my changes and the parent ones. This is an advantage of being code first: by interfacing with repositories rather than patch files, you have an explicit dependency graph embedded in the repository commit DAG that can be used to aid machines in their activities.

    It will also be possible to partially land a series of commits. If I get review on the first 5 of 10 commits but things stall on commit 6, I can ask Autoland to land the already-reviewed commits so they don't get bit rotted and so you have partial progress (psychological studies show that a partial reward for work results in greater happiness through a sense of accomplishment).

    Since initiating actions in MozReview is lightweight (just hg push), itch scratching is encouraged. I don't know about you, but in the course of working on the Firefox code base, I frequently find myself wanting to make small, 15-30s changes to fix something really minor. In today's world, the overhead for these small changes is often high. I need to upload a separate patch to Bugzilla. Sometimes I even need to create a new bug to hold that patch. If that patch depends on other work I did, I need to set up bug dependencies and then worry about landing everything in the right order. All of a sudden, the overhead isn't worth it and my positive intentions go unacted on. Multiplied by hundreds of developers over many years, and you can imagine the effect on software quality. With MozReview, the overhead for itch scratching like this is minor. Just make a small commit, push, and the system will sort everything out. (These small commits are where I think a bugless process really shines.)

    This future world revolves around code and commits and operations on them. While MozReview has review in its name, it's more than a review tool: it's a database and interface to code and its state.

    In this code first world, Bugzilla performs an ancillary role. Bugzilla is still there. Bugs are still there. MozReview review requests and commits link to bugs. But it is the code, not bugs, that are king. If you want to do anything with code, you interact with the code tools. And Bugzilla is not one of them.

    Another way of looking at this is that nearly everything involving code or commits becomes excised from Bugzilla. This would relegate Bugzilla to, well, an issue/bug tracker. And - ta da - that's something it excels at, since that's what it was originally designed to do! MozReview will provide an adequate platform to discuss code (a platform that Bugzilla provides today since it hosts code review). So if non-Bugzilla tools are handling everything related to code, do you really need a bug any more?

    This is the future we're trying to build with MozReview and Autoland. And this is why I think bugs and Bugzilla will play a less central role in the development process of Firefox in the future.

    Yes, there are many consequences and concerns about making this shift. You would be rational to be skeptical and doubt that this is the right thing to do. I have another post in the works that attempts to outline some common concerns and propose solutions to many of them. Before writing a long comment pointing out every way in which this will fail to work, I encourage you to wait for that post to be published. Stay tuned.

    Gervase MarkhamUsing Instantbird to Connect to IRC Servers Requiring a Username and Password

    [Update 2014-01-16: A point of clarification. There are two possible ways to send a password for IRC. One is supported in the Instantbird UI – it’s the one that automatically identifies your nick with NickServ, the bot which makes sure people don’t steal other people’s nicks. The other, which is rarer but which I needed, involves sending a password to connect at all, using the PASS command in the IRC protocol. That is what is documented here.]

    I was trying to do this; turns out it currently requires about:config manipulation and is not documented anywhere I can find.

    Using about:config (type /about config in a message window, or access via Preferences), set the following prefs:

    messenger.account.accountN.options.serverPassword
    messenger.account.accountN.options.username
    

    to the obvious values. Another useful tip: if the IRC server uses a self-signed cert, connect to it on the right port using Firefox and HTTPS, and you can save the cert out of the warning/exception dialog you get. You can then import it into Instantbird using the deeply-buried Certificate section of the Advanced Preferences, and it will trust the cert and connect. (I think this is what I did, although my memory is hazy.)
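    For example, the final prefs might look something like this (the account number, username and password here are made up for illustration; use whichever accountN identifier your IRC account has in about:config):

        messenger.account.account1.options.serverPassword = mysecretpassword
        messenger.account.account1.options.username = myusername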