William Lachance: mozregression updates

Lots of movement in mozregression (a tool for automatically determining when a regression was introduced in Firefox by bisecting builds on ftp.mozilla.org) in the last few months. Here are some highlights:

  • Support for win64 nightly and inbound builds (Kapil Singh, Vaibhav Agarwal)
  • Support for using an http cache to reduce time spent downloading builds (Sam Garrett)
  • Way better logging and printing of remaining time to finish bisection (Julien Pagès)
  • Much improved performance when bisecting inbound (Julien)
  • Support for automatic determination of whether a build is good/bad via a custom script (Julien; see the sketch after this list)
  • Tons of bug fixes and other robustness improvements (me, Sam, Julien, others…)
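
As a sketch of how that custom-script feature can be used, here is roughly what an evaluation helper might look like. Treat the details as assumptions to check against the documentation on the new site: the exit-code convention, the {binary} placeholder and the flag spellings below are my reading of the feature, not an authoritative reference.

# check_build.py - hypothetical helper for driving mozregression's
# automatic good/bad determination. mozregression launches each candidate
# build through a user-supplied command; this script checks the build and
# reports the verdict through its exit status (assumed convention:
# 0 = good, nonzero = bad).
import subprocess
import sys

def build_is_good(binary_path):
    # Stand-in for whatever check reproduces your regression: running a
    # testcase, grepping a log, timing a page load, etc. Here we just
    # verify the binary starts and exits cleanly.
    return subprocess.call([binary_path, "--version"]) == 0

if __name__ == "__main__":
    binary = sys.argv[1]  # path to the build under test
    sys.exit(0 if build_is_good(binary) else 1)

You would then run something like mozregression --good 2015-01-01 --bad 2015-01-25 --command 'python check_build.py {binary}' and let the bisection proceed unattended instead of answering good/bad by hand.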

Also thanks to Julien, we have a spiffy new website which documents many of these features. If it’s been a while, be sure to update your copy of mozregression to the latest version and check out the site for documentation on how to use the new features described above!

Thanks to everyone involved (especially Julien) for all the hard work. Hopefully the payoff will be a tool that’s just that much more useful to Firefox contributors everywhere. :)

Justin Wood: Release Engineering does a lot…

Hey Everyone,

I spent a few minutes a week over the last month or two working on compiling a list of Release Engineering work areas. Included in that list is identifying which repositories we “own” and work in, as well as where these repositories are mirrored. (We have copies on hg.m.o, git.m.o and GitHub; some live exclusively in their home location.)

All of this is happening while we transition to a more uniform and modern design style and philosophy.

My major takeaway here is that we have A LOT of things that we do. (This list explicitly excludes repositories that are obsolete and unused.)

So without further ado, I present our page ReleaseEngineering/Repositories

You’ll notice a few things about this: we have a column for Mirrors and for the RoR (Repository of Record). “Committable Location” was requested by Hal, and is explicitly for cases where, although we consider the RoR our important location, it may not necessarily be where we allow commits to land.

The other interesting thing is that we have automatic population of Travis and Coveralls URLs/status icons. This comes for free using some magic wiki templates I wrote.

The other piece of note here is that the table is generated from a list of pages using SemanticMediaWiki, so the entries for the repositories can be populated with things like “where are the docs”, “what applications use this repo”, “who are suitable reviewers”, etc. (all of those are still TODO on the releng side so far).

I’m hoping to put together a blog post at some point about how I chose to do much of this with MediaWiki. In the meantime, should any team at Mozilla find this enticing and wish to have one for themselves, much of the work I did here can easily be replicated for your team, even if you don’t need/like the multiple-repo-location magic of our table. I can help get you set up to add your own repos to the mix.

Remember, the only fields that are necessary are a repo name, the repo location, and owner(s). The last field can even be filled in automatically by a form on your page (see the end of Release Engineering’s page for an example of that form).

Reach out to me on IRC or e-mail (information is on my mozillians profile) if you want this for your team and we can talk. If your team doesn’t have a need, you can stare at all the stuff Releng is doing and remember to thank one of us next time you see us (or inquire about what we do, point contributors our way; we’re a friendly group, I promise).

Hannah Kane: A new online home for those who #teachtheweb

We’ve recently begun work on a new website that will serve the mentors in our Webmaker community—a gathering place for anyone who is teaching the Web. They’ll find activity kits, trainings, badges, the Web Literacy Map, and more. It will also be an online clubhouse for Webmaker Clubs, and will showcase the work of Hives to the broader network.

Our vision for the site is that it will provide pathways for sustained involvement in teaching the Web. Imagine a scenario where, after hosting a Maker Party, a college student in Pune wants to build on the momentum, but doesn’t know how. Or imagine a librarian in Seattle who is looking for activities for her weekly teen drop-in hours. Or a teacher in Buenos Aires who is looking to level up his own digital literacy skills. In each of these scenarios, we hope the person will look to this new site to find what they need.

We’re in the very early stages of building out the site. One of our first challenges is to figure out the best way to organize all of the content.

Fortunately, we were able to find 14 members of the community who were willing to participate in a “virtual card-sorting” activity. We gave each of the volunteers a list of 22 content areas (e.g. “Find a Teaching Kit,” “Join a Webmaker Club,” “Participate in a community discussion”), and asked them to organize the items into groups that made sense to them.

The results were fascinating. Some grouped the content by specific programs, concepts, or offerings. Others grouped by function (e.g. “Participate,” “Learn,” “Lead”). Others organized by identity (e.g. “Learner” or “Mentor”). Still others grouped by level of expertise needed.

We owe a debt of gratitude to those who participated in the research. We were able to better understand the variety of mental models, and we’re currently using those insights to build out some wireframes to test in the next heartbeat.

Once we firm up the information architecture, we’ll build and launch v1 of the site (our goal is to launch it by the end of Q1). From there, we’ll continue to iterate, adding more functionality and resources to meet the needs of our mentor community.

Future iterations will likely include:

  • Improving the way we share and discover curriculum modules
  • Enhancing our online training platform
  • Providing tools for groups to self-organize
  • Making improvements to our badging platform
  • Incorporating the next version of the Web Literacy Map

Stay tuned for more updates and opportunities to provide feedback throughout the process. We’ve also started a Discourse thread for continuing discussion of the platform.


Christian Heilmann: Where would people like to see me – some interesting answers

Will code for Bananas

For pure Shits and Giggles™ I put up a form yesterday asking people where I should try to work now that I left Mozilla. By no means have I approached all the companies I listed (hence an “other” option). I just wanted to see what people see me as and where I could do some good work. Of course, some of the answers disagreed and made a lot of assumptions:

Your ego knows no bounds. Putting companies that have already turned you down is very special behavior.

This is utterly true. I applied at Yahoo in 1997 and didn’t get the job. I then worked at Yahoo for almost five years a few years later. I should not have done that. Companies don’t change and once you have a certain skillset there is no way you could ever learn something different that might make yourself appealing to others. Know your place, and all that.

Sarcasm aside, I am always amazed how lucky we are to have choices in our market. There is not a single day I am not both baffled and very, very thankful for being able to do what I like and make a living with it. I feel like a fraud many a time, and I know many other people who seemingly have a “big ego” doing the same. The trick is to not let that stop you but understand that it makes you a better person, colleague and employee. We should strive to get better all the time, and this means reaching beyond what you think you can achieve.

I’m especially flattered that people thought I had already been contacted by all the companies I listed and asked for people to pick for me. I love working in the open, but that’s a bit too open, even for my taste. I am not that lucky – I don’t think anybody is.

The answers were pretty funny, and of course skewed as I gave a few options rather than leaving it completely open. The final “wish of the people list” is:

  • W3C (108 votes)
  • Canonical (39 votes)
  • Microsoft (38 votes)
  • Google (37 votes)
  • Facebook (14 votes)
  • Twitter (9 votes)
  • Mozilla (7 votes)
  • PubNub (26 votes)

PubNub’s entries had more and more exclamation points the more of them were submitted – I don’t know what happened there.

Other options with multiple votes were Apple, Adobe, CozyCloud, EFF, Futurice, Khan Academy, Opera, Spotify (I know who did that!) and the very charming “Retirement”.

Options labeled “fascinating” were:

  • A Circus
  • Burger King (that might still be a problem as I used to work on McDonalds.co.uk – might be a conflict of interest)
  • BangBros (no idea what that might be – a Nintendo Game?)
  • Catholic Church
  • Derick[SIC] Zoolander’s School for kids who can’t read good
  • Kevin Smith’s Movie
  • “Pizza chef at my local restaurant” (I was a pizza delivery guy for a while, might be possible to level up)
  • Playboy (they did publish Fahrenheit 451, let’s not forget that)
  • Taco Bell (this would have to be remote, or a hell of a commute)
  • The Avengers (I could be persuaded, but it probably will be an advisory role, Tony Stark style)
  • UKIP (erm, no, thanks)
  • Zombocom and
  • Starbucks barista. (this would mean I couldn’t go to Sweden any longer – they really like their coffee and Starbucks is seen as the Antichrist by many)

Some of the answers gave me super powers I don’t have, but they show that people would love to have people like me talk to others outside the bubble more:

  • “NASA” (I really, really think I have nothing they need)
  • “A book publisher (they need help to move into the 21st century)”
  • “Data.gov or another country’s open data platform.” (did work with that, might re-visit it)
  • “GCHQ; Be the bridge between our worlds”
  • “Spanish Government – Podemos CTO” (they might realise I am not Spanish)
  • “any bank for a11y online banking :-/”

Some answers showed a need to vent:

  • “Anything but Google or Facebook for God sake!”
  • “OK, and option 2: perhaps Twitter? You might improve their horrible JS code in the website! ;)”

The most confusing answers were “My butthole”, which sounds cramped and not a creative working environment, and “Who are you?”, which begs the answer “Why did you fill in this form?”.

Many of the answers showed a lot of trust in me and made me feel all warm and fuzzy and I want to thank whoever gave those:

  • be CTO of awesome startup
  • Enjoy life Chris!
  • Start something of your own. You rock too hard, anyway!
  • you were doing just fine. choose the one where your presence can be heard the loudest. cheers!
  • you’ve left Mozilla for something else, so you are jobless for a week or so! :-)
  • Yourself then hire me and let me tap your Dev knowledge :D

I have a new job. I am starting on Monday, and I will announce it in probably too much detail here on Thursday. Thanks to everyone who took part in this little exercise. I have an idea what I need to do in my new job, and the ideas listed here and the results showed me that I am on the right track.

Stormy Peters: Can or Can’t?


Can read or can’t eat books?

What I love about open source is that it’s a “can” world by default. You can do anything you think needs doing and nobody will tell you that you can’t. (They may not take your patch but they won’t tell you that you can’t create it!)

It’s often easier to define things by what they are not or what we can’t do. And the danger of that is you create a culture of “can’t”. Anyone who has raised kids or animals knows this. “No, don’t jump.” You can’t jump on people. “No, off the sofa.” You can’t be on the furniture. “No, don’t lick!” You can’t slobber on me. And hopefully when you realize it, you can fix it. “You can have this stuffed animal (instead of my favorite shoe). Good dog!”

Often when we aren’t sure how to do something, we fill the world with can’ts. “I don’t know how we should do this, but I know you can’t do that on a proprietary mailing list.” “I don’t know how I should lose weight, but I know you can’t have dessert.” I don’t know. Can’t. Don’t know. Can’t. Unsure. Can’t.

Watch the world around you. Is your world full of can’ts or full of “can do”s? Can you change it for the better?

Nathan Froyd: examples of poor API design, 1/N – pldhash functions

The other day in the #content IRC channel:

<bz> I have learned so many things about how to not define APIs in my work with Mozilla code ;)
<bz> (probably lots more to learn, though)

I, too, am still learning a lot about what makes a good API. Like a lot of other things, it’s easier to point out poor API design than to describe examples of good API design, and that’s what this blog post is about. In particular, the venerable XPCOM data structure PLDHashTable has been undergoing a number of changes lately, all aimed at bringing it up to date. (The question of why we have our own implementations of things that exist in the C++ standard library is for a separate blog post.)

The whole effort started with noticing that PL_DHashTableOperate is not a well-structured API. It’s necessary to quote some more of the API surface to fully understand what’s going on here:

typedef enum PLDHashOperator {
    PL_DHASH_LOOKUP = 0,        /* lookup entry */
    PL_DHASH_ADD = 1,           /* add entry */
    PL_DHASH_REMOVE = 2,        /* remove entry, or enumerator says remove */
    PL_DHASH_NEXT = 0,          /* enumerator says continue */
    PL_DHASH_STOP = 1           /* enumerator says stop */
} PLDHashOperator;

typedef PLDHashOperator
(* PLDHashEnumerator)(PLDHashTable *table, PLDHashEntryHdr *hdr, uint32_t number,
                      void *arg);

uint32_t
PL_DHashTableEnumerate(PLDHashTable *table, PLDHashEnumerator etor, void *arg);

PLDHashEntryHdr*
PL_DHashTableOperate(PLDHashTable* table, const void* key, PLDHashOperator op);

(PL_DHashTableOperate no longer exists in the tree due to other cleanup bugs; the above is approximately what it looked like at the end of 2014.)

There are several problems with the above slice of the API:

  • PL_DHashTableOperate(table, key, PL_DHASH_ADD) is a long way to spell what should have been named PL_DHashTableAdd(table, key)
  • There’s another problem with the above: it’s making a runtime decision (based on the value of op) about what should have been a compile-time decision: this particular call will always and forever be an add operation. We shouldn’t have the (admittedly small) runtime overhead of dispatching on op. It’s worth noting that compiling with LTO and a quality inliner will remove that runtime overhead, but we might as well structure the code so non-LTO compiles benefit and the code at callsites reads better.
  • Given the above definitions, you can say PL_DHashTableOperate(table, key, PL_DHASH_STOP) and nothing will complain. The PL_DHASH_NEXT and PL_DHASH_STOP values are really only for a function of type PLDHashEnumerator to return, but nothing about the above definition enforces that in any way. Similarly, you can return PL_DHASH_LOOKUP from a PLDHashEnumerator function, which is nonsensical.
  • The requirement to always return a PLDHashEntryHdr* from PL_DHashTableOperate means doing a PL_DHASH_REMOVE has to return something; it happens to return nullptr always, but it really should return void. In a similar fashion, PL_DHASH_LOOKUP always returns a non-nullptr pointer (!); one has to check PL_DHASH_ENTRY_IS_{FREE,BUSY} on the returned value. The typical style for an API like this would be to return nullptr if an entry for the given key didn’t exist, and a non-nullptr pointer if such an entry did. The free-ness or busy-ness of a given entry should be a property entirely internal to the hashtable implementation (it’s possible that some scenarios could be slightly more efficient with direct access to the busy-ness of an entry).

We might infer corresponding properties of a good API from each of the above issues:

  • Entry points for the API produce readable code.
  • The API doesn’t enforce unnecessary overhead.
  • The API makes it impossible to talk about nonsensical things.
  • It is always reasonably clear what return values from API functions describe.

Fixing the first two bulleted issues, above, was the subject of bug 1118024, done by Michael Pruett. Once that was done, we really didn’t need PL_DHashTableOperate, and removing PL_DHashTableOperate and related code was done in bug 1121202 and bug 1124920 by Michael Pruett and Nicholas Nethercote, respectively. Fixing the unusual return convention of PL_DHashTableLookup is being done in bug 1124973 by Nicholas Nethercote. Maybe once all this gets done, we can move away from C-style PL_DHashTable* functions to C++ methods on PLDHashTable itself!
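
To make the direction concrete, here is roughly the shape the per-operation entry points take once the runtime dispatch on op is gone. This is a sketch to illustrate the properties argued for above, not necessarily the exact signatures that landed in those bugs:

/* One entry point per operation: the operation is chosen at compile
 * time by picking a function, not at runtime via an op argument. */

/* Returns the entry for key, or nullptr if no such entry exists. */
PLDHashEntryHdr*
PL_DHashTableLookup(PLDHashTable* table, const void* key);

/* Adds and returns an entry for key (nullptr on failure). */
PLDHashEntryHdr*
PL_DHashTableAdd(PLDHashTable* table, const void* key);

/* Removal has nothing meaningful to return. */
void
PL_DHashTableRemove(PLDHashTable* table, const void* key);

Each signature now says exactly what the operation consumes and produces, and nonsense like passing PL_DHASH_STOP to a lookup is simply not expressible.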

Next time we’ll talk about the actual contents of a PL_DHashTable and how improvements have been made there, too.

Gregory Szorc: Commit Part Numbers and MozReview

It is common for commit messages in Firefox to contain strings like Part 1, Part 2, etc. See this push for bug 784841 for an extreme multi-part example.

When code review is conducted in Bugzilla, these identifiers are necessary because Bugzilla orders attachments/patches in the order they were updated or their patch title (I'm not actually sure!). If part numbers were omitted, it could be very confusing trying to figure out which order patches should be applied in.

However, when code review is conducted in MozReview, there is no need for explicit part numbers to convey ordering because the ordering of commits is implicitly defined by the repository history that you pushed to MozReview!

I argue that if you are using MozReview, you should stop writing Part N in your commit messages, as it provides little to no benefit.

I, for one, welcome this new world order: I've previously wasted a lot of time rewriting commit messages to reflect new part ordering after doing history rewriting. With MozReview, that overhead is gone and I barely pay a penalty for rewriting history, something that often produces a more reviewable series of commits and makes reviewing and landing a complex patch series significantly easier.

Cameron Kaiser: And now for something completely different: the Pono Player review and Power Macs (plus: who's really to blame for Dropbox?)

Regular business first: this is now a syndicated blog on Planet Mozilla. I consider this an honour that should also go a long way toward reminding folks that not only are there well-supported community tier-3 ports, but lots of people still use them. In return I promise not to bore the punters too much with vintage technology.

IonPower crossed phase 2 (compilation) yesterday -- it builds and links, and nearly immediately asserts after some brief codegen, but at this phase that's entirely expected. Next, phase 3 is to get it to build a trivial script in Baseline mode ("var i=0") and run to completion without crashing or assertions, and phase 4 is to get it to pass the test suite in Baseline-only mode, which will make it as functional as PPCBC. Phases 5 and 6 are the same, but this time for Ion. IonPower really repays most of our technical debt -- no more fragile glue code trying to keep the JaegerMonkey code generator working, substantially fewer compiler warnings, and far fewer hacks to the JIT to work around oddities of branching and branch optimization. Plus, many of the optimizations I wrote for PPCBC will transfer to IonPower, so it should still be nearly as fast in Baseline-only mode. We'll talk more about the changes required in a future blog post.

Now to the Power Mac scene. I haven't commented on Dropbox dropping PowerPC support (and 10.4/10.5) because that's been repeatedly reported by others in the blogscene and personally I rarely use Dropbox at all, having my own server infrastructure for file exchange. That said, there are many people who rely on it heavily, even a petition (which you can sign) to bring support back. But let's be clear here: do you really want to blame someone? Do you really want to blame the right someone? Then blame Apple. Apple dropped PowerPC compilation from Xcode 4; Apple dropped Rosetta. Unless you keep a 10.6 machine around running Xcode 3, you can't build (true) Universal binaries anymore -- let alone one that compiles against the 10.4 SDK -- and it's doubtful Apple would let such an app (even if you did build it) into the App Store because it's predicated on deprecated technology. Except for wackos like me who spend time building PowerPC-specific applications and/or don't give a flying cancerous pancreas whether Apple finds such work acceptable, this approach already isn't viable for a commercial business and it's becoming even less viable as Apple actively retires 10.6-capable models. So, sure, make your voices heard. But don't forget who screwed us first, and keep your vintage hardware running.

That said, I am personally aware of someone™ who is working on getting the supported Python interconnect running on OS X Power Macs, and it might be possible to rebuild Finder integration on top of that. (It's not me. Don't ask.) I'll let this individual comment if he or she wants to.

Onto the main article. As many of you may or may not know, my undergraduate degree was actually in general linguistics, and all linguists must (obviously) have some working knowledge of acoustics. I've also been a bit of a poseur audiophile, and while I enjoy good music I especially enjoy good music that's well engineered (Alan Parsons is a demi-god).

The Por Pono Player, thus, gives me pause. In acoustics I lived and died by the Nyquist-Shannon sampling theorem, and my day job today is so heavily science and research-oriented that I really need to deal with claims in a scientific, reproducible manner. That doesn't mean I don't have an open mind or won't make unusual decisions on a music format for non-auditory reasons. For example, I prefer to keep my tracks uncompressed, even though I freely admit that I'm hard pressed to find any difference in a 256kbit/s MP3 (let alone 320), because I'd like to keep a bitwise exact copy for archival purposes and playback; in fact, I use AIFF as my preferred format simply because OS X rips directly to it, everything plays it, and everything plays it with minimum CPU overhead despite FLAC being lossless and smaller. And hard disks are cheap, and I can convert it to FLAC for my Sansa Fuze if I needed to.

So thus it is with the Por Pono Player. For $400, you can get a player that directly pumps uncompressed, high-quality remastered 24-bit audio at up to 192kHz into your ears with no downsampling and allegedly no funny business. Immediately my acoustics professor cries foul. "Cameron," she says as she writes a big fat F on this blog post, "you know perfectly well that a CD using 44.1kHz as its sampling rate will accurately reproduce sounds up to 22.05kHz without aliasing, and 16-bit audio has indistinguishable quantization error in multiple blinded studies." Yes, I know, I say sheepishly, having tried to create high-bit rate digital playback algorithms on the Commodore 64 and failed because the 6510's clock speed isn't fast enough to pump samples through the SID chip at anything much above telephone call frequencies. But I figured that if there was a chance, if there was anything, that could demonstrate a difference in audio quality that I could uncover it with a Pono Player and a set of good headphones (I own a set of Grado SR125e cans, which are outstanding for the price). So I preordered one and yesterday it arrived, in a fun wooden box:

It includes a MicroUSB charger (and cable), an SDXC MicroSD card (64GB, plus the 64GB internal storage), a fawning missive from Neil Young, the instigator of the original Kickstarter, the yellow triangular unit itself (available now in other colours), and no headphones (it's BYO headset):

My original plan was to do an A-B comparison with Pink Floyd's Dark Side of the Moon because it was originally mastered by the godlike Alan Parsons, I have the SACD 30th Anniversary master, and the album is generally considered high quality in all its forms. When I tried to do that, though, several problems rapidly became apparent:

First, the included card is SDXC, and SDXC support (and exFAT) wasn't added to OS X until 10.6.4. Although you can get exFAT support on 10.5 with OSXFUSE, I don't know how good their support is on PowerPC and it definitely doesn't work on Tiger (and I'm not aware of a module for the older MacFUSE that does run on Tiger). That limits you to SDHC cards up to 32GB at least on 10.4, which really hurts on FLAC or ALAC and especially on AIFF.

Second, the internal storage is not accessible directly to the OS. I plugged in the Pono Player to my iMac G4 and it showed up in System Profiler, but I couldn't do anything with it. The 64GB of internal storage is only accessible to the music store app, which brings us to the third problem:

Third, the Pono Music World app (a skinned version of JRiver Media Center) is Intel-only, 10.6+. You can't download tracks any other way right now, which also means you're currently screwed if you use Linux, even on an Intel Mac. And all they had was Dark Side in 44.1kHz/16 bit ... exactly the same as CD!

So I looked around for other options. HDTracks didn't have Dark Side, though they did have The (weaksauce) Endless River and The Division Bell in 96kHz/24 bit. I own both of these, but 96kHz wasn't really what I had in mind, and when I signed up to try a track it turns out they need a downloader also which is also a reskinned JRiver! And their reasoning for this in the FAQ is total crap.

Eventually I was able to find two sites that offer sample tracks I could download in TenFourFox (I had to downsample one for comparison). The first offers multiple formats in WAV, which your Power Mac actually can play, even in 24-bit (but it may be downsampled for your audio chip; if you go to /Applications/Utilities/Audio MIDI Setup.app you can see the sample rate and quantization for your audio output -- my quad G5 offers up to 24/96kHz but my iMac only has 16/44.1). The second was in FLAC, which Audacity crashed trying to convert, MacAmp Lite X wouldn't even recognize, and XiphQT (via QuickTime) played like it was being held underwater by a chainsaw (sample size mismatch, no doubt); I had to convert this by hand. I then put them onto a SDHC card and installed it in the Pono.

Yuck. I was very disappointed in the interface and LCD. I know that display quality wasn't a major concern, but it looks clunky and ugly and has terrible angles (see for yourself!) and on a $400 device that's not acceptable. The UI is very slow sometimes, even with the hardware buttons (just volume and power, no track controls), and the touch screen is very low quality. But I duly tried the built-in Neil Young track, which being an official Por Pono track turns on a special blue light to tell you it's special, and on my Grados it sounded pretty good, actually. That was encouraging. So I turned off the display and went through a few cycles of A-B testing with a random playlist between the two sets of tracks.

And ... well ... my identification abilities were almost completely statistical chance. In fact, I was slightly worse than chance would predict on the second set of tracks. I can only conclude that Harry Nyquist triumphs. With high quality headphones, presumably high quality DSPs and presumably high quality recordings, it's absolutely bupkis difference for me between CD-quality and Pono-quality.
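
That result is exactly what the sampling theorem predicts. As a quick worked reminder (standard theory, not a claim about the Pono's internals), the highest frequency a sampled signal can faithfully represent is half the sampling rate:

\[ f_{\max} = \frac{f_s}{2}, \qquad \frac{44100~\mathrm{Hz}}{2} = 22050~\mathrm{Hz}, \qquad \frac{192000~\mathrm{Hz}}{2} = 96000~\mathrm{Hz} \]

Human hearing tops out around 20kHz (lower for most adults), so CD-rate audio already covers the entire audible band; the extra headroom of 192kHz sampling lies wholly above anything a listener can perceive.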

Don't get me wrong: I am happy to hear that other people are concerned about the deficiencies in modern audio engineering -- and making it a marketable feature. We've all heard the "loudness war," for example, which dramatically compresses the dynamic range of previously luxurious tracks into a bafflingly small amplitude range which the uncultured ear, used only to quantity over quality, apparently prefers. Furthermore, early CD masters used RIAA equalization, which overdrove the treble and was completely unnecessary with digital audio, though that grave error hasn't been repeated since at least 1990 or earlier. Fortunately, assuming you get audio engineers who know what they're doing, a modern CD is every bit as a good to the human ear as a DVD-Audio disc or an SACD. And if modern music makes a return to quality engineering with high quality intermediates (where 24-bit really does make a difference) and appropriate dynamic range, we'll all be better off.

But the Pono Player doesn't live up to the hype in pretty much any respect. It has line out (which does double as a headphone port to share) and it's high quality for what it does play, so it'll be nice for my hi-fi system if I can get anything on it, but the Sansa Fuze is smaller and more convenient as a portable player and the Pono's going back in the wooden box. Frankly, it feels like it was pushed out half-baked, it's problematic if you don't own a modern Mac, and the imperceptible improvements in audio mean it's definitely not worth the money over what you already own. But that's why you read this blog: I just spent $400 so you don't have to.

Tarek Ziadé: Charity Python Code Review

Raising 2500 euros for a charity is hard. That's what I am trying to do for the Berlin Marathon on Alvarum.

Mind you, this is not to get a bib - I was lucky enough to get one from the lottery. It just feels right to take the opportunity of this marathon to raise money for Doctors without Borders, whatever my marathon result will be. I am not getting any money out of this; I am paying all my marathon fees myself. Every penny donated goes to MSF (Doctors without Borders).

It's the first time I am doing a fundraising for a foundation and I guess that I've exhausted all the potentials donators in my family, friends and colleagues circles.

I guess I've reached the point where I have to give back something to the people that are willing to donate.

So here's a proposal: I have been doing Python coding for quite some time, wrote some books in both English and French on the topic, and working on large scale projects using Python. I have also gave a lot of talks in Python conferences around the world.

I am not an expert in any specific field like scientific Python, but I am good at "general Python" and at designing stuff that scales.

I am offering one of the following services:

  • Python code review
  • Slides review
  • Documentation review or translation from English to French

The contract (gosh this is probably very incomplete):

  • Your project has to be under an open source license, and available online.
  • I am looking for small reviews, between 30 minutes and 4 hours of work, I guess.
  • You are responsible for the initial guidance, e.g. explaining what specific review you want me to do.
  • I am allowed to say no (mostly if by any chance I get tons of proposals, or if I don't feel like I am the right person to review your code).
  • This is on my free time so I can't really give deadlines - however, depending on the project and the amount of work, I will be able to roughly estimate how long it is going to take and when I should be able to do it.
  • If I do the work, you can't back out just because you don't like the result of my review. If you do without a good reason, that's mean and I might cry a little.
  • I won't be responsible for any damage or liability done to your project because of my review.
  • I am not issuing any invoice or anything like that. The fundraising site will issue a classical invoice when you make the donation. I am not part of that transaction nor responsible for it.
  • Once the work is done, I will tell you how long it took, and you are free to give whatever you think is fair; I will happily accept whatever you give to my fundraising. If you give 1 euro for 4 hours of work I might make a sad face, but I will still accept it.

Interested? Mail me: tarek@ziade.org

And if you just want to give to the fundraising it's here: http://www.alvarum.com/tarekziade

Air Mozilla: Engineering Meeting

The weekly Mozilla engineering meeting.

Michael Kaply: What About Firefox Deployment?

You might have noticed that I spend most of my resources around configuring Firefox and not around deploying Firefox. There are a couple reasons for that:

  1. There really isn’t a "one size fits all" solution for Firefox deployment because there are so many products that can be used to deploy software within different organizations.
  2. Most discussions around deployment devolve into a "I wish Mozilla would do a Firefox MSI" discussion.

That being said, there are some things I can recommend around deploying Firefox on Windows.

If you want to modify the Firefox installer, I’ve done a few posts on this in the past:

If you need to integrate add-ons into that install, I've posted about that as well:
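
As a rough sketch of the general idea (my illustration, not necessarily the method those posts describe): Firefox has historically picked up add-ons bundled inside the installation's distribution directory, with a layout something like this, where the file name matches the add-on's ID:

C:\Program Files\Mozilla Firefox\
    distribution\
        extensions\
            myaddon@example.com.xpi

Add-ons placed there get picked up and installed into user profiles automatically, which makes the approach friendly to most software deployment tools.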

You could also consider asking on the Enterprise Working Group mailing list. There's probably someone that's already figured it out for your software deployment solution.

If you really need an MSI, check out FrontMotion. They've been doing MSI work for quite a while.

And if you really want Firefox to have an official MSI, consider working on bug 598647. That's where an MSI implementation got started but never finished.

Byron Jones: happy bmo push day!

the following changes have been pushed to bugzilla.mozilla.org:

  • [1122269] no longer have access to https://bugzilla.mozilla.org/cvs-update.log
  • [1119184] Securemail incorrectly displays “You will have to contact bugzilla-admin@foo to reset your password.” for whines
  • [1122565] editversions.cgi should focus the version field on page load to cut down on need for mouse
  • [1124254] form.dev-engagement-event: More changes to default NEEDINFO
  • [1119988] form.dev-engagement-event: disabled accounts causes invalid/incomplete bugs to be created
  • [616197] Wrap long bug summaries in dependency graphs, to avoid horizontal scrolling
  • [1117345] Can’t choose a resolution when trying to resolve a bug (with canconfirm rights)
  • [1125320] form.dev-engagement-event: Two new questions
  • [1121594] Mozilla Recruiting Requisition Opening Process Template
  • [1124437] Backport upstream bug 1090275 to bmo/4.2 to whitelist webservice api methods
  • [1124432] Backport upstream bug 1079065 to bmo/4.2 to fix improper use of open() calls

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla

Jess Klein: Quality Assurance reviews for Design, Functionality and Communications


This week a few of the features that I have been writing about will be shipping on webmaker.org - the work for Privacy Day and the new on-boarding experience. You might be wondering what we’ve been up to in the period between when a project gets coded and when it goes live. Two magical words: quality assurance (QA). We are still refining the process, and I am very open to suggestions as to how to improve and streamline it. For the time being, let me walk you through this round of QA on the Privacy Day content.

It all starts out with a github issue... and a kickoff meeting.

The same team who worked on the prototyping phase of the Privacy Day campaign are responsible for the quality assurance. We met to kick off and map out our plan for going live. This project required three kinds of reviews that more or less had to happen simultaneously. We broke down the responsibilities like this:

Aki (lead engineer) - responsible for preparing the docs and leading a functionality review
Paul (communication/marketing) - responsible for preparing the docs and leading a marketing review
Jess (lead designer) - responsible for preparing docs and leading a design review
Bobby (product manager) - responsible for recruiting participants to do the reviews and wrangling bug triage
Cassie (quality) - responsible for the final look and the thumbs up to say whether the feature is acceptable to ship

Each of us who were responsible for docs wrote up instructions for QA reviewers to follow.

We recruited staff and community to user test on a variety of different devices. This was done in a few different ways: I did both one-on-one and asynchronous review sessions with my colleagues and the community. It helps to have both kinds of user tests so that you can get honest feedback. Allowing for asynchronous or independent testing is particularly beneficial because it signals to the reviewer that this is an ongoing process and that bugs can be filed at any point during the specified review period.

The process is completely open to the community. At any given point the github issues are public, the calls for help are public and the iteration is done openly.

If there were any problems, they were logged in github as issues. The most effective issues have a screenshot of the problem and a recommended solution. Additionally, it’s important to note whether the problem blocks the feature from shipping or not.

We acknowledged when user testers found something useful, and identified when a problem was out of scope to fix before shipping.

We quickly iterated on fixing bugs and closing issues as a team, and gave each other some indication when we thought a problem was fixed sufficiently.

When we were all happy and got the final thumbs up regarding quality, we closed the github issue and celebrated. Then we started making preparations to push the feature live (and snoopy danced a little).


Doug Belshaw: Considerations when creating a Privacy badge pathway

Between June and October 2014 I chaired the Badge Alliance working group for Digital and Web Literacies. This was an obvious fit for me, having previously been on the Open Badges team at Mozilla, and currently being Web Literacy Lead.

Running

We used a Google Group to organise our meetings. Our Badge Alliance liaison was my former colleague Carla Casilli. The group contained 208 people, although only around 10% of that number were active at any given time.

The deliverable we decided upon was a document detailing considerations individuals/organisations should take into account when creating a Privacy badge pathway.

Access the document here

We used Mozilla’s Web Literacy Map as a starting point for this work, mainly because many of us had been part of the conversations that led to the creation of it. Our discussions moved from monthly, to fortnightly, to weekly. They were wide-ranging and included many options. However, the guidance we ended up providing is as simple and as straightforward as possible.

For example, we advocated the creation of five badges:

  1. Identifying rights retained and removed through user agreements
  2. Taking steps to secure non-encrypted connections
  3. Explaining ways in which computer criminals are able to gain access to user information
  4. Managing the digital footprint of an online persona
  5. Identifying and taking steps to keep important elements of identity private

We presented options for how learners would level-up using these badges:

  • Trivial Pursuit approach
  • Majority approach
  • Cluster approach

More details on the badges and approaches can be found in the document. We also included more speculative material around federation. This involved exploring the difference between pathways, systems and ecosystems.

The deliverable from this working group is currently still on Google Docs, but if there’s enough interest we’ll port it to GitHub pages so it looks a bit like the existing Webmaker whitepaper. This work is helping inform an upcoming whitepaper around Learning Pathways which should be ready by the end of Q1 2015.

Karen Smith, co-author of the new whitepaper and part of the Badge Alliance working group, is also heading up a project (that I’m involved with in a small way) for the Office of the Privacy Commissioner of Canada. This is also informed in many ways by this work.


Comments? Questions? Comment directly on the document, tweet me (@dajbelshaw) or email me: doug@mozillafoundation.org

Stormy Peters: Your app is not a lottery ticket

Many app developers are secretly hoping to win the lottery. You know all those horrible free apps full of ads? I bet most of them were hoping to be the next Flappy Bird app. (The Flappy Bird author was making $50K/day from ads for a while.)

The problem is that when you are that focused on making millions, you are not focused on making a good app that people actually want. When you add ads before you add value, you’ll end up with no users no matter how strategically placed your ads are.

So, the secret to making millions with your app?

  • Find a need or problem that people have that you can solve.
  • Solve the problem.
  • Make your users awesome. Luke first sent me a pointer to Kathy Sierra’s idea of making your users awesome.  Instagram let people create awesome pictures. Then their friends asked them how they did it …
  • Then monetize. (You can think about this earlier but don’t focus on it until you are doing well.)

If you are a good app developer or web developer, you’ll probably find it easier to do well financially helping small businesses around you create the apps and web pages they need than you will trying to randomly guess what game people might like. (If you have a good idea for a game, that you are sure you and your friends and then perhaps others would like to play, go for it!)

Alistair Laing: Right tool for the Job

I’m still as keen as ever and enjoying the experience of developing a browser extension. Last week was the first time I hung out in Google Hangouts with Jan and Florent. On first impressions, Google Hangouts is pretty sweet. It was smooth and clear (I’m not sure how much of that was down to broadband speeds and connection quality). I learnt so much in that first one-hour session and enjoyed chatting to them face-to-face (in digital terms).

TOO COOL to Minify & TOO SASS’y for tools

One of the things I learnt was how to approach JS/CSS. My front-end developer head tells me to always minify and concatenate files to reduce HTTP requests and, on the maintenance side, to use a CSS pre-processor for variables etc. When it comes to developing browser extensions, though, you do not have the same issues, for the following reasons:

  1. No HTTP requests are made, because the files are actually packaged with the extension and therefore already installed on the client machine. There’s also NO network latency because of this.
  2. File sizes aren’t that important for browser extensions (for Firefox at least). Extensions are packaged up in such an effective way (basically zipping all the contents together) that the file sizes are “reduced” anyway.
  3. Whilst attempting to fix an issue I came across Mozilla’s implementation of CSS variables, which sort of solves the issue around CSS variables and modularising the code.

Later today, I’m scheduled to hang out with Jan again, and I’m thinking about writing another post about XUL.


Mozilla Release Management Team: Firefox 36 beta3 to beta4

In this beta release, for both Desktop & Mobile, we fixed some issues in JavaScript, some stability issues, etc. We also increased the memory size of some components to decrease the number of crashes (examples: bugs 869208 & 1124892).

  • 40 changesets
  • 121 files changed
  • 1528 insertions
  • 1107 deletions

Extension    Occurrences
cpp          28
h            21
c            16
(none)       11
java         9
html         7
py           4
js           4
mn           3
ini          3
mk           2
cc           2
xml          1
xhtml        1
svg          1
sh           1
in           1
idl          1
dep          1
css          1
build        1

Module       Occurrences
security     46
js           15
dom          15
mobile       10
browser      10
editor       6
gfx          5
testing      4
toolkit      3
ipc          2
xpcom        1
services     1
layout       1
image        1

List of changesets:

Cameron McCormack: Bug 1092363 - Disable Bug 931668 optimizations for the time being. r=dbaron a=abillings - 126d92ac00e9
Tim Taubert: Bug 1085369 - Move key wrapping/unwrapping tests to their own test file. r=rbarnes, a=test-only - afab84ec4e34
Tim Taubert: Bug 1085369 - Move other long-running tests to separate test files. r=keeler, a=test-only - d0660bbc79a1
Tim Taubert: Bug 1093655 - Fix intermittent browser_crashedTabs.js failures. a=test-only - 957b4a673416
Benjamin Smedberg: Bug 869208 - Increase the buffer size we're using to deliver network streams to OOPP plugins. r=aklotz, a=sledru - cb0fd5d9a263
Nicholas Nethercote: Bug 1122322 (follow-up) - Fix busted paths in worker memory reporter. r=bent, a=sledru - a99eabe5e8ea
Bobby Holley: Bug 1123983 - Don't reset request status in MediaDecoderStateMachine::FlushDecoding. r=cpearce, a=sledru - e17127e00300
Jean-Yves Avenard: Bug 1124172 - Abort read if there's nothing to read. r=bholley, a=sledru - cb103a939041
Jean-Yves Avenard: Bug 1123198 - Run reset parser state algorithm when aborting. r=cajbir, a=sledru - 17830430e6be
Martyn Haigh: Bug 1122074 - Normal Tabs tray has an empty state. r=mcomella, a=sledru - c1e9f11144a5
Michael Comella: Bug 1096958 - Move TilesRecorder instance into TopSitesPanel. r=bnicholson, a=sledru - d6baa06d52b4
Michael Comella: Bug 1110555 - Use real device dimensions when calculating LWT bitmap sizes. r=mhaigh, a=sledru - 2745f66dac6f
Michael Comella: Bug 1107386 - Set internal container height as height of MenuPopup. r=mhaigh, a=sledru - e4e2855e992c
Ehsan Akhgari: Bug 1120233 - Ensure that the delete command will stay enabled for password fields. r=roc, ba=sledru - 34330baf2af6
Philipp Kewisch: Bug 1084066 - plugins and extensions moved to wrong directory by mozharness. r=ted,a=sledru - 64fb35ee1af6
Bob Owen: Bug 1123245 Part 1: Enable an open sandbox on Windows NPAPI processes. r=josh, r=tabraldes, a=sledru - 2ab5add95717
Bob Owen: Bug 1123245 Part 2: Use the USER_NON_ADMIN access token level for Windows NPAPI processes. r=tabraldes, a=sledru - f7b5148c84a1
Bob Owen: Bug 1123245 Part 3: Add prefs for the Windows NPAPI process sandbox. r=bsmedberg, a=sledru - 9bfc57be3f2c
Makoto Kato: Bug 1121829 - Support redirection of kernel32.dll for hooking function. r=dmajor, a=sylvestre - d340f3d3439d
Ting-Yu Chou: Bug 989048 - Clean up emulator temporary files and do not overwrite userdata image. r=ahal, a=test-only - 89ea80802586
Richard Newman: Bug 951480 - Disable test_tokenserverclient on Android. a=test-only - 775b46e5b648
Jean-Yves Avenard: Bug 1116007 - Disable inconsistent test. a=test-only - 5d7d74f94d6a
Kai Engert: Bug 1107731 - Upgrade Mozilla 36 to use NSS 3.17.4. a=sledru - f4e1d64f9ab9
Gijs Kruitbosch: Bug 1098371 - Create localized version of sslv3 error page. r=mconley, a=sledru - e6cefc687439
Masatoshi Kimura: Bug 1113780 - Use SSL_ERROR_UNSUPPORTED_VERSION for SSLv3 error page. r=gijs, a=sylvestre (see Bug 1098371) - ea3b10634381
Jon Coppeard: Bug 1108007 - Don't allow GC to observe uninitialized elements in cloned array. r=nbp, a=sledru - a160dd7b5dda
Byron Campen [:bwc]: Bug 1123882 - Fix case where offset != 0. r=derf, a=abillings - 228ee06444b5
Mats Palmgren: Bug 1099110 - Add a runtime check before the downcast in BreakSink::SetCapitalization. r=jfkthame, a=sledru - 12972395700a
Mats Palmgren: Bug 1110557. r=mak, r=gavin, a=abillings - 3f71dcaa9396
Glenn Randers-Pehrson: Bug 1117406 - Fix handling of out-of-range PNG tRNS values. r=jmuizelaar, a=abillings - a532a2852b2f
Tom Schuster: Bug 1111248. r=Waldo, a=sledru - 7f44816c0449
Tom Schuster: Bug 1111243 - Implement ES6 proxy behavior for IsArray. r=efaust, a=sledru - bf8644a5c52a
Ben Turner: Bug 1122750 - Remove unnecessary destroy calls. r=khuey, a=sledru - 508190797a80
Mark Capella: Bug 851861 - Intermittent testFlingCorrectness, etc al. dragSync() consumers. r=mfinkle, a=sledru - 3aca4622bfd5
Jan de Mooij: Bug 1115776 - Fix LApplyArgsGeneric to always emit the has-script check. r=shu, a=sledru - 9ac8ce8d36ef
Nicolas B. Pierron: Bug 1105187 - Uplift the harness changes to fix jit-test failures. a=test-only - b17339648b55
Nicolas Silva: Bug 1119019 - Avoid destroying a SharedSurface before its TextureClient/Host pair. r=sotaro, a=abillings - 6601b8da1750
Markus Stange: Bug 1117304 - Also do the checks at the start of CopyRect in release builds. r=Bas, a=sledru - 4417d345698a
Markus Stange: Bug 1117304 - Make sure the tile filter doesn't call CopyRect on surfaces with different formats. r=Bas, a=sledru - bc7489448a98
David Major: Bug 1124892 - Adjust Breakpad reservation for xul.dll inflation. r=bsmedberg, a=sledru - 59aa16cfd49f

Ian Bicking: A Product Journal: To MVP Or Not To MVP

I’m going to try to journal the process of a new product that I’m developing in Mozilla Cloud Services. My previous post was The Tech Demo, and the first in the series is Conception.

The Minimal Viable Product

The Minimal Viable Product is a popular product development approach at Mozilla, and judging from Hacker News it is popular everywhere (but that is a wildly inaccurate way to judge common practice).

The idea is that you build the smallest thing that could be useful, and you ship it. The idea isn’t to make a great product, but to make something so you can learn in the field. A couple definitions:

The Minimum Viable Product (MVP) is a key lean startup concept popularized by Eric Ries. The basic idea is to maximize validated learning for the least amount of effort. After all, why waste effort building out a product without first testing if it’s worth it.

– from How I built my Minimum Viable Product (emphasis in original)

I like this phrase “validated learning.” Another definition:

A core component of Lean Startup methodology is the build-measure-learn feedback loop. The first step is figuring out the problem that needs to be solved and then developing a minimum viable product (MVP) to begin the process of learning as quickly as possible. Once the MVP is established, a startup can work on tuning the engine. This will involve measurement and learning and must include actionable metrics that can demonstrate cause and effect question.

– Lean Startup Methodology (emphasis added)

I don’t like this model at all: “once the MVP is established, a startup can work on tuning the engine.” You tune something that works the way you want it to, but isn’t powerful or efficient or fast enough. You’ve established almost nothing when you’ve created an MVP, no aspect of the product is validated, it would be premature to tune. But I see this antipattern happen frequently: get an MVP out quickly, often shutting down critically engaged deliberation in order to Just Get It Shipped, then use that product as the model for further incremental improvements. Just Get It Shipped is okay, incrementally improving products is okay, but together they are boring and uncreative.

There’s another broad discussion to be had another time about how to enable positive and constructive critical engagement around a project. It’s not easy, but that’s where learning happens, and the purpose of the MVP is to learn, not to produce. In contrast I find myself impressed by the sheer willfulness of the Halflife development process which apparently involved months of six hour design meetings, four days a week, producing large and detailed design documents. Maybe I’m impressed because it sounds so exhausting, a feat of endurance. And perhaps it implies that waterfall can work if you invest in it properly.

Plan plan plan

I have a certain respect for this development pattern that Dijkstra describes:

Q: In practice it often appears that pressures of production reward clever programming over good programming: how are we progressing in making the case that good programming is also cost effective?

A: Well, it has been said over and over again that the tremendous cost of programming is caused by the fact that it is done by cheap labor, which makes it very expensive, and secondly that people rush into coding. One of the things people learn in colleges nowadays is to think first; that makes the development more cost effective. I know of at least one software house in France, and there may be more because this story is already a number of years old, where it is a firm rule of the house, that for whatever software they are committed to deliver, coding is not allowed to start before seventy percent of the scheduled time has elapsed. So if after nine months a project team reports to their boss that they want to start coding, he will ask: “Are you sure there is nothing else to do?” If they say yes, they will be told that the product will ship in three months. That company is highly successful.

– from Interview Prof. Dr. Edsger W. Dijkstra, Austin, 04–03–1985

Or, a warning from a page full of these kinds of quotes: “Weeks of programming can save you hours of planning.” The planning process Dijkstra describes is intriguing; it says something like: if you spend two weeks making a plan for how you’ll complete a project in two weeks, then it is an appropriate investment to spend another week of planning to save half a week of programming. Or, if you spend a month planning for a month of programming, then you haven’t invested enough in planning to justify that programming work – to ensure the quality, to plan the order of approach, to understand the pieces that fit together, to ensure the foundation is correct, to ensure the staffing is appropriate, and so on.

I believe “Waterfall Design” gets much of its negative connotation from a lack of good design. A Waterfall process requires the design to be very, very good. With Waterfall the design is too important to leave to the experts, to let the architect arrange technical components, the program manager arrange schedules, the database architect design the storage, and so on. It’s anti-collaborative, disengaged. It relies on intuition and common sense, and those are not powerful enough. I’ll quote Dijkstra again:

The usual way in which we plan today for tomorrow is in yesterday’s vocabulary. We do so, because we try to get away with the concepts we are familiar with and that have acquired their meanings in our past experience. Of course, the words and the concepts don’t quite fit because our future differs from our past, but then we stretch them a little bit. Linguists are quite familiar with the phenomenon that the meanings of words evolve over time, but also know that this is a slow and gradual process.

It is the most common way of trying to cope with novelty: by means of metaphors and analogies we try to link the new to the old, the novel to the familiar. Under sufficiently slow and gradual change, it works reasonably well; in the case of a sharp discontinuity, however, the method breaks down: though we may glorify it with the name “common sense”, our past experience is no longer relevant, the analogies become too shallow, and the metaphors become more misleading than illuminating. This is the situation that is characteristic for the “radical” novelty.

Coping with radical novelty requires an orthogonal method. One must consider one’s own past, the experiences collected, and the habits formed in it as an unfortunate accident of history, and one has to approach the radical novelty with a blank mind, consciously refusing to try to link it with what is already familiar, because the familiar is hopelessly inadequate. One has, with initially a kind of split personality, to come to grips with a radical novelty as a dissociated topic in its own right. Coming to grips with a radical novelty amounts to creating and learning a new foreign language that can not be translated into one’s mother tongue. (Any one who has learned quantum mechanics knows what I am talking about.) Needless to say, adjusting to radical novelties is not a very popular activity, for it requires hard work. For the same reason, the radical novelties themselves are unwelcome.

– from EWD 1036, On the cruelty of really teaching computing science

Research

All this praise of planning implies you know what you are trying to make. Unlikely!

Coding can be a form of planning. You can’t research how interactions feel without having an actual interaction to look at. You can’t figure out how feasible some techniques are without trying them. Planning without collaborative creativity is dull, planning without research is just documenting someone’s intuition.

The danger is that when you are planning with code, it feels like execution. You can plan to throw one away to put yourself in the right state of mind, but I think it is better to simply be clear and transparent about why you are writing the code you are writing. Transparent because the danger isn’t just that you confuse your coding with execution, but that anyone else is likely to confuse the two as well.

So code up a storm to learn, code up something usable so people will use it and then you can learn from that too.

My own conclusion…

I’m not making an MVP. I’m not going to make a maximum viable product either – rather, the next step in the project is not to make a viable product. The next stage is research and learning. Code is going to be part of that. Dogfooding will be part of it too, because I believe that’s important for learning. I fear thinking in terms of “MVP” would let us lose sight of the why behind this iteration – it is a dangerous abstraction during a period of product definition.

Also, if you’ve gotten this far, you’ll see I’m not creating minimal viable blog posts. Sorry about that.

Stormy Peters7 reasons asynchronous communication is better than synchronous communication in open source

Traditionally, open source software has relied primarily on asynchronous communication. While there are probably quite a few synchronous conversations on irc, most project discussions and decisions will happen on asynchronous channels like mailing lists, bug tracking tools and blogs.

I think there’s another reason for this: synchronous communication is difficult for an open source project, or for any project where people are distributed. Synchronous conversations are:

  • Inconvenient. It’s hard to schedule synchronous meetings across time zones. Just try to pick a good time for Australia, Europe and California.
  • Logistically difficult. It’s hard to schedule a meeting for people who are working on a project at odd hours that might vary every day, depending on when they can fit in their hobby or volunteer job.
  • Slower. If you have more than 2-3 people you need to get together every time you make a decision, things will move slower. I have a project right now that we are kicking off, and the team wants to do everything in meetings. We had several meetings last week and one this week. Asynchronously, we could have had several rounds of discussion by now.
  • Expensive for many people. When I first started at GNOME, it was hard to get some of our board members on a phone call. They couldn’t call international numbers, or couldn’t afford an international call and they didn’t have enough bandwidth for an internet voice call. We ended up using a conference call line from one of our sponsor companies. Now it’s video.
  • Technically difficult. Mozilla does most of our meetings as video meetings. Video is still really hard for many people. Even with my pretty expensive, supposedly high-end internet in a developed country, I often have bandwidth problems when participating in video calls. Now imagine I’m a volunteer from Nigeria. My electricity might not work all the time, much less my high-speed internet.
  • Language. Open source software projects work primarily in English and most of the world does not speak English as their first language. Asynchronous communication gives them a chance to compose their messages, look up words and communicate more effectively.
  • Confusing. Discussions and decisions are often made by a subset of the project and unless the team members are very diligent the decisions and rationale are often not communicated out broadly or effectively. You lose the history behind decisions that way too.

There are some major benefits to synchronous conversation:

  • Relationships. You build relationships faster. It’s much easier to get to know the person.
  • Understanding. Questions and answers happen much faster, especially if the question is hard to formulate or understand. You can quickly go back and forth and get clarity on both sides. They are also really good for difficult topics that might be easily misinterpreted or misunderstood over email where you don’t have tone and body language to help convey the message.
  • Quicker. If you only have 2-3 people, it’s faster to talk to them than to type it all out. Once you have more than 2-3, you lose that advantage.

I think as new technologies, both synchronous and asynchronous, become mainstream, open source software projects will have to figure out how to incorporate them. For example, at Mozilla, we’ve been working on how video can be a part of our projects. Unfortunately, video usually just adds more synchronous conversations that are hard to share widely, but we work on taking notes, sending notes to mailing lists and recording meetings to try to get the relationship and communication benefits of video meetings while maintaining good open source software project practices. I personally would like to see us use more asynchronous tools, as I think video and synchronous tools benefit full-time employees at the expense of volunteer involvement.

How does your open source software project use asynchronous and synchronous communication tools? How’s the balance working for you?

Darrin HeneinRapid Prototyping with Gulp, Framer.js and Sketch: Part One

When I save my Sketch file, my Framer.js prototype updates and reloads instantly.


Rationale

The process of design is often thought of as being entirely generative–people who design things study a particular problem, pull out their sketchbooks, markers and laptops, and produce artifacts which slowly but surely progress towards some end result which then becomes “The Design” of “The Thing”. It is seen as an additive process, whereby each step builds upon the previous, sometimes with changes or modifications which solve issues brought to light by the earlier work.

Early in my career, I would sit at my desk and look with disdain at all the crumpled paper that filled my trash bin, and cherish that one special solution that made the cut. The bin was filled with all of my “bad ideas”. It was overflowing with “failed” attempts before I finally “got it right”. It took me some time, but I’ve slowly learned that the core of my design work is defined not by that shiny mockup or design spec I deliver, but more truly by the myriad of sketches and ideas that got me there. If your waste bin isn’t full by the end of a project, you may want to ask yourself if you’ve spent enough time exploring the solution space.

I really love how Facebook’s Product Design Director Julie Zhuo put it in her essay “Junior Designers vs. Senior Designers”, where she illustrates (in a very non-scientific, but effective way) the difference in process that experience begets. The key delta to me is the singularity of the Junior Designer’s process, compared to the exploratory, branching, subtractive process of the more seasoned designer. Note all the dead ends and occasions where the senior designer just abandons an idea or concept. They clearly have a full trash bin by the end of this journey. Through the process of evaluation and subtraction, a final result is reached. The breadth of ideas explored and abandoned is what defines the process, rather than the evolution of a single idea. It is important to achieve this breadth of ideation to ensure that the solution you commit to was not just a lucky one, but a solution that was vetted against a variety of alternatives.

The unfortunate part of this realization is that often it is just that – an idealized process which faces little conceptual opposition but (in my experience) is often sacrificed in the name of speed or deadlines. Generating multiple sketches is not a huge cost, and is one of the primary reasons so much exploration should take place at that fidelity. Interactions, behavioural design and animations, however, are much more costly to generate, and so the temptation there is to iterate on an idea until it feels right. While this is not inherently a bad thing, wouldn’t it be nice if we could iterate and explore things like animations with the same efficiency we experience with sketching?

As a designer with the ability to write some code, my first goal with any project is to eliminate any inefficiencies – let me focus on the design and not waste time elsewhere. I’m going to walk through a framework I’ve developed during a recent project, but the principle is universal – eliminate or automate the things you can, and maximize the time you spend actually problem-solving and designing.

Designing an Animation Using Framer.js and Sketch

Get the Boilerplate Project on Github

User experience design has become a much more complex field as hardware and software have evolved to allow increasingly fluid, animated, and dynamic interfaces. When designing native applications (especially on mobile platforms such as Android or iOS) there is both an expectation and great value to leverage animation in our UI. Whether to bring attention to an element, educate the user about the hierarchy of the screens in an app, or just to add a moment of delight, animation can be a powerful tool when used correctly. As designers, we must now look beyond Photoshop and static PNG files to define our products, and leverage tools like Keynote or HTML to articulate how these interfaces should behave.

While I prefer to build tools and workflows with open-source software, it seems that the best design tools available are paid applications. Thankfully, Sketch is a fantastic application and easily worth its price.

My current tool of choice is a library called framer.js, which is an open-source framework for prototyping UI. For visual design I use Sketch. I’m going to show you how I combine these two tools to provide me with a fast, automated, and iterative process for designing animations.

I am also aware that Framer Studio exists, as well as Framer Generator. These are both amazing tools. However, I am looking for something as automated and low-friction as possible; both of these tools require some steps between modifying the design and seeing the results. Let’s look at how I achieved a fully automated solution to this problem.

Automating Everything With Gulp

Here is the goal: let me work in my Sketch and/or CoffeeScript file, and just by saving, update my animated prototype with the new code and images without me having to do anything. Lofty, I know, but let’s see how it’s done.

Gulp is a JavaScript-based build tool, the latest in a series of incredible Node-powered command-line build tools.

Some familiarity with build tools such as Gulp or Grunt will help here, but is not mandatory. Also, this will explain the mechanics of the tool, but you can still use this framework without understanding every line!

The gulpfile is just a list of tasks, or commands, that we can run in different orders or at different times. Let’s break down my gulpfile.js:


var gulp        = require('gulp');
var coffee      = require('gulp-coffee');
var gutil       = require('gulp-util');
var watch       = require('gulp-watch');
var sketch      = require('gulp-sketch');
var browserSync = require('browser-sync');

This section at the top just requires (imports) the external libraries I’m going to use. These include Gulp itself, CoffeeScript support (which for me is faster than writing Javascript), a watch utility to run code whenever a file changes, and a plugin which lets me parse and export from Sketch files.
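These plugins all come from npm. The post doesn’t show the install step, so this is my assumption about the setup rather than something shown above, but it would typically be something like:

$ npm install --save-dev gulp gulp-coffee gulp-util gulp-watch gulp-sketch browser-sync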


gulp.task('build', ['copy', 'coffee', 'sketch']);
gulp.task('default', ['build', 'watch']);

Next, I set up the tasks I’d like to be able to run. Notice that the build and default tasks are just sets of other tasks. This lets me maintain a separation of concerns and have tasks that do only one thing.


gulp.task('watch', function(){
  gulp.watch('./src/*.coffee', ['coffee']);
  gulp.watch('./src/*.sketch', ['sketch']);
  browserSync({
    server: {
      baseDir: 'build'
    },
    browser: 'google chrome',
    injectChanges: false,
    files: ['build/**/*.*'],
    notify: false
  });
});

This is the watch task. I tell Gulp to watch my src folder for CoffeeScript files and Sketch files; these are the only source files that define my prototype and will be the ones I change often. When a CoffeeScript or Sketch file changes, the coffee or sketch tasks are run, respectively.

Next, I set up browserSync to push any changed files within the build directory to my browser, which in this case is Chrome. This keeps my prototype in the browser up-to-date without having to hit refresh. Notice I’m also specifying a server: key, which essentially spins up a web server with the files in my build directory.


gulp.task('coffee', function(){
  gulp.src('src/*.coffee')
    .pipe(coffee({bare: true}).on('error', gutil.log))
    .pipe(gulp.dest('build/'))
});

The second major task is coffee. This, as you may have guessed, simply transpiles any *.coffee files in my src folder to JavaScript, and places the resulting JS file in my build folder. Because we are containing our prototype in one app.coffee file, there is no need for concatenation or minification.


gulp.task('sketch', function(){
  gulp.src('src/*.sketch')
    .pipe(sketch({
      export: 'slices',
      format: 'png',
      saveForWeb: true,
      scales: 1.0,
      trimmed: false
    }))
    .pipe(gulp.dest('build/images'))
});

The sketch task is also aptly named, as it is responsible for exporting the slices I have defined in my Sketch file to PNGs, which can then be used in the prototype. In Sketch, you can mark a layer or group as “exportable”, and this task only looks for those assets.


gulp.task('copy', function(){
  // Copy the static HTML shell.
  gulp.src('src/index.html')
    .pipe(gulp.dest('build'));
  // Copy any JS libraries the prototype needs.
  gulp.src('src/lib/**/*.*')
    .pipe(gulp.dest('build/lib'));
  // Copy static images; note the glob must not contain spaces.
  gulp.src('src/images/**/*.{png,jpg,svg}')
    .pipe(gulp.dest('build/images'));
});

The last task is simply housekeeping. It is only run once, when you first start the Gulp process on the command line. It copies any HTML files, JS libraries, or other images I want available to my prototype. This lets me keep everything in my src folder, which is a best practice. As a general rule of thumb for build systems, avoid placing anything in your output directory by hand (in this case, build), as you jeopardize your ability to have repeatable builds.

Recall my default task was defined above, as:


gulp.task('default', ['build', 'watch']);

This means that by running $ gulp in this directory from the command line, my default task is kicked off. It won’t exit without ctrl-C, as watch will run indefinitely. This lets me run this command only once, and get to work.


$ gulp

So where are we now? If everything worked, you should see your prototype available at http://localhost:3000. Saving either app.coffee or app.sketch should trigger the watch we set up, and compile the appropriate assets to our build directory. This change of files in the build directory should trigger BrowserSync, which will then update our prototype in the browser. Voila! We can now work in either of two files (app.coffee or app.sketch), and just by saving them have our shareable, web-based prototype updated in place. And the best part is, I only had to set this up once! I can now use this framework with my next project and immediately begin designing, with a hyper-fast iteration loop to facilitate that work.

The next step is to actually design the animation using Sketch and framer.js, which deserves its own post altogether and will be covered in Part Two of this series.

Follow me on twitter @darrinhenein to be notified when part two is available.

Adam OkoyeTests, Feedback Results, and a New Thank You

As I’ve said in previous posts, my internship primarily revolves around creating a new “thank you” page that will be shown to people who leave negative (or “sad”) feedback on input.mozilla.org.

The current thank you page which the user gets directed to after giving any type of feedback, good or bad, looks like this:

current thank you page

As you can see it’s pretty basic. It does include a link to Support Mozilla (SUMO), which I think is very useful. It also has links to a page that shows you how to download different builds of Firefox (beta, nightly, etc.), a page with a lot of useful information on how to get involved with contributing to Mozilla, and links to Mozilla’s social networking profiles. While the links are interesting in their own right, they don’t do a lot in terms of quickly guiding someone to a solution if they’re having a problem with Firefox. We want to change that in order to hopefully make the page more useful to people who are having trouble using Firefox. The new thank you page will end up being a different Django template that people will be redirected to.

Part of making the new page more useful will be including links to SUMO articles that are related to the feedback that people have given. Last week I wrote the code that redirects a specific segment of people to the new thank you page, as well as a test for that code. The new thank you page will be rolled out via a Waffle flag which I made some weeks ago, which made writing the test a tad more complex. Right now there are a few finishing touches that need to be added to the test in order to close out the bug, but I’m hoping to finish those by the end of Tuesday, the 27th.
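For anyone unfamiliar with Waffle (the django-waffle feature-flag library), gating a view on a flag looks roughly like the minimal sketch below. The flag name, view, and template paths here are hypothetical placeholders for illustration, not Input’s actual code:

import waffle
from django.shortcuts import render

def thank_you(request):
    # 'new-thank-you' is a hypothetical flag name used for illustration.
    if waffle.flag_is_active(request, 'new-thank-you'):
        # Users in the flagged segment get the new Django template.
        return render(request, 'feedback/thanks_new.html')
    return render(request, 'feedback/thanks.html')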

We’ll be using one of the three SUMO API endpoints to take the text from the feedback, search the knowledge base and questions, and return results. To figure out which endpoint to use, I used a script that Will Kahn-Greene wrote to look at feedback taken from Input and results returned via SUMO’s endpoints, and then rank which endpoint’s results were the best. I did that for 120 pieces of feedback.

Tomorrow I’m going to start sketching and mocking up the new thank you page, which I’m really looking forward to. I’ll be using a whiteboard for the sketching, which will be a first for me; I’m hoping that it’ll be easier for me than pencils/pen and paper. I’ll also be able to quickly and easily upload all of the pictures I take of the whiteboard to my computer, which I think will be useful.

Mark SurmanMozilla Participation Plan (draft)

Mozilla needs a more creative and radical approach to participation in order to succeed. That is clear. And, I think, pretty widely agreed upon across Mozilla at this stage. What’s less clear: what practical steps do we take to supercharge participation at Mozilla? And what does this more creative and radical approach to participation look like in the everyday work and lives of people involved in Mozilla?

Mozilla and participation

This post outlines what we’ve done to begin answering these questions and, importantly, it’s a call to action for your involvement. So read on.

Over the past two months, we’ve written a first draft Mozilla Participation Plan. This plan is focused on increasing the impact of participation efforts already underway across Mozilla and on building new methods for involving people in Mozilla’s mission. It also calls for the creation of new infrastructure and ways of working that will help Mozilla scale its participation efforts. Importantly, this plan is meant to amplify, accelerate and complement the many great community-driven initiatives that already exist at Mozilla (e.g. SuMo, MDN, Webmaker, community marketing, etc.) — it’s not a replacement for any of these efforts.

At the core of the plan is the assumption that we need to build a virtuous circle between 1) participation that helps our products and programs succeed and 2) people getting value from participating in Mozilla. Something like this:

Virtuous circle of participation

This is a key point for me: we have to simultaneously pay attention to the value participation brings to our core work and to the value that participating provides to our community. Over the last couple of years, many of our efforts have looked at just one side or the other of this circle. We can only succeed if we’re constantly looking in both directions.

With this in mind, the first steps we will take in 2015 include: 1) investing in the ReMo platform and the success of our regional communities and 2) better connecting our volunteer communities to the goals and needs of product teams. At the same time, we will: 3) start a Task Force, with broad involvement from the community, to identify and test new approaches to participation for Mozilla.

Participation Plan

The belief is that these activities will inject the energy needed to strengthen the virtuous circle reasonably quickly. We’ll know we’re succeeding if a) participation activities are helping teams across Mozilla measurably advance product and program goals and b) volunteers are getting more value out of their participation in Mozilla. These are key metrics we’re looking at for 2015.

Over the longer run, there are bigger ambitions: an approach to participation that is at once massive and diverse, local and global. There will be many more people working effectively and creatively on Mozilla activities than we can imagine today, without the need for centralized control. This will result in a different and better, more diverse and resilient Mozilla — an organization that can consistently have massive positive impact on the web and on people’s lives over the long haul.

Making this happen means involvement and creativity from people across Mozilla and our community. However, a core team is needed to drive this work. In order to get things rolling, we are creating a small set of dedicated Participation Teams:

  1. A newly formed Community Development Team that will focus on strengthening ReMo and tying regional communities into the work of product and program groups.
  2. A participation ‘task force’ that will drive a broad conversation and set of experiments on what new approaches could look like.
  3. And, eventually, a Participation Systems Team will build out new infrastructure and business processes that support these new approaches across the organization.

For the time being, these teams will report to Mitchell and me. We will likely create an executive level position later in the year to lead these teams.

As you’ll see in the plan itself, we’re taking very practical and action oriented steps, while also focusing on and experimenting with longer-term questions. The Community Development Team is working on initiatives that are concrete and can have impact soon. But overall we’re just at the beginning of figuring out ‘radical participation’.

This means there is still a great deal of scope for you to get involved — the plans are still evolving and your insights will improve our process and the plan. We’ll come out with information soon on more structured ways to engage with what we’re calling the ‘task force’. In the meantime, we strongly encourage your ideas right away on ways the participation teams could be working with products and programs. Just comment here on this post or reach out to Mitchell or me.

PS. I promised a follow up on my What is radical participation? post, drawing on comments people made. This is not that. Follow up post on that topic still coming.


Filed under: mozilla, opensource

Mozilla Reps CommunityRep of the month: January 2015

Irvin Chen was an inspiring contributor last month and we want to recognize his great work as a Rep.

Irvin has been organizing the weekly MozTW Lab and also other events to spread Mozilla in the local community space in Taiwan, such as a Spark meetup, a d3.js meetup and a Wikimedia mozcafe.

He also helped to run an l10n sprint for video subtitles, Mozilla links, SUMO and Webmaker on Transifex.

Congratulations Irvin for your awesome work!

Don’t forget to congratulate him on Discourse!

Ben KeroAttempts to source large E-Ink screens for a laptop-like device

One idea that’s been bouncing around in my head for the last few years has been a laptop with an E-Ink display. I would have thought this would be a niche that had been carved out already, but it doesn’t seem that any companies are interested in exploring it.

I use my laptop in some non-traditional environments, such as outdoors in direct sunlight. Almost all laptops are abysmal in a scenario like this. E-Ink screens are a natural response to this requirement. Unlike traditional TFT-LCD screens, E-Ink panels are meant to be viewed with an abundance of natural light. As a human, I too enjoy natural light.

Besides my fantasies of hacking on the beach, these would be very useful to combat the raster burn that seems to be so common among regular computer users. Since TFT-LCDs act as artificial sunlight, they can have very negative side effects on the eyes, and indirectly on the brain. Since E-Ink screens work without a backlight, they are not susceptible to these problems. This has the potential to help me reclaim some of the time that I spend without a device before bedtime for health reasons.

The limitations of E-Ink panels are well known to anybody who has used one. The refresh rate is not nearly as good, the color saturation varies from abysmal to non-existent, and the available sizes are much more limited (smaller) than LCD panels. Despite all these reasons, the panels do have advantages. They do not give the user raster burn like other backlit panels. They are cheap, standardized, and easy to replace. They are also usable in direct sunlight. Until recently they offered competitive DPI compared to laptop panels as well.

As a computer professional many of these downsides of LCD panels concern me. I spend a large amount of my work day staring at the displays. I fear this will have a lasting effect on me and many others who do the same.

The E-Ink manufacturer offerings are surprisingly sparse, with no devices that I can find targeted towards consumers or hobbyists. Traditional LCDs are available over a USB interface, able to be used as external displays on any embedded or workstation system. Interfaces for E-Ink displays are decidedly less advanced. The panels that Amazon sources use an undocumented DTO protocol/connector. The panels that everybody else seems to use also have a specific protocol/connector, but some controllers are available.

The one panel I’ve been able to source to try to integrate into a laptop-like object is PervasiveDisplay’s 9.7″ panel with SPI controller. This would allow a computer to speak SPI to the controller board, which would then translate the calls into operations to manage drawing to the panel. Although this is useful, availability is limited to a few component wholesale sites and Digikey. Likewise it’s not exactly cheap. Although the SPI controller board is only $28, the set of controller and 9.7″ panel is $310. Similar replacement Kindle DX panels cost around $85 elsewhere on the internet.

It would be cheaper to buy an entire Kindle DX, scrap the computer and salvage the panel than to buy the PervasiveDisplays evaluation kit on Digikey. To be fair this is comparing a used consumer device to a niche evaluation kit, so of course the former device is going to be cheaper.

To their credit, they’re also trying to be active in the Open Hardware community. They’ve launched RePaper.org, which is a site advocating freeing ePaper technology from the hands of the few companies and into the hands of open hardware enthusiasts and low-run product manufacturers.

From their site:

We recognize ePaper is a new technology and we’re asking your help in making it better known. Up till now, all industry players have kept the core technologies closed. We want to change this. If the history of the Internet has proven anything, it is that open technologies lead to unbounded innovation and unprecedented value added to the entire economy.

There are some panels listed on SparkFun and Adafruit, although those are limited to 1.44 inch to 2.0 inch displays, which are useless for my use case. Likewise, these are geared towards Arduino compatibility, while I need something that is performant through a (relatively) fast and high-bandwidth interface like the one on my laptop mainboard.

Bunnie/Xobs of the Kosagi Novena open laptop project clued me in to the fact that the iMX6 SoC present in the aforementioned device contains an EPD (Electronic Paper Display) controller. Although the pins on the chip likely aren’t broken out to the board, it gives me hope. My hope is that in the future devices such as the Raspberry Pi, CubieBoard, or other single-board computers will break out the controller to a header on the main board.

I think that by making this literal stockpile of panels available to open hardware enthusiasts, we can empower them to create anything from innovations in the eBook reader market to an entirely new class of device.

Adam LoftingThe week ahead: 26 Jan 2015


I should have started the week by writing this, but I’ll do it quickly now anyway.

My current todo list.
List status: Pretty good. Mostly organized near the top. Less so further down. Fine for now.

Objectives to call out for this week.

  • Bugzilla and Github clean-out / triage
  • Move my home office out to the shed (depending on a few things)

+ some things that carry over from last week

  • Write a daily working process
  • Work out a plan for aligning metrics work with dev team heartbeats
  • Don’t let the immediate todo list get in the way of planning long term processes
  • Invest time in working open
  • Wrestle with multiple todo list systems until they (or I) work together nicely

Mozilla FundraisingShould we put payment provider options directly on the snippet?

While our End of Year (EOY) fundraising campaign is finished, we still have a few updates to share with you. This post documents one of the A/B tests we ran during the campaign. Should we put payment provider options directly … Continue reading

Nick DesaulniersWriting my first technical book chapter

It’s a feeling of immense satisfaction when we complete a major achievement. Being able to say “it’s done” is such a great stress relief. Recently, I completed work on my first publication, a chapter about Emscripten for the upcoming book WebGL Insights to be published by CRC Press in time for SIGGRAPH 2015.

One of the life goals I’ve had for a while is writing a book. A romantic idea it seems to have your ideas transcribed to a medium that will outlast your bones. It’s enamoring to hold books from long dead authors, and see that their ideas are still valid and powerful. Being able to write a book, in my eyes, provides some form of life after death. Though, one could imagine ancestors reading blog posts from long dead relatives via utilities like the Internet Archive’s WayBack Machine.

Writing about highly technical content places an upper limit on the usefulness of the content, and it shows its age quickly. A book I recently ordered was Scott Meyers’ Effective Modern C++. This title strikes me, because what exactly do we consider modern or contemporary? Those adjectives only make sense in a time-limited context. When C++ undergoes another revolution, Scott’s book may become irrelevant, at which point the adjective modern becomes incorrect. Not that I think Scott’s book or my own is time-limited in usefulness; more that technical books’ duration of usefulness is significantly less than philosophical works like 1984 or Brave New World. It’s almost like holding a record in a sport: a feather in one’s cap, until the next best thing comes along and you’re forgotten to time.

Somewhat short of my goal of writing an entire book, I only wrote a single chapter for a book. It’s interesting to see that a lot of graphics programming books seem to follow the format of one author per chapter or at least multiple authors. Such book series as GPU Gems, Shader X, and GPU Pro follow this pattern, which is interesting. After seeing how much work goes into one chapter, I think I’m content with not writing an entire book, though I may revisit that decision later in life.

How did this all get started? I had followed Graham Sellers on Twitter and saw a tweet from him about a call for authors for WebGL Insights. The linked page explicitly listed interest in proposals about Emscripten and asm.js.

At the time, I was headlong into a project helping Disney port Where’s My Water from C++ to JavaScript using Emscripten. I was intimately familiar with Emscripten, having been trained by one of its most prolific contributors, Jukka Jylänki. Also, Emscripten’s creator, Alon Zakai, sat on the other side of the office from me, so I was constantly pestering him about how to do different things with Emscripten. The #emscripten irc channel on irc.mozilla.org is very active, but there’s no substitute for being able to have a second pair of eyes look over your shoulder when something is going wrong.

Knowing Emscripten’s strengths and limitations, seeing interest in the subject I knew a bit about (but wouldn’t consider myself an expert in), and having the goal of writing something to be published in book form, this was my opportunity to seize.

I wrote up a quick proposal with a few figures about why Emscripten was important and how it worked, and sent it off with fingers crossed. Initially, I was overjoyed to learn when my proposal was accepted, but then there was a slow realization that I had a lot of work to do. The editor, Patrick Cozzi, set up a GitHub repo for our additional code and figures, a mailing list, and sent us a chapter template document detailing the process. We had 6 weeks to write the rough draft, then 6 weeks to work with reviewers to get the chapter done. The chapter was written as a Google Doc, so that we could have explicit control over who we shared the document with, and what kinds of editing power they had over the document. I think this approach worked well.

I had most of the content written by week 2. This was surprising to me, because I’m a heavy procrastinator. The only issue was that the number of pages I wrote was double the allowed amount; I was way over page count. I was worried about the amount of content, but told myself to try not to be attached to the content, just as you shouldn’t stay attached to your code.

I took the additional 4 weeks I had left to finish the rough draft to invite some of my friends and coworkers to provide feedback. It’s useful to have a short list of people who have ever offered to help in this regard or owe you one. You’ll also want a diverse team of reviewers that are either close to the subject matter, or approaching it as new information. This allows you to stay technically correct, while not presuming your readers know everything that you do.

The strategy worked out well; some of the content I had initially written about how JavaScript VMs and JITs speculate types was straight up wrong. While it played nicely into the narrative I was weaving, someone more well versed in JavaScript virtual machines would be able to call BS on my work. The reviewers who weren’t as close to subject matter were able to point out when logical progressions did not follow.

Fear of being publicly corrected prevents a lot of people from blogging or contributing to open source. It’s important to not stay attached to your work, especially when you need to make cuts. When push came to shove, I did have difficulty removing sections.

Let’s say you have three sequential sections: A, B, & C. If section A and section B both set up section C, and someone tells you section B has to go, it can be difficult to cut section B because, as the author, you may think it’s really important to include B as the lead-in to C. My recommendation is to sum up the most important idea from section B and add it to the end of section A.

For the last six weeks, the editor, some invited third parties, and other authors reviewed my chapter. It was great that others even followed along and pointed out when I was making assumptions based on specific compiler or browser. Eric Haines even reviewed my chapter! That was definitely a highlight for me.

We used a Google Sheet to keep track of the state of reviews. Reviewers were able to comment on sections of the chapter. What was nice was that you were able to use a comment as a thread, responding directly to a criticism. What didn’t work so well was that once you edited that line, the comment and thus the thread was lost.

Once everything was done, we zipped up the assets to be used as figures, submitted bios, and wrote a tips and tricks section. Now, it’s just a long waiting game until the book is published.

As far as dealing with the publisher, I didn’t have much interaction. Since the book was assembled by a dedicated editor, Patrick did most of the leg work. I only asked that what royalties I would receive be donated to Mozilla, which the publisher said would be too small (est $250) to be worth the paperwork. It would be against my advice if you were thinking of writing a technical book for the sole reason of monetary benefit. I’m excited to be receiving a hard cover copy of the book when it’s published. I’ll also have to see if I can find my way to SIGGRAPH this year; I’d love to meet my fellow authors in person and potential readers. Just seeing the list of authors was really a who’s-who of folks doing cool WebGL stuff.

If you’re interested in learning more about working with Emscripten, asm.js, and WebGL, I suggest you pick up a copy of WebGL Insights in August when it’s published. A big thank you to my reviewers: Eric Haines, Havi Hoffman, Jukka Jylänki, Chris Mills, Traian Stanev, Luke Wagner, and Alon Zakai.

So that was a little bit about my first experience with authorship. I’d be happy to follow up with any further questions you might have for me. Let me know in the comments below, on Twitter, HN, or wherever and I’ll probably find it!

Gregory SzorcAutomatic Python Static Analysis on MozReview

A bunch of us were in Toronto last week hacking on MozReview.

One of the cool things we did was deploy a bot for performing Python static analysis. If you submit some .py files to MozReview, the bot should leave a review. If it finds violations (it uses flake8 internally), it will open an issue for each violation. It also leaves a comment that should hopefully give enough detail on how to fix the problem.

While we haven't done much in the way of performance optimizations, the bot typically submits results less than 10 seconds after the review is posted! So, a human should never be reviewing Python that the bot hasn't seen. This means you can stop thinking about style nits and start thinking about what the code does.

This bot should be considered an alpha feature. The code for the bot isn't even checked in yet. We're running the bot against production to get a feel for how it behaves. If things don't go well, we'll turn it off until the problems are fixed.

We'd like to eventually deploy C++, JavaScript, etc bots. Python won out because it was the easiest to integrate (it has sane and efficient tooling that is compatible with Mozilla's code bases - most existing JavaScript tools won't work with Gecko-flavored JavaScript, sadly).

I'd also like to eventually make it easier to locally run the same static analysis we run in MozReview. Addressing problems locally before pushing is a no-brainer since it avoids needless context switching from other people and is thus better for productivity. This will come in time.
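Until that lands, a rough local approximation is straightforward, since the bot uses flake8 internally and flake8 is a standard tool (the bot's exact configuration may differ, and the file path below is just a placeholder):

$ pip install flake8
$ flake8 path/to/your_changed_file.py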

Report issues in #mozreview or in the Developer Services :: MozReview Bugzilla component.

Gregory SzorcEnd to End Testing with Docker

I've written an extensive testing framework for Mozilla's version control tools. Despite it being a little rough around the edges, I'm a bit proud of it.

When you run tests for MozReview, Mozilla's heavily modified Review Board code review tool, the following things happen:

  • A MySQL server is started in a Docker container.
  • A Bugzilla server (running the same code as bugzilla.mozilla.org) is started on an Apache httpd server with mod_perl inside a Docker container.
  • A RabbitMQ server mimicking pulse.mozilla.org is started in a Docker container.
  • A Review Board Django development server is started.
  • A Mercurial HTTP server is started.

In the future, we'll likely also need to add support for various other services to support MozReview and other components of version control tools:

  • The Autoland HTTP service will be started in a Docker container, along with any other requirements it may have.
  • An IRC server will be started in a Docker container.
  • Zookeeper and Kafka will be started on multiple Docker containers.

The entire setup is pretty cool. You have actual services running on your local machine. Mike Conley and Steven MacLeod even did some pair coding of MozReview while on a plane last week. I think it's pretty cool this is even possible.

There is very little mocking in the tests. If we need an external service, we try to spin up an instance inside a local container. This way, we can't have unexpected test successes or failures due to bugs in mocking. We have very high confidence that if something works against local containers, it will work in production.

I currently have each test file owning its own set of Docker containers and processes. This way, we get full test isolation and can run tests concurrently without race conditions. This drastically reduces overall test execution time and makes individual tests easier to reason about.

As cool as the test setup is, there's a bunch I wish were better.

Spinning up and shutting down all those containers and processes takes a lot of time. We're currently sitting around 8s startup time and 2s shutdown time. 10s overhead per test is unacceptable. When I make a one-line change, I want the tests to be instantaneous. 10s is too long for me to sit idly by. Unfortunately, I've already gone to great pains to make test overhead as short as possible. Fig wasn't good enough for me for various reasons. I've reimplemented my own orchestration directly on top of the docker-py package to achieve some significant performance wins. Using concurrent.futures to perform operations against multiple containers concurrently was a big win. Bootstrapping containers (running their first-run entrypoint scripts and committing the result to be used later by tests) was a bigger win (first run of Bugzilla is 20-25 seconds).
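The harness's own orchestration code isn't shown here, so as a purely illustrative sketch, here is roughly what concurrent container startup looks like. Note this uses the modern docker SDK for Python rather than the docker-py API of the time, and the image names are made up, not the harness's actual images:

from concurrent.futures import ThreadPoolExecutor

import docker  # pip install docker

client = docker.from_env()

# Illustrative images only; the real harness builds its own customized ones.
SERVICES = ['mysql:5.6', 'rabbitmq:3', 'httpd:2.4']

def start(image):
    # detach=True returns immediately with a handle to the running container.
    return client.containers.run(image, detach=True)

# Starting the services concurrently rather than serially trims startup time.
with ThreadPoolExecutor() as pool:
    containers = list(pool.map(start, SERVICES))

# ... run a test against the services, then tear everything down ...
for container in containers:
    container.remove(force=True)  # force=True stops and deletes in one step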

I'm at the point of optimizing startup where the longest pole is the initialization of the services inside Docker containers themselves. MySQL takes a few seconds to start accepting connections. Apache + Bugzilla has a semi-involved initialization process. RabbitMQ takes about 4 seconds to initialize. There are some cascading dependencies in there, so the majority of startup time is waiting for processes to finish their startup routine.

Another concern with running all these containers is memory usage. When you start running 6+ instances of MySQL + Apache, RabbitMQ, + ..., it becomes really easy to exhaust system memory, incur swapping, and have performance fall off a cliff. I've spent a non-trivial amount of time figuring out the minimal amount of memory I can make services consume while still not sacrificing too much performance.

It is quite an experience having the problem of trying to minimize resource usage and startup time for various applications. Searching the internet will happily give you recommended settings for applications. You can find out how to make a service start in 10s instead of 60s or consume 100 MB of RSS instead of 1 GB. But what the internet won't tell you is how to make the service start in 2s instead of 3s or consume as little memory as possible. I reckon I'm past the point of diminishing returns where most people don't care about any further performance wins. But, because of how I'm using containers for end-to-end testing and I have a surplus of short-lived containers, it is clearly a problem I need to solve.

I might be able to squeeze out a few more seconds of reduction by further optimizing startup and shutdown. But, I doubt I'll reduce things below 5s. If you ask me, that's still not good enough. I want no more than 2s overhead per test. And I don't think I'm going to get that unless I start utilizing containers across multiple tests. And I really don't want to do that because it sacrifices test purity. Engineering is full of trade-offs.

Another takeaway from implementing this test harness is that the pre-built Docker images available from the Docker Registry almost always become useless. I eventually make a customization that can't be shoehorned into the readily-available image and I find myself having to reinvent the wheel. I'm not a fan of the download and run a binary model, especially given Docker's less-than-stellar history on the security and cryptography fronts (I'll trust Linux distributions to get package distribution right, but I'm not going to be trusting the Docker Registry quite yet), so it's not a huge loss. I'm at the point where I've lost faith in Docker Registry images and my default position is to implement my own builder. Containers are supposed to do one thing, so it usually isn't that difficult to roll my own images.
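As a purely illustrative sketch of how small such a builder can be (this is not one of the harness's real images), a self-built RabbitMQ image needs only a few lines on top of a distribution base whose package signing is already trusted:

FROM debian:wheezy
# Install the service from signed distribution packages rather than
# pulling an opaque pre-built image from the registry.
RUN apt-get update && apt-get install -y rabbitmq-server
EXPOSE 5672
CMD ["rabbitmq-server"]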

There's a lot to love about Docker and containerized test execution. But I feel like I'm venturing into new territory and solving problems like startup time minimization that I shouldn't really have to be solving. I think I can justify it given the increased accuracy from the tests and the increased confidence that brings. I just wish the cost weren't so high. Hopefully as others start leaning on containers and Docker more for test execution, people start figuring out how to make some of these problems disappear.

Karl DubostWorking With Transparency Habits. Something To Learn.

I posted the following text as a comment three days ago on Mark Surman's blog post on transparency habits, but it is still in the moderation queue. So instead of taking the chance of losing it, I'm reposting that comment here. It might need to be developed further in a follow-up post.

Mark says:

I encourage everyone at Mozilla to ask themselves: how can we all build up our transparency habits in 2015? If you already have good habits, how can you help others? If, like me, you’re a bit rusty, what small things can you do to make your work more open?

The mistake we often make with transparency is that we think it is obvious to most people. But working in a transparent way requires a lot of education and mentoring. It’s one thing we should try to improve at Mozilla when onboarding new employees: teaching what it means to be transparent. I’m not even sure everyone has the same notion of what transparency means in the first place.

For example, too many times, I receive emails in private. That’s unfortunate because it creates information silos and it becomes a lot harder to open up a conversation which started in private. Because I was kind of tired of this, I created a set of slides and explanation for learning how to work with emails. Available in French and English.

Some people are afraid of working in the open for many reasons. They may come from a company where secrecy was very strong, or they had a bad experience by being too open. It takes then time to re-learn the benefits of working in the open.

So because you asked an open question :) Some items.

  • Each time you send an email, it probably belongs to a project. Send the email to the person (To: field) and always add the relevant mailing list in copy (Cc: field). You’ll get an archive, a URL pointer for the future, etc.
  • Each time you are explaining something (such as a process, a how-to, etc.) to someone, make it a blog post, then send the URL to that someone. It will benefit more people in the long term.
  • Each time you have a meeting, choose one scribe to write down what is said, and publish the minutes of the meeting. There are many techniques associated with this. See for example the record of the Web Compat team meeting on January 13, 2015 and the index of all meetings (I could explain how we manage that in a blog post).
  • Each time you have a F2F meeting with someone or a group, take notes and publish these notes online at a stable URI. It will help other people to participate.

Let's learn together how to work in a transparent way or in the open.

Otsukare.

Stormy PetersWorking in the open is hard

A recent conversation on a Mozilla mailing list about whether IRC channels should be archived or not shows what a commitment it is to remain open. It’s hard work and not always comfortable to work completely in the open.

Most of us in open source are used to “working in the open”. Everything we send to a mailing list is not only public, but archived and searchable via Google or Yahoo! forever. Five years later, you can go back and see how I did my job as Executive Director of GNOME. Not only did I blog about it, but many of the conversations I had were on open mailing lists and irc channels.

There are many benefits to being open.

Being open means that anybody can participate, so you get much more help and diversity.

Being open means that you are transparent and accountable.

Being open means you have history. You can also go back and see exactly why a decision was made, what the pros and cons were and see if any of the circumstances have changed.

But it’s not easy.

Being open means that when you have a disagreement, the world can follow along. We warn teenagers about not putting too much on social media, but can you imagine every disagreement you’ve ever had at work being visible to all your future employers? (And spouses!)

But those of us working in open source have made a commitment to be open and we try hard.

Many of us get used to working in the open, and we think it feels comfortable and we think we’re good at it. And then something will remind us that it is a lot of work and it’s not always comfortable. Like a conversation about whether IRC conversations should be archived or not. IRC conversations are public but not always archived. So people treat them as a place where anyone can drop in but the conversation is bounded in time and limited to the people you can see in the room. The fact that these informal conversations might be archived and read by anyone and everyone later means that you now have to think a lot more about what you are saying. It’s less of a chat and more of a carefully weighed conversation.

The fact that people steeped in open source are having a heated debate about whether Mozilla IRC channels should be archived or not shows that it’s not easy being open. It takes a lot of work and a lot of commitment.

 

Doug BelshawWeeknote 04/2015

This week I’ve been:

I wasn’t at BETT this week. It’s a great place to meet people I haven’t seen for a while and last year I even gave a couple of presentations and a masterclass. However, this time around, my son’s birthday and party gave me a convenient excuse to miss it.

Next week I’m working from home as usual. In fact, I don’t think I’m away again until our family holiday to Dubai in February half-term!

Image CC BY Dave Fayram

Mike HommeyExplicit rename/copy tracking vs. detection after the fact

One of the main differences in how mercurial and git track files is that mercurial does rename and copy tracking and git doesn’t. So in the case of mercurial, users are expected to explicitly rename or copy the files through the mercurial command line so that mercurial knows what happened. Git simply doesn’t care, and will try to detect after the fact when you ask it to.

The consequence is that my git-remote-hg, being currently a limited prototype, doesn’t make the effort to inform mercurial of renames or copies.

This week, Ehsan, as a user of that tool, pushed some file moves, and subsequently opened an issue, because some people didn’t like it.

It was a conscious choice on my part to make git-remote-hg public without rename/copy detection, because file renames and copies don't happen often, and mercurial users can just as easily fail to register them.

In fact, they haven’t all been registered for as long as Mozilla has been using mercurial (see below, I didn’t actually know I was so spot on when I wrote this sentence), and people haven’t been pointed at for using broken tools (and I’ll skip the actual language that was used when talking about Ehsan’s push).

And since I’d rather not make unsubstantiated claims, I dug in all of mozilla-central and related repositories (inbound, b2g-inbound, fx-team, aurora, beta, release, esr*) and here is what I found, only counting files that have been copied or renamed without being further modified (so, using git diff-tree -r -C100%, shown in the example after the list below, and eliminating empty files), and correlating with the mercurial rename/copy metadata:

  • There have been 45069 file renames or copies in 1546 changesets.
  • Mercurial doesn’t know 5482 (12.1%) of them, from 419 (27.1%) changesets.
  • 72 of those changesets were backouts.
  • 19 of those backouts were of changesets that didn’t have rename/copy information, so 53 of those backouts didn’t actually undo what mercurial knew of those backed out changesets.
  • Those 419 changesets were from 144 distinct authors (assuming I didn’t miss some duplicates from people who changed email).
  • Fun fact: the person with the colorful language, who doesn’t like git-remote-hg, is among them. I am too, and that was with mercurial.
  • The most recent occurrence of renames/copies unknown to mercurial is already not Ehsan’s anymore.
  • The oldest occurrence is in the 19th (!) mercurial changeset.

And that’s not counting all the copies and renames with additional modifications.
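For reference, the detection itself needs no Mercurial metadata at all; per converted changeset it boils down to a command like the one below, where <commit> is a placeholder for any changeset converted to git. Lines whose status column reads R100 or C100 are exact renames and copies.

$ git diff-tree -r -C100% <commit>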

Fun fact, this is what I found in the Mercurial mercurial repository:

  • There have been 255 file renames or copies in 41 changesets.
  • Mercurial doesn’t know about 38 (14.9%) of them, from 4 (9.7%) changesets.
  • One of those changesets was from Matt Mackall himself (creator and lead developer of mercurial).

There are 1061 files in mercurial, versus 115845 in mozilla-central, so there is less occasion for renames/copies there; still, even they forget to use “hg move” and break their history as a result.

I think this shows how requiring explicit user input simply doesn’t pan out.

Meanwhile, I have prototype copy/rename detection for git-remote-hg working, but I need to tweak it a little bit more before publishing.

Mozilla Release Management TeamFirefox 36 beta2 to beta3

In this beta, as in beta 2, we have bug fixes for MSE. We have also fixed a few bugs found with the release of Firefox 35. As usual, beta 3 is a desktop-only version.

  • 43 changesets
  • 118 files changed
  • 1261 insertions
  • 476 deletions

Extension   Occurrences
cpp         17
js          13
h           7
ini         6
html        6
jsm         3
xml         1
webidl      1
txt         1
svg         1
sjs         1
py          1
mpd         1
list        1

Module      Occurrences
dom         20
browser     18
toolkit     6
dom         4
dom         4
testing     3
gfx         3
netwerk     2
layout      2
mobile      1
media       1
docshell    1
build       1

List of changesets:

Steven Michaud - Bug 1118615 - Flash hangs in HiDPI mode on OS X running peopleroulette app. r=mstange a=sledru - 430bff48811d
Mark Goodwin - Bug 1096197 - Ensure SSL Error reports work when there is no failed certificate chain. r=keeler, a=sledru - a7f164f7c32d
Seth Fowler - Bug 1121802 - Only add #-moz-resolution to favicon URIs that end in ".ico". r=Unfocused, a=sledru - d00b4a85897c
David Major - Bug 1122367 - Null check the result of D2DFactory(). r=Bas, a=sledru - 57cb206153af
Michael Comella - Bug 1116910 - Add new share icons in the action bar for tablet. r=capella, a=sledru - f6b2623900f1
Jonathan Watt - Bug 1122578 - Part 1: Make DrawTargetCG::StrokeRect stroke from the same corner and in the same direction as older OS X and other Moz2D backends. r=Bas, a=sledru - d3c92eebdf3e
Jonathan Watt - Bug 1122578 - Part 2: Test start point and direction of dashed stroking on SVG rect. r=longsonr, a=sledru - 7850d99485e6
Mats Palmgren - Bug 1091709 - Make Transform() do calculations using gfxFloat (double) to avoid losing precision. r=mattwoodrow, a=sledru - 0b22d12d0736
Hiroyuki Ikezoe - Bug 1118749 - Need promiseAsyncUpdates() before frecencyForUrl. r=mak, a=test-only - 4501fcac9e0b
Jordan Lund - Bug 1121599 - remove android-api-9-constrained and android-api-10 mozconfigs from all trees, r=rnewman a=npotb DONTBUILD - 787968dadb44
Tooru Fujisawa - Bug 1115616 - Commit composition string forcibly when search suggestion list is clicked. r=gavin,adw a=sylvestre - 2d629038c57b
Ryan VanderMeulen - Bug 1075573 - Disable test_videocontrols_standalone.html on Android 2.3 due to frequent failures. a=test-only - a666c5c8d0ba
Ryan VanderMeulen - Bug 1078267 - Skip netwerk/test/mochitests/ on Android due to frequent failures. a=test-only - 0c36034999bb
Christoph Kerschbaumer - Bug 1121857 - CSP: document.baseURI should not get blocked if baseURI is null. r=sstamm, a=sledru - a9b183f77f8d
Christoph Kerschbaumer - Bug 1122445 - CSP: don't normalize path for CSP checks. r=sstamm, a=sledru - 7f32601dd394
Christoph Kerschbaumer - Bug 1122445 - CSP: don't normalize path for CSP checks - test updates. r=sstamm, a=sledru - a41c84bee024
Karl Tomlinson - Bug 1085247 enable remaining mediasource-duration subtests a=sledru - d918f7ea93fe
Sotaro Ikeda - Bug 1121658 - Remove DestroyDecodedStream() from MediaDecoder::SetDormantIfNecessary() r=roc a=sledru - 731843c58e0d
Jean-Yves Avenard - Bug 1123189: Use sourceended instead of loadeddata to check durationchanged count r=karlt a=sledru - 09df37258699
Karl Tomlinson - Bug 1123189 Queue "durationchange" instead of dispatching synchronously r=cpearce a=sledru - 677c75e4d519
Jean-Yves Avenard - Bug 1123269: Better fix for Bug 1121876 r=cpearce a=sledru - 56b7a3953db2
Jean-Yves Avenard - Bug 1123054: Don't check VDA reference count. r=rillian a=sledru - a48f8c55a98c
Andreas Pehrson - Bug 1106963 - Resync media stream clock before destroying decoded stream. r=roc, a=sledru - cdffc642c9b9
Ben Turner - Bug 1113340 - Make sure blob urls can load same-prcess PBackground blobs. r=khuey, a=sledru - c16ed656a43b
Paul Adenot - Bug 1113925 - Don't return null in AudioContext.decodeAudioData. r=bz, a=sledru - 46ece3ef808e
Masatoshi Kimura - Bug 1112399 - Treat NS_ERROR_NET_INTERRUPT and NS_ERROR_NET_RESET as SSL errors on https URLs. r=bz, a=sledru - ba67c22c1427
Hector Zhao - Bug 1035400 - 'restart to update' button not working. r=rstrong, a=sledru - 8a2a86c11f7c
Ryan VanderMeulen - Backed out the code changes from changeset c16ed656a43b (Bug 1113340) since Bug 701634 didn't land on Gecko 36. - e8effa80da5b
Ben Turner - Bug 1120336 - Land the test-only changes on beta. r=khuey, a=test-only - a6e5dedbd0c0
Sami Jaktholm - Bug 1001821 - Wait for eyedropper to be destroyed before ending tests and checking for leaks. r=pbrosset, a=test-only - 4036f72a0b10
Mark Hammond - Bug 1117979 - Fix orange by not relying on DNS lookup failure in the 'error' test. r=gavin, a=test-only - e7d732bf6091
Honza Bambas - Bug 1123732 - Null-check uri before trying to use it. r=mcmanus, a=sledru - 3096b7b44265
Florian Quèze - Bug 1103692 - ReferenceError: bundle is not defined in webrtcUI.jsm. r=felipe, a=sledru - 9b565733c680
Bobby Holley - Bug 1120266 - Factor some machinery out of test_BufferingWait into mediasource.js and make it Promise-friendly. r=jya, a=sledru - ff1b74ec9f19
Jean-Yves Avenard - Bug 1120266 - Add fragmented mp4 sample videos. r=cajbir, a=sledru - 53f55825252a
Paul Adenot - Bug 698079 - When using the WASAPI backend, always output audio to the default audio device. r=kinetik, a=sledru - 20f7d44346da
Paul Adenot - Bug 698079 - Synthetize the clock when using WASAPI to prevent A/V desynchronization issues when switching the default audio output device. r=kinetik, a=sledru - 0411d20465b4
Matthew Noorenberghe - Bug 1079554 - Ignore most UITour messages from pages that aren't visible. r=Unfocused, a=sledru - e35e98044772
Markus Stange - Bug 1106906 - Always return false from nsFocusManager::IsParentActivated in the parent process. r=smaug, a=sledru - 0d51214654ad
Bobby Holley - Bug 1121148 - Move constants that we should not be using directly into a namespace. r=cpearce, a=sledru - 1237ddff18be
Bobby Holley - Bug 1121148 - Make QUICK_BUFFERING_LOW_DATA_USECS a member variable and adjust it appropriately. r=cpearce, a=sledru - 62f7b8ea571f
Chris AtLee - Bug 1113606 - Use app-specific API keys. r=mshal, r=nalexander, a=gavin - b3836e49ae7f
Ryan VanderMeulen - Bug 1121148 - Add missing detail:: to fix bustage. a=bustage - b3792d13df24

Henri Sivonen: If You Want Software Freedom on Phones, You Should Work on Firefox OS, Custom Hardware and Web App Self-Hostability

TL;DR

To achieve full-stack Software Freedom on mobile phones, I think it makes sense to

  • Focus on Firefox OS, which is already Free Software above the driver layer, instead of focusing on removing proprietary stuff from Android, whose functionality is increasingly moving into proprietary components such as Google Play Services.
  • Commission custom hardware whose components have been chosen such that the foremost goal is achieving Software Freedom on the driver layer.
  • Develop self-hostable Free Software Web apps for the on-phone software to connect to, and a system that makes installing them on a home server as easy as installing desktop or mobile apps, and connecting the home server to the Internet as easy as connecting a desktop.

Inspiration

Back in August, I listened to an episode of the Free as in Freedom oggcast that included a FOSDEM 2013 talk by Aaron Williamson titled “Why the free software phone doesn’t exist”. The talk actually didn’t include much discussion of the driver situation and instead devoted a lot of time to talking about services that phones connect to and the interaction of the DMCA with locked bootloaders.

Also, I stumbled upon the Indie Phone project. More on that later.

Software Above the Driver Layer: Firefox OS—Not Replicant

Looking at existing systems, it seems that software close to the hardware on mobile phones tends to be more proprietary than the rest of the operating system. Things like baseband software, GPU drivers, touch sensor drivers and drivers for hardware-accelerated video decoding (and video DRM) tend to be proprietary even when the Linux kernel is used and substantial parts of other system software are Free Software. Moreover, most of the mobile operating systems built on the Linux kernel are actually these days built on the Android flavor of the Linux kernel in order to be able to use drivers developed for Android. Therefore, the driver situation is the same for many of the different mobile operating systems. For these reasons, I think it makes sense to separate the discussion of Software Freedom on the driver layer (code closest to hardware) and the rest of the operating system.

Why Not Replicant?

For software above the driver layer, there seems to be something of a default assumption in the Free Software circles that Replicant is the answer for achieving Software Freedom on phones. This perception of mine probably comes from Replicant being the contender closest to the Free Software Foundation with the FSF having done fundraising and PR for Replicant.

I think betting on Replicant is not a good strategy for the Free Software community if the goal is to deliver Software Freedom on phones to many people (and, therefore, have more of a positive impact on society) instead of just making sure that a Free phone OS exists in a niche somewhere. (I acknowledge that hardline FSF types keep saying negative things about projects that e.g. choose permissive licenses in order to prioritize popularity over copyleft, but the “Free Software, Free Society” thing only works if many people actually run Free Software on the end-user devices, so in that sense, I think it makes sense to think of what has a chance to be run by many people instead of just the existence of a Free phone OS.)

Android is often called an Open Source system, but when someone buys a typical Android phone, they get a system with substantial proprietary parts. Initially, the main proprietary parts above the driver layer were the Google applications (Gmail, Maps, etc.) but the non-app, non-driver parts of the system were developed as Open Source / Free Software in the Android Open Source Project (AOSP). Over time, as Google has realized that OEMs don’t care to deliver updates for the base system, Google has moved more and more stuff to the proprietary Google application package. Some apps that were originally developed as part of AOSP no longer are. Also, Google has introduced Google Play Services, which is a set of proprietary APIs that keeps updating even when the base system doesn’t.

Replicant takes Android and omits the proprietary parts. This means that many of the applications that users expect to see on an Android phone aren’t actually part of Replicant. But more importantly, Replicant doesn’t provide the same APIs as a normal Android system does, because Google Play Services are missing. As more and more applications start relying on Google Play Services, Replicant and Android-as-usually-shipped diverge as development platforms. If Replicant was supposed to benefit from the network effects of being compatible with Android, these benefits will be realized less and less over time.

Also, Android isn’t developed in the open. The developers of Replicant don’t really get to contribute to the next version of AOSP. Instead, Google develops something and then periodically throws a bundle of code over the wall. Therefore, Replicant has the choice of either having no say over how the platform evolves or diverging even more from Android.

Instead of the evolution of the platform being controlled behind closed doors and the Free Software community having to work with a subset of the mass-market version of the platform, I think it would be healthier to focus efforts on a platform that doesn’t require removing or replacing (non-driver) system components as the first step and whose development happens in public repositories where the Free Software community can contribute to the evolution of the platform.

What Else Is There?

Let’s look at the options. What at least somewhat-Free mobile operating systems are there?

First, there’s software from the OpenMoko era. However, the systems have no appeal to people who don’t care that much about the Free Software aspect. I think it would be strategically wise for the Free Software community to work on a system that has appeal beyond the Free Software community in order to be able to benefit from contributions and network effects beyond the core Free Software community.

Open webOS is not on an upwards trajectory on phones (despite a watch announcement at CES). (Addition 2015-01-24: There exists a project called LuneOS to port Open webOS to Nexus phones, though.) Tizen (on phones) has been delayed again and again and became available just a few days ago, so it’s not (at least quite yet) a system with demonstrated appeal (on phones) beyond the Free Software community, and it seems that Tizen has substantial non-Free parts. Jolla’s Sailfish OS is actually shipping on a real phone, but Jolla keeps some components proprietary, so the platform fails the criterion of not having to remove or replace (non-driver) system components as the first step (see Nemo). I don’t actually know if Ubuntu Touch has proprietary non-driver system components. However, it does appear to have central components to which you cannot contribute on an “inbound=outbound” licensing basis, because you have to sign a CLA that gives Canonical rights to your code beyond the Free Software license of the project as a condition of your patch getting accepted. In any case, Ubuntu Touch is not shipping yet on real phones, so it is not yet demonstrably a system that has appeal beyond the Free Software community.

Firefox OS, in contrast, is already shipping on multiple real phones (albeit maybe not in your country) demonstrating appeal beyond the Free Software community. Also, Mozilla’s leverage is the control of the trademark—not keeping some key Mozilla-developed code proprietary. The (non-trademark) licensing of the project works on the “inbound=outbound” basis. And, importantly, the development repositories are visible and open to contribution in real time as opposed to code getting thrown over the wall from time to time. Sure, there is code landing such that the motivation of the changes is confidential or obscured with codenames, but if you want to contribute based on your motivations, you can work on the same repositories that the developers who see the confidential requirements work on.

As far as I can tell, Firefox OS has the best combination of not being vaporware, having appeal beyond the Free Software community and being run closest to the manner a Free Software project is supposed to be run. So if you want to advance Software Freedom on mobile phones, I think it makes the most sense to put your effort into Firefox OS.

Software Freedom on the Driver Layer: Custom Hardware Needed

Replicant, Firefox OS, Ubuntu Touch, Sailfish OS and Open webOS all use an Android-flavored Linux kernel in order to be able to benefit from the driver availability for Android. Therefore, the considerations for achieving Software Freedom on the driver layer apply equally to all these systems. The foremost problems are controlling the various radios—the GSM/UMTS radio in particular—and the GPU.

If you consider the Firefox OS reference device for 2014 and 2015, Flame, you’ll notice that Mozilla doesn’t have the freedom to deliver updates to all software on the device. Firefox OS is split into three layers: Gonk, Gecko and Gaia. Gonk contains the kernel, drivers and low-level helper processes. Gecko is the browser engine and runs on top of Gonk. Gaia is the system UI and set of base apps running on top of Gecko. You can get Gecko and Gaia builds from Mozilla, but you have to get Gonk builds from the device vendor.

If Software Freedom extended to the whole stack—including drivers—Mozilla (or anyone else) could give you Gonk builds, too. That is, to get full-stack Software Freedom with Firefox OS, the challenge is to come up with hardware whose driver situation allows for a Free-as-in-Freedom Gonk.

As noted, Flame is not that kind of hardware. When this is lamented, it is typically pointed out that “not even the mighty Google” can get the vendors of all the hardware components going into the Nexus devices to provide Free Software drivers and, therefore, a Free Gonk is unrealistic at this point in time.

That observation is correct, but I think it lacks some subtlety. Both Flame and the Nexus devices are reference devices on which the software platform is developed with the assumption that the software platform will then be shipped on other devices that are sufficiently similar that the reference devices can indeed serve as reference. This means that the hardware on the reference devices needs to be reasonably close to the kind of hardware that is going to be available with mass-market price/performance/battery life/weight/size characteristics. Similarity to mass-market hardware trumps Free Software driver availability for these reference devices. (Disclaimer: I don’t participate in the specification of these reference devices, so this paragraph is my educated guess about what’s going on—not any sort of inside knowledge.)

I theorize that building a phone that puts the availability of Free Software drivers first is not impossible but would involve sacrificing on the current mass-market price/performance/battery life/weight/size characteristics and be different enough from the dominant mass-market designs not to make sense as a reference device. Let’s consider how one might go about designing such a phone.

In the radio case, there is proprietary software running on a baseband processor to control the GSM/UMTS radio, and some regulatory authorities, such as the FCC, require this software to be certified for regulatory purposes. As a result, the chances of gaining Software Freedom relative to this radio control software in the near term seem slim. From the privacy perspective, it is problematic that this mystery software can have DMA access to the memory of the application processor, i.e. the processor that runs the Linux kernel and the apps. Addition 2015-01-24: There seems to exist a project, OsmocomBB, that is trying to produce GSM-level baseband software as Free Software. (Unlike the project page, the Git repository shows recent signs of activity.) For smart phones, you really need 3G, though.

Technically, data transfer between the application processor and various radios does not need to be fast enough to require DMA access or other low-level coupling. Indeed, for desktop computers, you can get UMTS, Wi-Fi, Bluetooth and GPS radios as external USB devices. It should be possible to document the serial protocol these devices use over USB such that Free drivers can be written on the Linux side while the proprietary radio control software is embedded on the USB device.

This would solve the problem of kernel coupling with non-free drivers in a way that hinders the exercise of Software Freedom relative to the kernel. But wouldn’t the radio control software embedded on the USB device still be non-free? Well, yes it would, but in the current regulatory environment it’s unrealistic to fix that. Moreover, if the software on the USB devices is truly embedded to the point where no one can update it, the Free Software Foundation considers the bundle of hardware and un-updatable software running on the hardware as “hardware” as a whole for Software Freedom purposes. So even if you can’t get the freedom to modify the radio control software, if you make sure that no one can modify it and put it behind a well-defined serial interface, you can both solve the problem of non-free drivers holding back Software Freedom relative to the kernel and get the ideological blessing.

So I think the way to solve the radio side of the problem is to license circuit designs for UMTS, Wi-Fi, Bluetooth and GPS USB dongles and build those devices as hard-wired USB devices onto the main board of the phone inside the phone’s enclosure. (Building hard-wired USB devices into the device enclosure is a common practice in the case of laptops.) This would likely result in something more expensive, more battery-draining, heavier and larger than the usual more integrated designs. How much more expensive, heavier, etc.? I don’t know. I hope within bounds that would be acceptable for people willing to pay some extra and accept some extra weight and somewhat worse battery life and performance in order to get Software Freedom.

As for the GPU, there are a couple of Free drivers: There’s Freedreno for Adreno GPUs. There is the Lima driver for Mali-200 and Mali-400, but a Replicant developer says it’s not good enough yet. Intel has Free drivers for their desktop GPUs and Intel is trying to compete in the mobile space so, who knows, maybe in the reasonably near future Intel manages to integrate GPU design of their own (with a Free driver) with one of their mobile CPUs. Correction 2015-01-24: It appears that after I initially wrote that sentence in August 2014 but before I got around to publishing in January 2015, Intel announced such a CPU/GPU combination.

The current Replicant way to address the GPU driver situation is not to have hardware-accelerated OpenGL ES. I think that’s just not going to be good enough. For Firefox OS (or Ubuntu Touch or Sailfish OS or a more recent version of Android) to work reasonably, you have to have hardware-accelerated OpenGL ES. So I think the hardware design of a Free Software phone needs to grow around a mobile GPU that has a Free driver. Maybe that means using a non-phone (to put radios behind USB) QUALCOMM SoC with Adreno. Maybe that means pushing Lima to good enough a state and then licensing Mali-200 or Mali-400. Maybe that means using x86 and waiting for Intel to come up with a mobile GPU. But it seems clear that the GPU is the big constraint and the CPU choice will have to follow from the GPU solution.

For the encumbered codecs that everyone unfortunately needs to have in practice, it would be best to have true hardware implementations that are so complete that the drivers wouldn’t contain parts of the codec but would just push bits to the hardware. This way, the encumbrance would be limited to the hardware. (Aside: Similarly, it would be possible to design a hardware CDM for EME. In that case, you could have video DRM without it being a Software Freedom problem.)

So I think that in order to achieve Software Freedom on the driver layer, it is necessary to commission hardware that fits Free Software instead of trying to just write software that fits the hardware that’s out there. This is significantly different from how software freedom has been achieved on desktop. Also, the notion of making a big upfront capital investment in order to achieve Software Freedom is rather different from the notion that you only need capital for a PC and then skill and time.

I think it could be possible to raise the necessary capital through crowdfunding. (Purism is trying it with the Librem laptop, but, unfortunately, the rate of donations looks bad as of the start of January 2015. Addition 2015-01-24: They have actually reached and exceeded their funding target! Awesome!) I’m not going to try to organize anything like that myself—I’m just theorizing. However, it seems that developing a phone by crowdfunding in order to get characteristics that the market isn’t delivering is something that is being attempted. The Indie Phone project expresses intent to crowdfund the development of a phone designed to allow users to own their own data. Which brings us to the topic of the services that the phone connects to.

Freedom on the Service Side: Easy Self-Hostability Needed

Unfortunately, Indie Phone is not about building hardware to run Firefox OS. The project’s Web site talks about an Indie OS but intentionally tries to make the OS seem uninteresting and doesn’t explain what existing software the small team is intending to build upon. (It seems implausible that such a small team could develop an operating system from scratch.) Also, the hardware intentions are vague. The site doesn’t explain if the project is serious about isolating the baseband processor from the application processor out of privacy concerns, for example. But enough about the vagueness of what the project is going to do. Let’s look at the reasons the FAQ gave against Firefox OS (linking to version control, since the FAQ appears to have been removed from the site between the time I started writing this post and the time I got around to publishing):

“As an operating system that runs web applications but without any applications of its own, Firefox OS actually incentivises the use of closed silos like Google. If your platform can only run web apps and the best web apps in town are made by closed silos like Google, your users are going to end up using those apps and their data will end up in these closed silos.”

The FAQ then goes on to express angst about Mozilla’s relationship with Google (the Indie Phone FAQ was published before Mozilla’s search deal with Yahoo! was announced) and Telefónica and to talk about how Mozilla doesn’t control the hardware but Indie will.

I think there is truth to Web technology naturally having the effect of users gravitating towards whatever centralized service provides the best user experience. However, I think the answer is not to shun Firefox OS but to make de-centralized services easy to self-host and use with Firefox OS.

In particular, it doesn’t seem realistic that anyone would ship a smart phone without a Web browser. In that sense, any smartphone is susceptible to the lure of centralized Web-based services. On the other hand, Google Play and the iOS App Store contain plenty of applications whose user interface is not based on HTML, CSS and JavaScript but still those applications put the users’ data into centralized services. On the flip side, it’s not actually true that Firefox OS only runs Web apps hosted on a central server somewhere. Firefox OS allows you to use HTML, CSS and JavaScript to build apps that are distributed as a zip file and run entirely on the phone without a server component.

But the thing is that, these days, people don’t want even notes or calendar entries that are intended for their own eyes only to stay on the phone only. Instead, even for data meant for the user’s own eyes only, there is a need to have the data show up on multiple devices. I very much doubt that any underdog effort has the muscle to develop a non-Web decentralized network application platform that allows users to interact with their data from all the devices that they want to use to interact with their data. (That is, I wouldn’t bet on e.g. Indienet, which is going to launch “with a limited release on OS X Yosemite”.)

I think the answer isn’t fighting the Web Platform but using the only platform that already has clients for all the devices that users want to use—in addition to their phone—to interact with their data: the Web Platform. To use the Web Platform as the application platform such that multiple devices can access the apps but also such that users have Software Freedom, the users need to host the Web apps themselves. Currently, this is way too difficult. Hosting Web apps at home needs to become at least as easy as maintaining a desktop computer at home, preferably easier.

For this to happen, we need:

  • Small home server hardware that is powerful enough to host Web apps for a family, that consumes negligible energy (maybe in part by taking the place of the always-on home router that people have consuming electricity today), that is silent and that can boot a vanilla kernel that gets security updates.
  • A Free operating system that runs in such hardware, makes it easy to install Web apps and makes it easy for the apps to become securely reachable over the network.
  • High-quality apps for such a platform.

(Having Software Freedom on the server doesn’t strictly require the server to be placed in your home, but if that’s not a realistic option, there’s clearly a practical freedom deficit even if not under the definition of Free Software. Also, many times the interest in Software Freedom in this area is motivated by data privacy reasons and in the case of Web apps, the server of the apps can see the private data. For these reasons, it makes sense to consider home-hostability.)

Hardware

In this case, the hardware and driver side seems like the smallest problem. If you ignore the massive and creepy non-Free firmware and the price of the hardware, and don’t try to minimize energy consumption particularly aggressively, suitable x86/x86_64 hardware already exists, e.g. from CompuLab. To minimize price and energy consumption, it seems that ARM-based solutions would be better, but 32-bit ARM boards requiring per-board kernel builds and, most often, proprietary blobs that don’t get updated make the 32-bit ARM situation so bad that it doesn’t make sense to use 32-bit ARM hardware for this. (At FOSDEM 2013, it sounded like a lot of the FreedomBox project’s time had been sucked into dealing with the badness of the Linux on 32-bit ARM situation.) It remains to be seen whether x86/x86_64 SoCs that boot with generic kernels reach ARM-style price and energy consumption levels first, or whether the ARM side gets its generic kernel bootability and Free driver act together (including shipping) with 64-bit ARM first. Either way, the hardware side is getting better.

Apps

As for the apps, PHP-based apps that are supposed to be easy-ish to deploy as long as you have an Apache plus PHP server from a service provider are plentiful, but e.g. Roundcube is no match for Gmail in terms of user experience, and even though it’s theoretically possible to write quality software in PHP, the execution paradigm and culture of PHP don’t really guide things in that direction.

Instead of relying on the PHP-based apps that are out there and that are woefully uncompetitive with the centralized proprietary offerings, there is a need for better apps written on better foundations (e.g. Python and Node.js). As an example, Mailpile (Python on the server) looks very promising in terms of Gmail-competitive usability aspirations. Unfortunately, as of December 2014, it’s not ready for use yet. (I tried and, yes, filed bugs.) Ethercalc and Etherpad (Node.js on the server) are other important apps.

With apps, the question doesn’t seem to be whether people know how to write them. The question seems to be how to fund the development of the apps so that the people who know how to write them can devote a lot of time to these projects. I, for one, hope that e.g. Mailpile’s user-funded development is sustainable, but it remains to be seen. (Yes, I donated.)

Putting the Apps Together

A crucial missing piece is having a system that can be trivially installed on suitable hardware (or, perhaps in the future, can be pre-installed on suitable hardware) that allows users to get started without exercising their freedom to modify the software, but provides the freedom to install modified apps if the user so chooses and, perhaps most importantly, makes the networking part very easy.

There are a number of projects that try to aggregate self-hostable apps into a (supposedly at least) easy to install and manage system. However, it seems to me that they tend to be of the PHP flavor, which I think fundamentally disadvantages them in terms of becoming competitive with proprietary centralized Web apps. I think the most promising project in the space that deals with making the better (Python and Node.js-based among others) apps installable with ease is Sandstorm.io, which unfortunately, like Mailpile, doesn’t seem quite ready yet. (Also, in common with Mailpile: a key developer is an ex-Googler. Looks like people who’ve worked there know what it takes to compete with GApps…)

Looking at Sandstorm.io is instructive in terms of seeing what’s hard about putting it all together. On the server, Sandstorm.io runs each Web app in a Linux container that’s walled off from the other apps. All the requests go through a reverse proxy that also provides additional browser-side UI for switching between the apps. Instead of exposing the usual URL structure of each app, Sandstorm.io exposes “grain” URLs, which are unintelligible random-looking character sequences. This design isn’t without problems.

The first problem is that the apps you want to run, like Mailpile, Etherpad and Ethercalc, have been developed to be deployed on a vanilla Linux server using application-specific manual steps that put hosting these apps on a server out of the reach of normal users. (Mailpile is designed to be run on localhost by normal users, but that doesn’t make it reachable from multiple devices, which is what you want from a Web app.) This means that each app needs to be ported to Sandstorm.io. This in turn means that compared to going upstream, you get stale software, because except for Ethercalc, the maintainer of the Sandstorm.io port isn’t the upstream developer of the app. In fairness, though, the software doesn’t seem to be as stale as it would be if you installed a package from Debian Stable… Also, as the platform and the apps mature, it’s possible that various app developers will start to publish for Sandstorm.io directly; and with more mature apps it’s less necessary to have the latest version (except for security fixes).

Unlike in the case of getting a Web app as a Debian package, the URL structure and, it appears, in some cases the storage structure differ between a Sandstorm.io port of an app and the vanilla upstream version of the app. Therefore, even though avoiding lock-in is one of the things the user is supposed to be able to accomplish by using Sandstorm.io, it’s non-trivial to migrate between the Sandstorm.io version and a non-Sandstorm.io version of a given app. It particularly bothers me that Sandstorm.io completely hides the original URL structure of the app.

Networking

And that leads to the last issue of self-hosting with the ease of just plugging a box into home Ethernet: Web security and Web addressing are rather unfriendly to easy self-hosting.

First of all, there is the problem of getting basic incoming IPv4 connectivity to work. After all, you must be able to reach port 443 (https) of your self-hosting box from all your devices, including reaching the box that’s on your wired home Internet connection from the mobile connection of your phone. Maybe your own router imposes a NAT between your server and the Internet and you’d need to set up port forwarding, which makes things significantly harder than just instructing people to plug stuff in. This might be partially alleviated by making the self-hosting box contain NAT functionality itself so that it could take the place of the NATting home router, but even then you might have to configure something like a cable modem to a bridging mode or, worse, you might be dealing with an ISP who doesn’t actually sell you neutral end-to-end Internet routing and blocks incoming traffic to port 443 (or detects incoming traffic to port 443 and complains to you about running a server even if it’s actually for your personal use so you aren’t violating any clause that prohibits you from using a home connection to offer a service to the public).

One way to solve this would be standardizing a simple service where a service provider takes your credit card number and an ssh public key and gives you an IP address. The self-hosting system you run at home would then have a configuration interface that gives you an ssh public key and takes an IP address. The self-hosting box would then establish an ssh reverse tunnel to the IP address with 443 as the local target port, and the service provider would send port 443 of the IP address to this tunnel. You’d still own your data and your server and you’d terminate TLS on your server even though you’d rent an IP address from a data center.
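As a sketch of how little software the home box would need for this, here is a minimal Python wrapper around a plain ssh client (the account name and address are hypothetical, and binding port 443 on the remote end would need appropriate privileges or GatewayPorts configuration there):

    import subprocess
    import time

    # Hypothetical rented endpoint: account and address are made up.
    REMOTE = "tunnel@203.0.113.7"

    def keep_tunnel_open():
        """Maintain a reverse tunnel so that port 443 on the rented IP
        reaches port 443 on this home server; reconnect if it drops."""
        while True:
            # -N: no remote command; -R: forward the remote end's port 443
            # to localhost:443.
            subprocess.call([
                "ssh", "-N",
                "-o", "ServerAliveInterval=30",
                "-R", "443:localhost:443",
                REMOTE,
            ])
            time.sleep(5)  # brief pause before reconnecting

    if __name__ == "__main__":
        keep_tunnel_open()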

(There are efforts to solve this by giving the user-hosted devices host names under the domain of a service that handles the naming, such as OPI giving each user a hostname under the op-i.me domain, but then the naming service—not the user—is presumptively the one eligible to get the necessary certificates signed, and delegating away the control of the crypto defeats an important aspect of self-hosting. As a side note, one of the reasons I migrated from hsivonen.iki.fi to hsivonen.fi was that even though I was able to get the board of IKI to submit iki.fi to the Public Suffix List, CAs still seem to think that IKI, not me, is the party eligible for getting certificates signed for hsivonen.iki.fi.)

But even if you solved IPv4-level reachability of the home server from the public Internet as a turn-key service, there are still more hurdles in the way of making this easy. Next, instead of the user having to use an IP address, the user should be able to use a memorable name. So you need to tell the user to go register a domain name, get DNS hosting and point an A record to the IP address. And then you need a certificate for the name you chose for the A record, which at the moment (before Let’s Encrypt is operational) is another thing that makes things too hard.

And that brings us back to Sandstorm.io obscuring the URLs. Correction 2015-01-24: Rather paradoxically, even though Sandstorm.io is really serious about isolating apps from each other on the server, Sandstorm.io gives up the browser-side isolation of the apps that you’d get with a typical deployment of the upstream apps. The only true way to have browser-enforced privilege separation of the client-side JavaScript parts of the apps is for different apps to have different Origins. An Origin is a triple of URL scheme, host name and port. For the apps not to be ridiculously insecure, the scheme has to be https. This means that you either have to give each app a distinct port number or a distinct host name. On the surface, it seems that it would be easy to mint port numbers, but users are not used to typing URLs with non-default port numbers, and if you depend on port forwarding in a NATting home router or port forwarding through an ssh reverse tunnel, minting port numbers on demand isn’t that convenient anymore.

So you really want a distinct host name for each app to have a distinct Origin for browser-enforced privilege separation of JavaScript on the client. But the idea was that you could install new apps easily. This means that you have to be able to generate a new working host name at the time of app installation. So unless you have a programmatic way to configure DNS on the fly and have certificates minted on the fly, neither of which you can currently realistically have for a home server, you need a wildcard in the DNS zone and you need a wildcard TLS certificate. Correction 2015-01-24: Sandstorm.io instead uses one hostname and obscure URLs, which is understandable. Despite being understandable, it is sad, since it loses both human-facing semantics of the URLs and browser-enforced privilege separation between the apps. To provide Origin-based privilege separation on the browser side, Sandstorm.io generates hostnames that do not look meaningful to the user and hides them in an iframe, but the URLs shown for the top-level origin in the URL bar are equally obscure. I find it unfortunate that Sandstorm.io does not mint human-friendly URL bar origins with the app names when it is capable of minting origins. (Instead of https://etherpad.example.org/example-document-title and https://ethercalc.example.org/example-spreadsheet-title, you get https://example.org/grain/FcTdrgjttPbhAzzKSv6ESD and https://example.org/grain/o96ouPLKQMEMZkFxNKf2Dr.) Fortunately, Let’s Encrypt seems to be on track to solving the certificate side of this problem by making it easy to get a cert for a newly-minted hostname signed automatically. Even so, the DNS part needs to be made easy enough that it doesn’t remain a blocker for self-hosting a box that allows on-demand Web app installation with browser-side app privilege separation.
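To make the Origin point concrete, here is a small Python sketch using the illustrative URLs above; per-app hostnames produce distinct origins, while grain-style paths under one hostname share a single origin:

    from urllib.parse import urlsplit

    def origin(url):
        """The (scheme, host, port) triple browsers use for same-origin checks."""
        parts = urlsplit(url)
        port = parts.port or {"http": 80, "https": 443}[parts.scheme]
        return (parts.scheme, parts.hostname, port)

    # Per-app hostnames yield distinct origins, so the browser walls the apps off:
    assert origin("https://etherpad.example.org/example-document-title") != \
           origin("https://ethercalc.example.org/example-spreadsheet-title")

    # Grain-style paths under a single hostname share one origin:
    assert origin("https://example.org/grain/FcTdrgjttPbhAzzKSv6ESD") == \
           origin("https://example.org/grain/o96ouPLKQMEMZkFxNKf2Dr")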

Conclusion

There are lots of subproblems to work on, but, fortunately, things don’t seem fundamentally impossible. Interestingly, the problem with software that resides on the phone may be the relatively easy part to solve. That is not to say that it is easy to solve, but once solved, it can scale to a lot of users without the users having to do special things to get started in the role of a user who does not exercise the freedom to modify the system. However, since users these days are not satisfied by merely device-resident software but want things to work across multiple devices, the server-side part is relevant and harder to scale. Somewhat paradoxically, the hardest thing to scale in a usable way seems like a triviality on the surface: the addressing of the server-side part in a way that gives sovereignty to users.

Pascal Finette: Weekend Link Pack (Jan 24)

What I was reading this week:

Benjamin Kerensa: Get a free U2F Yubikey to test on Firefox Nightly

Passwords are always going to be vulnerable to being cracked. Fortunately, there are solutions out there that are making it safer for users to interact with services on the web. The new standard in protecting users is Universal 2nd Factor (U2F) authentication, which is already available in browsers like Google Chrome.

Mozilla currently has a bug open to start the work necessary to deliver U2F support to people around the globe and bring Firefox into parity with Chrome by offering this excellent new feature to users.

I recently reached out to the folks at Yubico who are very eager to see Universal 2nd Factor (U2F) support in Firefox. So much so that they have offered me the ability to give out up to two hundred Yubikeys with U2F support to testers and will ship them directly to Mozillians regardless of what country you live in so you can follow along with the bug we have open and begin testing U2F in Firefox the minute it becomes available in Firefox Nightly.

If you are a Firefox Nightly user and are interested in testing U2F, please use this form (offer now closed) and apply for a code to receive one of these Yubikeys for testing. (This is only available to Mozillians who use Nightly and are willing to help report bugs and test the patch when it lands)

Thanks again to the folks at Yubico for supporting U2F in Firefox!

Update: This offer is now closed. Check your email for a code or a request to verify you are a vouched Mozillian! We also got more requests than we had available, so only the first two hundred will be fulfilled!

Mozilla WebDev Community: Beer and Tell – January 2015

Once a month, web developers from across the Mozilla Project get together to trade and battle Pokémon. While we discover the power of friendship, we also find time to talk about our side projects and drink, an occurrence we like to call “Beer and Tell”.

There’s a wiki page available with a list of the presenters, as well as links to their presentation materials. There’s also a recording available courtesy of Air Mozilla.

Michael Kelly: gamegirl

Our first presenter, Osmose (that’s me!), shared a Gameboy emulator, written in Python, called gamegirl. The emulator itself is still very early in development and only has a few hundred CPU instructions implemented. It also includes a console-based debugger for inspecting the Gameboy state while executing instructions, powered by urwid.
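For the curious, the heart of such an emulator is typically an opcode dispatch table mapping instruction bytes to handler functions. A minimal sketch of the pattern (illustrative, not gamegirl’s actual code; 0x3C is the real Game Boy INC A opcode):

    class CPU:
        """Sketch of the opcode-dispatch pattern (not gamegirl's actual code)."""

        def __init__(self):
            self.a = 0                        # 8-bit accumulator
            self.pc = 0                       # program counter
            self.memory = bytearray(0x10000)  # flat 64 KiB address space

        def op_nop(self):       # 0x00: do nothing
            pass

        def op_inc_a(self):     # 0x3C: increment A, wrapping at 8 bits
            self.a = (self.a + 1) & 0xFF

        OPCODES = {0x00: op_nop, 0x3C: op_inc_a}

        def step(self):
            """Fetch, advance the program counter, and dispatch one instruction."""
            opcode = self.memory[self.pc]
            self.pc = (self.pc + 1) & 0xFFFF
            self.OPCODES[opcode](self)

    cpu = CPU()
    cpu.memory[0] = 0x3C
    cpu.step()
    assert cpu.a == 1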

Luke Crouch: Automatic Deployment to Heroku using TravisCI and Github

Next, groovecoder shared some wisdom about his new favorite continuous deployment setup. The setup involves hosting your code on Github, running continuous integration using Travis CI, and hosting the site on Heroku. Travis supports deploying your app to Heroku after a successful build, and groovecoder uses this to deploy his master branch to a staging server.

Once the code is ready to go to production, you can make a pull request to a production branch on the repo. Travis can be configured to deploy to a different app for each branch, so once that pull request is merged, the site is deployed to production. In addition, the pull request view gives a good overview of what’s being deployed. Neat!

This system is in use on codesy, and you can check out the codesy Github repo to see how they’ve configured their project to deploy using this pattern.

Peter Bengtsson: django-screencapper

Friend of the blog peterbe showed off django-screencapper, a microservice that generates screencaps from video files using ffmpeg. Developed as a test to see if generating AirMozilla icons via an external service was viable, it queues incoming requests using Alligator and POSTs the screencaps to a callback URL once they’ve been generated.

A live example of the app is available at http://screencapper.peterbe.com/receiver/.
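The glue in a service like this is pleasantly small: shell out to ffmpeg for a frame, then POST the result to the caller’s callback URL. A hedged sketch (the function names and exact ffmpeg flags are assumptions, not django-screencapper’s actual code, and the Alligator queuing is omitted):

    import subprocess

    import requests  # third-party HTTP library, assumed installed

    def screencap(video_path, seconds, out_path="cap.png"):
        """Extract a single frame with ffmpeg (a widely supported invocation;
        the real project's flags may differ)."""
        subprocess.check_call([
            "ffmpeg", "-ss", str(seconds), "-i", video_path,
            "-frames:v", "1", "-y", out_path,
        ])
        return out_path

    def deliver(video_path, seconds, callback_url):
        """Generate a screencap and POST it to the caller's callback URL."""
        path = screencap(video_path, seconds)
        with open(path, "rb") as f:
            requests.post(callback_url, files={"screencap": f})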

tofumatt: i-dont-like-open-source AKA GitHub Contribution Hider

Motorcycle enthusiast tofumatt hates the Github contributor streak graph. To be specific, he hates the one on his own profile; it’s distracting and leads to bad behavior and imposter syndrome. To save himself and others from this terror, he created a Firefox add-on called the GitHub Contribution Hider that hides only the contribution graph on your own profile. You can install the add-on by visiting its addons.mozilla.org page. Versions of the add-on for other browsers are in the works.


Fun fact: The power of friendship cannot, in fact, overcome type weaknesses.

If you’re interested in attending the next Beer and Tell, sign up for the dev-webdev@lists.mozilla.org mailing list. An email is sent out a week beforehand with connection details. You could even add yourself to the wiki and show off your side-project!

See you next month!

Adam Lofting: Weeknotes: 23 Jan 2015

unrelated photo

I managed to complete roughly five of my eleven goals for the week.

  • Made progress on (but have not cracked) daily task management for the newly evolving systems
  • Caught up on some email from time off, but still a chunk left to work through
  • Spent more time writing code than expected
  • Illness this week slowed me down
  • These aren’t very good weeknotes, but perhaps better than none.

 

Jess Klein: Dino Dribbble

The newly created Mozilla Foundation design team started out with a bang (or maybe I should say rawr) with our very first collaboration: a team debut on dribbble. Dribbble describes itself as a show and tell community for designers. I have not participated in this community yet but this seemed like a good moment to join in. For our debut shot, we decided to have some fun and plan out our design presence. We ultimately decided to go in a direction designed by Cassie McDaniel.

The concept was for us to break apart the famed Shepard Fairey Mozilla dinosaur into quilt-like tiles.

 
Each member of the design team was assigned a tile or two and given a shape. This is the one I was assigned:
I turned that file into this:

We all met together in a video chat to upload our images on to the site.

Anticipation was building as we uploaded each shot one by one:
But the final reveal made it worth all the effort! 

Check out our new team page on dribbble. rawr!

Cassie also wrote about the exercise on her blog and discussed the open position for a designer to join the team.



Mozilla Reps Community: Reps Weekly Call – January 22nd 2015

Last Thursday we had our weekly call about the Reps program, where we talk about what’s going on in the program and what Reps have been doing during the last week.


Summary

  • Dashboard QA and UI.
  • Community Education.
  • Feedback on reporting.
  • Participation plan and Grow meeting.
  • Womoz Badges.

Detailed notes

AirMozilla video

Don’t forget to comment about this call on Discourse and we hope to see you next week!

Rubén Martín: We have to fight again for the web

Versión en español: “Debemos volver a pelear por la web”

It’s interesting to see how history repeats itself and how we make the same mistakes over and over again.

I started contributing to Mozilla in 2004. At that time, Internet Explorer had 95% of the market share. That meant there was absolutely no way you could create a website not “adapted” to this browser, and no way you could surf the full web with other browsers, because a lot of sites used ActiveX and other IE-only non-standard technologies.

The web was Internet Explorer, Internet Explorer was the web, you had no choice.

From Mozilla and other organizations, we fought hard (really hard) to bring user choice and open other ways to understand the web. People understood this, and the web changed into an open, diverse ecosystem where standards were the way to go and where everyone, both users and developers, was able to be part of it.

Today everything we fought for is at risk.

It’s not the first time we’ve seen sites decide to offer their content to just one browser, sometimes using technologies that are not standards and only work there, but sometimes using technologies that are standards and blocking other browsers for no reason.

Business is business.

If we don’t want a web controlled again by a few, driven by stockholders’ interests and not by users’, we have to stand up. We have to call out sites that try to hijack user choice by asking users to use one browser to access their content.


The web should run everywhere, and users should be free to choose the browser they think best serves their interests/values.

I truly believe that Mozilla, as a non-profit organization, is still the only one that can provide an independent choice to users and balance the market to avoid the walled gardens some dream of building.

Don’t lower the guard, let’s not repeat the same mistakes again, let’s fight again for the web.

PS: If you want a deeper analysis of the WhatsApp web fiasco, I recommend the post by my friend André: “Whatsapp doesn’t understand the web”.

Ehsan Akhgari: Running Microsoft Visual C++ 2013 under Wine on Linux

The Wine project lets you run Windows programs on other operating systems, such as Linux.  I spent some time recently trying to see what it would take to run Visual C++ 2013 Update 4 under Linux using Wine.

The first thing that I tried to do was to run the installer, but that unfortunately hits a bug and doesn’t work.  After spending some time looking into other solutions, I came up with a relatively decent solution which seems to work very well.  I put the instructions up on github if you’re interested, but the gist is that I used a Chromium depot_tools script to extract the necessary files for the toolchain and the Windows SDK, which you can copy to a Linux machine and with some DLL loading hackery you will get a working toolchain.  (Note that I didn’t try to run the IDE, and I strongly suspect that will not work out of the box.)

This should be the entire toolchain necessary to build Firefox for Windows under Linux.  I already have some local hacks which help us get past the configure script; hopefully this will enable us to experiment with using Linux to build Firefox for Windows more efficiently.  But there is of course a lot of work yet to be done.

Armen Zambrano: Backed out - Pinning for Mozharness is enabled for the fx-team integration tree

EDIT: We had to back out this change since it caused issues for PGO talos jobs. We will try again after further testing.

Pinning for Mozharness [1] has been enabled for the fx-team integration tree.
Nothing should be changing. This is a no-op change.

We're still using the default mozharness repository and the "production" branch is what is being checked out. This has been enabled on Try and Ash for almost two months and all issues have been ironed out. You can tell if a job is using Mozharness pinning if you see "repository_manifest.py" in its log.

If you notice anything odd please let me know in bug 1110286.

If by Monday we don't see anything odd happening, I would like to enable it for mozilla-central for a few days before enabling it on all trunk trees.

Again, this is a no-op change, however, I want people to be aware of it.


Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Doug Belshaw: Ontology, mentorship and web literacy

This week’s Web Literacy Map community call was fascinating. They’re usually pretty interesting, but today’s was particularly good. I’m always humbled by the brainpower that comes together and concentrates on something I spend a good chunk of my life thinking about!


I’ll post an overview of the entire call on the Web Literacy blog tomorrow, but I wanted to just quickly zoom out and focus on things that Marc Lesser and Jess Klein were discussing during the call. Others mentioned really useful stuff too, but I don’t want to turn this into an epic post!


Marc

Marc reminded us of Clay Shirky’s post entitled Ontology is Overrated: Categories, Links, and Tags. It’s a great read, but the point Marc wanted to extract is that pre-defined ontologies (i.e. ways of classifying things) are kind of outdated now that we have the Internet:

No filesystem

In the Web 2.0 era (only 10 years ago!) this was called a folksonomic approach. I remembered that I actually used Shirky’s post in one of my own for DMLcentral a couple of years ago. To quote myself:

The important thing here is that we – Mozilla and the community – are creating a map of the territory. There may be others and we don’t pretend we’re not making judgements. We hope it’s “useful in the way of belief” (as William James would put it) but realise that there are other ways to understand and represent the skills and competencies required to read, write and participate on the Web.

Given that we were gently chastised at the LRA conference for having an outdated approach to representing literacies, we should probably think more about this.


Jess

Meanwhile, Jess was talking about the Web Literacy Map as an ‘API’ upon which other things could be built. I reminded her of the WebLitMapper, a prototype I suggested and Atul Varma built last year. The WebLitMapper allows users to tag resources they find around the web using competencies from the Web Literacy Map.

This, however, was only part of what Jess meant (if I understood her correctly). She was interested in multiple representations of the map, kind of like these examples she put together around learning pathways. This would allow for the kind of re-visualisations of the Web Literacy Map that came out of the MozFest Remotee Challenge:

Re-visualising the Web Literacy Map

Capturing the complexity of literacy skill acquisition and development is particularly difficult given the constraints of two dimensions. It’s doubly difficult if the representation has to be static.

Finally from Jess (for the purposes of this post, at least) she reminded us of some work she’d done around matching mentors and learners:

MentorN00b


Conclusion

The overwhelming feeling on the call was that we should retain the competency view of the Web Literacy Map for v1.5. It’s familiar, and helps with adoption:

Web Literacy Map v1.1

However, this decision doesn’t prevent us from exploring other avenues combining learning pathways, badges, and alternative ways of representing the overall skill/competency ecosystem. Perhaps diy.org/skills can teach us a thing or two?


Questions? Comments? Tweet me (@dajbelshaw) or email me (doug@mozillafoundation.org).

Florian Quèze: Project ideas wanted for Summer of Code 2015

Google is running Summer of Code again in 2015. Mozilla has had the pleasure of participating every year so far, and we are hoping to participate again this year. In the next few weeks, we need to prepare a list of suitable projects to support our application.

Can you think of a 3-month coding project you would love to guide a student through? This is your chance to get a student focusing on it for 3 months! Summer of Code is a great opportunity to introduce new people to your team and have them work on projects you care about but that aren't on the critical path to shipping your next release.

Here are the conditions for the projects:

  • completing the project should take roughly 3 months of effort for a student;
  • any part of the Mozilla project (Firefox, Firefox OS, Thunderbird, Instantbird, SeaMonkey, Bugzilla, L10n, NSS, IT, and many more) can submit ideas, as long as they require coding work;
  • there is a clearly identified mentor who can guide the student through the project.


If you have an idea, please put it on the Brainstorming page, which is our idea development scratchpad. Please read the instructions at the top – following them vastly increases the chances of your idea getting added to the formal Ideas page.

The deadline to submit project ideas and help us be selected by Google is February 20th.

Note for students: the student application period starts on March 16th, but the sooner you start discussing project ideas with potential mentors, the better.

Please feel free to discuss with me any question you may have related to Mozilla's participation in Summer of Code. Generic Summer of Code questions are likely already answered in the FAQ.

Benoit Girard: Gecko Bootcamp Talks

Last summer we held a short bootcamp crash course for Gecko. The talks have been posted to air.mozilla.org and collected under the TorontoBootcamp tag. The talks are about an hour each but will be very informative to some. They are aimed at people wanting a deeper understanding of Gecko.

View the talks here: https://air.mozilla.org/search/?q=tag%3A+TorontoBootcamp

Gecko Pipeline

In the talks you’ll find my first talk covering an overall discussion of the pipeline, what stages run when and how to skip stages for better performance. Kannan’s talk discusses Baseline, our first tier JIT. Boris’ talk discusses Restyle and Reflow. Benoit Jacob’s talk discusses the graphics stack (Rasterization + Compositing + IPC layer) but sadly the camera is off center for the first half. Jeff’s talk goes into depth into Rasterization, particularly path drawing. My second talk discusses performance analysis in Gecko using the gecko profiler where we look at real profiles of real performance problems.

I’m trying to locate two more videos about layout and graphics that were given at another session: one that would elaborate more on the DisplayList/Layer Tree/Invalidation phase, and another on Compositing.


Matt Thompson: Mozilla Learning in 2015: our vision and plan

This post is a shortened, web page version of the 2015 Mozilla Learning plan we shared back in December. Over the next few weeks, we’ll be blogging and encouraging team and community members to post their reflections and detail on specific pieces of work in 2015 and Q1. Please post your comments and questions here — or get more involved.


Within ten years, there will be five billion citizens of the web.

Mozilla wants all of these people to know what the web can do. What’s possible. We want them to have the agency, skills and know-how they need to unlock the full power of the web. We want them to use the web to make their lives better. We want them to know they are citizens of the web.

Mozilla Learning is a portfolio of products and programs that helps people learn how to read, write and participate in the digital world.

Building on Webmaker, Hive and our fellowship programs, Mozilla Learning is a portfolio of products and programs that help these citizens of the web learn the most important skills of our age: the ability to read, write and participate in the digital world. These programs also help people become mentors and leaders: people committed to teaching others and to shaping the future of the web.

Mark Surman presents the Mozilla Learning vision and plan in Portland, Dec 2014

Three-year vision

By 2017, Mozilla will have established itself as the best place to learn the skills and know-how people need to use the web in their lives, careers and organizations. We will have:

  • Educated and empowered users by creating tools and curriculum for learning how to read, write and participate on the web. Gone mainstream.
  • Built leaders, everywhere by growing a global cadre of educators, researchers, coders, etc. who do this work with us. We’ve helped them lead and innovate.
  • Established the community as the classroom by improving and explaining our experiential learning model: learn by doing and innovating with Mozilla.

At the end of these three years, we may have established something like a “Mozilla University” — a learning side of Mozilla that can sustain us for many decades. Or, we may simply have a number of successful learning programs. Either way, we’ll be having impact.

We may establish something like a “Mozilla University” — a learning side of Mozilla that can sustain us for many decades.

2015 Focus

1) Learning Networks 2) Learning Products 3) Leadership Development

Our focus in 2015 will be to consolidate, improve and focus what we’ve been building for the last few years. In particular we will:

  • Improve and grow our local Learning Networks (Hive, Maker Party, etc).
  • Build up an engaged user base for our Webmaker Learning Products on mobile and desktop.
  • Prototype a Leadership Development program, and test it with fellows and ReMo.

The short term goal is to make each of our products and programs succeed in their own right in 2015. However, we also plan to craft a bigger Mozilla Learning vision that these products and programs can feed into over time.

A note on brand

Mozilla Learning is notional at this point. It’s a stake in the ground that says:

Mozilla is in the learning and empowerment business for the long haul.

In the short term, the plan is to use “Mozilla Learning” as an umbrella term for our community-driven learning and leadership development initiatives — especially those run by the Mozilla Foundation, like Webmaker and Hive. It may also grow over time to encompass other initiatives, like the Mozilla Developer Network and leadership development programs within the Mozilla Reps program. In the long term: we may want to a) build out a lasting Mozilla learning brand (“Mozilla University?”), or b) build making and learning into the Firefox brand (e.g., “Firefox for Making”). Developing a long-term Mozilla Learning plan is an explicit goal for 2015.

What we’re building

Practically, the first iteration of Mozilla Learning will be a portfolio of products and programs we’ve been working on for a number of years: Webmaker, Hive, Maker Party, Fellowship programs, community labs. Pulled together, these things make up a three-layered strategy we can build more learning offerings around over time.

  1. The Learning Networks layer is the most developed piece of this picture, with Hives and Maker Party hosts already in 100s of cities around the world.
  2. The Learning Products layer involves many elements of the Webmaker.org work, but will be relaunched in 2015 to focus on a mass audience.
  3. The Leadership Development piece has strong foundations, but a formal training element still needs to be developed.
Scope and scale

One of our goals with Mozilla Learning is to grow the scope and scale of Mozilla’s education and empowerment efforts. The working theory is that we will create an interconnected set of offerings that range from basic learning for large numbers of people, to deep learning for key leaders who will help shape the future of the web (and the future of Mozilla).

We want to increase the scope and diversity of how people learn with Mozilla.

We’ll do that by building opportunities for people to get together to learn, hack and invent in cities on every corner of the planet. And also: creating communities that help people working in fields like science, news and government figure out how to tap into the technology and culture of the web in their own lives, organizations and careers. The plan is to elaborate and test out this theory in 2015 as a part of the Mozilla Learning strategy process. (Additional context on this here: http://mzl.la/depth_and_scale.)

Contributing to Mozilla’s overall 2015 KPIs

How will we contribute to Mozilla’s top-line goals? In 2015, we’ll measure success through two key performance indicators: relationships and reach.

  • Relationships: 250K active Webmaker users
  • Reach: 500 cities with ongoing Learning Network activity


Learning Networks

In 2015, we will continue to grow and improve the impact of our local Learning Networks.

  • Build on the successful ground game we’ve established with teachers and mentors under the Webmaker, Hive and Maker Party banners.
  • Evolve Maker Party into year-round activity through Webmaker Clubs.
  • Establish deeper presence in new regions, including South Asia and East Africa.
  • Improve the websites we use to support teachers, partners, clubs and networks.
  • Sharpen and consolidate teaching tools and curriculum built in 2014. Package them on their own site, “teach.webmaker.org.”
  • Roll out large-scale, extensible community-building software to run Webmaker Clubs.
  • Empower more people to start Hive Learning Networks by improving documentation and support.
  • Expand scale, rigour and usability of curriculum and materials to help people better mentor and teach.
  • Expand and improve trainings online and in-person for mentors.
  • Recruit more partners to increase reach and scope of networks.


Learning Products

Grow a base of engaged desktop and mobile users for Webmaker.

  • Expand our platform to reach a broad market of learners directly.
  • Mobile & Desktop: Evolve current tools into a unified Webmaker making and learning platform for desktop, Firefox OS and Android.
  • Tablet: Build on our existing web property to address tablet browser users and ensure viability in classrooms.
  • Firefox: Experiment with ways to integrate Webmaker directly into Firefox.
  • Prioritize mobile. Few competitors here, and the key to emerging markets growth.
  • Lower the bar. Build user on-boarding that gets people making / learning quickly.
  • Engagement. Create sticky engagement. Build mentorship, online mentoring and social into the product.


Leadership Development

Develop a leadership development program, building off our existing Fellows programs.

  • Develop a strategy and plan. Document the opportunity, strategy and scope. Figure out how this leadership development layer could fit into a larger Mozilla Learning / Mozilla University vision.
  • Build a shared definition of what it means to be a ‘fellow’ at Mozilla. Empowering emerging leaders to use Mozilla values and methods in their own work.
  • Figure out the “community as labs” piece. How we innovate and create open tech along the way.
  • Hire leadership. Create an executive-level role to lead the strategy process and build out the program.
  • Test pilot programs. Develop a handbook / short course for new fellows.
  • Test with fellows and ReMo. Consider expanding fellows programs for science, web literacy and computer science research.
Get involved
  • Learn more. There’s much more detail on the Learning Networks, Learning Products and Leadership Development pieces in the complete Mozilla Learning plan.
  • Get involved. There are plenty of easy ways to get involved with Webmaker and our local Learning Networks today.
  • Get more hands-on. Want to go deeper? Get hands-on with code, curriculum, planning and more through build.webmaker.org

Schalk NeethlingResolving Error: pg_config executable not found on Mac

Every once in a while, when I have to get an old project up and running or am simply housecleaning a current project, I run into this error, and each time it trips me up and I spend a ton of time yak shaving. Well, today was the last time. To future me, and whomever …
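(The post is truncated in this feed, but for context: the error typically means that psycopg2’s build step can’t find PostgreSQL’s pg_config binary on the PATH. A minimal pre-flight check follows; the candidate paths are assumptions for common Mac setups, not something taken from the post.)

    # Minimal sketch: check that pg_config is reachable before running
    # "pip install psycopg2". The candidate paths are assumptions for
    # typical Homebrew / Postgres.app installs; adjust for your machine.
    import os
    import shutil

    CANDIDATES = [
        "/usr/local/bin",  # Homebrew
        "/Applications/Postgres.app/Contents/Versions/9.4/bin",  # Postgres.app (hypothetical version)
    ]

    if shutil.which("pg_config") is None:
        for path in CANDIDATES:
            if os.path.isfile(os.path.join(path, "pg_config")):
                # This fixes PATH for the current process only; add the
                # directory to your shell profile to make it stick.
                os.environ["PATH"] += os.pathsep + path
                break
        else:
            print("pg_config not found; install PostgreSQL first, e.g. via Homebrew.")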

Mozilla Release Management TeamFirefox 36 beta1 to beta2

Beta 2 is a busy beta release. First, because of a holiday in the US, the go-to-build was delayed by a day (Tuesday instead of Monday). Second, a lot of fixes for MSE landed.

  • 129 changesets
  • 271 files changed
  • 5021 insertions
  • 2064 deletions

Extension   Occurrences
cpp         80
h           51
js          37
ini         21
xml         15
html        8
java        6
list        4
css         4
jsm         3
xul         2
sjs         2
nsi         2
xhtml       1
webidl      1
nsh         1
jsx         1
json        1
in          1
cc          1

Module      Occurrences
dom         105
browser     36
toolkit     19
mobile      16
media       13
layout      12
netwerk     11
testing     10
security    8
js          5
uriloader   2
gfx         2
xpcom       1
image       1
editor      1
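
Tables like the two above can be reproduced mechanically from a pushlog. A minimal sketch (my own illustration, not the release team’s actual tooling): feed it one changed-file path per line, for example from your version control log, and it tallies extensions; the module table is the same idea using the top-level directory instead.

    # Minimal sketch: tally file extensions for a list of changed files,
    # one path per line on stdin. For the module table, swap the extension
    # for the path's top-level directory (path.split("/")[0]).
    import os
    import sys
    from collections import Counter

    counts = Counter(
        os.path.splitext(line.strip())[1].lstrip(".") or "(none)"
        for line in sys.stdin
        if line.strip()
    )
    for ext, n in counts.most_common():
        print("%s\t%d" % (ext, n))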

List of changesets:

Armen Zambrano GasparnianBug 1064002 - Fix removal of --log-raw from xpcshell. r=chmanchester. a=testing - 93587eeda731
Karl TomlinsonBug 1108838 - Move stalled/progress timing from MediaDecoder to HTMLMediaElement. r=cpearce, a=sledru - 15e3be526862
Karl TomlinsonBug 1108838 - Dispatch "stalled" even when no bytes have been received. r=cpearce, a=sledru - b07f9144d190
Jeff MuizelaarBug 1090518 - Fix crash during webgl-depth-texture.html conformance test. r=jrmuizel, a=sledru - 36535f9806e6
Jan-Ivar BruaroeyBug 1098314 - Ignore and warn on turns: and stuns: urls until we support TURN/STUN TLS. r=bwc, a=sledru - 3b4908a629e8
Brad LasseyBug 1112345 - Tab streaming should scroll stream with layers and not offsets. r=snorp, a=sledru - 3956d52ad3f0
Karl TomlinsonBug 975782 - Bring media resource loads out of background while they delay the load event. r=cpearce, a=sledru - cdd335426a39
Karl TomlinsonBug 975782 - Stop delaying the load event when media fetch has stalled. r=cpearce, f=kinetik, a=sledru - 3abc61cb0abd
Dave TownsendBug 1102050 - Set consumeoutsideclicks="false" whenever the popup is opened. r=felipe, a=sledru - a33308dd5af8
Brad LasseyBug 1115802 - Scrolling no longer working when tab mirroring from fennec. r=snorp, a=sledru - f6d5f2303fea
Karl TomlinsonBug 1116676 - Ensure that AddRemoveSelfReference() is called on networkState changes. r=roc, a=sledru - ad2cfe2a92a5
Karl TomlinsonBug 1114885 - Allow media elements to be GC'd when their MediaSource is unreferenced. r=roc, a=sledru - 44e174f9d843
Dave TownsendBug 1094312 - Properly destroy browsers when switching between remote and non-remove pages and override the default destroy method in remote-browser.xml. r=mconley, a=sledru - 08f30b223076
Dave TownsendBug 1094312 - Fix browser_bug553455.js to handle the cases where the progress notification is hidden before it has fully appeared. r=Gijs, a=sledru - fc494bb31bec
Dave TownsendBug 1094312 - Fix browser_bug553455.js:test_cancel_restart by pausing the download for long enough for the progress notification to show reliably. r=Gijs, a=sledru - b71146fc0e37
Margaret LeibovicBug 1107925 - Don't launch fennec on search redirects. r=bnicholson, a=sledru - 6796cf5b59b1
Mark HammondBug 1116404 - Better timeout semantics for search service geoip lookups. r=felipe, a=sledru - 06bb4d89e2bf
Magnus MelinBug 1043310 - AutoCompletion doesn't take capitalization from address book entry, can leave angle brackets characters >> in field, when loosing focus by clicking outside (not enter/tab). r=mak, a=sledru - 1d406b3f20db
Matt WoodrowBug 1116626 - Null check mDecoder in AutoNotifyDecoded since it might have been shutdown already. r=karlt, a=sledru - e076d58d5b10
Matt WoodrowBug 1116284 - Don't run MP4Reader::Update after we've shut the reader down. r=cpearce, a=sledru - 2fd2c6de0a87
Bobby HolleyBug 1119456 - Make MP4Demuxer's blocking reads non-blocking and hoist blocking into callers with a hacky retry strategy. r=k17e, a=sledru - fa0128cdef95
Bobby HolleyBug 1119456 - Work around the fact that media cache does not quite guarantee the property we want. r=roc, a=sledru - 18f7174682d3
Andrea MarchesiniBug 1113062 - Part 1: PIFileImpl and FileImpl merged. r=smaug, a=sledru - 23f5b373f676
Andrea MarchesiniBug 1113062 - Part 2: ArchiveReaderZipFile non-CCed. r=smaug, a=sledru - f203230f49f4
Andrea MarchesiniBug 1113062 - IndexedDB FileSnapshot not CCed. r=janv, a=sledru - 962ac9efa80c
Matt WoodrowBug 1105066 - Make SeekPromise return the time we actually seeked to. r=kentuckyfriedtakahe, a=sledru - e16f64387888
Matt WoodrowBug 1105066 - Chain seeks in MediaSourceReader so that we seek audio to the same time as video. r=kentuckyfriedtakahe, a=sledru - 154dac808616
Anthony JonesBug 1105066 - Seek after switching reader. r=mattwoodrow, a=sledru - a0ffac1b2851
Matt WoodrowBug 1119033 - Don't try to evict when we don't have any initialized decoders. r=ajones, a=sledru - a78eb4dd84f0
Kai-Zhen LiBug 1119691 - Fix build bustage in dom/media/mediasource/MediaSource.cpp. r=bz, a=sledru - 7edfdc36c3cf
Bobby HolleyBug 1120014 - Initialize MediaSourceReader::mLast{Audio,Video}Time to 0 rather than -1. r=rillian, a=sledru - 201fee3158c1
Bobby HolleyBug 1120017 - Make the DispatchDecodeTasksIfNeeded path handle DECODER_STATE_DECODING_FIRSTFRAME. r=cpearce, a=sledru - aa8cdb057186
Bobby HolleyBug 1120023 - Clean up semantics of SourceBufferResource reading. r=cpearce, a=sledru - 60f6890d84cf
Bobby HolleyBug 1120023 - Fix some bugs in MockMediaResource. r=cpearce, a=sledru - e5cc2f8f3f7e
Bobby HolleyBug 1120023 - Switch SourceBufferResource::Read{,At} back to blocking. r=cpearce, a=sledru - 423cb20b5f43
Dão GottwaldBug 1115307 - Search bar alignment fixes and cleanup. r=florian, a=sledru - c7e58ab0e1f6
Dave TownsendBug 1119450 - Clicks on the search go button shouldn't open the search popup. r=felipe, a=sledru - 17b6018c53f0
David KeelerBug 1065909 - Canonicalize hostnames in nsSiteSecurityService and PublicKeyPinningService. r=mmc, a=sledru - 82cce51fb174
Abdelrhman AhmedBug 1102961 - Cannot navigate AMO without closing the Options window. r=florian, a=sledru - 5ac62d0df17e
JW WangBug 1115505 - Keep decoding to ensure the stream is initialized in the decode-to-stream case. r=roc, a=sledru - 4d3d7478ffa4
Olli PettayBug 1108721 - HTMLMediaElement.textTracks needs to be nullable in Gecko for now. r=peterv, a=sledru - 5fba52895751
Bobby HolleyBug 1120629 - Cache data directly on MP4Stream rather than relying on the media cache. r=roc, a=sledru - f7bd9ae15c9e
Julian SewardBug 1119803 - Uninitialised value use in StopPrerollingVideo. r=bobbyholley, a=sledru - 0a648dfd0459
Andrea MarchesiniBug 1111971 - A better life-time management of aListener and aContext in WebSocketChannel. r=smaug, a=abillings - 19e248751a1c
Ryan VanderMeulenBacked out changeset e91fcba59c18 (Bug 1119941) because we don't want to ship int in 36b1. a=sledru - 1d99e9a39847
Ryan VanderMeulenBug 1088708 - Disable testOSLocale on Android x86 for permafailing. r=gbrown, a=test-only - 483bad7e5e88
Ryan VanderMeulenNo bug - Adjust some Android reftest expectations now that they're passing again. r=gbrown, a=test-only - 454907933777
Mark BannerBug 1119765 - Joining and Leaving a Loop room quickly can leave the room as full. Ensure we send the leave notification if we've already sent the join. r=mikedeboer,a=sylvestre - 9b99fc7b7c20
Sotaro IkedaBug 1112410 - Handle set dormant during seeking r=cpearce a=sledru - 5d185a7d03b5
Mike ConleyBug 1117936 - If print preview throws in browser-content.js, make sure printUtils.js can handle the error. r=Mossop, a=sledru - 2fd253435fe4
Dave TownsendBug 1118135 - Clicking the magnifying glass while the suggestions are open should close the popup and not re-open it. r=felipe, a=sledru - ee9df2674663
Chris PearceBug 1112445 - Ignore the audio stream when determining whether we should skip-t-o-next-keyframe for async readers. r=mattwoodrow, a=sledru - f82a118e1064
Steve FinkBug 1117768 - Fix assertion in AutoStopVerifyingBarriers and add tests. r=terrence, a=sledru - 53ae5eeb6147
Matt WoodrowBug 1121661 - Null check mDemuxer in MP4Reader::ResetDecoder since we might not have created one yet. r=bholley, a=sledru - 28900712c87f
Chris PearceBug 1112822 - Don't allow MP4Reader to decode if someone shut it down. r=mattwoodrow, a=sledru - c8031be76a86
Ryan VanderMeulenBacked out changeset 53ae5eeb6147 (Bug 1117768) for bustage. - 96d0d77a3462
Martyn HaighBug 1117130 - URL bar border slightly covered by fading edge of title. r=mfinkle, a=sledru - ca609e2e5bea
JW WangBug 1112588 - Ignore 'stalled' events because the progress timer could time out before receiving any HTTP notifications on slow machines like B2G emulator. r=cpearce, a=test-only - d185df72bd0e
Ryan VanderMeulenBug 1111137 - Disable test_user_agent_overrides.html on Android due to frequent failures. a=test-only - cd07ffdd30c5
Steve FinkBug 1111330 - GetBacktrace needs to be able to free the results buffer. r=njn, a=lsblakk - f154bf489b34
Robert StrongBug 1120673 - Verify Firewall service is running before adding Firewall exceptions - Fx 35 installer crashes on XP x86 SP3 at the end (creating shortcuts) if the xp firewall service is stopped. r=bbondy, a=sledru - bc2de4c07f1b
Nicolas B. PierronBug 1118911 - GetPcScript should care about bailout frames. r=jandem, a=sledru - 66f61f3f9664
Bobby HolleyBug 1121841 - Clear the failed read after checking it. r=jya, a=sledru - 0f43b4df53bb
Bobby HolleyBug 1121248 - Stop logging unimplemented methods in SourceBufferResource. r=mattwoodrow, a=sledru - b8922f819a88
Michael ComellaBug 1106935 - Part 1: Replace old tablet pngs with null XML resources. r=mhaigh, a=sledru - 0b7d9ce1cdc7
Ehsan AkhgariBug 1113121 - Null check the parent node in nsHTMLEditRules::JoinNodesSmart() before passing it to MoveNode; r=roc a=sylvestre - 64d25509541e
Jean-Yves AvenardBug 1121757: Prevent out of bound memory access should AVC data be invalid. r=kinetik a=sledru - 84bf56da4a55
Jean-Yves AvenardBug 1121342: Re-Request audio or video to decode first frame after a failed attempt. r=cpearce a=sledru - 7a8d1dd9fff3
Jean-Yves AvenardBug 1121342: Re-search for Moof if an initial attempt to find it failed. r=kentuckyfriedtakahe a=sledru - dfbca180664d
John DaggettBug 1118981 - initialize mSkipDrawing correctly for already loading fonts. r=jfkthame, a=sylvestre - beb62e1ad523
Mike HommeyBug 1110760 - Followup to avoid build failure with Windows SDK v7.0 and v7.0A. r=gps a=lsblakk - fe217a0d2e9a
Paul Kerr [:pkerr]Bug 1028869 - Part 1: Add ping and ack operations to PushHandler. r=standard8, a=sledru - fc47c7a95f85
Paul Kerr [:pkerr]Bug 1028869 - Part 2: xpcshell test updated with ping/restore. r=standard8, a=sledru - b653be6b040a
Gijs KruitboschBug 1079355 - indexedDB pref should only apply for content pages, not chrome ones, r=bent,a=sylvestre - 97b34f0b9946
Jean-Yves AvenardBug 1118123: Update mediasource duration following sourcebuffer::appendBuffer. r=cajbir a=sledru - 9a4a8602e6f4
Jean-Yves AvenardBug 1118123: Mochitest to verify proper sourcebuffer behavior. r=cajbir a=sledru - 61e917f920c9
Jean-Yves AvenardBug 1118123: Update mediasource web platforms tests now passing. r=karlt a=sledru - 7cd63f89473b
Jean-Yves AvenardBug 1119119: Do not abort when calling appendBuffer with no data. r=cajbir a=sledru - 4fe580b632e5
Jean-Yves AvenardBug 1119119: Update web-platform-tests expected data. r=karlt a=sledru - a1a315b3ff6b
Jean-Yves AvenardBug 1120084: Implement MSE's AppendErrorAlgorithm. r=cajbir a=sledru - da605a71901e
Jean-Yves AvenardBug 1120086: Re-open SourceBuffer after call to appendBuffer if in ended state. r=cajbir a=sledru - 7dd701f60492
Ben HearsumBug 1120420: switch in-tree update server/certs to aus4.mozilla.org. r=rstrong, a=lmandel - 59702337a220
Kartikaya GuptaBug 1107009. r=BenWa, a=sledru - 8d886705af93
Kartikaya GuptaBug 1122408 - Fix potential deadlock codepath. r=BenWa, a=sledru - e6df6527d52e
Phil RingnaldaBug 786938 - Disable test_handlerApps.xhtml on OS X. a=test-only - d1b7588f273b
Jan de MooijBug 1115844 - Fix Baseline to emit a nop for JSOP_DEBUGLEAVEBLOCK to temporarily work around a pc lookup bug. r=shu, a=sledru - 54a53a093110
Andreas PehrsonBug 1113600 - Part 1. Send stream data right away after adding an output stream. r=roc, a=sledru - 73c3918b169f
Andreas PehrsonBug 1113600 - Part 2. Handle setting a MediaStream sync point mid-playback. r=roc, a=sledru - e30a4672f03f
Andreas PehrsonBug 1113600 - Part 3. Add mochitest for capturing media mid-playback. r=roc, a=sledru - c17e1f237ff0
Andreas PehrsonBug 1113600 - Part 4. Handle switching directly from audio clock to stream clock. r=roc, a=sledru - b269b8f5102c
Gian-Carlo PascuttoBug 1119852 - Don't forget to update _requestedCapability in Windows camera driver. r=jesup, a=sledru - ee09df3331d0
Paul Kerr [:pkerr]Bug 1108028 - Replace pushURL registered with LoopServer whenever PushServer does a re-assignment. r=dmose, a=sledru - be5eee20bba5
Daniel HolbertBug 1110950 - Trigger a reflow (as well as a repaint) for changes to 'object-fit' and 'object-position', so subdocuments can be repositioned/resized. r=roc, a=sledru - 2b2b697613eb
Chris DoubleBug 1055904 - Improve MSE eviction calculation. r=jya, a=sledru - 595835cd60a0
Martyn HaighBug 1111598 - [Tablet] Make action bar background color consistent with the new tablet tab strip background. r=mcomella, a=sledru - 3e58a43384cd
Tim TaubertBug 950399 - SessionStore shouldn't forget domain cookies. r=yoric, a=sledru - 91f8d6ca5030
Tim TaubertBug 950399 - Tests for domain cookies. r=yoric, a=sledru - 670d3f856665
Jean-Yves AvenardBug 1120075 - Use Movie Extend Header's duration as fallback when available. r=kentuckyfriedtakahe, a=sledru - 18ade4ad787e
Jean-Yves AvenardBug 1119757 - Allow seeking on media with infinite duration. r=cpearce, a=sledru - b0c42a7f0dc7
Jean-Yves AvenardBug 1119757 - MSE: handle duration of 0 in metadata as infinity. r=mattwoodrow, a=sledru - 3e5d8c21f3a2
Jean-Yves AvenardBug 1120079 - Do not call Range Removal algorithm after endOfStream. r=cajbir, a=sledru - 2a36e0243edd
Jean-Yves AvenardBug 1120282 - Do not fire durationchange during call to ReadMetadata. r=mattwoodrow, a=sledru - 9bb138f23d58
Dragana DamjanovicBug 1108971 - Fix parameter in call GetAddrInfo. r=sworkman, a=sledru - 2dbbd7362502
Sotaro IkedaBug 1110343 - Suppress redundant loadedmetadata event when dormant exit. r=cpearce, a=sledru - fae52bd681e0
Sotaro IkedaBug 1108728 - Remove dormant related state from MediaDecoder. r=cpearce, a=sledru - 9ad34e90e339
Bobby HolleyBug 1121692 - Remove unnecessary arguments to ::Seek. r=mattwoodrow, sr=cpearce, a=sledru - d7e079df1b3d
Bobby HolleyBug 1121692 - Stop honoring aEndTime in MediaSourceReader::Seek. r=mattwoodrow, a=sledru - 67f6899c6221
Bobby HolleyBug 1121692 - Fix potential race condition with mWaitingForSeekData. r=mattwoodrow, a=sledru - 871ab0d29bb8
Bobby HolleyBug 1121692 - Clean up semantics around m{Audio,Video}IsSeeking. r=mattwoodrow, a=sledru - 35f5cf685186
Bobby HolleyBug 1121692 - Move the interesting seek state logic into DecodeSeek. r=mattwoodrow, r=cpearce, a=sledru - 3e1dd9e96598
Bobby HolleyBug 1121692 - Make seeks cancelable. r=cpearce, r=mattwoodrow, a=sledru - 2195dc79a65f
Bobby HolleyBug 1121692 - Handle mid-seek Request{Audio,Video}Data calls. r=cpearce, a=sledru - 4f059ea15ecf
Bobby HolleyBug 1121692 - Tests. r=mattwoodrow, r=cpearce, a=sledru - 56744595737c
Michael ComellaBug 1116912 - Don't hide the dynamic toolbar when it was originally shown but a tab was selected. r=wesj, a=sledru - 55bd32c43abd
Valentin GosuBug 1121826 - Backout cc192030c28f - brackets shouldn't be automatically escaped in the Query. r=mcmanus, a=sledru - 12bda229bf83
Nicholas NethercoteBug 1122322 - Fix crash in worker memory reporter. r=bent, a=sledru - c5dfa7d081f4
Ryan VanderMeulenBug 1055904 - Fix non-unified bustage in TrackBuffer.cpp. a=bustage - c703f90c5b80
Patrick McManusBug 1121706 - Don't offer h2 in alpn if w/out mandatory suite. r=hurley, a=sledru - 131919c0babd
Barbara GuidaBug 1122586 - Unbreak build on platforms missing std::llabs since Bug 1073716. r=dholbert, a=sledru - 506cfb41b8f3
Jean-Yves AvenardBug 1121876 - Treat negative WMF's output sample timestamp as zero. r=cpearce, a=sledru - e017341d2486
Jean-Yves AvenardBug 1121876 - Configure WMF decoder to output PCM 16. r=cpearce, a=sledru - cd88be2b57ac
Robert LongsonBug 1119698 - Ensure image elements take pointer-events into account. r=jwatt, a=sledru - 94e7cb795a05
Chris PearceBug 1123498 - Make MP4Reader skip-to-next-keyframe less aggressively. r=mattwoodrow, a=sledru - cee6bfbbecd7
Jean-Yves AvenardBug 1123507 - Prevent out of bound memory access. r=edwin, a=sledru - 8691f7169392
Geoff BrownBug 1105388 - Avoid robocop shutdown crashes with longer wait. r=mfinkle, a=test-only - ea7deca21c27
Anthony JonesBug 1116056 - Change MOZ_ASSERT() to NS_WARNING() in Box::Read(). r=jya, a=sledru - 54386fba64a7
Jean-Yves AvenardBug 1116056 - Ensure all atoms read are valid. r=mattwoodrow, a=sledru - 1f392909ff1f
Xidorn QuanBug 1121350 - Fix extra space on right from whitespace. r=roc, a=sledru - 598cd9c2e480
Kartikaya GuptaBug 1120252 - Avoid trying to get the APZCTreeManager if APZ isn't enabled. r=mattwoodrow, a=bustage - 1b21115851ef

Advancing ContentRequest for Innovation: Content and Ad Tech

We founded the Content Services group at Mozilla in order to build user-first content experiences and services within the Firefox browser that:

  • Respect user choice
  • Respect user data
  • Provide user value
  • Where possible, create new revenue opportunities for Mozilla and our partners

We have delivered Tiles in Firefox and have successfully tested some content partnerships. Our next objectives are:

  1. To provide a better content and advertising experience for our users within Firefox.  This may include but is not limited to the creation of new units, better personalization, and a higher volume of partners for varied content.
  2. To push the industry forward.  We are sure that there are content and advertising technology companies who aspire to the same principles we do but do not have the tools to act with today.

That’s why in the next few days we will be contacting a number of content and advertising tech companies, both large and small, to discuss an RFI (“Request for Innovation” – a partnership proposal) for providing more automation and scale in our offering.  Scale allows us to deliver content to our users across the globe so we keep the experience for users fresh and current.  Automation allows us to do this on a scale that’s significant.  We have to engage with the industry’s state-of-the-art.  That means working programmatically (and this can be a very complex space to operate in).  We know that there are many people in ad tech who welcome our involvement – many have already joined the project.

One of Mozilla’s distinct qualities is its ability to bring in champions for our cause. From advocating for open standards to sharing the vision of an open mobile ecosystem, we are at our best when we focus on our own competence and bring others into our community.

This will not be business as usual.  We have a very clear sense of who we would and would not partner with, and any relationship we enter into has to support our values.  And while there may be some areas for discussion, we will not partner with organizations that blatantly disrespect the user.

We are explicit about this in the RFI: we want to work with partners who align with the Mozilla mission and our user-centric ethos to change and evolve the industry through this engagement.  As talked about in previous posts on this blog, we’re looking for support amongst our three core principles:

  • Trust: Always architect with honesty in mind. Ask, “Do users understand why they are being presented with content? Do they understand what fragments of their data underscore advertising decisions?”
  • Transparency: Always be transparent. “Is it clear to users why advertising and content decisions are made? Is it clear how their data is being consumed and shared?  Are they aware and openly contributing to the dialog?”
  • Control: Always put the control with the user. “Do users have the ability to control their own data? Do they have the option to be completely private, completely public or somewhere in between?”

Our team is working hard to deliver against these promises to our users:

  • We believe digital advertising can respect users’ privacy choices.
  • We can build useful products and experiences that users will choose to engage with, and provide an experience that delivers value.
  • We believe publishers should respect browser signals around tracking and privacy. Our content projects will respect DNT signals.
  • We will collect and retain the minimal amount of data required to provide value to users, advertisers, and publishers.
  • We will put users in control of product feature opt-in/out.

We launched the early version of our platform in the Firefox anniversary release (33.1) last November, and we’ve been learning and tweaking it since.  2015 is a big year for us to scale and build better experiences, and we’re looking forward to sharing these with you.

Feel free to reach out to us (contentservices@mozilla.com) or join our interest list.  

Christian HeilmannBrowsers, Services and the OS – oh my…

Yesterday’s two-hour Windows 10 briefing by Microsoft had some very interesting things in it (The Verge did a great job live-blogging it). I was waiting for lots of information about the new browser, code-named Spartan, but most of it was about Windows 10 itself. This is, of course, understandable, and shows that I perhaps care about browsers too much. There was interesting information about Windows 10 being a free upgrade, Cortana integration on all platforms, and streaming games from Xbox to Windows and vice versa. The big wow factor at the end of the briefing was HoloLens, which makes interactivity like Iron Man had in his lab not that far-fetched any longer.

HoloLens in action

For me, however, the whole thing was a bit of an epiphany about browsers. I’ve always seen browsers as my main playground and got frustrated by the lack of standards support across them. I got annoyed by users not upgrading to new ones, or companies making that hard. And I was disappointed by developers supporting their pet browsers and demanding people use the same. What I missed out on was how amazing browsers themselves have become as tools for end users.

For end users the browser is just another app. The web is not the thing alongside your computing interaction any longer; it is just a part of it. Just because I spend most of my day in the browser doesn’t make it the most important thing. In essence, the interaction of the web and the hardware you have is the really interesting part.

A lot of innovation I have seen over the years that was controversial at that time or even highly improbable is now in phones and computers we use every day. And we don’t really appreciate it. Google Now, Siri and now Microsoft’s Cortana integration into the whole system is amazingly useful. Yes, it is also a bit creepy and there should be more granular insight into what gets crawled and what isn’t. But all in all isn’t it incredible that computers tell us about upcoming flights, traffic problems and remind us about things we didn’t even explicitly set as a reminder?

Spartan demo screenshot by The Verge

The short, 8 minute Spartan demo in the briefing showed some incredible functionality:

  • You can annotate a web page with a stylus or mouse, or add comments to any part of the text
  • You can then collect these, share them with friends or watch them offline later
  • Reading mode turns the web into a one-column, easy-to-read version. Safari and mobile browsers like Firefox Mobile have this, and third-party services like Readability did it before.
  • Firefox’s awesome bar and Chrome’s Google Now integration are also in Windows, with Cortana available anywhere in the browser.

Frankly, not all of that is new, but I have never used these features. I was too bogged down in what browsers cannot do, instead of checking what is already possible for normal users.

I’ve mentioned this a few times in talks lately: a lot of the innovation of add-ons, apps and products is merging with our platforms. Where in the past it was a sensible idea to build a weather app and expect people to go there or even pay for it, we now get this kind of functionality with our platforms. This is great for end users, but it means we have to be up to speed on what the user interfaces of the platforms look like these days, instead of assuming we need to invent all the time.

Looking at this functionality made me remember a lot of things promised in the past but never really used (at least by me or my surroundings):

  • Back in 2001, Microsoft introduced Smart Tags, which caused quite a stir in the writing community as it allowed third-party commenting on your web content without notifying you. Many a web site added the MSSmartTagsPreventParsing meta tag to disallow this. The annotation feature of Spartan now is this on steroids. Thirdvoice (wayback machine archive) was a browser add-on that did the same, but got creepy very quickly by offering you things to buy. Weirdly enough, Awesome Screenshot, an annotation plug-in, also now gets very creepy by offering you price comparisons for your online shopping. This shows that functionality like this doesn’t seem to be viable as a stand-alone business model, but very much makes sense as a feature of the platform.
  • Back in 2006, Ray Ozzie of Microsoft at eTech introduced the idea of the Live Clipboard. It was this:
    [Live Clipboard…] allows the copy and pasting of data, including dynamic, updating data, across and between web applications and desktop applications.
    The big thing about this was that it would have been an industrial size use case for Microformats and could have given that idea the boost it needed. However, despite me pestering Chris Wilson of – then – Microsoft at @media AJAX 2006 about it, this never took off. Until now, it seems – except that the clippings aren’t live.
  • When I worked at Yahoo, Browser Plus came out of a hackday: an extension to browsers that allowed easier file uploads and drag-and-drop between browser and OS. It also gave you desktop notifications. One of the use cases shown at the hack day was to drag and drop products from several online stores and then check out in one step with all of them. This, still, is not possible; I’d wager that legal problems and tax reasons are the main blockers there. Drag-and-drop, uploads and desktop notifications are now reality without add-ons. So we’re getting there.

This year will be very exciting. Not only do HTML5 and JavaScript get new features all the time; browsers also seem to be getting much, much smoother at integrating into our daily lives. This spells doom for a lot of apps. Why use an app when the functionality is already available with a simple click or voice command?

Of course, there are still many issues to fix, mainly offline and slow-connection use cases. Privacy and security are another problem. Convenient as it is, there should be some way to know what is listening in on me right now and where the data goes. But I, for one, am very interested in the current integration of services into the browser and the browser into the OS.

Henrik SkupinFirefox Automation report – week 49/50 2014

In this post you can find an overview about the work happened in the Firefox Automation team during week 49 and 50.

Highlights

During the first week of December the all-hands work week happened in Portland. Those were some great and inspiring days, full of talks, discussions, and conversations about various things. Given that I do not see my colleagues that often in real life, I took this opportunity to talk to everyone who is partly or fully involved in projects of our automation team. There are various big goals in front of us, so clearing up questions and finding the next steps to tackle ongoing problems was really important. In the end we came away with a long list of to-do items and more clarity about previously unclear tasks.

In week 50 we got some updates landed for Mozmill CI. Due to a regression from the blacklist landing, our l10n tests hadn’t been executed for any locale of the Firefox Developer Edition. Since the fix landed, we have seen problems with access keys in nearly every locale for a new test, which covers the context menu of web content.

Also, we would like to welcome Barbara Miller to our team. She joined us as an intern via the FOSS outreach program run by GNOME. She will be with us until March and will mainly work on testdaybot and the conversion of Mozmill tests to Marionette. The latter project is called m21s and details can be found on its project page. Soon I will post more details about it.

Individual Updates

For more granular updates of each individual team member please visit our weekly team etherpad for week 49 and week 50.

Meeting Details

If you are interested in further details and discussions you might also want to have a look at the meeting agenda, the video recording, and notes from the Firefox Automation meeting of week 48. Due to the Mozilla all-hands workweek there was no meeting in week 49.

Ian BickingA Product Journal: The Technology Demo

I’m going to try to journal the process of a new product that I’m developing in Mozilla Cloud Services. My previous and first post was Conception.

As I finished my last post I had a product idea built around a strategy (growth through social tools and sharing) and a technology (freezing or copying the markup). But that’s not a concise product definition centered around user value. It’s not even trying. The result is a technology demo, not a product.

In my defense I’m searching for some product, I don’t know what it is, and I don’t know if it exists. I have to push this past a technology demo, but if I have to start with a technology demo then so it goes.

I’ve found a couple specific experiences that help me adapt the product:

  • I demo the product and I sense an excitement for something I didn’t expect. For example, a view that I thought was just a logical necessity might be what most appeals to someone else. To do this I have to show the tool to people, and it has to include things that I think are somewhat superfluous. And I have to be actively reading the person viewing the demo to sense their excitement.

  • Remind myself continuously of the strategy. It also helps when I remind other people, even if they don’t need reminding – it centers the discussion and my thinking around the goal. In this case there’s a lot of personal productivity use cases for the technology, and it’s easy to drift in that direction. It’s easy because the technology facilitates those use cases. And while it’s cool to make something widely useful, that won’t make this tool work the way I want as a product, or work for Mozilla. (And because I plan to build this on Mozilla’s dime it better work for Mozilla! But that’s a discussion for another post.)

  • I’ll poorly paraphrase something I’m sure someone can source in the comments: a product that people love is one that makes those people feel great about themselves. In this case, makes them feel like a journalist and not just a crank, or makes them feel like they are successfully posing as a professional, or makes them feel like what they are doing is appreciated by other people, or makes them feel like an efficient organizer. In the product design you can exalt the product, try to impress people, try to attract compliments on your own prowess, but love comes when a person is impressed with themselves when they use your product. This advice helps keep me from valuing cleverness.

A common way to pull people out of technology-focused thinking is to ask “what problem does this solve?” While I appreciate this question more than I used to, it still makes me bristle. Why must everything be focused on problems? Why not opportunities! Why? An answer: problems are cases where a person has already articulated a tension and an openness to resolution. You have a customer in waiting. But must we confine ourselves to the partially formed conventional wisdom that makes something a “problem”? (One fair answer to this question is: yes. I remain open to other answers.) Maybe a more positive alternative to “what problem does this solve?” is “what does this let people do that they couldn’t do before?”

What I’m certain of is that you should constantly remember the people using your tool will care most about their interests, goals, and perspective; and will not care much about the interests, goals, or perspective of the tool maker.

So what should this tool do? If not technology, what defines it? A pithy byline might be share better. I don’t like pithy, but maybe a whole bag of pithy:

  • Improving on the URL
  • Own what you share
  • Share content, not pointers
  • Share what you see, anything you see
  • Every share is a message, make it your message
    Dammit, why do I feel compelled to noun “share”?
  • Share the context, the journey, not just the web destination
  • Own your perspective, don’t give it over to site owners
  • Know how and when people see what you share
  • Build better content, even if the publisher doesn’t
  • Trade in content, not promises for content
  • Copy/enhance/share

No… quantity doesn’t equal quality, I suppose. Another attempt:

When you share, you are a publisher. Your medium is the IM text input, or the Facebook status update, or the email composition window. It seems casual, it seems pithy, but that individual publishing is what the web is built on. I respect everyone as a publisher, every medium as worthy of improvement, and this project will respect your efforts. We will try to make a tool that can make every instance just a little bit better, simple when all you need is simple, polished if you want. We will defer your decisions because you should decide in context, not make decisions in the order that makes our work easier; we will be transparent to you, your audience, and your source; respect for the reader is part of our brand promise, and that adds to the quality of your shares; we believe content is a message, a relationship between you and your audience, and there is no universally appropriate representation; we believe there is order and structure in information, but only when that information is put to use; we believe our beliefs are always provisional and tomorrow it is our prerogative to rebelieve whatever we want most.

Who is we? Just me. A pretentiously royal we. It can’t stay that way for long though. More on that soon…

[The next post in this series is To MVP Or Not To MVP]

Stormy PetersAmazon Echo: 7 missing features

We have an Amazon Echo. It’s been a lot of fun and a bit frustrating and a bit creepy.

  • My youngest loves walking into the room and saying “Alexa, play Alvin and the Chipmunks”.
  • I like saying “Alexa, set a 10 minute timer.”
  • And we use it for news updates and music playing.

7 features it’s missing:

  1. “Alexa, answer my phone.” The Echo can hear me across the room. When my phone rings, I’d love to be able to answer it and just talk from wherever I am.
  2. “Alexa, tell me about the State of the Union Address last night.” I asked a dozen different ways and finally gave up and said “Alexa, play iHeartRadio Eric Church.” (I also tried to use it to cheat at Trivia Crack. It didn’t get any of the five questions I asked it right.)
  3. Integration with more services. We use Pandora, not iHeartRadio. We can switch. Or not. But ultimately the more services that Echo can integrate with, the better for its usefulness. It should search my email, Evernote, recipes, …
  4. Search. Not just the State of the Union, but pretty much any search I’ve tried has failed. “Alexa, when is the post office open today?” It just added the post office to my to-do list. Or questions that any 2-year-old can answer: “Alexa, what sound does a dog make?” It does do math for my eight-year-old. “Alexa, what’s 10,000 times 1 billion?” and she spits out the answer to his delight. He’s going to be a billionaire.
  5. More lists. Right now you can add items to your shopping list, your to-do list and your music playlists. That doesn’t work well for a multi-person household. Each of us wants multiple lists.
  6. Do stuff. I’d love to say “Alexa, reply to that email from Frank and say …” Or “Alexa, buy the top rated kitchen glove on Amazon.” or “Alexa, when will my package arrive?”
  7. Actually cook dinner. Or maybe just order it. :)

What do you want your Amazon Echo to do?

Mike ShalCombining Nodes in Graphviz

Graphviz is a handy tool for making graphs. The "dot" command in particular is great for drawing dependency graphs, though for many real-world scenarios there are simply too many nodes to generate a useful graph. In this post, we'll look at one strategy for automatically combining similar nodes so that a more understandable dependency structure is revealed.
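
The post body isn’t syndicated here, but the idea lends itself to a small illustration. Below is a minimal sketch of one such strategy; it is my reconstruction, not necessarily the post’s exact approach. It assumes that nodes sharing the same predecessors and successors carry no extra structure and can be fused into a single combined node:

    # Minimal sketch: combine nodes whose incoming and outgoing edges are
    # identical, then emit the reduced graph as Graphviz dot source.
    # The edge list is a made-up dependency graph for illustration.
    from collections import defaultdict

    edges = [("a", "x"), ("b", "x"), ("x", "c1"), ("x", "c2"),
             ("c1", "out"), ("c2", "out")]

    preds, succs, nodes = defaultdict(set), defaultdict(set), set()
    for src, dst in edges:
        nodes.update((src, dst))
        succs[src].add(dst)
        preds[dst].add(src)

    # Group nodes by their (predecessors, successors) signature...
    groups = defaultdict(list)
    for n in sorted(nodes):
        groups[(frozenset(preds[n]), frozenset(succs[n]))].append(n)

    # ...and collapse each group into one combined node.
    combined = {n: ",".join(g) for g in groups.values() for n in g}

    print("digraph deps {")
    for src, dst in sorted({(combined[s], combined[d]) for s, d in edges}):
        print('  "%s" -> "%s";' % (src, dst))
    print("}")

Running this collapses a/b and c1/c2 into combined nodes, shrinking six edges to three; nodes that link to each other end up with different signatures and are left alone.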

Mike HommeyFx0を購入

Two weeks ago, I went to a local au shop to get my hands on the Fx0, KDDI’s LG-manufactured Firefox OS phone, which was released in Japan for Christmas in a few flagship shops and on the web, and on January 6 everywhere else in Japan.

They had it on display, like any other phone.

But they didn’t have any stock, so I couldn’t bring one home; I ordered one instead.

Fast forward to two days ago: the shop called to say they had received it, and I went to get it yesterday.

Unboxing

Since the phone is not sold without a carrier subscription, the shop staff does the unboxing for you, to place the SIM card in the phone. But let’s pretend that didn’t happen.

The Fx0 comes in a gold box with a gold Firefox logo, wrapped in a white box with the characters “Fx0” embossed.

Opening the gold box, unsurprisingly, reveals the gold transparent phone.

Reading articles about this phone, opinions are divided about its look. I’m on the side that thinks it looks awesome, especially from the back. It does look bulky, probably because of its rather sharp edges, but it’s not much larger than a Nexus 4. Nor is it much thicker.

One side has “au” embossed, and the other has “Fx0”.

One downside of the transparent theme is that it limited the types of materials that could be used, so it sadly feels like plastic to the touch. At least, I think that’s why it is this way.

At the bottom of its front, a single “home” button, showing the Firefox logo.

Turning it on

Well, it was already on when I first got my hands on it, but in our pretense of unboxing, let’s say it was not, and that I turned it on for the first time (which, in some sense, is true). This is what it looks like when it boots:

After unlocking, the home screen appears.

I’ll be trying to use it as my main (smart)phone, and see how that goes. I’ll also test some of its KDDI specific features. Blog posts will come along. Stay tuned.

Kim MoirReminder: Releng 2015 submissions due Friday, January 23

Just a reminder that submissions for the Releng 2015 conference are due this Friday, January 23. 

It will be held on May 19, 2015 in Florence, Italy.

If you've done recent work like
  • migrating your build or test pipeline to the cloud
  • switching to a new build system
  • migrating to a new version control system
  • optimizing your configuration management system or switching to a new one
  • implementing continuous integration for mobile devices
  • reducing end-to-end build times
  • or anything else build, release, configuration and test related
we'd love to hear from you.  Please consider submitting a talk!

In addition, if you have colleagues that work in this space that might have interesting topics to discuss at this workshop, please forward this information. I'm happy to talk to people about the submission process or possible topics if there are questions.

Il Duomo di Firenze by ©eddi_07, Creative Commons by-nc-sa 2.0


Sono nel comitato che organizza la conferenza Releng 2015 che si terrà il 19 Maggio 2015 a Firenze. La scadenza per l’invio dei paper è il 23 Gennaio 2015.

http://releng.polymtl.ca/RELENG2015/html/index.html

se avete competenze in:
  • migrazione del sistema di build o dei test nel cloud
  • aggiornamento del processo di build
  • migrazione ad un nuovo sistema di version control
  • ottimizzazione o aggiornamento del configuration management system
  • implementazione di un sistema di continuos integration per dispositivi mobili
  • riduzione dei tempi di build
  • qualsiasi cambiamento che abbia migliorato il sistema di build/test/release
e volete discutere della vostra esperienza, inviateci una proposta di talk!

Per favore inoltrate questa richiesta ai vostri colleghi e alle persone interessate a questi argomenti. Nel caso ci fossero domande sul processo di invio o sui temi di discussione, non esitate a contattarmi.

(Thanks Massimo for helping with the Italian translation).

More information
Releng 2015 web page
Releng 2015 CFP now open

Air MozillaPassages: Leveraging Machine Virtualization and VPNs to Isolate the Browser from the Local Desktop

Lance Cottrell, chief scientist for Ntrepid, presents Passages, a secure browsing platform for business which leverages machine virtualization and VPNs to completely isolate the browser...

Patrick ClokeGoogle Summer of Code 2015 Project Ideas for Mozilla

As Florian announced last Thursday, now is the time to brainstorm and discuss project ideas for Google Summer of Code 2015. Mozilla has participated in every previous Google Summer of Code (GSoC), and hopes to participate again this year! In order to help ensure we’re selected, we need project ideas before February 20th, 2015!

There are always projects that we’re passionate about, but keep getting pushed down our ever growing to-do lists. GSoC is a great opportunity to introduce a new member to your team, and have a student work full time on a project for 3 months.

What makes a good project?
  • A project you’re passionate about and has a clear mentor.
  • It should take (a student) roughly 3 months to design, code, test, review, etc.
  • It should not be in the critical path to your next release/milestone.
  • Is related to any Mozilla project (e.g. Firefox, Firefox OS, Thunderbird, Instantbird, SeaMonkey, Bugzilla, l10n, NSS, QA, SUMO, Rust, and many more!)

Please add ideas you might have to the brainstorming page, eventually these ideas will move to the formal ideas page. Please ensure you read the directions at the top of the page.

I’d also like to thank Gerv for doing an awesome job for the past 10 years as the organization administrator. He is now passing the reins to Florian and me, the new points of contact for GSoC at Mozilla. If you have any questions about GSoC, please check the FAQ and, if it is still not answered, please contact Florian or me directly.

For Students

The application period for students is March 16th, 2015 to March 27th, 2015. It is not too soon to start discussing ideas with a potential mentor/community, however. If you have an idea of what you’d like to work on, feel free to seek out that area of the community, introduce yourself and maybe find a mentored bug to work on.

Gervase Markham“Interactive” Posters

Picture of an advertising poster with a sticker alongside, showing a QR code and short URL

I saw this on a First Capital Connect train here in the UK. What could possibly go wrong?

Ignoring the horrible marketing-speak “Engage with this poster” header, several things can go wrong. I didn’t have NFC, so I couldn’t try that out. But scanning the QR code took me to http://kbhengage.zpt.im/u/aCq58 which, at the time, was advertising for… Just Eat. Not villaplus.com. Oops.

Similarly, texting “11518” to 78400 produced:

Thanks for your txt, please tap the link:
http://kbhengage.zpt.im/u/b6q58

Std. msg&data rates may apply
Txt STOP to end
Txt HELP for help

which also produced content which did not match the displayed poster.

So clearly, the first risk is that the electronic interactive bits are not part of the posters themselves, and so the posters can be changed without the interactive parts being updated to match.

But also, there’s the secondary risk of QR codes – they are opaque to humans. Someone can easily make a sticker and paste a new QR code on top of the existing one, and no-one would see anything immediately amiss. But when you tried to “engage with this poster”, it would then take you to a website of the attacker’s choice.
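
To underline how low the bar for that attack is: generating a QR code for an arbitrary URL takes a couple of lines. A minimal sketch, assuming the third-party Python qrcode package and an obviously made-up URL:

    # Nothing about a printed QR code reveals its destination to a human,
    # which is what makes the sticker attack practical.
    # Assumes the third-party "qrcode" package (pip install qrcode[pil]).
    import qrcode

    img = qrcode.make("http://attacker.example/fake-landing-page")
    img.save("sticker.png")  # print it, cut it out, paste it over the original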

Mozilla FundraisingShould we use the Mozilla or Firefox logo on our donation form?

Our end of year fundraising campaign has finished now, but while it’s fresh in our minds we still want to write up and share the results of some of the A/B tests we ran during the campaign that might be …

Pete MooreWeekly review 2015-01-21

This week I’ve started work on the Go port of the taskcluster client: https://github.com/petemoore/taskcluster-client-go.

This week I learned about AMQP, goroutines and channels, Hawk authentication, and TaskCluster architecture, and started using some Go libraries.

Other activities:

  • b2g bumper code reviews

Bogomil ShopovWhy is Bulgaria Web Summit 2015 so different from any other event?

When I talk to sponsors and even to friends about the Summit, they always ask me what makes our event different.

So here’s the secret:

We started this event 11 years ago (under a different name) as an effort to create something amazing and affordable for IT guys in Bulgaria. At the same time, we never compromise on quality. The main purpose of the event is for our attendees to learn new things, which they can apply in their work the very next day, and to recoup the “investment” they have made in the conference.

Speakers

At most of the conferences I’ve been to in Europe, well-trained company folks talk about their success at Fakebook or Playpal and how to clone it at your company. This doesn’t work, and you will not see it at our event; meanwhile, at those conferences you have to spend tons of money just to listen to the guy.

At most of the conferences I’ve been to in Europe, well-respected gurus talk about some programming art. They do that all the time: they just talk, they don’t code anymore. You will not see this at our event. We invite only practising professionals, who share their experience with you; the next day they will not depart for another event, but will go back to doing the thing they do best.

We have had amazing speakers over the years. Some of them became friends of the event and come again and again, even without us paying them a dime. We build relationships with our speakers, because we are Balkan people and this is what we do.

Many people still remember Monty’s Black Vodka, Richard Stallman‘s socks and many other stories that must be kept secret :)

 

The audience

We do have the best audience ever! I mean it. We have people who haven’t missed an event since 2004. They are honest: if you screw up they will tell you, and they will give you kudos if you do something amazing. In most years, the tickets are sold out months before the event, even without a schedule and even without the speakers yet known, because we have proved the event is good.

We have people who met at our event and got married, we have people who met at our event and started businesses together, we have companies that hired great professionals because of our events; we have kicked off many careers by showing people great technologies and ways to use them.

 

The money

Of course, it’s not all about money. We do need money to make the event great, but our main goal is not to make a profit out of it. As you can see, the entrance fee is low: for the same event in Europe (same speakers) you would have to pay 5–10 times more. We realize that we live in a different country and the conditions are different, but we are trying to find a way to keep the fee low and at the same time keep up the quality of the talks and emotions. We can achieve this only thanks to our sponsors. Thank you, dear sponsors!

 

Experiments

We do experiment a lot. We are trying to make a stress-free event, full of nice surprises, parties and interesting topics.

We are not one of those conferences where you get tons of coffee in the breaks (sometimes we don’t even have breaks, or coffee for that matter: just beer!) and a schedule 3 months in advance, or where you can sit and pretend you are listening because someone paid your fee. With us you are a part of the event all the time: we have games, hackathons and other stuff you can take part in. We give you the bread and butter; use your mind to make a sandwich. :)

 

We grow

We have failed many times at many tasks, but we are learning and improving. We are not a professional team doing this for the money; we are doing it for fun and to help our great and amazing community. We count on volunteers. Thank you, dear volunteers!

Marketing?

We are one of the few events that don’t keep a history of past editions on their website. Duh! We believe that if you visit us once (because a friend told you about us), you don’t need a silly website to convince you to come again :) We do not spend (a lot of) money on marketing or professional services. We count on word of mouth and on you. Thank you!

Join us and see for yourself!

Gervase MarkhamYour Top 50 DOS Problems Solved

I was clearing out some cupboards at our family home when I came across a copy of “Your Top 50 DOS Problems Solved”, a booklet published free with “PC Answers” magazine in 1992 – 23 years ago. PC Answers sadly did not survive, closing in 2010, and its domain is now a linkfarm. However, the sort of problems people had in those days makes fascinating reading.

Now that I’ve finished blogging quotes from “Producing Open Source Software” (the updated version of which has, sadly, yet to hit our shelves), I think I’ll blog through these on an occasional basis. Expect the first one soon.

Air MozillaBay Area useR Group Official Meetup

The Bay Area R Users Group hosts Ryan Hafen, Hadley Wickham and Nick Elprin. Ryan Hafen - Tessera is a statistical computing environment that enables...

Andreas GalWebVR is coming to Firefox Nightly

In 2014 Mozilla started working on adding VR capabilities to the Web. Our VR team proposed a number of new Web APIs and made available an experimental VR build of Firefox that supports rendering Web content to Oculus Rift headsets.

Consumer VR products are still in a nascent state, but clearly there is great promise for this technology. We have enough confidence in the new APIs we have proposed that today we are taking the step of integrating them into our regular Firefox Nightly builds. Head over to MozVR for all the details, and if you own an Oculus Rift headset or mobile VR-capable hardware we support, give it a spin!
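
For the curious, the experimental API surface is small enough to sketch. Something along these lines enumerates VR hardware and drives a render loop from the headset’s sensors. Treat it as a sketch against the early proposal: names such as navigator.getVRDevices, HMDVRDevice, PositionSensorVRDevice and the vrDisplay fullscreen option come from the experimental builds and may change before the API stabilizes.

    // A minimal sketch against the early proposed WebVR API; these names
    // come from the experimental builds and may change before shipping.
    var canvas = document.querySelector("canvas");

    navigator.getVRDevices().then(function (devices) {
      // Find the first head-mounted display...
      var hmd = devices.filter(function (d) {
        return d instanceof HMDVRDevice;
      })[0];
      if (!hmd) {
        return; // No VR headset to drive.
      }

      // ...and the position sensor belonging to the same physical unit.
      var sensor = devices.filter(function (d) {
        return d instanceof PositionSensorVRDevice &&
               d.hardwareUnitId === hmd.hardwareUnitId;
      })[0];
      if (!sensor) {
        return;
      }

      // Render loop: read head orientation/position each frame, update the
      // camera from it, and draw the scene once per eye.
      function frame() {
        var state = sensor.getState();
        // ...use state.orientation / state.position to place the camera...
        requestAnimationFrame(frame);
      }
      requestAnimationFrame(frame);

      // In these builds, presenting to the headset rides on fullscreen.
      canvas.mozRequestFullScreen({ vrDisplay: hmd });
    });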

Filed under: Mozilla

Alistair LaingNew Year…..new opportunities.

I have been meaning to get to grips with Git and GitHub so I can contribute to an open source project and its community. My main excuse has been the difficulty of finding a suitable repository: everything I looked at was either a project with little support from the owner or the community, or one with a large following and far too much noise for a beginner. I didn’t even attempt a large project like jQuery or Bootstrap, which I consider to be on a whole different level.

The Seed

Recently I’ve been trying to teach a junior developer at work how to debug and develop using Firefox’s Firebug extension (the only devtool my team has for developing/debugging the frontend). There was a really useful extension for Firebug called FireQuery that extended Firebug’s capabilities to assist with developing/debugging projects that use jQuery. I noticed that the latest version of FireQuery no longer works because it is not compatible with Firebug 2+. I contacted the FireQuery owner, who was really supportive of the idea of me getting the plug-in back up to speed.

Water, light and love

After learning that Firebug 3 (dubbed Firebug.next) was on the horizon and would basically be a complete rewrite, I decided to push for FireQuery to be compatible with v3. At first, getting my development environment in order felt like climbing Mount Everest, for the following reasons:

  1. I’ve only ever developed for the Web and had never thought about developing for a browser.
  2. Firebug.next is going through a considerable amount of change (supporting e10s, using the native DevTools, remote logging), so I found the documentation and guidance notes a bit confusing about what you actually need. That wasn’t entirely Firebug’s fault; Firefox itself plays a part, as I explain in the next point.
  3. Firefox is also going through a transition. Extensions were previously built with an SDK tool called cfx, which is based on Python and is being deprecated. It is being replaced by a tool called jpm, which is based on Node.js (for those who don’t know, that’s server-side JavaScript).

Flowering

Having scratched my head a couple of times and asked myself why on earth I was starting this, I decided to get in contact with the Firebug team to let them know my plans. The easiest way to get hold of the team is through the #firebug IRC channel. Since that first conversation, they have managed to persuade me to contribute to Firebug itself instead. It’s only been a couple of weeks, but I’ve learnt so much. It has been really interesting, though sometimes mind-boggling, when you consider that you are effectively trying to debug a debugger.

I’ve also learnt a few things about the SDK tools cfx and jpm. Initially I thought I needed to install both of them, but after chatting to Jan ‘Honza’ Odvarko (the team leader) and Florent it was clear that guidance was outdated and jpm is the way to go. jpm isn’t as well documented, mainly because it’s still relatively new, and you can’t yet submit add-ons built with jpm to the official add-on repository.

Another important point is that you don’t have to download and install the Add-on SDK separately, which again I thought was a requirement. The SDK is actually included in Firefox; the only reason to download a copy and point jpm at it is to try your add-on against a different version of the SDK (i.e. the latest and greatest).
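
To make the cfx-to-jpm shift concrete, here is roughly what the workflow looks like, as I understand the jpm docs. The add-on body itself is a hypothetical two-liner for illustration, not anything taken from Firebug:

    // Workflow, run from an empty directory (commands as the jpm docs describe them):
    //   npm install --global jpm   # jpm ships via npm, reflecting its Node.js roots
    //   jpm init                   # scaffolds package.json and an index.js entry point
    //   jpm run                    # launches Firefox with the add-on installed
    //   jpm xpi                    # packages the add-on as an installable .xpi

    // index.js: a trivial, hypothetical add-on body using the SDK bundled with Firefox.
    var tabs = require("sdk/tabs");

    // Open the Firebug site in a new tab when the add-on loads.
    tabs.open("https://getfirebug.com/");

If I have read the docs correctly, jpm run launches your default Firefox, and a -b /path/to/firefox flag points it at a specific build, such as a Nightly carrying Firebug.next.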

I’m hoping to blog a bit more about what I have learnt and explain my thoughts on the whole process.

Please leave me any comments or questions below and I will try and answer them as best I can.


Matt ThompsonWhat we’re working on this Heartbeat

Transparency. Agility. Radical participation. That’s how we want to work on Webmaker this year. We’ve got a long way to go, but we’re building concrete improvements and momentum every two weeks.

We work mostly in two-week sprints or “Heartbeats.” Here are the priorities we’ve set together for the current Heartbeat, which ends January 30.

Questions? Want to get involved? Ask questions in any of the tickets linked below, say hello in #webmaker IRC, or get in touch with @OpenMatt.

What we’re working on now

See it all (always up to date): http://build.webmaker.org/now 

Or see the work broken down by:

Learning Networks

  • Design & test new teach.webmaker.org wireframes
  • Get the first Webmaker Club curriculum module ready for testing
  • Finalize our documentation for Badges / Credentialing
  • Document our Q1 / Q2 plan for Training

Learning Products

Desktop / Tablet:

  • Improve user on-boarding (Phase II)
  • Improve our email communications after users sign up
  • Create better moderation functionality for webmaker.org/explore (formerly known as “the gallery”)
  • Build a unified tool prototype (Phase II)

Mobile

  • Draft demo script and plan our marketing activities for Mobile World Congress
  • Make localization improvements to the Webmaker App
  • Build and ship device integrations and a screenshot service for Webmaker App
  • Distribute the first draft of our Kenya Field Report

Engagement

  • Prep and execute Data Privacy Day campaign (Jan 28)
  • Prep for Net Neutrality Campaign (Feb 5)
  • Draft a branding plan for Learning Products and Learning Networks
  • Design a splash page for Mobile World Congress

Planning & Process

  • Design and execute a communications plan on our overall 2015 plan
  • Document all our Q1 goals and KPIs in one spot
  • Add those quarterly goals to our dashboard
  • Ship updated documentation to build.webmaker.org (including: “How we do Heartbeats” & “How to use GitHub Issues”)