Hannah Kane: Kayaks, Nachos, Pipelines, and Funnels, or: The Mofo Engagement Fall Work Week

Last week, the Mofo Engagement Team and Friends (terrible band name) met in Toronto for some serious boating, eating, and hacking.

Prelude: With some much appreciated assistance from Erika, I got over my fear of boats and managed to canoe up and down a bit of the Humber River during our pre-work week Super Engagement Team Fun Day. We may not have been fast, but we had style (assuming your definition of style includes crashing into the riverbank). Later, I tricked several of my co-workers into ordering giant platters of nachos at Sneaky Dee’s. It was a cheesy, oversized start to the work week.

On to the work-y part!

The agenda was ambitious. We had four concurrent tracks, each with their own projected outcomes:

  • The set-up for the End of Year fundraising campaign
  • The creative brief, RACI, and roadmap for the Fall Webmaker Sales Campaign
  • The 2015 Grants Pipeline
  • And the partnership strategy, systems, and sales team for growing Webmaker

Did we achieve what we wanted to achieve?

All that, and more.

The End of Year Fundraising team was on fire. The project had been well-prepped in advance, so the team was able to use the work week as a sprint and deliver a slew of prototypes and designs. They tackled the snippet, optimized donation forms including a brand new sequential flow, localization, the fundraising.mozilla.org website redesign, overarching branding, and even an awesome community marketing idea.

The Partnerships team had several rich conversations where they identified possible partners, clarified roles, and simulated the entire sales flow using human props, sticky notes, and impressive improv skills. (I left that session with a post-it note on my laptop that read, “Why would clown mentors come back to the site?” A question for the ages.)

A centerpiece of the Fall Webmaker Campaign is the post-sales funnel which includes custom partner landing pages and a choose-your-own-adventure style event wizard to help people get started with one of three easy/fun Maker Party events. The entire funnel got spec’ed and wireframed and the Event Wizard got some design love during the work week.

(Note: The Partnerships track and the Fall Webmaker Campaign track were largely merged into a single Fall Webmaker Campaign with elements including sales, marketing, design, and product dev. The campaign is focused on reaching our 10K contributor goal, and will leverage key partners with large networks. In addition to honing our value proposition to partners, we’ll also use the opportunity to refine our marketing funnel. Later, we’ll go out and introduce a broader addressable audience to the top of the funnel. At that point, there will be a clear distinction between sales and marketing, but for the Fall campaign, we’re working in tandem.)

The 2015 Grants Pipeline team got to spend some quality time with our brand new Salesforce installation. They make web-to-lead forms, trigger events, and dashboards seem downright glamorous!

Phew.

Does this seem like a lot of stuff? That’s because it is a lot of stuff! But never fear, it’s all summarized on the Engagement Team Workbench. (You can also see a complete list of what we delivered during the work week here.)

I was amazed at how much was accomplished during the work week. My co-workers are some of the raddest, most capable and action-oriented people I’ve had the privilege of knowing. Lucky me to be a part of it!

Frédéric Harper: The State of Open Source in 2014 at the Salon du Logiciel Libre et des technologies ouvertes du Québec

Fred@S2LQ

Two weeks ago, I headed to Quebec City to present at the Salon du Logiciel Libre et des technologies ouvertes du Québec (S2LQ). When I was approached to present, I wasn't quite sure what to talk about: the target audience was less technical than the developers I usually present to. I had thought about talking about Mozilla, but its structure is so different from other companies built on open technologies that I refrained. I considered presenting Firefox OS as I often do, but I would inevitably have drifted too often into a technical discussion. So I decided to go back to the basics of why, but also where Open Source comes from and where we are heading: a higher-level presentation to raise awareness among the people there, which was, I must say, very well received.

Once again, since Christian Aubry was there with Savoir Faire Linux, I got a high-quality recording of my presentation.

In the end, my goal was of course to raise awareness of Open Source, but also to show a less heavy-handed approach to rallying people to the cause. Despite my short stay in Quebec City, I really enjoyed my day at this event and greatly appreciated Colonel Guimard's keynote on the migration to Open Source at the French gendarmerie: a useful and pragmatic model that the government here should envy and copy!


--
The State of Open Source in 2014 at the Salon du Logiciel Libre et des technologies ouvertes du Québec is a post on Out of Comfort Zone from Frédéric Harper

Henrik Skupin: Firefox Automation report – week 31/32 2014

In this post you can find an overview of the work that happened in the Firefox Automation team during week 31 and 32. It’s a bit less than usual, mainly because many of us were on vacation.

Highlights

The biggest improvement that came in during week 32 was the set of fixes for the TPS tests. Cosmin spent a bit of time investigating the remaining underlying issues and got them fixed. Since then we have had a consistently green test run, which is fantastic.

While development for the new TPS continuous integration system continued, we were blocked for a couple of days by the outage of restmail.net due to a domain move. After the DNS entries got fixed, everything was working fine again for Jenkins and Mozilla Pulse based TPS tests.

For Mozmill CI we agreed that the Endurance tests we run across all branches are not that useful, and only take a lot of time to execute – about 2 hours per test run! The branch where newly landed features have the most impact is Nightly, so Henrik came up with a patch to run those tests only for en-US Nightly builds.

Individual Updates

For more granular updates of each individual team member please visit our weekly team etherpad for week 31 and week 32.

Meeting Details

If you are interested in further details and discussions you might also want to have a look at the meeting agenda, the video recording, and notes from the Firefox Automation meetings of week 32. There was no meeting in week 31.

Jay Patel: Grow Mozilla discussion this Thursday

If you’re interested in helping new people get involved with Mozilla, join us Thursday for an open community building forum.

Daniel Stenberg: Good bye Rockbox

I’m officially not taking part in anything related to Rockbox anymore. I’ve unsubscribed and I’m out.

In the fall of 2001, my friend Linus and my brother Björn had both bought the portable Archos Player, a harddrive-based mp3 player, and, slightly underwhelmed by its firmware, they decided they would have a go at trying to improve it. All three of us had been working with embedded systems for many years already, and I was immediately attracted to the idea of reverse engineering this kind of device and trying to improve it. It sounded like a blast to me.

In December 2001 we had the first test program actually running on the device and flashing an LED. The first little step of what would become a rather big effort. We wrote a GPLed mp3 player firmware replacement, entirely from scratch without re-using any original parts. A full home-grown tiny multitasking operating system with a UI.

Fast-forwarding through history: we managed to get a really good firmware done for the early Archos players and we managed to move on to follow-up mp3 players too. After a decade or so, we supported well over 60 different mp3 player models, we played every music format known to man, and we usually had better battery life than the original firmwares. We could run Doom, and we had a video player, a plugin system and a system full of crazy things.

We gathered large amounts of skilled and intelligent hackers from all over the world who contributed to make this possible. We had yearly meetups, or developer conferences, and we hung out on IRC every day of the week. I still hang out on our off-topic IRC channel!

Over time, smart phones emerged as the preferred devices people would use to play music while on the go. We ported Rockbox over to Android as an app, but our pixel-based UI was never really suitable for the flexible Android world and I also think that most contributors were more interested in hacking devices than writing Android apps. The app never really attracted many users or developers so while functional it never “took off”.

mp3 players are now already a thing of the past and will soon fall into the cave of forgotten old things our children will never even know or care about.

Developers and users of Rockbox have mostly moved on to other ventures. I too stopped actually contributing to the project several years ago but I was running build clients for a long while and I’ve kept being subscribed to the development mailing list. Until now. I’m now finally cutting off the last rope. Good bye Rockbox, it was fun while it lasted. I had a massive amount of great fun and I learned a lot while in the project.

Rockbox

Robert O'Callahan: Upcoming rr Talk

Currently I'm in the middle of a 3-week visit to North America. Last week I was at a Mozilla graphics-team work week in Toronto. This week I'm mostly on vacation, but I'm scheduled to give a talk at MIT this Thursday about rr. This is a talk about the design of rr and how it compares to other approaches. I'll make the content of that talk available on the Web in some form as well. Next week I'm also mostly on vacation but will be in Mountain View for a couple of days for a planning meeting. Fun times!

Soledad Penades: Using a Flame as my main phone, day 1

Today I finally got a Flame to use as my main phone (what they call dogfooding, but it sounds atrocious to me). I had been using a Flame for testing since June or so, but I kept flashing nightly builds and let me tell you… it’s risky, to say the least.

Sadly I was busy attending to other matters (namely the DevTools meetup which is happening this week at the London office), so I didn’t have much of a chance to experiment on the phone.

My main goal was basically to flash it with an updated version of the operating system, since the Flame comes with 1.3 and I wanted to use 2.x. Then I took my SIM card out of my Android Nexus 5 and put it into the Flame. Bam, it works. Including data! No need to tinker with GPRS and APN settings and whatnot. Sweet! I already even got a spam call advising me on how to claim compensation for that accident I never had. Yay!

I also imported some of my contacts from my Google account. The importer lets you connect to GMail and then loads the contacts, and you can go through the list to choose which ones to import. Good time for some pruning of old contacts I haven’t spoken to in a while :-P

There were some rendering weirdnesses, but I haven’t filed a bug yet as I want to compare with the other phone and a freshly flashed version to see whether they have been fixed.

I can also confirm that the Twitter “app” (it’s actually more like a glorified bookmark for m.twitter.com) for FxOS is as terrible as usual. I keep internally whispering to myself: OAuth, Oauth, tokens, rate limits each time I try to use the Twitter app and get frustrated by how badly it works on every single mobile browser, so as to scare myself and avoid writing my own client with support for offline and push notifications.

Now I have to find out how to configure the alarm clock. If it doesn’t work I’ll be late to the office tomorrow—it won’t be my fault! :P

Oh and before you ask: no one at Mozilla is forcing us to use this or that phone. I’m doing this of my own volition, because other platforms keep creeping me out and I’d rather contribute to something I can trust.

PS I don’t actually have any grand plan for writing a long series of posts on my experiences on using the Flame as my main phone so don’t get too excited, teehee!


Mozilla Release Management Team: Firefox 33 beta7 to beta8

  • 46 changesets
  • 110 files changed
  • 1976 insertions
  • 805 deletions

Extension (occurrences):

  • cpp: 34
  • h: 14
  • html: 13
  • js: 11
  • jsm: 6
  • css: 4
  • xul: 3
  • xml: 3
  • ini: 3
  • cc: 3
  • c: 2
  • xhtml: 1
  • webidl: 1
  • svg: 1
  • py: 1
  • properties: 1
  • nsh: 1
  • list: 1
  • in: 1
  • idl: 1
  • dtd: 1

Module (occurrences):

  • dom: 18
  • gfx: 16
  • browser: 14
  • layout: 12
  • toolkit: 9
  • security: 6
  • js: 6
  • content: 6
  • netwerk: 5
  • mobile: 3
  • media: 3
  • widget: 2
  • xpfe: 1
  • xpcom: 1
  • modules: 1
  • ipc: 1
  • embedding: 1

List of changesets:

David Keeler: Bug 1057123 - mozilla::pkix: Certificates with Key Usage asserting the keyCertSign bit may act as end-entities. r=briansmith, a=sledru - 599ae9ec1b9c
Robert Strong: Bug 1070988 - Windows installer should remove leftover chrome.manifest on pave over install to prevent startup crash with Firefox 32 and above with unpacked omni.ja. r=tabraldes, a=sledru - 9286fb781568
Bobby Holley: Bug 1072174 - Handle all the cases XrayWrapper.cpp. r=peterv, a=abillings - bb4423c0da47
Brian Nicholson: Bug 1067429 - Alphabetize theme styles. r=lucasr, a=sledru - f29b8812b6d0
Brian Nicholson: Bug 1067429 - Create GeckoAppBase as the parent for Gecko.App. r=lucasr, a=sledru - 112a9fe148d2
Brian Nicholson: Bug 1067429 - Add values-v14, removing v14-only styles from values-v11. r=lucasr, a=sledru - 89d93cece9fd
David Keeler: Bug 1060929 - mozilla::pkix: Allow explicit encodings of default-valued BOOLEANs because lol standards. r=briansmith, a=sledru - 008eb429e655
Tim Taubert: Bug 1067173 - Bail out early if _resizeGrid() is called before the page has loaded. f=Mardak, r=adw, a=sledru - c043fec932a6
Markus Stange: Bug 1011166 - Improve the workarounds cairo does when rendering large gradients with pixman. r=roc, r=jrmuizel, a=sledru - a703ff0c7861
Edwin Flores: Bug 976023 - Fix crash in AppleMP3Reader. r=rillian, a=sledru - f2933e32b654
Nicolas Silva: Bug 1066139 - Put stereo video behind a pref (off by default). r=Bas, a=sledru - e60e089a7904
Nicholas Nethercote: Bug 1070251 - Anonymize non-chrome inProcessTabChildGlobal URLs in memory reports when necessary. r=khuey, a=sledru - 09dcf9d94d33
Andrea Marchesini: Bug 1060621 - WorkerScope should CC mLocation and mNavigator. r=bz, a=sledru - 32d5ee00c3ab
Andrea Marchesini: Bug 1062920 - WorkerNavigator strings should honor general.*.override prefs. r=khuey, a=sledru - 6d53cfba12f0
Andrea Marchesini: Bug 1069401 - UserAgent cannot be changed for specific websites in workers, r=khuey, r=bz, a=sledru - e178848e43d1
Gijs Kruitbosch: Bug 1065998 - Empty-check Windows8WindowFrameColor's customizationColor in case its registry value is gone. r=jaws, a=sledru - 12a5b8d685b2
Richard Barnes: Bug 1045973 - sec_error_extension_value_invalid: mozilla::pkix does not accept certificates with x509v3 extensions in x509v1 or x509v2 certificates. r=keeler, a=sledru - a4697303afa6
Branislav Rankov: Bug 1058024 - IonMonkey: (ARM) Fix jsapi-tests/testJitMoveEmitterCycles. r=mjrosenb, a=sledru - 371e802df4dc
Rik Cabanier: Bug 1072100 - mix-blend-mode doesn't work when set in JS. r=dbaron, a=sledru - badc5be25cc1
Jim Chen: Bug 1067018 - Make sure calloc/malloc/free usages match in Tools.h. r=jwatt, a=sledru - cf8866bd741f
Bill McCloskey: Bug 1071003 - Fix null crash in XULDocument::ExecuteScript. r=smaug, a=sledru - b57f0af03f78
Felipe Gomes: Bug 1063848 - Disable e10s in safe mode. r=bsmedberg, r=ally, a=sledru, ba=jorgev - 2b061899d368
Gijs Kruitbosch: Bug 1069300 - strings for panic/privacy/forget-button for beta, r=jaws,shorlander, a=dolske, l10n=pike, DONTBUILD=strings-only - 16e19b9cec72
Valentin Gosu: Bug 1011354 - Use a mutex to guard access to nsHttpTransaction::mConnection. r=mcmanus, r=honzab, a=abillings - ac926de428c3
Terrence Cole: Bug 1064346 - JSFunction's extended attributes expect POD-style initialization. r=billm, a=abillings - fd4720dd6a46
Marty Rosenberg: Bug 1073771 - Add namespaces and whatnot to make JitMoveEmitterCycles compile. r=dougc, a=test-only - 97feda79279e
Ed Lee: Bug 1058971 - [Legal]: text for sponsored tiles needs to be localized for Firefox 33 [r=adw a=sylvestre] - deaa75a553ac
Ed Lee: Bug 1064515 - update learn more link for sponsored tiles overlay [r=adw a=sylvestre] - b58a231c328c
Ed Lee: Bug 1071822 - update the learn more link in the tiles intro popup [r=adw a=sylvestre] - 0217719f20c5
Ed Lee: Bug 1059591 - Incorrectly formatted remotely hosted links causes new tab to be empty [r=adw a=sylvestre] - d34488e06177
Ed Lee: Bug 1070022 - Improve Contrast of Text on New Tab Page [r=adw a=sylvestre] - 8dd30191477e
Ed Lee: Bug 1068181 - NEW Indicator for Pinned Tiles on New Tab Page [r=ttaubert a=sylvestre] - 02da3cf36508
Ed Lee: Bug 1062256 - Improve the design of the »What is this« bubble on about:newtab [r=adw a=sylvestre] - 2a8947c986ed
Bas Schouten: Bug 1072404: Firefox may crash when the D3D device is removed while rendering. r=mattwoodrow a=sylvestre - 3d41bbe16481
Bas Schouten: Bug 1074045: Turn OMTC back on on beta. r=nical a=sylvestre - b9e8ce2a141b
Jim Mathies: Bug 1068189 - Force disable browser.tabs.remote.autostart in non-nightly builds. r=felipe, a=sledru - d41af0c7fdaf
Randell Jesup: Bug 1033066 - Never let AudioSegments underflow mDuration and cause OOM allocation. r=karlt, a=sledru - 82f4086ba2c7
Georg Fritzsche: Bug 1070036 - Catch NS_ERROR_NOT_AVAILABLE during OpenH264Provider startup. r=irving, a=sledru - b6985e15046b
Nicolas Silva: Bug 1061712 - Don't crash in DrawTargetDual::CreateSimilar if allocation fails. r=Bas, a=sledru - 69047a750833
Nicolas Silva: Bug 1061699 - Only crash debug builds if BorrowDrawTarget is called on an unlocked TextureClient. r=Bas, a=sledru - 4020480a6741
Aaron Klotz: Bug 1072752 - Make Chromium UI message loops for Windows call into WinUtils::WaitForMessage. r=jimm, a=sledru - 737fbc0e3df4
Florian Quèze: Bug 1067367 - Tapping the icon of a second doorhanger reopens the first doorhanger if it was already open. r=Enn, a=sledru - 3ff9831143fd
Robert Longson: Bug 1073924 - Hovering over links in SVG does not cause cursor to change. r=jwatt, a=sledru - 19338c25065c
Ryan VanderMeulen: Backed out changeset d41af0c7fdaf (Bug 1068189) for reftest-ipc crashes/failures. - dabbfa2c0eac
Randell Jesup: Bug 1069646 - Scale frame rate initialization in webrtc media_opimization. r=gcp, a=sledru - bc5451d18901
David Keeler: Bug 1053565 - Update minimum system NSS requirement in configure.in (it is now 3.17.1). r=glandium, a=sledru - 0780dce35e25

Mozilla Reps Community: ReMo Camp 2014: Impact through action

For the last 3 years the council, peers and mentors of the Mozilla Reps program have been meeting annually at ReMo Camp, a 3-day meetup to check the temperature of the program and plan for the next 12 months. This year’s Camp was particularly special because for the first time, Mitchell Baker, Mark Surman and Mary Ellen Muckerman participated in it. With such a great mix of leadership both at the program level and at the organization, it was clear this ReMo Camp would be our most interesting and productive one.

The meeting spanned 3 days:

Day 1:
The Council and Peers got together to add the finishing touches and tweaks to the program content and schedule but also to discuss the program’s governance structure. Council and Peers defined the different roles in the program that allow the Reps to keep each leadership body accountable and made sure there was general alignment. We will post a separate blog post on governance explaining the exact functions of the module owner, the peers, the council, mentors and Reps.

Day 2:
The second day was very exciting and was coined the “challenges” day, where we had Mitchell, Mark and Mary Ellen joining the Reps to work on 6 “contribution challenges”. These challenges are designed to be concrete initiatives that aim to have a quick and concrete impact on Mozilla’s product goals with large-scale volunteer participation. Mozillians around the globe work tirelessly to push the Mozilla mission forward and one of the most powerful ways of doing so is by improving our products. We worked on 6 specific areas to have an impact and identify the next steps. There’s a lot of excitement already and the Reps program will play a central role as a platform to mobilize and empower local communities participating in these challenges. More on this shortly…

Day 3:
The last day of the Camp was entirely dedicated to the Reps program. We had so many things to talk about, so many ideas, and alas the day only has so many hours, so we focused on three thematic pillars: impact, mentorship training and getting stuff done. The council and peers had spent Friday setting those priorities, the rationale being that the Mozilla Reps leadership is very good at identifying what needs to get done, and not as good with follow-through. The sessions on “impact” were prioritized over others as we wanted to figure out how to best enable/empower Reps to have an impact and follow through on all the great plans we make. Impact was broken down into three thematic buckets:

Accountability: how do we keep Reps accountable for what they have signed up for?

Impact measurement: how do we measure the impact of all the wonderful things we do?

Recognition: how do we recognize in a more systematic and fair way our volunteers who are going out of their way?

After the impact discussion, we changed gears and moved to the Mentorship training. During the preparations leading to ReMo Camp most of the mentors asked for training. Our mentors are really committed to helping Reps on the ground to do a great job, so the council and the peers facilitated a mentorship training divided in 5 different stations. We got a lot of great feedback and we’ll be producing videos with the materials of the training so that any mentor (or interested Rep) has access to this content. We will be also rolling out Q&A sessions for each mentorship station. Stay tuned if you want to learn more about mentorship and the Reps program in general.

The third part of Day 3 was “getting stuff done”, a session where we identified 10 concrete tasks (most of them pending from the last ReMo Camp) that we could actually get done by the end of the day.

The overall take-away from this Camp was that instead of designing grand ambitious plans we need to be more agile and sometimes be more realistic with what work we can get accomplished. Ultimately, it will help us get more stuff done more quickly. That spirit of urgency and agility permeated the entire weekend, and we hope to be able to transmit this feeling to each and every Rep.

There wasn’t enough time, but we spent it in the best possible way. Having the Mozilla leadership with us was incredibly empowering and inspiring. The Reps have organized themselves and created this powerful platform. Now it’s time to focus our efforts. The weekend in Berlin proved that the Reps are a cohesive group of volunteer leaders with a lot of experience and the eyes and ears of Mozilla in every corner of the world. Now let’s get together and commit to doing everything we set out to do before ReMo Camp 2015.

Roberto A. Vitillo: Telemetry meets Clojure.

tldr: Data-related telemetry alerts (e.g. histograms or main-thread IO) are now aggregated by medusa, which allows devs to post, view and filter alerts. The dashboard lets you subscribe to search criteria or individual metrics.

As mentioned in my previous post, we recently switched to a dashboard generator, “iacomus”, to visualize the data produced by some of our periodic map-reduce jobs. Given that the dashboards gained some metadata that describes their datasets, writing a regression detection algorithm based on the iacomus data-format followed naturally.

The algorithm generates a time-series for each possible combination of the filtering and sorting criteria of a dashboard, compares the latest data-point to the distribution of the previous N, and generates an alert if it detects an outlier. Stats 101.
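To make the idea concrete, here is a minimal Python sketch of that kind of outlier check (hypothetical function and parameter names; the actual implementation is the Clojure code in medusa, and the real thresholds may differ):

import statistics

def is_outlier(series, history_size=20, threshold=3.0):
    # Compare the latest data point to the distribution of the previous
    # `history_size` points and flag it if it deviates by more than
    # `threshold` standard deviations from their mean.
    if len(series) < history_size + 1:
        return False  # not enough history to judge
    history, latest = series[-(history_size + 1):-1], series[-1]
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return latest != mean
    return abs(latest - mean) / stdev > threshold

For example, is_outlier([10, 11, 9, 10, 10, 42], history_size=5) would fire, while a latest value of 11 would not.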

Alerts are aggregated by medusa, which provides a RESTful API to submit alerts and exposes a dashboard that allows users to view and filter alerts using regular expressions and subscribe to alerts.

Writing the aggregator and regression detector in Clojure[script] has been a lot of fun. I found it particularly attractive that Clojure doesn’t have any big web framework a la Ruby or Python that forces you into one specific mindset. Instead you can roll your own using a wide set of libraries, like:

  • HTTP-Kit, an event-driven HTTP client/server
  • Compojure, a routing library
  • Korma, a SQL DSL
  • Liberator, RESTful resource handlers
  • om, React.js interface for Clojurescript
  • secretary, a client-side routing library

The ability to easily compose functionality from different libraries is exceptionally well explained by a quote from Alan Perlis: “It is better to have 100 functions operate on one data structure than 10 functions on 10 data structures”. And so, as it happens, instead of each library having its own set of independent abstractions and data structures, Clojure libraries tend to use mostly just lists, vectors, sets and maps, which greatly simplifies interoperability.

Lisp gets criticized for its syntax, or lack thereof, but I don’t feel that’s fair. Using any editor that inserts and balances parentheses for you does the trick. I also feel like I didn’t have to run a background thread in my mind to think if what I was writing would please the compiler or not, unlike in Scala for instance. Not to speak of the ability to use macros which allows one to easily extend the compiler with user-defined code. The expressiveness of Clojure means also that more thought is required per LOC but that might be just a side-effect of not being a full-time functional programmer.

What I do miss in the clojure ecosystem is a strong set of tools for statistics and machine learning. Incanter is a wonderful library but, coming from an R and python/scipy background, I had the impression that there is still a lot of catching up to do.


David Rajchenbach Teller: What David Did During Q3

September is ending, and with it Q3 of 2014. It’s time for a brief report, so here is what happened during the summer.

Session Restore

After ~18 months working on Session Restore, I am progressively switching away from that topic. Most of the main performance issues that we set out to solve have been solved already, we have considerably improved safety, cleaned up lots of the code, and added plenty of measurements.

During this quarter, I have been working on various attempts to optimize both loading speed and saving speed. Unfortunately, both ongoing efforts were delayed by external factors and postponed to a yet undetermined date. I have also been hard at work on trying to pin down performance regressions (which turned out to be external to Session Restore) and safety bugs (which were eventually found and fixed by Tim Taubert).

In the next quarter, I plan to work on Session Restore only in a support role, for the purpose of reviewing and mentoring.

Also, a rant: the work on Session Restore has relied heavily on collaboration between the Perf team and the FxTeam. Unfortunately, the resources were not always available to make this collaboration work. I imagine that the FxTeam is spread too thin onto too many tasks, with too many fires to fight. Regardless, the symptom I experienced is that during the course of this work, low-priority, high-priority and safety-critical patches alike have been left to rot without reviews, despite my repeated requests, for 6, 8 or 10 weeks, much to the dismay of everyone involved. This means man·months of work thrown to /dev/null, along with quarterly objectives, morale, opportunities, contributors and good ideas.

I will try and blog about this, eventually. But please, in the future, everyone: remember that in the long run, the priority of getting reviews done (or explaining that you’re not going to) is quite a bit higher than the priority of writing code.

Async Tooling

Many improvements to Async Tooling landed during Q3. We now have the PromiseWorker, which simplifies considerably the work of interacting between the main thread and workers, for both Firefox and add-on developers. I hear that the first add-on to make use of this new feature is currently being developed. New features, bugfixes and optimizations landed for OS.File. We have also landed the ability to watch for changes in a directory (under Windows only, for the time being).

Sadly, my work on interactions between Promise and the Test Suite is currently blocked until the DevTools team manages to get all the uncaught asynchronous errors under control. It’s hard work, and I can understand that it is not a high priority for them, so in Q4, I will try to find a way to land my work and activate it only for a subset of the mochitest suites.

Places

I have recently joined the newly restarted effort to improve the performance of Places, the subsystem that handles our bookmarks, history, etc. For the moment, I am still getting warmed up, but I expect that most of my work during Q4 will be related to Places.

Shutdown

Most of my effort during Q3 was spent improving the Shutdown of Firefox. Where we already had support for asynchronously shutting down JavaScript services/consumers, we now also have support for native services and consumers. Also, I am in the process of landing Telemetry that will let us find out the duration of the various stages of shutdown, information that we could not access until now.

As it turns out, we had many crashes during asynchronous shutdown, a few of them safety-critical. At the time, we did not have the necessary tools to prioritize our efforts or to find out whether our patches had effectively fixed bugs, so I built a dashboard to extract and display the relevant information on such crashes. This proved a wise investment, as we spent plenty of time fighting AsyncShutdown-related fires using this dashboard.

In addition to the “clean shutdown” mechanism provided by AsyncShutdown, we also now have the Shutdown Terminator. This is a watchdog subsystem, launched during shutdown, and it ensures that, no matter what, Firefox always eventually shuts down. I am waiting for data from our Crash Scene Investigators to tell us how often we need this watchdog in practice.
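For illustration only, the general shape of such a watchdog can be sketched in a few lines of Python; this is not the actual Shutdown Terminator (which is native code inside Firefox), and the function name and timeout below are made up:

import os
import threading

def arm_shutdown_watchdog(grace_period_seconds=60):
    # Once shutdown begins, give the process a fixed grace period.
    # If it is still alive after that, exit forcefully so that a hang
    # during shutdown can never keep the process around forever.
    timer = threading.Timer(grace_period_seconds, lambda: os._exit(1))
    timer.daemon = True
    timer.start()
    return timer  # call .cancel() on this if shutdown completes cleanly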

Community

I lost track of how many code contributors I interacted with during the quarter, but that represents hundreds of e-mails, as well as countless hours on IRC and Bugzilla, and a few hours on ask.mozilla.org. This year’s mozEdu teaching is also looking good.

We also launched FirefoxOS in France, with big success. I found myself in a supermarket, presenting the ZTE Open C and the activities of Mozilla to the crowds, and this was a pleasing experience.

For Q4, expect more mozEdu, more mentoring, and more sleepless hours helping contributors debug their patches :)


Andrew Halberstadt: How many tests are disabled?

tl;dr Look for [reports like this][0] in the near future!

At Mozilla, platform developers are culturally bound to [tbpl][1]. We spend a lot of time staring at those bright little letters, and their colour can mean the difference between hours, days or even weeks of work. With so many people performing over [420 pushes per day][2], all watching, praying, rejoicing and cursing, it's paramount that the whole process operates like a well oiled machine. So when a test starts intermittently failing, and there aren't any obvious changesets to blame, it'll often get disabled in the name of keeping things running. A bug will be filed, some people will be cc'ed, and more often than not, it will languish. People who really care about tests know this. They have an innate and deep fear that there are tests out there that would catch major and breaking regressions, but for the fact that they are disabled. Unfortunately, there was never a good way to see, at a high level, which tests were disabled for a given platform. So these people who care so much have to go about their jobs with a vague sense of impending doom. Until now.

A Concrete Sense of Impending Doom
----------------------------------

[Test Informant][3] is a new service which aims to bring some visibility into the state of tests for a variety of suites and platforms. It listens to [pulse messages][4] from mozilla-central for a variety of build configurations, downloads the associated tests bundle, parses as many manifests as it can and saves the results to a mongo database. There is a script that queries the database and can generate reports (e.g [like this][0]), including how many tests have been enabled or disabled over a given period of time. This means instead of a vague sense of impending doom, you can tell at a glance exactly how doomed we are. There are still a few manual steps required to generate and post the reports, but I intend to fully automate the process (including a weekly digest link posted to dev.platform).

Over the Hill and Far Away
--------------------------

There are a number of improvements that can be made to this system. We may or may not implement them based on the initial feedback we get from these reports. Possible improvements include:

* Support for additional suites and platforms.
* A web dashboard with graphs and other visualizations.
* Email notifications when tests are enabled/disabled on a per-module basis.
* Exposing the database publicly so other tools can use it (e.g a mach command).

There are also some known limitations:

* No data for b2g or android platforms (blocked by bugs [1071642][5] and [1066735][6] respectively).
* No data for suite \*. At the moment, only suites that live in the tests bundle and that have manifestparser based manifests (the .ini format) are supported. We may extend the tool to other formats at a later date.
* Run-time filters not taken into account. Because the tool doesn't actually run any tests, it doesn't know about any filters added by the test harness at run-time. Because all of reftest's filtering happens at runtime, it's unlikely reftest will be supported anytime soon.

If you would like to contribute, or just take a look at the source, it's [all on github][7]. As always, let me know if you have any questions!
[0]: http://people.mozilla.org/~ahalberstadt/informant-reports/daily/2014-09-29.informant-report.html
[1]: http://tbpl.mozilla.org
[2]: http://relengofthenerds.blogspot.ca/2014/09/mozilla-pushes-august-2014.html
[3]: https://wiki.mozilla.org/Auto-tools/Projects/Test-Informant
[4]: https://wiki.mozilla.org/Auto-tools/Projects/Pulse
[5]: https://bugzilla.mozilla.org/show_bug.cgi?id=1071642
[6]: https://bugzilla.mozilla.org/show_bug.cgi?id=1066735
[7]: https://github.com/ahal/test-informant
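To give a feel for the kind of manifest parsing involved, here is a rough, hypothetical Python sketch of counting enabled vs. disabled tests in a manifestparser-style .ini manifest. The real service uses the manifestparser library and stores results in MongoDB; this stand-in uses only the standard library and glosses over includes and run-time filters:

from configparser import ConfigParser

def count_tests(manifest_path):
    # Each section in the manifest names a test; a 'disabled' key
    # (whose value is the reason, usually a bug reference) marks it
    # as disabled. Include directives are skipped for simplicity.
    parser = ConfigParser(allow_no_value=True, interpolation=None)
    parser.read(manifest_path)
    enabled = disabled = 0
    for section in parser.sections():
        if section.startswith("include:"):
            continue  # pulled-in manifests, not tests themselves
        if parser.has_option(section, "disabled"):
            disabled += 1
        else:
            enabled += 1
    return enabled, disabled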

Niko Matsakis: Multi- and conditional dispatch in traits

I’ve been working on a branch that implements both multidispatch (selecting the impl for a trait based on more than one input type) and conditional dispatch (selecting the impl for a trait based on where clauses). I wound up taking a direction that is slightly different from what is described in the trait reform RFC, and I wanted to take a chance to explain what I did and why. The main difference is that in the branch we move away from the crate concatenability property in exchange for better inference and less complexity.

The various kinds of dispatch

The first thing to explain is what the difference is between these various kinds of dispatch.

Single dispatch. Let’s imagine that we have a conversion trait:

trait Convert<Target> {
    fn convert(&self) -> Target;
}

This trait just has one method. It’s about as simple as it gets. It converts from the (implicit) Self type to the Target type. If we wanted to permit conversion between int and uint, we might implement Convert like so:

impl Convert<uint> for int { ... } // int -> uint
impl Convert<int> for uint { ... } // uint -> int

Now, in the background here, Rust has this check we call coherence. The idea is (at least as implemented in the master branch at the moment) to guarantee that, for any given Self type, there is at most one impl that applies. In the case of these two impls, that’s satisfied. The first impl has a Self of int, and the second has a Self of uint. So whether we have a Self of int or uint, there is at most one impl we can use (and if we don’t have a Self of int or uint, there are zero impls, that’s fine too).

Multidispatch. Now imagine we wanted to go further and allow int to be converted to some other type MyInt. We might try writing an impl like this:

struct MyInt { i: int }
impl Convert<MyInt> for int { ... } // int -> MyInt

Unfortunately, now we have a problem. If Self is int, we now have two applicable conversions: one to uint and one to MyInt. In a purely single dispatch world, this is a coherence violation.

The idea of multidispatch is to say that it’s ok to have multiple impls with the same Self type as long as at least one of their other type parameters is different. So this second impl is ok, because the Target type parameter is MyInt and not uint.

Conditional dispatch. So far we have dealt only in concrete types like int and MyInt. But sometimes we want to have impls that apply to a category of types. For example, we might want to have a conversion from any type T into a uint, as long as that type supports a MyGet trait:

trait MyGet {
    fn get(&self) -> MyInt;
}

impl<T> Convert<MyInt> for T
    where T:MyGet
{
    fn convert(&self) -> MyInt {
        self.get()
    }
}

We call impls like this, which apply to a broad group of types, blanket impls. So how do blanket impls interact with the coherence rules? In particular, does the conversion from T to MyInt conflict with the impl we saw before that converted from int to MyInt? In my branch, the answer is “only if int implements the MyGet trait”. This seems obvious but turns out to have a surprising amount of subtlety to it.

Crate concatenability and inference

In the trait reform RFC, I mentioned a desire to support crate concatenability, which basically means that you could take two crates (Rust compilation units), concatenate them into one crate, and everything would keep building. It turns out that the coherence rules already basically guarantee this without any further thought – except when it comes to inference. That’s where things get interesting.

To see what I mean, let’s look at a small example. Here we’ll use the same Convert trait as we saw before, but with just the original set of impls that convert between int and uint. Now imagine that I have some code which starts with a int and tries to call convert() on it:

trait Convert<T> { fn convert(&self) -> T; }
impl Convert<uint> for int { ... }
impl Convert<int> for uint { ... }
...
let x: int = ...;
let y = x.convert();

What can we say about the type of y here? Clearly the user did not specify it and hence the compiler must infer it. If we look at the set of impls, you might think that we can infer that y is of type uint, since the only thing you can convert a int into is a uint. And that is true – at least as far as this particular crate goes.

However, if we consider beyond a single crate, then it is possible that some other crate comes along and adds more impls. For example, perhaps another crate adds the conversion to the MyInt type that we saw before:

struct MyInt { i: int }
impl Convert<MyInt> for int { ... } // int -> MyInt

Now, if we were to concatenate those two crates together, then this type inference step wouldn’t work anymore, because int can now be converted to either uint or MyInt. This means that the snippet of code we saw before would probably require a type annotation to clarify what the user wanted:

let x: int = ...;
let y: uint = x.convert();

Crate concatenation and conditional impls

I just showed that the crate concatenability principle interferes with inference in the case of multidispatch, but that is not necessarily bad. It may not seem so harmful to clarify both the type you are converting from and the type you are converting to, even if there is only one type you could legally choose. Also, multidispatch is fairly rare; most traits have a single type that decides on the impl and then all other types are uniquely determined. Moreover, with the associated types RFC, there is even a syntactic way to express this.

However, when you start trying to implement conditional dispatch (that is, dispatch predicated on where clauses), crate concatenability becomes a real problem. To see why, let’s look at a different trait called Push. The purpose of the Push trait is to describe collection types that can be appended to. It has one associated type Elem that describes the element type of the collection:

trait Push {
    type Elem;

    fn push(&mut self, elem: Elem);
}

We might implement Push for a vector like so:

impl<T> Push for Vec<T> {
    type Elem = T;

    fn push(&mut self, elem: T) { ... }
}

(This is not how the actual standard library works, since push is an inherent method, but the principles are all the same and I didn’t want to go into inherent methods at the moment.) OK, now imagine I have some code that is trying to construct a vector of char:

let mut v = Vec::new();
v.push('a');
v.push('b');
v.push('c');

The question is, can the compiler resolve the calls to push() here? That is, can it figure out which impl is being invoked? (At least in the current system, we must be able to resolve a method call to a specific impl or type bound at the point of the call – this is a consequence of having type-based dispatch.) Somewhat surprisingly, if we’re strict about crate concatenability, the answer is no.

The reason has to do with DST. The impl for Push that we saw before in fact has an implicit where clause:

impl<T> Push for Vec<T>
    where T : Sized
{ ... }

This implies that some other crate could come along and implement Push for an unsized type:

impl<T> Push for Vec<[T]> { ... }

Now, when we consider a call like v.push('a'), the compiler must pick the impl based solely on the type of the receiver v. At the point of calling push, all we know is that the type of v is a vector, but we don’t know what it’s a vector of – to infer the element type, we must first resolve the very call to push that we are looking at right now.

Clearly, not being able to call push without specifying the type of elements in the vector is very limiting. There are a couple of ways to resolve this problem. I’m not going to go into detail on these solutions, because they are not what I ultimately opted to do. But briefly:

  • We could introduce some new syntax for distinguishing conditional dispatch vs other where clauses (basically the input/output distinction that we use for type parameters vs associated types). Perhaps a when clause, used to select the impl, versus a where clause, used to indicate conditions that must hold once the impl is selected, but which are not checked beforehand. Hard to understand the difference? Yeah, I know, I know.
  • We could use an ad-hoc rule to distinguish the input/output clauses. For example, all predicates applied to type parameters that are directly used as an input type. Limiting, though, and non-obvious.
  • We could create a much more involved reasoning system (e.g., in this case, Vec::new() in fact yields a vector whose types are known to be sized, but we don’t take this into account when resolving the call to push()). Very complicated, unclear how well it will work and what the surprising edge cases will be.

Or… we could just abandon crate concatenability. But wait, you ask, isn’t it important?

Limits of crate concatenability

So we’ve seen that crate concatenability conflicts with inference and it also interacts negatively with conditional dispatch. I now want to call into question just how valuable it is in the first place. Another way to phrase crate concatenability is to say that it allows you to always add new impls without disturbing existing code using that trait. This is actually a fairly limited guarantee. It is still possible for adding impls to break downstream code across two different traits, for example. Consider the following example:

struct Player { ... }
trait Cowboy {
    // draw your gun!
    fn draw(&self);
}
impl Cowboy for Player { ...}

struct Polygon { ... }
trait Image {
    // draw yourself (onto a canvas...?)
    fn draw(&self);
}
impl Image for Polygon { ... }

Here you have two traits with the same method name (draw). However, the first trait is implemented only on Player and the other on Polygon. So the two never actually come into conflict. In particular, if I have a player player and I write player.draw(), it could only be referring to the draw method of the Cowboy trait.

But what happens if I add another impl for Image?

impl Image for Player { ... }

Now suddenly a call to player.draw() is ambiguous, and we need to use so-called “UFCS” notation to disambiguate (e.g., Player::draw(&player)).

(Incidentally, this ability to have type-based dispatch is a great strength of the Rust design, in my opinion. It’s useful to be able to define method names that overlap and where the meaning is determined by the type of the receiver.)

Conclusion: drop crate concatenability

So I’ve been turning these problems over for a while. After some discussions with others, aturon in particular, I feel the best fix is to abandon crate concatenability. This means that the algorithm for picking an impl can be summarized as:

  1. Search the impls in scope and determine those whose types can be unified with the current types in question and hence could possibly apply.
  2. If there is more than one impl in that set, start evaluating where clauses to narrow it down.

This is different from the current master in two ways. First of all, to decide whether an impl is applicable, we use simple unification rather than a one-way match. Basically this means that we allow impl matching to affect inference, so if there is at most one impl that can match the types, it’s ok for the compiler to take that into account. This covers the let y = x.convert() case. Second, we don’t consider the where clauses unless they are needed to remove ambiguity.

I feel pretty good about this design. It is somewhat less pure, in that it blends the role of inputs and outputs in the impl selection process, but it seems very usable. Basically it is guided only by the ambiguities that really exist, not those that could theoretically exist in the future, when selecting types. This avoids forcing the user to classify everything, and in particular avoids the classification of where clauses according to when they are evaluated in the impl selection process. Moreover I don’t believe it introduces any significant compatibility hazards that were not already present in some form or another.

Gregory Szorc: Mozilla Mercurial Statistics

I recently gained SSH access to Mozilla's Mercurial servers. This allows me to run some custom queries directly against the data. I was interested in some high-level numbers and thought I'd share the results.

hg.mozilla.org hosts a total of 3,445 repositories. Of these, there are 1,223 distinct root commits (i.e. distinct graphs). Altogether, there are 32,123,211 commits. Of those, there are 865,594 distinct commits (not double counting commits that appear in multiple repositories).

We have a high ratio of total commits to distinct commits (about 37:1). This means we have high duplication of data on disk. This basically means a lot of repos are clones/forks of existing ones. No big surprise there.
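For illustration, the total-vs-distinct computation is just set deduplication across repositories; here is a hypothetical Python sketch (the function name and the plain-hash-list data shape are made up, not the actual queries I ran on the servers):

def commit_stats(repos):
    # repos maps repository name -> iterable of commit hashes.
    total = 0
    distinct = set()
    for hashes in repos.values():
        hashes = list(hashes)
        total += len(hashes)        # every occurrence counts
        distinct.update(hashes)     # each hash counted once overall
    ratio = total / len(distinct) if distinct else 0.0
    return total, len(distinct), ratio

With the numbers above, 32,123,211 total commits over 865,594 distinct ones gives the quoted ratio of roughly 37:1.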

What is surprising to me is the low number of total distinct commits. I was expecting the number to run into the millions. (Firefox itself accounts for ~240,000 commits.) Perhaps a lot of the data is sitting in Git, Bitbucket, and GitHub. Sounds like a good data mining expedition...

Pascal Chevrel: My Q2-2014 report

Summary of what I did last quarter (regular l10n-drivers work such as patch reviews, pushes to production, meetings and past projects maintenance excluded).

Australis release

At the end of April, we shipped Firefox 29 which was our first major redesign of the Firefox user interface since Firefox 4 (released in 2011). The code name for that was Australis and that meant replacing a lot of content on mozilla.org to introduce this new UI and the new features that go with it. That also means that we were able to delete a lot of old content that now had become really obsolete or that was now duplicated on our support site.

Since this was a major UI change, we decided to show an interactive tour of the new UI to both new users and existing users upgrading to the new version. That tour was fully localized in a few weeks' time in close to 70 languages, which represents 97.5% of our user base. For the last locales not ready on time, we either decided to show them a partially translated site (some locales had translated almost everything or some of the non-translated strings were not very visible to most users, such as alternative content to images for screen readers) or to let the page fall back to the best language available (like Occitan falling back to French for example).

Mozilla.org was also updated with 6 new product pages replacing a lot of old content as well as updates to several existing pages. The whole site was fully ready for the launch with 60 languages 100% ready and 20 partially ready, all that done in a bit less than 4 weeks, in parallel with the webdev integration work.

I am happy to say that thanks to our webdev team, our amazing l10n community and the help of my colleagues Francesco Lodolo (also Italian localizer) and my intern Théo Chevalier (also French localizer), we were able not only to offer a great upgrading experience for the quasi-totality of our user base, but also to clean up a lot of old content, fix many bugs and prepare the site from an l10n perspective for the upcoming releases of our products.

Today, for a big locale spanning all of our products and activities, mozilla.org is about 2,000 strings to translate and maintain (+500 since Q1); for a smaller locale, this is about 800 strings (+200 since Q1). This quarter saw a significant bump in the number of strings added across all locales, but this was closely related to the Australis launch; we shouldn't see such a rise in strings impacting all locales in the next quarters.

Transvision releases

Last quarter we did 2 releases of Transvision with several features targeting our 3 audiences: localizers, localization tools, and current and potential Transvision developers.

For our localizers, I worked on a couple of features. One is quick filtering of search results per component for Desktop repositories (you search for 'home' and with one click you can filter the results for the browser, for mail or for calendar, for example). The other one is providing search suggestions with the best similar matches when your search yields no results ("your search for 'lookmark' yielded no result, maybe you were searching for 'Bookmark'?").

For the localization tools community (software or web apps like Pontoon, Mozilla translator, Babelzilla, OmegaT plugins...), I rewrote our old Json API entirely and extended it to provide more services. Our old API was initially created for our own purposes and basically just gave the possibility to get our search results as a Json feed on our most popular views. Tools started using it a couple of years ago and we also got requests for API changes from those tool makers, therefore it was time to rewrite it entirely to make it scalable. Since we don't want to break anybody's workflow, we now redirect all the old API calls to the new API ones. One of the significant new services in the API is a translation memory query that gives you results and a quality index based on the Levenshtein distance with the searched terms. You can get more information on the new API in our documentation.
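As a rough illustration of what such a quality index can look like (the exact formula Transvision uses may differ, and Transvision itself is written in PHP; this is a hypothetical Python sketch), one can normalise the edit distance by the length of the longer string:

def levenshtein(a, b):
    # Classic dynamic-programming edit distance.
    previous = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        current = [i]
        for j, cb in enumerate(b, 1):
            current.append(min(previous[j] + 1,                 # deletion
                               current[j - 1] + 1,              # insertion
                               previous[j - 1] + (ca != cb)))   # substitution
        previous = current
    return previous[-1]

def quality_index(query, match):
    # 100 means an identical string, values near 0 mean barely related.
    longest = max(len(query), len(match)) or 1
    return round((1 - levenshtein(query, match) / longest) * 100, 1)

For instance, quality_index("lookmark", "Bookmark") gives 87.5, which is also the kind of similarity measure behind the "maybe you were searching for 'Bookmark'?" suggestions mentioned above.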

I also worked on improving our internal workflow and making it easier for potential developers wanting to hack on Transvision to install and run it locally. That means that we now do continuous integration with Travis CI (all of our unit tests are run on each commit and pull request in PHP 5.4 and 5.5 environments), we have made a lot of improvements to our unit test suite and coverage, we expose to developers peak memory usage and time per request on all views so as to catch performance problems early, and we also now have a "dev" mode that allows getting Transvision installed and running on the PHP development server in a matter of minutes instead of hours for a real production mode. One of the blockers for new developers was the time required to install Transvision locally. Since it is a spidering tool looking for localized strings in Mozilla source repositories, it needed to first clone all the repositories it indexes (mercurial/git/svn), which is about 20GB of data and takes hours even with a fast connection. We are now providing a snapshot of the final extracted data (still 400MB ;)) every 6 hours that is used by the dev install mode.

Check the release notes for 3.3 and 3.4 to see what other features were added by the team (e.g. on-demand TMX generation or the dynamic Gaia comparison view added by Théo, my intern).

Web dashboard / Langchecker

The main improvement I brought to the web dashboard this quarter is probably the deadline field added to all of our .lang files, which allows us to better communicate the urgency of projects and gives localizers an extra parameter to help them prioritize their work.

Theo's first project for his internship was to build a 'project' view on the web dashboard that we can use to get an overview of the translation of a set of pages/files. This was used for the Australis release (ex: http://l10n.mozilla-community.org/webdashboard/?project=australis_all) but can be used for any other project we want to define; here is an example for the localization of two Android add-ons I did for the World Cup, which we tracked with .lang files.

We brought other improvements to our maintenance scripts, for example the ability to "bulk activate" a page for all the locales that are ready; we improved our locamotion import scripts, started adding unit tests, etc. Generally speaking, the Web dashboard keeps improving regularly since I rewrote it last quarter, and we regularly experiment with using it for more projects, especially projects which don't fit in the usual web/product categories and also need tracking. I am pretty happy too that I now co-own the dashboard with Francesco, who brings his own ideas and code to streamline our processes.

Théo's internship

I mentioned it before: our main French localizer, Théo Chevalier, is doing an internship with me and Delphine Lebédel as mentors; this is the internship that ends his 3rd year of engineering (in a 5-year curriculum). He is based in Mountain View, started early April and will be with us until late July.

He is basically working on almost all of the projects I, Delphine and Flod work on.

So far, apart from regular work as an l10n-driver, he has worked for me on 3 projects: the Web Dashboard projects view, building TMX files on demand in Transvision, and the Firefox Nightly localized page on mozilla.org. This last project I haven't talked about yet, but he blogged about it recently. In short, the first page that is shown to users of localized builds of Firefox Nightly can now be localized, and by localized we don't just mean translated: we mean that we have a community block managed by the local community, proposing that Nightly users join their local team "on the ground". So far, we have this page in French, Italian, German and Czech; if your locale workflow is to translate mozilla-central first, this is a good tool for you to reach a potential technical audience and grow your community.

Community

This quarter, I found 7 new localizers (2 French, 1 Marathi, 2 Portuguese/Portugal, 1 Greek, 1 Albanian) to work with me essentially on mozilla.org content. One of them, Nicolas Delebeque, took the lead on the Australis launch and coordinated the French l10n team since Théo, our locale leader for French, was starting his internship at Mozilla.

For Transvision, 4 people in the French community (after all, Transvision was created initially by them ;)) expressed interest or sent small patches to the project; maybe all the efforts we put into making the application easy to install and hack on are starting to pay off, we'll probably see in Q3/Q4 :)

I spent some time trying to help rebuild the Portugal community, which is now 5 people (instead of 2 before). We recently resurrected the mozilla.pt domain name to actually point to a server, the MozFR one that already hosts the French community and WoMoz (having the French community help the Portuguese one is cool, BTW). A mailing list for Portugal was created (accessible also via NNTP and Google Groups) and the #mozilla-portugal IRC channel was set up. This is a start; I hope to have time in Q3 to help launch a real Portugal site and help them grow beyond localization, because I think communities focused on only one activity have no room to grow or renew themselves (you also need coding, QA, events, marketing...).

I also started looking at Babelzilla's new platform rewrite project, meant to replace the current aging platform (https://github.com/BabelZilla/WTS/), to see if I can help Jürgen, the only Babelzilla dev, build a community around his project. Maybe some of the experience I gained through Transvision will transfer to Babelzilla (it was a one-man effort; now 4 people commit regularly out of 10 committers). We'll see in the next quarters if I can help somehow; so far I have only had time to install the app locally.

In terms of events, this was a quiet quarter. Apart from our l10n-drivers work week, the only localization event I attended was the localization sprint held over a whole weekend in the Paris office. Clarista, the main organizer, blogged about it in French; many thanks to her and the whole community that came over. It was very productive, we will definitely do it again, and maybe we'll make it a recurring event.

Summary

This quarter was a good balance between shipping, tooling and community building. The beginning of the quarter was really focused on shipping Australis, and as usual with big releases, we created scripts and tools that will help us ship better and faster in the future. Tooling, and in particular Transvision work (probably now my main project), took most of my time in the second part of the quarter.

Community building was, as usual, a constant in my work. The one thing I find more difficult in this area now is finding time for it in the evenings and on weekends (when most potential volunteers are available for synchronous communication), basically because it conflicts a bit with my family life. I am trying to be more efficient at recruiting through asynchronous communication tools (email, forums…), but as long as I can get 5 to 10 additional people per quarter to work with me, we should be fine scaling our projects.

Erik VoldJetpack Pro Tip - JPM --prefs

JPM allows you to dynamically set preferences that are used when an add-on developer runs jpm run or jpm test. I added this new --prefs feature yesterday, because the Firefox DevTools team requested it; it is available in JPM 0.0.16.

With --prefs you can point to a JSON file, which should contain an object with a key for each pref that you want set; the value of each key should be the desired value for that pref. Here is an example JSON file, used with jpm test --prefs ~/firefox-prefs.json:

{
  "extensions.test.pref": true
}

That is the static way to add prefs. If you want dynamic prefs, you can use a CommonJS file instead, with jpm test --prefs ~/firefox-prefs.js, where ~/firefox-prefs.js looks something like this:

var prefs = {};
prefs["extensions.test.time"] = Date.now();
module.exports = prefs;

Erik VoldProcessing Jetpack

This is the first post in my new Jetpack Labs series, a project that I am working on in my personal time.

I think Processing is a great language because it is very simple and good at what it was meant to do. I’ve only had a little time to try hacking on some Processing art projects, and it’s been a lot of fun (I will post those scripts when they are done). However, using the Java Processing client was not such a pleasant experience, and I thought that making a Firefox add-on with the same features, using Processing-js, would not be hard. This is partly what led me to write about my Art Tools for Firefox idea in February.

This week I found some time to hack a prototype together, and it’s working pretty well now, you can find the source on Github at jetpack-labs/processing-jetpack.

At the moment this add-on uses Scratchpad as an editor, but in the future I want to use the WebIDE. Also, at the moment I’ve only added a “Processing Start” menuitem; there should also be pause and stop menuitems, and corresponding buttons for these actions. All of this and more are features that need to be added, and on top of that I would like to integrate this add-on with openprocessing.org, so if you’re interested in the project, this is my request for contributors :)
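
For readers curious what driving Processing-js from an SDK add-on can look like, here is a minimal, hypothetical sketch (not the actual processing-jetpack code): it assumes processing.js is bundled in the add-on's data folder and injects it, plus a tiny content script, into the active tab. The file names and the demo sketch are made up for illustration.

var self = require("sdk/self");
var tabs = require("sdk/tabs");

// Run a Processing sketch (given as source text) in the current tab.
function runSketch(sketchSource) {
  tabs.activeTab.attach({
    // assumes processing.js ships in the add-on's data/ directory
    contentScriptFile: self.data.url("processing.js"),
    contentScript: "var canvas = document.createElement('canvas');" +
                   "document.body.appendChild(canvas);" +
                   "new Processing(canvas, " + JSON.stringify(sketchSource) + ");"
  });
}

// e.g. what a "Processing Start" menuitem could trigger:
runSketch("void draw() { ellipse(mouseX, mouseY, 20, 20); }");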

There is a lot of work to do here still.

Daniel StenbergA day in the curl project

I maintain curl and lead the development there. This is how I spend my time on an ordinary day in the project. Maybe I don’t do all of these things every single day, but sometimes I do and sometimes I just do a subset of them. I just want to give you a look into what I do and why I don’t add new stuff more often or faster… I spend about one to three hours on the project every day. Let me also stress that curl is a tiny little project in comparison with many other open source projects. I’m certainly not saying otherwise.

the new bug

Someone submits a new bug in the bug tracker or on one of the mailing lists. Most initial bug reports lack sufficient details, so the first thing I do is ask for more info and possibly ask the submitter to try a recent version, as very often we get bugs reported against very old versions. Many bug reports take several requests for more info before the necessary details have been provided. I don’t really start to investigate a problem until I feel I have a sufficient amount of detail. We’re a very small core team that acts on other people’s bugs.

the question by a newbie in the project

A new person shows up with a question. The question is usually similar to a FAQ entry or an example, but not exactly. It deserves a proper response. This kind of question can often be answered by anyone, but most people involved in the project don’t feel the need or the “familiarity” to respond to such questions and therefore remain quiet.

the old mail I haven’t responded to yet

I want every serious email that reaches the mailing lists to get a response, so all mails that neither I nor anyone else responds to I keep around in my inbox, and when I have some idle time left over I go back and catch up on old mails. Some of them can then of course result in a new bug or patch or whatever. Occasionally I have to resort to simply saving away the old mail without responding in order to catch up, just to cut the list of outstanding things to do a little.

the TODO list for my own sake, things I’d like to get working on

There are always things I really want to see done in the project, and I work on them far too little really. But every once in a while I ignore everything else in my life for a couple of hours and spend them on adding a new feature or fixing something I’ve been missing. Actual development of new features is a very small fraction of all time I spend on this project.

the list of open bug reports

I regularly revisit this list to see what I can do to push the open ones forward. Follow-up questions, deep dives into source code and specifications, or just the sad realization that a particular issue won’t be fixed within the foreseeable future (a year?), in which case I close it as “future” and add the problem to our KNOWN_BUGS document. I strive to keep the bug list clean and only keep relevant bugs open. Issues that are not reproducible, are left without proper attention from the reporter, or otherwise stall will get closed. In general I feel quite lonely as a responder in the bug tracker…

the mailing list threads that are sort of dying but I do want some progress or feedback on

In my primary email inbox I usually keep ongoing threads around. Lots of discussions just silently stop getting more posts and thus slowly wither away further up the list to become forgotten and ignored. With some interval I go back to see if the posters are still around, if there’s any more feedback or whatever in order to figure out how to proceed with the subject. Very often this makes me get nothing at all back and instead I just save away the entire conversation thread, forget about it and move on.

the blog post I want to do about a recent change or fix I did I’d like to highlight

I try to explain some changes to the world in blog posts. Not all changes, but the ones that are somehow noteworthy, as they perhaps change the way things have been or introduce new fun features that aren't that easily spotted. Of course all features are always documented, but sometimes I feel I need to put some extra focus on things in a more free-form style. Or I just write about meta stuff, like this very posting.

the reviewing and merging of patches

One of the most important tasks I have is to review patches. I’m basically the only person in the project who volunteers to review patches against any angle or corner of the project. When people have spent time and effort and gallantly send the results of their labor our way in the best possible format (a patch!), the submitter deserves a good review and proper feedback. Also, paving the road for more patches is one of the best ways to scale the project. Helping newcomers become productive is important.

Patches are preferably posted on the mailing lists, but some also come in via pull requests on GitHub, and while I strongly discourage that (since they don’t get the same attention and possible scrutiny on the list as the others), I sometimes let them through anyway just to be smooth.

When the patch looks good (or sometimes good enough and I just edit some minor detail), I merge it.

the non-disclosed discussions about a potential security problem

We’re a small project with a wide reach and security problems can potentially have grave impact on users. We take security seriously, and we very often have at least one non-public discussion going on about a problem in curl that may have security implications. We then often work on phrasing security advisories, working down exactly which versions that are vulnerable, producing patches for at least the most recent ones of those affected versions and so on.

tame stackoverflow

stackoverflow.com has become almost like a Wikipedia for source code and programming-related issues (although it isn’t a wiki), and that site is one of the primary referrers to curl’s web site these days. I tend to glance over the curl- and libcurl-related questions and offer my answers at times. If nothing else, it is good to help keep the amount of disinformation at low levels.

I strongly disapprove of people filing bug reports on such places or even very detailed (lib)curl core questions that should’ve been asked on the curl-library list.

there are idle times too

Yeah. Not very often, but sometimes I actually just need a day off all this. Sometimes I just don’t find motivation or energy enough to dig into that terrible seldom-happening bug on a platform I’ve never seen personally. A project like this never ends. The same day we release a new release, we just reset our clocks and we’re back on improving curl, fixing bugs and cleaning up things for the next release. Forever and ever until the end of time.


Lukas BlakkNew to Bugzilla

I believe it was a few years ago, possibly more, when someone (was it Josh Matthews? David Eaves?) added a feature to Bugzilla that indicated when a person was “New to Bugzilla”. It was a visual cue next to their username, and its purpose was to help others remember that not everyone in the Bugzilla soup is a veteran, accustomed to our jargon, customs, and best practices. This visual cue came in handy three weeks ago when I encouraged 20 new contributors to sign up for Bugzilla – 20 people who have only recently begun their journey towards becoming Mozilla contributors and open source mavens. In setting them loose upon our bug tracker I’ve observed two things:

ONE: The “New to Bugzilla” flag does not stay up long enough. I’ll file a bug on this and look into how long it currently does stay up, and recommend that if possible we should have it stay up until the following criteria are met:
* The person has made at least 10 comments
* The person has put up at least one attachment
* The person has either reported, resolved, been assigned to, or verified at least one bug

TWO: This one is a little harder – it involves more social engineering. Sometimes people might be immune to the “New to Bugzilla” cue or overlook it, which has meant that in some cases responses to bugs filed by my cohort of Ascenders were neither helpful nor moved the issue raised forward. I’ve been fortunate to be in person with the Ascend folks and can tell them that if this happens they should let me know, but I can’t fight everyone’s fights for them over the long haul. So instead we should build into the system a way to make sure that when someone who is not new to Bugzilla replies immediately after a “New to Bugzilla” user, there is a reminder in the comment field – something along the lines of “You’re about to respond to someone who’s new around here, so please remember to be helpful”. Off to file the bugs!

Jordan LundThis Week In Releng - Sept 21st, 2014

Major Highlights:

  • shipped 10 products in less than one day

Completed work (resolution is 'FIXED'):


In progress work (unresolved and not assigned to nobody):

Jordan LundThis Week In Releng - Sept 7th, 2014

Major Highlights

  • big time saving in releases thanks to:
    • Bug 807289 - Use hardlinks when pushing to mirrors to speed it up

Completed work (resolution is 'FIXED'):


In progress work (unresolved and not assigned to nobody):

Curtis KoenigThe Curtis Report 2014-09-26

So my last report failed to mention something important: there is a lot I do that is not in this report. It only covers noteworthy items outside of run-the-business (RTB) activities. I do a good deal of bug handling, input, triage and routing to get things to the right people, removing bad, invalid, or mis-tagged items, answering emails on projects and other items, etc. Just general work stuff. Last week had lots of vendor work (as noted below), and while that is kind of RTB, it’s usually not this heavy and we had 2 rush ones, so I felt they were worthy of note.

What I did this week

  • kit herder community stuff
  • [vendor redacted] communications
  • [vendor redacted] review followup
  • [vendor 2 redacted] rush review started
  • Tribe pre-planning for next month
  • [vendor redacted] follow ups
  • triage security bugs
  • DerbyCon prep / registration
  • bitcoin vendor prep work
  • SeaSponge mentoring

Meetings Attended

Mon

  • impromptu [vendor redacted] review discussion
  • status meeting for [vendor redacted] security testing
  • Monday meeting

Tue

  • cloud services team (sort of)

Wed

  • impromptu [vendor redacted] standup
  • MWoS SeaSponge Weekly team meeting
  • Cloud Services Show & Tell
  • Mozillians Town Hall – Brand Initiatives (Mozilla + Firefox)
  • Web Bug Triage

Thu

  • security open mic

Fri-Sun

Non Work

  • deal with deer damage to car

Jay PatelFirefox And The Academy

It’s September, so a lot of students are joining us in various Mozilla forums, hoping to make a contribution to an open source project. This is always a challenging time of year for Mozilla, and I’d like to say a few things about it. If you’re a student hoping to get involved or – even better – an educator who would like to involve their classes in open source projects or contribute specifically to Mozilla, I hope this will give you a sense of some of the challenges and pitfalls you may be facing and how we can work together to overcome them.

If you’re a veteran of Usenet from the days when Trumpet Winsock was a thing and dinosaurs roamed the earth, you recognize the particular flavor of Mozilla’s comms channels at this time of year. People who want to make a difference in the world want to be a part of Mozilla, and we’re always excited to hear from them, but September is a challenging time; we get a lot of requests for “student projects” and “easy bugs” that can be difficult to address, and the dropout rate among new participants grabbing “easy” [good first bug]s at this time of year is frustratingly high.

Some of these challenges are structural, of course, and some of those structures are out of our control – courses designed around software development as a discrete, compartmentalizable thing, rather than a messy, rapidly evolving, organic process, aren’t really compatible with the day-to-day process of shipping software. Likewise, the benchmarks that make up a traditional academic evaluation process don’t really make sense in our context, so more often than not the goals and schedules of students and educators aren’t well aligned with ours.

We’re grateful for any effort put in, large or small, to making Firefox better and supporting a free and open Web. Having said that: there are a few things that make working with Firefox in an academic context challenging and you should be aware of them.

The biggest one is that we can’t promise to accept a patch within a certain time frame. This can become a problem for both students and professors when getting the patch accepted into the main product is part of the criteria for a good grade in the course.

This has happened in the past: a student has done great work on a harder-than-expected bug, but it didn’t make it through our process – including testing, feedback, revision and more testing – by the time grades were assigned. Despite their effort, the student was undeservedly graded poorly.

This is bad for everyone when it happens – the student and professor both get discouraged, the value of their work (and of the course) gets harder to see, and if the student doesn’t stick with the patch long enough to carry it over the line, despite all that, Firefox doesn’t benefit and Mozilla’s engineers feel like they’ve wasted their time.

If you’re involved in shaping your curriculum, a better approach is to combine fixing the bug or delivering the project with a set of reports or presentations about the process. This presentation – maybe even a blog post, because working in the open is important – can be a discussion of what the student is working on and why it’s important, how the work progressed and what the process of getting a patch in looks like, as well as the challenges they’ve faced and what they’ve learned from it.

Making three or four “this is my experience and what I’ve learned so far” reports over the term a more important part of the grading process than the code itself helps enormously, both in terms of keeping everyone involved motivated and in reflecting the open, community-oriented values of the project. There are other options for instructors who are familiar with our processes – breaking the grade up so that submitting a patch that builds, responding to the first review, and so on all count, even if the patch isn’t ultimately accepted, is one possibility.

The second major challenge is finding bugs that are a good fit for their contributors. We’re getting better at that – good first bugs usually tell you what language they rely on ( [lang=c++] in the Bugzilla whiteboard flags, for example) and often have a pretty good outline of what a successful patch would look like and a mentor associated with them. And while we can’t promise to privilege students ahead of any other contributors, we certainly try to hold up our end of the bargain and answer new contributors’ questions promptly.

One thing that takes the edge off there is that class of bug – “good first bugs”; you can search for [good first bug] in the Bugzilla whiteboard – that is a nice, well-defined way to get involved. The idea behind “good first bugs” is that the major challenge isn’t the code itself, but learning how to get your development environment spun up, participating in development on IRC and Bugzilla, and learning how to navigate our patch review and submission process.

It typically takes a few tries for most new contributors to get their patch through. Reviews for code format and quality, suitable tests, that sort of thing can all take an extra week or two to resolve, especially if you’re working on them around other classes. But most of our good first bugs can be resolved within a few weeks, well within a term.

You’ll have to judge for yourself if this is a good fit for your schedule, your students or your institutional goals. On the one hand, though GFBs are generally well-contained, Firefox uses every feature JS and C++ have to offer, so a certain amount of familiarity and comfort with the language is important. On the other, we’ve got a huge variety of Good First Bugs here, ranging from “correct this documentation” to “fix part of a JIT compiler”, so it’s likely that if you want to contribute, we can find a home for your efforts that will make millions of people’s lives just a little bit better.

More generally, if you’re interested in getting people involved with Mozilla in September, get in touch with us in June. Knowing in advance that people will be looking for new bugs or projects gives us time to talk to our team leads and project managers, to let us all find a place to put our efforts that will be helpful and valuable to everyone.

Which is all to say that small first-time bugs are often as inglorious as they are important; while they don’t seem like much, the small patches and first bugs of today come from the Web’s next generation of leaders.

Thank you,

– Mike Hoye – mhoye@mozilla.com

Doug Belshaw21 emerging themes for Web Literacy Map 2.0

Over the past few weeks I’ve interviewed various people to gain their feedback on the current version of Mozilla’s Web Literacy Map. There was a mix of academics, educational practitioners, industry professionals and community members.* I’ve written up the interviews on a tumblr blog and the audio repository can be found at archive.org.

I wanted to start highlighting some of the things a good number of them talked about in terms of the Web Literacy Map and its relationship with Webmaker (and the wider Mozilla mission).


Introduction

I used five questions to loosely structure the interviews:

  1. Are you currently using the Web Literacy Map (v1.1)? In what kind of context?
  2. What does the Web Literacy Map do well?
  3. What’s missing from the Web Literacy Map?
  4. What kinds of contexts would you like to use an updated (v2.0) version of the Web Literacy Map?
  5. Who would you like to see use/adopt the Web Literacy Map?

How closely we stuck to the questions, and to this order, depended on the interviewee. Some really wanted to talk about their context. Others wanted to dwell on more conceptual aspects. Either way, it was interesting to see some themes emerge.

Emerging themes

I’m still synthesizing the thoughts contained within 18+ hours of audio, but here are the headlines so far…

1. The ‘three strands’ approach works well

The strands currently named Exploring / Building / Connecting seem to resonate with lots of people. Many called it out specifically as a strength of the Web Literacy Map, saying that it enables people to orient themselves reasonably quickly.

2. Without context, newbies can be overwhelmed

While many people talked about how useful the Web Literacy Map is as a ‘map of the territory’ giving an at-a-glance overview, some interviewees mentioned that the Web Literacy Map should really be aimed at mentors, educators, and other people who have already got some kind of mental model. We should be meeting end users where they are with interesting activities rather than immediately presenting them with a map that reinforces their lack of skills/knowledge.

3. Shared vocabulary is important

New literacies can be a contested area. One interviewee in particular talked about how draining it can be to have endless discussions and debates about definitions and scope. Several people, especially those using it in workshops, talked about how useful the Web Literacy Map is in developing a shared vocabulary and getting down to skill/knowledge development.

4. The ‘Connecting’ strand has some issues

Although interviewees agreed there were no ‘giant gaping holes’ in the Web Literacy Map, many commented on the third, ‘Connecting’ strand. Some mentioned that it seemed a bit too surface-level. Some wanted a more in-depth treatment of licensing issues under ‘Open Practices’. Others thought that the name ‘Connecting’ didn’t really capture what the competencies in that column are really about. Realistically, most people will be meeting the competencies in this strand through social media. There isn’t enough focus on this, nor on ‘personal branding’, thought some people.

5. Clear focus on learning through making/doing

Those interested in the pedagogical side of things zeroed in on the verb-based approach to the Web Literacy Map. They appreciated that, along with the Discover / Make / Teach flow on each competency page, users of webmaker.org are encouraged to learn through making and doing, rather than simply being tested on facts.

6. Allows other organizations to see how their work relates to Mozilla’s mission

Those using it out ‘in the field’ (especially those involved in Hive Learning Networks) talked about how the Web Literacy Map is a good conversation-starter. They mentioned the ease with which most other organizations they work with can map their work onto ours, once they’ve seen it. These organizations can then use it as a sense-check to see how they fit into a wider ecosystem. It allows them to quickly understand the difference between the ‘learn to code’ movement and the more nuanced, holistic approach advocated by Mozilla.

7. It doesn’t really look like a ‘map’

Although interviewees were happy with the word ‘Map’ (much more so than the previous ‘Standard’), many thought we may have missed a trick by not actually presenting it as a map. Some thought that the Web Literacy Map is currently presented in a too clear-cut way, and that we should highlight some of the complexity. There were a few ideas how to do so, although one UX designer warned against surfacing this too much, lest we end up with a ‘plate of spaghetti’. Nevertheless, there was a feeling that riffing on the ‘map’ metaphor could lead to more of an ‘exploratory’ approach.

8. Lacking audience definition

There was a generally-positive sentiment about the Web Literacy Map structuring the Webmaker Resource section, although interviewees were a bit unsure about audience definition. The Web Literacy Map seems to be more of a teaching tool rather than a learning tool. It was suggested that we might want to give Mentors and Learners a different view. Mentors could start with the more abstract competencies, whereas the Learners could start with specific, concrete, interest-based activities. Laura Hilliger’s Web Literacy Learning Pathways prototype was mentioned on multiple occasions.

9. Why is this important?

Although the Web Literacy Map makes sense to westerners in developed countries, there was a feeling among some interviewees that we don’t currently ‘make the case’ for the web. Why is it important? Why should people pay to get online? What benefits does it bring? We need to address this question before, or perhaps during, their introduction to the competencies included in the Web Literacy Map.

10. Arbitrary separation of ‘Security’ and ‘Privacy’ competencies

At present, ‘Privacy’ is a competency under the ‘Exploring’ strand, and ‘Security’ is a competency under the ‘Connecting’ strand. However, there’s a lot of interplay, overlap, and connections between the two. Although interviewees thought that they should be addressed explicitly, there was a level of dissatisfaction with the way it’s currently approached in the Web Literacy Map.

11. Better localization required

Those I interviewed from outside North America and the UK expressed some frustration at the lack of transparency around localization. One in particular had tried to get involved, but became demotivated by a lack of response when posing suggestions and questions via Transifex. Another mentioned that it was important not to focus on translation from English to other languages, but to generate local content. The idea of badges for localization work was mentioned on more than one occasion.

12. The Web Literacy Map should be remixable

Although many interviewees approached it from different angles, there was a distinct feeling that the Web Literacy Map should somehow be remixable. Some used a GitHub metaphor to talk of the ‘main branch’ and ‘forks’. Others wanted a ‘Remix’ button next to the map in a similar vein to Thimble and Popcorn Maker resources. This would allow for multiple versions of the map that could be contextualized and localized while still maintaining a shared vocabulary and single point of reference.

13. Tie more closely to the Mozilla Mission

One of the things I wanted to find out through gentle probing during this series of interviews was whether we should consider re-including the fourth ‘Protecting’ strand we jettisoned before reaching v1.0. At the time, we thought that ‘protecting the web’ was too political and Mozilla-specific to include in what was then a Web Literacy ‘Standard’. However, a lot has changed in a year - both with Mozilla and with the web. Although I got the feeling that interviewees were happy to tie the Web Literacy Map more closely to the Mozilla Mission, there wasn’t overall an appetite for an additional column. Instead, people talked about ‘weaving’ it throughout the other competencies.

14. Use cross-cutting themes to connect news events to web literacy

When we developed the first version of the Web Literacy Map, we didn’t include ‘meta-level’ things such as ‘Identity’ and ‘storytelling’. Along with ‘mobile’, these ideas seemed too large or nebulous to be distinct competencies. It was interesting, therefore, to hear some interviewees talk of hooking people’s interest via news items or the zeitgeist. The topical example, given the timing of the interviews, tended to be getting people interested in ‘Privacy’ and ‘Security’ via the iCloud celebrity photo leaks.

15. Develop user stories

Some interviewees felt that the Web Literacy Map currently lacks a ‘human’ dimension that we could rectify through the inclusion of some case studies showing real people who have learned a particular skill or competency. These could look similar to the UX Personas work.

16. Improve the ‘flow’ of webmaker.org for users

This is slightly outside the purview of the Web Literacy Map per se, but enough interviewees brought it up to surface it here. The feeling is that the connection between Webmaker Tools, the Web Literacy Map, and Webmaker badges isn’t clear. There should be a direct and obvious link between them. For instance, web literacy badges should be included in each competency page. Some even suggested a learner dashboard similar to the one Jess Klein proposed back in 2012.

17. Bake web literacy into Firefox

This, again, is veering away from the Web Literacy Map itself, but many interviewees mentioned how Mozilla should ‘differentiate’ Firefox within the market by allowing you to develop your web literacy skills ‘in the wild’. Some had specific examples of how this could work (“Hey, you just connected to a website using HTTPS, want to learn more?”) while others just had a feeling we should join things up a bit better.

18. Identify ‘foundational’ competencies

Although we explicitly avoided doing this with the first version of the Web Literacy Map, for some interviewees, having a set of ‘foundational’ competencies would be a plus point. It would give a starting point for those new to the area, and allow us to assume a baseline level from which the other competencies could be developed. We could also save the ‘darker’ aspects of the web for later to avoid scaring people off.

19. Avoid scope creep

Many interviewees warned against ‘scope creep’, or trying to cram too much into the Web Literacy Map. On the whole, lots of the people I spoke to like it just the way it is, with one saying that it would be relevant for a ‘good few years yet’. One of the valuable things about the Web Literacy Map is that it has a clear focus and scope, and the general feeling was that we should ensure we maintain that. There’s also a feeling that it has a ‘strong understanding of technology’ that should be watered down.

20. Version control

If we’re updating the Web Literacy Map, users need to know which version they’re viewing - and how to access previous versions. This is so they can know how up-to-date the current version is. We should also allow them to view previous iterations that they may have used to build a curriculum still being used by other organizations.

21. Use as a funnel to wider Mozilla projects

We currently have mozilla.org/contribute and webmaker.org/getinvolved, but some interviewees thought that we could guide people who keep selecting certain competencies towards different Mozilla areas - for example OpenNews or Open Science. The latter is also developing its own version of the Web Literacy Map, so that could be a good link. Also, even more widely, Open Hatch provides open source ‘missions’ that we could make use of.


*Although I was limited by my language and geographic location, I’m pretty happy with the range of views collected. Instead of a dry, laboratory-like study looking for statistical significance, I decided to focus on people I knew would have good insights, and with whom I could have meaningful conversations. Over the next couple of weeks I’m going to create a survey for community members to get their thoughts on some of the more concrete proposals I’ll make for Web Literacy Map 2.0.


Comments? Feedback? I’m @dajbelshaw on Twitter, or you can email me: doug@mozillafoundation.org.

William LachanceUsing Flexbox in web applications

Over the last few months, I discovered the joy that is CSS Flexbox, which solves the “how do I lay out this set of divs horizontally or vertically” problem. I’ve used it in three projects so far:

  • Centering the timer interface in my meditation app, so that it scales nicely from a 320×480 FirefoxOS device all the way up to a high definition monitor
  • Laying out the chart / sidebar elements in the Eideticker dashboard so that maximum horizontal space is used
  • Fixing various problems in the Treeherder UI on smaller screens (see bug 1043474 and its dependent bugs)

When I talk to people about their troubles with CSS, layout comes up very high on the list. Historically, basic layout problems like a panel of vertical buttons have been ridiculously difficult, involving hacks with floating divs and absolute positioning, or JavaScript layout libraries. This is why people write articles entitled “Give up and use tables”.

Flexbox has pretty much put an end to these problems for me. There’s no longer any need to “give up and use tables” because using flexbox is pretty much just *like* using tables for layout, just with more uniform and predictable behaviour. :) It’s so great. I think we’re pretty close to Flexbox being supported across all the major browsers, so it’s fair to start using it for custom web applications where compatibility with (e.g.) IE8 is not an issue.
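
As a quick illustration (my own minimal sketch, not taken from the MDN article, and the class names are made up), the classic “panel of vertical buttons” case reduces to a couple of declarations:

.button-panel {
  display: flex;
  flex-direction: column;  /* stack the buttons vertically */
  align-items: stretch;    /* each button fills the panel's width */
}
.button-panel > button {
  margin: 4px 0;           /* simple spacing; no floats or absolute positioning needed */
}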

To try and spread the word, I wrote up a howto article on using flexbox for web applications on MDN, covering some of the common use cases I mention above. If you’ve been curious about flexbox but unsure how to use it, please have a look.

Jennie Rose HalperinWhy I feel like an Open Source Failure

I presented a version of this talk at the Supporting Cultural Heritage Open Source Software (SCHOSS) Symposium in Atlanta, GA in September 2014. This talk was generously sponsored by LYRASIS and the Andrew Mellon Foundation.


I often feel like an Open Source failure.

I haven’t submitted 500 patches in my free time, I don’t spend my after-work hours rating html5 apps, and I was certainly not a 14 year old Linux user. Unlike the incredible group of teenaged boys with whom I write my Mozilla Communities newsletter and hang out with on IRC, I spent most of my time online at that age chatting with friends on AOL Instant Messenger and doing my homework.

I am a very poor programmer. My Wikipedia contributions are pretty sad. I sometimes use Powerpoint. I never donated my time to Open Source in the traditional sense until I started at Mozilla as a GNOME OPW intern and while the idea of data gets me excited, the thought of spending hours cleaning it is another story.

I was feeling this way the other day and chatting with a friend about how reading celebrity news often feels like a better choice after work than trying to find a new open source project to contribute to or making edits to Wikipedia. A few minutes later, a message popped up in my inbox from an old friend asking me to help him with his application to library school.

I dug up my statement of purpose and I was extremely heartened to read my words from three years ago:

I am particularly interested in the interaction between libraries and open source technology… I am interested in innovative use of physical and virtual space and democratic archival curation, providing free access to primary sources.

It felt good to know that I have always been interested in these topics but I didn’t know what that would look like until I discovered my place in the open source community. I feel like for many of us in the cultural heritage sector the lack of clarity about where we fit in is a major blocker, and I do think it can be associated with contribution to open source more generally. Douglas Atkin, Community Manager at Airbnb, claims that the two main questions people have when joining a community are “Are they like me? And will they like me?”. Of course, joining a community is a lot more complicated than that, but the lack of visibility of open source projects in the cultural heritage sector can make even locating a project a whole lot more complicated.

As we’ve discussed in this working group, the ethics of cultural heritage and Open Source overlap considerably and

the open source community considers those in the cultural heritage sector to be natural allies.

In his article “Who are you empowering?”, Hugh Rundle writes (I quote this article all the time because I believe it’s one of the best articles written about library tech recently):

A simple measure that improves privacy and security and saves money is to use open source software instead of proprietary software on public PCs.

Community-driven, non-profit, and not good at making money are just some of the attributes that most cultural heritage organizations and open source projects have in common, and yet, when choosing software for their patrons, most libraries and cultural heritage organizations choose proprietary systems, and cultural heritage professionals are not the strongest open source contributors or advocates.

The main reasons for this are, in my opinion:


1. Many people in cultural heritage don’t know what Open Source is.

In a recent survey I ran of the Code4Lib and UNC SILS listservs, nearly every person surveyed could accurately respond to the prompt “Define Open Source in one sentence” though the responses varied from community-based answers to answers solely about the source code.

My sample was biased toward programmers and young people (and perhaps people who knew how to use Google because many of the answers were directly lifted from the first line of the Wikipedia article about Open Source, which is definitely survey bias,) but I think that it is indicative of one of the larger questions of open source.

Is open source about the community, or is it about the source code?

There have been numerous articles and books written on this subject, many of which I can refer you to (and I am sure that you can refer me to as well!) but this question is fundamental to our work.

Many people, librarians and otherwise, will ask: (I would argue most, but I am operating on anecdotal evidence)

Why should we care about whether or not the code is open if we can’t edit it anyway? We just send our problems to the IT department and they fix it.

Many people in cultural heritage don’t have many feelings about open source because they simply don’t know what it is and cannot articulate the value of one over the other. Proprietary systems don’t advertise as proprietary, but open source constantly advertises as open source, and as I’ll get to later, proprietary systems have cornered the market.

This movement from darkness to clarity brings most to mind a story that Kathy Lussier told about the Evergreen project, where librarians who didn’t consider themselves “techy” jumped into IRC to tentatively ask a technical question and due to the friendliness of the Evergreen community, soon they were writing the documentation for the software themselves and were a vital part of their community, participating in conferences and growing their skills as contributors.

In this story, the Open Source community engaged the user and taught her the valuable skill of technical documentation. She also took control of the software she uses daily and was able to maintain and suggest features that she wanted to see. This situation was really a win-win all around.

What institution doesn’t want to see their staff so well trained on a system that they can write the documentation for it?


2. The majority of the market share in cultural heritage is closed-source, closed-access software and they are way better at advertising than Open Source companies.

Last year, my very wonderful boss in the cataloging and metadata department of the University of North Carolina at Chapel Hill came back from ALA Midwinter with goodies for me: pens and keychains and postits and tote bags and those cute little staplers. “I only took things from vendors we use,” she told me.

Linux and Firefox OS hold 21% of the world’s operating system market share. (Interestingly, this is more globally than iOS, but still half that of Windows. On mobile, iOS and Android are approximately equal.)

Similarly, free, open source systems for cultural heritage are unfortunately not a high percentage of the American market. Wikipedia has a great list of proprietary and open source ILSs and OPACs, the languages they’re written in, and their cost. Marshall Breeding writes that FOSS software is picking up some market share, but it is still “the alternative” for most cultural heritage organizations.

There are so many reasons for this small market share, but I would argue (as my previous anecdote did for me,) that a lot of it has to do with the fact that these proprietary vendors have much more money and are therefore a lot better at marketing to people in cultural heritage who are very focused on their work. We just want to be able to install the thing and then have it do the thing well enough. (An article in Library Journal in 2011 describes open source software as: “A lot of work, but a lot of control.”)

As Jack Reed from Stanford and others have pointed out, most of the cost of FOSS in cultural heritage is developer time, and many cultural heritage institutions believe that they don’t have those resources. (John Brice’s example at the Meadville Public Library proves that communities can come together with limited developers and resources in order to maintain vital and robust open source infrastructures as well as significantly cut costs.)

I learned at this year’s Wikiconference USA that academic publishers had the highest profit margin of any company in the country last year, ahead of Google and Apple.

The academic publishing model is, for more reasons than one, completely antithetical to the ethics of cultural heritage work, and yet they maintain a large portion of the cultural heritage market share in terms of both knowledge acquisition and software. Megan Forbes reminds us that the platform Collection Space was founded as the alternative to the market dominance of “several large, commercial vendors” and that cost put them “out of reach for most small and mid-sized institutions.”

Open source has the chance to reverse this vicious cycle, but institutions have to put their resources in people in order to grow.

While certain companies like OCLC are working toward a more equitable future, with caveats of course, I would argue that the majority of proprietary cultural heritage systems are providing an inferior product to a resource-poor community.


 3. People are tired and overworked, particularly in libraries, and to compound that, they don’t think they have the skills to contribute.

These are two separate issues, but they’re not entirely disparate so I am going to tackle them together.

There’s this conception outside of the library world that librarians are secret coders just waiting to emerge from their shells and start categorizing datatypes instead of MARC records (this is perhaps a misconception due to a lot of things, including the sheer diversity of types of jobs that people in cultural heritage fill, but hear me out.)

When surveyed, the skill that entering information science students most want to learn is “programming.” However, the majority of MLIS programs are still teaching Microsoft Word and beginning html as technology skills.

Learning to program computers takes time and instruction and while programs like Women who Code and Girl Develop It can begin educating librarians, we’re still faced with a workforce that’s over 80% female-identified that learned only proprietary systems in their work and a small number of technology skills in their MLIS degrees.

Library jobs, and further, cultural heritage jobs are dwindling. Many trained librarians, art historians, and archivists are working from grant to grant on low salaries with little security and massive amounts of student loans from both undergraduate and graduate school educations. If they’re lucky to get a job, watching television or doing the loads of professional development work they’re expected to do in their free time seems a much better choice after work than continuing to stare at a computer screen for a work-related task or learn something completely new. For reference: an entry-level computer programmer can expect to make over $70,000 per year on average. An entry-level librarian? Under $40,000. I know plenty of people in cultural heritage who have taken two jobs or jobs they hate just to make ends meet, and I am sure you do too.

One can easily say, “Contributing to open source teaches new skills!” but if you don’t know how to make non-code contributions or the project is not set up to accept those kinds of contributions, you don’t see an immediate pay-off in being involved with this project, and you are probably not willing to stay up all night learning to code when you have to be at work the next day or raise a family. Programs like Software Carpentry have proven that librarians, teachers, scientists, and other non-computer scientists are willing to put in that time and grow their skills, so to make any kind of claim without research would be a reach and possibly erroneous, but I would argue that most cultural heritage organizations are not set up in a way to nurture their employees for this kind of professional development. (Not because they don’t want to, necessarily, but because they feel they can’t or they don’t see the immediate value in it.)

I could go on and on about how a lot of these problems are indicative of cultural heritage work being an historically classed and feminized professional grouping, but I will spare you right now, although you’re not safe if you go to the bar with me later.

In addition, many open source projects operate with a “patches welcome!” or “go ahead, jump in!” or “We don’t need a code of conduct because we’re all nice guys here!” mindset, which is not helpful to beginning coders, women, or really, anyone outside of a few open source fanatics.

I’ve identified a lot of problems, but the title of this talk is “Creating the Conditions for Open Source Community” and I would be remiss if I didn’t talk about what works.

Diversification, both in terms of types of tasks and types of people and skillsets as well as a clear invitation to get involved are two absolute conditions for a healthy open source community.

Ask yourself the questions: Are you a tight-knit group with a lot of IRC in-jokes that new people may not understand? Are you all white men? Are you welcoming? Paraphrasing my colleague Sean Bolton: the steps to an inviting community are to build understanding, build connections, build clarity, build trust, and build pilots, which together create a win-win.

As communities grow, it’s important to be able to recognize and support contributors in ways that feel meaningful. That could be a trip to a conference they want to attend, a Linkedin recommendation, a professional badge, or a reference, or best yet: you could ask them what they want. Our network for contributors and staff is adding a “preferred recognition” system. Don’t know what I want? Check out my social profile. (The answer is usually chocolate, but I’m easy.)

Finding diverse contribution opportunities has been difficult for open source since, well, the beginning of open source. Even for us at Mozilla, with our highly diverse international community and hundreds of ways to get involved, we often struggle to bring a diversity of voices into the conversation, and to find meaningful pathways and recognition systems for our 10,000 contributors.

In my mind, education is perhaps the most important part of bringing in first-time contributors. Organizations like Open Hatch and Software Carpentry provide low-cost, high-value workshops for new contributors to locate and become a part of Open Source in a meaningful and sustained manner. Our Webmaker program introduces technical skills in a dynamic and exciting way for every age.

Mentorship is the last very important aspect of creating the conditions for participation. Having a friend or a buddy or a champion from the beginning is perhaps the greatest motivator according to research from a variety of different papers. Personal connection runs deep, and is a major indicator for community health. I’d like to bring mentorship into our conversation today and I hope that we can explore that in greater depth in the next few hours.

With mentorship and 1:1 connection, you may not see an immediate uptick in your project’s contributions, but a friend tells a friend tells a friend and then eventually you have a small army of motivated cultural heritage workers looking to take back their knowledge.

You too can achieve on-the-ground action. You are the change you wish to see.

Are you working in a cultural heritage institution and are about to switch systems? Help your institution switch to the open source solution and point out the benefits of their community. Learning to program? Check out the Open Hatch list of easy bugs to fix! Are you doing patron education? Teach them Libre Office and the values around it. Are you looking for programming for your library? Hold a Wikipedia edit-a-thon. Working in a library? Try working open for a week and see what happens. Already part of an open source community? Mentor a new contributor or open up your functional area for contribution.

It’s more than just “if you build it, they will come.”

If you make open source your mission, people will want to step up to the plate.

To close, I’m going to tell a story that I can’t take credit for, but I will tell it anyway.

We have a lot of ways to contribute at Mozilla. From code to running events to learning and teaching the Web, it can be occasionally overwhelming to find your fit.

A few months ago, my colleague decided to create a module and project around updating the Mozilla Wiki, a long-ignored, frequently used, and under-resourced part of our organization. As an information scientist and former archivist, I was psyched. The space that I called Mozilla’s collective memory was being revived!

We started meeting in April and it became clear that there were other wiki-fanatics in the organization who had been waiting for this opportunity to come up. People throughout the organization were psyched to be a part of it. In August, we held a fantastically successful workweek in London, reskinned the wiki, created a regular release cycle, wrote a manual and a best practice guide, and are still going strong with half contributors and half paid-staff as a regular working group within the organization. Our work has been generally lauded throughout the project, and we’re working hard to make our wiki the resource it can be for contributors and staff.

To me, that was the magic of open source. I met some of my best friends, and at the end of the week, we were a cohesive unit moving forward to share knowledge through our organization and beyond. And isn’t that a basic value of cultural heritage work?

I am still an open source failure. I am not a code fanatic, and I like the ease-of-use of my used iPhone. I don’t listen to techno and write JavaScript all night, and I would generally rather read a book than go to a hackathon.

And despite all this, I still feel like I’ve found my community.

I am involved with open source because I am ethically committed to it, because I want to educate my community of practice and my local community about what working open can bring to them.

When people ask me how I got involved with open source, my answer is: I had a great mentor, an incredible community and contributor base, and there are many ways to get involved in open source.

While this may feel like a new frontier for cultural heritage, I know we can do more and do better.

Open up your work as much as you can. Draw on the many, many intelligent people doing work in the field. Educate yourself and others about the value that open source can bring to your institution. Mentor someone new, even if you’re shy. Connect with the community and treat your fellow contributors with respect. Who knows?

You may get an open source failure like me to contribute to your project.

Dustin J. Mitchellfwunit: Unit Tests for your Network

I find your lack of unit tests ... disturbing

It's established fact by now that code should be tested. The benefits are many:

  • Exercising the code;
  • Reducing ambiguity by restating the desired behavior (in the implementation, in the tests, and maybe even a third time in the documentation); and
  • Verifying that the desired behavior remains unchanged when the code is refactored.

System administrators are increasingly thinking of infrastructure as code and reaping the benefits of testing, review, version control, collaboration, and so on. In the networking world, this typically implies "software defined networking" (SDN), a substantial change from the typical approach to network system configuration.

At Mozilla, we haven't taken the SDN plunge yet, although there are plans in the works. In the interim, we maintain very complex firewall configurations by hand. Understanding how all of the rules fit together and making manual changes is often difficult and error-prone. Furthermore, after years of piece-by-piece modifications to our flows, the only comprehensive summary of our network flows are the firewall configurations themselves. And those are not very readable for anyone not familiar with firewalls!

The difficulty and errors come from the gap between the request for a flow and the final implementation, perhaps made across several firewalls. If everyone -- requester and requestee -- had access to a single, readable document specifying what the flows should look like, then requests for modification could be more explicit and easier to translate into configuration. If we have a way to verify automatically that the firewall configurations match the document, then we can catch errors early, too.

I set about trying to find a way to implement this. After experimenting with various ways to write down flow definitions and parse them, I realized that the verification tests could be the flow document. The idea is to write a set of tests, in Python since it's the lingua franca of Mozilla, which can be read by both the firewall experts and the users requesting a change to the flows. To change flows, change the tests -- a diff makes the request unambiguous. To verify the result, just run the tests.

fwunit

I designed fwunit to support this: unit tests for flows. The idea is to pull in "live" flow configurations and then write tests that verify properties of those configurations. The tool supports reading Juniper SRX configurations as well as Amazon AWS security groups for EC2 instances, and can be extended easily. It can combine rules from several sources (for example, firewalls for each datacenter and several AWS VPCs) using a simple description of the network topology.

As a simple example, here's a test to make sure that the appropriate VLANs have access to the DeployStudio servers:

def test_install_build():
    rules.assertPermits(
        test_releng_scl3 + try_releng_scl3 + build_releng_scl3,
        deploystudio_servers,
        'deploystudio')

The rules instance there is a compact representation of all allowed network flows, deduced from firewall and AWS configurations with the fwunit command line tool. The assertPermits method asserts that the rules permit traffic from the test, try, and build VLANs to the deploystudio servers, using the "deploystudio" application. That all reads pretty naturally from the Python code.

At Mozilla

We glue the whole thing together with a shell script that pulls the tests from our private git repository, runs fwunit to get the latest configuration information, and then runs the tests. Any failures are reported by email, at which point we know that our document (the tests) doesn't match reality, and can take appropriate action.

We're still working on the details of the process involved in changing configurations -- do we update the tests first, or the configuration? Who is responsible for writing or modifying the tests -- the requester, or the person making the configuration change? Whatever we decide, it needs to maximize the benefits without placing undue load on any of the busy people involved in changing network flows.

Benefits

It's early days, but this approach has already paid off handsomely.

  • As expected, it's a readable, authoritative, verifiable account of our network configuration. Requirements met -- awesome!
  • With all tests in place, netops can easily "refactor" the configurations, using fwunit to verify that no expected behavior has changed. We've deferred a lot of minor cleanups as high-risk with low reward; easy verification should substantially reduce that risk.
  • Just about every test I've written has revealed some subtle misconfiguration -- either a flow that was requested incorrectly, or one that was configured incorrectly. These turn into flow-request bugs that can be dealt with at a "normal" pace, rather than the mad race to debug and fix that would occur later, when they impacted production operations.

Get Involved

I'm a Mozillian, so naturally fwunit is open source and designed to be useful to more than just Mozilla. If this sounds useful, please use it, and I'd love to hear from you about how I might make it work better for you. If you're interested in hacking on the software, there are a number of open issues in the github repo just waiting for a pull request.

Ludovic HirlimannTips on organizing a pgp key signing party

Over the years I’ve organized or tried to organize pgp key signing parties every time I go somewhere. In the last year I’ve organized 3 that were successful (e.g. with more than 10 attendees).

1. Have a venue

I’ve tried a bunch of times to have people show up in the morning at the hotel I was staying at - that doesn’t work. Having catering at the venue is even better; it will encourage people to come from far away (or commute a long distance). Try to mark the path through the venue with signs (paper sheets saying “PGP key signing party” with arrows help).

2. Date and time

Meeting in the evening after work works best (after 18:00 or 18:30).

Let people know how long it will take (count one hour per 30 participants).

3. Make people sign up

That makes people think twice before saying they will attend. It’s also an easy way for you to know how much beer, cola, etc. you’ll need to provide if you cater food.

I’ve been using eventbrite to manage attendance at my last three meetings; it lets me:

  • know who is coming
  • mass-mail participants
  • give them a calendar reminder

4. Reach out

For such a party you need people to attend so you need to reach out.

I always start with a search on biglumber.com to find the people using gpg who are registered on that site for the area I’m visiting (see below for what I send).

Then I look for local Linux user groups / *BSD groups and send them an announcement with:

  • date
  • venue
  • link to eventbrite and why I use it
  • ask them to forward (they know the area better than you)
  • I also use lanyrd and twitter but I’m not convinced that it works.

For my last announcement it looked like this:

Subject: GnuPG / PGP key signing party September 26 2014
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="t01Mpe56TgLc7mgHKVMajjwkqQdw8XvI4"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--t01Mpe56TgLc7mgHKVMajjwkqQdw8XvI4
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable

Hello my name is ludovic,

I'm a sysadmins at mozilla working remote from europe. I've been
involved with Thunderbird a lot (and still am). I'm organizing a pgp Key
signing party in the Mozilla san francisco office on September the 26th
2014 from 6PM to 8PM.

For security and assurances reasons I need to count how many people will
attend. I'v setup a eventbrite for that at
https://www.eventbrite.com/e/gnupg-pgp-key-signing-party-making-the-web-o=
f-trust-stronger-tickets-12867542165
(please take one ticket if you think about attending - If you change you
mind cancel so more people can come).

I will use the eventbrite tool to send reminders and I will try to make
a list with keys and fingerprint before the event to make things more
manageable (but I don't promise).

for those using lanyrd you will be able to use http://lanyrd.com/ccckzw.

Ludovic
ps sent to buug.org,nblug.org end penlug.org - please feel free to post
where appropriate ( the more the meerier, the stronger the web of trust).=

ps2 I have contacted people listed on biglumber to have more gpg related
people show up.

--=20
[:Usul] MOC Team at Mozilla
QA Lead fof Thunderbird
http://sietch-tabr.tumblr.com/ - http://weusepgp.info/

5. Make it easy to attend

As noted above, making a list of participants to hand out helps a lot (I’ve used http://www.phildev.net/pius/ and my own stuff to make a list). It makes things easier for you and for attendees. Tell people what they need to bring (IDs, a pen, printed fingerprints if you don’t provide a list).

6. Send reminders

Send people reminders and let them know how many people intend to show up. It boosts attendance.

Erik VoldJetpack Pro Tip - Using The Add-on Debugger With JPM

Did you know that there is an Add-on Debugger in Firefox? Good for you!

Now, with JPM, using the Add-on Debugger is even easier. To use the add-on debugger automatically when using jpm, you simply need to add the --debug option.

So the typical:

jpm run -b nightly

Would become:

jpm run -b nightly --debug

Chris McAvoyMe and Open Badges – Different, but the same

Hi there, if you read this blog it’s probably for one of three things,

1) my investigation of the life of Isham Randolph, the chief engineer of the Chicago Sanitary and Ship canal.
2) you know me and you want to see what I’m doing but you haven’t discovered Twitter or Facebook yet.
3) Open Badges.

This is a quick update for everyone in that third group, the Open Badges crew. I have some news.

When I joined the Open Badges project nearly three years ago, I knew this was something that once I joined, I wouldn’t leave. The idea of Open Badges hits me exactly where I live, at the corner of ‘life long learning’ and ‘appreciating people for who they are’. I’ve been fortunate that my love of life long learning and self-teaching led me down a path where I get to do what I love as my career. Not everyone is that fortunate. I see Open Badges as a way to make my very lucky career path the norm instead of the exception. I believe in the project, I believe in the goals and I’m never going to not work toward bringing that kind of opportunity to everyone regardless of the university they attended or the degree hanging on their wall.

This summer has been very exciting for me. I joined the Badge Alliance, chaired the BA standard working group and helped organize the first BA Technology Council. At the same time, I was a mentor for Chicago’s Tech Stars program and served as an advisor to a few startups in different stages of growth. The Badge Alliance work has been tremendously satisfying, the standard working group is about to release the first cycle report, and it’s been great to see our accomplishments all written in one place. We’ve made a lot of progress in a short amount of time. That said, my role at the Alliance has been focused on standards growth, some evangelism and guiding a small prototyping project. As much as I loved my summer, the projects and work don’t fit the path I was on. I’ve managed engineering teams for a while now, building products and big technology architectures. The process of guiding a standard is something I’m very interested in, but it doesn’t feel like a full-time job now. I like getting my hands dirty (in Emacs), I want to write code and direct some serious engineer workflow.

Let’s cut to the chase – after a bunch of discussions with Sunny Lee and Erin Knight, two of my favorite people in the whole world, I’ve decided to join Earshot, a Chicago big data / realtime geotargeted social media company, as their CTO. I’m not leaving the Badge Alliance. I’ll continue to serve as the BA director of technology, but as a volunteer. Earshot is a fantastic company with a great team. They understand the Open Badges project and want me to continue to support the Badge Alliance. The Badge Alliance is a great team, they understand that I want to build as much as I want to guide. I’m so grateful to everyone involved for being supportive of me here, I can think of dozens of ways this wouldn’t have worked out. Just a bit of life lesson – as much as you can, work with people who really care about you, it leads to situations like this, where everyone gets what they really need.

The demands of a company moving as fast as Earshot will mean that I’ll be less available, but no less involved in the growth of the Badge Alliance and the Open Badges project. From a tactical perspective, Sunny Lee will be taking over as chair of the standard working group. I’ll still be an active member. I’ll also continue to represent the BA (along with Sunny) in the W3C credentials community group.

If you have any questions, please reach out to me! I’ll still have my chris@badgealliance.org email address…use it!

Christian HeilmannReconnecting at TEDxLinz – impressions, slides, resources

I just returned from Linz, Austria, where I spoke at TEDxLinz yesterday. After my stint at TEDxThessaloniki earlier in the year I was very proud to be invited to another one and love the variety of talks you encounter there.

TEDx_Linz_2014-5783

The overall topic of the event was “re-connect” and I was very excited to hear all the talks covering a vast range of topics. The conference was bilingual with German (well, Austrian) talks and English ones. Oddly enough, no speaker was a native English speaker.

TEDx_Linz_2014-5622

My favourite parts were:

  • Ingrid Brodnig talking about online hate and how to battle it
  • Andrea Götzelmann talking about re-integrating people into their home countries after emigrating. A heart-warming story of helping out people who moved away and failed, only to return and succeed
  • Gergely Teglasy talking about creating a crowd-curated novel written on Facebook
  • Malin Elmlid of The Bread Exchange showing how her love of creating your own food got her out into the world to learn about all kinds of different cultures. And how doing an exchange of goods and services beats being paid.
  • Elisabeth Gatt-Iro and Stefan Gatt showing us how to keep our relationships fresh and let love listen.
  • Johanna Schuh explaining how asking herself questions about her own behaviour rather than throwing blame gave her much more peace and the ability to go out and speak to people.
  • Stefan Pawel enlightening us about how far ahead Linz is compared to a lot of other cities when it comes to connectivity (150 open hot spots, webspace for each city dweller)

The location was the convention centre of a steel factory and the stage setup was great and not over the top. The audience was very mixed and very excited and all the speakers did a great job mingling. Despite the impressive track record of all of them there was no sense of diva-ism or “parachute presenting”.
I had a lovely time at the speaker’s dinner and going to and from the hotel.

The hotel was a special case in itself: I felt like I was in an old movie and instead of using my laptop I was tempted to grow a tufty beard and wear layers and layers of clothes and a nice watch on a chain.

hotel room

My talk was about bringing the social back into social media or – in other words – about no longer chasing numbers of likes and inane comments and going back to a web of user generated content made by real people. I have made no qualms about it in the past that I dislike memes and animated GIFs cropped from a TV series or movie with a passion, and this was my chance to grand-stand about it.

I wanted the talk to be a response to the “Look up” and “Look down” videos about social oversharing leading to less human interaction. My goal was to move the conversation in a different direction, explaining that social media is for us to put up things we did and wanted to share. The big issue is that the addiction-inducing game mechanisms of social media platforms instead lead us to post as much as we can and try to be the most shared, instead of being the creators.

This also leads to addiction and thus to strange online behaviour, up to over-sharing material that might be used as blackmail against us.

My slides are on Slideshare.

Resources I covered in the talk:

Other than having a lot of fun on stage I also managed to tick some things off my bucket list:

TEDx_Linz_2014-5792

  • Vandalising a TEDx stage
  • Being on stage with my fly open
  • Using the words “sweater pillows” and “dangly bits” in a talk

I had a wonderful time all in all and I want to thank the organisers for having me, the audience for listening, the other speakers for their contribution and the caterers and volunteers for doing a great job to keep everybody happy.

TEDx_Linz_2014-5806

Eric ShepherdThe Sheppy Report: September 26, 2014

I can’t believe another week has already gone by. This is that time of the year where time starts to scream along toward the holidays at a frantic pace.

What I did this week

  • I’ve continued working heavily on the server-side sample component system
    • Implemented the startup script support, so that each sample component can start background services and the like as needed.
    • Planned and started implementation work on support for allocating ports each sample needs.
    • Designed tear-down process.
  • Created a new “download-desc” class for use on the Firefox landing page on MDN. This page offers download links for all Firefox channels, and this class is used to correct a visual glitch. The class has not as yet been placed into production on the main server though. See this bug to track landing of this class.
  • Updated the MDN administrators’ guide to include information on the new process for deploying changes to site CSS now that the old CustomCSS macro has been terminated on production.
  • Cleaned up Signing Mozilla apps for Mac OS X.
  • Created Using the Mozilla build VM, based heavily on Tim Taubert’s blog post and linked to it from appropriate landing pages.
  • Copy-edited and revised the Web Bluetooth API page.
  • Deleted a crufty page from the Window API.
  • Meetings about API documentation updates and more.

Wrap up

That’s a short-looking list but a lot of time went into many of the things on that list; between coding and research for the server-side component service and experiments with the excellent build VM (I did in fact download it and use it almost immediately to build a working Nightly), I had a lot to do!

My work continues to be fun and exciting, not to mention outright fascinating. I’m looking forward to more, next week.

Jeff WaldenMinor changes are coming to typed arrays in Firefox and ES6

JavaScript has long included typed arrays to efficiently store numeric arrays. Each kind of typed array had its own constructor. Typed arrays inherited from element-type-specific prototypes: Int8Array.prototype, Float64Array.prototype, Uint32Array.prototype, and so on. Each of these prototypes contained useful methods (set, subarray) and properties (buffer, byteOffset, length, byteLength) and inherited from Object.prototype.

This system is a reasonable way to expose typed arrays. Yet as typed arrays have grown, it’s grown unwieldy. When a new typed array method or property is added, distinct copies must be added to Int8Array.prototype, Float64Array.prototype, Uint32Array.prototype, &c. Likewise for “static” functions like Int8Array.from and Float64Array.from. These distinct copies cost memory: a small amount, but across many tabs, windows, and frames it can add up.

A better system

ES6 changes typed arrays to fix these issues. The typed array functions and properties now work on any typed array.

var f32 = new Float32Array(8); // all zeroes
var u8 = new Uint8Array([0, 1, 2, 3, 4, 5, 6, 7]);
Uint8Array.prototype.set.call(f32, u8); // f32 contains u8's values

ES6 thus only needs one centrally-stored copy of each function. All functions move to a single object, denoted %TypedArray%.prototype. The typed array prototypes then inherit from %TypedArray%.prototype to expose them.

assertEq(Object.getPrototypeOf(Uint8Array.prototype),
         Object.getPrototypeOf(Float64Array.prototype));
assertEq(Object.getPrototypeOf(Object.getPrototypeOf(Int32Array.prototype)),
         Object.prototype);
assertEq(Int16Array.prototype.subarray,
         Float32Array.prototype.subarray);

ES6 also changes the typed array constructors to inherit from the %TypedArray% constructor, on which functions like Float64Array.from and Int32Array.of live. (Neither function is in Firefox yet, but soon!)

assertEq(Object.getPrototypeOf(Uint8Array),
         Object.getPrototypeOf(Float64Array));
assertEq(Object.getPrototypeOf(Object.getPrototypeOf(Int32Array)),
         Function.prototype);

I implemented these changes a few days ago in Firefox. Grab a nightly build and test things out with a new profile.

Conclusion

In practice this won’t affect most typed array code. Unless you depend on the exact [[Prototype]] sequence or expect typed array methods to only work on corresponding typed arrays (and thus you’re deliberately extracting them to call in isolation), you probably won’t notice a thing. But it’s always good to know about language changes. And if you choose to polyfill an ES6 typed array function, you’ll need to understand %TypedArray% to do it correctly.
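To make that concrete, here is a minimal polyfill-style sketch (this is not code from Firefox or from any real polyfill; myFill is a made-up stand-in for whichever ES6 method you want to add). It shows why understanding %TypedArray%.prototype matters: on an ES6 engine you install one shared copy of a method there, while on an older engine you fall back to per-type copies.

// %TypedArray%.prototype is reachable as the [[Prototype]] of any
// per-type prototype; on pre-ES6 engines this is just Object.prototype.
var TypedArrayPrototype = Object.getPrototypeOf(Int8Array.prototype);

function myFill(value) { // hypothetical helper, standing in for e.g. ES6 fill()
  for (var i = 0; i < this.length; i++)
    this[i] = value;
  return this;
}

if (TypedArrayPrototype !== Object.prototype) {
  // ES6 semantics: one copy on %TypedArray%.prototype serves every element type.
  if (!TypedArrayPrototype.myFill)
    TypedArrayPrototype.myFill = myFill;
} else {
  // Pre-ES6 semantics: each per-type prototype needs its own copy.
  [Int8Array, Uint8Array, Uint8ClampedArray, Int16Array, Uint16Array,
   Int32Array, Uint32Array, Float32Array, Float64Array].forEach(function (ctor) {
    if (!ctor.prototype.myFill)
      ctor.prototype.myFill = myFill;
  });
}

new Float32Array(4).myFill(1.5); // Float32Array [ 1.5, 1.5, 1.5, 1.5 ]

The fallback branch is exactly the duplication the ES6 design removes: one copy per typed array kind instead of a single shared one.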

Frédéric HarperHTML, not just for desktops at Congreso Universitario Móvil

My translator, and me

At the beginning of the month, I was in Mexico to represent Mozilla at Congreso Universitario Móvil, an annual conference at the Universidad Nacional Autónoma de México. Unfortunately, I did not have the time to see much of Mexico City, but I had an amazing full day at the event. I started the fourth day of the conference with a keynote on Firefox OS.

 

You can also watch the recording of my session. The sound is not that good: the room was echoing a lot, but I promised the attendees that no matter what, I’d publish it online.

 

The attendees had so much interest in the Open Web that instead of taking a couple of minutes for questions, I did a full one-hour Q&A right after the keynote. They were supposed to have a one-hour break before the next session, but they used that time to learn more about Firefox OS, which is great. I think we could have happily continued with the questions, but I had to stop as it was time to start the three-hour hackathon. I opened the hackathon with an explanation of how we would spend the next hours together, and of how to build and debug your application.

 

I also recorded that presentation, which runs a bit over fifteen minutes as it includes an explanation of the hackathon itself. Again, we were in the same room, so the sound is not optimal.

 

Attendees worked hard to port their existing web applications using HTML, CSS, and JavaScript to make them work on our platform. After the hacking part of the day, I did three interviews with local media, and one is already online on Excélsior (in Spanish – English version translated by Google). The ones with Reforma and Financiero Bloomberg TV will follow soon. Overall, I had an amazing time again in Mexico, and I was amazed by the interest in HTML, the Open Web, and Firefox OS. Keep the great apps coming, Mexico!


--
HTML, not just for desktops at Congreso Universitario Móvil is a post on Out of Comfort Zone from Frédéric Harper

Gervase MarkhamPrevent Territoriality

Watch out for participants who try to stake out exclusive ownership of certain areas of the project, and who seem to want to do all the work in those areas, to the extent of aggressively taking over work that others start. Such behavior may even seem healthy at first. After all, on the surface it looks like the person is taking on more responsibility, and showing increased activity within a given area. But in the long run, it is destructive. When people sense a “no trespassing” sign, they stay away. This results in reduced review in that area, and greater fragility, because the lone developer becomes a single point of failure. Worse, it fractures the cooperative, egalitarian spirit of the project. The theory should always be that any developer is welcome to help out on any task at any time.

— Karl Fogel, Producing Open Source Software

Soledad PenadesBerlin Web Audio Hack Day 2014

As with the Extensible Web Summit, we wrote some notes collaboratively. Here are the notes for the Web Audio Hackday!

We started the day with me being late because I made a series of badly timed bad decisions that ended up with me taking the wrong untergrund lines. In short: I don’t know how to metro in Berlin in the mornings and I’m still so sorry.

I finally arrived at SoundCloud’s offices, and it was cool that Jan was still doing the presentations, so Tiffany gave me a giant glass of water and I almost drank it all while they finished. Then I set up my computer and proceeded to give my talk/workshop!

It was an improved and revised version of the beta-talk I gave at Mozilla London the week before last:


Note to self: maybe remove red banners behind me if wearing a red shirt, so as not to blend with them

Sadly it wasn’t recorded and I didn’t screencast it either, so you’ll have to make do with the slides and the code for the slides (which includes the examples). Or maybe wait until I maybe run this workshop again (which I have already been asked to do!)

Jordan Santell and the Web Audio Editor in Firefox Devtools

Then Jordan (of dancer.js and component.fm fame) talked about the fancy new Web Audio Editor which is one of the latest tools to join the Firefox Devtools collection of awesome—and it just appeared in Firefox Stable (32) so you don’t even need to run Beta, Aurora or Nightly to use it! (I talked a bit about it already).

You can use the editor to visualise the audio graph, change values of the nodes and also detect if you have a memory leak when allocating nodes (which is something that is part of the normal workflow of working with Web Audio).

There was a nice plug to Are We Dubstep Yet?, the minisite I am building to keep track of bugs in the Web Audio Editor. Yay plugs!

are we dubstep yet?

Jordan’s slides are here. You can also watch his JSConf talk where he introduced an early version of the tools!

Chris Wilson and the Web MIDI API

Finally the mighty Chris Wilson explained how the Web MIDI API works and made some demos using a few and assorted MIDI devices he had brought with him: a keyboard, pads, a DJ deck controller…!

It’s interesting that most of the development of the Web MIDI implementation seems to be happening in Japan, so they are super original in their examples.

Chris’ slides on Web MIDI, and his other slides on audio in general.

Hacking + Hacks!

I think we had lunch then… and then it was HACK TIME! But before actually getting started, some people pitched their idea to see if someone else wanted to collaborate with them and hack together. I think that was a really neat idea :-)

Myself, I spent the hack time…

  • reconnecting with old acquaintances
  • answering questions! but very few of them and none of them were the usual “but why doesn’t my oscillator start anymore?” but more interesting ones, so that was cool!
  • asking questions! to Chris mostly–one cannot ask questions to a spec editor in person every day!
  • and even started a hack which I didn’t finish: visualising custom periodic waves for use with an Oscillator Node, given the harmonics array. I gave myself the idea while I was doing the workshop, which is a terrible thing to do to myself, as I was distracting myself and wanted to hack on that instead of finishing the workshop. My brain probably hates itself, or me in general.

Also this was really cool:

I’m always super aware that weird sounds might be coming out of any of the devices in my desk when I’m testing web audio stuff, so it was fun to see I’m not the only one feeling that way :D

After hack time, the hacks were presented:

These are the people that submitted a hack, in the same order they appear in the video. Not all of them have published their hack code so if you are one of those, please do and write a comment so I can update this post!

  1. Jelle Akkerman (github, twitter) – NoOsc was an experiment using NoFlo, trying to build something very visual and cool, super suitable for live-acts. I really liked the idea!
  2. Guillaume Marty (github, twitter) – a BPM detection algorithm, using the OfflineAudioContext
  3. Erik Woitschig (twitter) – Using SoundCloud as sample database
  4. Daniel Roth, Jonathan Lundin (twitter, github), Felix Niklas (twitter, github) – Oscillator reacting to mobile phone gyroscope – it sounded really nice and I liked that the same code worked even in iPads. Yay Web Audio!
  5. Chris Greeff (twitter, github), Nick Lockhart (twitter, https://github.com/N1ck) – Beaty Bird – source code (Second prize)
  6. Lisa Passing (github, twitter) – One Hand Soundgame – source code (Third prize)
  7. Thomas Fett (twitter, github) – Remix at once – source code (Fourth prize)
  8. Evan Sonderegger (twittergithub) – Vector Scope in Web Audio API (First prize)

The hardware prizes were sponsored by Mozilla. And the software prizes by Bitwig.

The unofficial/community Web Audio logo

We also publicised a thing that Martin Holzhauer had worked on: the unofficial/community Web Audio logo!

Web Audio logo

Here’s the SVG. Many thanks to Martin for putting it all together!

As far as we know there is/was not an official logo. I totally love this one as it kind of matches the various JS* aesthetics and it is immediately understandable–most of the W3C api icons are just too fancy for anyone to grasp what they actually mean. Sure they look cool, but they do not work as a logo from a purely functional perspective.

And now, what?

Well, the Web Audio Conference is next January in Paris. They’re still accepting submissions for papers until next month, so why don’t you go and submit something? :-)

Hopefully see you there!

flattr this!

Daniel StenbergChanging networks with Firefox running

Short recap: I work on network code for Mozilla. Bug 939318 is one of “mine” – yesterday I landed a fix (a patch series with 6 individual patches) for this and I wanted to explain what goodness should (might?) come from this!

diffstat

diffstat reports this on the complete patch series:

29 files changed, 920 insertions(+), 162 deletions(-)

The change set can be seen in mozilla-central here. But I guess a proper description is easier for most…

The bouncy road to inclusion

This feature set and the problems associated with it have been one of the most time consuming things I’ve developed in recent years, I mean in relation to the amount of actual code produced. I’ve had it “landed” in the mozilla-inbound tree five times and yanked out again (within a few hours) before it finally landed correctly, every time of course reverted because I had bugs remaining in there. The bugs in this have been really tricky, with a whole bunch of timing-dependent and race-like problems, and me being unfamiliar with a large part of the code base that I’m working on. It has been a highly frustrating journey at times but I’d like to think that I’ve learned a lot about Firefox internals partly thanks to this resistance.

As I write this, it has not even been 24 hours since it got into m-c so there’s of course still a risk there’s an ugly bug or two left, but then I also hope to fix the pending problems without having to revert and re-apply the whole series…

Many ways to connect to networks

Firefox Nightly screenshot

In many network setups today, you get an environment and a network “experience” that is crafted for that particular place. For example you may connect to your work over a VPN where you get your company DNS and you can access sites and services you can’t even see when you connect from the wifi in your favorite coffee shop. The same thing goes for when you connect to that captive portal over wifi until you realize you used the wrong SSID and you switch over to the access point you were supposed to use.

For every one of these setups, you get different DHCP setups passed down and you get a new DNS server and so on.

These days laptop lids are getting closed (and the machine is put to sleep) at one place to be opened at a completely different location and rarely is the machine rebooted or the browser shut down.

Switching between networks

Switching from one of the networks to the next is of course something your operating system handles gracefully. You can even easily be connected to multiple ones simultaneously like if you have both an Ethernet card and wifi.

Enter browsers. Or in this case let’s be specific and talk about Firefox since this is what I work with and on. Firefox – like other browsers – will cache images, it will cache DNS responses, it maintains connections to sites a while even after use, it connects to some sites even before you “go there” and so on. All in the name of giving users as good and as fast an experience as possible.

The combination of keeping things cached and alive, together with the fact that switching networks brings new perspectives and new “truths” offers challenges.

Realizing the situation is new

The changes are not at all mind-bending but are basically these three parts:

  1. Make sure that we detect network changes, even if just the set of available interfaces change. Send an event for this.
  2. Make sure the necessary parts of the code listens and understands this “network topology changed” event and acts on it accordingly
  3. Consider coming back from “sleep” to be a network changed event since we just cannot be sure of the network situation anymore.

The initial work has been done for Windows only, but it allows us to smooth out any rough edges before we continue and make more platforms support this.

The network changed event can be disabled by switching off the new “network.notify.changed” preference. If you do end up feeling a need for that, I really hope you file a bug explaining the details so that we can work on fixing it!

Act accordingly

So what is acting properly? What if the network changes in a way so that your active connections suddenly can’t be used anymore due to the new rules and routing and what not? We attack this problem like this: once we get a “network changed” event, we “allow” connections to prove that they are still alive and if not they’re torn down and re-setup when the user tries to reload or whatever. For plain old HTTP(S) this means just seeing if traffic arrives or can be sent off within N seconds, and for websockets, SPDY and HTTP2 connections it involves sending an actual ping frame and checking for a response.

The internal DNS cache was a bit tricky to handle. I initially just flushed all entries, but that turned out nasty as I then also killed ongoing name resolves, which caused errors to get returned. Now I instead added logic that flushes all the already resolved names and makes names “in transit” get resolved again, so that they are resolved on the (potentially) new network, which can then return different addresses for the same host name(s).

This should drastically reduce the situation that could happen before when Firefox would basically just freeze and not want to do any requests until you closed and restarted it. (Or waited long enough for other timeouts to trigger.)

The ‘N seconds’ waiting period above is actually 5 seconds by default and there’s a new preference called “network.http.network-changed.timeout” that can be altered at will to allow some experimentation regarding what the perfect interval truly is for you.

Firefox Ball

Initially on Windows only

My initial work has been limited to getting the changed event code done for the Windows back-end only (since the code that figures out if there’s news on the network setup is highly system specific), and now that this step has been taken, the plan is to introduce the same back-end logic to the other platforms. The code that acts on the event is pretty much generic and is mostly in place already, so it is now a matter of making sure the event can be generated everywhere.

My plan is to start on Firefox OS and then see if I can assist with the same thing in Firefox on Android. Then finally Linux and Mac.

I started on Windows since Windows is one of the platforms with the largest amount of Firefox users and thus one of the most prioritized ones.

More to do

There’s separate work going on for properly detecting captive portals. You know, the annoying things hotels and airports, for example, tend to have that force you to do some login dance before you are allowed to use the internet at that location. When such a captive portal is opened up, that should probably qualify as a network change – but it isn’t yet.

ArkyBuilding Boot2Gecko(B2G) on Ubuntu

Update: This documentation is out-of-date: Please read developer.mozilla.org/en-US/Firefox_OS/Building for latest information

You have heard about the Mozilla Boot2Gecko (B2G) mobile operating system. Boot2Gecko's Gaia user interface is developed entirely using HTML, CSS and JavaScript web technologies. If you are an experienced web developer you can contribute to the Gaia UI and develop new Boot2Gecko applications with ease. In this post I'll explain how to set up the Boot2Gecko (B2G) development environment on your personal computer.


You can run Boot2Gecko (B2G) inside an emulator or inside a Firefox web browser. Using Boot2Gecko (B2G) with the QEMU emulator is very resource intensive, so we will focus on the latter in this post. I'll assume you are comfortable with the Mozilla build environment. So, get that pot of coffee brewing and prepare for a long night of hacking.


Building Boot2Gecko(B2G) Firefox App


Before you start, let us make sure that you have all the prerequisites for building Firefox on your computer. Please have a look at the build prerequisites for your operating system (Linux, Windows, or OS X).




# Let's get the source code
# Download the mozilla-central repository

$ hg clone http://hg.mozilla.org/mozilla-central mozilla-central

# Download the Gaia source code 

$ git clone https://github.com/mozilla-b2g/gaia gaia

# Change directory and create our profile 

$ cd gaia 

$ make profile 


# Change directory into your mozilla-central directory

$ cd mozilla-central


# Create a .mozconfig file inside your mozilla-central directory: 

$ nano .mozconfig 
mk_add_options MOZ_OBJDIR=../b2g-build
mk_add_options MOZ_MAKE_FLAGS="-j9 -s"
ac_add_options --enable-application=b2g
ac_add_options --disable-libjpeg-turbo
ac_add_options --enable-b2g-ril
ac_add_options --with-ccache=/usr/bin/ccache 

# Build the Firefox B2G app and wait for the build to finish 

$ make -f client.mk build 


# Create a simple b2g bash script to launch the B2G app; change paths to suit your environment
# Note: I have to use the -safe-mode option due to a bug on my Ubuntu box
 
#!/bin/sh 
export B2G_HOMESCREEN=http://homescreen.gaiamobile.org/
/home/arky/src/b2g-build/dist/bin/b2g -profile /home/arky/src/gaia/profile




If everything goes well, you should have Boot2Gecko running inside Firefox now.


Boot2Gecko running inside firefox on Ubuntu


Customizing Firefox B2G App

For a better Boot2Gecko (B2G) experience, we will customize a couple of Firefox features (offline cache and touch events) using a custom Firefox profile.



Create a Custom Firefox Profile

You can use the dist/bin/b2g -ProfileManager option to launch the Firefox Profile Manager. Create a new profile called 'b2g'. Now we can add customizations to this new profile.


On Linux computers, the profile is created under the ~/.mozilla/b2g/ directory. You can find information about the location of Firefox profiles for your operating system here.



You can launch B2G with your new custom profile using the '-P' option. Modify your B2G bash script to add the custom profile option: dist/bin/b2g -P b2g


Disable offline cache

Create a user.js file inside your custom 'b2g' Firefox profile directory. Add the following line to disable the offline cache.

user_pref('browser.cache.offline.enable', false);


Enabling Touch events

Add the following line in your user.js file inside your custom 'b2g' Firefox profile directory to enable touch events.

 user_pref('dom.w3c_touch_events.enabled', true);



That's it. You now have Boot2Gecko (B2G) running inside Firefox on your computer. Happy hacking!

Julien VehentShellshock IOC search using MIG

Shellshock is being exploited. People are analyzing malware dropped on systems via the bash vulnerability.

I wrote a MIG Action to check for these IOCs. I will keep updating it with more indicators as we discover them. To run this on your 32- or 64-bit Linux system, download the following archive: mig-shellshock.tar.xz

Download the archive and run mig-agent as follows:

$ wget https://jve.linuxwall.info/ressources/taf/mig-shellshock.tar.xz
$ sha256sum mig-shellshock.tar.xz 
0b527b86ae4803736c6892deb4b3477c7d6b66c27837b5532fb968705d968822  mig-shellshock.tar.xz
$ tar -xJvf mig-shellshock.tar.xz 
shellshock_iocs.json
mig-agent-linux64
mig-agent-linux32
$ ./mig-agent-linux64 -i shellshock_iocs.json

This will output results in JSON format. If you grep for "foundanything" and both values return "false", it means no IOC was found on your system. If you get "true", look at the detailed results in the JSON output to find out what exactly was found.

$ ./mig-agent-linux64 -i shellshock_iocs.json|grep foundanything
    "foundanything": false,
    "foundanything": false,

The full action for MIG is below. I will keep updating it over time, so I recommend you use the one below instead of the one in the archive.

{
  "name": "Shellshock IOCs (nginx and more)",
  "target": "os='linux' and heartbeattime \u003e NOW() - interval '5 minutes'",
  "threat": {
    "family": "malware",
    "level": "high"
  },
  "operations": [
    {
      "module": "filechecker",
      "parameters": {
        "searches": {
          "iocs": {
            "paths": [
              "/usr/bin",
              "/usr/sbin",
              "/bin",
              "/sbin",
              "/tmp"
            ],
            "sha256": [
              "73b0d95541c84965fa42c3e257bb349957b3be626dec9d55efcc6ebcba6fa489",
              "ae3b4f296957ee0a208003569647f04e585775be1f3992921af996b320cf520b",
              "2d3e0be24ef668b85ed48e81ebb50dce50612fb8dce96879f80306701bc41614",
              "2ff32fcfee5088b14ce6c96ccb47315d7172135b999767296682c368e3d5ccac",
              "1f5f14853819800e740d43c4919cc0cbb889d182cc213b0954251ee714a70e4b"
            ],
            "regexes": [
              "/bin/busybox;echo -e '\\\\147\\\\141\\\\171\\\\146\\\\147\\\\164'"
            ]
          }
        }
      }
    },
    {
      "module": "netstat",
      "parameters": {
        "connectedip": [
          "108.162.197.26",
          "162.253.66.76",
          "89.238.150.154",
          "198.46.135.194",
          "166.78.61.142",
          "23.235.43.31",
          "54.228.25.245",
          "23.235.43.21",
          "23.235.43.27",
          "198.58.106.99",
          "23.235.43.25",
          "23.235.43.23",
          "23.235.43.29",
          "108.174.50.137",
          "201.67.234.45",
          "128.199.216.68",
          "75.127.84.182",
          "82.118.242.223",
          "24.251.197.244",
          "166.78.61.142"
        ]
      }
    }
  ],
  "description": {
    "author": "Julien Vehent",
    "email": "ulfr@mozilla.com",
    "revision": 201409252305
  },
  "syntaxversion": 2
}

Armen ZambranoMaking mozharness easier to hack on and try support

Yesterday, we presented a series of proposed changes to Mozharness at the bi-weekly meeting.

We're mainly focused on making it easier for developers and allowing for further flexibility.
We will initially focus on the testing side of the automation and lay the groundwork for further improvements down the line.

The set of changes discussed for this quarter are:

  1. Move remaining set of configs to the tree - bug 1067535
    • This makes it easier to test harness changes on try
  2. Read more information from the in-tree configs - bug 1070041
    • This increases the number of harness parameters we can control from the tree
  3. Use structured output parsing instead of regular where it applies - bug 1068153
    • This is part of a larger goal where we make test reporting more reliable, easy to consume and less burdening on infrastructure
    • It's to establish a uniform criteria for setting a job status based on a test result that depends on structured log data (json) rather than regex-based output parsing
    • "How does a test turn a job red or orange?" 
    • We will then have a simple answer that is that same for all test harnesses
  4. Mozharness try support - bug 791924
    • This will allow us to lock which repo and revision of mozharness is checked out
    • This isolates mozharness changes to a single commit in the tree
    • This gives us try support for user repos (freedom to experiment with mozharness on try)


Even though we feel the pain of #4, we decided that #1 & #2 give developers immediate value, while for #4 we already know our painful workarounds.
I don't know if we'll complete #4 this quarter; however, we are committed to the first three.

If you want to contribute to the longer term vision on that proposal please let me know.


In the following weeks we will have more updates with regards to implementation details.

Mozilla Release Management TeamFirefox 33 beta6 to beta7

This beta has been driven by the NSS chemspill. We used this unexpected beta to test the behavior of 33 without OMTC under Windows.

  • 8 changesets
  • 232 files changed
  • 73163 insertions
  • 446 deletions

Extension   Occurrences
cc          73
h           45
py          23
c           11
vcproj      8
sh          7
xcconfig    6
mn          6
pump        4
mk          4
cpp         3
cbproj      3
txt         2
sln         2
plist       2
pbxproj     2
m4          2
html        2
def         2
mm          1
list        1
+           1
js          1
in          1
groupproj   1
dep         1
cmake       1
am          1
ac          1

Module      Occurrences
security    151
security    69
image       4
widget      1
modules     1
+           1
js          1
gfx         1

List of changesets:

Michael Wu: Bug 1062886 - Fix one color padded drawing path. r=seth, a=sledru - 232c3b4708b9
Michael Wu: Bug 1068230 - Don't use the gfxContext transform in intermediate surface. r=seth, a=sledru - bca0649c9b79
Douglas Crosher: Bug 1013996 - irregexp: Avoid unaligned accesses in ARM code. r=bhackett, a=sledru - 5e2a5b6c7a0d
Bas Schouten: Bug 1030147 - Switch off OMTC on windows. r=milan, a=sylvestre - f631df57b34c
Steven Michaud: Bug 1056251 - Changing to a Firefox window in a different workspace does not focus automatically. r=masayuki a=lmandel - 7c118b1cf343
Kai Engert: Bug 1064636, upgrade to NSS 3.17.1 release, r=rrelyea, a=lmandel - fb8ff9258d02
Matt Woodrow: Bug 1030147 - Release the DrawTarget to drop the surface ref in ThebesLayerD3D9. r=Bas a=lmandel CLOSED TREE - 280407351f1b
L. David Baron: Bug 1064636 followup: Add new function to config/external/nss/nss.def r=khuey a=bustage CLOSED TREE - 2431af782661

Mike HommeySo, hum, bash…

So, I guess you heard about the latest bash hole.

What baffles me is that the following still is allowed:

env echo='() { xterm;}' bash -c "echo this is a test"

Interesting replacements for “echo“, “xterm” and “echo this is a test” are left as an exercise to the reader.

Update: Another thing that bugs me: Why is this feature even enabled in posix mode? (the mode you get from bash --posix, or, more importantly, when running bash as sh) After all, export -f is a bashism.

Sean BoltonFrom the Furthest Edge to the Deepest Middle

In my role as Community Building Intern at Mozilla this summer, my goal has been to be explicit about how community building works so that people both internal and external to Mozilla can better understand and build upon this knowledge. This requires one of my favorite talents: connecting what emerges and making it a thing. We all experience this when we’ve been so immersed in something that we begin to notice patterns – our brains like to connect. One of my mentors, Dia Bondi, experienced this with her 21 Things, which she created during her time as a speech coach and still uses today in her work.

I set out to develop a mental model to help thing-ify this seemingly ambiguous concept of community building so that we all could collectively drive the conversation forward. (That might be the philosopher in me.) What emerged was this sort of fascinating overarching story: community building is connecting the furthest edge to the deepest middle (and making the process along that path easier). What I mean here is that the person with the largest of any form of distance must be able to connect to the hardest to reach person in the heart of the formal organization. For example, the 12 year old girl in Brazil who just taught herself some new JavaScript framework needs to be able to connect in some way to the module owner of that new JavaScript framework located in Finland because when they work together we all rise further together.

community building

The edge requires coordination from community. The center requires internal champions. The goal of community building is then to support community by creating structures that bridge community coordinators and internal champions while independently being or supporting the development of both. This structure allows for more action and creativity than no structure at all – a fundamental of design school.

Below is a model of community management. We see this theme of furthest edge to deepest middle. “It’s broken” is the edge. “I can do something about it” approaches the middle. This model shows how to take action and make the pathway from edge to middle easier.

community management

Community building is connecting the furthest edge to the deepest middle. It’s implicit. It’s obvious. But, when we can be explicit and talk about it we can figure out where and how to improve what works and focus less on what does not.


Ted ClancyA better way to input Vietnamese

Earlier this year I had the pleasure of implementing for Firefox OS an input method for Vietnamese (a language I have some familiarity with). After being dissatisfied with the Vietnamese input methods on other smartphones, I was eager to do something better.

I believe Firefox OS is now the easiest smartphone on the market for out-of-the-box typing of Vietnamese.

The Challenge of Vietnamese

Vietnamese uses the Latin alphabet, much like English, but it has an additional 7 letters with diacritics (Ă, Â, Đ, Ê, Ô, Ơ, Ư). In addition, each word can carry one of five tone marks. The combination of diacritics and tone marks means that the character set required for Vietnamese gets quite large. For example, there are 18 different Os (O, Ô, Ơ, Ò, Ồ, Ờ, Ỏ, Ổ, Ở, Õ, Ỗ, Ỡ, Ó, Ố, Ớ, Ọ, Ộ, Ợ). The letters F, J, W, and Z are unused. The language is (orthographically, at least) monosyllabic, so each syllable is written as a separate word.

This makes entering Vietnamese a little more difficult than most other Latin-based languages. Whereas languages like French benefit from dictionary lookup, where the user can type A-R-R-E-T-E and the system can then prompt for the options ARRÊTE or ARRÊTÉ, that is much less useful for Vietnamese, where the letters D-O can correspond to one of 25 different Vietnamese words (do, , , , dỗ, , dở, dỡ, dợ, đo, đò, đỏ, đó, đọ, đô, đồ, đổ, đỗ, đố, độ, đơ, đờ, đỡ, đớ, or đợ).

Other smartphone platforms have not dealt with this situation well. If you’ve tried to enter Vietnamese text on an iPhone, you’ll know how difficult it is. The user has two options. One is to use the Telex input method, which involves memorizing an arbitrary mapping of letters to tone marks. (It was originally designed as an encoding for sending Vietnamese messages over the Telex telegraph network.) It is user-unfriendly in the extreme, and not discoverable. The other option is to hold down a letter key to see variants with diacritics and tone marks. For example, you can hold down A for a second and then scroll through the 18 different As that appear. You do that every time you need to type a vowel, which is painfully slow.

Fortunately, this is not an intractable problem. In fact, it’s an opportunity to do better. (I can only assume that the sorry state of Vietnamese input on the iPhone speaks to a lack of concern about Vietnamese inside Apple’s hallowed walls, which is unfortunate because it’s not like there’s a shortage of Vietnamese people in San José.)

Crafting a Solution

To some degree, this was already a solved problem. Back in the days of typewriters, there was a Vietnamese layout called AĐERTY. It was based on the French AZERTY, but it moved the F, J, W, and Z keys to the periphery and added keys for Ă, Đ, Ơ, and Ư. It also had five dead keys. The dead keys contained:

  • a circumflex diacritic for typing the remaining letters (Â, Ê, and Ô);
  • the five tone marks; and
  • four glyphs each representing the kerned combination of the circumflex diacritic with a tone mark, needed where the two marks would otherwise overlap

Photo of a Vietnamese typewriter

My plan was to make a smartphone version of this typewriter. Already it would be an improvement over the iPhone. But since this is the computer age, there were more improvements I could make.

Firstly, I omitted F, J, W, and Z completely. If the user needs to type them — for a foreign word, perhaps — they can switch layouts to French. (Gaia will automatically switch to a different keyboard if you need to type a web address.) And obviously I could omit the glyphs that represent kerned pairs of diacritic & tone marks, since kerning is no longer a mechanical process.

The biggest change I made is that, rather than having keys for the five tone marks, words with tones appear as candidates after typing the letters. This has numerous benefits. It eliminates five weird-looking keys from the keyboard. It eliminates confusion about when to type the tone mark. (Tone marks are visually positioned in the middle of the word, but when writing Vietnamese by hand, tone marks are usually added last after writing the rest of the word.) It also saves a keystroke too, since we can automatically insert a space after the user selects the candidate. (For a word without a tone mark, the user can just hit the space bar. Think of the space bar as meaning “no tone”.)

This left just 26 letter keys plus one key for the circumflex diacritic. Firefox OS’s existing AZERTY layout had 26 letter keys plus one key for the apostrophe, so I put the circumflex where the apostrophe was. (The apostrophe is unused in Vietnamese.)

Screenshot of Vietnamese input method in use

In order to generate the tone candidates, I had to detect when the user had typed a valid Vietnamese syllable, because I didn’t want to display bizarre-looking nonsense as a candidate. Vietnamese has rules for what constitutes a valid syllable, based on phonotactics. And although the spelling isn’t purely phonetic (in particular, it inherits some peculiarities from Portuguese), it follows strict rules. This was the hardest part of writing the input method. I had to do some research about Vietnamese phonotactics and orthography. A good chunk of my code is dedicated to encoding these rules.

Knowing about the limited set of valid Vietnamese syllables, I was able to add some convenience to the input method. For example, if the user types V-I-E, a circumflex automatically appears on E because VIÊ is a valid sequence of letters in Vietnamese while VIE is not. If the user types T to complete the partial word VIÊT, only two tone candidates appear (VIẾT and VIỆT), because the other three tone marks can’t appear on a word ending with T.
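Here is a rough sketch, in JavaScript, of the candidate-generation idea described above (this is not the actual Gaia code; toneCandidates is a made-up helper, and the "put the mark on the last vowel" rule is a deliberate over-simplification of real Vietnamese orthography). It builds candidates by appending a combining tone mark to one vowel of the word and normalising to precomposed characters, and it applies the rule that words ending in a stop consonant only take the sắc or nặng tones:

var TONE_MARKS = {
  huyen: '\u0300',  // grave accent
  sac:   '\u0301',  // acute accent
  hoi:   '\u0309',  // hook above
  nga:   '\u0303',  // tilde
  nang:  '\u0323'   // dot below
};

function toneCandidates(word) {
  var vowels = 'aăâeêioôơuưy';
  // Over-simplified: place the tone mark on the last vowel of the word.
  var pos = -1;
  for (var i = 0; i < word.length; i++) {
    if (vowels.indexOf(word[i].toLowerCase()) !== -1)
      pos = i;
  }
  if (pos === -1)
    return [];

  // Words ending in a stop consonant (p, t, c, ch) only take sắc or nặng.
  var allowed = /(p|t|c|ch)$/i.test(word)
    ? ['sac', 'nang']
    : ['huyen', 'sac', 'hoi', 'nga', 'nang'];

  return allowed.map(function (tone) {
    var marked = word.slice(0, pos + 1) + TONE_MARKS[tone] + word.slice(pos + 1);
    return marked.normalize('NFC'); // compose e.g. 'ê' + acute into 'ế'
  });
}

console.log(toneCandidates('viêt')); // ["viết", "việt"]
console.log(toneCandidates('ma'));   // ["mà", "má", "mả", "mã", "mạ"]

The real input method additionally validates the whole syllable against Vietnamese phonotactics before offering any candidates, which is what keeps bizarre-looking nonsense from appearing.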

Using it yourself

You can try the keyboard for yourself at Timothy Guan‑tin Chien’s website.

The keyboard is not included in the default Gaia builds. To include it, add the following line to the Makefile in the gaia root directory:

GAIA_KEYBOARD_LAYOUTS=fr,vi-Typewriter

The code is open source. Please steal it for your own open source project.


Erik VoldJetpack Pro Tip - setImmediate and clearImmediate

Do you know about window.setImmediate or window.clearImmediate ?

Did you know that you can use these now with the Add-on SDK ?

We’ve managed to keep them a little secret, but they are awesome, because setImmediate is much quicker than setTimeout(fn, 0), especially if it is used a lot, as it would be in a loop or when used recursively. This is well described in the notes on window.setImmediate on MDN.

To use these functions with the Add-on SDK, do the following:

const { setImmediate, clearImmediate } = require("sdk/timers");

function doStuff() {}
let timerID = setImmediate(doStuff);  // to run `doStuff` in the next tick
clearImmediate(timerID)               // to cancel `doStuff`

Nicholas NethercoteYou should use WebRTC for your 1-on-1 video meetings

Did you know that Firefox 33 (currently in Beta) lets you make a Skype-like video call directly from one running Firefox instance to another without requiring an account with a central service (such as Skype or Vidyo)?

This feature is built on top of Firefox’s WebRTC support, and it’s kind of amazing.

It’s pretty easy to use: just click on the toolbar button that looks like a phone handset or a speech bubble (which one you see depends on which version of Firefox you have) and you’ll be given a URL with a call.mozilla.com domain name. [Update: depending on which beta version you have, you might need to set the loop.enabled preference in about:config, and possibly customize your toolbar to make the handset/bubble icon visible.] Send that URL to somebody else — via email, or IRC, or some other means — and when they visit that URL in Firefox 33 (or later) it will initiate a video call with you.

I’ve started using it for 1-on-1 meetings with other Mozilla employees and it works well. It’s nice to finally have an open source implementation of video calling. Give it a try!

Monty MontgomeryIntra-Paint: A new Daala demo from Jean-Marc Valin

Intra paint is not a technique that's part of the original Daala plan and, as of right now, we're not using it in Daala. Jean-Marc envisioned it as a simpler, potentially more effective replacement for intra-prediction. That didn't quite work out -- but it has useful and visually pleasing qualities that, of all things, make it an interesting postprocessing filter, especially for deringing.

Several people have said 'that should be an Instagram filter!' I'm sure Facebook could shake a few million loose for us to make that happen ;-)

Wil ClouserThe Great Add-on Bug Triage

The AMO team is meeting this week to discuss road maps and strategies, and among the topics is our backlog of open bugs. Since mid-2011 there have been around 1200 bugs open at any one time, on average.

Currently any interaction with AMO’s bugs is too time consuming: finding good first bugs, triaging existing bugs, organizing a chunk of bugs to fix in a milestone — they all require interacting with a list of 1200 bugs, many of which are years old and full of discussions by people who no longer contribute to the bugs. The small chunks of time I (and others) get to work on AMO are consumed by digging through these old bugs and trying to apply them to the current site.

In an effort to get this list to a manageable size the AMO team is aggressively triaging and closing bugs this week, hopefully ending the week with a realistic list of items we can hope to accomplish. With that list in hand we can prioritize the bugs, divide them into milestones, and begin to lobby for developer time.

Many of the bugs which are being closed are good ideas and we’d like to fix them, but we simply need to be realistic about what we can actually do with the resources we have. If you contribute patches to any of the bugs, please feel free to reopen them.

Thanks for being a part of AMO.

Jean-Marc ValinDaala: Painting Images For Fun (and Profit?)

As a contribution to Monty's Daala demo effort, I decided to demonstrate a technique I've recently been developing for Daala: image painting. The idea is to represent images as directions and 1-D patterns.

Read more!

Frédéric HarperHTML for the Mobile Web at All Things Open

allthingsopen

In about a month, I’ll speak at All Things Open in Raleigh, North Carolina. I’m quite excited: even though I’ve never attended this event, I’ve heard a lot of good things about it. Funnily enough, I don’t get on stage in the United States very often, so it’s a great opportunity to do so. What better topic for an event like this than HTML for the Mobile Web?

Firefox OS is a new operating system for mobile phones that brings web connectivity to those who cannot get top-of-the-line smartphones. By harnessing the principles that made the web great and giving developers direct access to the hardware through web standards, it is the step we need to make a truly open and affordable mobile web a reality. In this talk, Frédéric Harper from Mozilla will show how Firefox OS works, how to build apps for it, and how end users will benefit from this open alternative to other platforms.

It’s not too late to register for this event on October 22-23: they still have early bird tickets. See you there to share and discuss open source, open tech and the open web!


--
HTML for the Mobile Web at All Things Open is a post on Out of Comfort Zone from Frédéric Harper

Pete MooreWeekly review 2014-09-24

Highlights from this week

Until the end of the year, I will be working with Coop on automating as much of Build Duty as possible. Therefore, for the next 3 months I will be almost full time on Build Duty (9:00 - 16:30 local time), with a bit of time afterwards for other things.

Bugs I created this week:

Other bugs I updated this week:

Will Kahn-GreeneHair today, gone tomorrow

I've been cutting my own hair since like 1991 or so with two exceptions: a professional haircut before my wedding and one before my wife's sister's wedding.

Back in 1991, my parents bought me a set of Wahl clippers. Over the years, I broke two of the combs and a few of the extensions. Plus it has a crack down the side of the plastic body. At one point, I was cutting hair for a bunch of people on my dorm floor in college. It's seen a lot of use in 23 years.

However, a month ago, it started shorting the circuit. There's a loose wire or frayed something or something something. Between that and the crack down the side of the plastic body, I figured it's time to retire them and get a new set. The new set arrived today.

23 years is a long time. I have very few things that I've had for a long time. I bought my bicycle in 1992 or so. I have a clock radio I got in the mid-80s. I have a solar powered calculator from 1990 or so (TI-36). Everything else seems to fail within 5 years: blenders, toaster ovens, rice cookers, drills, computers, etc.

I'll miss those clippers. I hope the new ones last as long.

Byron Joneshappy bmo push day!

the following changes have been pushed to bugzilla.mozilla.org:

  • [1067381] Sorting by ID broken when changing multiple bugs
  • [1065398] Error when using checksetup.pl to create BMO database from scratch when Review extension enabled
  • [1064395] concatenate and slightly minify javascript files
  • [1068014] skip strptime() in datetime_from() if the date is in a standard format
  • [1054141] add the ability to filter on the user that made the change
  • [891199] clicking on needinfo flag/text should scroll you to the comment which set the flag
  • [1069504] Put My Dashboard in the drop down on the top-right
  • [1067410] Modification time wrong for deleted flags in review schema
  • [1067808] Review history page displays cancelled reviews as overdue
  • [1060728] Add perltidyrc that makes it easier to follow existing code standards to BMO repository
  • [1068328] needinfo flag shows up on attachment details page only when not doing “Edit as Comment”
  • [1037663] Make custom bug entry forms more discoverable
  • [1071926] Can’t unmentor a bug

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla

Yura Zenevichisfirefoxosaccessibleyet.com

isfirefoxosaccessibleyet.com

24 Sep 2014 - Toronto, ON

Eitan from Mozilla's accessibility team was nice enough to reserve the isfirefoxosaccessibleyet.com domain, where we can track Firefox OS accessibility status easily and in the open. I am really happy to announce that isfirefoxosaccessibleyet.com is now ready to be seen in public.

There are several bits of information you can find there about each app (including the overall system): an overall accessibility score, and the number of open, resolved and in-progress bugs. We also provide handy links to the actual bug lists in case you need to dig deeper.

For anyone who wants to help out:

  • If you are a user and want to file a bug, you should be able to find a link inside each app's section.
  • If you are a developer and want to hack on Gaia accessibility, each app section has up-to-date lists of high-priority and good first bugs.

Please feel free to check it out!

yzen

Erik VoldJetpack Pro Tip - Require JSM

Did you know that you can require("some.jsm")?

This has been possible for some time now, thanks to @jsantell.

With the change above you can replace code like this:

const { Cu } = require("chrome");
const { AddonManager } = Cu.import("resource://gre/modules/AddonManager.jsm", {});

With code like this:

const { AddonManager } = require("resource://gre/modules/AddonManager.jsm");

Also, you can include JSMs in your lib/ folder and use them both in chrome code and in your add-on scope. So you can add a file like lib/some.jsm, and use it in your CommonJS modules like so:

const { Something } = require("./some.jsm");
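
For reference, a minimal sketch of what such a lib/some.jsm could look like (the Something object here is just an illustration):

// lib/some.jsm - a plain JavaScript code module
var EXPORTED_SYMBOLS = ["Something"];

var Something = {
  greet: function(name) {
    return "Hello " + name;
  }
};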

Then, if you’d like to fetch the URI for lib/some.jsm in order to use it in a XUL document, or for some other reason, you can use one of the techniques that I described yesterday. With JPM you’d do this:

let someURI = require.resolve("./some.jsm");

With this you can import the JSM into XUL documents, for one thing; you could also provide an API to other add-ons this way, and do many other things which I’ll save for another day.

Mozilla Release Management TeamFirefox 33 beta5 to beta6

  • 21 changesets
  • 40 files changed
  • 570 insertions
  • 530 deletions

Extension   Occurrences
java        11
cpp         9
js          8
cc          6
html        2
h           2
ini         1
css         1

Module      Occurrences
mobile      11
ipc         7
toolkit     6
dom         3
docshell    2
content     2
widget      1
mozglue     1
modules     1
layout      1
image       1
gfx         1
browser     1

List of changesets:

Drew Willcoxon: Bug 1068852 - Highlight search suggestions on hover/mouseover on about:home/about:newtab. r=MattN, a=sylvestre - a34329afda87
Dave Townsend: Backing out Bug 893276 for causing Bug 1058840. a=lmandel - fa3e1469d0f1
Alex Bardas: Bug 1042521 - Drop some cases when backslashes from urlbar input were converted to slashes on windows. r=bz, a=sledru - 73202bfb3f03
dominique vincent: Bug 1062904 - Null pointer check when saving an image. r=mfinkle, a=lmandel - 4bfa8b78669c
Jim Chen: Bug 1066175 - Use other means to handle uncaught exception when Gecko is unavailable. r=snorp, a=sledru - e1d77019dda9
Jim Chen: Bug 1066175 - Only crash when crash reporting annotation succeeds. r=snorp, a=sledru - 0cc0faf4524b
Ethan Hugg: Bug 1049087 - Pre-populate the whitelist for screensharing in Fx33. r=jesup, a=sledru - 90713d332601
Markus Stange: Bug 1066934 - Don't allow the snapped scrollbar thumb to extend past the scrollbar bounds. r=roc, a=sledru - f14c89b414b6
Simon Montagu: Bug 1068218 - Don't pass lone surrogates to GetDirectionFromChar. r=ehsan, a=abillings - 389dd23d771c
Chenxia Liu: Bug 1062257 - Handle HomeFragment deletions by panel/type instead of universally. r=margaret, a=sledru - ae87b325401d
Jim Chen: Bug 1067513 - Import updated base::LazyInstance from upstream. r=bsmedberg, a=sledru - d6aa05e710f2
Michael Comella: Bug 956858 - Make menu inaccessible during editing mode. r=wesj, a=sledru - 91f4e2aed979
JW Wang: Bug 1067858 - Apply |AutoNoJSAPI| before calling mAudioChannelAgent->SetVisibilityState in order not to hit nsContentUtils::IsCallerChrome() in HTMLMediaElement::CanPlayChanged(). r=bz, a=sledru - de5e77b26504
David Major: Bug 1046382 - Blocklist dtwxsvc.dll. r=bsmedberg, a=sledru - 68fdd69ee9bb
Milan Sreckovic: Bug 1069582 - Check the signed value for < 0 instead. r=sfowler, a=sledru - f8eec8fe1b2b
Tim Taubert: Bug 1054099 - Remove use of gradients in new tab page. r=dao, a=lmandel - b73f15e656a1
Jan-Ivar Bruaroey: Bug 1070076 - Fix createOffer options arg legacy-syntax-warning to not trip on absent arg. r=jesup, a=sledru - 02eaea5dce76
Bobby Holley: Bug 1051224 - Find a clever way to work around COW restrictions on beta. r=gabor,ochameau - 6cdc428e3e62
Alexandre Poirot: Bug 1051224 - Test console's cd() against sandboxed iframes. r=msucan a=sylvestre - 0ae1af037f6e
Matt Woodrow: Bug 1053934 - Don't use the cairo context to create similar surfaces since it might be in an error state. r=jrmuizel, a=sledru - 9337f5dcf107
Randell Jesup: Bug 1062876 - Refactor window iteration code for MediaManager. r=jib, a=abillings - d508b53c3dee

Ben HearsumStop stripping (OS X builds), it leaves you vulnerable

While investigating some strange update requests on our new update server, I discovered that we have thousands of update requests from Beta users on OS X that aren’t getting an update, but should. After some digging I realized that most, if not all of these are coming from users who have installed one of our official Beta builds and subsequently stripped out the architecture they do not need from it. In turn, this causes our builds to report in such a way that we don’t know how to serve updates for them.

We’ll look at ways of addressing this, but the bottom line is that if you want to be secure: Stop stripping Firefox binaries!

Lucas RochaNew Features in Picasso

I’ve always been a big fan of Picasso, the Android image loading library by the Square folks. It provides some powerful features with a rather simple API.

Recently, I started working on a set of new features for Picasso that will make it even more awesome: request handlers, request management, and request priorities. These features have all been merged to the main repo now. Let me give you a quick overview of what they enable you to do.

Request Handlers

Picasso supports a wide variety of image sources, from simple resources to content providers, network, and more. Sometimes though, you need to load images in unconventional ways that are not supported by default in Picasso.

Wouldn’t it be nice if you could easily integrate your custom image loading logic with Picasso? That’s what the new request handlers are about. All you need to do is subclass RequestHandler and implement a couple of methods. For example:

public class PonyRequestHandler extends RequestHandler {
    private static final String PONY_SCHEME = "pony";

    @Override public boolean canHandleRequest(Request data) {
        return PONY_SCHEME.equals(data.uri.getScheme());
    }

    @Override public Result load(Request data) {
         return new Result(somePonyBitmap, MEMORY);
    }
}

Then you register your request handler when instantiating Picasso:

Picasso picasso = new Picasso.Builder(context)
    .addRequestHandler(new PonyRequestHandler())
    .build();

Voilà! Now Picasso can handle pony URIs:

picasso.load("pony://somePonyName")
       .into(someImageView);

This pull request also involved rewriting all built-in bitmap loaders on top of the new API. This means you can also override the built-in request handlers if you need to.

Request Management

Even though Picasso handles view recycling, it does so in an inefficient way. For instance, if you do a fling gesture on a ListView, Picasso will still keep triggering and canceling requests blindly because there was no way to make it pause/resume requests according to the user interaction. Not anymore!

The new request management APIs allow you to tag associated requests that should be managed together. You can then pause, resume, or cancel requests associated with specific tags. The first thing you have to do is tag your requests as follows:

Picasso.with(context)
       .load("http://example.com/image.jpg")
       .tag(someTag)
       .into(someImageView);

Then you can pause and resume requests with this tag based on, say, the scroll state of a ListView. For example, Picasso’s sample app now has the following scroll listener:

public class SampleScrollListener implements AbsListView.OnScrollListener {
    ...
    @Override
    public void onScrollStateChanged(AbsListView view, int scrollState) {
        Picasso picasso = Picasso.with(context);
        if (scrollState == SCROLL_STATE_IDLE ||
            scrollState == SCROLL_STATE_TOUCH_SCROLL) {
            picasso.resumeTag(someTag);
        } else {
            picasso.pauseTag(someTag);
        }
    }
    ...
}

These APIs give you much finer control over your image requests. The scroll listener is just the canonical use case.

Request Priorities

It’s very common for images in your Android UI to have different priorities. For instance, you may want to give higher priority to the big hero image in your activity in relation to other secondary images in the same screen.

Up until now, there was no way to give Picasso a hint about the relative priorities between images. The new priority API allows you to tell Picasso about the intended order of your image requests. You can just do:

Picasso.with(context)
       .load("http://example.com/image.jpg")
       .priority(HIGH)
       .into(someImageView);

These priorities don’t guarantee a specific order; they just tilt the balance towards higher-priority requests.


That’s all for now. Big thanks to Jake Wharton and Dimitris Koutsogiorgas for the prompt code and API reviews!

You can try these new APIs now by fetching the latest Picasso code on GitHub. These features will probably be available in the 2.4 release. Enjoy!

Michael KaplyFileblock

One of the things that I get asked the most is how to prevent a user from accessing the local file system from within Firefox. This generally means preventing file:// URLs from working, as well as removing the most common methods of opening files from the Firefox UI (the open file button, menuitem and shortcut). Because I consider this outside of the scope of the CCK2, I wrote an extension to do this and gave it out to anyone that asked. Unfortunately over time it started to have a serious case of feature creep.

Going forward, I've decided to go back to basics and just produce a simple local file blocking extension. The only features that it supports are whitelisting by directory and whitelisting by file extension. I've made that available here. There is a README that gives full information on how to use it.

For the other functionality that used to be a part of FileBlock, I'm going to produce a specific extension for each feature. They will probably be AboutBlock (for blocking specific about pages), ChromeBlock (for preventing the loading of chrome files directly into the browser) and SiteBlock (for doing simple whitelisting).

Hopefully this should cover the most common cases. Let me know if you think there is a case I missed.

Gervase MarkhamLicensing Policy Change: Tests are Now Public Domain

I’ve updated the Mozilla Foundation License Policy to state that:

PD Test Code is Test Code which is Mozilla Code, which does not carry an explicit license header, and which was either committed to the Mozilla repository on or after 10th September 2014, or was committed before that date but all contributors up to that date were Mozilla employees, contractors or interns. PD Test Code is made available under the Creative Commons Public Domain Dedication. Test Code which has not been demonstrated to be PD Test Code should be considered to be under the MPL 2.

So in other words, new tests are now CC0 (public domain) by default, and some or many old tests can be relicensed as well. (We don’t intend to do explicit relicensing of them ourselves, but people have the ability to do so in their copies if they do the necessary research.) This should help us share our tests with external standards bodies.

This was bug 788511.

Gervase MarkhamSurvey on FLOSS Contribution Policies

In the “dull but important” category: my friend Allison Randal is doing a survey on people’s attitudes to contribution policies (committer’s agreements, copyright assignment, DCO etc.) in free/libre/open source software projects. I’m rather interested in what she comes up with. So if you have a few minutes (it should take less than 5 – I just did it) to fill in her survey about what you think about such things, she and I would be most grateful:

http://survey.lohutok.net is the link. You want the “FLOSS Developer Contribution Policy Survey” – I’ve done the other one on Mozilla’s behalf.

Incidentally, this survey is notable as I believe it’s the first online multiple-choice survey I’ve ever taken where I didn’t think “my answer doesn’t fit into your narrow categories” about at least one of the questions. So it’s definitely well-designed.

Erik VoldJetpack Pro Tip - Reusing files for content and chrome

I’ve seen this issue come up a lot: an add-on developer wants to reuse a library file, like underscore, in both their add-on code and their content scripts.

Typically the SDK documentation will say to put all of the content scripts in your add-on’s data/ folder, and that is the best thing to do if the script is only going to be used as a content script. But if you want to use the file in your add-on scope too, then the file should not be in the data/ folder; it should be in your lib/ folder instead.

Once this is done, the add-on scope can easily require it, so all that is left is to figure out a URI for the file in your lib/ folder which can be used for content scripts. There are two ways to do this, one of which only works with JPM.

JPM

The JPM solution is very simple (thanks to Jordan Santell for implementing this):

let underscoreURI = require.resolve("./lib/underscore");

This works if the file is at lib/underscore.js, but it should only be there if you copied and pasted it there, which pros don’t do. Pros use NPM, because they know underscore is there, so they just make it a dependency by adding this to package.json:

{
  // ..
  "dependencies": {
    "underscore": "1.6.0"
  }
  //..
}

Then, simply use:

let underscoreURI = require.resolve("underscore");

CFX

With CFX you will have to copy and paste the file into your lib/ folder; then you can get a URL for the file by doing this:

let underscoreURI = module.uri.replace("main.js", "underscore.js");

Assuming that the code above is evaluated in lib/main.js.

An issue with the above code is that you have to know the name of the file in which it is evaluated, so another approach could be:

let underscoreURI = module.uri.replace(/\/[^\/]*$/, "/underscore.js");
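
Either way, once you have a URI for the library you can pass it along with your own content scripts. Here is a hypothetical sketch using page-mod (the include pattern and the content.js file are made up for the example):

const { PageMod } = require("sdk/page-mod");
const self = require("sdk/self");

// underscoreURI is assumed to have been resolved with one of the
// techniques above; content.js lives in the add-on's data/ folder.
PageMod({
  include: "*",
  contentScriptFile: [underscoreURI, self.data.url("content.js")]
});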

Frédéric HarperMon retour à HTML5mtl

HTML5mtl

Almost three years ago already, Mathieu Chartier, Benoît Piette and I met in a restaurant to talk about HTML5 and Montreal. Out of that conversation was born HTML5mtl, the group we founded to promote what is new in HTML, as well as good practices. A little over a year ago, I had to step down from my organizer role in the group for professional reasons, leaving Mathieu and Benoît at the helm. My passion for the web has not changed and I still work in the field, so it is with pleasure that I have been back as the group’s sole organizer since September: Benoît left the organizing team in the meantime, and Mathieu is also leaving HTML5mtl to work on a new project that has not yet been announced.

With that in mind, I have already made a few changes which, based on my experience with various user groups, events and my own work, will hopefully prove beneficial to the health of the group. The first is a return to basics: one meetup per month around web technologies (HTML, CSS and JavaScript), without necessarily focusing only on what is new in HTML5. There is no shortage of topics we can cover, and the demand from developers in the Montreal area is there (almost 1,000 members): the first meetup of the 2014-2015 season generated more than 180 RSVPs for the presentation Pierre-Paul Lefebvre gave us on AngularJS, in a packed room. I expect no less for the next evening, on Node.js with Rami Sayar, which has just been announced. Next, a visual identity was created to set us apart from the generic HTML5 logo we had been using: a special thank you to Matthew Potter, who created it; you can see it at the top of this post. Two small changes were also made to how the evening runs: it now starts and ends earlier. That gives you more time to get home to see your kids or relax before the next day, while still leaving you time to finish your workday and grab a bite before coming to the meetup. On top of that, starting in October and for every presentation that follows, the intended audience will be stated up front: no more surprises with content that is too advanced for you, and no more dozing through an introductory talk that does not interest you. One of the last things I have done is make the group bilingual: I believe it has always been the case, since we have had presentations in English in the past, but it had never been put forward. We are in Montreal, so let’s share about the web while being just as open: live the “Montréal style”, and welcome to English speakers (I still have a few translations to do on the meetup page).

Next, I still need to find a solution to the problem of people who RSVP and then do not show up: on average, 20 to 40% of the people who say they will attend do not. As a result, people on the waiting list, who could and would have attended, miss out on a great evening. People need to take responsibility, but I will try, through trial and error, to minimize the impact of this well-known plague of free events. I will of course get back in touch with the sponsors I recruited in the past, and I am opening the door to anyone who would like to support the group while getting uncommon visibility. Do not hesitate to send me an email (contact link above) and I will send you the sponsorship document. I am also always looking for new speakers, so whether you are a beginner (I can help you) or a master of the art of public speaking, please also send me an email if you would like to share your knowledge with the members!

I am very happy to be back with this wonderful group that is HTML5mtl.


--
Mon retour à HTML5mtl is a post on Out of Comfort Zone from Frédéric Harper

Julien VehentBatch inserts in Go using channels, select and japanese gardening

I was recently looking into the DB layer of MIG, searching for ways to reduce the processing time of an action running on several thousand agents. One area of the code that was blatantly inefficient concerned database insertions.

When MIG's scheduler receives a new action, it pulls a list of eligible agents and creates one command per agent. One action running on 2,000 agents will create 2,000 commands in the "commands" table of the database. The scheduler needs to generate each command, insert them into postgres and send them to their respective agents.

MIG uses separate goroutines for the processing of incoming actions, and the storage and sending of commands. The challenge was to avoid individually inserting each command in the database, but instead group all inserts together into one big operation.

Go provides a very elegant way to solve this very problem.

At a high level, MIG Scheduler works like this:

  1. a new action file in json format is loaded from the local spool and passed into the NewAction channel
  2. a goroutine picks up the action from the NewAction channel, validates it, finds a list of target agents and creates a command for each agent, which is passed to a CommandReady channel
  3. a goroutine listens on the CommandReady channel, picks up incoming commands, inserts them into the database and sends them to the agents (plus a few extra things)

The CommandReady goroutine is where the optimization happens. Instead of processing each command as it comes in, the goroutine uses a select statement to either pick up a command, or time out after one second of inactivity.

// Goroutine that loads and sends commands dropped in ready state
// it uses a select and a timeout to load a batch of commands instead of
// sending them one by one
go func() {
    ctx.OpID = mig.GenID()
    readyCmd := make(map[float64]mig.Command)
    ctr := 0
    for {
        select {
        case cmd := <-ctx.Channels.CommandReady:
            ctr++
            readyCmd[cmd.ID] = cmd
        case <-time.After(1 * time.Second):
            if ctr > 0 {
                var cmds []mig.Command
                for id, cmd := range readyCmd {
                    cmds = append(cmds, cmd)
                    delete(readyCmd, id)
                }
                err := sendCommands(cmds, ctx)
                if err != nil {
                    ctx.Channels.Log <- mig.Log{OpID: ctx.OpID, Desc: fmt.Sprintf("%v", err)}.Err()
                }
            }
            // reinit
            ctx.OpID = mig.GenID()
            ctr = 0
        }
    }
}()

As long as messages are incoming, the select statement will take the first case each time a message is received, and store the command in the readyCmd map.

When messages stop coming for one second, the select statement will fall into its second case: time.After(1 * time.Second).

In the second case, the readyCmd map is emptied and all commands are sent as one operation. Later in the code, a big INSERT statement that includes all commands is executed against the postgres database.

In essence, this algorithm is very similar to a Japanese Shishi-odoshi.

shishi-odoshi.gif

The current logic is not yet optimal. It does not set a maximum batch size, mostly because it does not currently need to. In my production environment, the scheduler manages about 1,500 agents, and that's not enough to worry about limiting the batch size.

Eric ShepherdThe Sheppy Report: September 19, 2014

I’ve been working on getting a first usable version of my new server-side sample server project (which remains unnamed as yet — let me know if you have ideas) up and running. The goals of this project are to allow MDN to host samples that require a server-side component (for example, demonstrations of XMLHttpRequest or WebSockets), and to provide a place to host samples that require the ability to do things that we don’t allow in an <iframe> on MDN itself. This work is going really well and I think I’ll have something to show off in the next few days.

What I did this week

  • Caught up on bugmail and other messages that came in while I was out after my hospital stay.
  • Played with JSMESS a bit to see extreme uses of some of our new technologies in action.
  • Did some copy-editing work.
  • Wrote up a document for my own reference about the manifest format and architecture for the sample server.
  • Got most of the code for processing the manifests for the sample server modules and running their startup scripts written. It doesn’t quite work yet, but I’m close.
  • Filed a bug about implementing a drawing tool within MDN for creating diagrams and the like in-site. Pointed to draw.io as an example of a possible way to do it. Also created a developer project page for this proposal.
  • Exchanged email with Piotr about the editor revamp. It’s making good progress.

Wrap up

I’m really, really excited about the sample server work. With this up and running (hopefully soon), we’ll be able to create examples for technologies we were never able to properly demonstrate in the past. It’s been a long time coming. It’s also been a fun, fun project!

 

Curtis KoenigThe Curtisk report: 2014-09-21

People wanna know what I do, so I am going to give this a shot: each Monday I will make a post about the stuff I did in the previous week.

Idea shamelessly stolen from Eric Shepherd

What I did this week

  • MWoS: SeaSponge Project Proposal (Review)
  • Crusty Bugs data digging
  • Mozillians.org security review (move along)
  • Firefox OS Sec discussion
  • sec triage process massaging
  • Firefox OS Security coordination
  • Vendor site review
    • testing plan for vendor site testing
    • testing coordination with team and vendor
  • CBT Training survey
  • security scan of [redacted]

Meetings attended this week

Mon

  • Weekly Project Meeting
  • Web Bounty Triage

Tue

  • SecAutomation
  • Cloud Services Security Team

Wed

  • MWoS team Project meeting
  • Vendor testing call
  • Web Bug Triage

Thu

  • Security Open Mic
  • Grow Mozilla / Community Building
  • Computer Science Teachers Association (guest speaker)

Christian HeilmannNotes on my closing keynote of From the Front 2014

These are some notes about my closing keynote at From the Front in Bologna, Italy last week. The overall theme of the event was “Temple of the DOM” thus I kept it Indiana Jones themed (one could say shoehorned, but I wasn’t alone with this).

from the front 2014 speakers

The slides are available on Slideshare.

In Indiana Jones and the Temple of Doom the Sankara Stones are very powerful stones that can bring prosperity or destroy people, depending on how they are used. When you bring the stones together they light up and in general all is very mystic and amazing. It gives the movie an adventure angle that cannot be explained, and allows us to suspend our disbelief and see Indy as being capable of more than a normal human being.

A tangent: Blowing people’s minds is pretty easy. All you need to do is take a known concept and then make an assumption from it. For example, when you see Luigi from Super Mario Brothers and immediately recognise him, there is quite a large chance you have an older sibling. You were always the one who had to wait till your sibling failed in the game so it was your turn to play with “green Mario”. Also, if Luigi and Mario are the Mario brothers then Mario’s name is Mario Mario. Ponder this.

The holy trinity of web development

On the web we also have magical stones that we can put together and create good or evil. These are the standardised technologies of the web: HTML, CSS and JavaScript. These are available in every browser, understood by them without any extra compilation step, very well documented and easy to learn (but harder to master).

Back in 1999, Jeffrey Zeldman taught us all not to write tag-soup any longer and use the technologies of the web to build intelligent solutions that use them to their strengths. These are commonly referred to as the separation of concerns:

  • Structure (HTML and added extra-value semantics like Microformats)
  • Presentation (CSS, Images)
  • Behaviour (JavaScript)

Back then this was a very necessary stake in the ground, explaining that web development is not a random WYSIWYG result but something with a lot of planning and organisation behind it. The separation of concerns played all the different technologies to their strengths and also meant that one or two can fail and nothing will go wrong.

This also paved the way for the progressive enhancement idea that all you really need is a proper HTML document and the rest gets added when and if needed or – in the case of JavaScript – once it has been tested to be available and applicable.

The problems started when people with different agendas skewed the concept of the separation of concerns:

  • HTML and semantic markup enthusiasts advocated far too loudly for very clean markup, validation and adding things like Microformats. For engineers just trying to get something to show up in a browser this has always been confusing, as the tangible benefits are, well, not tangible. Browsers are very forgiving and will fix HTML for you, and when there is no interface in browsers that surfaces the data in Microformats, why do it? Of course, I disagree and have stated very often that semantic, clean markup is the good grammar of the web – you don’t need it, but it does make you much easier to understand and shows that you learned what you are doing. But that doesn’t really matter. Fact is that we continuously try to make people understand something we hold dear without giving them tangible benefits.
  • JavaScript enthusiasts, on the other hand, create far too much with JavaScript. This is a matter of control. You know JavaScript, you are happy seeing parts of an app or a page as objects and you want to instantiate them, inherit from them and re-use them. You don’t want to write much code but feel that generating it is the most clever way of using technology. Many JS enthusiasts also keep citing that browser differences are a real issue and that in JS they have the chance to test and fix problems browsers have. The fallacy here, of course, is that by doing that they also made the current and future browser issues their own.
  • CSS enthusiasts started to shoot against JavaScript as a tool when CSS became more powerful. Are animations and transitions behaviour or presentation? Should it be done in CSS or in JS where there is much more granular control? What about generated content? Where does this fall into? We can create whole drawings from one DIV element, but should we?

All of this, together with lots and lots of libraries promising to solve all kinds of cross-browser issues, led to the massively obese web we see today. An average web site size of almost 2MB would have blown our minds in the past, but these days it seems the right thing to do if you want to be professional and use the tools professionals use. Starting with a vanilla HTML file feels like a hack – using a build script to start from a boilerplate seems to be the intelligent, full stack development thing to do.

Best practice reminders, repeated

This is nothing new, of course.

Back in 2004, I wrote a self training course on Unobtrusive JavaScript trying to make people understand the need for separation of behaviour and look and feel. In 2005 I questioned the validity of three layers of separation as I worked on CMS and framework driven web products where I did not have full control over the markup but had to deal with the result of .NET 1.0 renderers.

Web technologies have always been a struggle for people to grasp and understand. JavaScript is very powerful whilst being a very loosely architected language compared to C or Java. The ability to use inline styling and scripting always tempted people to write everything in one document rather than separating it out into several which allows for caching and re-use. That way we always created bloated, hard to maintain documents or over-used scripts and style sheets we don’t control and understand.

It seems the epic struggle about what technology to use for what is far from over and we still argue until we are blue in the face if an animation should be done in CSS or in JavaScript and whether static HTML or deferred loading and creation of markup using template engines is the fastest way to go.

So what can we do to stop this endless debate?

The web has moved on a lot since Zeldman laid down the law and I think it is time to move on with it. We have to understand that not everything is readily enhanceable. We also have standard definitions that just seem odd and could have very much been better with our input. But we, the people who know and love the web, were too busy fighting smaller fights and complaining about things we should have taken for granted a while ago:

  • There will always be marketing materials or commercial training programs that get everything wrong we stand for. Mentioning them or trying to debunk them will just get more people to look at them. Yes, I do consider W3Schools part of this. We make these obsolete and unnecessary by creating better resources, not by telling people about their dangers.
  • Browsers will always get things wrong and no, there will not be an amazing future where all browsers are ever-green and users upgrade all the time.
  • Materials by standards bodies like this “Standards for Web Applications on Mobile: current state and roadmap” will always be verbose and seem academic in their wording. That’s what a standard is. There can not be wiggle room that’s why it sounds far more complex than we think it is.
  • There will always be people who use a certain technology for things we consider inappropriate. A great example I saw lately was a Mandelbrot fractal renderer creating a span for each pixel written in SASS and needing 5 minutes to compile.

A fault tolerant web? Think again

One of the great things about the web of old was that it was fault tolerant: if something broke, you could provide a fallback or the browser would ignore it. There were no broken interfaces.

This changed when multimedia became a larger part of HTML5. Of course, you can use a fallback image for a CANVAS element (and you should as these get shown as thumbnails on Facebook for example) but it isn’t the same thing as you don’t add a CANVAS for the fun of it but as an interactive part of the page.

The plain fallback case does not quite cut it any longer.

Take a simple example of an image in the page:

<img src="meh.jpg" alt="cute kitten photo">

This is cool. If the image can not be loaded or rendered, the browser shows the alternative text provided in the alt attribute (no, it is not a tag). In most browsers these days, this is just a text display. You even have full control in JavaScript knowing if the image wasn’t loaded and you could provide a different fallback:

var img = document.querySelector('img');
img.addEventListener('error', function(ev) {
  if (this.naturalWidth === 0 && 
      this.naturalHeight === 0) {
    console.log('Image ' + this.src + ' not loaded');
  }
}, false);

With video, it is slightly different. Take the following example:

<video controls>
  <source src="dynamicsearch.mp4" type="video/mp4">
  <a href="dynamicsearch.mp4">
    <img src="dynamicsearch.jpg" 
         alt="Dynamic app search in Firefox OS">
  </a>
  <p>Click image to play a video demo of 
     dynamic app search</p>
</video>

If the browser is not capable of supporting HTML5 video, we get a fallback image (again, great for indexing by Facebook and others). However, these browsers are not that likely to be in use any longer. The more interesting question is what happens when the browser can not play the video because the video codec is not supported? What end users get now is a grey box with the grace of a Java Applet that failed to load.

How do you find out that the video playback failed? You’d expect an error handler on the video would do it, right? Well, not according to the specs, which ask for an error handler on the last source element in the video element. That means that if you want the alternative content in the video element to show up when the video cannot be played, you need the following code:

var v = document.querySelector('video'),
    sources = v.querySelectorAll('source'),
    lastsource = sources[sources.length-1];
lastsource.addEventListener('error', function(ev) {
  var d = document.createElement('div');
  d.innerHTML = v.innerHTML;
  v.parentNode.replaceChild(d, v);
}, false);

Codec detection is incredibly flaky and hard, as it happens at the OS and hardware level and is not fully in the control of the browser. That’s probably also the reason why the canPlayType() method of a video element (which is meant to tell you if a video format is supported) returns “maybe”, “probably” or an empty string. A coy method, that one.
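
For instance, a quick way to see that coyness in action (the codec strings are just common examples):

var v = document.createElement('video');
// returns "probably", "maybe" or "" depending on the platform
v.canPlayType('video/mp4; codecs="avc1.42E01E"');
v.canPlayType('video/webm; codecs="vp8, vorbis"');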

It is the web, deal with it!

We could get very annoyed with this, or we can just deal with it. In my 18 years of web development I learned to take things like that in stride and I am actually happy about the quirky issues of the web. It makes it a constantly changing and interesting environment to be in.

I really think Mattias Petter Johansson of Spotify nailed it when he answered on Quora to someone why JavaScript is the only language in a browser:

Hating JavaScript is like hating the Internet.
The Internet is a cobweb of different technologies cobbled together with duct tape, string and chewing gum. It’s not elegantly designed in any way, because it’s more of a growing organism than it is a machine constructed with intent.

This is also why we should stop trying to make people love the web no matter what and force our ideas down their throats.

Longevity? Meh!

One of the main things we keep harping on about is the lovely longevity of the web. Whether it is Microsoft’s first web page still working in browsers now after 20 years or the web being the only platform with backwards compatibility and forward enhancement – we love to point out that we are in for the long game.

Sadly, this argument means nothing to developers who currently work in the mobile app space where being first out of the door is the most important part and people know that two months down the line nobody is going to be excited about your game any more. This is not sustainable, and reminds me of other fast-moving technologies that came and went. So let’s not waste our time trying to convince people who already subscribed to an idea of creating consumable software with a very short shelf-life.

I put it this way:

If you enable people world-wide to get a good experience and solve a problem they have, I like it. The technology you use is not the important part. How much you lock them in is. Don’t lock people in.

Let’s analyse our own behaviour

A lot of the bloat and repetitiveness of the web seems to me to stem from three mistakes we make:

  • we optimise prematurely
  • we tend to strive for generic solutions for very specific problems.
  • we build stop-gap solutions to use upcoming technology before it is ready and become dependent on those

A glimpse at the state of the componentised web seems to validate this. Web Components are amazingly necessary for the future of apps on the web platform, but they aren’t ready yet. Many of these frameworks give me great solutions right now and the effort I have to put in to learn them will make it hard for me to ever switch away from them. We’ve been there before: just try to find a junior JavaScript developer that knows the DOM instead of using jQuery for everything.

The cool new thing now are static HTML pages as they run fast, don’t take many resources and are very portable. Except that we already have 298 different generators to choose from if we want to create them. Or, we could write static HTML if all we have is a few sites. But where’s the fun in that?

Fredrik Noren had a great article about this lately called On Generalisation and put it quite succinctly:

Generalization is, in fact, prediction. We look at some things we have and we predict that any current and following entities in that group will look and behave sufficiently similar in the future to what we have now. We predict that we can write something that will cater to all, or most, of its future needs. And that makes perfect sense, if you just disregard one simple fact of life: humans are awful at predicting the future!

So let’s stop trying to build for an assumed problematic future that probably will never come and instead be thankful for what we have right now.

Such amazing times we live in

If you play with the web these days and you leave your “everything is broken, I must fix it!” hat off, it is amazing how much fun you can have. The other day I wrote Makethumbnails.com – a quick app that allows you to drag and drop images into your browser and get a zip of thumbnails back. All without a server in between, all working offline and written on a plane without a web connection using only the developer tools built into the browser these days.

We have an amazing amount of new events, sensors and data to play with. For example, reading out the ambient light around a laptop is a simple event handler:

window.addEventListener('devicelight', function(e) {
  var lv = e.value;
  // lv is the light in lux
});

You can use this to switch from a dark on light to a light on dark display. Or you could detect a 0 and know that the end user is currently covering their camera with their hands and provide a very simple hand gesture interface that way. This sensor is always on and you don’t need to have the camera enabled. How cool is that?
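
A minimal sketch of both ideas (the 50 lux threshold and the class name are arbitrary choices):

window.addEventListener('devicelight', function(e) {
  // switch to a light-on-dark theme in dark surroundings
  document.body.classList.toggle('dark-theme', e.value < 50);
  // a reading of 0 lux most likely means a hand is covering the sensor
  if (e.value === 0) {
    console.log('hand detected over the sensor');
  }
});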

Are there other sensors or features in devices you’d like to have? Please ask on the feedback form about Open Web Apps and you can be part of the next iteration of web interaction.

Developer tools across browsers moved on beyond the view-source functionality and all of them now offer timelines, step-by-step debugging, network information and even device or screen emulation and visual editors for colours and animations. Most also offer some sort of in-built editor and remote debugging of devices. If you miss something there, here is another channel to tell the makers about that.

It is a big, fragmented world out there

The next big boom of the web is not in the Western world and on laptops and desktops that are connected with massively fast lines. We live in a mobile world and the predictability of what device our end users will have is gone. Surveys in Android usage showed 18,796 different devices in use already and both Mozilla’s and Google’s reach into emerging markets with under $100 devices means the light-weight web is going to be a massive thing for all of us. This is why we need to re-think our ways.

First of all, offline first should be our mantra. There is no steady connection we can rely on. Alex Feyerke has a great talk about this.

Secondly, we need to ensure that our solutions run smoothly on very low end devices. For this, there are a few tricks and developer tools give us great insight into where we waste memory and framerate. Angelina Fabbro has a great talk about that.

In general, the web is and remains an amazingly useful resource, now more than ever. Tools like Github, JSFiddle, JSBin, CodePen and many others allow us to build things together and be in constant communication. Together.js (built into JSFiddle as the ‘collaboration’ button) allows us to code together with a text or voice chat and see each other’s cursors. This is an incredible opportunity to make coding more human and help one another whilst we develop, instead of telling each other how we should develop.

Let’s use the web to work on things together. Don’t wait to build the perfect solution. Share it early, take on advice and pull requests and together we can build much better products.

Soledad PenadesJSConf.eu 2014

I accidentally ended up attending JSConf.eu 2014–it wasn’t my initial intent, but someone from Mozilla who was going to be at the Hacker Lounge couldn’t make it for personal reasons, and he asked me to join in, so I did!

I hung around the lounge for a while every day, but at times it was so full of people that I just went downstairs and talked hacks & business while having coffee, or simply attended some of the talks instead. The following are notes from the talks I attended and from random conversations on the Hallway and Hacker Lounge tracks ;)

Parallel JavaScript by Jaswanth Sreeram

After having heard about it during the “Future JS” session at the Extensible Web Summit, this one seemed most exciting to me! Data crunching in JS via “invisible” translation to OpenCL? Yay! Power save of 8x, speed increases of 6x thanks to the power of the GPU! Also it is already available in Firefox Nightly.

I got so excited that I started building a test on the same day to try and trigger the parallel code path, but the performance is 2x slower than traditional sequential code. I spoke to Jaswanth in one of the breaks and explained my issue to him; he said that the code needs to be complex enough for the “paralleliser” to kick in, and there was a certain amount of work involved in determining this, so that might be the reason why performance is so bad.

Still, existing PJS examples seem a bit too contrived to explain/demonstrate to people in a nutshell why it is so cool, so I would be interested in getting to the right function that triggers parallelism and is not overly complex—things with matrices just go over the heads of people who are not used to this kind of data manipulation, and the rest of the example just “does not compute” in their mind.

What Harry Potter can teach us about JavaScript by Sara Robinson

I went to this one because the title seemed intriguing. Basically, if people like something they will talk about it on the Internet, and also: regionalisms and variations to better target the market are important.

This rang a tiny little bell for me as it sounded a bit like the work we’re doing at Moz by working closely with communities where Firefox OS is launching–each launch is different as the features are specific to each market.

Bookwise, I am not overly convinced about adaptations that try to adapt the work and convert it so that it conveys something that is not initially being conveyed in the work itself. E.g. in France there was a strong push for highlighting the teaching/learning/school concept and so the book title was translated into something like “Harry Potter and the Wizard’s SCHOOL”. I’m totally OK with good translations that have to change some character name in order for it to still sound funny, but trying to change the meaning of the book is off limits for me–I think the metaphor with JS didn’t quite work here.

We’re struggling to keep up (a brief story of browser security features) by Frederik Braun

I was expecting more scare and more in-depth tech from this one! Frederik, step it up! (Disclaimer: Frederik works at Mozilla so we’re colleagues and hence the friendly complaint).

Keeping secrets with JavaScript: an introduction to the WebCrypto API by Tim Taubert

I also went to this talk by another fellow German Mozillian (seems like the Berlin office has a thing for security and privacy… which makes total sense). It was a good introduction to how all the pieces fit together. After the talk there were some discussions in the “hallway track” about whether every developer should know cryptography or not, and to what extent. I have mixed feelings: it is easy enough to mess it up and render it useless (while still thinking you’re safe, even if you’re not), so maybe we need better libraries/tooling that make it hard to mess up. Or maybe we need easier crypto. I definitely think anyone handling data should know about cryptography. If you’re a purely front-end person and only doing things such as CSS… well, maybe you can go a long way without knowing your SHAs from your MD5s…

Monster Audio-Visual demos in a TCP packet by Matthieu Henry ‘p01′

I went to this one expecting a whole bunch of demoscene tricks, but I ended up coming back from the forest of dropped jaws and “OMG it’s just 4K” utterances, which was fun anyway! It’s always entertaining to see people’s minds being blown, although I expected a bit of new material from p01 too.

Usefulness of Uselessness by Brad Bouse

I saw Brad at CascadiaJS in Vancouver past year and he was entertaining, but maybe I wasn’t in the right mood. This talk, in contrast, was way more focused on a simple message: do something useless. Do more useless stuff. Useless stuff is actually useful.

So now I’m giving myself free rein to do more useless stuff. Not that I wasn’t already, but now it is CONSCIOUSLY USELESS and just because.

The meaning of words by Stephan Seidt

Speaking about usefulness… I don’t know if this is because it was the last talk and I was developing a massive headache, but I found it a bit of a gimmick. Maybe if I watched that in other time and moment I’d find it more impressive, but it didn’t quite work for me. Other people clapped to it, so I guess it did work for them.

Javascript for Everybody by Marcy Sutton

This one was a really moving talk on how we should not break accessibility with JavaScript. It’s not just about ARIA roles in mark-up, it’s also about the things we create live, and about patching our frameworks of choice so they help less experienced developers be accessible by default, thus improving the ecosystem.

After the talk I was left with this persistent sensation that I wasn’t doing the right thing in my code, which prompted me to review it and file bugs. Uuuurgh (and yaaaay, thanks for calling us out).

This is bigger than us: Building a future for Open Source by Lena Reinhard

Lena made a very compelling talk about why you should analyse your project and get worried if it is not diverse, because it won’t survive for long, as monocultures are fragile and prone to disappear.

Communities start diverse by default, but each incident makes the community less diverse, as people abstain from participating ever again. Do you care about your community? then you need to ensure it keeps being diverse.

A note about diversity not only being “having women”, but about having people who are representative of your population. Also it is not only about having a representation of the developers that use your code but a representation of the USERS that use the code the developers use–and this is way more important than we usually deem it to be, as the ratio tends to be 1 developer per 400 users.

Yet another demonstration of team Hoodie‘s high human standards :-)

(I’m also very excited that I got to meet and speak to a few of them during the conf, but sadly not the doge in their twitter account avatar–although it would have been weird to have him speak, but who knows what offline can enable?)

Server-less applications powered by Web Components by Sébastien Cevey

Sébastien had been at the Web Components session at the Extensible Web Summit, but he didn’t share as much as he did during this talk.

First he asked the audience how many people had heard about Web Components before; I’d estimate about 40% of people raised their hands. Then he asked them how many had actually used web components and I’d say the number of raised hands was just 5% of the audience.

Basically they had a series of status dashboards rendered with “horrible PHP” and other horrors of legacy code, and they didn’t want to have this mashup of front-end/back-end code because it was unmaintainable. So they set to rewrite the whole thing with Web Components, and so they did.

In the process they came up with a bit of metalanguage to connect the whole thing together, and some metamagic too, and finally they managed to have the whole thing running on the front-end. With just one big caveat: you have to be logged in to The Guardian’s VPN to access the dashboards, because the auth seemed to be taking place client-side, and heh heh.

I was looking at the diagram of the web components they were using and the whole message passing chart and maybe it was because it was a bunch of information all of a sudden but I had the same experience of metamagic overdose I get with these all-declarative approaches to web components: some elements send messages to other elements by detecting them in the same document, like for example the modules that needed config would try to detect a config element in the tree, and use it. Maybe I didn’t understand it correctly, but this seemed akin to a global variable :-/

I still need to get my thoughts in order re: the all declarative web components pattern, but I think that one major reason for them not working for me is that the DOM is a hierarchical structure, so when people tuck several elements into it without any hierarchical relation between them, but still things happen magically and the elements interact with each other without hardly any way to know, I feel something’s not quite right there.

Another interesting take-away was that they were able to include other modular components into their component. For example they used a google chart element, and Paper components. I guess there is another minor unmentioned caveat here, and it is that it worked because they used Polymer to build their components, so the 2-way data binding worked seamlessly :-P

Using the web for music production and for live performances by Jan Monschke

I had seen Jan’s earlier talk at Scotland JS, which was similar to this one but less cool–this time he convinced his brother and a friend to connect to his online collaborative audio workstation so we could watch them playing live via WebRTC, and then he arranged the tracks they had recorded remotely from different points in the country. It was way more engaging and spectacular!

Then he also made a demo with an iPad and a home-made web audio app for live performances, which was really cool–you don’t need to write native code to build audiovisual apps! It is super awesome, come to think of it!
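
As a tiny illustration of the “no native code needed” point, here is a minimal Web Audio sketch–standard browser API, nothing from Jan’s app–that plays a one-second tone:

// Minimal Web Audio example: play a quiet 440 Hz tone for one second.
const ctx = new AudioContext();
const osc = ctx.createOscillator();
const gain = ctx.createGain();
osc.frequency.value = 440;  // A4
gain.gain.value = 0.2;      // keep the volume low
osc.connect(gain);
gain.connect(ctx.destination);
osc.start();
osc.stop(ctx.currentTime + 1);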

He still hasn’t fixed the things that I found “not OK” (as discussed in my Scotland JS post) but he is aware now of them! So maybe we might collaborate on The Definitive Collaborative Editor!

The Linguistic Relativity of Programming Languages by Jenna Zeigen

I didn’t catch this one in its entirety, but I got a few take-aways:

To all language snobs:

  • stop criticising
  • let other people use whatever language they’re comfortable with
  • languages that do not evolve will become obsolete

and a fantastic motto:

let’s keep JavaScript weird.

I think this was Jenna’s first talk. I want to see more!

Abusing phones to make the internet of things by Jan Jongboom

Jan works at Telenor and contributes heavily to Firefox OS, just like his coworker Sergi, whom I had been following on the internets for a while and met at the Mozilla Summit last year. I thought I’d finally meet Jan when I went to Amsterdam for GOTO, but it wasn’t meant to be. Then I happened to find him in the Hacker Lounge and he gave me a preview of the talk he was going to give later on, which promised to be super exciting. And it was!

Jan was a very entertaining speaker, and delighted the audience with both technical prowess and loads of jokes, including his own take on Firefox OS competition—Jan OS:

Basically he took away the UI layer in Firefox OS and got full root access to do whatever he pleases with the phones, which, when the screen is not used, have extremely long-lasting battery life (on the scale of WEEKS). So they are effectively integrated autonomous computers that come in cheaper than a Raspberry Pi and similar boards. He showed some practical examples of “Things” for the Internet of Things, such as a custom GPS tracker that reported (via push notifications) where one of his very easy-to-get-lost friends was, or a cheap wireless contactless doorbell (using the proximity sensor) that can play a custom sound through Bluetooth speakers.
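
Something in the spirit of the GPS tracker can be sketched with plain web APIs. This is my own rough illustration–it uses the Geolocation API and an HTTP POST to a made-up endpoint rather than push notifications, and it is not Jan’s actual code:

// Rough sketch: report the phone's position to a server as it changes.
// https://example.com/track is a placeholder endpoint, not a real service.
navigator.geolocation.watchPosition(
  function (pos) {
    fetch('https://example.com/track', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        lat: pos.coords.latitude,
        lon: pos.coords.longitude,
        time: pos.timestamp
      })
    });
  },
  function (err) { console.error('geolocation error', err); },
  { enableHighAccuracy: true }
);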

This reminds me that I wanted to ask someone in QA if they had any spare old phone they’re not testing with any more so I could rip its guts apart, but maybe now it’s not such a good idea–I would need to come up with a project first!

GIFs vs Web Components by Glen Maddern

I finally got to watch Glen’s talk! He gave it at CascadiaJS too, but I was hiding in my room preparing for mine, so I couldn’t join in the GIF celebration.

The most important takeaway is: GIF with a hard G.

As it should be.

And then, in no less serious terms:

GIFs are important.

Do not impose your conventions, or the conventions of your framework of choice, onto other potential users—just talk HTML/DOM/JS. He initially built the <x-gif> component with Polymer, then got requests to port it to Angular, to React, to… every imaginable framework. But adoption wasn’t really catching up, and translating to each framework meant learning that framework and its mannerisms, which made for a long and unproductive process, until he realised that it’s better if your component is generic and not tied to any framework.

Finally, lesson learnt: Polymer !== Web Components.
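
As a hedged sketch of what “just talk HTML/DOM/JS” means in practice, here is a simplified, framework-agnostic element in the spirit of <x-gif>–my own simplification using the current Custom Elements API, not the real implementation:

// Simplified, framework-agnostic element; not the actual <x-gif> source.
class SimpleGif extends HTMLElement {
  static get observedAttributes() { return ['src']; }

  connectedCallback() {
    this.render();
  }

  attributeChangedCallback() {
    this.render();
  }

  render() {
    const src = this.getAttribute('src');
    this.innerHTML = src ? '<img src="' + src + '" alt="animated gif">' : '';
  }
}
customElements.define('simple-gif', SimpleGif);

// Usage in plain HTML, no framework required:
// <simple-gif src="party.gif"></simple-gif>

Because the element is just DOM, any framework (or none at all) can create it, set its attributes, and move on.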

Know your /’s by Lindsay Eyink

This was a good closing talk, and I’m so grateful that it wasn’t overloaded with FEELINGS but with pragmatism and common sense, and a call to have more common sense out there–especially in planning departments that try to foster artificial constructs such as the “Silicon Roundabout” and the like.

Take-away: don’t try to copy Silicon Valley. Be your own city, with your own idiosyncrasies and differences. That’s what makes your environment unique and attracts people from other places–not making a bad copy of Silicon Valley with expensive gas and rent.

All in all—yet another great JSConf.eu! For many more years!


Mozilla WebDev CommunityBeer and Tell – September 2014

September’s Beer and Tell has come and gone.

A practical lesson in the ephemeral nature of networks interrupted the live feed and the recording, but fear not! A wiki page archives the meeting structure and this very post will lay plain the private ambitions of the Webdev cabal.

Mike Cooper: GMR.js

Mythmon is a Civilization V enthusiast, but multiplayer games are difficult — games can last a dozen hours or more. The somewhat archaic play-by-mail format removes the simultaneous, continuous time commitment, and the Giant Multiplayer Robot service abstracts away the other hassles of coordinating turns and save game files.

GMR provides a client for Windows only, so Mythmon created GMR.js to provide equivalent functionality cross-platform with Node.js. It presents an interactive command-line UI, which enables participation from a Steam Box and other non-Windows platforms.

Bramwelt: pewpew

Trevor Bramwell, summer intern for the Web Engineering team, presented a homebrew clone of Space Invaders he calls pewpew. He built it using PhaserJS as an exercise to better understand prototypal inheritance. You can follow along as he develops it by playing the live demo on gh-pages.
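
As a reminder of the concept pewpew is exploring, here is a generic prototypal-inheritance sketch–my own example, unrelated to the game’s code:

// Plain prototypal inheritance: player delegates to ship via its prototype.
const ship = {
  fire: function () { return this.name + ' fires!'; }
};

const player = Object.create(ship);
player.name = 'Player 1';

console.log(player.fire());                            // "Player 1 fires!"
console.log(Object.getPrototypeOf(player) === ship);   // true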

Cvan: honeyishrunktheurl

Chris Van shared two new takes on the classic URL shortener. The first is written in Go, with configuration stored in JSON on the server; it was an exercise for learning Go. The second is an HTML page that handles the redirect on the client side.

He intends to put them into production on a side project, but hasn’t found a suitable domain name.

Cvan: honeyishrunktheurl

Chris Van held the stage for a second demo. He showed how the CSS order property can be used to cheaply rearrange DOM nodes without destroying and re-rendering new nodes. An accompanying blog post delves into the details. The post is worth a read, since it covers some limitations of the technique that came up in discussion during the demo.
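
For reference, a minimal sketch of the technique–my own example, not Chris’s demo. Given a container styled with display: flex, setting the CSS order property reorders items visually without touching the DOM tree (note that DOM order and tab order stay the same):

// Assumes a container styled with display: flex.
const list = document.querySelector('.flex-list');
const items = Array.from(list.children);

// Give every item an explicit baseline order...
items.forEach(function (item, i) { item.style.order = i; });

// ...then visually move the last item to the front without
// removing or re-inserting any DOM nodes.
items[items.length - 1].style.order = -1;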

Lonnen: Alonzo, pt II

Last time he joined us, Lonnen was showing off Alonzo, a Scheme interpreter he is writing in Haskell. This month Alonzo has a number of new features, including variable assignment, functions, closures, and IO. Next he’ll pursue building a standard library and adding a test suite.


If you’re interested in attending the next Beer and Tell, sign up for the dev-webdev@lists.mozilla.org mailing list. An email is sent out a week beforehand with connection details. You could even add yourself to the wiki and show off your side-project!

See you next month!

Daniel Stenbergdaniel.haxx.se week #3

I won’t keep posting every video update here, but I mostly wanted to mention that I’ve kept posting a weekly video over at YouTube, basically explaining what’s going on right now within my dearest projects–mostly curl and some Firefox stuff.

This week: libcurl’s server cert verification API got a bashing at SEC-T, is HTTP over UDP a good idea? How about adding HTTP cache support to libcurl? HTTP/2 is getting deployed as we speak. An interesting curl bug when used by XBMC. The patch series for Firefox bug 939318 is improving slowly – will it ever land?

Erik VoldAdd-on Directionless

At the moment there is no one “in charge” at Mozilla who is aware of the add-on community’s plight. It’s sadly true that Mozilla has been divesting from add-on support for a while now. To be fair, I think the initial divestments were good ones; the Add-on Builder website was a money pit, for example. The priority has shifted to web apps, the old addons.mozilla.org team is working mostly on the Marketplace (which is no longer for add-ons, and is now just for apps), and the Add-on SDK team is now mostly working on Firefox DevTools projects.

At the moment we have only a few staffers working on addons.mozilla.org and the SDK, and none of us has the authority to make decisions or end debates. There is a tech lead for the SDK, but that position doesn’t have the authority to make directional decisions or decide how staffers prioritize their work and spend their time; each person’s manager is in charge of that, and our managers are DevTools and Marketplace people.

Either we all agree on a direction or we don’t.

Chris McAvoyHand Crafted Open Badges Display

Earning an Open Badge is easy; there are plenty of places that offer them, with more issuers signing up every day. Once you’ve earned an open badge, you can push it to your backpack, but what if you want to include the badge on your blog, or your artisanal hand-crafted web page?

You could download the baked open badge and host it on your site. You could tell people it’s a baked badge, but using that information isn’t super easy. Last year, Mike Larsson had a great idea to build a JS library that would discover open badges on a page and make them dynamic, so a visitor to the page would know what they were: not just a simple graphic, but a full-blown recognition of a skill or achievement.

Since his original prototype, the process of baking a badge has changed, plus Atul Varma built a library to allow baking and unbaking in the browser. This summer, Joe Curlee and I took all these pieces, prototypes and ideas and pulled them together into a single JS library you can include in a page to make the open badges on that page more dynamic.

There’s a demo of the library in action on Curlee’s Github. It shows a baked badge on the page; when you click the unbake button, it takes the baked information from the image and makes the badge dynamic and clickable. We added the button to make it clear what was happening on the page, but in a normal scenario, you’d just let the library do its thing and transform the badges on the page automatically. You can grab the source for the library on Github, or download the compiled / minified library directly.

There’s lots more we can do with the library; I’ll be writing more about it soon.

John O'DuinnSan Francisco Car Culture: Unusual Jaguar XK8 paint job

Found this earlier this month while on the way to work. The color scheme really threw me off, so at first I couldn’t even tell it was a Jaguar. I remain speechless.

Mark SurmanYou did it! (maker party)

This past week marked the end of Maker Party 2014. The results are well beyond what we expected and what we did last year — 2,513 learning events in 86 countries. If you were one of the 5,000+ teachers, librarians, parents, Hivers, localizers, designers, engineers and marketing ninjas who contributed to Webmaker over the past few months, I want to say: Thank you! You did it! You really did it!

(Maker Party post-party infographic)

What did you do? You taught over 125,000 people how to make things on the web — which is the point of the program and an important end in itself. At the same time, you worked tirelessly to build out and expand Webmaker in meaningful ways. Some examples:

  • Mozilla India organized over 250 learning events in the past two months, showing the kind of scale and impact you can get with well organized corps of volunteers.
  • Countries including Iran, New Zealand, and Sweden held their first ever Maker Party, adding to the idea that Webmaker is a truly global effort.
  • Tools and curriculum focused on mobile were added into the Webmaker suite — AppMaker was launched in June and was well received in Maker Parties around the world.
  • Over 300 partner orgs, including major library and after-school networks, participated, bringing even more skilled teachers and mentors into our community.
  • New and innovative ways to teach the web in a very low touch manner rolled out, including a Firefox snippet that let you hack our home page x-ray goggles style.
  • Webmaker teamed up with Mozilla’s policy team, with a sub-campaign for Net Neutrality teach-ins plus a related reddit AMA.

It’s important to say: these things add up to something. Something big. They add up to a better Webmaker — more curriculum, better tools, a larger network of contributors. These things are assets that we can build on as we move forward. And you made them.

You did one other thing this summer that I really want to call out — you demonstrated what the Mozilla community can be when it is at its best. So many of you took leadership and organized the people around you to do all the things I just listed above. I saw that online and as I traveled to meet with local communities this summer. And, as you did this, so many of you also reached out and mentored others new to this work. You did exactly what Mozilla needs to do more of: you demonstrated the kind of commitment, discipline and thoughtfulness that is needed to both grow and have impact at the same time. As I wrote in July, I believe we need to simultaneously drive hard on both depth and scale if we want Webmaker to work. You showed that this is possible.

Celebrating at MozFest East Africa


So, if you were one of the 5000+ people who contributed to Webmaker during Maker Party: pat yourself on the back. You did something great! Also, consider: what do you want to do next? Webmaker doesn’t stop at the end of Maker Party. We’re planning a fall campaign with key partners and networks. We’re also moving quickly to expand our program for mentors and leaders, including thinking through ideas like Webmaker Clubs. These are all things that we need your help with as we build on the great work of the past few months.


Filed under: education, mozilla, webmakers

ArkyNoto Fonts Update

Google’s Internationalization team released a new update of Noto Fonts this week. The update brings numerous new features and enhancements. Please read the project release notes for the full list of changes.

You can preview the fonts and download them at google.com/get/noto.

Google Noto project logo

Testing fonts on Firefox OS device

It is very simple to test the Noto fonts on a Firefox OS device. Just copy the font files into the /system/fonts folder and reboot the device. Don't forget to back up the existing fonts on the device first.

I am writing this blog post in Bangkok, so I am going to use the Thai Noto fonts in these instructions. Connect your Firefox OS device to the computer with a USB cable. Make sure to turn on developer settings to enable debugging via USB.


# Backup the existing Thai font
$ adb pull /system/fonts/DroidSansThai.ttf  

# Remount the /system partition as read-write
$ adb remount /system 

# Remove the font on the device
$ adb shell rm /system/fonts/DroidSansThai.ttf

# Unzip the previously downloaded Thai font package
$ unzip NotoSansThai-hinted.zip 

# Push to Firefox OS device 
$ adb push NotoSansThai-Regular.ttf /system/fonts

# Reboot the phone. Test your localization by selecting your language
# in the Language settings menu, or by navigating to a local-language web page in the browser app.
$ adb reboot


Wait, all I see is Tofu?

If you see square blocks (lovingly referred to as Tofu) instead of characters, that means the font file for your language is missing. Please double-check the steps; if everything fails, restore the previous copy of your font file.

(Firefox OS screenshot showing font "Tofu")

Happy Hacking!