Doug Belshaw – POSTPONED: Mozilla Maker Party Newcastle 2014

We’ve made the decision today to postpone the Maker Party planned for Saturday 13th September at Campus North, Newcastle-upon-Tyne.

Although there were lots of people interested in the event, the timing proved problematic for potential mentors and attendees alike. We’re going to huddle next week and think about a more suitable time – perhaps in November.

Thanks to those who got in touch about the event and offered their support. :-)

Jared Wein – We knew unicorns could bounce, but now they spin?!

One of the hidden features of Firefox 29 was a unicorn that bounced around the Firefox menu when it was emptied. The LA Times covered it in their list of five great features of Firefox 29.

[Animated GIF: the unicorn bouncing around the empty Firefox menu]

Building on the fun, Firefox 32 (released today) will now spin the unicorn when you press the mouse down in the area where the unicorn is bouncing.

[Animated GIF: the unicorn spinning while the mouse button is held down]

The really cool thing about the unicorn’s movement, both the bouncing and the spinning, as well as its coloring, is that it is all done using pure CSS. There is no JavaScript triggering the animation, direction, or events.

The unicorn is shown when the menu’s :empty pseudo-class is true. The direction and speed of the movement is controlled via a CSS animation that moves the unicorn in the X- and Y-direction, with both moving at different speeds. On :hover, the image of the unicorn gets swapped from grayscale to colorful. Finally, :active triggers the spinning.
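
As a rough illustration of how such an effect can be put together, here is a minimal CSS sketch; the selectors, image names and timings are invented for this example and are not the actual Firefox styles.

/* Illustrative sketch only: hypothetical selectors and assets, not Firefox's real rules. */
#menu-panel:empty::after {
  content: url("unicorn-gray.png");
  display: block;
  /* Different X and Y durations make the straight-line motion look like bouncing. */
  animation: move-x 3s linear infinite alternate,
             move-y 4s linear infinite alternate;
}

/* :hover swaps the grayscale image for the colorful one. */
#menu-panel:empty:hover::after {
  content: url("unicorn-color.png");
}

/* :active (mouse down) adds the spin on top of the bounce. */
#menu-panel:empty:active::after {
  animation: move-x 3s linear infinite alternate,
             move-y 4s linear infinite alternate,
             spin 1s linear infinite;
}

@keyframes move-x { from { margin-left: 0; } to { margin-left: 200px; } }
@keyframes move-y { from { margin-top: 0; }  to { margin-top: 150px; } }
@keyframes spin   { from { transform: rotate(0deg); } to { transform: rotate(360deg); } }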


Tagged: australis, CSS, firefox, planet-mozilla

Daniel Stenberg – HTTP/2 interop pains

At around 06:49 CEST on the morning of August 27, 2014, Google deployed an HTTP/2 draft-14 implementation on their front-end servers that handle logins to Google accounts (and possibly others). Those servers at least take care of all the various login flows you go through with Google, G+, Gmail, etc.

The little problem with that was that their implementation of HTTP/2 disagrees with all existing client implementations of that same protocol at that draft level. Someone noticed immediately and filed a bug against Firefox.

Firefox Nightly and Beta have HTTP/2 enabled by default, so users quickly started to notice this, and a range of duplicate bug reports has been filed, and keeps being filed, as more users run into the problem. As far as I know, Chrome does not have this enabled by default, so far fewer Chrome users get this ugly surprise.

The Google implementation has broken cookie handling (remnants of draft-13, it looks like, judging by how they do it). As I write this, we’re on the 7th day of this brokenness. We advise bleeding-edge users of Firefox to switch off HTTP/2 support in the meantime, until Google wakes up and acts.
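
For the record, toggling the draft HTTP/2 support is a single about:config change. The pref name below is written from memory for 2014-era Nightly builds, so treat it as an assumption and double-check in about:config on your build:

// user.js sketch – pref name assumed from memory; verify in about:config
user_pref("network.http.spdy.enabled.http2draft", false);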

You can actually switch HTTP/2 support back on once you’ve logged in, and it then continues to work fine. Below you can see what a lovely (wildly misleading) error message you get if you try HTTP/2 against Google right now with Firefox:

[Screenshot: the wildly misleading error message Firefox shows when trying HTTP/2 against Google]

This post is being debated on hacker news.

Byron Jones – happy bmo push day!

the following changes have been pushed to bugzilla.mozilla.org:

  • [1059627] changes made at the same time as a comment are no longer grouped with the comment
  • [880097] Only retrieve database fetched values if requested
  • [1056162] add bit.ly support to bmo
  • [1059307] Remove reporting of the firefox release channel from the guided bug entry (hasn’t been accurate since firefox 25)

discuss these changes on mozilla.tools.bmo.

[Screenshot: the new bit.ly support on bugzilla.mozilla.org]


Filed under: bmo, mozilla

John O'Duinn – “Lost Cat” by Caroline Paul

After all the fun reading “Meanwhile in San Francisco”, I looked to see if this duo had co-written any other books. Sure enough, they had.

“Lost Cat” tells the true story of how an urban cat owner (one of the authors) loses her cat, then has the cat casually walk back in the door weeks later, healthy and well. The book details various experiments the authors did using GPS trackers and tiny “CatCam” cameras to figure out where the cat actually went. Overlaying that data onto Google Maps surprised them both – they never knew the cat roamed so far and wide across the city. The detective work they did to track down and then meet with “Cat Stealer A” and “Cat Stealer B” made for a fun read… Just like “Meanwhile in San Francisco”, the illustrations are all paintings. Literally. My all-time favorite painting of any cat ever is on page 7.

A fun read… and a great gift to any urban cat owners you know.

Nick Thomas – Deprecating our old rsync modules

We’ve removed the rsync modules mozilla-current and mozilla-releases today, after calling for comment a few months ago and hearing no objections. Those modules were previously used to deliver Firefox and other Mozilla products to end users via a network of volunteer mirrors, but we now use content delivery networks (CDNs). If there’s a use case we haven’t considered, please get in touch in the comments or on the bug.

Josh Aas – Simple Code Review Checklists

What if, when giving a patch r+ on Mozilla’s bugzilla, you were presented with the following checklist:

You could not actually submit an r+ unless you had checked an HTML check box next to each item. For patches where any of this is irrelevant, just check the box(es) – you considered it.

Checklists like this are commonly used in industries that value safety, quality, and consistency (e.g. medicine, construction, aviation). I don’t see them as often as I’d expect in software development, despite our commitments to these values.

The idea here is to get people to think about the most common and/or serious classes of errors that can be introduced with nearly all patches. Reviewers tend to focus on whatever issue a patch addresses and pay less attention to the other myriad issues any patch might introduce. Example: a patch adds a null check, the reviewer focuses on pointer validity, and misses a leak being introduced.

Catching mistakes in code review is much, much more efficient than dealing with them after they make it into our code base. Once they’re in, fixing them requires a report, a regression range, debugging, a patch, another patch review, and another opportunity for further regressions. If a checklist like this spurred people to do some extra thinking and eliminated even one in twenty (5%) of preventable regressions in code review, we’d become a significantly more efficient organization.

For this to work, the checklist must be kept short. In fact, there is an art to creating effective checklists, involving more than just brevity, but I won’t get into anything else here. My list here has only four items. Are there items you would add or remove?

General thoughts on this or variations as a way to reduce regressions?


Pascal Finette – I've Seen the Future - and It's Virtual

More than 20 years ago I first experienced virtual reality in one of those large-scale 3D rigs which was traveling the country, setting up shop in the local multiplex cinema and charging you a small fortune to step into a 4-by-4 foot contraption, strap on a pair of 3D goggles, grab a plastic gun and hunt down some aliens in an immersive 3D environment.

Early VR Set

It’s funny – as unimpressive as the graphics were, and as puke-inducing as the delay between movement and visual update was – I still have vivid memories of the game and the incredible experience of literally stepping into a new world.

Fast forward to last year: Oculus revived the whole Virtual Reality (VR) scene with their Rift headset – cobbled together from cheap off-the-shelf components and some clever hardware and software hacking. The first time I tried the Rift I was hooked. It was the exact same crazy experience I had some 20 years ago – I found myself in a new world, just this time with much better graphics, none of the nasty visual delays (which make most people motion sick), and all delivered in a much more palatable device. And again, I can’t get that initial experience out of my head (in my case a rather boring walk through a Tuscan villa).

Since that experience, I joined Singularity University where we have a Rift in our Innovation Lab. Over the course of the last 8 weeks I must have demoed the Rift to at least 30 people – and they all react in the exact same way:

[Photo: demoing the Oculus Rift]

People giggle, laugh, scream, start moving with the motion they see in the headset… They are lost in the experience within 30 seconds of putting the goggles on. The last time I saw people react emotionally in a similar way to a new piece of technology was when Apple introduced the iPhone.

It’s rare to see a piece of technology create such a strong emotional reaction (delight!). And that’s precisely the reason why I believe VR will be huge. A game changer. The entry vector will be gaming – with serious applications following suit (think about use cases in the construction industry, engineering, visualization of complex information) and immersive storytelling being probably the biggest game changer. In the future you will not watch a movie or the news – you will be right in it. You will shop in these environments. You will not Skype but literally be with the other person.

And by being instead of just watching we will be able to tap much deeper into human empathy than ever before. To get a glimpse of this future, check out these panoramic pictures of the destruction in Gaza.

With prices for VR technology rapidly approaching zero (the developer version of Oculus Rift’s DK2 headset is a mere $350 and Google already introduced a cardboard (!) kit which turns your Android phone into a VR headset) and software development tools becoming much more accessible, we are rapidly approaching the point where the tools of production become so accessible that we will see an incredible variety of content being produced. And as VR is not bound to a specific hardware platform, I believe we will see a market more akin to the Internet than the closed ecosystems of traditional game consoles or mobile phone app stores.

The future of virtual reality is nigh. And it’s looking damn real.

Gervase Markham – Kingdom Code UK

Kingdom Code is a new initiative to gather together Christians who program, to direct their efforts towards hastening the eventual total triumph of God’s kingdom on earth. There’s a preparatory meet-up on Monday 15th September (tickets) and then a full get-together on Monday 13th October. Check out the website and sign up if you are interested.

(There’s also Code for the Kingdom in various cities in the US and India, if you live nearer those places than here.)

Adam Lofting – One month of Webmaker Growth Hacking

This post is an attempt to capture some of the things we’ve learned from a few busy and exciting weeks working on the Webmaker new user funnel.

I will forget some things, there will be other stories to tell, and this will be biased towards my recurring message of “yay metrics”.

How did this happen?

[Screenshot: the month’s new Webmaker user sign-up total]

As Dave pointed out in a recent email to the Webmaker Dev list, “That’s a comma, not a decimal.”

What happened to increase new user sign-ups by 1,024% compared to the previous month?

Is there one weird trick to…?

No.

Sorry, I know you’d like an easy answer…

This growth is the result of a month of focused work and many many incremental improvements to the first-run experience for visitors arriving on webmaker.org from the promotion we’ve seen on the Firefox snippet. I’ll try to recount some of it here.

While the answer here isn’t easy, the good news is it’s repeatable.

Props

While I get the fun job of talking about data and optimization (at least it’s fun when it’s good news), the work behind these numbers was a cross-team effort.

Aki, Andrea, Hannah and I formed the working group. Brett and Geoffrey oversaw the group, sanity checked our decisions and enabled us to act quickly. And others got roped in along the way.

I think this model worked really well.

Where are these new Webmaker users coming from?

We can attribute ~60k of those new users directly to:

  • Traffic coming from the snippet
  • Who converted into users via our new Webmaker Landing pages

Data-driven iterations

I’ve tried to go back over our meeting notes for the month and capture the variations on the funnel as we’ve iterated through them. This was tricky as things changed so fast.

The image below gives you an idea, but it also hides many more detailed experiments within each of these pages.

Testing Iterations

With 8 snippets tested so far, 5 funnel variations and at least 5 content variables within each funnel we’ve iterated through over 200 variations of this new user flow in a month.

We’ve been able to do this and get results quickly because of the volume of traffic coming from the snippet, which is fantastic. And in some cases this volume of traffic meant we were learning new things quicker than we were able to ship our next iteration.

What’s the impact?

If we’d run with our first snippet design and our first call to action, we would have had about 1,000 new Webmaker users from the snippet instead of 60,000 (the remainder are from other channels and activities). Total new user accounts are up by ~1,000%, but new users from the snippet specifically increased by around 6 times that.

One not-very-weird trick to growth hacking:

I said there wasn’t one weird trick, but I think the success of this work boils down to one piece of advice:

  • Prioritize time and permission for testing, with a clear shared objective, and get just enough people together who can make the work happen.

It’s not weird, and it sounds obvious, but it’s a story that often gets overlooked because it doesn’t have the simple causation-based hook we humans look for in our answers.

It’s much more appealing when someone tells you something like “Orange buttons increase conversion rate”. We love the stories of simple tweaks that have remarkable impact, but really it’s always about process.

More Growth hacking tips:

  • Learn to kill your darlings, and stay happy while doing it
    • We worked overtime to ship things that got replaced within a week
    • It can be hard to see that happen to your work when you’re invested in the product
      • My personal approach is to invest my emotion in the impact of the thing being made rather than the thing itself
      • But I had to lose a lot of A/B tests to realize that
  • Your current page is your control
    • Test ideas you think will beat it
    • If you beat it, that new page is your new control
    • Rinse and repeat
    • Optimize with small changes (content polishing)
    • Challenge with big changes (disruptive ideas)
  • Focus on areas with the most scope for impact
    • Use data to choose where to use data to make choices
    • Don’t stretch yourself too thin

What happens next?

  • We have some further snippet coverage for the next couple of weeks, but not at the same level we’ve had recently, so we’ll see this growth rate drop off
  • We can start testing the funnel we’ve built for other sources of traffic to see how it performs
  • We have infrastructure for spinning up and testing landing pages for many future asks
  • This work is never done, but with any optimization you see declining returns on investment
    • We need to keep reassessing the most effective place to spend our time
    • We have a solid account sign-up flow now, but there’s a whole user journey to think about after that
    • We need to gather up and share the results of the tests we ran within this process

Testing doesn’t have to be scary, but sometimes you want it to be.

Gervase Markham – Google Safe Browsing Now Blocks “Deceptive Software”

From the Google Online Security blog:

Starting next week, we’ll be expanding Safe Browsing protection against additional kinds of deceptive software: programs disguised as a helpful download that actually make unexpected changes to your computer—for instance, switching your homepage or other browser settings to ones you don’t want.

I posted a comment asking:

How is it determined, and who determines, what software falls into this category and is therefore blocked?

However, this question has not been approved for publication, let alone answered :-( At Mozilla, we recognise exactly the behaviour this initiative is trying to stop, but without written criteria, transparency and accountability, this could easily devolve into “Chrome now blocks software Google doesn’t like.” Which would be concerning.

Firefox uses the Google Safe Browsing service but enhancements to it are not necessarily automatically reflected in the APIs we use, so I’m not certain whether or not Firefox would also be blocking software Google doesn’t like, and if it did, whether we would get some input into the list.

Someone else asked:

So this will block flash player downloads from https://get.adobe.com/de/flashplayer/ because it unexpectedly changed my default browser to Google Chrome?!

Kudos to Google for at least publishing that comment, but it also hasn’t been answered. Perhaps this change might signal a move by Google away from deals which sideload Chrome? That would be most welcome.

Pascal Finette – Robots are eating our jobs

Shahin Farshchi wrote a piece for IEEE Spectrum, the flagship magazine of the Institute of Electrical and Electronics Engineers, on “Five Myths and Facts About Robotics Technology Today”.

In the article he states:

Robots are intended to eliminate jobs: MYTH – Almost every major manufacturing and logistics company I’ve spoken to looks to robotics as a means to improve the efficiency of its operations and the quality of life of its existing workers. So human workers continue to be a key part of the business when it comes to robotics. In fact, workers should view robots as how skilled craftsmen view their precision tools: enhancing output while creating greater job satisfaction. Tesla Motors is just one example of using robots [pictured above] to do all the limb-threatening and back-breaking tasks while workers oversee their operation and ensure the quality of their output. At Tesla’s assembly lines, robots glue, rivet, and weld parts together, under the watchful eye of humans. These workers can pride themselves with being part of a new era in manufacturing where robots help to reinvent and reinvigorate existing industries like the automotive sector.”

I disagree.

It is well documented that robots eliminate jobs (heck - that’s what they are for amongst other things). Shahin even shows a picture from Tesla’s highly automated factory depicting a fully automated production line without a single human around. Stating that robots are not replacing jobs but that the few remaining workers “can pride themselves with being part of a new era in manufacturing where robots help to reinvent and reinvigorate existing industries like the automotive sector” really doesn’t cut it.

Robots and automation are destroying jobs, especially at the lower end of the spectrum. At the same time we are not creating enough new jobs – which is already leading to massive challenges for our established systems and will only get worse over time.

I suggest you watch this:

In my opinion, what’s needed is for us to collectively acknowledge the issues at hand and start a productive dialog. The 2013 World Development Report states that we need to create 600 million new jobs globally in the next 15 years to sustain current employment rates – and this doesn’t take into account potential massive job losses due to automation and robots.

We need to start working on this. Now.

Kevin Ngo – Testing Project Browserify Modules in Karma Test Runner with Gulp

If you want to test local Browserify modules in your project with Karma, you'll have to take an extra step. One solution is to use karma-browserify, which bundles your modules with your tests, but it has downfalls when requiring files that require other files. That really sucks, since we'll often be unit testing local modules that depend on at least one other module, and thus it'd only be useful for requiring simple NPM modules.

Another solution uses Gulp to manually build a test bundle and put it on the project JS root path such that local modules can be resolved.

Here is the Gulp task in our gulpfile.js:

var gulp = require('gulp');
var browserify = require('browserify');
var source = require('vinyl-source-stream');  // Needed for .pipe(source(...)) below.
var glob = require('glob');  // You'll have to install this too.

gulp.task('tests', function() {
    // Bundle a test JS bundle and put it on our project JS root path.
    var testFiles = glob.sync('./tests/**/*.js');  // Bundle all our tests.
    return browserify(testFiles).bundle({debug: true})
        .pipe(source('tests.js'))  // The bundle name.
        .pipe(gulp.dest('./www/js'));  // The JS root path.
});

A test bundle, containing all our test files, will be spit out on our JS root path. Now when we do require('myAppFolder/someJSFile'), Browserify will easily be able to find the module.

But we also have to tell Karma where our new test bundle is. Do so in our karma.config.js file:

files: [
    {pattern: 'www/js/tests.js', included: true}
]

We'll also want to tell Gulp to re-bundle our tests every time the tests are touched. This can be annoying if you have Gulp set up to watch your JS path, since the tests task will spit out a bundle on that path and retrigger the watcher.

gulp.watch('./tests/**/*.js', ['tests']);
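
One way around that annoyance, as a sketch (the globs and the 'build-js' task name are placeholders for whatever your gulpfile already uses), is to exclude the generated bundle from the JS watcher:

// Watch app JS, but ignore the generated test bundle so the 'tests' task
// doesn't retrigger this watcher every time it writes www/js/tests.js.
gulp.watch(['./www/js/**/*.js', '!./www/js/tests.js'], ['build-js']);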

Run your tests and try requiring one of your project files. It should work!

Patrick Cloke – Adding an Auxiliary Audio Input to a 2005 Subaru Outback

I own a 2005 (fourth generation) Subaru Outback; I’ve had it since the fall of 2006 and it has been great. I have a little over 100,000 miles on it and do not plan to sell it anytime soon.

There is one thing that just kills me though. You cannot (easily [1]) change the radio in it…and it is just old enough [2] to have neither Bluetooth nor an auxiliary audio input. I’ve been carrying around a book of CDs with me for the past 8 years. I decided it was time to change that.

I knew that it was possible to "modify" the radio to accept an auxiliary input, but it involved always playing a silent CD, which I did not find adequate. I recently came across a post describing how to do this in such a way that the radio functions as normal, but when you plug a device into the auxiliary port it cuts out the radio and plays from the device. Someone else had also confirmed that it worked for them. Cool!

I vaguely followed the directions, but made a few changes here and there. Also, everyone online makes it seem like the radio is super easy to get out…I seriously think I spent at least two hours on it. There were two videos and a PDF I found useful for this task.

Front view of my uninstalled 2005 Subaru Outback stereo. Right view of my uninstalled 2005 Subaru Outback stereo. Back view of my uninstalled 2005 Subaru Outback stereo.

A few images of the uninstalled stereo before any disassembly. (So I could remember how to reassemble it!)

I wouldn’t say that this modification was extremely difficult, but it does involve:

  • Soldering to surface-mount components (I’m not awesome at soldering, but I have had a good amount of experience).
  • The willingness to potentially trash a radio.
  • Basic understanding of electrical diagrams and how switches work.
  • A lot of time! I spent ~13 hours total working on this.

Total cost of components, however, was < $5.00 (and that’s probably overestimating.) Really the only component I didn’t have was the switching audio jack, which I got at my local RadioShack for $2.99. (I also picked up wire, heatshrink, etc. so…$5.00 sounded reasonable.) The actual list of parts and tools I used was:

  • 1/8" Stereo Panel-Mount Phone Jack [$2.99, RadioShack #274-246]
  • ~2 feet of each of green and red 22 gauge wire, ~1 foot of black 22 gauge wire.
  • Soldering iron / Solder
  • 3 x Alligator clip testing wires (1 black, 1 red, 1 green)
  • Multimeter
  • Hot glue gun / Hot glue
  • Various sizes of flat/slotted and Phillips head screw drivers
  • Wire strippers
  • Wire cutter
  • Needle nosed pliers
  • Flashlight
  • Drill with 1/4" drill bit and a 1/2" spade bit (plus some smaller sized drill bits for pilot holes)

Anyway, once you have the radio out you can disassemble it down to its bare components. (It is held together with a bunch of screws and tabs; I took pictures at each step of the way to ensure I could put it back together.)

The front of the stereo after removing the control unit. The reverse of the control unit. The top of the unit with the cover removed showing the CD drive.

The initial steps of disassembly: the front after removing the controls, the reverse of the control unit, a top-down view after removing the top of the unit [3].

The main circuit board of the unit. The reverse of the main circuit board of the unit.

The actual circuit board of the stereo unit. You can see the radio module on the left.

The radio module connects to the motherboard with a 36-pin connector. Pin 31 is the right audio channel and pin 32 is the left audio channel. I verified this by connecting the disassembled radio to the car and testing with alligator clips hooked up to my phone’s audio output [4]. I already knew these were the pins from the directions, but I verified by completing the circuit to these pins and ensuring I heard mixed audio from my phone and the radio.

The directions suggested cutting the pin and bending it up to solder to it. I didn’t have any cutting tool small enough to get in between the pins…so I flipped the board over and did sketchier things. I scored the board to remove the traces [5] that connected the radio module to the rest of the board. I then soldered on either side of this break to route the signal through the audio connector.

Soldered leads to the bottom of the stereo board. Soldered ground to the top of the radio unit.

Five soldered connections are required, four to the bottom of the board [6] and one to the ground at the top of the unit.

Now, the way that this works is that the audio connector output (pins 2 and 5) is always connected. If nothing is in the jack, it is connected as a passthrough to the inputs (pins 3 and 4, respectively). If an audio connector is plugged in, input redirects to the jack. (Pin 1 is ground.) For reference, red is right audio and green is left audio (black is ground).

Wiring diagrams of the connections.

A few of the diagrams necessary to do this. The top two diagrams simply show the connector’s two states: no plug and plug. The bottom two diagrams show a normal 1/8" audio plug and the physical pin-out and measurements of the jack.

To reiterate, pins 2 and 5 connect to the "stereo side" of the scored pins 31 and 32 of the radio module. (I.e. They are the output from the connector back to what will be played by the stereo.) Pins 3 and 4 are the inputs from the radio module side of pins 31 and 32 to the connector.

So after soldering four connections (and some hot glue), we have the ability to intercept the signal. At this point I took the bare motherboard and tested it in my car with alligator clips to ensure the radio still worked; I then connected the alligator clips to a cut audio plug to ensure everything worked.

Hot glued wires to the board as strain relief. Testing with alligator clips. (after reassembly).

The wires were also hot glued to the circuit board as strain relief. After reassembly I tested again with alligator clips.

At this point, I reassembled the radio case and ran the wires out through holes in the side / bottom toward the front of the unit. I noticed there was an empty spot in the top left of the unit which looked like it would fit the panel mount audio jack. After doing some measurements I deemed my chances good enough to drill a hole here for the connector. Some tips on drilling plastic, if you haven’t done it much: use the lowest speed you can; start with very small bits and work your way up (I used 4 stages of bits); and cover both sides in masking tape to avoid scratches.

Taped and measurements for drilling the hole from the front. Taped and measurements for drilling the hole from the reverse.

Another benefit of tape is you can write anywhere you want. These measurements were taken initially on the back and transcribed to the front (where I drilled from).

The plastic was actually too thick for the panel-mount connector to reach through, which is where the 1/2" spade bit came in handy. I used it to drill through roughly half the thickness of the plastic (a little at a time with lots of testing). The connector was able to nestle inside the thinner plastic and reach all the way through.

The 1/4" hole drilled through the plastic. The thinning of the plastic from the 1/2" spade bit. The assembled connector in the hole

After the initial hole was drilled, the tape on the back was removed to thin the plastic.

The last bit was soldering the five connections onto the audio connector and applying a coating of hot glue (for strain relief and to avoid shorts). Once the connector was soldered, the front panel was carefully reassembled. Finally, the completed unit was reinstalled back into the car and voila, I now have an auxiliary audio input! Can’t wait to test it out on a long car trip.

The soldered jack. The hot-glued jack.

The soldered and hot-glued audio jack.

The installed unit. Close-up of the new jack.

The final installed stereo unit.

One caveat of doing this (and I’m unsure if this is because I didn’t cut the pins as suggested or if this is just a fact of doing it this way…). If you have an auxiliary input device playing AND play a CD, the audio mixes instead of being replaced by the auxiliary device. It works fine on radio though, so just remember to set the stereo to FM.

[1] The head unit of the stereo is directly built into the dashboard and includes the heat / air conditioning controls. People do sell kits to convert the dash into one that can accept an aftermarket radio…but where’s the fun in that?
[2] The 2007 edition had an option for a stereo with satellite radio and an AUX input. I probably could have bought this stereo and installed it, but I was quoted $285 last time I asked about changing my radio.
[3] This might seem insane, but I was fairly certain I’d be able to solder a jumper back into place if everything didn’t work, so I actually felt more comfortable doing this than cutting the pin.
[4] Playing one of my favorite albums: No Control by Bad Religion.
[5] You can see I actually had a CD in the CD player when I removed the radio. Oops! Luckily it was just a copy of one of my CDs (I never take originals in my car). I didn’t end up scratching it or anything either!
[6] Please don’t judge my soldering! Two of the four connections were a little sloppy (I had to add solder to those instead of just tinning the wires). I did ensure there were no shorts with a multimeter (and had to resolder one connection).

Chris McAvoy – Open Badges and JSON-LD

The BA (Badge Alliance) standard working group has had adding extensions to the OB (Open Badges) assertion specification high on its roadmap this summer. We agreed that before we could add an extension to an assertion or Badge Class, we needed to add machine-readable schema definitions for the 1.0 standard.

We experimented with JSON-Schema, then JSON-LD. JSON-LD isn’t a schema validator; it’s much more. It builds linked-data semantics on top of the JSON specification. JSON-LD adds several key features to JSON, most of which you can play around with in the JSON-LD node module.

  1. Add semantic data to a JSON structure, link the serialized object to an object type definition.
  2. Extend the object by linking to multiple object type definitions.
  3. A standard way to flatten and compress the data.
  4. Express the object in RDF.
  5. Treat the objects like a weighted graph.

All of which are features that support the concept behind the Open Badges standard very well. At its core, the OB standard is a way for one party (the issuer) to assert facts about another party (the earner). The assertion (the badge) becomes portable and displayable at the discretion of the owner of the badge.
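
To make that concrete, here is a rough sketch of what a JSON-LD-flavored badge assertion could look like; the @context URL and the exact field names are illustrative assumptions for this post, not the finalized Open Badges markup:

{
  "@context": "https://w3id.org/openbadges/v1",
  "type": "Assertion",
  "uid": "abc-123",
  "recipient": { "type": "email", "identity": "earner@example.org", "hashed": false },
  "badge": "https://issuer.example.org/badges/html-basics.json",
  "issuedOn": "2014-09-01",
  "verify": { "type": "hosted", "url": "https://issuer.example.org/assertions/abc-123.json" }
}

The @context line is what turns plain JSON into linked data: it ties each field to a published definition that any consumer can dereference.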

JSON-LD is also quickly becoming the standard method of including semantic markup on html pages for large indexers like Google. Schema.org now lists JSON-LD examples alongside RDFa and Microdata. Google recommends using JSON-LD for inclusion in their rich snippet listings.

We’ve been talking about JSON-LD on the OB standard working group calls for a while now. It’s starting to feel like consensus is forming around inclusion of JSON-LD markup in the standard. This Tuesday, September 2nd 2014, we’ll meet again to collectively build a list of arguments for and against the move. We’ll also discuss a conditional rollout plan (conditional in that it will only be executed if we get the thumbs up from the community) and identify any gaps we need to cover with commitments from the community.

It’s going to be a great meeting, if you’re at all interested in JSON-LD and Open Badges, please join us!

Doug Belshaw – Weeknote 35/2014

This week I’ve been:

  • Surviving while being home alone. The rest of my family flew down to Devon (it’s quicker/easier than driving) to visit the in-laws.
  • Working on lots of stuff around the house. There are no plants left in my garden, for example. We decided that we want to start from ‘ground zero’ so I went on a bit of a mission.
  • Suffering after launching myself into the week too hard. I’d done half of the things I wanted to do all week by Tuesday morning. By Wednesday I was a bit burnt out.
  • Writing blog posts:
  • Accepting an invitation to join Code Club’s education advisory committee.
  • Finding out about an opportunity to work with a well-known university in the US to design a module for their Ed.D. programme. More on that soon, hopefully!
  • Clearing out the Webmaker badges queue (with some assistance from my amazing colleagues)
  • Inviting some people to talk to me about the current Web Literacy Map and how we can go about updating it to a version 2.0.
  • Finishing and sending the rough draft of a video for a badge which will be available on the iDEA award website when it launches properly.
  • Starting to lift weights at the gym. I actually started last week, but I’ve already noticed it helping my swimming this week. Improved stamina, and the bottom of my right hamstring doesn’t hurt when I get out of the pool!

Next week I’m at home with a fuller calendar than usual. That’s because I’m talking to lots of people about future directions for the Web Literacy Map. If you’ve started using it, then I’d love to interview you. Sign up for that here.

Eric Shepherd – The Sheppy Report: August 29, 2014

This week, my WebRTC research continued; I spent a lot of time watching videos of presentations, pausing every few seconds to take notes, and rewinding often to be sure I got things right. It was interesting but very, very time-consuming!

I got a lot accomplished this week, although not any actual code on samples like I’d planned to. However, the pages on which the smaller samples will go are starting to come together, between bits of actual content on MDN and my extensive notes and outline. So that’s good.

I’m looking forward to this three-day Labor Day holiday here in the States. I’ll be back at it on Tuesday!

What I did this week

  • Copy-edited the Validator glossary entry.
  • Copy-edited and cleaned up the Learning area page Write a simple page in HTML.
  • Created an initial stub documentation project plan page for updating the HTML element interface reference docs.
  • Turned https://developer.mozilla.org/en-US/docs/Project:About into a redirect to the right place.
  • Read a great deal about WebRTC.
  • Watched many videos about WebRTC, pausing a lot to take copious notes.
  • Built an outline of all the topics I want to be sure to cover. I’m sure this will continue to grow for a while yet.
  • Gathered notes and built agendas for the MDN community meeting and the Web APIs documentation meeting.
  • Updated the WebRTC doc plan with new information based on my initial notes.
  • Offered more input on a bug recommending that we try to add code to prevent people from using the style attribute or any undefined classes.
  • Filed bug 1060395 asking for a way to find the pages describing the individual methods and properties of an interface in the Web API reference
  • Fixed bug 1058814 about hard-to-read buttons by correcting the styles used by a macro.
  • Dealt with expense reports.
  • Started very initial work on WebRTC doc tree construction, preparing to reshuffle and clean up the existing, somewhat old, pages, and to add lots of new stuff.
  • Started work on trying to figure out how to make the SubpageMenuByCategories macro not lose headers; it’s calling through to MakeColumnsForDL, which specifically only works for a straight-up <dl>. Fixing this to work correctly will be my first task on Tuesday.

Meetings attended this week

Monday

  • MDN bug triage meeting
  • #mdndev planning meeting

Tuesday

  • Developer Relations weekly meeting.
  • 1:1 with Teoli. This went on for an hour instead of the usual 30 minutes, due to the enormous amount of Big Stuff we discussed.

Wednesday

  • MDN community meeting

Friday

A pretty good week all in all!

Benoit Girard – Visual warning for slow B2G transaction landed

With the landing of bug 1055050, if you turn on the FPS counter on mobile you will now notice a rectangle around the screen edge warning you that a transaction was over budget.

  • The visual warning will appear if a transaction took over 200ms from start to finish.
  • Yellow indicates the transaction took over 200ms.
  • Orange will indicate the transaction took about 500ms.
  • Red will indicate the transaction is about 1000ms or over.
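
For reference, the FPS counter overlay is backed by a Gecko pref; the name below is written from memory, so treat it as an assumption and verify on your build (on a device it is usually toggled from the developer settings):

// user.js sketch – pref name assumed from memory
user_pref("layers.acceleration.draw-fps", true);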

What’s a transaction?

It’s running the rendering pipeline, which includes: (1) running requestAnimationFrame and other refresh observers, (2) flushing pending style changes, (3) flushing and reflowing any pending layout changes, (4) building a display list, (5) culling, (6) updating the layer tree, (7) sending the final layer tree to the compositor, and (8) syncing resources with the GPU. It does NOT include compositing, which isn’t part of the main-thread transaction, and it does not warn for other events like slow-running JS event handlers.

Why is this important?

A transaction, just like any other Gecko event, blocks the main thread. This means that anything else queued and waiting to be serviced will be delayed, so many things on the page/app will be delayed: animations, typing, canvas, JS callbacks, timers, the next frame.

Why 200ms?

200ms is already very high. If we want anything in the app to run at 60 FPS that doesn’t use a magical async path then any event taking 16ms or more will cause noticeable stutter. However we’re starting with a 200ms threshold to focus on the bigger items first.

How do I fix a visual warning?

The warning is just provided as a visual tool.


Priyanka Nag – Maker Party Bhubaneshwar

Last weekend I had a blast in Bhubaneshwar. Over two days, I was there at two different colleges for two Maker parties.

On Saturday (23rd August 2014), we were at the Center of IT & Management Education (CIME), where we were asked to address a crowd of 100 participants whom we were supposed to teach webmaking. Trust me, very rarely at such events do we get a crowd where we get the opportunity to be less of a teacher and more of a learner. We taught them webmaking, true, but in return we learnt a lot from them.

Maker Party at Center of IT & Management Education (CIME)

On Sunday, things were even more fabulous at the Institute of Technical Education & Research (ITER), Siksha 'O' Anusandhan University, where we were welcomed by around 400 participants, all filled with energy, enthusiasm and the willingness to learn.

Maker Party at Institute of Technical Education & Research(ITER)

Our agenda for both days was simple: to have loads and loads of fun! We kept the tracks interactive and very open-ended. On both days, we covered the following topics:
  • Introduction to Mozilla
  • Mozilla Products and projects
  • Ways of contributing to Mozilla
  • Intro to Webmaker tools
  • Hands-on session on Thimble, Popcorn and X-ray goggles and Appmaker
On both days, we concluded our sessions by giving away some small tokens of appreciation, like T-shirts, badges, stickers etc., to the people who had been extra awesome in the group. We concluded the awesomeness of the two days by cutting a very delicious cake and fighting over it till its last piece.
 
Cake.....
Bidding goodbye after two days was tough, but after witnessing the enthusiasm of everyone we met during these two events, I am very sure we are going to return soon to Bhubaneshwar for even more awesomeness.
 
A few people who are to be thanked for making these events successful and very memorable are:
  1. Sayak Sarkar, the co-organizer for this event.
  2. Sumantro, Umesh and Sukanta for travelling all the way from Kolkata and helping us out with the sessions.
  3. Rish and Prasanna for organizing these events.
  4. Most importantly, the entire team of volunteers from both colleges without whom we wouldn't have been able to even move a desk.
P.S. – Not to forget, we did manage to grab the media's attention as well. The event was covered by a local newspaper.
The article in the newspaper next morning

Daniel Stenberg – Firefox OS Flatfish Bluedroid fix

Hey, when I just built my own Firefox OS (b2g) image for my Firefox OS Tablet (flatfish) straight from the latest sources, I ran into this (known) problem:

Can't find necessary file(s) of Bluedroid in the backup-flatfish folder.
Please update the system image for supporting Bluedroid (Bug-986314),
so that the needed binary files can be extracted from your flatfish device.

So, as I struggled to figure out the exact instructions on how to proceed from this, I figured I should jot down what I did in the hopes that it perhaps will help a fellow hacker at some point:

  1. Download the 3 *.img files from the dropbox site that is referenced from bug 986314.
  2. Download the flash-flatfish.sh script from the same dropbox place
  3. Make sure you have ‘fastboot’ installed (I’m mentioning this here because it turned out I didn’t and yet I have already built and flashed my Flame phone successfully without having it). “apt-get install android-tools-fastboot” solved it for me. Note that if it isn’t installed, the flash-flatfish.sh script will claim that the device is not in fastboot mode and stop with an error message saying so.
  4. Finally: run the script “./flash-flatfish.sh [dir with the 3 .img files]”
  5. Once it has succeeded, the tablet reboots
  6. Remove the backup-flatfish directory in the build dir.
  7. Restart the flatfish build again and now it should get past that Bluedroid nit.
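
Condensed into shell form, the recovery looks roughly like this (a sketch only; the download directory is a placeholder and ./build.sh stands in for however you normally start the b2g build):

# Sketch – paths are placeholders
sudo apt-get install android-tools-fastboot    # step 3
./flash-flatfish.sh ~/Downloads/flatfish-imgs  # step 4: directory containing the 3 .img files
rm -rf backup-flatfish                         # step 6, from the build directory
./build.sh                                     # step 7: restart the flatfish build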

Enjoy!

Wladimir Palant – Using a Firefox extension to work around Selenium WebDriver's limitations

My Google search link fix extension had a bunch of regressions lately and I realized that testing its impact on the search pages manually isn’t working — these pages are more complicated than they look, and there are lots of configuration options affecting them. So I decided to look into Selenium WebDriver in order to write integration tests that would automate Firefox. All in all, writing the tests is fairly simple once you get used to the rather arcane API. However, the functionality seems to be geared towards very old browsers (think IE6) and some features are nowhere to be found.

One issue: there is no way to focus an element without clicking it. Clicking isn’t always an option, since it might trigger a link for example. That issue turned out to be solved fairly easily:

driver.execute_script("arguments[0].focus()", element)

The ability to pass elements as parameters to WebDriver.execute_script is very useful, so it is surprising that it doesn’t seem to be documented properly anywhere.

But what about working with tabs or middle-clicking links? It seems that tabbed browsing wasn’t invented yet back when that API was designed, so it only has a concept of windows — not very useful. So WebDriver will only let you work with the currently selected tab, inactive tabs are off limits. And WebDriver.execute_script isn’t any help here either, it won’t let you run privileged code.

After briefly considering using send_keys functionality to open Web Console on about:config and typing code into it (yes, it looks like that would actually work), I decided to go with a less crazy solution: install an additional extension to implement the necessary functionality. So if a test wants the element to be middle-clicked it can trigger a custom event:

driver.execute_script('''
  var event = document.createEvent("Events");
  event.initEvent("testhelper_middleclick", true, false);
  arguments[0].dispatchEvent(event);
''', element)

And the extension listens to that event:

window.gBrowser.addEventListener("testhelper_middleclick", function(event)
{
  let utils = event.target.ownerDocument.defaultView
                   .QueryInterface(Ci.nsIInterfaceRequestor)
                   .getInterface(Ci.nsIDOMWindowUtils);
  let rect = event.target.getBoundingClientRect();
  utils.sendMouseEvent("mousedown", rect.left + 1, rect.top + 1, 1, 1, 0);
  utils.sendMouseEvent("mouseup", rect.left + 1, rect.top + 1, 1, 1, 0);
}, false, true);

This works nicely, but what if you want to get data back? For example, I want to know which URLs were requested at the top level — in particular, whether there was a redirect before the final URL. Selenium only allows you to get notified of URL changes that were initiated by Selenium itself (not very helpful) or poll driver.current_url (doesn’t work). The solution is to have the extension register a progress listener and write all URLs seen to the Browser Console:

window.gBrowser.addTabsProgressListener({
  onStateChange: function(browser, webProgress, request, flags, status)
  {
    if (!(flags & Ci.nsIWebProgressListener.STATE_IS_WINDOW))
      return;
    if (!(flags & Ci.nsIWebProgressListener.STATE_START) && !(flags & Ci.nsIWebProgressListener.STATE_REDIRECTING))
      return;
    if (request instanceof Ci.nsIChannel)
      Cu.reportError("[testhelper] Loading: " + request.URI.spec);
  }
});

You can use driver.get_log("browser") to retrieve the full list of console messages. Each message also has a timestamp which allows for example only extracting the URLs seen after the previous check.
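
As a sketch of that last point (the helper function name is made up; "timestamp" and "message" are the standard fields of Selenium log entries), a test can filter the console log for the extension's marker and keep only entries newer than the last check:

MARKER = "[testhelper] Loading: "

def urls_since(driver, last_timestamp=0):
    """Return (urls, newest_timestamp) for helper log entries after last_timestamp."""
    urls = []
    newest = last_timestamp
    for entry in driver.get_log("browser"):
        if entry["timestamp"] <= last_timestamp:
            continue
        newest = max(newest, entry["timestamp"])
        if MARKER in entry["message"]:
            # Keep only the URL part of the reported message.
            urls.append(entry["message"].split(MARKER, 1)[1].strip('"'))
    return urls, newest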

Side-note: I first considered using MozMill for this. However, it is geared very much towards Firefox development and much of the Selenium functionality would have to be reimplemented (locating installed Firefox instance, default Firefox preferences for a test profile, dismissing alerts on web pages and so on).

Priyanka Nag – Maker Party gets grander in Pune this time

While going through my Twitter timeline this evening, I noticed Michelle Thorne's tweet stating that India leads with the most Maker Party action this season.
Well, who doubts that! In India, we have Maker Parties being organized almost every second day. My Facebook wall and Twitter timeline are overloaded with posts, photos and updates from all the Maker Parties happening around me.

Maker Party, Pune


Well, if you are still not aware of this one, we are having the granddaddy of these Maker Parties in Pune on the 6th of September 2014. The executive director of the Mozilla Foundation, Mark Surman, is going to be personally present for this event. Just like all Maker Parties, this event is an attempt to map and empower a community of educators and creative people who share a passion to innovate, evolve and change the learning landscape.

A few quick updates about this event:
  •  Event date - 6th and 7th September
  •  Event venue - SICSR, Model Colony, Pune
  • Rough agenda for the event is going to be:
    • 6th September 2014 (Day 1) 
      • 10am - 11am : Mozilla introduction
      • 11am - 12 : About Hive initiative
      •  12 - 1pm: Rohit Lalwani - Entrepreneurship talk
      •  1-2pm : Lunch break
      •  2pm - 3pm: Webmaker begins with Appmaker
      •  3pm - 4pm: Webmaker continues with Thimble
      •  4pm - 4.45pm: Webmaker continues with Popcorn
      •  4.45pm - 5.30pm : Webmaker continues with x-ray goggles
      • 5.30pm - 6pm: Prize distribution (against best makes of the day etc). Science fair also ends
      • 6pm - 7pm : Birds of a feather
      • 7pm : Dinner (venue - TBD)
Science fair will be from 12 noon to 6pm.
  
    •  7th September 2014 (Day 2) 
      • 1st Half: Community Meetup and Discussions on the future roadmap for Hive India,
        Long term partnership prospect meeting with partners.
      •  2nd Half: Community training sessions on Hive and Train the trainer events.
 
For this event, we are having a variety of different training sessions, workshops and science displays – ranging from 3D printing to wood-work, origami to quadcopter flying, and even film making.

If you have still not registered for this event, here's your chance:



Monica Chew – Firefox 32 supports Public Key Pinning

Public Key Pinning helps ensure that people are connecting to the sites they intend. Pinning allows site operators to specify which certificate authorities (CAs) issue valid certificates for them, rather than accepting any one of the hundreds of built-in root certificates that ship with Firefox. If any certificate in the verified certificate chain corresponds to one of the known good certificates, Firefox displays the lock icon as normal.

Pinning helps protect users from man-in-the-middle attacks and rogue certificate authorities. When the root cert for a pinned site does not match one of the known good CAs, Firefox will reject the connection with a pinning error. This type of error can also occur if a CA mis-issues a certificate.

Pinning errors can be transient. For example, if a person is signing into WiFi, they may see an error like the one below when visiting a pinned site. The error should disappear if the person reloads after the WiFi access is set up.



Firefox 32 and above supports built-in pins, which means that the list of acceptable certificate authorities must be set at time of build for each pinned domain. Pinning is enforced by default. Sites may advertise their support for pinning with the Public Key Pinning Extension for HTTP, which we hope to implement soon. Pinned domains include addons.mozilla.org and Twitter in Firefox 32, and Google domains in Firefox 33, with more domains to come. That means that Firefox users can visit Mozilla, Twitter and Google domains more safely. For the full list of pinned domains and rollout status, please see the Public Key Pinning wiki.
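
For illustration, the header defined by that extension looks roughly like the following (a sketch only: the hash values and max-age are placeholders, it is really a single header line wrapped here for readability, and as noted above Firefox does not consume it yet):

Public-Key-Pins: pin-sha256="<base64 SPKI hash of a key in the chain>";
                 pin-sha256="<base64 SPKI hash of a backup key>";
                 max-age=5184000; includeSubDomains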

Thanks to Camilo Viecco for the initial implementation and David Keeler for many reviews!

Mike Ratcliffe – View jQuery and jQuery Live events in Firefox DevTools

A little while back we landed a feature that made it possible to interact with events from the markup panel of Firefox developer tools.

A lot of people were excited about this, but we soon realized that jQuery wraps event handlers in a proxy, making all handlers look like this:

function() {  
  return fn.apply( thisObject || this, arguments );
};

Numerous facepalms later, we now have support for jQuery and jQuery Live events. This support will be available in tonight's Nightly:

jQuery and jQuery Live events

Because library support is important, we have created an API that allows developers and library authors to create "event parsers" that allow our tools to make sense of their event systems.

This API is privileged code, so to create an event parser you will need to create a simple Firefox extension.

The API looks like this:

const {EventParsers} = require("devtools/toolkit/event-parsers");

var parser = {  
  id: "myLibrary events", // Unique id
  hasListeners: function(node) {
    // Hunt for node's listeners and return true as soon as one is 
    // encountered.
  },
  getListeners: function(node) {
    // Hunt for node's listeners and return an array of objects
    // representing those listeners. Each object should look like this:
    {
      type: "click",
      handler: function clicked() {...},

      // These tags will be displayed as attributes in the events popup.
      tags: "jQuery,Live", 

      // Hide or show fields
      hide: {               
        debugger: false, // Debugger icon
        type: false, // Event type e.g. click
        filename: false,
        capturing: false,
        dom0: false
      },

      override: {
      // The following can be overridden:
        type: "click",
        origin: "http://www.mozilla.com",
        searchString: 'onclick="doSomething()"',
        DOM0: true,
        capturing: true
      }
    }
  },
  normalizeHandler: function(fnDO) {
    // Take a handler debug object and use the debugger to walk the scope
    // chain to discover the function you would like to be displayed.

    // See https://hg.mozilla.org/integration/fx-team/diff/9add1ec0251d/toolkit/devtools/event-parsers.js#l1.98 for an example.
  }
};

EventParsers.registerEventParser(parser);  
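
To give a feel for the shape of a parser, here is a toy example for an entirely made-up library that stashes its listeners on an expando property; nothing here reflects jQuery's real internals:

// Hypothetical: a library that stores {type, fn} records on node._toyListeners.
var toyParser = {
  id: "toyLibrary events",
  hasListeners: function(node) {
    return !!(node._toyListeners && node._toyListeners.length);
  },
  getListeners: function(node) {
    return (node._toyListeners || []).map(function(listener) {
      return {
        type: listener.type,
        handler: listener.fn,
        tags: "toyLibrary"
      };
    });
  }
};

EventParsers.registerEventParser(toyParser);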

We have bugs logged to add support for the following libraries:

You still here? What are you waiting for? Go get hacking!

Nicholas Nethercote – Update on reviewing our data practices and Bugzilla development database disclosure

As we indicated in the post titled “MDN Disclosure”, we began several remediation measures, including a review of data practices surrounding user data. We have kicked off a larger project to better our practices around data, including with respect to the various non-Mozilla projects we support. We are implementing immediate fixes for any discovered issues across the organization, and are requiring each business unit to perform a review of their data practices and, if necessary, to implement additional protections based on that review.

As we proceed through our broader remediation program, we discovered an incident that occurred in the Bugzilla community, one of the community projects we support. A member of the Bugzilla community discovered that development database dump files containing email addresses and encrypted passwords were posted on a publicly accessible server. They were alerted to this incident by a security bug filed by a contributor. See the Bugzilla community blog post for more information.

While it is important to note that the disclosure of this development database does not affect bugzilla.mozilla.org, we continue to believe that the broader community would benefit from our increased focus on data practices and therefore will continue with our plan of including the Bugzilla project as well as other community projects in the data practices initiatives we’ve described above.

We are committed to continuing to improve our data practices to minimize the likelihood of these and other types of incidents.

Sincerely,

Mozilla Security

Jared Wein – New in Firefox Nightly: Recommended and easy theme switching through Customize mode

We shipped the Australis project with Firefox 29, but the Firefox team hasn’t stopped working on making Firefox the easiest browser to personalize. Firefox allows easy customizing through the new Customize mode, and now in Firefox Nightly people will find a quick and easy way to set the theme of the browser.

After entering Customize mode, a new menu is shown at the footer of the window. Clicking on this menu will show any installed themes as well as a list of five recommended themes.

These recommended themes were picked from the Add-ons for Firefox website by members of the Firefox User Experience team. All of the themes are licensed through Creative Commons. Some are CC-BY and others are CC-BY-SA.

Themes menu

Hovering over a theme in the menu will preview the appearance of the theme. Clicking on one of the themes will change the applied theme.

An applied theme

We haven’t figured out yet what the rotation will be for recommended themes. Any input on how often or how we should go about putting together the next list is greatly appreciated.

Full management of themes and add-ons is still available through the Add-ons Manager. Recommended themes that have not been applied will not show up in the Add-ons Manager. Once a recommended theme is applied, it will appear in the Add-ons Manager and can be uninstalled from there.


Tagged: firefox, planet-mozilla, usability

Pete Moore – Weekly review 2014-08-27

Highlights from this week

1. Play Store - armv6

The main goal of the last week has been to enable Fennec builds on the esr31 branch. Last week I updated the build process to use a different mechanism to generate the version code in the Play Store for armv6 apks generated from the esr31 branch. This week has been about enabling these builds and release builders.

This work is tracked in Bug 1040319 – Ensure that Fennec builds from mozilla-esr31 have a buildID to allow for armv6/Android 2.2 users to update to mozilla-esr31 apks.

2. Working with contributors

I’ve been working with kartikgupta0909 this week on IRC - hoping he is going to fix Bug 1020613 (vcs sync should only push tags/heads that have changed since last successful push) for us.

I added metadata to bugs, and created a bugzilla search for them to appear in, which I linked to from our contributions wiki page (and I created a sublink to RelEng contributions page from our main Release Engineering page).

3. Other

Regular type support work, which can be seen in bugs below.

Goals for next week:

  • Return to l10n work
  • Prepare for RelEng arch meeting in September

Bugs I created this week:

Other bugs I updated this week:

Julien VehentPostgres multicolumn indexes to save the day

I love relational databases. Well designed, they are the most elegant and efficient way to store data. Which is why MIG uses Postgresql, hosted by Amazon RDS.

This is the first time I've used RDS for anything more than a small website, and I'm discovering its capabilities along the way. Over the past few days, I've been investigating performance issues: the database was running close to 100% CPU, and the number of DB connections maintained by the Go database package was varying a lot. Something was off.

I have worked as a junior Oracle & Postgres DBA in the past. In my limited experience, database performance problems are almost always due to bad queries or bad schemas. When you wrote the queries yourself, however, they are what you blame last, after spending hours looking for a bug in every other component outside of your control.

Eventually, I re-read my queries, and found one that looked bad enough:

// AgentByQueueAndPID returns a single agent that is located at a given queueloc and has a given PID
func (db *DB) AgentByQueueAndPID(queueloc string, pid int) (agent mig.Agent, err error) {
	err = db.c.QueryRow(`SELECT id, name, queueloc, os, version, pid, starttime, heartbeattime,
		status FROM agents WHERE queueloc=$1 AND pid=$2`, queueloc, pid).Scan(
		&agent.ID, &agent.Name, &agent.QueueLoc, &agent.OS, &agent.Version, &agent.PID,
		&agent.StartTime, &agent.HeartBeatTS, &agent.Status)
	if err == sql.ErrNoRows {
		// no matching agent: return the zero value without wrapping the error
		return
	}
	if err != nil {
		err = fmt.Errorf("Error while retrieving agent: '%v'", err)
		return
	}
	return
}
The query locates an agent using its queueloc and pid values, which are both needed to properly identify an agent. The problem is that neither queueloc nor pid has an index, so every lookup results in a sequential scan of the table:

mig=> explain SELECT * FROM agents WHERE queueloc='xyz' AND pid=1234;
QUERY PLAN                                                   
--------------------------------------------------------------
 Seq Scan on agents  (cost=0.00..3796.20 rows=1 width=161)
   Filter: (((queueloc)::text = 'xyz'::text) AND (pid = 1234))
(2 rows)

This query is called ~50 times per second, and even with only 45,000 rows in the agents table, that is enough to burn all the CPU cycles on my RDS instance.

Postgres supports multicolumn indexes. The fix is simple enough: create an index on the columns queueloc and pid together.

mig=> create index agents_queueloc_pid_idx on agents(queueloc, pid);
CREATE INDEX

This results in an immediate, drastic reduction in the cost of the query, and in the CPU usage of the instance.


mig=> explain SELECT * FROM agents WHERE queueloc='xyz' AND pid=1234;
QUERY PLAN                                                     
---------------------------------------------------------------------------------------
 Index Scan using agents_queueloc_pid_idx on agents  (cost=0.41..8.43 rows=1 width=161)
   Index Cond: (((queueloc)::text = 'xyz'::text) AND (pid = 1234))
(2 rows)

migdbcpuusage.png

Immediate performance gain for a limited effort. Gotta love Postgres!

Doug BelshawSoliciting feedback on v1.1 of the Web Literacy Map

The Web Literacy Map constitutes the skills and competencies that Mozilla and its community of stakeholders believe to be necessary to read, write and participate effectively on the web.

Sea

The Web Literacy Map currently stands at v1.1 but as I blogged recently, a lot has happened since we launched the first version at MozFest last year! That’s why we’re planning to update it to v2.0 by early January 2015.

I’ll be connecting with key people over the coming weeks to ask for a half-hour (recorded) conversation which will then be shared with the community. In the meantime we’d appreciate your feedback. Here’s what Atul Varma had to say:

So I feel like the weblit map is cool as it is, but as has been discussed previously, there are a number of areas that are important but cross-cut through existing competencies, rather than necessarily constituting their own competencies by themselves… what if we created a set of lenses through which the competencies could be viewed?

There’s a couple of ways you can give your feedback:

Leaving your name means we can follow up with questions if necessary (for clarification, etc.). I look forward to hearing what you have to say! All opinions are welcome. Pull no punches. :-)


Questions? I’m @dajbelshaw on Twitter or you can email me: doug@mozillafoundation.org

Daniel StenbergGoing to FOSDEM 2015

Yeps,

I’m going there and I know several friends are going too, so this is just my way of pointing it out to those of you who still haven’t made up your minds! There’s still a lot of time left, as this event is taking place in late January next year.

I intend to try to get a talk to present this time and I would love to meet up with more curl contributors and fans.

fosdem

Byron Joneshappy bmo push day!

the following changes have been pushed to bugzilla.mozilla.org:

  • [1058479] move the “mozilla employees” warning on bugzilla::admin next to the submit button
  • [1058481] git commits should link to commitdiff not commit
  • [1056087] contrib/merge-users.pl fails if there are no duplicate bug_user_last_visit rows
  • [1058679] new bug API returning a ref where bzexport expects bug data
  • [1057774] bzAPI landing page gives a 404
  • [1056904] Add “Mentored by me” to MyDashboard
  • [1059085] Unable to update a product’s group controls: Can’t use string (“table”) as an ARRAY ref while “strict refs” in use
  • [1059088] Inline history can be shown out-of-order when two changes occur in the same second

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla

Gervase MarkhamEmail Account Phishers Do Manual Work

For a while now, criminals have been breaking into email accounts and using them to spam the account’s address book with phishing emails or the like. More evil criminals will change the account password, and/or delete the address book and the email to make it harder for the account owner to warn people about what’s happened.

My mother recently received an email, purportedly from my cousin’s husband, titled “Confidential Doc”. It was a mock-up of a Dropbox “I’ve shared an item with you” email, with the “View Document” URL actually being http://proshow.kz/excel/OLE/PPS/redirect.php. This (currently) redirects to http://www.affordablewebdesigner.co.uk/components/com_wrapper/views/wrapper/tmpl/dropbox/, although it redirected to another site at the time. That page says “Select your email provider”, explaining “Now, you can sign in to dropbox with your email”. When you click the name of your email provider, it asks you for your email address and password. And boom – they have another account to abuse.

But the really interesting thing was that my mother, not being born yesterday, emailed back saying “I’ve just received an email from you. But it has no text – just an item to share. Is it real, or have you been hacked?” So far, so cautious. But she actually got a reply! It said:

Hi <her shortened first name>,
I sent it, It is safe.
<his first name>

(The random capital was in the original.)

Now, this could have been a very smart templated autoresponder, but I think it’s more likely that the guy stayed logged into the account long enough to “reassure” people and to improve his hit rate. That might tell us interesting things about the value of a captured email account, if it’s worth spending manual effort trying to convince people to hand over their creds.

Alex VincentAn insightful statement from a mathematics course

I’m taking a Linear Algebra course this fall.  Last night, my instructor said something quite interesting:

“We are building a model of Euclidean geometry in our vector space. Then we can prove our axioms of geometry (as theorems).”

This would sound like technobabble to me even a week ago, but what he’s really saying is this:

“If you can implement one system’s basic rules or axioms in another system, you can build a model of that first system in the second.”

Programmers and website builders build models of systems all the time, and unconsciously, we build on top of other systems. Think about that when you write JavaScript code: the people who implement JavaScript engines are building a model that millions of people they’ll never meet will use. I suppose the same could be said of any modern programming language, compiler, transpiler or interpreter.
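
To make that concrete with a toy example of my own (nothing from the course, just an illustration): we can model complex numbers on top of JavaScript’s plain numbers, and then check that axioms of the modeled system hold as provable facts inside the model.

  // A toy model: complex numbers built on top of JavaScript's number system.
  function complex(re, im) {
    return { re: re, im: im };
  }

  function add(x, y) {
    return complex(x.re + y.re, x.im + y.im);
  }

  function multiply(x, y) {
    return complex(x.re * y.re - x.im * y.im,
                   x.re * y.im + x.im * y.re);
  }

  // An "axiom" of the modeled system, provable inside the model: i * i === -1.
  var i = complex(0, 1);
  var iSquared = multiply(i, i);
  console.log(iSquared.re === -1 && iSquared.im === 0); // true

  // Another: addition in the model is commutative, as the axioms demand.
  var z = complex(2, 3);
  var lhs = add(i, z), rhs = add(z, i);
  console.log(lhs.re === rhs.re && lhs.im === rhs.im); // true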

The beauty for those of us who work in the model is that we (theoretically) shouldn’t need to care what platform we run on. (In practice, there are differences, which is why we want platforms to implement standards, so we can concentrate on using the theoretical model we depend on.)

On the flip side, that also means that building and maintaining that fundamental system we build on top of has to be done very, very carefully.  If you’re building something for others to use (and chances are, when you’re writing software, you’re doing exactly that), you really have to think about how you want others to use your system, and how others might try to use your system in ways you don’t expect.

It’s really quite a profound duty that we take on when we craft software for others to use.

Chris AtLeeGotta Cache 'Em All

TOO MUCH TRAFFIC!!!!

Waaaaaaay back in February we identified overall network bandwidth as a cause of job failures on TBPL. We were pushing too much traffic over our VPN link between Mozilla's datacentre and AWS. Since then we've been working on a few approaches to cope with the increased traffic while at the same time reducing our overall network load. Most recently we've deployed HTTP caches inside each AWS region.

Network traffic from January to August 2014

The answer - cache all the things!

Obligatory XKCD

Caching build artifacts

The primary target for caching was downloads of build/test/symbol packages by test machines from file servers. These packages are generated by the build machines and uploaded to various file servers. The same packages are then downloaded many times by different machines running tests. This was a perfect candidate for caching, since the same files were being requested by many different hosts in a relatively short timespan.

Caching tooltool downloads

Tooltool is a simple system RelEng uses to distribute static assets to build/test machines. While the machines do maintain a local cache of files, the caches are often empty because the machines are newly created in AWS. Having the files in local HTTP caches speeds up transfer times and decreases network load.

Results so far - 50% decrease in bandwidth

Initial deployment was completed on August 8th (end of week 32 of 2014). You can see by the graph above that we've cut our bandwidth by about 50%!

What's next?

There are a few more low hanging fruit for caching. We have internal pypi repositories that could benefit from caches. There's a long tail of other miscellaneous downloads that could be cached as well.

There are other improvements we can make to reduce bandwidth as well, such as moving uploads from build machines to be outside the VPN tunnel, or perhaps to S3 directly. Additionally, a big source of network traffic is doing signing of various packages (gpg signatures, MAR files, etc.). We're looking at ways to do that more efficiently. I'd love to investigate more efficient ways of compressing or transferring build artifacts overall; there is a ton of duplication between the build and test packages between different platforms and even between different pushes.

I want to know MOAR!

Great! As always, all our work has been tracked in a bug, and worked out in the open. The bug for this project is 1017759. The source code lives in https://github.com/mozilla/build-proxxy/, and we have some basic documentation available on our wiki. If this kind of work excites you, we're hiring!

Big thanks to George Miroshnykov for his work on developing proxxy.

Byron Joneshappy bmo push day!

the following changes have been pushed to bugzilla.mozilla.org:

  • [1058274] The input field for suggested reviewers when editing a component needs ‘multiple’ to be true for allowing for more than one username
  • [1051655] mentor field updated/reset when a bug is updated as a result of a change on a different bug (eg. see also, duplicate)
  • [1058355] bugzilla.mozilla.org leaks emails to logged out users in “Latest Activity” search URLs

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla

Jess KleinRemix + Hack the Firefox Home page. No really, we want you to!

If you are a Firefox desktop user, you may have seen the Firefox default home page. This page contains a search box that uses your default engine, plus quick links to downloads, bookmarks, history, add-ons, sync and settings. Additionally, if you happen to have had tabs open the last time you used the browser, you can restore them from the home page. We often share important news and updates underneath the search bar.

Screen Shot 2014-08-15 at 10.18.04 AM.png

This is what I currently see at the Firefox default home page. Animated gifs FTW.


THE OPPORTUNITY
A few months back, Hive Labs (a new project within the Hive Learning Networks designed to explore the question “how do we use design to transform edupunk ethics into great products?”) was approached by the Mozilla Foundation Engagement team to brainstorm how the space could be used in an innovative way to educate Firefox users about the Maker Party. Maker Party is Mozilla's global campaign to teach the web, uniting educators, organizations and enthusiastic web users with hands-on learning and making. While I have to admit I have never really created something in the realm of owned media, I saw this as an interesting opportunity for Mozilla to show (vs. tell) what Maker Party is all about.


THE CHALLENGE


The team (which included creative individuals from many different projects across the Mozilla Foundation and the Corporation) immediately identified the opportunity space and came up with a few project requirements:
  • use the space in an interactive way to introduce the website visitor to web literacy skills
  • acknowledge that the visitor may not have ever seen code before, and understand that we do not know what web literacy skills they are coming to this space with
  • create something playful


THE SOLUTION


While we tossed around a few different ideas, the solution that we came up with was to create a Webmaker Goggles-like experience that lets the visitor see under the hood of the webpage.


Screen Shot 2014-08-15 at 10.35.04 AM.png


After doing some initial sketches, we realized that we needed to define our learning objectives for the project. Normally this is fairly easy to do - you say that the learner will come away with the ability to remix a paragraph written in HTML and understand what p tags are, or something very basic. Here, the challenge was two-fold: 1. the webpage visitor did not identify as a learner, and 2. as I mentioned before, they might have no knowledge of the fact that code is written in order to create a webpage. So, after several false starts, we came up with the goal of having the website visitor walk away understanding that if you look under the hood of a webpage, you will see it is made from code.


Initial sketches for the snippet included replacing the Firefox logo with an image


After the learning objective was defined, we had to interpret what that meant in terms of interaction design. I believe that the most effective way to empower a user is to put the tools in their hands to allow them to directly address and grapple with the thing that they might learn by tinkering with it themselves. We tried out a few different iterations on this. Above is a sketch where the visitor might get instructed to remix the page from a video. The idea was to have a person in the video describe what to do, and then the learner would use the goggles to swap out the video for an image or video of their choosing. This idea was fun, and had a lot of potential community localization opportunities. However, there was a risk that the user would just not click on the video, and miss out on all the fun.


Ultimately, we ended up utilising what Atul Varma calls “cruise control” — that’s where we model the behavior in order to encourage the site visitor to try it out themselves. It looks like someone is typing out all of the words on the screen. We decided to focus on revealing a little CSS, because you can use real words to represent colors, and seeing those colors immediately can have a visceral impact on the site visitor. Here is a screencast of the interaction:



** Update: You can see the actual interactive experience by going to the Firefox homepage or if you can't get to that, check it out here.  **

The crazy and kickass cast of characters who pulled this interactive off are:  Chris Lawrence, Atul Varma, Brian Brennan , Adam Lofting, Hannah Kane, Jean Collings, Mike Kelly, Chris More, Matt Thompson, Aki Rose Braun,  David Ascher, Geoffrey MacDougall, Brett Gaylor, John Slater, Eric Petitt, Mary Ellen Muckerman, Pete Scanlon and Andrea Wood.

We’re really excited about this project, as it represents one of the first interactive uses (if not THE first) of the space of the Firefox home page. We hope that as site visitors dip their toes into understanding the craft of the Web, they’ll be inspired to learn more through Webmaker and Maker Party.  Our ultimate goal is for people to feel empowered to become creators, not just consumers, of the Web.

Daniel StenbergCredits in the curl project

Friends!

When we receive patches, improvements, suggestions, advice and whatever else leads to a change in curl or libcurl, I make an effort to log the contributor’s name in association with that change. Ideally, I add a line in the commit message. We use “Reported-by: <full name>” quite frequently, but also other forms of “…-by: <full name>”, for example when there was an original patch by someone, or testing, and similar. It shouldn’t matter what the nature of the contribution is: if it helped us, it is a contribution and we say thanks!

curl-give-credits

I want all patch providers and all of us who have push rights to use this approach so that we give credit where credit is due. Giving credit is the only payment we can offer in this project and we should do it with generosity.

The green bars on the right show the results from the 2014 curl survey question about how good we are at giving credit in the project, where 5 is really good and 1 is really bad. Not too shabby, but I’d say we can do even better! (59% checked the top score, 15% checked 3.)

I have a script called contributors.sh that extracts all contributors since a tag (typically the previous release) and I use that to get a list of names to thank in the RELEASE-NOTES file for the pending curl release. Easy and convenient.

After every release (which means every 8th week) I then copy the list of names from RELEASE-NOTES into docs/THANKS. So all contributors get remembered and honored after having helped us in one way or another.

When there’s no name

When contributors don’t provide a real name but only a nickname like foobar123, user_5678 and so on, I tend to consider that a request to not have the person’s name included anywhere, and hence I tend to not include it in THANKS or RELEASE-NOTES. This is also sometimes the result of me not always wanting to bother people by asking over and over again for their real name in case they want to be given proper and detailed credit for what they’ve provided to us.

Unfortunately, a notable share of all the contributions we get to the project are provided by people “hiding” behind a made-up handle. I’m fine with that, as long as it truly is what the helpers actually want.

So please, if you help us out, we will happily credit you, but please tell us your name!

keep-calm-and-improve-curl

Mozilla Release Management TeamFirefox 32 beta8 to beta9

  • 42 changesets
  • 78 files changed
  • 1175 insertions
  • 782 deletions

Extension     Occurrences
cpp           26
js            20
h             7
html          5
py            4
jsm           2
ini           2
xul           1
xml           1
json          1
in            1
cc            1
build         1

Module        Occurrences
browser       12
layout        10
content       9
toolkit       7
js            6
dom           6
security      5
services      4
netwerk       3
testing       2
config        2
tools         1
modules       1
memory        1
image         1
gfx           1
extensions    1

List of changesets:

Mike HommeyBug 1050029 - Improve Makefile checks for deprecated or moz.build variables. r=mshal a=NPOTB - 2a617532286d
Mike ShalBug 1047621 - Move link.py to config for importing expandlibs_exec; r=gps a=NPOTB - a09c51fcbd98
Mike ShalBug 1047621 - Have link.py import and call expandlibs_exec.py; r=gps a=NPOTB - bd02db1d22d0
Tim TaubertBug 1054815 - Fix browser_tabview_bug712203.js to not connect to google.com. r=smacleod, a=test-only - 2309c50ccc6c
Ryan VanderMeulenNo Bug - Change min expected assertions for test_playback_rate.html to 3. a=test-only - 1815786bfc6d
Ryan VanderMeulenNo Bug - Widen the allowable number of asserts in test_bug437844.xul to 19-21 so we don't have to keep adjusting it everytime something randomly perturbs it. a=test-only - 3f100f099542
Martijn WargersBug 1024535 - Fix for failing video test on Windows 7. r=jwwang, a=test-only - d2714b6fc28d
David Rajchenbach-TellerBug 1024686 - Add missing return in Sqlite.jsm. r=mak, a=test-only - da78e23cbe3d
Martijn WargersBug 1051783 - Fix test_pointerlock-api.html. r=Enn, a=test-only - 90b5e0b87666
Terrence ColeBug 1055219. r=terrence, a=abillings - 7c7145e95cb5
Wes KocherBacked out changeset 90b5e0b87666 (Bug 1051783) for an added assert a=backout - ec5427a8e674
Steven MacLeodBug 1035557 - Migrate crash checkpoints with the session so that we don't appear to crash during reset. r=ttaubert, a=lmandel - 8d583074b918
Monica ChewBug 1055670: Disable remote lookups (r=gcp,a=lmandel) - b554afc480aa
C.J. KuBug 1055040 - Send mouse events base on canvas position and enable this test case on all B2G builds. r=ehsan, a=test-only - fadc34768c8b
Jared WeinBug 947574 - Switch browser_426329.js to use tasks to avoid intermittent failure. r=Gijs, a=test-only - 023ef0541072
Michael WuBug 1045977 - Clear heap allocated volatile buffers. r=njn, r=seth, a=sledru - bff13e7445c5
Michal NovotnyBug 1054425 - cache2: leak in CacheFileMetadata::WriteMetadata. r=jduell, a=sledru - 342c0c26e18d
Shane CaraveoBug 1047340 - Fix annotation of marks by using the browser url rather than cannonical url. r=jaws, a=lmandel - 54949d681a14
Aaron KlotzBug 1054813 - Add some missing MutexAutoLocks in nsZipReaderCache. r=mwu, a=lmandel - 50590d1557c4
Jim ChenBug 1013004 - Fix support for sampling intervals > 1 second. r=BenWa, a=lmandel - 61980c2f6177
Gregory SzorcBug 1055102 - Properly handle Unicode in Bagheera payloads. r=bsmedberg, a=lmandel - 4f18903bc230
Steve WorkmanBug 1054418 - Rewrite AppCacheUtils.jsm to use HTTP Cache v2 APIs. r=michal, a=sledru - fa7360fe9779
Michal NovotnyBug 1054819 - Ensure that the dictionary is released on the main thread. r=ehsan, a=sledru - c06efff91ed3
Honza BambasBug 1053517 - Enable the new HTTP cache during automation testing. r=jduell, a=test-only - f5d4b16203aa
Douglas CrosherBug 1013996 - irregexp: avoid unaligned accesses in ARM code. r=bhackett, a=lmandel - 093bfa0f1dee
Joel MaherBug 1056199 - Update talos on Fx32 to the latest revision. r=RyanVM, a=test-only - ec3e586813b5
Tim TaubertBug 1041527 - Ensure that about:home isn't the initial tab when opening new windows in tabview tests. r=ehsan, a=test-only - c340fefc0fe8
Marco BonardoBug 1002439 - browser_bug248970.js is almost perma fail when run by directory on osx opt. r=mano, a=test-only - 0b44c271f755
Ryan VanderMeulenBug 906752 - Disable test_audioBufferSourceNodeOffset.html on debug builds. a=test-only - d94be43c729c
Seth FowlerBug 1024454 - Part 1: Eagerly propagate dirty bits so absolute children of table parts get reflowed reliably. r=dbaron, a=lmandel - 8e6b808eed02
Bill McCloskeyBug 1053999 - Be more conservative in recursion checks before brain transplants. r=bholley, a=lmandel - ac551f43e2b4
Paul AdenotBug 1056032 - Make sure COM is initialized when trying to decode an mp3 using decodeAudioData. r=cpearce, a=lmandel - f17ade17a846
Paul AdenotBug 1056032 - Test that we can decode an mp3 using decodeAudioData. r=ehsan, a=lmandel - 53d300e03f5b
Markus StangeBack out Bug 1000875 in order to fix the regression tracked in Bug 1011166. a=backout - 11a5306111d0
Peter Van der BekenBug 1036186 - Reset Migration wizard no longer skips the first step to choose a browser. r=smaug, a=lmandel - ac8864d8ecc0
Camilo VieccoBug 1047177 - Treat v4 certs as v3 certs (1/2). r=keeler. a=lmandel - 6049537c2510
Camilo VieccoBug 1047177 - Treat v4 certs as v3 certs. Tests (2/2). r=keeler. a=lmandel - 74a58e14d1d3
Bill McCloskeyBug 1008107 - Allow SandboxPrivate to be null in sandbox_finalize. r=bz, a=lmandel - 85318a1536ee
Sami JaktholmBug 1055499 - StyleEditor: Properly wait for the toolbox to be destroyed before ending test run and causing windows to leak. r=harth, a=test-only - 8f49d60bf5c9
Honza BambasBug 1040086 - EV identifier missing when restoring session with HTTP cache v2. r=michal, a=lmandel - 33ea2d7e342e
Shane CaraveoBug 1056415 - Fix updating the marks buttons during tabchange. r=jaws, a=lmandel - 2f61f6e44a33
Shane CaraveoBug 1047316 - Fix docshell swapping bug by removing usage in marks (unecessary here). r=jaws, a=lmandel - 58eb677e55f3

David HumphreyIntroducing MakeDrive

I've been lax in my blogging for the past number of months (apologies). I've had my head down in a project that's required all of my attention. On Friday we reached a major milestone, and I gave a demo of the work on the weekly Webmaker call. Afterward David Ascher asked me to blog about it. I've wanted to do so for a while, so I put together a proper post with screencasts.

I've written previously about our idea of a web filesystem, and the initial work to make it possible. Since then we've greatly expanded the idea and implementation into MakeDrive, which I'll describe and show you now.

MakeDrive is a JavaScript library and server (node.js) that provides an offline-first, always available, syncing filesystem for the web. If you've used services like Dropbox or Google Drive, you already know what it does. MakeDrive allows users to work with files and folders locally, then sync that data to the cloud and other browsers or devices. However, unlike Dropbox or other similar services, MakeDrive is based purely on JavaScript and HTML5, and runs on the web. You don't install it; rather, a web application includes it as a script, and the filesystem gets created or loaded as part of the web page or app.

Because MakeDrive is a lower-level service, the best way to demonstrate it is by integrating it into a web app that relies on a full filesystem. To that end, I've made a series of short videos demonstrating aspects of MakeDrive integrated into a modified version of the Brackets code editor. I actually started this work because I want to make Brackets work in the browser, and one of the biggest pieces it is missing in the browser is a full-featured filesystem (side-note: Brackets can run in a browser just fine :). This post isn't specifically about Brackets, but I'll return to it in future posts to discuss how we plan to use it in Webmaker. MakeDrive started as a shim for Brackets-in-a-browser, but Simon Wex encouraged me to see that it could and should be a separate service, usable by many applications.

In the first video I demonstrate how MakeDrive provides a full "local," offline-first filesystem in the browser to a web app:

The code to provide a filesystem to the web page is as simple as var fs = MakeDrive.fs();. Applications can then use the same API as node.js' fs module. MakeDrive uses another of our projects, Filer, to provide the low-level filesystem API in the browser. Filer is a full POSIX filesystem (or wants to be, file bugs if you find them!), so you can read and write utf8 or binary data, work with files, directories, links, watches, and other fun things. Want to write a text file? It's done like so:

  var data = '<html>...';
  fs.writeFile('/path/to/index.html', data, function(err) {
    if(err) return handleError();
    // data is now written to disk
  });

The docs for Filer are lovingly maintained, and will show you the rest, so I won't repeat it here.

MakeDrive is offline-first, so you can read/write data, close your browser or reload the page, and it will still be there. Obviously, having access to your filesystem outside the current web page is also desirable. Our solution was to rework Filer so it could be used in both the browser and node.js, allowing us to mirror filesystems over the network using Web Sockets. We use a rolling-checksum and differential algorithm (i.e., only sending the bits of a file that have changed) inspired by rsync; Dropbox does the same.
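
To give a flavour of how that works, here is a generic sketch of an rsync-style weak rolling checksum (illustrative only, not MakeDrive's actual code, and the function names are made up):

  // A weak checksum over one block of bytes (a Uint8Array), rsync-style:
  // sums are kept modulo 2^16 so they stay small and can be "rolled" cheaply.
  var MOD = 0x10000;

  function blockChecksum(block) {
    var a = 0, b = 0;
    for (var i = 0; i < block.length; i++) {
      a = (a + block[i]) % MOD;
      b = (b + a) % MOD;
    }
    return { a: a, b: b, value: b * MOD + a };
  }

  // Slide the window one byte forward: drop outByte from the front and append
  // inByte, for a window of len bytes. This is what makes scanning every
  // offset of a large file affordable.
  function rollChecksum(sum, outByte, inByte, len) {
    var a = (sum.a - outByte + inByte + MOD) % MOD;
    var b = (sum.b - (len * outByte) % MOD + a + MOD) % MOD;
    return { a: a, b: b, value: b * MOD + a };
  }

  // e.g. blockChecksum(bytes.subarray(0, 512)) checksums the first block, and
  // rollChecksum(sum, bytes[0], bytes[512], 512) gives the sum for bytes 1..512.

Blocks whose weak checksum matches one the other side already has get confirmed with a stronger hash and skipped, so only the changed regions of a file cross the network.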

In this video I demonstrate syncing the browser filesystem to the server:

Applications and users work with the local browser filesystem (i.e., you read and write data locally, always), and syncing happens in the background. That means you can always work with your data locally, and MakeDrive tries to sync it to/from the server automatically. MakeDrive also makes a user's mirrored filesystem available remotely via a number of authenticated HTTP endpoints on the server (an example request follows this list):

  • GET /p/path/into/filesystem - serve the path from the filesystem provided like a regular web server would
  • GET /j/path/into/filesystem - serve the path as JSON (for APIs to consume)
  • GET /z/path/into/filesystem - export the path as export.zip (e.g., zip and send user data)
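
For example, a request for the JSON view of a synced path could look like the sketch below (the host and path are made up for illustration, and authentication is elided since these endpoints require the user's session):

  // Ask a (hypothetical) MakeDrive server for the JSON view of a synced file.
  var xhr = new XMLHttpRequest();
  xhr.open('GET', 'https://makedrive.example.org/j/projects/index.html');
  xhr.responseType = 'json';
  xhr.withCredentials = true; // include the user's session cookie on a cross-origin request
  xhr.onload = function() {
    if (xhr.status === 200) {
      console.log('File served as JSON:', xhr.response);
    } else {
      console.error('Request failed with status', xhr.status);
    }
  };
  xhr.onerror = function() {
    console.error('Network error');
  };
  xhr.send();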

This means that a user can work on files in one app, sync them, and then consume them in another app that requires URLs. For example: edit a web component in one app and include and use it in another. When I started web development in the 1990s, you worked on files locally, FTP'ed them to a server, then loaded them via your web server and browser. Today we use services like gh-pages and github.io. Both require manual steps. MakeDrive automates the same sort of process, and targets new developers and those learning web development, making it a seamless experience to work on web content: your files are always "on the web."

MakeDrive supports multiple, simultaneous connections for a user. I might have a laptop, desktop, and tablet all sharing the same filesystem via a web app. This app can be running in any HTML5 compatible browser, app, or device. In this video I demonstrate syncing changes between different HTML5 browsers (Chrome, Firefox, and Opera):

Like Dropbox, each client will have its own "local" version of the filesystem, with one authoritative copy on the server. The server manages syncing to/from this filesystem so that multiple clients don't try to sync different changes to the same data at once. After one client syncs new changes, the server informs other clients that they can sync as well, which eventually propagates the changes across all connected clients. Changes can include updates to a file's data blocks, but also any change to the filesystem nodes themselves: renames, deleting a file, making a new directory, etc.

The code to make this syncing happen is very simple. As long as there is network, a MakeDrive filesystem can be connected to the server and synced. This can be a one-time thing, or the connection can be left open and incremental syncs can take place over the lifetime of the app: offline first, always syncing, always available.

Because MakeDrive allows the same user to connect multiple apps/devices at once, we have to be careful not to corrupt data or accidentally overwrite data when syncing. MakeDrive implements something similar to Dropbox's Conflicted Copy mechanism: if two clients change the same data in different ways, MakeDrive syncs the server's authoritative version, but also creates a new file with the local changes, and lets the user decide how to proceed.

This video demonstrates the circumstances by which a conflicted copy would be created, and how to deal with it:

Internally, MakeDrive uses extended attributes on filesystem nodes to determine automatically what has and hasn't been synced, and what is in a conflicted state. Conflicted copies are not synced back to the server, but remain in the local filesystem. The user decides how to resolve conflicts by deleting or renaming the conflicted file (i.e., renaming clears the conflict attribute).

MakeDrive works today, but isn't ready for production quite yet. On Friday we reached the end of our summer work, where we tried hard to follow our initial mockups. If you have a web-first filesystem, you can do some interesting things that might not make sense in a traditional filesystem (i.e., when the scope of your files is limited to web content):

  • Having a filesystem in a web page naturally got me wanting to host web pages from web pages. I wrote nohost to experiment with this idea, an in-browser httpd that uses Blob URLs. It's really easy to load DOM elements from a web filesystem:

    var img = document.createElement('img');
    fs.readFile('/path/into/filesystem/image.png', function(err, data) {
      if(err) return handleError();

      // Create a Blob and wrap it in an object URL.
      var blob = new Blob([data], {type: 'image/png'});
      var url = URL.createObjectURL(blob);
      img.src = url;
    });

  • Using this technique, we could create a small bootloader and store entire web apps in the filesystem. For example, all of Brackets loading from disk, with a tiny bootloader web page in appcache to get to the filesystem. This idea has been discussed elsewhere, and adding the filesystem makes it much more natural.
  • The current work on the W3C stream spec is really exciting, since we need a way to implement streaming data in and out of a filesystem, and therefore IndexedDB.
  • Having the ability to move IndexedDB to worker threads for background syncs (bug 701634), and into third-party iframes with postMessage to share a single filesystem instance across origins (bug 912202), would be amazing.
  • Mobile! Being able to sync filesystems in and out of mobile web apps is really exciting. We're going to help get MakeDrive working in Mobile Appmaker this fall.

    If any of this interests you, please get in touch (@humphd) and help us. The next 6 months should be a lot of fun. I'll try to blog again before that, though ;)

    Daniel StenbergMy home setup

    I work in my home office which is upstairs in my house, perhaps 20 steps from my kitchen and the coffee refill. I have a largish desk with room for a number of computers. The photo below shows the three meter beauty. My two kids have their two machines on the left side while I use the right side of it for my desktop and laptop.

    Daniel's home office

    Many computers

    The kids use my old desktop computer with a 20″ Dell screen and my old 15.6″ dual-core Asus laptop. My wife has her laptop downstairs and we have a permanent computer installed underneath the TV for media (an Asus VivoPC).

    My desktop computer

    I’m primarily developing C and C++ code and I’m frequently compiling rather large projects – repeatedly. I use a desktop machine for my ordinary development, equipped with a fairly powerful 3.5GHz quad-core Core i7 CPU, and I have my OS, my home dir and all source code on an SSD. I have a larger HDD for larger and slower content. With ccache and friends, this baby can build Firefox really fast. I put my machine together from parts myself, as I couldn’t find a suitable one with a focus on horsepower but with a “normal” 2D graphics card that works fine with Linux. I use a Radeon HD 5450 based ASUS card, which works fine with fully open source drivers.

    I have two basic 24 inch LCD monitors (Benq and Dell) both using 1920×1200 resolution. I like having lots of windows up, nothing runs full-screen. I use KDE as desktop and I edit everything in Emacs. Firefox is my primary browser. I don’t shut down this machine, it runs a few simple servers for private purposes.

    My machines (and my kids’) all run Debian Linux, typically of the unstable flavor allowing me to get new code reasonably fast.

    My desktop keyboard is a Func KB-460, a mechanical keyboard with some funky extra candy such as red backlight and two USB ports. Both my keyboard and my mouse are wired, not wireless, to take away the need for batteries or recharging etc in this environment. My mouse is a basic and old Logitech MX 310.

    I have a crufty old USB headset with a mic that works fine for hangouts and listening to music when the rest of the family is home. I have a Logitech webcam thing sitting on the screen too, but I hardly ever use it for anything.

    When on the move

    I sometimes need to move around and work from other places, going to conferences or even our regular Mozilla work weeks. Hence I also have a laptop that is powerful enough to build Firefox in a sane amount of time: a Lenovo Thinkpad W540 with a 2.7GHz quad-core Core i7, 16GB of RAM and 512GB of SSD. It has the most annoying touch pad on it. I don’t like that it doesn’t have explicit buttons, so for example both-clicking (to simulate a middle-click), like when pasting text in X11, is virtually impossible.

    On this machine I also run a VM with win7 installed and associated development environment so I can build and debug Firefox for Windows on it.

    I have a second portable: a small and lightweight 10.1″ netbook, an Eeepc S101, that I’ve been using when I just go and do presentations at places. Recently, though, I’ve started to simply use my primary laptop even for those occasions – primarily because the netbook is too slow to do anything else on.

    I do video conferences a couple of times a week and we use Vidyo for that. Its Linux client is shaky to say the least, so I tend to use my Nexus 7 tablet for it, since the Vidyo app at least works decently on that. It also allows me to quite easily change location when necessary, which is sometimes the case since my meetings tend to occur in the evenings and there are also varying amounts of “family activities” going on!

    Backup

    For backup, I have a Synology DS211j NAS equipped with 2TB of disk in a RAID, stashed downstairs on the wired in-house gigabit ethernet. I run an rsync job every night that syncs the important stuff to the NAS, and I run a second rsync that also mirrors relevant data over to a friend’s house, just in case something terribly bad goes down. My NAS backup has already saved me at least once.

    Printer

    Next to the NAS downstairs is the house printer, an HP Officejet 8500A, also attached to the gigabit network even though it has a wifi interface of its own. I just like the increased reliability of having the “fixed services” in the house on the wired network.

    The printer also has scanning capability which actually has come handy several times. The thing works nicely from my Linux machines as well as my wife’s windows laptop.

    Internet

    I have fiber going directly into my house. It is still “just” a 100/100 connection at the other end of the fiber, since at the time I installed this they didn’t yet have equipment to deliver beyond 100 megabit in my area. I’m sure I’ll upgrade this to something more impressive in the future, but this is a pretty snappy connection already. I also have just a few milliseconds of latency to my primary servers.

    Having the fast uplink is perfect for doing good remote backups.

    Router and wifi

    I have a lowly D-Link DIR 635 router and wifi access point providing wifi on the 2.4GHz and 5GHz bands and gigabit speed on the wired side. It was dead cheap and it just works. It NATs my traffic and port-forwards some ports through to my desktop machine.

    The router itself can also update the dyndns info which ultimately allows me to use a fixed name to my home machine even without a fixed ip.

    Frequent Wifi users in the household include my wife’s laptop, the TV computer and all our phones and tablets.

    Telephony

    When I installed the fiber I gave up the copper connection to my home, and since then I use IP telephony for the “land line”: basically a little box (a Ping Communication Voice Catcher 201E) that translates IP to old phone tech, so I keep using my old DECT phone. Basically only our parents still call this number, and it has been useful to have the kids use it for outgoing calls up until they’ve gotten their own mobile phones.

    It doesn’t cost very much, but the usage is dropping over time so I guess we’ll just give it up one of these days.

    Mobile phones and tablets

    I have a Nexus 5 as my daily phone. I also have a Nexus 7 and Nexus 10 that tend to be used by the kids mostly.

    I have two Firefox OS devices for development/work.

    Kaustav Das ModakDear smartphone user, it is time to unlearn

    Dear smartphone user, You have been used to sophisticated features and cluttered interfaces for a long time. Remember those days when you had used a smartphone for the first time? Do you recollect that extra cognitive overload you had to face to figure out what each gesture does? Why were there so many round and […]

    Zack WeinbergThe literary merit of right-wing SF

    The results are in for the 2014 Hugo Awards. I’m pleased with the results in the fiction categories—a little sad that “The Waiting Stars” didn’t win its category, but it is the sort of thing that would not be to everyone’s taste.

    Now that it’s all over, people are chewing over the politics of this year’s shortlist, particularly the infamous “sad puppy” slate, over on John Scalzi’s blog, and this was going to be a comment there, but I don’t seem to be able to post comments there, so y’all get the expanded version here instead. I’m responding particularly to this sentiment, which I believe accurately characterizes the motivation behind Larry Correia’s original posting of his slate, and the motivations of those who might have voted for it:

    I too am someone who likes, and dislikes, works from both groups of authors. However, only one group ever gets awards. The issue is not that you cannot like both groups, but that good works from the PC crowd get rewarded and while those from authors that have been labeled “unacceptable” are shunned, and that this happens so regularly, and with such predictability that it is obviously not just quality being rewarded.

    ― “BrowncoatJeff

    I cannot speak to the track record, not having followed genre awards closely in the past. But as to this year’s Hugo shortlist, it is my considered opinion that all the works I voted below No Award (except The Wheel of Time, whose position on my ballot expresses an objection to the eligibility rules) suffer from concrete, objective flaws on the level of basic storytelling craft, severe enough that they did not deserve a nomination. This happens to include Correia’s own novels, and all the other works of fiction from his slate that made the shortlist. Below the fold, I shall elaborate.

    (If you’re not on board with the premise that there is such a thing as objective (observer-independent) quality in a work of art, and that observers can evaluate that independently from whether a work suits their own taste or agrees with their own politics, you should probably stop reading now. Note that this is not the same as saying that I think all Hugo voters should vote according to a work’s objective quality. I am perfectly fine with, for instance, the people who voted “Opera Vita Aeterna” below No Award without even cracking it open—those people are saying “Vox Day is such a despicable person that no matter what his literary skills are, he should not receive an award for them” and that is a legitimate critical stance. It is simply not the critical stance I am taking right now.)

    Let me first show you the basic principles of storytelling craft that I found lacking. I did not invent them; similar sentiments can be found in, for instance, “Fenimore Cooper’s Literary Offenses,” the Turkey City Lexicon, Ursula LeGuin’s Steering the Craft, Robert Schroeck’s A Fanfic Writer’s Guide To Writing, and Aristotle’s Poetics. This formulation, however, is my own.

    1. Above all, a story must not be boring. The reader should care, both about “what happens to these people,” and about the ultimate resolution to the plot.
    2. Stories should not confuse their readers, and should enable readers to anticipate—but not perfectly predict—the consequences of each event.
    3. The description, speech, and actions of each character in a story should draw a clear, consistent picture of that character’s personality and motivations, sufficient for the reader to anticipate their behavior in response to the plot.
    4. Much like music, stories should exhibit dynamic range in their pacing, dramatic tension, emotional color, and so forth; not for nothing is “monotony” a synonym for “tedium.”
    5. Style, language, and diction should be consistent with the tone and content of the story.
    6. Rules 2–5 can be broken in the name of Art, but doing so demands additional effort and trust from the reader, who should, by the end of the story, believe that it was worth it.

    With that in hand, I shall now re-review the works that didn’t deserve (IMNSHO) to make the shortlist, in order from most to least execrable.

    Opera Vita Aeterna

    This is textbook bad writing. The most obvious problem is the padded, purple, monotonously purple prose, which obviously fails point 4, and less obviously fails point 5 because the content isn’t sufficiently sophisticated to warrant the style. The superficial flaws of writing are so severe that it’s hard to see past them, but if you do, you discover that it fails all the other points as well, simply because there wasn’t enough room, underneath all of those purple words, for an actual plot. It’s as if you tried to build a building entirely out of elaborate surface decorations, without first putting up any sort of structural skeleton.

    The Butcher of Khardov and Meathouse Man

    These are both character studies, which is a difficult mode: if you’re going to spend all of your time exploring one character’s personality, you’d better make that one character interesting, and ideally also fun to be around. In these cases, the authors were trying for tragically flawed antiheroes and overdid the anti-, producing characters who are nothing but flaw. Their failures are predictable; their manpain, tedious; their ultimate fates, banal. It does not help that they are, in many ways, the same extruded antihero product that Hollywood and the comic books have been foisting on us for going on two decades now, just taken up to 11.

    Khardov also fails on point 2, being told out of order for no apparent reason, causing the ending to make no sense. Specifically, I have no idea whether the wild-man-in-the-forest scenes are supposed to occur before or after the climactic confrontation with the queen, and the resolution is completely different depending on which way you read it.

    Meathouse Man was not on Correia’s slate. It’s a graphic novel adaptation of a story written in the 1970s, and it makes a nice example of point 6. When it was originally written, a story with a completely unlikable protagonist, who takes exactly the wrong lessons from the school of hard knocks and thus develops from a moderate loser into a complete asshole, would perhaps have been … not a breath of fresh air, but a cold glass of water in the face, perhaps. Now, however, it is nothing we haven’t seen done ten billion times, and we are no longer entertained.

    The Chaplain’s Legacy and The Exchange Officers

    These are told competently, with appropriate use of language, credible series of events, and so on. The plots, however, are formula, the characters are flat, the ideas are not original, and two months after I read them, I’m hard pressed to remember enough about them to criticize!

    I may be being more harsh on Torgerson than the median voter, because I have read Enemy Mine and so I recognize The Chaplain’s Legacy as a retread. (DOES NO ONE READ THE CLASSICS?!) Similarly, The Exchange Officers is prefigured by hundreds of works featuring the Space Marines. I don’t recall seeing remotely piloted mecha before, but mecha themselves are cliché, and the “remotely piloted” part sucks most of the suspense out of the battle scenes, which is probably why it hasn’t been done.

    The Grimnoir Chronicles

    Correia’s own work, this falls just short of good, but in a way that is more disappointing than if it had been dull and clichéd. Correia clearly knows how to write a story that satisfies all of the basic storytelling principles I listed. He is never dull. He comes up with interesting plots and gets the reader invested in their outcome. He’s good at set pieces; I can still clearly envision the giant monster terrorizing Washington DC. He manages dramatic tension effectively, and has an appropriate balance between gripping suspense and calm quiet moments. And he is capable of writing three-dimensional, nuanced, plausibly motivated, sympathetic characters.

    It’s just that the only such character in these novels is the principal villain.

    This is not to say that all of the other characters are flat or uninteresting; Sullivan, Faye, and Francis are all credible, and most of the other characters have their moments. Still, it’s the Chairman, and only the Chairman, who is developed to the point where the reader feels fully able to appreciate his motivations and choices. I do not say sympathize; the man is the leader of Imperial Japan circa 1937, and Correia does not paper over the atrocities of that period—but he does provide more justification for them than anyone had in real life. There really is a cosmic horror incoming, and the Chairman really does think this is the only way to stop it. And that makes for the best sort of villain, provided you give the heroes the same depth of characterization. Instead, as I said last time, the other characters are all by habit unpleasant, petty, self-absorbed, and incapable of empathizing with people who don’t share their circumstances. One winds up hoping for enough reverses to take them down a peg. (Which does not happen.)

    Conclusion

    Looking back, does any of that have anything to do with any of the authors’ political stances, either in the real world, or as expressed in their fiction? Not directly, but I do see a common thread which can be interpreted to shed some light on why “works from the PC crowd” may appear to be winning a disproportionate number of awards, if you are the sort of person who uses the term “PC” unironically. It’s most obvious in the Correia, being the principal flaw in that work, but it’s present in all the above.

    See, I don’t think Correia realized he’d written all of his Good Guys as unpleasant, petty, and self-absorbed. I think he unconsciously assumed they didn’t need the same depth of character as the villain did, because of course the audience is on the side of the Good Guys, and you can tell who the Good Guys are from their costumes (figuratively speaking). It didn’t register on him, for instance, that a captain of industry who’s personally unaffected by the Great Depression is maybe going to come off as greedy, not to mention oblivious, for disliking Franklin Delano Roosevelt and his policies, even if the specific policy FDR was espousing on stage was a genuinely bad idea because of its plot consequences. In fact, that particular subplot felt like the author had his thumb on the scale to make FDR look bad—but the exact same subplot could have been run without giving any such impression, if the characterization had been more thorough. So if you care about characterization, you’re not likely to care for Correia’s work or anything like it. Certainly not enough to shortlist it for an award honoring the very best the genre has to offer.

    Now, from out here on my perch safely beyond the Overton window, “politically correct,” to the extent it isn’t a vacuous pejorative, means “something which jars the speaker out of his daydream of the lily-white suburban 1950s of America (possibly translated to outer space), where everything was pleasant.” (And I do mean his.) Thing is, that suburban daydream is, still, 60 years later, in many ways the default setting for fiction written originally in English. Thanks to a reasonably well-understood bug in human cognition, it takes more effort to write fiction which avoids that default. It requires constant attention to ensure that presuppositions and details from that default are not slipping back in. And most of that extra effort goes into—characterization. It takes only a couple sentences to state that your story is set in the distant future Imperium of Man, in which women and men alike may serve in any position in the military and are considered completely equal; it takes constant vigilance over the course of the entire novel to make sure that you don’t have the men in the Imperial Marines taking extra risks to protect from enemy fire those of their fellow grunts who happen to be women. Here’s another, longer example illustrating how much work can be involved.

    Therefore, it seems to me that the particular type of bad characterization I disliked in the above works—writing characters who, for concrete in-universe reasons, are unlikable people, and then expecting the audience to cheer them on anyway because they’ve been dressed up in These Are The Heroes costumes—is less likely to occur in writing that would get labeled “works from the PC crowd.” The authors of such works are already putting extra effort into the characterization, and are therefore less likely to neglect to write heroes who are, on the whole, likable people whom the reader wishes to see succeed.

    Arun K. RanganathanFAQtechism

    What is this?

    Questions and answers, because my friends and I have been doing a lot of asking and answering, in unequal measure, with more asking than answering. Because I’ve been distraught by the incessant stream of reductionist observations about Mozilla, each one like being punched in the heart with the hard fists of righteousness and conviction. Because questions and answers once brought me peace, when I was much younger.

    Who are you?

    A man with no titles. Formerly, one of the first technology evangelists for Mozilla, when it was still a Netscape project. A Mozillian.

    Who is Brendan Eich?

    A man with titles. An inventor. A unifier. A divider. A Mozillian. A friend.

    What has Mozilla done?

    From humble and unlikely beginnings, Mozilla entered a battle seemingly already decided against it, and gradually unseated the entrenched incumbent, user by user by user, through campaigns that were traditional and innovative, and increased consciousness about the open web. It became a beloved brand, standing firmly for open source and the open web, championing the Internet, sometimes advocating politically for these convictions. It relied, and continues to rely, on a community of contributors from all over the world.

    What has Brendan done?

    Many things intrinsic to the open web; he helped shape technologies used by countless users, including those used to write and read this very post. Also, a hurtful and divisive thing based on a conviction now at odds with the law of the land, and at odds with my own conviction: in 2008, he donated $1000 to California Proposition 8, which put on a statewide ballot a proposition to define marriage as strictly between a man and a woman in the state, thus eliminating gay marriage and calling into question pre-existing gay marriages. The amount donated was enough to oblige him to list his employer — Mozilla — for legal reasons.

    What are my convictions?

    That any two people in love should be able to marry, regardless of their genders; that the marriage of two such people affords all legal protections intrinsic to the institution of marriage, including immigration considerations, estate planning considerations, and visitation rights. That this is in fact a civil right. That matters of civil rights should not be put before a population to vote on as a statewide proposition; in short, that exceptions to the Equal Protection Clause cannot be decided by any majority, since it is there to protect minorities from majorities (cf. Justice Moreno).

    How do such convictions become law?

    Often, by fiat. Sometimes, even when the battle is already seemingly decided (with the entrenched weight of history behind it, an incumbent), one state at a time. State by State by State (by States), using campaigns that are traditional and innovative, to increase consciousness about this as a civil right.

    How should people with different convictions disagree?

    Bitterly, holding fast to conviction, so that two individuals quarrel ceaselessly till one yields to the other, or till one retreats from the other, unable to engage any longer.

    For real?

    Amicably, by setting aside those convictions that are unnecessary to the pursuit of common convictions I share with other Mozillians, like the open web. Brendan embodied the Mozilla project; he would have made a promising CEO. My conviction can be governed by reason, and set aside, especially since the issue is decided by courts, of both law and public opinion. His view, only guessable by me, seems antediluvian. Times have changed. I can ask myself to be governed by reason. We need never touch this question.

    But I can do this because my conviction about the law, stated before, has never been tested personally by the specter of suicide or the malevolence of bullying; marriage equality is the ultimate recognition, destigmatizing lifestyles, perhaps helping with suicide and bullying. And, my inability to marry has never disrupted my life or my business. I cannot ask others to lay aside convictions, without recognizing the sources of pain, and calling them out. (Here, Brendan made commitments, and Mozilla did too).

    What will the future hold?

    Brendan has said his non serviam but calls out a mission which I think is the right one: privacy, also a civil right, especially privacy from governments; continued user advocacy; data liberation; a check on walled gardens (and an end to digital sharecropping); the web as mobile platform, even though it is under threat in the mobile arena, the battle seemingly decided, the entrenched incumbent slightly less obvious. This latter — mobile — is reminiscent of the desktop world in 1998. It’s the same story, with smaller machines. Perhaps the same story will have to be told again. I’d like Mozilla to be a major player in that story, just as it always has been a major player on the web. And I’ll be looking forward to seeing what Brendan does next. I’ll miss him as part of Mozilla. This has been crushing.

    Coda: what have wise ones said?

    “I don’t know why we’re talking about tolerance to begin with. We should be at acceptance and love. What’s this tolerance business? What are you tolerating, backpain? ‘I’ve been tolerating backpain, and the gay guy at work?’” — Hari Kondabalu (watch him on Letterman). And blog posts: Mozilla is not Chick-Fil-A; Thinking about Mozilla; The Hounding of a Heretic (Andrew Sullivan); a few others, discussing what a CEO should do, and what qualities a CEO should possess, which are out there for you to discover.

    Will Kahn-GreeneDennis v0.5 released! New lint rules, new template linter, bunch of fixes, and now a service!

    What is it?

    Dennis is a Python command line utility (and library) for working with localization. It includes:

    • a linter for finding problems in strings in .po files, like invalid Python variable syntax, which leads to exceptions
    • a template linter for finding problems in strings in .pot files that make translators' lives difficult
    • a statuser for seeing the high-level translation/error status of your .po files
    • a translator for strings in your .po files to make development easier

    v0.5 released!

    Since the last release announcement, there have been a handful of new lint rules added:

    • W301: Translation consists of just white space
    • W302: The translation is the same as the original string
    • W303: There are discrepancies in the HTML between the original string and the translated string

    Additionally, there's a new template linter for your .pot files which can catch things like:

    • W500: Strings with variable names like o, O, 0, l, 1, which can be hard to read and are often replaced with a similar-looking letter by the translator.
    • W501: One-character variable names which don't give translators enough context about what's being translated.
    • W502: Multiple unnamed variables which can't be reordered because the order the variables are expanded is specified outside of the string.

    Dennis in action

    Want to see Dennis in action, but don't want to install Dennis? I threw it up as a service, though it's configured for SUMO: http://dennis-sumo.paas.allizom.org/

    Note

    I may change the URL and I might create a SUMO-agnostic version. If you're interested, let me know.

    Where to go for more

    For more specifics on this release, see here: http://dennis.readthedocs.org/en/v0.4/changelog.html#version-0-4-may-1st-2014

    Documentation and quickstart here: http://dennis.readthedocs.org/en/v0.4/

    Source code and issue tracker here: https://github.com/willkg/dennis

    Source code and issue tracker for Denise (Dennis-as-a-service): https://github.com/willkg/denise

    3 out of 5 summer interns use Dennis to improve their posture while pranking their mentors.

    Marco ZeheMaintenance complete

    A day later than originally announced, I undertook the much-needed maintenance. The site should be much faster now that I’ve moved it to a more performant web host. I also consolidated all my blogs into a multi-site WordPress installation, which should make it much easier for me in the future to create little blogs for side projects, so I don’t have to use 3rd party services. You know, for the class and such. ;)

    I also use a more modern theme now, using the excellent accessible and responsive Simone theme. This should make it much more reader-friendly. And it, of course, works great with screen readers, too!

    So, enjoy! And I will have more news to share about Mozilla and web accessibility related stuff as always!

    Andy McKayPrivate School

    I've been a bit out of touch recently with holidays, so I'm catching up on the BC teachers situation and what looks like an attempt by the BC Government to destroy public education.

    This week the Minister launched a website giving "some of the options available to you". So what are my options? No public school system? Let's try private school. Here's a preliminary search.

    My daughters are aged 8 and 10 and enjoy an excellent education in the public school system in French Immersion in North Vancouver, despite the Government. I also consider the school an excellent part of the local community.

    Any schooling would ideally be in French and must definitely be non-religious in nature. In North and West Vancouver there are the following private schools and costs to us:

    • Lions Gate Christian Academy: "Moral & Spiritual Development from a Christian Perspective". Cost: $8,720. Distance: 3.8km. French Immersion: No.
    • Brockton School: "a rigorous academic education is balanced by arts and athletics in an environment where merit rather than materialism is the core value". Cost: $29,700. Distance: 10.8km. French Immersion: No.
    • Collingwood School: "Preparing people to thrive in meaningful lives". Cost: Not stated. Distance: 19.2km. French Immersion: No.
    • Mulgrave School: "a caring and supportive school community with high expectations and high levels of achievement". Cost: $35,940. Distance: 20.3km. French Immersion: No.
    • Ecole Francaise Internationale de Vancouver: "where critical thought processes and inter-cultural communication are the determining factors". Cost: $28,500. Distance: 10.4km. French Immersion: Yes.
    • The Vancouver Waldorf School: "educating hearts and minds". Cost: $28,240. Distance: 9.3km. French Immersion: No.

    The highly questionable (if not laughable) Fraser Institute ranking covers only a couple of these schools: Sherwood Park is just below the average, and the West Vancouver schools Mulgrave and Collingwood are well above it.

    Note that although I searched for schools on the North Shore, none of these are "local" and we would suffer a disconnect from our local community. Only one provides French Immersion. Lions Gate Christian Academy is definitely not going to happen.

    Supposing I can get my children into one of these schools, it would drain my family's resources by somewhere between $28k and $36k at the minimum. The median total income before tax in BC is $71k (source); after tax of roughly 40%, let's say $43k remains. One of those private schools would consume 65% to 83% of that average after-tax income.

    As an extra kicker, since my wife is a teacher in the public school system, we have less money this year.

    Do you have some realistic options for my family?

    Ray KiddyTo encourage civic participation and voting (in US)

    https://petitions.whitehouse.gov/petition/create-national-holidays-voting-consolidating-other-holidays-honor-civic-engagement/wx7xMFCR

    Please consider whether this suggestion makes sense.

    Matt BrubeckLet's build a browser engine! Part 4: Style

    Welcome back to my series on building your own toy browser engine. If you’re just tuning in, you can find the previous episodes here:

    This article will cover what the CSS standard calls assigning property values, or what I call the style module. This module takes DOM nodes and CSS rules as input, and matches them up to determine the value of each CSS property for any given node.

    This part doesn’t contain a lot of code, since I’ve left out all the really complicated parts. However, I think what’s left is still quite interesting, and I’ll also explain how some of the missing pieces can be implemented.

    The Style Tree

    The output of robinson’s style module is something I call the style tree. Each node in this tree includes a pointer to a DOM node, plus its CSS property values:

    /// Map from CSS property names to values.
    type PropertyMap = HashMap<String, Value>;
    
    /// A node with associated style data.
    struct StyledNode<'a> {
        node: &'a Node, // pointer to a DOM node
        specified_values: PropertyMap,
        children: Vec<StyledNode<'a>>,
    }
    

    What’s with all the 'a stuff? Those are lifetimes, part of how Rust guarantees that pointers are memory-safe without requiring garbage collection. If you’re not working in Rust you can ignore them; they aren’t critical to the code’s meaning.

    We could add new fields to the dom::Node struct instead of creating a new tree, but I wanted to keep style code out of the earlier “lessons.” This also gives me an opportunity to talk about the parallel trees that inhabit most rendering engines.

    A browser engine module often takes one tree as input, and produces a different but related tree as output. For example, Gecko’s layout code takes a DOM tree and produces a frame tree, which is then used to build a view tree. Blink and WebKit transform the DOM tree into a render tree. Later stages in all these engines produce still more trees, including layer trees and widget trees.

    The pipeline for our toy browser engine will look something like this, after we complete a few more stages:

    In my implementation, each node in the DOM tree has exactly one node in the style tree. But in a more complicated pipeline stage, several input nodes could collapse into a single output node. Or one input node might expand into several output nodes, or be skipped completely. For example, the style tree could exclude elements whose display property is set to 'none'. (Instead this will happen in the layout stage, because my code turned out a bit simpler that way.)

    Selector Matching

    The first step in building the style tree is selector matching. This will be very easy, since my CSS parser supports only simple selectors. You can tell whether a simple selector matches an element just by looking at the element itself. Matching compound selectors would require traversing the DOM tree to look at the element’s siblings, parents, etc.

    fn matches(elem: &ElementData, selector: &Selector) -> bool {
        match *selector {
            Simple(ref simple_selector) => matches_simple_selector(elem, simple_selector)
        }
    }
    

    To help, we’ll add some convenient ID and class accessors to our DOM element type. The class attribute can contain multiple class names separated by spaces, which we return in a hash table.

    impl ElementData {
        fn get_attribute(&self, key: &str) -> Option<&String> {
            self.attributes.find_equiv(&key)
        }
    
        fn id(&self) -> Option<&String> {
            self.get_attribute("id")
        }
    
        fn classes(&self) -> HashSet<&str> {
            match self.get_attribute("class") {
                Some(classlist) => classlist.as_slice().split(' ').collect(),
                None => HashSet::new()
            }
        }
    }
    

    To test whether a simple selector matches an element, just look at each selector component, and return false if the element doesn’t have a matching class, ID, or tag name.

    fn matches_simple_selector(elem: &ElementData, selector: &SimpleSelector) -> bool {
        // Check type selector
        if selector.tag_name.iter().any(|name| elem.tag_name != *name) {
            return false;
        }
    
        // Check ID selector
        if selector.id.iter().any(|id| elem.id() != Some(id)) {
            return false;
        }
    
        // Check class selectors
        let elem_classes = elem.classes();
        if selector.class.iter().any(|class| !elem_classes.contains(&class.as_slice())) {
            return false;
        }
    
        // We didn't find any non-matching selector components.
        return true;
    }
    

    Rust note: This function uses the any method, which returns true if an iterator contains an element that passes the provided test. This is the same as the any function in Python (or Haskell), or the some method in JavaScript.

    Building the Style Tree

    Next we need to traverse the DOM tree. For each element in the tree, we will search the stylesheet for matching rules.

    When comparing two rules that match the same element, we need to use the highest-specificity selector from each match. Because our CSS parser stores the selectors from most- to least-specific, we can stop as soon as we find a matching one, and return its specificity along with a pointer to the rule.

    type MatchedRule<'a> = (Specificity, &'a Rule);
    
    /// If `rule` matches `elem`, return a `MatchedRule`. Otherwise return `None`.
    fn match_rule<'a>(elem: &ElementData, rule: &'a Rule) -> Option<MatchedRule<'a>> {
        // Find the first (highest-specificity) matching selector.
        rule.selectors.iter().find(|selector| matches(elem, *selector))
            .map(|selector| (selector.specificity(), rule))
    }
    

    To find all the rules that match an element we call filter_map, which does a linear scan through the style sheet, checking every rule and throwing out ones that don’t match. A real browser engine would speed this up by storing the rules in multiple hash tables based on tag name, id, class, etc.

    /// Find all CSS rules that match the given element.
    fn matching_rules<'a>(elem: &ElementData, stylesheet: &'a Stylesheet) -> Vec<MatchedRule<'a>> {
        stylesheet.rules.iter().filter_map(|rule| match_rule(elem, rule)).collect()
    }
    

    Once we have the matching rules, we can find the specified values for the element. We insert each rule’s property values into a HashMap. We sort the matches by specificity, so the more-specific rules are processed after the less-specific ones, and can overwrite their values in the HashMap.

    /// Apply styles to a single element, returning the specified values.
    fn specified_values(elem: &ElementData, stylesheet: &Stylesheet) -> PropertyMap {
        let mut values = HashMap::new();
        let mut rules = matching_rules(elem, stylesheet);
    
        // Go through the rules from lowest to highest specificity.
        rules.sort_by(|&(a, _), &(b, _)| a.cmp(&b));
        for &(_, rule) in rules.iter() {
            for declaration in rule.declarations.iter() {
                values.insert(declaration.name.clone(), declaration.value.clone());
            }
        }
        return values;
    }
    

    Now we have everything we need to walk through the DOM tree and build the style tree. Note that selector matching works only on elements, so the specified values for a text node are just an empty map.

    /// Apply a stylesheet to an entire DOM tree, returning a StyledNode tree.
    pub fn style_tree<'a>(root: &'a Node, stylesheet: &'a Stylesheet) -> StyledNode<'a> {
        StyledNode {
            node: root,
            specified_values: match root.node_type {
                Element(ref elem) => specified_values(elem, stylesheet),
                Text(_) => HashMap::new()
            },
            children: root.children.iter().map(|child| style_tree(child, stylesheet)).collect(),
        }
    }
    

    That’s all of robinson’s code for building the style tree. Next I’ll talk about some glaring omissions.

    The Cascade

    Style sheets provided by the author of a web page are called author style sheets. In addition to these, browsers also provide default styles via user agent style sheets. And they may allow users to add custom styles through user style sheets (like Gecko’s userContent.css).

    The cascade defines which of these three “origins” takes precedence over another. There are six levels to the cascade: one for each origin’s “normal” declarations, plus one for each origin’s !important declarations.

    Robinson’s style code does not implement the cascade; it takes only a single style sheet. The lack of a default style sheet means that HTML elements will not have any of the default styles you might expect. For example, the <head> element’s contents will not be hidden unless you explicitly add this rule to your style sheet:

    head { display: none; }
    

    Implementing the cascade should be fairly easy: Just track the origin of each rule, and sort declarations by origin and importance in addition to specificity. A simplified, two-level cascade should be enough to support the most common cases: normal user agent styles and normal author styles.
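
    As a rough sketch of that simplified approach (this is not robinson’s actual code, it ignores !important, and the cascaded_values and apply_sheet names are invented for illustration), one could reuse the matching_rules helper and apply the lower-priority user agent sheet before the author sheet, so that later insertions overwrite earlier ones:

    /// Sketch of a two-level cascade: user agent styles first, author styles second.
    fn cascaded_values(elem: &ElementData,
                       user_agent_sheet: &Stylesheet,
                       author_sheet: &Stylesheet) -> PropertyMap {
        let mut values = HashMap::new();
        // Lower-priority origin first...
        apply_sheet(elem, user_agent_sheet, &mut values);
        // ...then the higher-priority author origin, overwriting on conflict.
        apply_sheet(elem, author_sheet, &mut values);
        values
    }
    
    /// Insert one sheet's matching declarations, lowest specificity first.
    fn apply_sheet(elem: &ElementData, stylesheet: &Stylesheet, values: &mut PropertyMap) {
        let mut rules = matching_rules(elem, stylesheet);
        rules.sort_by(|&(a, _), &(b, _)| a.cmp(&b));
        for &(_, rule) in rules.iter() {
            for declaration in rule.declarations.iter() {
                values.insert(declaration.name.clone(), declaration.value.clone());
            }
        }
    }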

    Computed Values

    In addition to the “specified values” mentioned above, CSS defines initial, computed, used, and actual values.

    Initial values are defaults for properties that aren’t specified in the cascade. Computed values are based on specified values, but may have some property-specific normalization rules applied.

    Implementing these correctly requires separate code for each property, based on its definition in the CSS specs. This work is necessary for a real-world browser engine, but I’m hoping to avoid it in this toy project. In later stages, code that uses these values will (sort of) simulate initial values by using a default when the specified value is missing.
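
    As a tiny sketch of that fallback idea (the value_or name is mine, not necessarily robinson’s API), a helper on StyledNode could return either the specified value or a caller-supplied default standing in for the property’s initial value:

    impl<'a> StyledNode<'a> {
        /// Return the specified value of a property, or `default` if the
        /// property is missing; the default stands in for the initial value.
        fn value_or(&self, name: &str, default: Value) -> Value {
            match self.specified_values.find_equiv(&name) {
                Some(v) => v.clone(),
                None => default,
            }
        }
    }

    Layout code can then ask for a property like display or width and pass in whatever default the spec lists as that property’s initial value.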

    Used values and actual values are calculated during and after layout, which I’ll cover in future articles.

    Inheritance

    If text nodes can’t match selectors, how do they get colors and fonts and other styles? The answer is inheritance.

    When a property is inherited, any node without a cascaded value will receive its parent’s value for that property. Some properties, like 'color', are inherited by default; others only if the cascade specifies the special value 'inherit'.

    My code does not support inheritance. To implement it, you could pass the parent’s style data into the specified_values function, and use a hard-coded lookup table to decide which properties should be inherited.
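
    Here is one possible sketch of that (again, not robinson’s code; the names and the tiny inherited-property table are mine): a hard-coded is_inherited check, plus a variant of specified_values that seeds the map with the parent’s inherited values before applying the element’s own rules.

    /// A few properties that are inherited by default. A real engine would
    /// take the full list from the CSS specs.
    fn is_inherited(name: &str) -> bool {
        match name {
            "color" | "font-family" | "font-size" | "font-style" | "font-weight" => true,
            _ => false
        }
    }
    
    /// Like `specified_values`, but seeded with the parent's inherited values.
    fn specified_values_with_inheritance(elem: &ElementData,
                                         stylesheet: &Stylesheet,
                                         parent: Option<&PropertyMap>) -> PropertyMap {
        let mut values = HashMap::new();
    
        // Copy inherited properties down from the parent, if there is one.
        match parent {
            Some(parent_values) => {
                for (name, value) in parent_values.iter() {
                    if is_inherited(name.as_slice()) {
                        values.insert(name.clone(), value.clone());
                    }
                }
            }
            None => {}
        }
    
        // Then apply the element's own rules so they overwrite inherited values.
        let mut rules = matching_rules(elem, stylesheet);
        rules.sort_by(|&(a, _), &(b, _)| a.cmp(&b));
        for &(_, rule) in rules.iter() {
            for declaration in rule.declarations.iter() {
                values.insert(declaration.name.clone(), declaration.value.clone());
            }
        }
        values
    }

    The recursive style_tree function would then pass each node’s map down when styling its children (and None at the root).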

    Style Attributes

    Any HTML element can include a style attribute containing a list of CSS declarations. There are no selectors, because these declarations automatically apply only to the element itself.

    <span style="color: red; background: yellow;">
    

    If you want to support the style attribute, make the specified_values function check for the attribute. If the attribute is present, pass it to parse_declarations from the CSS parser. Apply the resulting declarations after the normal author declarations, since the attribute is more specific than any CSS selector.
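
    As a sketch only (it assumes parse_declarations takes the attribute text and returns the same Declaration values the parser produces for a rule body; adjust the call to match your parser from Part 3), the helper below layers the attribute’s declarations on top of the values produced by the normal author rules:

    /// Apply declarations from a `style` attribute on top of `values`.
    /// Call this after the normal author rules so these declarations win.
    fn apply_style_attribute(elem: &ElementData, values: &mut PropertyMap) {
        match elem.get_attribute("style") {
            Some(css_source) => {
                // e.g. style="color: red; background: yellow;"
                let declarations = parse_declarations(css_source.as_slice());
                for declaration in declarations.iter() {
                    values.insert(declaration.name.clone(), declaration.value.clone());
                }
            }
            None => {}
        }
    }

    specified_values (or whatever replaces it once you add cascading) would call this helper right before returning its map.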

    Exercises

    In addition to writing your own selector matching and value assignment code, for further exercise you can implement one or more of the missing pieces discussed above, in your own project or a fork of robinson:

    1. Cascading
    2. Initial and/or computed values
    3. Inheritance
    4. The style attribute

    Also, if you extended the CSS parser from Part 3 to include compound selectors, you can now implement matching for those compound selectors.

    To Be Continued…

    Part 5 will introduce the layout module. I haven’t finished the code for this yet, so there will be another delay before I can start writing the article. I plan to split layout into at least two articles (one for block layout and one for inline layout, probably).

    In the meantime, I’d love to see anything you’ve created based on these articles or exercises. If your code is online somewhere, feel free to add a link to the comments below! So far I have seen Martin Tomasi’s Java implementation and Pohl Longsine’s Swift version.

    Kaustav Das ModakConnect Firefox OS Spreadtrum devices through adb

    The ultra low-cost Firefox OS devices to be launched in India are built on Spreadtrum chipsets. Here are the quick steps for people running Linux or OS X to connect their Spreadtrum devices through adb: make sure the device is detected; connect the device through a USB cable; enable Remote Debugging on the device […]

    Maja FrydrychowiczSnapshots from my OPW Internship with Mozilla QA

    Throughout my OPW1 internship with Mozilla QA2 I've been keeping an informal log in outline form3. In it, I briefly describe what I accomplish (or fail to accomplish) each day, problems I encounter, who I talk to about them, which meetings I attend, what I read, useful tricks I learn, etc. So far, I have about 60 days' worth of these tiny log entries about Mozilla. Here's what they look like:

    Checkvist Mozilla Log Screenshot

    Day-to-day, the log helps me answer questions like "How did I solve this weird configuration problem three weeks ago?" or "What should I ask about at the next team meeting?" Writing also generally helps me think through a task, and the log is a quick and effective outlet for that. The other major benefit is that I can take a step back and see the overall progress of my projects.

    So, what's it like being an intern with Mozilla QA?

    I'm so glad you asked! First, some context.

    • OPW interns work remotely.
    • The internship position I applied for is called "Bug Wrangler", which refers to tasks like reproducing and triaging incoming Firefox bugs, but I've actually (mostly) been doing Django web development.

    To future interns: as in my case, there can be some flexibility about your internship activities, and during your application process, you'll narrow down what you will work on. The mentor I applied to offered a Django project as an option under the Bug Wrangler umbrella, and that was more in line with my interests and experience than bug triage, so that's what I chose to focus on.

    Based on my handy log, I'll answer a slightly more specific question:

    "What did Maja do during a typical week while working on a Django project for Mozilla QA?"

    Routines

    Often, I start my day by skimming my latest "bug mail" (updates from Bugzilla) and checking my Bugzilla dashboard to see if I need to follow up on anything immediately.

    The other regular occurrence is about 2 hours of video meetings per week. I meet with my mentor once a week to discuss my general progress and my post-internship plans. I lurk at one QA team meeting almost every week, where I mostly don't have enough context to understand much. My mentor filled me in on some things and my understanding gradually improved. There are also two regular meetings for One and Done, the project I'm contributing to: a weekly technical meeting to discuss the design of new features, and a biweekly check-in meeting with project managers, developers and a few key users.

    Week 3

    The early weeks of the internship involved a lot of reading and trying things out, of course. At this point, I was finishing up the official Django tutorial as well as responding to some administrative requests about the internship.

    Just for fun, I used vim throughout my Django learnings to rediscover some handy vim commands. I also applied the tutorial concepts to the One and Done source code as much as I could, and thus discovered what other parts of Django I need to become familiar with, like generic class-based views.

    I gradually became more familiar with how the One and Done code is structured by looking at how its models are used, poking at its URLconf, and populating my local database with example data.

    Week 5

    At this point, I was just about finished with my first substantial pull request to One and Done. My changes broke some unit tests, which caused me to discover that some of our test data was using the wrong data type: a regular Python dictionary instead of a Django QueryDict. Cool.

    I actually spent a bunch of time getting the unit tests to run in my dev environment, which is on a Linux virtual machine. My local copy of the project is stored in a directory that is shared between my Linux guest OS and Windows host OS, which happens to rely on file permissions that the nose testing library doesn't like. In the end, I chose to have a clone of the project in a non-shared directory that I used just for running unit tests.

    My work log also describes in detail how unintended changes to my development branch in git turned my Github pull request into a giant, unreadable mess. Aaah! (Be careful what you branch from and what you merge with, friends.) I had to close my original pull request and make a new, clean one, which was fairly embarrassing. Now I remember that on that day my friend and I were co-working in my apartment to battle the loneliness of remote work, and she generously listened to me venting my misery about the incident. :) In retrospect, I learned a lot about git.

    Later that week, that same pull request got merged and I started investigating a bug I ran into in one of the libraries our project relies on, which involved asking some questions on IRC.

    All around, a good week.

    Week 9

    First I finished up a couple of things I had started earlier:

    I also contributed my first few code reviews: the week before I missed an issue that someone else caught (doh!), but this week I found something that needed to be fixed (yay!). This was cool because I found the problem by simply taking the time to understand code that was mostly mysterious to me. Bonus: I learned a bit about Mock and patch.

    By the end of the week, I was focused on sketching out the functionality and implementation of a new One and Done feature. I enjoyed working with the project managers to define the feature requirements. Figuring out how to implement them required a few more weeks of research and learning on my part, but it all worked out in the end.

    This is why I like work logs!

    Reviewing my work log to write this article was eye-opening for me, especially due to the perspective it offers of the ups and downs I experienced during my internship. On some days, I felt quite frustrated, stuck, discouraged, and all that bad stuff. So, I like how the log shows that feeling crappy for a few days here and there totally doesn't matter overall. I learned a lot in the past couple of months and it's incredibly satisfying to see that itemized in one big list.


    1. Outreach Program for Women 

    2. Quality Assurance 

    3. I write the log using Checkvist. It's fantastic. I did the same while at Hacker School. 

    Clint TalbertThe Odyssey of Per-Push, On-Phone Firefox OS Automation

    When we started automating tests for Firefox OS, we knew that we could do a lot with automated testing on phone emulators–we could run in a very similar environment to the phone, using the same low level instruction set, even do some basic operations like SMS between two emulator processes. Best of all, we could run those in the cloud, at massive scale.

    But, we also knew that emulator based automation wasn’t ever going to be as complete as actually testing on real phones. For instance, you can’t simulate many basic smart phone operations: calling a number, going to voice-mail, toggling airplane mode, taking a picture, etc. So, we started trying to get phones running in automation very early with Firefox OS, almost two years ago now.

    We had some of our very early Unagi phones up and running on a desk in our office. That eventually grew to a second generation of Hamachi-based phones. There were several core scalability problems with both of these solutions:

    1. No reliable way to power-cycle a phone without a human walking up to it, pulling out the battery and putting it back in
    2. At the time these were pre-production phones (hence the code names), and were hard to get in bulk from partners. So, we did what we could with about 10 phones that ran smoketests, correctness tests, and performance tests.
    3. All of the automation jobs and results had to be tracked by hand. And status had to be emailed to developers — there was no way to get these reporting to our main automation dashboard, TBPL.
    4. Because we couldn’t report status to TBPL, maintaining the system and filing bugs when tests failed had to be done entirely by a dedicated set of 4 QA folk–not a scalable option, to say the least.

    Because of points 1 and 2, we were unable to truly scale the number of devices. We only had one person in Mountain View, and what we had thought of as a part-time job of pulling phone batteries soon became his full-time job. We needed a better solution to increase the number of devices while we worked in parallel to create a better dashboard for our automation that would allow a system like this to easily plug in and report its results.

    The Flame reference device solved that first problem. Now, we had a phone whose hardware we could depend on, and Jon Hylands was able to create a custom battery harness for it so that we could instruct our scripts to automatically detect dead phones and remotely power cycle them (and in the future, monitor power consumption). Because we (Mozilla) commissioned the Flame phone ourselves, there were no partner-related issues with obtaining pre-production devices–we could easily get as many as we needed. After doing some math to understand our capacity needs, we got 40 phones to seed our prototype lab to support per-push automation.

    As I mentioned, we were solving the dashboard problem in parallel, and that has now been deployed in the form of Treeherder, which will be the replacement for TBPL. That solves point 3. All that now remains is point 4. We have been hard at work on crafting a unified harness to run the Gaia Javascript tests on device which will also allow us to run the older, existing python tests as well until they can be converted. This gives us the most flexibility and allows us to take advantage of all the automation goodies in the existing python harness–like crash detection, JSON structured logging, etc. Once it is complete, we will be able to run a smaller set of the same tests the developers run locally per each push to b2g-inbound on these Flame devices in our lab. This means that when something breaks, it will break tests that are well known, in a well understood environment, and we can work alongside the developers to understand what broke and why. By enabling the developers and QA to work alongside one another, we eliminate the scaling problem in point 4.

    It’s been a very long road to get from zero to where we are today. You can see the early pictures of the “phones on a desk” rack and pictures of the first 20 Flames from Stephen’s presentation he gave earlier this month.

    A number of teams helped get us to this point, and it could not have been done without the cooperation among them: the A*Team, the Firefox OS Performance team, the QA team, and the Gaia team all helped get us to where we are today. You can see the per-push tests showing up on the Treeherder Staging site as we ensure we can meet the stability and load requirements necessary for running in production.

    Last week, James Lal and his new team inherited this project. They are working hard to push the last pieces to completion as well as expanding it even further. And so, even though Firefox OS has had real phone automation for years, that system is now coming into its own. The real-phone automation will finally be extremely visible and easily actionable for all developers, which is a huge win for everyone involved.

    Eric ShepherdThe Sheppy Report: August 22, 2014

    This week looks slower than usual when you look at this list, but the week involved a lot of research.

    What I did this week

    • Reviewed and made (very) minor tweaks to Chris Mills’s doc plan for the Gaia web components and QA documentation.
    • Created an initial stub of a page for the canvas documentation plan.
    • Spent the weekend and a bit of Monday getting my broken server, including this blog, back up and running after a not-entirely-successful (at first) upgrade of the server from OS X 10.6.8 Server to 10.9.4. But most things are working now. I’ll get the rest fixed up over the next few days.
    • Pursued the MDN inbox project, trying to wrap it up.
      • Asked for feedback on the current state of things.
      • Added a subtle background color to the background of pages in the Inbox.
    • Started discussions on dev-mdc and staff mailing list about the documentation process; we’re going to get this thing straightened up and organized.
    • Filed bug 1056026 proposing that the Firefox_for_developers macro be updated to list both newer and older versions of Firefox.
    • Redirected some obsolete pages to their newer, replacement, content in the MDN meta-documentation.
    • Created a Hacker News account and upvoted a post about Hacks on canuckistani’s request.
    • Updated the MDN Administration Guide.
    • Installed various packages and add-ons on my Mac and server in preparation for testing WebRTC code.
    • Forked several WebRTC projects from GitHub to experiment with.
    • Found (after a surprisingly lengthy search) a micro-USB cable so I could charge and update my Geeksphone Peak to Firefox OS 2.0’s latest nightly build.
    • Re-established contact with Piotr at CKSource about continuing work to get our editor updated and improved.
    • Removed a mess of junk from a page in pt-BR; looks like someone used an editor that added a bunch of extra <span>s.
    • Successfully tested a WebRTC connection between my Firefox OS phone and my iMac, using my Mac mini as server. Now I should be ready to start writing code of my own, now that I know it all works!
    • Filed bug 1057546: we should IMHO strip HTML tags that aren’t part of a string from within a macro call; this would prevent unfortunate errors.
    • Filed bug 1057547 proposing that the editor be updated to detect uses of the style attribute and of undefined classes, and present warnings to the user when they do so.
    • Fixed a page that was incorrectly translated in place, and emailed the contributor a reminder to be careful in the future.

    Meetings attended this week

    Monday

    • MDN dev team meeting on security and improved processes to prevent problems like the email address disclosure we just had happen.
    • MDN developer triage meeting.

    Tuesday

    • Developer Engagement weekly meeting.
    • 1:1 with Jean-Yves Perrier.

    Wednesday

    • 1:1 with Ali.

    Thursday

    • Writers’ staff meeting.

    Friday

    • #mdndev weekly review meeting.
    • MDN bug swat meeting.
    • Web API documentation meeting.

    So… it was a wildly varied day today. But I got a lot of interesting things done.

    Gervase MarkhamHSBC Weakens Their Internet Banking Security

    From a recent email about “changes to your terms and conditions”. (“Secure Key” is their dedicated keyfob 2-factor solution; it’s currently required both to log in and to pay a new payee. It’s rather well done.)

    These changes will also enable us to introduce some enhancements to our service over the coming months. You’ll still have access to the full Internet Banking service by logging on with your Secure Key, but in addition, you’ll also be able log in to a limited service when you don’t use your Secure Key – you’ll simply need to verify your identity by providing other security information we request. We’ll contact you again to let you know when this new feature becomes available to you.

    Full details of all the changes can be found below which you should read carefully. If you choose not to accept the changes, you have the right to ask us to stop providing you with the [Personal Internet Banking] service, before they come into effect. If we don’t hear from you, we’ll assume that you accept the changes.

    Translation: we are lowering the security we use to protect your account information from unauthorised viewing and, as long as you still want to be able to access your account online at all, there’s absolutely nothing you can do about it.

    Amy TsayWhat Healthy Relationships Teach Us About Healthy Communities

    In organizations where communities form (whether around a product, mission, or otherwise), there is often a sense of perplexity or trepidation around how to engage with them. What is the proper way to talk to community members? How do I work with them, and what can I do to keep the community healthy and growing? The good news is, if you know what it takes to have a healthy personal relationship, you already know how to build a healthy community.

    Prioritize them

    In a good relationship, we prioritize the other person. At Mozilla, the QA team makes it a point to respond to volunteer contributors within a day or two. A lack of response is one of the top reasons why people leave online communities, so it's important not to keep them hanging. It doesn't feel good to volunteer your time on a project only to be left waiting when you ask questions or request feedback, just as it doesn't feel good when your partner doesn't return your phone calls.

    Be authentic

    Authenticity and honesty in a relationship are the building blocks of trust. If you make a mistake, admit it and set it right. Your tone and word choice will reflect your state of mind, so be aware of it when composing a message. When you come from a place of caring and desire to do what’s right for the community, instead of a place of fear or insecurity, your words and actions will foster trust.

    Be appreciative

    Strong relationships are formed when both parties value and appreciate each other. It’s a great feeling when you take out the trash or do the dishes, and it’s noticed and praised. Make it a ritual to say thanks to community members who make an impact, preferably on the spot, and publicly if possible and appropriate.

    Be their champion

    Be prepared to go to bat for the community. I was once in a relationship with a partner who would not defend me in situations where I was being mistreated; it didn’t end well. It feels nice to be advocated for, to be championed, and it creates a strong foundation. When you discover a roadblock or grievance, take the time to investigate and talk to the people who can make it right. The community will feel heard and valued.

    Empathize

    The processes and programs that support community participation require an understanding of motivation. To understand motivation, you have to be able to empathize. Everyone views the world from their own unique perspectives, so it’s important to try and understand them, even if they’re different from your own. 

    Set expectations

    Understand your organization's limitations, as well as your own, and communicate them. If your partner expects you to be home at a certain time and you don't show up, the anger you encounter likely has more to do with not being told you're going to be late than with the lateness itself.

    Guidelines and rules for participation are important components as well. I once featured content from a community member and was met by an angry online mob, because although the content was great, the member hadn’t reached a certain level of status. The guidelines didn’t cover eligibility for featuring, and up until then only longer-term participants had been featured, so the community’s expectations were not met.

    Not apples to apples

    I would never want to get anyone in trouble by suggesting they treat their community members exactly the same as their partners. Answering emails from anyone while having dinner with your loved one is not advised. The take-away is there isn’t any mystery to interacting with a community. Many of the ingredients for a healthy community are ones found in healthy relationships, and most reassuring of all, we already know what they are.


    Robert KaiserMirror, Mirror: Trek Convention and FLOSS Conferences

    It's been a while since I did any blogging, but that doesn't mean I haven't been doing anything - on the contrary, I have been too busy to blog, basically. We had a few Firefox releases where I scrambled until the last day of the beta phase to make sure we keep our crash rates as low as our users probably expect by now, I did some prototyping work on QA dashboards (with already-helpful results and more to come) and helped in other process improvements on the Firefox Quality team, worked with different teams to improve stability of our blocklist ping "ADI" data, and finally even was at a QA work week and a vacation in the US. So plenty of stuff done, and I hope to get to blog about at least some pieces of that in the next weeks and months.

    That said, one major part of my recent vacation was the Star Trek Las Vegas Convention, which I attended the second time after last year. Since back then, I wanted to blog about some interesting parallels I found between that event (I can't compare to other conventions, as I've never been to any of those) and some Free, Libre and Open Source Software (FLOSS) conferences I've been to, most notably FOSDEM, but also the larger Mozilla events.
    Of course, there's the big events in the big rooms and the official schedule - on the conferences it's the keynotes and presentations of developers about what's new in their software, what they learned or where we should go, on the convention it's actors and other guests talking about their experiences, what's new in their lives, and entertaining the crowd - both with questions from the audience. Of course, the topics are wildly different. And there's booths at both, also quite a bit different, as it's autograph and sales booths on one side, and mainly info booths on the other, though there are geeky T-shirts sold at both types of events. ;-)

    The largest parallels I found, though, are in the mass of people that are there:
    For one thing, the "hallway track" of talking to and meeting other attendees is definitely a main attraction and a big piece of the life of both "sides" of those events. Old friendships are revived, new ones are formed, and the somewhat geeky commonalities are celebrated and lead to tons of fun and involved conversations - not just the old fun bickering between vi and emacs or Kirk and Picard fans (or different desktop environments / different series and movies). :)
    For another, I learned that both types of events are in the end more about the "regular" attendees than the speakers, even if the latter end up being featured at both. Especially the recurring attendees go there because they want to meet and interact with all the other people going there, with the official schedule being the icing on the cake, really. Not that it would be unimportant or unneeded, but it's not as much the main attraction as people on the outside, and possibly even the organizers, might think. Also, going there means that for a few days you don't have to hide your "geekiness" from your surroundings and can actively show and celebrate it. There's also some amount of a "do good" atmosphere in both those communities.
    And both events, especially the Trek and Mozilla ones, tend to have a very inclusive atmosphere of embracing everyone else, no matter their physical appearance, gender or other social attributes. And actually, given how deeply that inclusive spirit was anchored into the Star Trek productions by Gene Roddenberry himself, it might even run deeper in the fans there than it does in the FLOSS world. Notably, I saw a much larger share of women and people of color at the Star Trek convention than I see at FLOSS conferences - my guess is that at least a third of the Trek fans in Las Vegas were female, for example. I guess we need some more role models in the style of Nichelle Nichols and others in the FLOSS scene.

    All in all, there are a lot of similarities and still quite some differences, but it's quite a twist on the alternate universe depicted in Mirror, Mirror and other episodes - here it's a different crowd with a similar spirit rather than the same people with different mindsets and behaviors.
    As a very social person, I love attending and immersing myself in both types of events, and I somewhat wonder if and how we should have some more cross-pollination between those communities.
    I for sure will be seen on more FLOSS and Mozilla events as well as more Star Trek conventions! :)

    Peter Bengtssonpremailer now with 100% test coverage

    One of my most popular GitHub Open Source projects is premailer. It's a python library for combining HTML and CSS into HTML with all of its CSS inlined into style attributes on the tags. This is a useful and necessary technique when sending HTML emails because you can't send those with an external CSS file (or even a CSS style tag in many cases).
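
    If you haven't used it before, the basic idea is something like this (a rough sketch assuming the top-level transform() helper; exact output depends on the version):

    # Illustrative example of inlining CSS with premailer.
    from premailer import transform

    html = """<html>
    <head>
    <style>p { color: red }</style>
    </head>
    <body>
    <p>Hello newsletter readers!</p>
    </body>
    </html>"""

    # transform() moves matching CSS rules into style="" attributes on the
    # tags themselves, producing roughly:
    # <p style="color:red">Hello newsletter readers!</p>
    print(transform(html))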

    The project has had 23 contributors so far and, as always, people come in, get some itch of theirs scratched, and then leave. I really try to keep good test coverage, and when people come with code I almost always require that it come with tests too.

    But sometimes you miss things. Also, this project was born as a weekend hack that slowly morphed into an actual package with its own repository, and I bet there was code from those days that was never fully covered by tests.

    So today I combed through the code and plugged all the holes where there wasn't test coverage.
    Also, I set up Coveralls (project page) which is an awesome service that hooks itself up with Travis CI so that on every build and every Pull Request, the tests are run with --with-cover on nosetests and that output is reported to Coveralls.

    The relevant changes you need to make are:

    1) You need to go to coveralls.io (sign in with your GitHub account) and add the repo.
    2) Edit your .travis.yml file to contain the following:

    before_install:
        - pip install coverage
    ...
    after_success:
        - pip install coveralls
        - coveralls
    

    And you need to execute your tests so that coverage is calculated (the coverage module stores everything in a .coverage file which coveralls analyzes and sends). So in my case I changed it to this:

    script:
        - nosetests premailer --with-cover --cover-erase --cover-package=premailer
    

    3) You must also give coveralls some clues so that it reports on only the relevant files. Here's what mine looked like:

    [run]
    source = premailer
    
    [report]
    omit = premailer/test*
    

    Now, I get to have a cute "coverage: 100%" badge in the README and when people post pull requests Coveralls will post a comment to reflect how the pull request changes the test coverage.

    I am so grateful for all these wonderful tools. And it's all free too!

    Mozilla WebDev CommunityBeer and Tell – August 2014

    Once a month, web developers from across the Mozilla Project get together to upvote stories on Hacker News from each of our blogs. While we’re together, we usually end up sharing a bit about our side projects over beers, which is why we call this meetup “Beer and Tell”.

    There’s a wiki page available with a list of the presenters, as well as links to their presentation materials. There’s also a recording available courtesy of Air Mozilla.

    Frederik Braun: Room Availability in the Berlin Office

    freddyb shared (via a ghost presentation by yours truly) a small webapp he made that shows the current availability of meeting rooms in the Mozilla Berlin office. The app reads room availability from Zimbra, which Mozilla uses for calendaring and booking meeting rooms. It also uses moment.js for rendering relative dates to let you know when a room will be free.

    The discussion following the presentation brought up a few similar apps that other Mozilla offices had made to show off their availability, such as the Vancouver office’s yvr-conf-free and the Toronto office’s yyz-conf-free.

    Nigel Babu: hgstats

    nigelb shared (via another ghost presentation, this time split between myself and laura) hgstats, which shows publicly-available graphs of the general health of Mozilla’s mercurial servers. This includes CPU usage, load, swap, and more. The main magic of the app is to load images from graphite, which are publicly visible, while graphite itself isn’t.

    nigelb has offered a bounty of beer for anyone who reviews the app code for him.

    Pomax: Inkcyclopedia

    Pomax shared an early preview of Inkcyclopedia, an online encyclopedia of ink colors. Essentially, Pomax bought roughly 170 different kinds of ink, wrote down samples with all of them, photographed them, and then collected those images along with the kind of ink used for each. Once finished, the site will be able to accept user-submitted samples and analyze them to attempt to identify the color and associate it with the ink used. Unsurprisingly, the site is able to do this using the RGBAnalyse library that Pomax shared during the last Beer and Tell, in tandem with RgbQuant.js.

    Sathya Gunasekaran: screen-share

    gsathya shared a screencast showing off a project that has one browser window running a WebGL game and sharing its screen with another browser window via WebRTC. The demo currently uses Chrome’s desktopCapture API for recording the screen before sending it to the listener over WebRTC.


    Alas, we were unable to beat Hacker News’s voting ring detection. But at least we had fun!

    If you’re interested in attending the next Beer and Tell, sign up for the dev-webdev@lists.mozilla.org mailing list. An email is sent out a week beforehand with connection details. You could even add yourself to the wiki and show off your side-project!

    See you next month!

    Advancing ContentA Call for Trust, Transparency and User Control in Advertising

    Advertising is the Web’s dominant business.  It relies on users for its success, and ironically fails to engage with them in a direct and honest way.  We are advocates of the many benefits that commercial involvement brings to the development of the Internet – it is at our core and part of the Mozilla Manifesto. Advertising is one of those commercial activities; it fuels and grows the Web. But the model has lost its focus by failing to put the user at the center.  We are calling initially on the advertising industry to adopt three core principles of trust, transparency and user control:

    1)  Trust: Do users understand why they are being presented with content? Do they understand what pieces of their data fed into the display decision?

    2)  Transparency: Is it clear to users why advertising decisions are made? Is it clear how their data is being consumed and shared?  Are they aware and openly contributing?

    3)  Control: Do users have the ability to control their own data? Do they have the option to be completely private, completely public or somewhere in between?

    We are re-thinking the model.  We want a world where Chief Marketing Officers, advertising agency executives, industry groups and the advertising technology companies see the real benefits of a user-centric model. These three principles give us the ability to build a strong, long term and more valuable platform for everyone.

    What are we doing?

    Our intention is to improve the experience as a player within the ecosystem. We’ll do this by experimenting and innovating.  All of our work will be designed with trust in mind.  Tiles is our first experiment and we are learning a lot.  Right now, we are showing users tiles from their “frecency” (recent and frequent sites), along with Mozilla information and suggestions and content labeled as sponsored. This experience is pretty basic but will evolve over time. Initial user interactions are positive. Users interacted with content labeled as sponsored that we placed in directory tiles 10x more than Mozilla-based content.

    Our next step will be to give users more transparency and control. Our UP platform will eventually help to power tiles and will help determine which content is displayed to the user.  The platform itself is innovative as it currently allows the interests data to sit client side, completely in the user’s control. The data can still be accessed there without us creating a dossier on the user, outside of the Firefox client.

    We will then put the user first by building an interests dashboard (something that we are already working on) that offers users a way to easily change their interests or participation in enhanced content at any time. The dashboard provides a constant feedback loop with users and will work with all our enhanced content projects.

    What can we promise?

    We will continue to demonstrate that it’s possible to balance commercial interests with public benefit, and to build successful products that respect user privacy and deliver experiences based upon trust, transparency and control.

    • We want to show the world you can do display advertising in a way that respects users’ privacy.
    • We believe that publishers should respect browser signals around tracking and privacy. If they don’t, we’ll take an active role in doing so and all our enhanced content projects will respect DNT.
    • We will respect the Minimal Actionable Dataset, a thought stream pioneered by one of our fellow Mozillians to only collect what’s needed – nothing more – and be transparent about it.
    • We will put users in control to customize, change or turn product features on/off at any time.

    We can’t change the Web from the sidelines, and we can’t change advertising on the Web without being a part of that ecosystem. We are excited about this mission and we’re working hard to achieve our goals. Stay tuned for updates over the coming weeks.

    If this resonates with you and you have ideas or want to help, we’d love to hear from you by leaving comments below or by filling out this form.

    Mozilla Open Policy & Advocacy BlogTrust should be the currency

    At Mozilla, we champion a Web  that empowers people to reach their full potential and be in control of their online lives. In my role at Mozilla this means advocating for products, policies and practices that respect our users and create trusted online environments and experiences.  We believe trust is the most important currency on the Web – and when that trust is violated, the system fails.

    I have been spending a lot of time with our Content Services team as they work on their new initiatives.  Their first challenge is tackling the online advertising ecosystem.  This is hard work but extremely important.  Our core values of trust, transparency and control are just as applicable to the advertising industry as to any other, but they aren’t widely adopted there.

    Today, online advertising is rife with mistrust.  It is opaque for most users because the value exchange is not transparent.  While it should be trust, the prevailing Web currency is user data – much of the content is free because publishers and websites generate revenue through advertising.  At its core, this model is not new or unique, it is common in the media industry (e.g., broadcast television commercials and newspapers that are ad supported).  To improve monetization, online ads are now targeted based on a user’s browsing habits and intentions.  This isn’t a bad thing when done openly or done with consent.  The problem is that this “personalization” is not always transparent, leaving users in the dark about what they have traded for their content.  This breaks the system.

    Our users and our community have told us – through surveys, comments and emails – that transparency and control matter most to them when it comes to online advertising.  They want to know what is happening with their data; they want to control what data is shared, understand how their data is used and what they get for that exchange.  They are willing to engage in the value exchange and allow their data to be used if they understand what happens next.  Our users want trust (and not their data) to be the prevailing currency.  We believe that without this shift in focus, users will limit access to their data and will block ads.

    We want our users to not only trust us but to be able to trust the Web. We want to empower their choices and help them control their online experience. This is why we pioneered the Do Not Track (DNT) initiative.  DNT relies on advertisers, publishers and websites to respect a user’s preference. Unfortunately, many participants in the online advertising ecosystem do not modify their behavior in response to the DNT signal.  In this instance, user choice is not being respected.  So, we must do more for the user and continue to innovate.

    We are doing this by working within the ecosystem to create change.  We are testing our new tiles feature in Firefox and working to ensure that it provides personalization with respect and transparency built in. We are building DNT and other user controls into the tiles experiments and working to establish these foundational elements with our partners.  We are providing users with more information about their Web presence through Lightbeam, and will be testing new privacy initiatives that give users more control over the flow of their data.  We want to bring relevant and personalized content to our users while empowering control that inspires trust.

    We need to see a renewed focus of trust, transparency and control on the Web as a whole.  We can all do better.  We want to see more products and services (and not just in online advertising) developed with those ideals in mind.  For our part, we will continue to do more to innovate and create change so that we deserve your trust.

     

    Aaron KlotzProfile Unlocking in Firefox 34 for Windows

    Today’s Nightly 34 build includes the work I did for bug 286355: a profile unlocker for our Windows users. This should be very helpful to those users whose workflow is interrupted by a Firefox instance that cannot start because a previous Firefox instance has not finished shutting down.

    Firefox 34 users running Windows Vista or newer will now be presented with this dialog box:

    Clicking “Close Firefox” will terminate that previous instance and proceed with starting your new Firefox instance.

    Unfortunately this feature is not available to Windows XP users. To support this feature on Windows XP we would need to call undocumented API functions. I prefer to avoid calling undocumented APIs when writing production software due to the potential stability and compatibility issues that can arise from doing so.

    While this feature adds some convenience to an otherwise annoying issue, please be assured that the Desktop Performance Team will continue to investigate and fix the root causes of long shutdowns so that a profile unlocker hopefully becomes unnecessary.

    Doug BelshawSome preliminary thoughts toward v2.0 of Mozilla's Web Literacy Map

    As we approach the Mozilla Festival 2014, my thoughts are turning towards revisiting the Web Literacy Map. This, for those who haven’t seen it, comprises the skills and competencies Mozilla and a community of stakeholders believe to be important to read, write and participate on the web. Now that we’ve had time to build and iterate on top of the first version, it’s time to start thinking about a v2.0.

    Thinking

    The first thing to do when revisiting something like this is to celebrate the success it’s had: webmaker.org/resources is now structured using the 15 competencies identified in v1.1 of the Web Literacy Map. Each of those competencies now has an associated badge. We’ve published a whitepaper entitled Why Mozilla cares about Web Literacy, in which it features heavily. It’s also been used as the basis of the Boys and Girls Clubs of America’s new technology strategy, and by MOUSE in their work around Privacy. Those are just a few examples amongst the countless other times it’s been shared on social media and by people looking for something more nuanced than the usual new literacies frameworks.

    Deadlines being what they are, the group that were working on the Web Literacy Map had to move a bit more quickly than we would have liked in the final stages of putting it together. As a result, although the 15 competencies are reasonably solid, we were never 100% happy with the description of the skills underpinning each of these. Nevertheless, we decided to roll with it for launch, made a few updates post-MozFest, and then ‘froze’ development so that others could build on top of it.

    At the beginning of 2014, the Open Badges work at Mozilla was moved to a new non-profit called the Badge Alliance. As co-chair of the working group on Digital & Web Literacies, I’ve had a chance to think through web literacy from the perspective of a badged learning pathway with some of the people who helped put together the Web Literacy Map.

    The feeling I get is that with version 2.0 we need to address both the issues we put to one side for the sake of expediency, as well as the issues that have cropped up since then. I can name at least five (not listed in any particular order):

    • Identity
    • Storytelling
    • Protecting the web (e.g. Net Neutrality)
    • Mobile
    • Computer Science

    We’re generally happy with the 15 competencies identified in v1.1 of the Web Literacy Map, and we’ve built resources and badges on top of them. Version 2.0, therefore, is likely to be more about evolution than revolution.

    If you’ve got any thoughts on this, please do add them to this thread. Alternatively, I’m @dajbelshaw on Twitter and you can email me at doug@mozillafoundation.org

    Adam LoftingOverlapping types of contribution

    TL;DR: Check out this graph!

    Ever wondered how many Mozfest Volunteers also host events for Webmaker? Or how many code contributors have a Webmaker contributor badge? Now you can find out.

    The reason the MoFo Contributor dashboard we’re working from at the moment is called our interim dashboard is that it combines numbers from multiple data sources, but the number of contributors is not de-duped across systems.

    So if you’re counted as a contributor because you host an event for Webmaker, you will be double counted if you also file bugs in Bugzilla. And until now, we haven’t known what those overlaps look like.

    This interim solution wasn’t perfect, but it’s given us something to work with while we’re building out Baloo and the cross-org areweamillionyet.org (and by ‘we’, the vast credit for Baloo is due to our hard working MoCo friends Pierros and Sheeri).

    To help with prepping MoFo data for inclusion in Baloo, and by generally being awesome, JP wired up an integration database for our MoFo projects (skipping a night of sleep to ship V1!).

    We’ve tweaked and tuned this in the last few weeks and we’re now extracting all sorts of useful insights we didn’t have before. For example, this integration database is behind quite a few of the stats in OpenMatt’s recent Webmaker update.

    The downside to this is that we will soon have a de-duped number for our dashboard, which will be smaller than the current number. That will feel like a bit of a downer, because we’ve been enthusiastically watching that number go up as we’ve built out contribution tracking systems throughout the year.

    But, a smaller more accurate number is a good thing in the long run, and we will also gain new understanding about the multiple ways people contribute over time.

    We will be able to see how people move around the project, and find that what looks like someone ‘stopping’ contributing, might be them switching focus to another team, for example. There are lots of exciting possibilities here.

    And while I’m looking at this from a metrics point of view today, the same data allows us to make sure we say hello and thanks to any new contributors who joined this week, or to reach out and talk to long running active contributors who have recently stopped, and so on.

    Pete MooreWeekly review 2014-08-21

    Highlights since last review

    • Wrote Android Play Store code, got r+ from Rail
    • Set up staging environment, staging release hopefully today
    • Solved pip install problems

    Goals for next week:

    • Get back to vcs sync work

    Bugs I created since last review:

    Other bugs I updated since last review:

    Marco ZeheBlog maintenance on Saturday

    On Saturday, August 23, starting at 9 AM GMT+02:00 (3 AM Eastern, midnight Pacific), this blog will undergo some much needed maintenance. Afterwards it will hopefully be faster, and also have a new theme. I’ll try to keep the interruption as brief as possible. But just in case, so you know. :)

    Peter BengtssonAggressively prefetching everything you might click

    I just rolled out a change here on my personal blog which I hope will make my few visitors happy.

    Basically; when you hover over a link (local link) long enough it prefetches it (with AJAX) so that if you do click it's hopefully already cached in your browser.

    If you hover over a link and almost instantly hover out it cancels the prefetching. The assumption here is that if you deliberately put your mouse cursor over a link and proceed to click on it you want to go there. Because your hand is relatively slow I'm using the opportunity to prefetch it even before you have clicked. Some hands are quicker than others so it's not going to help for the really quick clickers.

    What I also had to do was set a Cache-Control header of 1 hour on every page so that the browser can learn to cache it.
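
    For the record, on a Django-style site (an assumption about the stack; this is a rough sketch, not necessarily what actually runs here) that kind of header can be set with a simple decorator:

    # Illustrative only: add "Cache-Control: public, max-age=3600" to a view's
    # response so the browser is allowed to cache the rendered page for an hour.
    from django.views.decorators.cache import cache_control
    from django.shortcuts import render

    @cache_control(public=True, max_age=60 * 60)
    def blog_post(request, slug):
        # Render the page as usual; the decorator patches the response headers.
        return render(request, "blog/post.html", {"slug": slug})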

    The effect is that when you do finally click the link, by the time your browser loads it and changes the rendered output it'll hopefully be able to render it from its cache and thus it becomes visually ready faster.

    Let's try to demonstrate this with this horrible animated gif:
    (or download the screencast.mov file)

    Screencast
    1. Hover over a link (in this case the "Now I have a Gmail account" from 2004)
    2. Notice how the Network panel preloads it
    3. Click it after a slight human delay
    4. Notice that when the clicked page is loaded, it's served from the browser cache
    5. Profit!

    So the code that does this is quite simple:

    $(function() {
      var prefetched = [];
      var prefetch_timer = null;
      $('div.navbar, div.content').on('mouseover', 'a', function(e) {
        var value = e.target.attributes.href.value;
        if (value.indexOf('/') === 0) {
          if (prefetched.indexOf(value) === -1) {
            if (prefetch_timer) {
              clearTimeout(prefetch_timer);
            }
            prefetch_timer = setTimeout(function() {
              $.get(value, function() {
                // necessary for $.ajax to start the request :(
              });
              prefetched.push(value);
            }, 200);
          }
        }
      }).on('mouseout', 'a', function(e) {
        if (prefetch_timer) {
          clearTimeout(prefetch_timer);
        }
      });
    });
    

    Also, available on GitHub.

    I'm excited about this change because of a couple of reasons:

    1. On mobile, where you might be on a non-wifi data connection you don't want this. There you don't have the mouse event onmouseover triggering. So people on such devices don't "suffer" from this optimization.
    2. It only downloads the HTML which is quite light compared to static assets such as pictures but it warms up the server-side cache if needs be.
    3. It's much more targeted than a general prefetch meta header.
    4. Most likely content will appear rendered to your eyes faster.

    Nicholas Nethercotemozilla::pkix ships in Firefox!

    In April, we announced an upcoming certificate verification library designed from the ground up to be fast and secure. A few weeks ago, this new library – known as “mozilla::pkix” – shipped with Firefox and is enabled by default. Please see the original announcement for more details.
    Along with using more verifiably secure coding practices, we took the opportunity to closely adhere to the X.509 certificate verification specifications for the Internet. For example, we prevent certificates from being misused in ways that legacy libraries often do not. This protects user data and promotes an overall more secure Web.
    However, this sometimes comes at a compatibility cost. Some certificates issued by certificate authorities not in Mozilla’s Root CA program may no longer work in the same way. We are currently evaluating how we can best balance security with usability with regard to these certificates.
    If you encounter compatibility issues, please read the Certificate Primer which contains information for creating a compatible certificate hierarchy.

    David BoswellQuality over Quantity

    I was in Portland last week for a work week and Michelle recommended that I try the donuts at Blue Star. The blueberry donut was really great. The inside of the bakery was interesting too—right inside the doors was a big mural that said ‘Quality over Quantity’.

    20140812_085436

    That turned out to be a good summary of the work week. We were checking in on progress toward this year’s goal to grow the number of active contributors by 10x and also thinking about how we could increase the impact of our community building work next year.

    One clear take-away was that community building can’t be all about growth. Some teams, like Location Service, do need large numbers of new active contributors, but many teams don’t. For instance, localization needs to develop the active contributors already in the project into core contributors that can take on a bigger role.

    For me, creating a draft framework that would give us more ways to support teams and communities was the most important thing we did—in addition to taking a great team photo :)

    cbt_portland_photo_fun

    Growth is part of this framework, but it includes other factors for us to look at to make sure that we’re building healthy functional and regional communities. The health measures we think we should be focusing on next year are:

    • Retention (how many contributors are staying and leaving)
    • Growth (how many new contributors are joining)
    • Development (how many contributors are getting more deeply involved in a project)
    • Sentiment (how do contributors feel about being involved)
    • Capacity (how are teams increasing their ability to build communities)

    Having this more nuanced approach to community building will create more value because it aligns better with the needs we’re seeing across Mozilla. The growth work we’ve done has been critical to getting us here and we should continue that along with adding more to what we offer.

    scubidiver_video_poster

    There is a video that Rainer just posted that has a story Chris Hofmann told at last year’s summit about one contributor that had a huge impact on the project. This is a great example of how we should be thinking more broadly about community building.

    We should be setting up participation systems that let us help teams build long-lasting relationships with contributors like Scoobidiver as well as helping teams connect with large numbers of people to focus on an issue for a short time when that is what’s needed.

    Moral of this story: Eat more donuts—they help you think :)


    Vladimir VukićevićUpdated Firefox VR Builds

    I’d like to announce the third Firefox Nightly build with experimental VR support. Download links:

    This build includes a number of fixes to CSS VR rendering, as well as some API additions and changes:

    • Fixed CSS rendering (see below for more information)
    • Support for DK2 via 0.4.1 SDK (extended mode only)
    • Experimental auto-positioning on MacOS X — when going fullscreen, the window should move itself to the Rift automatically
    • hmd.setFieldOfView() now takes zNear and zFar arguments
    • New API call: hmd.getRecommendedEyeRenderRect() returns the suggested render dimensions for a given eye; useful for WebGL rendering (see below)

    The DK2 Rift must be in Extended Desktop mode. You will also need to rotate the Rift’s display to landscape. If tracking doesn’t seem to be working, stop the Oculus service using the Configuration Tool first, then launch Firefox.

    CSS Rendering

    Many issues with CSS rendering were fixed in this release. As part of this, the coordinate space when in fullscreen VR is different than normal CSS. When in fullscreen VR mode, the 0,0,0 coordinate location refers to the center of the viewport (and not the top left as is regular in CSS). Additionally, the zNear/zFar values specified to setFieldOfView control the near and far clipping planes.

    The coordinate units are also not rationalized with CSS coordinates. The browser applies a per-eye transform in meters (~ 0.032 meters left/right, or 3.2cm) before rendering the scene; thus the coordinate space ends up being ~1px = ~1m in real space, which is not correct. This will be fixed in the next release.

    Here’s a simple example of showing 4 CSS images on all sides around the viewer, along with some text. The source includes copious comments about what’s being done and why.

    Known issues:

    • The Y axis is flipped in the resulting rendering. (Workaround: add a rotateZ() to the camera transform div)
    • The initial view doesn’t face the same direction as CSS (Workaround: add a rotateY() to the camera transform div)
    • Manual application of the HMD orientation/position is required.
    • Very large CSS elements (>1000px in width/height) may not be rendered properly
    • Units are not consistent when in VR mode

    getRecommendedEyeRenderRect()

    NOTE: This API will likely change (and become simpler) in the next release.

    getRecommendedEyeRenderRect will return the rectangle into which each eye should be rendered, and the best resolution for the given field of view settings. To create an appropriately sized canvas, the size computation should be:

    var leftRect = hmd.getRecommendedEyeRenderRect("left");
    var rightRect = hmd.getRecommendedEyeRenderRect("right");
    var width = leftRect.x + Math.max(leftRect.width, rightRect.x) + rightRect.width;
    var height = Math.max(leftRect.y, rightRect.y) + Math.max(leftRect.height, rightRect.height);
    

    In practice, leftRect.x will be 0, and the y coordinates will both be 0, so this can be simplified to:

    var width = leftRect.width + rightRect.width;
    var height = Math.max(leftRect.height, rightRect.height);
    

    Each eye should be rendered into the leftRect and rightRect coordinates. This API will change in the next release to make it simpler to obtain the appropriate render sizes and viewports.

    Comments and Issues

    As before, issues are welcome via GitHub issues on my gecko-dev repo. Additionally, discussion is welcome on the web-vr-discuss mailing list.

    Christian HeilmannNo more excuses – subtitle your YouTube videos

    I was just very pleasantly surprised that the subtitling interface in YouTube has gone leaps and bounds since I last looked at it.

    One of the French contributors to Mozilla asked me to get subtitles for the Flame introduction videos and I felt that sense of dread you get when requests like those come in. It seems like a lot of work for not much gain.

    However, using the YouTube auto captioning tool this is quite a breeze:

    subtitling-interface

    I just went to the Subtitles and CC tab and told YouTube that the video is English. Almost immediately (this is kind of fishy – does YouTube already create text from speech for indexing reasons?) I got a nice set of subtitles, time-stamped and all.

    Hitting the edit button I was able to edit the few mistakes the recognition made and it was a simple process of listening as you type. I then turned on the subtitles and exported the SRT files for translation.

    I was very impressed with the auto-captioning as I am not happy with the quality of my talking in those videos (they were rushed and the heartless critic in me totally hears that).

    Of course, there is also Amara as a full-fledged transcribing, captioning and translation tool, but there are not many excuses left for us not to subtitle our short videos.

    Let’s not forget that subtitles are amazing and not only a tool for the hard of hearing:

    • I don’t have to put my headphones in when watching your video in public – I can turn off the sound and not annoy people in the cafe
    • As a non-native speaker they are great to learn a new language (I learned English watching Monty Python’s Flying Circus with subtitles – the only program that did that back then in Germany. This might explain a few things)
    • You can search a video by content without having to know the time stamp and you can provide the subtitles as a transcript in a post
    • You help people with various disabilities to make your work understandable.

    Go, hit that Subtitles tab!

    Daniel StenbergThe “right” keyboard layout

    I’ve never considered myself very picky about the particular keyboard I use for my machines. Sure, I work full-time and spare time in front of the same computer and thus I easily spend 2500-3000 hours a year in front of it but I haven’t thought much about it. I wish I had some actual stats on how many key-presses I do on my keyboard on an average day or year or so.

    Then, on one of the hot days this summer, I left the roof window above my work place open a little too much when a very intense rain storm hit our neighborhood while I was away for a brief moment. To put it shortly, the huge amount of water that poured in luckily destroyed only one piece of electronics for me: my trusty old keyboard. The keyboard I had just randomly picked from some old computer, without any consideration, a bunch of years ago.

    So the old one was dead, and I just picked another keyboard I had lying around.

    But man, very soft rubber-style keys are really annoying to work with. Then I picked another one with a weird layout and a control key that required a little too much pressure to be comfortable. So, my race for a good enough keyboard had begun. Obviously I couldn’t just pick a random cheap new one and be happy with it.

    Nordic key layout

    That’s what they call it. It is even a Swedish layout, which among a few other details means it features å, ä and ö keys at a rather prominent place. See illustration. Those letters are used fairly frequently in our language. We have a few peculiarities in the Swedish layout that are downright impractical for programming, like how the {[]} symbols all require AltGr to be pressed, and slash, asterisk and underscore require Shift to be pressed, etc. Still, I've learned to program on such a layout so I'm quite used to those odd choices by now…

    kb-nordic

    Cursor keys

    I want the cursor keys to be of “standard size”, have the correct location and relative positions. Like below. Also, the page up and page down keys should not be located close to the cursor keys (like many laptop keyboards do).

    keyboard with marked cursorkeys

    Page up and down

    The page up and page down keys should instead be located in the group of six keys above the cursor keys. The group should have a little gap between it and the three keys (print screen, scroll lock and pause/break) above them so that finding the upper row is easy and quick without looking.

    page up and down keys

    Backspace

    I’m not really a good keyboard typist. I do a lot of mistakes and I need to use the backspace key quite a lot when doing so. Thus I’m a huge fan of the slightly enlarged backspace key layout so that I can find and hit that key easily. Also, the return key is a fairly important one so I like the enlarged and strangely shaped version of that as well. Pretty standard.

    kb-backspace

    Further details

    The Escape key should have a little gap below it so that I can find it easily without looking.

    The Caps lock key is completely useless, as locking caps is not something a normal person does, but it can be reprogrammed for other purposes. I’ve still refrained from doing so, mostly to not get accustomed to “weird” setups that make it (even) harder for me to move between different keyboards at different places. Just recently I’ve configured it to work as ctrl – let’s see how that works out.

    The F-keys are pretty useless. I use F5 sometimes to refresh web pages but as ctrl-r works just as well I don’t see a strong need for them in my life.

    Numpad – a completely useless piece of the keyboard that I would love to get rid of – I never use any of those keys. Never. Unfortunately I haven’t found any otherwise decent keyboards without the numpad.

    Func KB-460

    The Func KB-460 is the keyboard I ended up with this time in my search. It has some fun extra cruft such as two USB ports and a red backlight (that can be made to pulse). The backlight gave me extra points from my kids.

    Func KB-460 keyboard

    It is “mechanical” which obviously is some sort of thing among keyboards that has followers and is supposed to be very good. I remain optimistic about this particular model, even if there are a few minor things with it I haven’t yet gotten used to. I hope I’ll just get used to them.

    This keyboard has Cherry MX Red linear switches.

    How it could look

    Based on my preferences and what keys I think I use, I figure an ideal keyboard layout for me could very well look like this:

    my keyboard layout

    Keyfreq

    I have decided to go further and “scientifically” measure how I use my keyboard, which keys I use the most and similar data and metrics. Turns out the most common keylog program on Linux doesn’t log enough details, so I forked it and created keyfreq for this purpose. I’ll report details about this separately – soon.
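
    As a toy illustration of the kind of tally I'm after (this is not keyfreq itself, and the one-key-name-per-line log format is made up just for the example), counting the most common keys could be as simple as:

    # Hypothetical example: keypresses.log contains one key name per line;
    # tally which keys occur most often.
    from collections import Counter

    def top_keys(log_path, n=10):
        with open(log_path) as f:
            counts = Counter(line.strip() for line in f if line.strip())
        return counts.most_common(n)

    if __name__ == "__main__":
        for key, hits in top_keys("keypresses.log"):
            print(key, hits)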

    Byron Joneshappy bmo push day!

    the following changes have been pushed to bugzilla.mozilla.org:

    • [1047405] Comment tagging GUI not fully localizable because of text in Javascript instead of template
    • [1048712] comment tagging suggestions always returns a single result
    • [1054795] remove ‘Bugzilla Data For Researchers’ link
    • [1050230] Use better icons for the guided bug entry product selection to differentiate Fx, Fx for Android and FxOS
    • [1022707] Duplicate review flags on attachments in Toolkit and Firefox for Metro
    • [1050628] flag state API doesn’t honour bug or attachment security
    • [1055945] splinter generates “Use of uninitialized value” warnings when dealing with public reviews on private attachments

    discuss these changes on mozilla.tools.bmo.


    Filed under: bmo, mozilla

    Benjamin KerensaMozilla and Open Diversity Data

    I have been aware of the Open Diversity Data project for a while. It is the work of the wonderful members of Double Union and their community of awesome contributors. Recently, a Mozillian tweeted that Mozilla should release its Diversity Data. It is my understanding also that a discussion happened internally and for whatever reason a […]

    Mozilla Release Management TeamFirefox 32 beta7 to beta8

    • 20 changesets
    • 52 files changed
    • 363 insertions
    • 162 deletions

    Extension (occurrences):
    • cpp: 17
    • js: 9
    • h: 9
    • ini: 2
    • xul: 1
    • xml: 1
    • xhtml: 1
    • webidl: 1
    • py: 1
    • mm: 1
    • css: 1

    Module (occurrences):
    • content: 15
    • js: 8
    • browser: 8
    • netwerk: 3
    • toolkit: 2
    • testing: 2
    • dom: 2
    • modules: 1
    • mobile: 1
    • editor: 1
    • accessible: 1

    List of changesets:

    Ryan VanderMeulenBug 1023472 - Disable test_bug935876.html on Android for perma-failing when pushed to a different chunk; a=bustage - 1764a68fe1ae
    Ryan VanderMeulenBug 1054087 - Disable test_dom_input_event_on_htmleditor.html on Android 2.3 for perma-failing since the number of Android mochitest chunks was increased; a=bustage - ef94af3dd0ad
    Jon CoppeardBug 999158 - Keep a spare chunk around to mitigate GGC OOM crashes on tenuring. r=terrence, a=lmandel - 97fd0156fdc2
    Ryan VanderMeulenBug 1026805 - Disable frequently-hanging mozapps tests on OSX. a=test-only - 76f7c4f771f5
    Matthew NoorenbergheBug 1054411 - Cancel the HTTP requests in browser_keywordSearch.js to avoid making network contact. r=adw, a=test-only - 6dec02f8d0ea
    Florian QuèzeBug 1048375 - browser_aboutHome.js intermittently causes external requests to snippets.mozilla.com. r=gavin, a=test-only - 8e09aad61a79
    Randell JesupBug 1054166: Mirror Add/RemoveListener in Add/RemoveDirectListener r=roc a=abillings - 6a2810252cf8
    Simon MontaguBug 1037641 - Split SetDirectionFromChangedTextNode into TextNodeWillChangeDirection and TextNodeChangedDirection. r=ehsan, a=abillings - 9e94aa2f0ae7
    Brian HackettBug 1053683 - Add overrecursion checks to FillInBMInfo. r=jandem, a=abillings - c6e134b4ed52
    Ed LeeBug 1039881 - Use an empty directory tiles data source pref before uplift [r=adw r=bholley a=lmandel] - 6790f9333fec
    Wes JohnstonBug 910893 - Don't disable the try again button. r=margaret, r=benb, a=lmandel - 7bb962c117df
    Valentin GosuBug 1045886 - Remove Cache directory from Android profiles. r=michal, a=lmandel - 07eb5ce30325
    Valentin GosuBug 1045886 - Increase assertion count in test_bug437844.xul. a=test-only - c444cb84a78b
    Jan de MooijBug 1054359 - Add is-object check to IonBuilder::makeCallHelper. r=efaust, a=lmandel - f5bfa8f3434c
    Jared WeinBug 1016434 - Backout Bug 759252 from Firefox 32 and Firefox 33 for causing blurry throbbers. a=lmandel - 3741e9a5c6ca
    Jean-Yves AvenardBug 1045591 - Fix media element's autoplay for audio-only stream. r=cpearce, a=lmandel - f595bdcdbd1e
    Alessio PlacitelliBug 1037214 - Throw OOM to the script instead of aborting in FragmentOrElement::GetTextContentInternal. r=bz, a=lmandel - 353ade05d903
    Ed MorleyBug 1026987 - Give the MOZ_DISABLE_NONLOCAL_CONNECTIONS error a TBPL-parsable prefix. r=froydnj, a=NPOTB - 92aead6bd5fb
    Andrew McCreightBug 1039633 - Always try to set the ASan symbolizer in gtest runs. r=ted, a=test-only - e0e150f31ffe
    Tooru FujisawaBug 1053692 - Do not use optimized stub for spread call with many arguments. r=jandem, a=lmandel - 45953c4613d2