Benjamin Kerensa: Calling Out OkCupid

OkCupid Blocks Firefox

So the other day, Indiana's governor signed into law a bill passed by the Republican-controlled legislature called the Religious Freedom Restoration Act. The reality of this bill is that it has nothing to do with freedom of religion and everything to do with legalizing discrimination.

Anyway, to the point: I hate to open a can of worms, but when I heard this news I thought back to this same time last year and remembered how gung-ho OkCupid was about Mozilla's appointment of Brendan Eich because of his personal beliefs, to the point that they ultimately decided to block all Firefox users.

I don’t really think OkCupid should block Indiana, but their lack of even a public tweet or statement in opposition to this legislation leads me back to my original conclusion: they were just riding the media train for their own benefit, not acting out of support for the LGBT community.

If you are going to be about supporting the LGBT community, try to at least be consistent in that support and not just do it when it will make you look good in the media!

Air Mozilla: Testday Regional en Español

Testday Regional en Español. Spanish: The Mozilla Hispano QA team will give all Spanish-speaking users an introduction to the Quality Assurance area, as...

Mark Surman: MoFo March 2015 Board Meeting

What’s happening at the Mozilla Foundation? This post contains the presentation slides from our recent Board Meeting, plus an audio interview that Matt Thompson did with me last week. It provides highlights from 2014, a brief summary of Mozilla’s 2015 plan and a progress report on what we’ve achieved over the past three months.

I’ve also written a brief summary of notes from the slides and interview below if you want a quick scan. These are also posted on the Webmaker blog.

What we did in 2014

  • Grew contributors and ground game. (10,077 active contributors total.)
  • Prototyped new Webmaker mobile product
  • Expanded community programs by 3x

March 2015 Board Deck - Share.004

March 2015 Board Deck - Share.005 March 2015 Board Deck - Share.007

Mozilla’s 2015 Plan

Mozilla-wide goals: grow long-term relationships that help people and promote the open web. By building product and empowering people.

Webmaker+ goal: Expand participation in Webmaker through new software and on the ground clubs.

Building Mozilla Learning

By 2017, we’ve built Mozilla Learning: a global classroom and lab for the citizens of the web. Part community, part academy, people come to Mozilla Learning to unlock the power of the web for themselves, their organizations and the world.

2015 Mozilla Foundation goals

  • Deepen learning networks (500 cities)
  • Build mass appeal learning product (250k Monthly Active Users)
  • Craft ambitious Mozilla Learning and community strategy

Q1 Mozilla Foundation highlights

  • Major victory in US net neutrality, with Mozilla getting 330k people to sign a petition.
  • Launched Webmaker app at Mobile World Congress. Strong interest from partners, possible link to Orange FirefoxOS launch in Africa and Middle East.

March 2015 Board Deck - Share.019 March 2015 Board Deck - Share.020 March 2015 Board Deck - Share.021


Filed under: mozilla

William Lachance: Perfherder update: Summary series drilldown

Just wanted to give another quick Perfherder update. Since last time, I’ve added summary series (which is what Graphserver shows you), so we now have (in theory) the best of both worlds when it comes to Talos data: aggregate summaries of the various suites we run (tp5, tart, etc.), with the ability to dig into individual results as needed. This kind of analysis wasn’t possible with Graphserver, and I’m hopeful it will help us track down the root causes of Talos regressions more effectively.

Let’s look at an example of where this might be useful by showing how it can highlight problems. Recently we tracked a regression in the Customization Animation Tests (CART) suite from the commit in bug 1128354. Using Mishra Vikas‘s new “highlight revision mode” in Perfherder (combined with the revision hash of the push to inbound that introduced the regression), we can quickly zero in on its location:

Screen Shot 2015-03-27 at 3.18.28 PM

It does indeed look like things ticked up after this commit for the CART suite, but why? By clicking on the datapoint, you can open up a subtest summary view beneath the graph:

Screen Shot 2015-03-27 at 2.35.25 PM

We see here that it looks like the 3-customize-enter-css.all.TART entry ticked up a bunch. The related test 3-customize-enter-css.half.TART ticked up a bit too. The changes elsewhere look minimal. But is that a trend that holds across the data over time? We can add some of the relevant subtests to the overall graph view to get a closer look:

Screen Shot 2015-03-27 at 2.36.49 PM

As is hopefully obvious, this confirms that the affected subtest continues to hold its higher value while another test just bounces around more or less in the range it was before.

Hope people find this useful! If you want to play with this yourself, you can access the perfherder UI at http://treeherder.mozilla.org/perf.html.

Monty Montgomery: Today's WTF Moment: A Competing HEVC Licensing Pool

Had this happened next week, I'd have thought it was an April Fools' joke.

Out of nowhere, a new patent licensing group just announced it has formed a second, competing patent pool for HEVC that is independent of MPEG LA. And they apparently haven't decided what their pricing will be... maybe they'll have a fee structure ready in a few months.

Video on the Net (and let's be clear-- video's future is the Net) already suffers endless technology licensing problems. And the industry's solution is apparently even more licensing.

In case you've been living in a cave, Google has been trying to establish VP9 as a royalty- and strings-free alternative (new version release candidate just out this week!), and NetVC, our own next-next-generation royalty-free video codec, was just conditionally approved as an IETF working group on Tuesday and we'll be submitting our Daala codec as an input to the standardization process. The biggest practical question surrounding both efforts is 'how can you possibly keep up with the MPEG behemoth'?

Apparently all we have to do is stand back and let the dominant players commit suicide while they dance around Schroedinger's Cash Box.

Patrick McManus: Opportunistic Encryption For Firefox

Firefox 37 brings more encryption to the web through opportunistic encryption of some http:// based resources. It will be released the week of March 31st.

OE provides unauthenticated encryption over TLS for data that would otherwise be carried as cleartext. This creates some confidentiality in the face of passive eavesdropping, and also provides much better integrity protection for your data than raw TCP does when dealing with random network noise. The server setup for it is trivial.

These are indeed nice bonuses for http:// - but it still isn't as nice as https://. If you can run https you should - full stop. Don't make me repeat it :) Only https protects you from active man in the middle attackers.

But if you have a long tail of legacy content that you cannot yet migrate to https, commonly due to mixed-content rules and interactions with third parties, OE provides a mechanism for encrypted transport of http:// data. That's a strict improvement over the cleartext alternative.

Two simple steps to configure a server for OE
  1. Install a TLS based h2 or spdy server on a separate port. 443 is a good choice :). You can use a self-signed certificate if you like because OE is not authenticated.
  2. Add a response header Alt-Svc: h2=":443", or spdy/3.1 if you are using a spdy-enabled server like nginx. (A rough sketch of a server doing this follows below.)
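
To make the mechanics concrete, here is a minimal sketch of a cleartext server advertising an alternative service. This is illustrative only: the handler, hostname, and port binding are assumptions, and a real deployment would add the header in your existing web server config, with a separate h2-capable TLS listener already running on port 443.

# Minimal sketch: a cleartext HTTP server that advertises an HTTP/2
# alternative service on port 443 via the Alt-Svc response header.
# Assumes a separate h2-capable TLS server is already listening on :443.
from http.server import HTTPServer, BaseHTTPRequestHandler

class AltSvcHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<p>Served over http:// for now.</p>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        # Capable clients (e.g. Firefox 37+) may verify the h2 service on
        # port 443 and route future http:// requests there, encrypted.
        self.send_header("Alt-Svc", 'h2=":443"')
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Binding port 80 typically requires elevated privileges.
    HTTPServer(("", 80), AltSvcHandler).serve_forever()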
When the browser consumes that response header, it will start to verify that there is an HTTP/2 service on port 443. When a session with that port is established, it will start routing the requests it would normally send in cleartext to port 80 onto port 443 with encryption instead. There will be no delay in responsiveness because the new connection is fully established in the background before being used. If the alternative service (port 443) becomes unavailable or cannot be verified, Firefox will automatically return to using cleartext on port 80. Clients that don't speak the right protocols just ignore the header and continue to use port 80.

This mapping is saved and used in the future. It is important to understand that while the transaction is being routed to a different port, the origin of the resource hasn't changed (i.e. if the cleartext origin was http://www.example.com:80, then the origin, including the http scheme and the port 80, remains unchanged even if the request is routed to port 443 over TLS). OE is not available with HTTP/1 servers because that protocol does not carry the scheme as part of each transaction, which is a necessary ingredient for the Alt-Svc approach.

You can control how long the Alt-Svc mappings last, along with some other details. The Internet-Draft is a helpful reference. As the technology matures we will be tracking it; the recent HTTP working group meeting in Dallas decided this was ready to proceed to last-call status in the working group.

Geoffrey MacDougall: Me+Next

I’ve let my long-time friend and Executive Director, Mark Surman, know that April 10th will be my last day as an employee of Mozilla.

The last 5 years have been an amazing ride. I’m proud of what we’ve accomplished.

There are learning, fundraising, and advocacy programs where there weren’t before. We’re empowering hundreds of thousands of people to teach each other the web. We’ve built a $15M/y fundraising program from scratch. And we’ve helped Mozilla find its voice again, playing a lead role in the most significant grassroots policy victory in a generation and the largest ever in telecommunications: the battle for net neutrality.

I’m grateful to Mark and Mitchell Baker for the opportunity and trust to help build something great, to my colleagues for their focus and dedication, and to all of Mozilla for fighting the good fight.

While the 10th will be my last day as an employee, I’ll be around until the end of June as a consultant, helping with the transition of my portfolio to new leadership. I’ll announce my new home closer to that time.

For now, as always, once a Mozillian always a Mozillian.

Thanks again to all of you. I’m looking forward to seeing what we accomplish next.


Filed under: Mozilla

Mike Conley: Things I’ve Learned This Week (March 23 – 27, 2015)

This is the first post in a weekly series, where I’m going to attempt to distill down my week into some lessons or facts I’ve picked up. Maybe they’ll be interesting to others. We’ll see.

  1. Gecko Media Plugins are used both for WebRTC (the Open H.264 encoding stuff runs inside a GMP) and to hold CDMs for EME. That’s a lot of TLAs!1
  2. This little notch I saw on the caret on my development build was because I had bidi.browser.ui set to true for some reason. It’s the “bidi caret”:
    Bidi Caret
  3. People hacking on the platform are supposed to avoid using the NS_ENSURE_* macros, according to this.2 I originally learned this by reading cpearce’s review of a patch.

So let’s see if I can keep this up for a few weeks. Maybe I’ll get a collection of useful stuff by the end of the experiment!


  1. Three Letter Acronyms 

  2. It says:

    Previously the NS_ENSURE_* macros were used for this purpose, but those macros hide return statements and should not be used in new code.

     

Mike Conley: The Joy of Coding (Episode 7): Code review, and a Regression

In this episode, I started with some code review. I was reviewing a patch to make the Findbar (particularly, the Find As You Type feature) e10s-friendly.

With that review out of the way, I had to swap a bunch of information about the plugin crash UI for e10s in my head – and in particular, some non-determinism that we have to handle. I explained that stuff (and hopefully didn’t spend too much time on it).

Then, I showed how far I’d gotten with the plugin crash UI for e10s. I was able to submit a crash report, but I found I wasn’t able to type into the comment text area.

After a while, I noticed that I couldn’t type into the comment text area on Nightly, even without my patch. And then I reproduced it in Aurora. And then in Beta. Luckily, I couldn’t reproduce it in Release – but with Beta transitioning to Release in only a few days, I didn’t have a lot of time to get a bug on file to shine some light on it.

Luckily, our brilliant Steven Michaud was on the case, and has just landed a patch to fix this. Talk about fast work!

Episode Agenda

References:
Bug 1133981 – [e10s] Stop sending unsafe CPOWs after the findbar has been closed in a remote browser

Bug 1110887 – With e10s, plugin crash submit UI is broken

Notes

Bug 1147521 – Cannot type into comment area of plugin crash UI

Mozilla Reps Community: Reps Weekly Call – March 26th 2015

Last Thursday we had our weekly call about the Reps program, where we talk about what’s going on in the program and what Reps have been doing during the last week.

fossasia-2015

Summary

  • MozBalkans Applications.
  • Changes on Event and Report forms.
  • Community Education Update.
  • QA News and Events.
  • Firefox App Training.
  • HackOnMDN at Berlin-March-2015.

Detailed notes

AirMozilla video

Don’t forget to comment about this call on Discourse and we hope to see you next week!

Doug Belshaw: Today is my last day at Mozilla

TL;DR: I’m leaving Mozilla as a paid contributor because, as of next week, I’ll be a full-time consultant! I’ll write about that in a separate blog post.


Around four years ago, I stumbled across a project that the Mozilla Foundation was running with P2PU. It was called ‘Open Badges’ and it really piqued my interest. I was working in Higher Education at the time and finishing off my doctoral thesis. The prospect of being able to change education by offering a different approach to credentialing really intrigued me.

I started investigating further, blogging about it, and started getting more people interested in the Open Badges project. A few months later, the people behind MacArthur’s Digital Media and Learning (DML) programme asked me to be a judge for the badges-focused DML Competition. While I was in San Francisco for the judging process I met Erin Knight, then Director of Learning at Mozilla, in person. She asked if I was interested in working on her team. I jumped at the chance!

During my time at Mozilla I’ve worked on Open Badges, speaking and running keynotes at almost as many events as there are weeks in the year. I’ve helped bring a Web Literacy Map (originally ‘Standard’) into existence, and I’ve worked on various projects and with people who have changed my outlook on life. I’ve never come across a community with such a can-do attitude.

This June would have marked three years as a paid contributor to the Mozilla project. It was time to move on so as not to let the grass grow under my feet. Happily, because Mozilla is a global non-profit with a strong community that works openly, I’ll still be a volunteer contributor. And because of the wonders of the internet, I’ll still have a strong connection to the network I built up over the last few years.

I plan to write more about the things I learned and the things I did at Mozilla over the coming weeks. For now, I just want to thank all of the people I worked with over the past few years, and wish them all the best for the future. As of next week I’ll be a full-time consultant. More about that in an upcoming post!

Mike Taylor: Pasting into contenteditable elements in Firefox for Android, ~*wowowowowow*~

Bug 783846 landing in Nightly means that Firefox for Android users—starting in version 39—can finally paste into contenteditable elements, which is huge news for the mobile-html5-responsive-shadow-and-or-virtual-dom-contenteditable apps crowd, developers and users alike.

"That's amazing! Can I cut text from contenteditable elements too?", you're asking yourself. The answer is um no, because we haven't fixed that yet. But if you wanna help me get that working come on over to bug 1112276 and let's party. Or write some JS and fix the bug or whatever.

Fun fact, when I told my manager I was working on this bug in my spare time he asked, "…Why?". 🎉

Nicholas Nethercote: On vacation for a month

I’m taking a month of vacation. Today is my last working day for March, and I will be back on April 30th. While I won’t be totally incommunicado, for the most part I won’t be reading email. While I’m gone, any management-type inquiries can be passed on to Naveed Ihsannullah.

Raniere Silva: MathML March Meeting

MathML March Meeting

This is a report about the Mozilla March IRC Meeting (see the announcement here). The topics of the meeting can be found in this PAD (local copy of the PAD) and the IRC log (local copy of the IRC log) is also available.

The next meeting will be on April 8th at 8pm UTC (check the time at your location here). Please add topics in the PAD.

Read more...

Air Mozilla: VR Cinema Meetup #3

VR Cinema Meetup #3 VR Cinema's third event will showcase new and exciting films for virtual reality. Come see these groundbreaking projects before they're released to the public and...

Morgan Phillips: Whoop, Whoop: Pull Up!

Since December 1st, 1975, by FAA mandate, no plane has been allowed to fly without a "Ground Proximity Warning System" (GPWS) or one of its successors.[1] For good reason, too: it has been estimated that 75% of the fatalities just one year prior (1974) could have been prevented using the system.[2]

In a slew of case studies, reviewers reckoned that a GPWS may have prevented crashes by giving pilots additional time to act before they smashed into the ground. Often, the GPWS's signature "Whoop, Whoop: Pull Up!" would have sounded a full fifteen seconds before any other alarms triggered.[3]

Instruments like this are indispensable to aviation because pilots operate in an environment outside of any realm where human intuition is useful. Lacking augmentation, our bodies and minds are simply not suited to the task of flying airliners.

For the same reason, thick layers of instrumentation and early warning systems are necessary for managing technical infrastructure. Like pilots, without proper tooling, system administrators often plow their vessels into the earth....

The St. Patrick's Day Massacre

Case in point, on Saint Patrick's Day we suffered two outages which could have likely been avoided via some additional alerts and a slightly modified deployment process.

The first outage was caused by the accidental removal of a variable from a config file which one of our utilities depends on. Our utilities are all managed by a dependency system called runner, and when any task fails the machine is prevented from doing work until it succeeds. This all-or-nothing behavior is correct, but should not lead to closed trees....

On our runner dashboards, the whole event looked like this (the smooth decline on the right is a fix being rolled out with ansible):


The second, and most severe, outage was caused by an insufficient wait time between retries upon failing to pull from our mercurial repositories.

There was a temporary disruption in service, and a large number of slaves failed to clone a repository. When this herd of machines began retrying the task it became the equivalent of a DDoS attack.

From the repository's point of view, the explosion looked like this:


Then, from runner's point of view, the retrying task:


In both of these cases, despite having the data (via runner logging), we missed the opportunity to catch the problem before it caused system downtime. Furthermore, especially in the first case, we could have avoided the issue even earlier by testing our updates and rolling them out gradually.

Avoiding Future Massacres

After these fires went out, I started working on a RelEng version of the Ground Proximity Warning System, to keep us from crashing in the future. Here's the plan:

1.) Bug 1146974 - Add automated alerting for abnormally high retries (in runner).

In both of the above cases, we realized that things had gone amiss based on job backlog alerts. The problem is, once we have a large enough backlog to trigger those alarms, we're already hosed.

The good news is, the backlog is preceded by a spike in runner retries. Setting up better alerting here should buy us as much as an extra hour to respond to trouble.
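
As a rough illustration of the kind of check this alerting boils down to (the log keyword, window, and threshold here are hypothetical assumptions for the sketch; the real data comes from the runner logging described below), a sliding-window retry counter could look like this:

from collections import deque
import sys
import time

WINDOW_SECONDS = 300     # look at the last five minutes of runner output
RETRY_THRESHOLD = 50     # hypothetical: alert when retries spike above this

recent_retries = deque()

def note_retry(timestamp):
    recent_retries.append(timestamp)
    # Drop retries that have fallen out of the window.
    while recent_retries and recent_retries[0] < timestamp - WINDOW_SECONDS:
        recent_retries.popleft()
    if len(recent_retries) > RETRY_THRESHOLD:
        print("ALERT: %d runner retries in the last %d seconds"
              % (len(recent_retries), WINDOW_SECONDS), file=sys.stderr)

# Feed it aggregated runner output, e.g. piped in from syslog.
for line in sys.stdin:
    if "retrying" in line.lower():
        note_retry(time.time())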

We're already logging all task results to influxdb, but alerting via that data requires a custom nagios script. Instead of stringing that together, I opted to write runner output to syslog, where it's being aggregated by papertrail.

Using papertrail, I can grep for runner retries and build alarms from the data. Below is a screenshot of our runner data in the papertrail dashboard:



2.) Add automated testing and tiered roll-outs to golden AMI generation

Finally, when we update our slave images the new version is not rolled out in a precise fashion. Instead, as old images die (3 hours after the new image releases) new ones are launched on the latest version. Because of this, every deploy is an all-or-nothing affair.

By the time we notice a problem, almost all of our hosts are using the bad instance and rolling back becomes a huge pain. We also do rollbacks by hand. Nein, nein, nein.

My plan here is to launch new instances with a weighted chance of picking up the latest AMI. As we become more confident that things aren't breaking -- by monitoring the runner logs in papertrail/influxdb -- we can increase the percentage.

The new process will work like this:
    00:00 - new AMI generated

    00:01 - new slaves launch with a 12.5% chance of taking the latest version.

    00:45 - new slaves launch with a 25% chance of taking the latest version.

    01:30 - new slaves launch with a 50% chance of taking the latest version.

    02:15 - new slaves launch with a 100% chance of taking the latest version.
Lastly, if we want to roll back, we can just lower the percentage down to zero while we figure things out. This also means that we can create sanity checks which roll back bad AMIs without any human intervention whatsoever.
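
Here is a rough sketch of what that weighted selection might look like (the function names and schedule are assumptions for illustration, not the actual deployment code; a real launcher would read its ramp-up percentages from configuration):

import random

# Hypothetical ramp-up schedule: minutes since the new AMI was generated,
# mapped to the chance that a freshly launched slave picks it up.
RAMP_UP = [(0, 0.125), (45, 0.25), (90, 0.5), (135, 1.0)]

def pick_ami(latest_ami, previous_ami, minutes_since_release):
    """Return the AMI a newly launched slave should use."""
    chance = 0.0
    for threshold, weight in RAMP_UP:
        if minutes_since_release >= threshold:
            chance = weight
    # Dropping every weight to zero is effectively a rollback: all new
    # slaves launch on the previous, known-good image.
    return latest_ami if random.random() < chance else previous_ami

# Example: roughly half of the slaves launched 95 minutes after release
# would pick up the new image.
# pick_ami("ami-new", "ami-old", 95)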

The intention being, any failure within the first 90 minutes will trigger a rollback and keep the doors open....

Daniel Pocock: WebRTC: DruCall in Google Summer of Code 2015?

I've offered to help mentor a Google Summer of Code student to work on DruCall. Here is a link to the project details.

The original DruCall was based on SIPml5 and released in 2013 as a proof-of-concept.

It was later adapted to use JSCommunicator as the webphone implementation. JSCommunicator itself was updated by another GSoC student, Juliana Louback, in 2014.

It would be great to take DruCall further in 2015. Here are some of the possibilities that are achievable in GSoC:

  • Updating it for Drupal 8
  • Support for logged-in users (currently it just makes anonymous calls, like a phone box)
  • Support for relaying shopping cart or other session cookie details to the call center operative who accepts the call

Help needed: could you be a co-mentor?

My background is in real-time and server-side infrastructure and I'm providing all the WebRTC SIP infrastructure that the student may need. However, for the project to have the most impact, it would also be helpful to have some input from a second mentor who knows about UI design, the Drupal way of doing things and maybe some Drupal 8 experience. Please contact me ASAP if you would be keen to participate either as a mentor or as a student. The deadline for student applications is just hours away but there is still more time for potential co-mentors to join in.

WebRTC at mini-DebConf Lyon in April

The next mini-DebConf takes place in Lyon, France on April 11 and 12. On the Saturday morning, there will be a brief WebRTC demo and there will be other opportunities to demo or test it and ask questions throughout the day. If you are interested in trying to get WebRTC into your web site, with or without Drupal, please see the RTC Quick Start guide.

Armen Zambrano: mozci 0.4.0 released - Many bug fixes and improved performance

For the release notes with all their hyperlinks, go here.

NOTE: I did a 0.3.1 release but the right number should have been 0.4.0

This release does not add any major features, however, it fixes many issues and has much better performance.

Many thanks to @adusca, @jmaher and @vaibhavmagarwal for their contributions.

Features:

  • An alltalos.py script has been added
  • Issue #69 - Generate graph of builds to testers
  • Added flake8 support - Remove pyflakes and pep8
  • Allow skipping revisions on a list (09f7138)
  • Issue #61 - Rename trigger_range.py to trigger.py

Fixes:

  • All the documentation and roadmap have been polished
  • Issue #90 - Do not trigger builds multiple times if we are intending the test jobs to be triggered multiple times
  • Issue #94 - Load list of repositories from disk only once
  • Issue #117 - gaia-try builders are always upstream builders
  • Determine a running job correctly (068b5ee)
  • Issue #142 - Loading buildjson files from disk is now only done once
  • Issue #135 - Remove buildjson files which have fallen out of date
  • Issue #146 - If the buildapi information about a build is corrupted, trigger that build again
  • Some DONTBUILD pushes can have buildapi support (dcb942f)
  • Issue #120 - Prevent triggering more build jobs than necessary


For all changes visit: 0.3.0...0.4.0


Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Air Mozilla: German speaking community bi-weekly meeting

German speaking community bi-weekly meeting. Zweiwöchentliches Meeting der deutschsprachigen Community (bi-weekly meeting of the German-speaking community).

Air Mozilla: Flowbox.io & Luna

Flowbox.io & Luna Woyciech Danilo from flowbox.io talks about their new programming language, Luna. Flowbox develops professional video compositing software, which is powered by a new programming language...

Mozilla Reps Community: New Rep Mentors, welcome!

Dear Reps Planet,

The council is excited to share with you our second group of new Mozilla Rep Mentors this year.

These are Reps the council has recognized as being as good at inspiring and empowering others as they are at leading globally and locally in their communities.

As mentorship is core to the program, we are very grateful they have agreed to take on this new responsibility.

A crucial role in the Mozilla Reps ecosystem is that of a mentor. We strive for every Rep to become a mentor for the program to become self-sustaining and for Reps to play a central role in our ambitious goals for growing and enabling the Mozilla Community. We’ve just accepted eight new mentors, bringing the current total to 54.

Our new mentors are:

Please join us in congratulating our new Mozilla Rep Mentors – via this thread on Discourse
:)

Reps Mentor Role Description:

  • Mozilla Reps recognizes that our primary goals are best reached through the support, encouragement, and empowerment of community through mentorship. Mentoring is a process for the informal transmission of knowledge, made possible through regular and supportive interaction.
  • We encourage mentors to be as open to learning from their mentees, as they are to teaching, for the benefit and growth of both individuals and the program as a whole.

Welcome New Reps Mentors!

Air Mozilla: Community Education Call

Community Education Call The Community Education Working Group exists to merge ideas, opportunities, efforts and impact across the entire project through Education & Training.

Monty Montgomery: Daala Blog-Like Update: Bug or feature? [or, the law of Unintentionally Intentional Behaviors]

Codec development is often an exercise in tracking down examples of "that's funny... why is it doing that?" The usual hope is that unexpected behaviors spring from a simple bug, and finding bugs is like finding free performance. Fix the bug, and things usually work better.

Often, though, hunting down the 'bug' is a frustrating exercise in finding that the code is not misbehaving at all; it's functioning exactly as designed. Then the question becomes a thornier issue of determining if the design is broken, and if so, how to fix it. If it's fixable. And the fix is worth it.

[continue reading at Xiph.Org....]

Matt Thompson: Mozilla Foundation March 2015 Board Meeting

What’s happening at the Mozilla Foundation? This post contains the presentation slides from our recent Board Meeting, plus an audio interview with Executive Director Mark Surman. It provides highlights from 2014, a brief summary of Mozilla’s 2015 plan, and a progress report on what we’ve achieved over the past three months.

What we did in 2014

  • Grew contributors and ground game. (10,077 active contributors total.)
  • Prototyped new Webmaker mobile product
  • Expanded community programs by 3x

March 2015 Board Deck - Share.004

March 2015 Board Deck - Share.005 March 2015 Board Deck - Share.007

Mozilla’s 2015 Plan

Mozilla-wide goals: grow long-term relationships that help people and promote the open web. By building product and empowering people.

Webmaker+ goal: Expand participation in Webmaker through new software and on the ground clubs.

Building Mozilla Learning

By 2017, we’ve built Mozilla Learning: a global classroom and lab for the citizens of the web. Part community, part academy, people come to Mozilla Learning to unlock the power of the web for themselves, their organizations and the world.

2015 Mozilla Foundation goals

  • Deepen learning networks (500 cities)
  • Build mass appeal learning product (250k Monthly Active Users)
  • Craft ambitious Mozilla Learning and community strategy

Q1 Mozilla Foundation highlights

  • Major victory in US net neutrality, with Mozilla getting 330k people to sign a petition.
  • Launched Webmaker app at Mobile World Congress. Strong interest from partners, possible link to Orange FirefoxOS launch in Africa and Middle East.

March 2015 Board Deck - Share.019

March 2015 Board Deck - Share.020

March 2015 Board Deck - Share.021

Gervase Markham: Top 50 DOS Problems Solved: Shrinking Hard Disk

Q: My hard disk seems to be getting smaller! There is a megabyte less free space than there was a month ago, yet I have not saved anywhere near 1MB’s worth of files. What’s going on?

A: This is quite a common problem, but most sufferers don’t realise they’ve got it. What happens is that some of the free space gets allocated to a non-existent file.

In other words the disk filing system has, in your case, a megabyte allocated to one or more files that don’t have a directory entry. They cannot therefore be seen with the DIR command, nor deleted.

Fortunately it is possible to turn these lost chains, as they are called, back into real files which can then be seen and deleted in the normal way. Simply type this command:

CHKDSK /F

If you have any lost chains, Chkdsk will tell you so and ask you if you want to convert them into files. Answer ‘Y’.

The recovered files appear in the root directory as FILE0000.CHK, FILE0001.CHK, FILE0002.CHK… and can then be examined or deleted in the normal way.

Mozilla Science Lab: Ask Us Anything: Lessons from the Local Research Coding Groups Panel Discussion

On Tuesday, the Science Lab conducted its first Ask Us Anything on the MSL Forum, organized by Noam Ross. The topic was lessons learned in running local study groups, users’ groups, hacky hours and other meetups for researchers using and writing code; many thanks go out to the seven panelists who were available to answer questions:

This was a tremendously successful event, with a sustained conversation of more than a post per minute on the topic for two full hours; a lot of interesting ideas came out of the discussion, a few of which I summarize here, followed by detailed discussion below; also, be sure to check out the full thread.

Summary

A few great ideas for study groups can be distilled from this event:

  • Get your event information in front of a few hundred people regularly; 10% attendance is normal.
  • Involve your institution in communicating about your event if possible.
  • Always have an exit strategy: how will you pass the baton to the next cohort of organizers?

And for online AMA-style events:

  • Make it a panel discussion; this creates the space for the community to ask questions, but keeps the thread lively and substantial with discussions between experienced community members.
  • Make it a time-limited event; this encourages active participation and re-creates some of the energy usually found only at conferences and in-person meetups.

Sticking Power

One of the first things the thread discussed was one of the most common problems for any regular community event: sustainability. How do we get people coming out and participating, month after month, and maintain momentum?

The panel quickly zeroed in on an interesting challenge: will interest be inspired and sustained by highly targeted skills and tools trainings, or will keeping things as general as possible appeal to a wide audience? Highly specific material will be the most attractive to the small group of people already interested in it, while general topics might seem vague or unclear to a potential attendee on how relevant they’ll be, even if they, in principle, apply to a wider range of people.

This led to an important observation: the bigger the pool of people a study group is communicating with, the better its attendance will be. Panelists seemed to have a bit more success with specific, clearly applicable material; what allowed these groups to keep attendance up despite getting into the nitty-gritty was developing a large audience of people aware of their activities. Numbers seemed to hover around 10% attendance if we compare the number of actual attendees to the size of mailing lists; but with a large audience (critical mass seemed to be around 200 people), there’s sure to be a cohort of people interested in whichever specific topic the group wants to take up.

But what about the early days, before a new group has gotten in front of that first 200? Fiona and Jeff made a key observation: stick to it, even if the first couple of events are just you and one or two other people. It takes time for word of mouth to spread, time for people to make up their minds that they’re comfortable dipping their toe into something like a meetup group – and, worst case, you’ve set aside some time to get some of your own work done and have a beer.

Finally on the topic of sustainability, another common concern that came up was the relationship of organizers to the host institution; post-docs and students move on after only a few short years, and without someone to pick up the torch, efforts can fizzle out. The panel agreed that it’s crucial for senior organizers to cultivate relationships with people who they can hand off to in future, but this calls out another key design question: how can we design a really smooth hand-off procedure between generations of organizers? This is a long term goal a bit beyond the concerns of groups just getting started, but I think with some savvy design, this process can be made quite smooth; more of my own ideas on this will be forthcoming on this blog very soon.

Communication

We need that pool of 200 people thinking about our event – how do we assemble them to begin with?

Organizers found, perhaps surprisingly, that their attendees were pretty quiet on Twitter, and didn’t generate much conversation there, although Twitter might be more effective as a ‘push’ platform, to let people know about events and content. More successful were blogs and mailing lists; panelists cited the familiarity of these formats to most researchers.

A novel approach that a few of the groups based at universities took was to approach departments for inclusion in departmental newsletters and welcome packages for new students. Not only do these communication channels typically already exist in most institutions, they can put a group in front of a large number of potentially interested people quickly, and lend a degree of inclusion into the establishment that helps catch people’s attention.

Novel Ideas

One thing I love about getting a bunch of people together to talk is that novel ideas always come out. One of my favorites was a whole other flavor of event that a study group could put on: Fiona Tweedie described ‘Research Speed Dating’, an event where a bunch of people set up short demos of tools they use in their research, and attendees circulate among them, exploring the tools in short, five-minute introductions to see if they might be interested in looking deeper into them at a future meetup. Topics that garner a lot of interest are chosen for deeper dives at future events, and prospective participants get to meet organizers and start developing connections in a relatively no-pressure atmosphere.

Another observation I found compelling from the discussion came from Rayna Harris – graduate school often involves working on the same project for years, and the singular focus can be maddening after a while. It’s really refreshing to have a project that comes in little, month-long bites; from announcing a meetup to delivering can easily occupy only a few weeks, giving a sense of delivery and completion on a much faster cadence than research naturally provides.

Meta-AMA

A number of people also asked me about the AMA format itself; I think it was a big success, and it was largely thanks to some design decisions Noam Ross made when we were setting this event up:

  • Have a panel of people to discuss the topic at hand. This worked very well, since even when there weren’t newcomers to ask questions, the panelists all talked amongst themselves, which led to some really deep and insightful questions and answers from old hands in the space. We had a seven-person panel, and everyone participated and seemed heard.
  • Put it all on one thread. I admit, I had some misgivings about having seven parallel conversations in one thread. It was about as chaotic as I imagined, but Noam was right, it was actually a good thing; it enhanced the panel’s ability to interact and make this as much a panel discussion as an AMA – call it an open panel discussion.

A remarkable thing about this event was that the same sort of skill and knowledge sharing that happens so naturally at a conference, and that I’ve been trying to produce online, came out here; by sitting a half dozen people down around a topic in a finite time window (we did two hours and it didn’t drag at all), the same sort of connections and mutual understanding emerged.

Conclusion

A number of interesting ideas, metrics and goals for study groups came out of this conversation, which we’ll be folding in to our forthcoming support for setting up your own meetup – watch this space for news and opportunities in that project coming very soon, and in the meantime, make sure your local study group is on the map!

Map of Study Groups & Hacky Hours

Given what a great time and what a productive discussion everyone had on the forum on Tuesday, I’m looking forward to making these panel AMAs a regular event at the Lab; if you have a topic you’d like to suggest, post it in the Events section of the forum, or tweet it to us at @MozillaScience and @billdoesphysics. I hope you’ll join us!

Mozilla Release Management Team: Firefox 37 beta7 to rc

Due to the short cycle (5 weeks instead of 6), we landed more changes than usual in this RC build.

We took some stability fixes for graphics issues.

  • 22 changesets
  • 229 files changed
  • 724 insertions
  • 4356 deletions

Extension    Occurrences
h            17
cpp          17
ini          7
py           3
list         2
js           2
html         2
sh           1
json         1
ipdlh        1
hgtags       1
build        1

Module       Occurrences
storage      17
dom          14
mobile       12
gfx          8
widget       4
testing      3
layout       3
docshell     2
security     1
editor       1
browser      1

List of changesets:

Chris Manchester: Bug 1145444. r=jmaher, a=test-only - 1efc8c39543c
Jeff Gilbert: Bug 1143218 - Use mochitest subsuites to specify webgl tests. r=jmaher, r=gbrown, a=test-only - a58b8b594396
Kyle Huey: Bug 1145870. r=bz, a=lmandel - 0725e4cfa3c3
Cykesiopka: Bug 1121117 - Add fuzz time to workaround non-monotonicity of Date(). r=keeler, a=test-only - 8358c6c2c417
Tim Taubert: Bug 1088163 - Fix intermittent browser_offlineQuotaNotification.js timeouts by properly waiting for a notification to show. r=markh, a=test-only - 72912a71fb98
Ehsan Akhgari: Bug 1142360 - Move the mochitests for bugs 441782, 467672 and 570378 to the reftest framework. r=dbaron, a=test-only - 62a72d33d16b
Neil Deakin: Bug 942411 - Change the frame height to force a reflow and renable the test on Linux to see if it helps. r=smaug, a=test-only - b8ec30b0a437
James Willcox: Bug 1090300 - Repopulate input buffers when necessary in Android media decoder. r=gcp, a=lmandel - 2cca5b090036
Ryan VanderMeulen: Bug 1146061 - Skip test_peerConnection_basicH264Video.html on Windows debug. a=test-only - 19b630388dda
Ryan VanderMeulen: Backed out changeset 72912a71fb98 (Bug 1088163) because it depends on BrowserTestUtils, which isn't available on 37. - 196c6575593d
Matt Woodrow: Backed out changeset 0c23dcbc6bf7 (Bug 1138967) for causing crashes - 6d7a2555b021
Matt Woodrow: Backed out changeset 0c23dcbc6bf7 (Bug 1138967) for causing crashes. CLOSED TREE - 2592523e1eb0
Olli Pettay: Bug 1146339 - Do anchor scrolling right before dispatching popstate/hashchange. r=bz, a=lmandel - 4d306a83ae1b
Marco Bonardo: Bug 1005991 - mozStorage should not use XPCVariant off the main thread. r=asuth, a=lmandel - b8c1a399905d
Marco Bonardo: Bug 1005991 - Trivial fixes for non-unified builds. r=me, a=lmandel - fadc9f270e9f
Ryan VanderMeulen: Merge beta to m-r. a=merge - 07c827be741f
Steven Michaud: Bug 1137229 - Keyboard input can stop working in a window. r=smaug a=lmandel CLOSED TREE - 45961b7d67dc
Shih-Chiang Chien: Bug 1080130 - Unreferenced socket might be closed before opened. r=khuey, a=test-only - b6a4dca0edc9
Jeff Muizelaar: Bug 1137716 - Try blacklisting Optimus w/ Intel Ironlake Graphics. r=bas, a=lmandel - d56b6d648c01
Matt Woodrow: Bug 1145585 - Hold a ref to the right texture. r=jmuizelaar, a=lmandel - e35deaa85d21
Ehsan Akhgari: Bug 1146883 - Null check the node passed to GetGoodSelPointForNode. r=smaug, a=lmandel - 8fda35675a3f
Ryan VanderMeulen: Merge beta to m-r. a=merge - 7ec23d08cf32

Karl Dubost: Refresh HTTP Header

Through discussions on whatwg, I learned (or I had just forgotten) about the Refresh HTTP header. Let's cut straight to the syntax:

HTTP/1.1 200 OK
Refresh: 5; url=http://www.example.org/fresh-as-a-summer-breeze

where

  • 5 here means 5 seconds.
  • url= gives the destination where the client should head after 5 seconds.

Simon Pieters (Opera) is saying in that mail:

I think Refresh as an HTTP header is not specified anywhere, so per spec
it shouldn't work. However I think browsers all support it, so it would be
good to specify it.

Eric Law (ex-Microsoft) has written about The Performance Impact of META REFRESH. If we express the previous HTTP header in HTML, we get:

<meta http-equiv="refresh" content="5;url=http://www.example.org/fresh-as-a-summer-breeze" />

In his blog post, Eric is talking about people using refresh to… well refresh the page. He means loading the same exact page over and over again. And indeed it means for the browser to create a certain number of "unconditional and conditional HTTP requests to revalidate the page’s resources" for each reload (refresh).

On the Web Compatibility side of things, I see the <meta http-equiv="refresh" …/> used quite often.

<meta http-equiv="refresh" content="0;url=http://example.com/there" />

Note the 0. Probably the result of sysadmins not willing to touch the configuration of the servers, and so front-end developers taking the lead to "fix it" instead of using HTTP 302 or HTTP 301. Anyway, most of the time it is used for redirecting to another domain name or URI. The Refresh HTTP header, on the other hand, I don't remember seeing that often.

Should it be documented?

Simon is saying: "it would be good to specify it." I'm not so sure. First things first.

Testing

Let's create a test, by making a page sending a Refresh.

Header set Refresh "0;url=https://www.youtube.com/watch?v=sTJ1XwGDcA4"

which gives

HTTP/1.1 200 OK
Accept-Ranges: bytes
Connection: Keep-Alive
Content-Length: 200
Content-Type: text/html; charset=utf-8
Date: Thu, 26 Mar 2015 05:48:57 GMT
ETag: "c8-5122a67ec0240"
Expires: Thu, 02 Apr 2015 05:48:57 GMT
Keep-Alive: timeout=5, max=100
Last-Modified: Thu, 26 Mar 2015 05:37:05 GMT
Refresh: 0;url=https://www.youtube.com/watch?v=sTJ1XwGDcA4

This should redirect to this Fresh page

  • Yes - Firefox 36.0.4
  • Yes - Opera 29.0.1795.26
  • Yes - Safari 8.0.4 (10600.4.10.7)

If someone could test for IE and Chrome at least.

Browser Bugs?

On the Mozilla bug tracker, there are a certain number of bugs around refresh. This bug about inline resources is quite interesting and might indeed need to be addressed if there were documentation. The bug is about what the browser should do when the Refresh HTTP header is on an image included in a Web page (this could be another test). For now, the refresh is not done for inline resources. Then what about scripts, stylesheets, JSON files, HTML documents in iframes, etc.? In the SetupRefreshURIFromHeader code, there are Web Compatibility hacks in the source code of Firefox. We can read:

// Also note that the seconds and URL separator can be either
// a ';' or a ','. The ',' separator should be illegal but CNN
// is using it."

also:

// Note that URI should start with "url=" but we allow omission

and… spaces!

// We've had at least one whitespace so tolerate the mistake
// and drop through.
// e.g. content="10 foo"

Good times…
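
To make those tolerances concrete, a parser along the lines of those comments might look roughly like this (a minimal Python sketch for illustration only, not the actual Gecko logic; quoted URLs and other edge cases are ignored):

import re

def parse_refresh(value):
    """Parse a Refresh header/meta value into (seconds, url or None),
    tolerating ';' or ',' separators, a missing 'url=' prefix, and stray
    whitespace, roughly as described in the Firefox comments above."""
    match = re.match(r'\s*(\d+)\s*[;,\s]\s*(.*)', value)
    if match is None:
        # e.g. just "5": refresh the current page after 5 seconds.
        return int(value.strip()), None
    seconds = int(match.group(1))
    rest = match.group(2).strip()
    if rest.lower().startswith('url='):
        rest = rest[4:].strip()
    return seconds, rest or None

# parse_refresh("5; url=http://www.example.org/fresh-as-a-summer-breeze")
# parse_refresh("10 foo")        -> (10, "foo")
# parse_refresh("0,url=/there")  -> (0, "/there")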

On the WebKit bug tracker, I found another couple of bugs, but about meta refresh and not specifically the Refresh header. I'm not sure whether it's handled by WebCore or elsewhere in Mac OS X (NSURLRequest, NSURLConnection, …). If someone knows, tell me. I haven't explored the source code yet.

On the Chromium bug tracker, there are another couple of bugs for meta refresh, with some interesting ones such as this person complaining that a space doesn't work instead of a ;. This is also tracked on WebKit. Something like:

<meta http-equiv="refresh" content="0 url=http://example.com/there" />

Also what should be done with a relative URL.

<meta http-equiv="refresh" content="0;url=/there" />

But for Chromium, I have not found anything really specific to the Refresh header. I haven't explored the source code yet.

As for the Opera bug tracker, it is still closed. We tried to open it up when I was working there, and it didn't work out.

Competition Of Techniques

Then you can also imagine the hierarchy of commands in a case like this:

HTTP/1.1 301 Permanent Redirect
Refresh: 0;url=http://example.net/refresh-header
Location: http://example.net/location

<!DOCTYPE html>
<html>
<title>Fresh</title>
<meta http-equiv="refresh" content="0;url=http://example.net/meta" />
<body onload="document.location.replace('http://example.net/body')">

</body>
</html>

My guess is that the 301 with the Location HTTP header always wins, or at least that's what I hope.

History

I can find very early references of meta refresh such as in Netscape Developer documentation.

The earliest mention seems to be An Exploration Of Dynamic Documents. I can't find the documentation for the Refresh HTTP header anywhere on old Netscape Web sites. (Thanks to the SecuriTeam Web site and Amit Klein.)

So another thing you obviously want to do, in addition to causing the current document to reload, is to cause another document to be reloaded in n seconds in place of the current document. This is easy. The HTTP response header will look like this:

Refresh: 12; URL=http://foo.bar/blatz.html

In June 1996, Jerry Jongerius posted about HTTP/1.1 Refresh header field comments

My concern with "Refresh" is that I do not want it to be a global concept (a browser can only keep track of one refresh)--it looks to be implemented this way in Netscape 2.x. I would like "Refresh" to apply to individual objects (RE: the message below to netscape).

which Roy T. Fielding replied to:

Refresh is no longer in the HTTP/1.1 document -- it has been deferred to HTTP/1.2 (or later).

Should it be documented? Well, there are plenty of issues and plenty of hacks around it; I have just touched the surface. Maybe it would indeed be worth documenting how it works as implemented now, and how it is supposed to work where there is no interoperability. If I were silly enough, maybe I would do this. HTTP, archeology and Web Compatibility issues: that seems close enough to my vices.

Otsukare!

Robert O'Callahan: Paper Titles

A few tips on computer science paper titles:

Titles of the form Catchy Project Name: What Our Project Is About are stilted. Show some imagination.

Titles of the form Towards [Some Goal We Totally Failed To Reach] are an obvious attempt to dress up failure as success. Don't do that.

Do write bold papers about negative results. Call your paper [Our Idea] Doesn't Work (And Here's Why) and I'll be excited to read it.

[Goal] Is Harder Than You Think would also get my attention.

If your paper title contains the word Aristotelian, I will never read your work again and skip the conference too --- but you get points for chutzpah.

Note: following this advice may harm your career. Consider a career where you don't have to publish or perish.

The Mozilla Blog: Please welcome Allison Banks, Vice President of People

We’re thrilled to announce that Allison Banks is joining the leadership team at Mozilla today as our new Vice President of People.

As the leader of our global human resource team at Mozilla, Allison will be responsible, above all, for ensuring our people have what they need to help move our mission forward. Specifically, her team will develop and execute the people-related strategies and activities that will help to foster growth, innovation, and our overall organizational effectiveness.

With over 20 years of experience, Allison joins us most recently from GoPro where she served as Sr. Director of HR overseeing the hiring of 900 people, opening offices in seven countries, integrating acquisitions and building the HR processes and systems required to support a dynamic global organization. Prior to GoPro, she developed her HR expertise and track record for inspiring and supporting people at Perforce Software, Citibank, and Ingres.

Allison’s background, experience and passion for the human side of business is an exceptional fit for Mozilla.

She will be based in the Bay Area, working out of our Mozilla Space in San Francisco and our headquarters in Mountain View.

Please join me in welcoming Allison to Mozilla!

chris

Background:

Allison Banks, Vice President of People, Mozilla

Bio & Mozillians profile

LinkedIn profile

High-res photo

Air Mozilla: Product Coordination Meeting

Product Coordination Meeting Weekly coordination meeting for Firefox Desktop & Android product planning between Marketing/PR, Engineering, Release Scheduling, and Support.

Mozilla WebDev Community: Beer and Tell – March 2015

Once a month, web developers from across the Mozilla Project get together to design the most dangerous OSHA-compliant workstation possible. While searching for loopholes, we find time to talk about our side projects and drink, an occurrence we like to call “Beer and Tell”.

There’s a wiki page available with a list of the presenters, as well as links to their presentation materials. There’s also a recording available courtesy of Air Mozilla.

Michael Kelly: dxr-cmd

A certain blog post author was first with dxr-cmd, a command-line client for making queries to DXR, Mozilla’s source code browser. The tool is installed via pip and supports any query you can make via the web interface. Output can be run through a pager utility such as less, and you can also control the syntax highlighting applied to the output.

Daniel Maher: AudioAddict plugin for Plex

Next up was phrawzty, who was not present but shared a link to AudioAddict.bundle, a Plex plugin that allows you to play music from AudioAddict-based services (such as radiotunes.com, di.fm, and more).

Peter Bengtsson: Redunter

peterbe shared Redunter, a web service that helps hunt down unused CSS on your website. By embedding a small snippet of JS into your page and browsing through your website, Redunter will analyze the HTML being rendered and compare it to the CSS being served. The end result is a list of CSS rules that did not match any HTML that was delivered to the user. Redunter even works with sites that modify the DOM by watching for mutation events and tracking the altered HTML.

Scott Michaud: GPU-Accelerated Audio

ScottMichaud returns with more fun stuff using the WebCL extension! Scott shared a demo of WebCL-powered audio where a virtual microphone was surrounded by individual raindrop sounds. By controlling the rate of raindrops, you can simulate a higher audio load and see the difference that pushing audio processing to the GPU can make.

Les Orchard: Parsec Patrol

Senior Space Cadet lorchard shared Parsec Patrol, a vector-based space game for the web. While there’s no full game made yet, there is a webpage with several demos showing collision detection, spaceship navigation, missiles, point-defense systems, and more!

Matthew Claypotch: a9r

Have you ever seen an abbreviation like l10n or i18n and had no idea what it meant? Have no fear, Uncle Potch is here with a9r, the answer to the abbreviation problem! Simply install the command and enter in an abbreviation to receive a list of all words in the SOWPODS word list that match. Got a word that you need to abbreviate? Not only can a9r decipher abbreviations, it can create them!

Matthew Claypotch: socketpeer

In a slightly-less-whimsical vein, potch also shared socketpeer, a simple JavaScript library for 1:1 messaging via WebRTC Data Channels and WebSockets. Extracted from the Tanx demo that Mozilla showed at GDC 2015, socketpeer contains both a server API for establishing peer connections between users and a client API to handle the client-side communication. Potch also shared a demo of a peer-to-peer chat application using socketpeer.

Chris Van Wiemeersch: PhantomHAR

Next up was cvan, who shared PhantomHAR, a PhantomJS and SlimerJS script that generates an HTTP Archive (or HAR) for a URL. A HAR is an archive of data about HTTP transactions that can be used to export detailed performance data for tools to consume and analyze, and PhantomHAR allows you to easily generate the HAR for use by these tools.

Chris Van Wiemeersch: fetch-manifest

Next, cvan shared fetch-manifest, a small library that takes a URL, locates the W3C web app manifest for the page, fixes any relative URLs in the manifest, and returns it. This is useful for things like app marketplaces that want to allow people to submit web apps by submitting a single URL to the app they want to submit.

Bill Walker: robot-threejs

Last up was bwalker, who shared robot-threejs, an experimental steampunk robot game powered by three.js and WebGL. The game currently allows you to fly around a 3D environment that has 3D positional audio emitting from an incredibly mysterious cube. CAN YOU SOLVE THE CUBE MYSTERY?


This month we think we’ve really got something special with our Seki Edge keyboard-and-mouse combo. Order now and get a free box of Band-aids at no additional cost!

If you’re interested in attending the next Beer and Tell, sign up for the dev-webdev@lists.mozilla.org mailing list. An email is sent out a week beforehand with connection details. You could even add yourself to the wiki and show off your side-project!

See you next month!

Air Mozilla: Bugzilla Development Meeting

Bugzilla Development Meeting Help define, plan, design, and implement Bugzilla's future!

Advancing Content: Content Services Team Adds New Talent With Partnerships (and Mozilla) Experience

Earlier this year I wrote about how 2015 will be a big year for Mozilla to scale and build better personalized experiences as we help move the ad industry forward. Today, I’m excited to announce two new additions to our Content Services team as we continue our mission to create innovative content offerings while always upholding Mozilla’s commitment to user privacy.

Accomplished interactive advertising expert Aaron Lasilla has joined Mozilla and our Content Services team as head of content partnerships. Aaron comes to us from EA Games, where he served as the global director of brand solutions and co-founded the in-game advertising group. Aaron was instrumental in negotiating and securing a number of strategic partnerships for EA’s publishing division as he built the group into a new business and revenue channel for EA, including the largest EA Online partnership ever (within Pogo.com’s casual games offering, in 2003). During his tenure, EA was established as the number one publisher of integrated advertising placements and partnerships in and around games. Aaron previously managed Microsoft’s Premium Games Advertising offering and also worked in sales and sponsorship capacities for Double Fusion, Clear Channel Entertainment and Kemper Sports Marketing.

As we continue to develop and refine our new offerings like Firefox Tiles, Aaron will be focusing on engagement and value exchange for Mozilla’s offerings while maintaining the same quality and standards of user experience that Mozilla is known for.

In addition, I’m excited to formally announce that long-time Mozillian Patrick Finch joined our group late last year as director of marketing. Patrick has been with Mozilla for over seven years, based out of Sweden, and has worked in a number of strategic roles on Mozilla’s desktop and mobile projects over that time. Prior to joining Mozilla, Patrick spent over ten years at Sun Microsystems in a variety of capacities, including working on numerous open source projects.

As we continue the rollout of Firefox Tiles and bring on new partners, you’ll probably be seeing more of Aaron and Patrick on this blog. If you’re interested in partnering with us in our mission or if you’d just like to drop our team a line, feel free to reach out to us at contentservices@mozilla.com.


J. Ryan Stinnett: WiFi Debugging for Firefox OS

I am excited to announce that we're now shipping WiFi debugging for Firefox OS! It's available in Firefox OS 3.0 / master with Firefox Nightly on desktop.

WiFi debugging allows WebIDE to connect to your Firefox OS device via your local WiFi network instead of a USB cable.

The connection experience is generally more straightforward (especially after connecting to a device the first time) than with USB and also more convenient to use since you're no longer tied down by a cable.

Security

A large portion of this project has gone towards making the debugging connection secure, so that you can use it safely on shared networks, such as those in an office or coffee shop.

We use TLS for encryption and authentication. The computer and device both create self-signed certificates. When you connect, a QR code is scanned to verify that the certificates can be trusted. During the connection process, you can choose to remember this information and connect immediately in the future if desired.

How to Use

You'll need to assemble the following bits and bobs: a Firefox OS device running a 3.0 / master build, and Firefox Nightly on your desktop.

On Firefox OS, enable WiFi debugging:

  1. Go to Developer Settings on device (Settings -> Developer)
  2. Enable DevTools via Wi-Fi
  3. Edit the device name if desired

Firefox OS WiFi Debugging Options

To connect from Firefox Desktop:

  1. Open WebIDE in Firefox Nightly (Tools -> Web Developer -> WebIDE)
  2. Click "Select Runtime" to open the runtimes panel
  3. Your Firefox OS device should show up in the "WiFi Devices" section
  4. A connection prompt will appear on the device; choose "Scan" or "Scan and Remember"
  5. Scan the QR code displayed in WebIDE


After scanning the QR code, the QR display should disappear and the "device" icon in WebIDE will turn blue for "connected".

You can then access all of your remote apps and browser tabs just as you can today over USB.

Technical Aside

This process does not use ADB at all on the device, so if you find ADB inconvenient while debugging or would rather not install ADB at all, then WiFi debugging is the way to go.

By skipping ADB, we don't have to worry about driver confusion, especially on Windows and Linux.

Supported Devices

This feature should be supported on any Firefox OS device. So far, I've tested it on the Flame and Nexus 4.

Known Issues

The QR code scanner can be a bit frustrating at the moment, as real devices appear to capture a very low resolution picture. Bug 1145772 aims to improve this soon. You should be able to scan with the Flame by trying a few different orientations. I would suggest using "Scan and Remember", so that scanning is only needed for the first connection.

If you find other issues while testing, please file bugs or contact me on IRC.

Acknowledgments

This was quite a complex project, and many people provided advice and reviews while working on this feature, including (in semi-random order):

  • Brian Warner
  • Trevor Perrin
  • David Keeler
  • Honza Bambas
  • Patrick McManus
  • Jason Duell
  • Panos Astithas
  • Jan Keromnes
  • Alexandre Poirot
  • Paul Rouget
  • Paul Theriault

I am probably forgetting others as well, so I apologize if you were omitted.

What's Next

I'd like to add this ability to Firefox for Android next. Thankfully, most of the work done here can be reused there.

If there are features you'd like to see added, file bugs or contact the team via various channels.

Mozilla Release Management Team: Firefox 36.0.2 to 36.0.4

Last Friday and Saturday, we shipped two minor releases to fix the issues found during the pwn2own hacking contest.

  • 6 changesets
  • 8 files changed
  • 25 insertions
  • 16 deletions

Extension   Occurrences
txt         2
py          2
sh          1
json        1
h           1
cpp         1

Module      Occurrences
testing     3
docshell    2
mobile      1
config      1
browser     1

List of changesets:

Steve Fink: Bug 1145255. r=luke, a=lmandel - 2b1ecc5fad12
Jordan Lund: Bug 1113460 - Bump mozharness.json to revision 75c435ef19ca. a=test-only - 3d681d747053
Jordan Lund: Bug 1142743 - Increase chunks for Android 2.3 mochitest-gl, in-tree cfg total chunk fix. r=kmoir, a=test-only - 7d23a45099ee
Geoff Brown: Bug 1142552 - Update robocop chunking to fix rc10. r=ahal, a=test-only - e2ba5973e4bb
Olli Pettay: Bug 1144988 - Don't let other pages to load while doing scroll-to-anchor. r=bz, a=lmandel - d5a003cc284a
Kyle Huey: Bug 1145870. r=bz a=lmandel - 57cc76236bd7

Mozilla Release Management Team: Firefox 37 beta6 to beta7

In this beta release, we continued to land some patches for MSE. We disabled MSE under Mac OS X for now.

We also took the fixes for the pwn2own hacking contest in this release.

  • 54 changesets
  • 109 files changed
  • 2315 insertions
  • 1248 deletions

Extension   Occurrences
cpp         54
h           39
cc          3
py          2
js          2
html        2
json        1
jsm         1
ipdlh       1
ipdl        1
inc         1
c           1
build       1

Module      Occurrences
dom         26
gfx         18
layout      15
media       14
widget      9
toolkit     5
image       5
ipc         4
testing     3
security    2
js          2
docshell    2
xpfe        1
modules     1
caps        1
browser     1

List of changesets:

Matt Woodrow: Bug 1131638 - Discard video frames that fail to sync. r=cpearce, a=lmandel - 297e2e626fe9
Matt Woodrow: Bug 1131638 - Record invalid frames as dropped for video playback stats. r=ajones, a=lmandel - f88fcb8ccc27
Matt Woodrow: Bug 1131638 - Disable hardware decoding if too many frames are invalid. r=cpearce, a=lmandel - edb24ca59d13
Mike Hommey: Backout the part of changeset 8044e5199fe2 (Bug 1080319) that removed -remote. a=sledru - 29eac8276b62
Matt Woodrow: Bug 1139503 - Backlist ATI driver version for DXVA on windows 10 since it's causing crashes. r=cpearce, a=sledru - 5a8085d3a0fe
Bill McCloskey: Back out Bug 1103036 to resolve shutdown hangs a=backout - 2cc99febbda0
Ryan VanderMeulen: No bug - Bump mozharness.json to revision fde96e1730cc. a=NPOTB - d16fe93d2755
Jordan Lund: Bug 1142743 - Increase chunks for Android 2.3 mochitest-gl, in-tree cfg total chunk fix. r=kmoir, a=test-only - 2d55d8220616
Geoff Brown: Bug 1142552 - Update robocop chunking to fix rc10. r=ahal, a=test-only - 0e0204877015
Ralph Giles: Bug 1141349 - Pref off MSE on Mac. r=ajones, a=lmandel - c8f377118985
Jan Varga: Bug 1067568 - Fix intermittent "ASSERTION: We don't know anyting about this file handle?!: 'Error', file dom/filehandle/FileService.cpp, line 234". r=bent, a=lsblakk - 199e30cb18f3
Margaret Leibovic: Bug 1141550 - Register an AsyncShutdown blocker to persist download changes. r=paolo, a=lsblakk - 3eeb35bbafd2
Jean-Yves Avenard: Bug 1139271 - Part 1: Add logging when encountering invalid atoms. r=k17e, a=lsblakk - 202177831c59
Jean-Yves Avenard: Bug 1139271 - Part 2: Ignore partial moof. r=k17e, a=lsblakk - 21384861c447
Jean-Yves Avenard: Bug 1139271 - Part 3: Only consider a Box to be available if entire content is available. r=k17e, a=lsblakk - f4c0cec35772
Paul Adenot: Bug 1141781 - Grip the VideoFrameContainer when queing a call to invalidate in the MediaStreamGraph. r=roc, a=lsblakk - 6a4e68222995
Matthew Gregan: Bug 1142746 - Make unexpected SL_PLAYEVENT_HEADATMARKER notification non-fatal. r=brsun, a=lsblakk - 067f83e99f66
Ryan VanderMeulen: Backed out changeset 6a4e68222995 (Bug 1141781) for mochitest crashes/asserts. - 6a9120be7216
Ethan Hugg: Bug 1144157 - Add ciscospark.com to the screenshare default whitelist r=jesup a=lmandel - bd028b4c3b95
Simon Montagu: Bug 1114239 patch 1: Backout Bug 1105137, r=jfkthame, a=lmandel - 0837b7d1188b
Simon Montagu: Bug 1114239 patch 2: Backout Bug 1079139, r=jfkthame, a=lmandel - 8fca3694654a
Simon Montagu: Bug 1114239 patch 3: Backout Bug 1062963 patch 3, r=jfkthame, a=lmandel - 470cd8c739c5
Olli Pettay: backout Bug 1121406 which enabled WebSocket in Workers in beta, a=abillings - f0a0d5d2d525
David Major: Bug 1138794: Use an alternate crash report server on Windows XP SP2. r=ted a=lmandel - caf324dbb13f
Matthew Gregan: Bug 1124542 - WebrtcGmpVideoDecoder shouldn't crash when GMP completion callbacks are received. r=rjesup, a=lmandel - c54687cb7086
Ethan Hugg: Bug 1125047 - GMP should catch decoder failures. r=jesup, a=lmandel - 5598a289b442
Chris Pearce: Bug 1140797 - Make gmp-clearkey buildable outside of mozilla-central. r=edwin, a=lmandel - a49b40d229df
Chris Pearce: Bug 1140797 - Prevent fatal assert when doing base64 decode in gmp-clearkey. r=edwin, a=lmandel - 29333933d6d6
Jordan Lund: Bug 1113460 - Bump mozharness.json to revision 75c435ef19ca. a=test-only - 938177ece421
Cameron McCormack: Bug 1143953 - Fix typo in test_font_loading_api.html where it incorrectly returns document.fonts.read. r=jdaggett, a=test-only - 37be317efc7a
Cameron McCormack: Bug 1143995 - Remove unnecessary layout flushes from test_font_loading_api.html. r=jdaggett, a=test-only - e84f65c3a6aa
Cameron McCormack: Bug 1144507 - Fix incorrect Promise usage in test_font_loading_api.html. r=jdaggett, a=test-only - 9fc579f7bf3a
Tim Taubert: Bug 1124409 - Fix intermittent browser_bug1015721.js failures by ensuring the EventStateManager has a document before trying to dispatch ZoomChangeUsingMouseWheel. r=smaug, a=test-only - 46cfbcfb58c5
Tim Taubert: Bug 1124409 - Fix test_bug659071.html to properly reset page zoom before finishing. r=smaug, a=test-only - e4f1cc6f63a3
Nicolas B. Pierron: Bug 1137624 - Disable Array.join optimization. r=jandem, a=abillings - 968fa2b32612
Aaron Klotz: Bug 1141081 - Ensure nsPluginInstanceOwner::Destroy is called before returning from failed plugin instantiation. r=jimm, a=lmandel - 2710769c40a5
Aaron Klotz: Bug 1128064 - Check for null mContent in nsPluginInstanceOwner::GetDocument. r=jimm, a=abillings - e92558fa59eb
Byron Campen [:bwc]: Bug 1141749 - Prevent collisions in local SSRCs. r=mt, a=abillings - d76c709556bb
Nicolas Silva: Bug 1125848 - Reduce the likelyhood of a CompositorParent being destroyed without the proper shutdown sequence. r=sotaro a=lmandel - 45897d27ef82
Avi Halachmi: Bug 1142079 - Disable refresh driver telemetry on Android. r=froydnj, a=lmandel - 17adc07baf56
Matt Woodrow: Bug 1138967 - Part 1: Remove ISharedImage. r=nical, a=lmandel - c1356c27fa1b
Matt Woodrow: Bug 1138967 - Part 2: Create IMFYCbCrImage so that image data copying happens off the decoder thread. r=nical, r=cpearce, a=lmandel - 07e266d45703
Matt Woodrow: Bug 1138967 - Part 3: Add D3D11 YCbCr texture clients and upload on the client side. r=nical, a=lmandel - 0c23dcbc6bf7
Masatoshi Kimura: Bug 1133187 - Update fallback whitelist. r=keeler, a=lmandel - 02b9c74353ad
Seth Fowler: Bug 1142849 - Upliftable fix for imgRequest TSan violations. r=tn, a=lmandel - 9b7aa96d0e11
Karsten Düsterloh: Bug 1116952 - Treelines fragments after Bug 1105104. r=jwatt, a=lmandel - 5bd29483f85e
Jeff Muizelaar: Bug 1130978 - Fix VisitEdges. r=kats, a=lmandel - fb9ae74a783a
Seth Fowler: Bug 1137058 - Increment RasterImage::mLockCount to ensure that non-discardable images don't eventually become unlocked. r=tn, a=lmandel - 52b55d9c1d61
Matt Woodrow: Bug 1145029 - Disable DXVA for 4k videos on AMD hardware since it performs poorly. r=jya a=lmandel - 2445fcfe99d4
Steve Fink: Bug 1145255. r=luke, a=lmandel - aabde7671ac0
Jed Davis: Bug 1111079 - Backport some IPC message/channel fixes. r=bent, a=lmandel - 5bb1bb65cc28
Jed Davis: Bug 1111065 - Backport some upstream IPC serialization fixes. r=bent, a=lmandel - a2295cc0de06
Boris Zbarsky: Bug 1144991 - Be a bit more restrictive about when a URI_IS_UI_RESOURCE source is allowed to link to a URI_IS_UI_RESOURCE URI that doesn't have the same scheme. r=bholley, a=lmandel - 2e6977da201e
Olli Pettay: Bug 1144988 - Don't let other pages to load while doing scroll-to-anchor. r=bz, a=lmandel - 9b93e6033d5d

François Marier: Keeping up with noisy blog aggregators using PlanetFilter

I follow a few blog aggregators (or "planets") and it's always a struggle to keep up with the number of posts that some of these get. The best strategy I have found so far is to filter them so that I remove the blogs I am not interested in, which is why I wrote PlanetFilter.

Other options

In my opinion, the first step in starting a new free software project should be to look for a reason not to do it :) So I started by looking for another approach and by asking people around me how they dealt with the firehoses that are Planet Debian and Planet Mozilla.

It seems like a lot of people choose to "randomly sample" planet feeds and only read a fraction of the posts that are sent through there. Personally however, I find there are a lot of authors whose posts I never want to miss so this option doesn't work for me.

A better option that other people have suggested is to avoid subscribing to the planet feeds, but rather to subscribe to each of the author feeds separately and prune them as you go. Unfortunately, this whitelist approach is a high maintenance one since planets constantly add and remove feeds. I decided that I wanted to follow a blacklist approach instead.

PlanetFilter

PlanetFilter is a local application that you can configure to fetch your favorite planets and filter the posts you see.

If you get it via Debian or Ubuntu, it comes with a cronjob that looks at all configuration files in /etc/planetfilter.d/ and outputs filtered feeds in /var/cache/planetfilter/.

You can either:

  • add file:///var/cache/planetfilter/planetname.xml to your local feed reader
  • serve it locally (e.g. http://localhost/planetname.xml) using a webserver, or
  • host it on a server somewhere on the Internet.

The software will fetch new posts every hour and overwrite the local copy of each feed.

A basic configuration file looks like this:

[feed]
url = http://planet.debian.org/atom.xml

[blacklist]

Filters

There are currently two ways of filtering posts out. The main one is by author name:

[blacklist]
authors =
  Alice Jones
  John Doe

and the other one is by title:

[blacklist]
titles =
  This week in review
  Wednesday meeting for

In both cases, if a blog entry contains one of the blacklisted authors or titles, it will be discarded from the generated feed.

Tor support

Since blog updates happen asynchronously in the background, they can work very well over Tor.

In order to set that up in the Debian version of planetfilter:

  1. Install the tor and polipo packages.
  2. Set the following in /etc/polipo/config:

     proxyAddress = "127.0.0.1"
     proxyPort = 8008
     allowedClients = 127.0.0.1
     allowedPorts = 1-65535
     proxyName = "localhost"
     cacheIsShared = false
     socksParentProxy = "localhost:9050"
     socksProxyType = socks5
     chunkHighMark = 67108864
     diskCacheRoot = ""
     localDocumentRoot = ""
     disableLocalInterface = true
     disableConfiguration = true
     dnsQueryIPv6 = no
     dnsUseGethostbyname = yes
     disableVia = true
     censoredHeaders = from,accept-language,x-pad,link
     censorReferer = maybe
    
  3. Tell planetfilter to use the polipo proxy by adding the following to /etc/default/planetfilter:

     export http_proxy="localhost:8008"
     export https_proxy="localhost:8008"
    

Bugs and suggestions

The source code is available on repo.or.cz.

I've been using this for over a month and it's been working quite well for me. If you give it a go and run into any problems, please file a bug!

I'm also interested in any suggestions you may have.

Mozilla Open Policy & Advocacy Blog: Information sharing debates continuing in problematic directions

Recently, the U.S. Senate Select Committee on Intelligence held a closed-door hearing to mark up the Cybersecurity Information Sharing Act (CISA). Mozilla has previously opposed CISA and its predecessor CISPA, and these changes do not alleviate our concerns. Simultaneously, in neighboring Canada, an aggressive counterterrorism bill would introduce similarly problematic surveillance provisions, among other harms.

But first, CISA. While the newly marked up version includes some improvements over the discussion draft circulated earlier this year, the substantive dangers remain. In particular, the bill:

  • Is still overbroad in scope, allowing near limitless sharing of private user data for a vague and expansive list of purposes that fall well outside the realm of cybersecurity;
  • Continues to require information to be automatically shared with “relevant agencies” including the NSA, which severely limits the power of the Department of Homeland Security (a civilian agency) to oversee information sharing practices and policies;
  • Allows for dangerous “defensive measures” (a rebranding of the previous version’s “countermeasures”) which could legitimize and permit “hacking back” in a manner that seriously harms the Internet; and
  • Provides blanket immunity for sharing private user information with still insufficient privacy safeguards, denying users both effective protection and remedy.

But the flaws of CISA are more than just the sum of its problematic provisions. The underlying paradigm of information sharing as a means to “detect and respond” or “detect and prevent” cybersecurity attacks lends itself more to advancing surveillance than to improving the security of the Web or its users. The primary threat we face is not a dearth of information shared with or by the government, but rather is often a lack of proactive, common sense security measures.

Moreover, data collected is data at risk, from the government’s failures to secure its own systems to the abuses revealed by the Snowden revelations. Putting more and more information into the hands of the government puts more user data in danger. Nevertheless, after passing the Senate Select Committee on Intelligence 14-1, CISA is scheduled to move to the full Senate floor imminently. This is a bad step forward for the future of the open Web.

Meanwhile in Canada, the Canadian Parliament is considering an even more concerning bill, C-51, the Anti-Terrorism Act of 2015. C-51 is sweeping in scope, including granting Canadian intelligence agencies CSIS and CSE new authority for offensive online attacks, as well as allowing these agencies to obtain significant amounts of information held by the Canadian government. The open-ended internal information-sharing exceptions contained in the bill erode the relationship between individuals and their government by removing the compartmentalization that allows Canadians to provide the government some of their most private information (for census, tax compliance, health services, and a range of other purposes) and trust that that information will be used for only its original purposes. This compartmentalization, currently a requirement of the Privacy Act, will not exist after Bill C-51 comes into force.

The Bill further empowers CSIS to take unspecified and open-ended “measures,” which may include the overt takedown of websites, attacks on Internet infrastructure, introduction of malware, and more all without any judicial oversight. These kinds of attacks on the integrity and availability of the Web make us all less secure.

We hope that both the Canadian Parliament and the U.S. Congress will take the time to hear from users and experts before pushing any further with C-51 and CISA respectively. Both of these bills emphasize nearly unlimited information sharing, without adequate privacy safeguards, and alarmingly provide support for cyberattacks. This is an approach to cybersecurity that only serves to undermine user trust, threaten the openness of the Web, and reduce the security of the Internet and its users. For these reasons, we strongly oppose both C-51 and CISA.

Carsten Book: First overview from the sheriff survey!

Hi,

thanks for all the replies we got to the Sheriff Survey! If you haven't taken part yet, it's still online and you can still take part!

While we will close the survey in a few days and I will provide a comprehensive overview then, I felt I could already give a quick overview of what we have so far.

One big takeaway is how important checkin-needed requests are and how many people depend on them. We are very sorry if there are delays in picking up checkin-needed requests, but since it's a human task it depends on how much is going on with the trees etc.

But there is work being done on Autoland, like on https://wiki.mozilla.org/Auto-tools/Projects/Autoland :)

Also, to follow up on two concrete things (which you might know already, or maybe not).

Question: How do I find out why the tree is closed (when we have a tree closure) on Treeherder?

Answer: Just hover over the repo name in Treeherder (for example mozilla-inbound) or click on the info button right next to the repo name.

Question: When I land something on a tree like mozilla-inbound, it's a mess to manually copy and paste the hg changeset URL into the bug.

Answer: We have a tool called mcmerge; it's right next to every push in the drop-down arrow action menu and, despite its name, it's not just for marking merges. During the survey we found out that the name is misleading, so we are trying to find a new name – https://bugzilla.mozilla.org/show_bug.cgi?id=1145836

Thanks,

– Tomcat

Jim Chen: Back from leave

Back in January, I left on a two-month-long leave from Mozilla, in order to do some traveling in China and Japan. Now I'm finally back! I was in China for 1.5 months and in Japan for 2 weeks, and it was amazing! I made a short video highlighting parts of my trip:

Being a mobile developer, I naturally paid some attention to mobile phone usage in China, and how it's different from what I'm used to in the U.S. The cellular infrastructure was impressive. It was fairly cheap, and I was getting full 3G/4G service in small villages and along high-speed rail routes. It seemed like everyone had a smartphone, too. I would see grandmas standing on the side of the road checking their phones.

I never use QR codes in the U.S., but I actually used them quite often in China. For example, you would scan another person's QR code to add them as friends on Wechat. In some places, you could scan a merchant's QR code to pay that merchant using Alipay, a wallet app. Many types of tickets like train tickets and movie tickets also use QR codes over there.

Everyone used Wechat, a messaging app that's “way better than anything else in the U.S.” according to my American friend living in China. It's more than just a messaging app though – you have a “friend circle” that you can post to, a la Facebook; you can also follow “public accounts”, a la Twitter. The app has integrated wallet functionality: I paid for a train ticket and topped up my phone using the app; during Chinese New Year, people were sending each other cash gifts through it.

For some reason, you see a lot of these “all-in-one” apps in China. I used Baidu Maps during my travels, which does maps and navigation. However, you can also call taxis from within the app or hire a “private car”, a la Uber. You can use the app like Yelp to find nearby restaurants by type and reviews. While you're at it, the app lets you find “group buy” discounts at these restaurants, a la Groupon. I have to say it was super convenient. After I came back to the States, I wasn't used to using Google Maps anymore because it didn't do as much.

Of course, on the flip side, these apps probably would be less popular without the Internet censorship that's so prevalent over there. By creating a barrier for foreign companies to enter the Chinese market, the censorship provided opportunities for domestic companies to create and adapt copycat products. I found it amusing that Android is so prevalent in the Chinese smartphone market, but everything Google is blocked. As a result, you have all these third-party markets that may or may not be legitimate. Mobile malware seems to be a much larger issue in China than in the U.S., because people have to find their apps off of random markets/websites. It was strange to see an app market promising “safe, no malware” with every download link. Also amusingly, every larger app I saw came with its own updater, again because these apps could not count on having a market to provide update service.

Overall, the trip was quite eye-opening, to see China's tremendous development from multiple angles. I loved Japan, too; I felt it was a lot different from both China and the U.S. Maybe I'll write about Japan in another post.


David Weir (satdav): Windows Nightly 64-bit test day

Why not come along to the Windows 64-bit Nightly test day this Saturday, from 9am to 3pm?

PS we are looking for moderators at the event

https://etherpad.mozilla.org/testday-20150328


Smokey Ardisson: What year is it again?

The other day, my brother asked me to log in to his account on his employer’s1 “HR system” in order to make him some backup copies of information presented there (his existing copies of which he had needed to provide to his supervisor). On the login screen, I was still slightly shocked2 to see the following message:

For an optimal experience, we recommend using these browsers:

Unexpected results may occur when using other browsers.

(If you view the source, you can see that each of the <a>s has an id="ielink_001" attribute—not only incorrect, but perhaps a holdover from the days this particular website “supported” only IE?)

Seriously? It’s 2015 and your website is not only not compatible with any version of Safari, but it is only compatible with versions of Chrome and Firefox that are four3 versions out-of-date!? (Kudos for supporting versions of IE dating back six years, though!)

I forged ahead, because if the site claimed to work properly in a six-year-old version of Internet Explorer, it surely would work in a current two-year-old version of Safari (the just-released version 6.2.4 on 10.8/Mountain Lion). Nothing I had to look at seemed to look or function incorrectly—until it came time to look for his timesheets. When I clicked on the tab entitled “Timesheets”, a page loaded with no “content” below the row of tabs, except for a link to help me return to the site I was already on. Indeed, unexpected results may occur when using a browser other than the last four versions of IE or versions of Chrome and Firefox four versions out-of-date! Eventually, I realized that the problem was that loading the page was triggering a pop-up window(!?) with the website for the company’s scheduling system, and Safari was (silently) blocking said pop-up.4

Allowing pop-ups and forging ahead again, I looked at the scheduling system’s website, and it reminded me of a poor knockoff of the web as rendered by Firebird 0.6 or 0.7 more than a decade ago (eerie, that poorly-rendered, overly-fat Helvetica—perhaps it’s Verdana or Tahoma?—and <table>s, lots of <table>s!) Also, there was a menu that seemed to have no useful functions. Finally relenting, I launched Firefox 36, discovered the functional part of the menu was indeed missing (according to the Web Inspector in Safari, that part of the menu was being rendered off-screen and I think zero-height; given that Blink and WebKit supposedly haven’t diverged that much, I wonder if this critical piece of the menu would have appeared in Chrome, either, supported version or otherwise?), found the link I needed, and returned to Safari to print out pages of multi-page <table>s.

These are websites/systems that are created and installed to be used by every employee of this company, from the convenience of each employee’s personal computing device, not systems that are to be used solely by the HR department on company computers where IT can mandate a certain browser and software combination. This is software whose purpose is to be used by everyone; why is it not designed to be used by everyone—compatible with current versions of the major rendering engines, avoiding unfriendly and abused technologies like pop-ups, and so on?

If the software is intended to be used by everyone (or, generally, people beyond those whose computer configuration you can dictate by supplying said computer) and it’s web-based software (or has a web front-end), then the company (or the company’s software vendor) needs to continually test the software/web front-end with new versions of major rendering engines, making changes (or reporting bugs in the rendering engine) in the unlikely event something breaks, so that they aren’t requiring employees to use six-month-old versions of browsers in order for the corporate software to work properly.

As for the integration between the main HR system and the scheduling system, if the two can’t talk to each other directly behind the scenes, then why not embed the scheduling system into the “Timesheets” tab with an <iframe> (<iframe>s are already present in some of the other tabs). If an <iframe> won’t work for some technical or security reasons, why not include a button on the “Timesheets” tab that the user can click to trigger the pop-up window with the scheduling system, thus escaping the pop-up blocker? It’s not as elegant in some ways as automatically launching, but pop-ups are already not as elegant as showing the data inline (and pop-ups are arguably not elegant at all), and manually-triggered pop-ups are more friendly since the human involved knows he or she is triggering some action and isn’t annoyed by blocked pop-up notifications. You also then get Safari compatibility “for free” without requiring users to change settings (and without having to tell them how to do so). If there are still legitimate reasons not to use a button or link or similar element, at the very least some explanatory text in the “content” section of the “Timesheets” tab is far more useful to anyone than a link to return to the very site you’re already viewing.

When I encounter software like this, I often wonder how it was built. Was there a user experience or human interface designer as part of the team? Was there any testing? Any quality assurance team involved? Or did some product manager just throw a spec sheet from marketing at the software engineers and tell them, “Not only do you have to write the code to make it do these things, but you have to determine how it’s going to do these things, too.” Or did management decide to ship as-is, perhaps over the objections of team members, in order to meet some deadline?

Design is how things work. Not everyone is a good designer, just like not everyone is a good programmer or tester (they’re not necessarily mutually exclusive, but many times excelling in one field means not learning as much about another), but every good piece of software needs all three skillsets, working in concert, whether in one body or more. Too often, “corporate software” like this seems to be missing one or more of the three, and that’s a shame, because with a little more effort, every interaction with the software could be improved. Then the vendor sells better software, the employees who use the software have a faster, easier experience and can get back to doing what they love and are good at, and the company installing the software can have happier employees. Everyone wins.

        

1 An unnamed major American restaurant group.
2 I know, I know, I really shouldn’t be. :P
3 In fairness, Firefox 31 is at least still in the ESR support window until May.
4 Question 1: Why, in 2015, does Safari still not support a per-site control for pop-ups (and, at least as of version 6, still not provide any sort of notification of blocked pop-ups; granted the UI balance there is hard—and a subject for another post—but still!)?
Question 2: The better question is, why, in 2015, are sites still using non-user-triggered pop-up windows for critical functions, or any functions at all?

Michael Kaply: Firefox ESR Only Changes

There are a few changes that are coming for Firefox that will be major headaches for enterprise, educational, government and other institutional deployments. These include the removal of the distribution/bundles directory as well as the requirement for all add-ons to be signed by Mozilla.

Given that these two changes are not needed for enterprise, there has been some discussion of not putting these changes into the Firefox ESR.

So I'm curious: besides these two changes, what other things do you think should be different between regular Firefox and the Firefox ESR? I'm not talking about creating new features for the ESR, I'm only talking about enabling and/or disabling features.

Put your suggestions in the comments. I'll put mine there as well.

Daniel Pocock: The easiest way to run your own OpenID provider?

A few years ago, I was looking for a quick and easy way to run OpenID on a small web server.

A range of solutions were available but some appeared to be slightly more demanding than what I would like. For example, one solution required a servlet container such as Tomcat and another one required some manual configuration of Python with Apache.

I came across the SimpleID project. As the name implies, it is simple. It is written in PHP and works with the Apache/PHP environment on just about any Linux web server. It allows you to write your own plugin for a user/password database or just use flat files to get up and running quickly with no database at all.

This seemed like the level of simplicity I was hoping for so I created the Debian package of SimpleID. SimpleID is also available in Ubuntu.

Help needed

Thanks to a contribution from Jean-Michel Nirgal Vourgère, I've just whipped up a 0.8.1-14 package that should fix Apache 2.4 support in jessie. I also cleaned up a documentation bug and the control file URLs.

Nonetheless, it may be helpful to get feedback from other members of the community about the future of this package:

  • Is it considered secure enough?
  • Have other people found it relatively simple to install or was I just lucky when I tried it?
  • Are there other packages that now offer such a simple way to get OpenID for a vanilla Apache/PHP environment?
  • Would anybody else be interested in helping to maintain this package?
  • Would anybody like to see this packaged in other distributions such as Fedora?
  • Is anybody using it for any online community?

Works with HOTP one-time-passwords and LDAP servers

One reason I chose SimpleID is because of dynalogin, the two-factor authentication framework. I wanted a quick and easy way to use OTP with OpenID so I created the SimpleID plugin for dynalogin, also available as a package.

I also created the LDAP backend for SimpleID, that is available as a package too.

Works with Drupal

I tested SimpleID for logging in to a Drupal account with OpenID support enabled in Drupal, and it worked seamlessly. I've also tested it with a few public web sites that support OpenID.

Adam Lofting: 2015 Mozilla Foundation Metrics Strategy(ish) & Roadmap(ish)

I wrote a version of this strategy in January but hadn’t published it as I was trying to remove those ‘ish‘s from the title. But the ‘ish’ is actually a big part of my day-to-day work, so this version embraces the ‘ish’.

MoFo Metrics Measures of Success:

These are, ironically, more qualitative than quantitative.

  1. Every contributor (paid or volunteer) knows at any given time what number they (or we) are trying to move, where that number is right now, and how they hope to influence it.
  2. We consider metrics (i.e. measures of success) before, during and after each project.
  3. We articulate the stories behind the metrics we aim for, so their relevance isn’t lost in the numbers.
  4. A/B style testing practice has a significant impact on the performance of our ‘mass audience’ products and campaigns.

1. Every contributor (paid or volunteer) knows at any given time what number they (or we) are trying to move, where that number is right now, and how they hope to influence it.

  • “Every” is ambitious, but it sets the right tone.
  • This includes:
    • Public dashboards, like those at https://metrics.webmaker.org
    • Updates and storytelling throughout the year
    • Building feedback loops between the process, the work and the results (the impact)

2. We consider metrics (i.e. measures of success) before, during and after each piece of work.

  • This requires close integration into our organizational planning process
  • This work is underway, but it will take time (and many repetitions) before it becomes habit

3. We articulate the stories behind the metrics we aim for, so their relevance isn’t lost in the numbers.

  • The numbers should be for navigation, rather than fuel

4. A/B style testing practice has a significant impact on the performance of our ‘mass audience’ products and campaigns.

  • This is the growth hacking part of the plan
  • We’ve had some successes (e.g. Webmaker and Fundraising)
  • This needs to become a continuous process

Those are my goals.

In many cases, the ultimate measure of success is when this work is done by the team rather than by me for the team.

We’re working on Process AND Culture

Process and culture feed off of and influence each other. Processes must suit the culture being cultivated. A data driven culture can blinker creativity – it doesn’t have to, but it can. And a culture that doesn’t care for data, won’t care for processes related to data. This strategy aims to balance the needs of both.

A roadmap?

I tried to write one, but basically this strategy will respond to the roadmaps of each of the MoFo teams.

So, what does Metrics work look like in 2015?

  • Building the tools and dashboards to provide the organisational visibility we need for our KPIs
  • ‘Instrumenting’ our products so that we can accurately measure how they are being used
  • Running Optimization experiments against high profile campaigns
  • Running training and support for Google Analytics, Optimizely, and other tools
  • Running project level reporting and analysis to support iterative development
  • Consulting to the Community Development Team to plan experimental initiatives

Plus: supporting teams to implement our data practices, and of course, the unknown unknowns.

…ish

Nigel Babu: Dino Cufflinks

Recently, in a moment of weakness, I made an order on Etsy for custom cufflinks. I had no idea how it would turn out, so it was a huge leap of faith. I got it the other day and it looks gorgeous!

They do look quite good! Click through for larger image

For those of you wondering, I ordered it from LogiCuff. So, when can we get cufflinks on Mozilla Gear? :)

Ben Kelly: Service Workers in Firefox Nightly

I’m pleased to announce that we now recommend normal Nightly builds for testing our implementation of Service Workers. We will not be posting any more custom builds here.

Now that bug 1110814 has landed in mozilla-central, Nightly has roughly the same functionality as the last sw-build. Just enable these preferences in about:config:

  • Set dom.caches.enabled to true.
  • Set dom.serviceWorkers.enabled to true.

Please note that on Firefox OS you must enable an additional preference as well. See bug 1125961 for details.
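
Once those preferences are set, a quick smoke test is to register a trivial service worker and poke at the Cache API from a page on your site. The file name and cache name below are arbitrary; the APIs themselves are the standard ones:

// In a page script: register a (hypothetical) sw.js served from the site root.
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js').then(registration => {
    console.log('Service worker registered with scope:', registration.scope);
  }).catch(err => {
    console.error('Registration failed (are the prefs enabled?):', err);
  });
}

// Exercise the Cache API directly (works once dom.caches.enabled is true).
caches.open('smoke-test').then(cache =>
  cache.add('/index.html')
).then(() =>
  caches.has('smoke-test')
).then(found =>
  console.log('Cache created:', found)
);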

In addition, we've decided to move forward with enabling the Service Worker and Cache API preferences by default in non-release builds. We expect the Cache preference to be enabled in the tree today. The Service Worker preference should be enabled within the next week once bug 931249 is complete.

When Nightly merges to Aurora (Developer Edition), these preferences will also be enabled by default there. They will not, however, ride the trains to Beta or Release yet. We feel we need more time stabilizing the implementation before that can occur.

So, unfortunately, I cannot tell you exactly which Firefox release will ship with Service Workers yet. It will definitely not be Firefox 39. It's possible Service Workers will ship in Firefox 40, but it's more likely they will finally be enabled in Firefox 41.

Developer Edition 39, however, will have Cache enabled and will likely also have Service Workers enabled.

Finally, while the code is stabilizing you may see Service Worker registrations and Cache data be deleted when you update the browser. If we find that the data format on disk needs to change, we will simply reset the relevant storage area in your profile. Once the decision to ship is made, any future changes will then properly migrate data without any loss. Again, this only affects Service Worker registrations and data stored in Cache.

As always we appreciate your help testing, reporting bugs, and implementing code.

Gervase Markham: How to Responsibly Publish a Misissued SSL Certificate

I woke up this morning wanting to write a blog post, then I found that someone else had already written it. Thank you, Andrew.

If you succeed in getting a certificate misissued to you, then that has the opportunity to be a great learning experience for the site, the CA, the CAB Forum, or all three. Testing security is, to my mind, generally a good thing. But publishing the private key turns it from a great learning experience into a browser emergency update situation (at least at the moment, in Firefox, although we are working to make this easier with OneCRL).

Friends don’t publish private keys for certs for friends’ domain names. Don’t be that guy. :-)

QMO: Firefox 38 Aurora Testday Results

Hello everyone!

Last Friday, March 20th, we held the Firefox 38 Aurora Testday. We’d like to take this opportunity to thank everyone for getting involved in the proposed testing activities and in general, for your hard work in helping us make Firefox even better.

Many thanks to doublex, Aleksej, Hossain Al Ikram and kenkon for their efforts and contributions and to all our moderators. Your help is greatly appreciated!

We look forward to seeing you at the next Testday. Keep an eye on QMO for upcoming events and schedule announcements!

Byron Jones: happy bmo push day!

the following changes have been pushed to bugzilla.mozilla.org:

  • [1145502] cf_crash_signature missing in fresh BMO install
  • [1145689] Adding “Content Services” into Key Initiatives
  • [1146219] the ‘abuse’ comment tag should function like the ‘abusive’ tag and trigger automatic account disabling
  • [1141165] Enforce mandatory field on form.reps.mentorship
  • [1146434] renaming of “Distribution/Bundling” component broke one of the project kickoff form’s sub-bugs
  • [1096798] prototype modal show_bug view

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla

Chris Double: Contributing to Servo

Servo is a web browser engine written in the Rust programming language. It is being developed by Mozilla. Servo is open source and the project is developed on github.

I was looking for a small project to do some Rust programming and Servo being written in Rust seemed likely to have tasks that were small enough to do in my spare time yet be useful contributions to the project. This post outlines how I built Servo, found issues to work on, and got them merged.

Preparing Servo

The Servo README has details on the pre-requisites needed. Installing the pre-requisites and cloning the repository on Ubuntu was:

$ sudo apt-get install curl freeglut3-dev \
   libfreetype6-dev libgl1-mesa-dri libglib2.0-dev xorg-dev \
   msttcorefonts gperf g++ cmake python-virtualenv \
   libssl-dev libbz2-dev libosmesa6-dev 
...
$ git clone https://github.com/servo/servo

Building Rust

The Rust programming language has been fairly volatile in terms of language and library changes. Servo deals with this by requiring a specific git commit of the Rust compiler to build. The Servo source is periodically updated for new Rust versions. The commit id for Rust that is required to build is stored in the rust-snapshot-hash file in the Servo repository.

If the Rust compiler isn’t installed already there are two options for building Servo. The first is to build the required version of Rust yourself, as outlined below. The second is to let the Servo build system, mach, download a binary snapshot and use that. If you wish to do the latter, and it may make things easier when starting out, skip this step to build Rust.

$ cat servo/rust-snapshot-hash
d3c49d2140fc65e8bb7d7cf25bfe74dda6ce5ecf/rustc-1.0.0-dev
$ git clone https://github.com/rust-lang/rust
$ cd rust
$ git checkout -b servo d3c49d2140fc65e8bb7d7cf25bfe74dda6ce5ecf
$ ./configure --prefix=/home/myuser/rust
$ make
$ make install

Note that I configure Rust to be installed in a directory off my home directory. I do this out of preference to enable managing different Rust versions. The build will take a long time and once built you need to add the prefix directories to the PATH:

$ export PATH=$PATH:/home/myuser/rust/bin
$ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/myuser/rust/lib

Building Servo

There is a configuration file used by the Servo build system to store information on what Rust compiler to use, whether to use a system wide Cargo (Rust package manager) install and various paths. This file, .servobuild, should exist in the root of the Servo source that was cloned. There is a sample file that can be used as a template. The values I used were:

[tools]
system-rust = true
system-cargo = false

[build]
android = false
debug-mozjs = false

If you want to use a downloaded binary snapshot of Rust to build Servo you should set the system-rust setting to false. With it set to true as above it will expect to find a Rust of the correct version in the path.

Servo uses the mach command line interface that is used to build Firefox. Once the .servobuild is created then Servo can be built with:

$ ./mach build

Servo can be run with:

$ ./mach run http://bluishcoder.co.nz

To run the test suite:

$ ./mach test

Finding something to work on

The github issue list has three useful labels for finding work. They are:

For my first task I searched for E-easy issues that were not currently assigned (using the C-assigned label). I commented in the issue asking if I could work on it and it was then assigned to me by a Servo maintainer.

Submitting the Fix

Fixing the issue involved:

  • Fork the Servo repository on github.
  • Clone my fork locally and make the changes required to the source in a branch I created for the issue I was working on.
  • Commit the changes locally and push them to my fork on github.
  • Raise a pull request for my branch.

Raising the pull request runs a couple of automated actions on the Servo repository. The first is an automated response thanking you for the changes followed by a link to the external critic review system.

Reviews

The Servo project uses the Critic review tool. This will contain data from your pull request and any reviews made by Servo reviewers.

To address reviews, I made the required changes and committed them to my local branch as separate commits using the fixup flag to git commit. This associates the new commit with the original commit that contained the change. It allows easier squashing later.

$ git commit --fixup=<commit id of original commit>

The changes are then pushed to the github fork and the previously made pull request is automatically updated. The Critic review tool also automatically picks up the change and will associate the fix with the relevant lines in the review.

With some back and forth the changes get approved and a request might be made to squash the commits. If fixup was used to record the review changes then they will be squashed into the correct commits when you rebase:

$ git fetch origin
$ git rebase --autosquash origin/master

Force pushing this to the fork will result in the pull request being updated. When the reviewer marks this as r+ the merge to master will start automatically, along with a build and test runs. If test failures happen these get added to the pull request and the review process starts again. If tests pass and it merges then it will be closed and the task is done.

A full overview of the process is available on the github wiki under Github and Critic PR handling 101.

Conclusion

The process overhead of committing to Servo is quite low. There are plenty of small tasks that don’t require a deep knowledge of Rust. The first task I worked on was basically a search/replace. The second was more involved, implementing view-source protocol and text/plain handling. The latter allows the following to work in Servo:

$ ./mach run view-source:http://bluishcoder.co.nz
$ ./mach run http://cd.pn/plainttext.txt

The main issues I encountered working with Rust and Servo were:

  • Compiling Servo is quite slow. Even changing private functions in a module would result in other modules rebuilding. I assume this is due to cross module inlining.
  • I’d hoped to get away from intermittent test failures like there are in Gecko but there seems to be the occasional intermittent reftest failure.

The things I liked:

  • Very helpful Servo maintainers on IRC and in github/review comments.
  • Typechecking in Rust helped find errors early.
  • I found it easier comparing Servo code to HTML specifications and following them together than I do in Gecko.

I hope to contribute more as time permits.

Dave Townsend: Making communicating with chrome from in-content pages easy

As Firefox increasingly switches to support running in multiple processes, we’ve been finding common problems. Where we can, we are designing nice APIs to make solving them easy. One problem is that we often want to run in-content pages like about:newtab and about:home in the child process without privileges, making them safer and less likely to bring down Firefox in the event of a crash. These pages still need to get information from and pass information to the main process though, so we have had to come up with ways to handle that. Often we use custom code in a frame script acting as a middle-man, using things like DOM events to listen for requests from the in-content page and then messaging the main process.

We recently added a new API to make this problem easier to solve. Instead of needing code in a frame script the RemotePageManager module allows special pages direct access to a message manager to communicate with the main process. This can be useful for any page running in the content area, regardless of whether it needs to be run at low privileges or in the content process since it takes care of listening for documents and hooking up the message listeners for you.

There is a low-level API available but the higher-level API is probably more useful in most cases. If your code wants to interact with a page like about:myaddon just do this from the main process:

Components.utils.import("resource://gre/modules/RemotePageManager.jsm");
let manager = new RemotePages("about:myaddon");

The manager object is now something resembling a regular process message manager. It has sendAsyncMessage and addMessageListener methods but unlike the regular e10s message managers it only communicates with about:myaddon pages. Unlike the regular message managers there is no option to send synchronous messages or pass cross-process wrapped objects.

When about:myaddon is loaded, it has sendAsyncMessage and addMessageListener functions defined in its global scope for regular JavaScript to call. Anything that can be structured-cloned can be passed between the processes.
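
Putting the two halves together, a round trip looks roughly like the sketch below. The message names and payload are invented for illustration; the RemotePages manager and the in-page globals are the pieces described above:

// Main process side.
Components.utils.import("resource://gre/modules/RemotePageManager.jsm");
let manager = new RemotePages("about:myaddon");

// "MyAddon:RequestState" and "MyAddon:State" are illustrative message names.
manager.addMessageListener("MyAddon:RequestState", message => {
  // Reply (broadcast to every about:myaddon page this manager is tracking).
  manager.sendAsyncMessage("MyAddon:State", { enabled: true });
});

// In the about:myaddon page script, the globals are already defined.
addMessageListener("MyAddon:State", message => {
  document.body.textContent = "Enabled: " + message.data.enabled;
});
sendAsyncMessage("MyAddon:RequestState");

The shape mirrors the familiar e10s message manager style, just scoped to a single URL.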

The module documentation has more in-depth examples showing message passing between the page and the main process.

The RemotePageManager module is available in nightlies now and you can see it in action with the simple change I landed to switch about:plugins to run in the content process. For the moment the APIs only support exact URL matching but it would be possible to add support for regular expressions in the future if that turns out to be useful.

Mozilla Science Lab: Mozilla Science Lab Week in Review, March 16-22

The Week in Review is our weekly roundup of what’s new in open science from the past week. If you have news or announcements you’d like passed on to the community, be sure to share on Twitter with @mozillascience and @billdoesphysics, or join our mailing list and get in touch there.

  • Andrew Nesbitt has launched libraries.io, a project to help tackle the discoverability challenge in open source and open science software. By leveraging the PageRank algorithm, Nesbitt hopes to represent what is actually being used (rather than what is simply admired) to better represent the true workhorses of open source.
  • The Center for Open Science began composing a wiki on the arguments and motivations for open science, as part of a “growing open source, open science” meeting that Titus Brown and Kaitlin Thaney co-organized last week. The new wiki explores ways we can work together better across open science initiatives – watch their space for developments, and get in touch there to contribute!
  • The Wikimedia Foundation has adopted an open access policy to support the free reuse of research produced with their support.
  • Stephanie Hampton et al have submitted a preprint of ‘The Tao of Open Science for Ecology‘, a paper outlining a roadmap to understanding and participating in open science. This paper got its start as a collaborative discussion at the NCEAS Codefest in 2014.
  • GitHub added PDF rendering to their services last week.
  • PLOS Biology published recommendations for the role of publishers in the dissemination of open data.
  • Jojo Scoble wrote a great blog post for Digital Science describing her experiences sharing her data openly, and why other researchers should consider it. On the common worry of whether a dataset is ‘good’ enough to publish, Scoble quoted her former supervisor:

    “You could spend years trying to collect the perfect data set when you should be publishing what you have, which is enough.”

  •  The National Science Foundation in the US announced a plan to accommodate comprehensive public access to research results; in it, the “NSF will require that articles in peer-reviewed scholarly journals and papers in juried conference proceedings or transactions be deposited in a public access compliant repository and be available for download, reading and analysis within one year of publication.”
  • The Fair Access to Research and Technology (FASTR) act was reintroduced recently to the US Congress. Successor to the Federal Research Public Access Act, FASTR introduces, among other provisions, an emphasis on reuse and correct attribution.
  • PLOS Collections has showcased a collection of negative results, underscoring the importance of publishing such studies to the broader scientific community.
  • In a similar vein, submissions are open for the ERROR conference, highlighting negative results in Munich, Germany on 3-4 September.
  • UNESCO recently put its open access curriculum online; the content is targeted at librarians and researchers, and emphasizes topics from introducing open access, to intellectual property rights to how to share your work in an open access model.
  • Also in partnership with UNESCO, Foster Open Science is hosting a two-day Open Science Workshop for European graduate school administrators in order to ‘construct a roadmap for making Open Science certifiable and standard training for future graduates‘.
  • Tom Baden et al recently published an article on 3D printing your own lab equipment, in order to mitigate the costs and hurdles to setting up a research program.
  • The Scholarly Kitchen recently interviewed two of the founders of Advancing Research and Communication Scholarship (ARCS), a new conference coming April 26-28 in Philadelphia ‘designed to provide a broad and collaborative forum for addressing and affecting scholarly and scientific communication.‘ (- Alice Meadows, Scholarly Kitchen)
  • Chris Parr wrote an article for Times Higher Education on Carole Goble’s work and comments on the hurdles created by ostentation in scholarly communication and questions raised by the failure to distribute both data and code.
  • The Su Lab is holding a hackathon on biomedical big data, May 7-9.
  • Finally, don’t miss our map of hacky hours and study groups – and if you know of anyone running a meetup about coding for researchers, let us know so we can add you to the map!

 

Daniel StenbergFixing the Func KB-460 ‘-key

Func KB-460 keyboardI use a Func KB-460 keyboard with Nordic layout – that basically means it is a qwerty design with the Nordic keys for “åäö” on the right side as shown on the picture above. (yeah yeah Swedish has those letters fairly prominent in the language, don’t mock me now)

The most annoying part with this keyboard has been that the key repeat on the apostrophe key has been sort of broken. If you pressed it and then another key, it would immediately generate another (or more than one) apostrophe. I’ve sort of learned to work around it with some muscle memory and treating the key with care but it hasn’t been ideal.

Someone told me this problem apparently only happens on Linux (I’ve never used the keyboard on anything else), and what do you know? Here’s how to fix it on a recent Debian machine that runs systemd; your mileage will vary if you have something else:

1. Edit the file “/lib/udev/hwdb.d/60-keyboard.hwdb”. It contains mappings of scan codes to key codes for various keyboards. We will add a special line for a single scan code, and for this particular keyboard model only. The line includes the USB vendor and product IDs in uppercase; you can verify they are correct for your own keyboard with lsusb -v.

So, add something like this at the end of the file:

# func KB-460
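# the first line below matches this keyboard model only, by USB vendor ID (195D) and product ID (2030);
# the second maps the single scan code 0x70031 to "reserved"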
keyboard:usb:v195Dp2030*
 KEYBOARD_KEY_70031=reserved

2. Now update the database:

$ udevadm hwdb --update

3. … and finally reload the tweaks:

$ udevadm trigger

4. Now you should have a better working key and life has improved!

For a slightly older Debian without systemd, here are instructions I was given but have not tested myself; I include them here for the world:

1. Find the relevant input for the device by “cat /proc/bus/input/devices”

2. Make a very simple keymap. Make a file with only a single line like this:

$ cat /lib/udev/keymaps/func
0x70031 reserved

3. Map the key with ‘keymap’:

$ sudo /lib/udev/keymap -i /dev/input/eventX /lib/udev/keymaps/func

where X is the event number you figured out in step 1.

The related kernel issue.

Pierros PapadeasMultiple emails on mozillians.org

tl;dr version: You can now associate multiple emails to your mozillians.org profile

Background

Since the start, users of mozillians.org were able to associate only one email per profile. This was used both as email displayed on your profile (depending on your privacy settings) but most importantly as the email used to login using Persona.

Rationale

Most of us own and use multiple emails every day. Personal, business, alias and any combo in between. Even within various Mozilla properties, people have associated different profiles with different emails (e.g. SuMo account vs. Bugzilla email). Although we need to recognize and respect the will of some people to use different emails as different (separate) online personas, we also need to find ways to make identity management and consolidation easier for people who choose to use multiple emails under the same name.
Being able to associate multiple emails with one mozillians.org profile presents us with some really interesting advantages. For one, you can log in to websites that check for your mozillians.org account using any email associated with your Persona account. Other mozillians will also be able to look you up using any of your emails. Finally, from a metrics standpoint we will be able to effectively deduplicate accounts and metrics/statistics across Mozilla’s different systems.

Implementation

  • Main email is being used for communication with mozillians in Mozillians.org
  • Alternate emails are mostly being used for identity deduplication
  • API v2 exposes alternate emails

What should I do?

  • Login to mozillians.org
  • Click “Edit your profile”
  • Click “Edit E-mail addresses”
There we provide all the functionality to manage your profile’s emails:
  • Add/delete alternate email address
  • Change your primary email address
  • Manage email visibility

Multiple Accounts?

We don’t expect many people to have multiple profiles on mozillians.org; we cannot know for sure, only anecdotally. People with multiple accounts should contact us (#commtools on IRC, or open a bug here) for help merging, or they can choose to keep one of them and delete the others.

What is next?

The mozillians.org dev team is working tirelessly on new features and enhancements that will make mozillians.org even easier to use and more robust as a source of truth about all things mozillian. You can check out our roadmap here, follow our development and contribute on github, and join our discussions here.

Cameron KaiserPwn2Own this Power Mac (plus: IonPower's time is running out)

All of the mighties fell at this year's Pwn2Own, including Firefox, where clever minds find gaping holes for fun and a small amount of profit (but then it also makes a great resume builder, which may be a more reliable paycheque). The holes last year were quickly patched, and after I determined we were also vulnerable we followed suit. As usual, for this year's naughtiness Mozilla already has patched versions available, including ESR 31.5.3.

However, the two holes used for this year's marvelous and somewhat alarming crack are not exploitable in TenFourFox directly: the SVG navigation fault cannot be effectively used to escalate privileges in TenFourFox's default configuration, and we don't even build the code that uses JavaScript bounds checking. The navigation fault may have other weaponizeable vectors and we do want to roll that fix, but the good news is 31.6 will come out this weekend, so no urgent chemspill is necessary unless I discover some odd way of busting through it between now and then.

I lost about a week of hacking time to one of my occasional bouts of bronchitis, which is pushing IonPower's timing very close to the wire. We need two cycles for 38 to allow localizers to catch up and people to test, and of course somewhere in that timeframe we also have to finish the move from Eric Schmidt is a Poopypants Land Google Code. Over the weekend I got IonPower to pass the test suite in Baseline mode, which is very encouraging, but some of the same problems that doomed PPCBC's Ion work are starting to rear up again.

The biggest problem that recurred is an old one: Ion's allocator is not endian-safe. I get bad indices off it for stack slots and other in-memory boxed values, and all but the simplest scripts either assert deep within Ion's bowels (not our new PowerPC backend) or generate code that is verifiably wrong. Unfortunately, Mozilla doesn't really document Ion's guts anywhere, so I don't know where to start with fixing it, and all the extant Ion backends, even MIPS, are little-endian. Maybe some Mozilla JIT folks are reading and can comment? (See also the post in the JavaScript engine internals group.)

One old problem with bad bailout stack frames, however, is partially solved with IonPower. I say partially because even though the stack frame is sane now, it still crashes, but I have a few ideas about that. However, the acid test is getting Ion code to jump to Baseline, run a bit in Baseline, and then jump back to Ion to finish execution. PPCBC could never manage this without crashing. If IonPower can do no better, there is no point in continuing the porting effort.

Even if this effort runs aground again, that doesn't make IonPower useless. PPCBC may pass the test suite, but some reproducible bugs in Tenderapp indicate that it goes awry in certain extremely-tough-to-debug edge cases, and IonPower (in Baseline mode) does pass the test suite as well now. If I can get IonPower to be as fast or faster than PPCBC even if it can't execute Ion code either, we might ship it anyway as "PPCBC II" in Baseline-only mode to see if it fixes those problems -- I have higher confidence that it will, because it generates much more sane and "correct" output and doesn't rely on the hacks and fragile glue code that PPCBC does in 24 and 31. I have to make this decision sometime mid-April, though, because we're fast approaching EOL for 31.

Also, as of Firefox 38 Mozilla no longer supports gcc 4.6, the compiler which we build with. However, I'm not interested in forcing a compiler change so close to the next ESR, and it appears that we should still be able to get it working on 4.6 with some minor adjustments. That won't be the case for Fx39, if we're even going to bother with that, but fortunately there is a gcc 4.8 in MacPorts and we might even use Sevan's gcc from pkgsrc. Again, the decision to continue will be based on feasibility and how close Electrolysis is to becoming mandatory before 45ESR, which is the next jump after that. For now, TenFourFox 38 is the highest priority.

This Week In RustThis Week in Rust 75

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Send me an email! Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors or omissions in this week's issue, please submit a PR.

What's cooking on master?

79 pull requests were merged in the last week, and 9 RFC PRs.

Now you can follow breaking changes as they happen!

Breaking Changes

Other Changes

New Contributors

  • Johannes Oertel
  • kjpgit
  • Nicholas
  • Paul ADENOT
  • Sae-bom Kim
  • Tero Hänninen

Approved RFCs

New RFCs

Notable Links

Project Updates

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

Quote of the Week

<mbrubeck> the 5 stages of loss and rust
<mbrubeck> 1. type check. 2. borrow check. 3. anger. 4. acceptance. 5. rust upgrade

Thanks to jdm for the tip. Submit your quotes for next week!

Anthony JonesYouTube, MSE and Firefox 37

Being invisible is important when you build infrastructure. You don't notice what your browser does for you unless it is doing a poor job. We have been busy making Firefox video playback more robust, more asynchronous, faster and better.

You may have some memory of selecting between high and low quality on YouTube. When you switched it would stop the video and buffer the video at the new quality. Now it defaults to Auto but allows you to manually override. You may have noticed that the Auto mode doesn't stop playing when it changes quality. Nobody really noticed, but a tiny burden was lifted from users. You need to know exactly one less thing to watch videos on YouTube and many other sites.

This Auto mode is otherwise known as DASH, which stands for Dynamic Adaptive Streaming over HTTP. Flash has supported adaptive streaming for some time. In HTML5 video, DASH is supported on top of an API called MSE (Media Source Extensions). MSE allows Javascript to directly control the data going into the video element. This allows DASH to be supported in Javascript, along with some other things that I'm not going to go into.
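
To make that concrete, here is a minimal sketch of what driving a video element through MSE looks like from script. It is illustrative TypeScript only: the codec string and segment URLs are made-up placeholders, and a real DASH player additionally measures bandwidth and picks the quality of each subsequent segment.

// Minimal MSE sketch: hand media segments to a <video> element from script.
const video = document.querySelector("video") as HTMLVideoElement;
const mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener("sourceopen", async () => {
  // Placeholder codec string; a real player uses whatever the content requires.
  const buffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.42E01E"');
  for (const url of ["segments/init.mp4", "segments/chunk-001.m4s"]) {
    const data = await (await fetch(url)).arrayBuffer();
    buffer.appendBuffer(data);
    // Wait for the append to complete before queuing the next segment.
    await new Promise((done) => buffer.addEventListener("updateend", done, { once: true }));
  }
  mediaSource.endOfStream();
});

An adaptive (DASH) player is essentially this append loop plus logic that chooses a higher or lower quality variant for each chunk it appends.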


It has taken a surprising amount of work to make this automatic. My team has been working on adding MSE to Firefox for a couple of years now as well as adding MP4 support on a number of platforms. We're finally getting to the point where it is working really well on Windows Vista and later in Firefox 37 beta. I know people will ask, so MSE is coming soon for Mac, and it is coming later for Linux and Windows XP.

Making significant changes isn't without its pain but it is great to finally see the light at the end of the tunnel. Firefox beta, developer edition and nightly users have put up with a number of teething problems. Most of them have been sorted out. I'd like to thank everyone who has submitted a crash report, written feedback or filed bugs. It has all helped us to find problems and make the video experience better.

Robustness goes further than simply fixing bugs. To make something robust it is necessary to keep simplifying the design and to create re-usable abstractions. We've switched to using a thread pool for decoding, which keeps the number of threads down. Threads use a lot of address space, which is at a premium in a 32 bit application.

We've used a promise-like abstraction to make many things non-blocking. They make chaining asynchronous operations much simpler. They're like Javascript promises, except being C++ they also guarantee you get called back on the right thread.
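
For readers who think in JavaScript, the chaining pattern reads roughly like the sketch below. The names are purely illustrative and this is not Gecko's actual C++ abstraction, which additionally names the target thread each callback must run on.

// Illustrative only: chaining asynchronous playback steps with promises.
function readMetadata(): Promise<{ durationSeconds: number }> {
  // Stand-in for real demuxer work that completes asynchronously.
  return Promise.resolve({ durationSeconds: 60 });
}

function decodeFirstFrame(meta: { durationSeconds: number }): Promise<string> {
  return Promise.resolve(`first frame of a ${meta.durationSeconds}s video`);
}

readMetadata()
  .then(decodeFirstFrame) // runs only once the metadata step has resolved
  .then((frame) => console.log(frame))
  .catch((err) => console.error("playback setup failed:", err));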

We're working towards getting all the complex logic on a single thread, with all the computation done in a thread pool. Putting the video playback machinery on a single thread makes it much clearer which operations are synchronous and which ones are asynchronous. It doesn't hurt performance as long as the state machine thread never blocks. In fact you get a performance win because you avoid locking and cache contention.

We're whitelisting MSE for YouTube at first, but we intend to roll it out to other sites soon. There are a couple of spec compliance issues that we need to resolve before we can remove the whitelist. Meanwhile, YouTube is looking really good in Firefox 37.



Daniel StenbergSumming up the birthday festivities

I blogged about curl’s 17th birthday on March 20th 2015. I’ve done similar posts in the past and they normally pass by mostly undetected and hardly discussed. This time, something else happened.

Primarily, the blog post quickly became the single most viewed blog entry I’ve ever written – and I’ve been blogging for many, many years. In just the first day it was up, I counted more than 65,000 views.

The blog post got more comments than any other blog post I’ve ever done. They have probably stopped by now, but there are 60 of them, almost every one of them saying congratulations and/or thanks.

The posting also got discussed on both hacker news and reddit, totaling more than 260 comments, most of them in a positive spirit.

The initial tweet I made about my blog post is the most retweeted and starred tweet I’ve ever posted: at least 87 retweets and 49 favorites (and it might grow a bit more over time). Others subsequently tweeted the link hundreds of times, and I got numerous replies and friendly call-outs on twitter saying “congrats” and “thanks” in many variations.

Spontaneously (ie not initiated or requested by me but most probably because of a comment on hacker news), I also suddenly started to get donations from the curl web site’s donation web page (to paypal). Within 24 hours from my post, I had received 35 donations from friendly fans who donated a total sum of  445 USD. A quick count revealed that the total number of donations ever through the history of curl’s lifetime was 43 before this day. In one day we had basically gotten as many as we had gotten the first 17 years.

Interesting data from this donation “race”: I got donations varying from 1 USD (yes one dollar) to 50 USD and the average donation was then 12.7 USD.

Let me end this summary by thanking everyone who in various ways made the curl birthday extra fun by being nice and friendly and some even donating some of their hard earned money. I am honestly touched by the attention and all the warmth and positiveness. Thank you for proving internet comments can be this good!

John O'Duinn“The Race for Space” by Public Service Broadcasting

I was happily surprised by this as a gift recently.

For me, intermixing the old original broadcasts with newly composed music worked well as an idea. Choosing which broadcasts to include was just as important as composing the right music.

I liked how the composers framed the album around 9 pivotal moments, from 1957 (the launch of Sputnik) to 1972 (Apollo 17, the last Apollo departing the moon). Obviously, there were a lot of broadcasts to choose from, and I liked their choices – some of which I’d never heard before: Kennedy’s “We choose to go to the moon” speech, a homage to Valentina Tereshkova (the first woman in space), Apollo 8’s “see you on the flip side” (the earthrise photo taken by Apollo 8 is still one of my favourites), and the tense interactions of all the ground and flight teams in the final seconds of Apollo 11’s descent to landing (including handling the 1202 and 1201 errors!).

All heady stuff and well worth a listen.

Chris PearceReplacing Lenovo optical drive with second hard drive: The Lenovo adapter is disappointing

I recently ordered a Lenovo Serial ATA Hard Drive Bay Adapter III for my Lenovo T510 laptop. This can hold a hard drive, and replaces the DVD/CD-ROM drive in your laptop. This enables your laptop to run a second hard drive.

I've used my optical drive two, maybe three times since getting the laptop, so swapping it for another hard drive seems like a good trade for me.

The Lenovo drive bay itself works fine, but I'm still disappointed in Lenovo's product.

When installed, the drive bay looks like this:
Lenovo Serial ATA Hard Drive Bay Adapter installed in a Lenovo T510
The problem here is that there's a gap of approximately 3mm (~0.12 inches) between the top of the drive bay and the ceiling of the optical disk cavity. This means the drive bay can wobble vertically, so much so that I feel the need to tape it in place to stop it flopping around. This looks ridiculous.

Secondly, in order to install your hard drive inside the Lenovo drive bay, you need a hard drive cover. This is the metal cover that encases the hard drives shipping in Lenovo laptops. The covers normally have rubber bumpers/rails to stop the drive moving around. You need to take the bumpers off to install your drive into the hard drive bay.

The hard drive cover looks like this:

Lenovo hard drive cover

And with a hard drive in it, the hard drive cover looks like this:

Lenovo hard drive cover encasing a hard drive.

Note the screws. The drive bay has notches which the screws snap into, holding the drive securely inside the drive bay. Possibly the screws are the only important bit here; you probably don't actually need the drive cover to install the drive into the bay, just the screws since they're what hold the drive in place inside the drive bay.

The frustrating thing is that nothing on the Lenovo web site tells you that you need a drive cover to install a drive into the drive bay. My drive bay arrived and I had to loot my old laptop's drive cover in order to install a new drive into my current laptop.

And I also couldn't find the drive covers listed on Lenovo's web site. Presumably if you buy a laptop hard drive from Lenovo they come with this cover, and presumably Lenovo use this as a way to force you to buy all your laptop hard drives directly from Lenovo.

That's the sort of behaviour I'd expect from Apple, not Lenovo.

Thankfully Ann at IT was able to figure out how to order the drive covers separately. Thanks Ann!

Overall, the product was easy to install (once I had a drive cover) and works fine (apart from the wobble) but I'm still disappointed. Next time, I'll try one of newmodeus' Lenovo drive caddies:

Update 22 March 2015: This blog post now has a Russian translation, available at softdroid.net: Замена оптического дисковода вторым жестким диском на Lenovo (“Replacing the optical drive with a second hard drive on a Lenovo”).

Pascal FinetteLink Pack (March 22nd)

What I was reading this week:

Tantek ÇelikDublin Core Application Profiles — A Brief Dialogue

IndieWebCamp Cambridge 2015 is over. Having finished their ice cream and sorbet while sitting on a couch at Toscanini’s watching it snow, the topics of sameAs, reuse, and general semantics lead to a mention of Dublin Core Application Profiles.

  1. A:
    Dublin Core Application Profiles could be useful for a conceptual basis for metadata interoperation.
  2. T:
    (Yahoos for dublin core application profiles, clicks first result)
  3. T:
    Dublin Core Application Profile Guidelines (SUPERSEDED, SEE Guidelines for Dublin Core Application Profiles)
  4. T:
    Kind of like how The Judean People’s Front was superseded by The People’s Front of Judea?
  5. A:
    (nervous laugh)
  6. T:
    Guidelines for Dublin Core Application Profiles
  7. T:
    Replaces: http://dublincore.org/documents/2008/11/03/profile-guidelines/
  8. T:
    Hmm. (clicks back)
  9. T:
    Dublin Core Application Profile Guidelines
  10. T:
    Is Replaced By: Not applicable, wait, isn’t that supposed to be an inverse relationship?
  11. A:
    I’m used to this shit.
  12. T:
    (nods, clicks forward, starts scrolling, reading)
  13. T:
    We decide that the Library of Congress Subject Headings (LCSH) meet our needs. - I’m not sure the rest of the world would agree.
  14. A:
    No surprises there.
  15. T:
    The person has a name, but we want to record the forename and family name separately rather than as a single string. DCMI Metadata Terms has no such properties, so we will take the properties foaf:firstName and foaf:family_name
  16. T:
    Wait what? Not "given-name" and "family-name"? Nor "first-name" and "last-name" but "firstName" and "family_name"?!?
  17. A:
    Clearly it wasn’t proofread.
  18. T:
    But it’s in the following table too. foaf:firstName / foaf:family_name
  19. A:
    At least it’s internally consistent.
  20. A:
    Oh, this is really depressing.
  21. A:
    Did they even read the FOAF spec or did they just hear a rumour?
  22. T:
    (opens text editor)

Air MozillaWebdev Beer and Tell: March 2015

Webdev Beer and Tell: March 2015 Web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on.

Kim MoirScaling Yosemite

We migrated most of our Mac OS X 10.8 (Mountain Lion) test machines to 10.10.2 (Yosemite) this quarter.

This project had two major constraints:
1) Use the existing hardware pool (~100 r5 mac minis)
2) Keep wait times sane [1].  (The machines are constantly running tests most of the day due to the distributed nature of the Mozilla community and this had to continue during the migration.)

So basically upgrade all the machines without letting people notice what you're doing!

Yosemite Valley - Tunnel View Sunrise by ©jeffkrause, Creative Commons by-nc-sa 2.0

Why didn't we just buy more minis and add them to the existing pool of test machines?
  1. We run performance tests and thus need to have all the machines running the same hardware within a pool so performance comparisons are valid.  If we buy new hardware, we need to replace the entire pool at once.  Machines with different hardware specifications = useless performance test comparisons.
  2. We tried to purchase some used machines with the same hardware specs as our existing machines.  However, we couldn't find a source for them.  As Apple stops production of old mini hardware each time they announce a new one, they are difficult and expensive to source.
Apple Pi by ©apionid, Creative Commons by-nc-sa 2.0

Given that Yosemite was released last October, why are we only upgrading our test pool now?  We wait until the population of users running a new platform [2] surpasses that of the old one before switching.

Mountain Lion -> Yosemite is an easy upgrade on your laptop.  It's not as simple when you're updating production machines that run tests at scale.

The first step was to pull a few machines out of production and verify the Puppet configuration was working. In Puppet, you can specify commands to run only on certain operating system versions, so we implemented several changes to accommodate Yosemite. For instance, the default scrollbar behaviour had to be changed, new services that interfere with test runs needed to be disabled, debug tests required new Apple security permissions to be configured, and so on.

Once the Puppet configuration was stable, I updated our configs so that people could run tests on Try and allocated a few machines to this pool. We opened bugs for tests that failed on Yosemite but passed on other platforms.  This was a very iterative process: run tests on try, look at failures, file bugs, fix test manifests. Once we had the opt (functional) tests in a green state on try, we could start the migration.

Migration strategy
  • Disable selected Mountain Lion machines from the production pool
  • Reimage as Yosemite, update DNS and let them puppetize
  • Land patches to disable Mountain Lion tests and enable corresponding Yosemite tests on selected branches
  • Enable Yosemite machines to take production jobs
  • Reconfig so the buildbot master enables new Yosemite builders and schedules jobs appropriately
  • Repeat this process in batches
    • Enable Yosemite opt and performance tests on trunk (gecko >= 39) (50 machines)
    • Enable Yosemite debug (25 more machines)
    • Enable Yosemite on mozilla-aurora (15 more machines)
We currently have 14 machines left on Mountain Lion for mozilla-beta and mozilla-release branches.

As I mentioned earlier, the two constraints with this project were to use the existing hardware pool that constantly runs tests in production and to keep the existing wait times sane.  We encountered two major problems that impeded that goal.

It's a compliment when people say things like "I didn't realize that you updated a platform" because it means the upgrade did not cause large scale fires for all to see. So it was nice to hear that from one of my colleagues this week.

Thanks to philor, RyanVM and jmaher for opening bugs with respect to failing tests and greening them up.  Thanks to coop for many code reviews. Thanks dividehex for reimaging all the machines in batches and to arr for her valiant attempts to source new-to-us minis!

References
[1] Wait times represent the time from when a job is added to the scheduler database until it actually starts running. We usually try to keep this under 15 minutes, but it really varies depending on how many machines we have in the pool.
[2] We run tests for our products on a matrix of operating systems and operating system versions. The terminology for operating system x version in many release engineering shops is a platform.  To add to this, the list of platforms we support varies across branches.  For instance, if we're going to deprecate a platform, we'll let this change ride the trains to release.

Further reading
Bug 1121175: [Tracking] Fix failing tests on Mac OSX 10.10 
Bug 1121199: Green up 10.10 tests currently failing on try 
Bug 1126493: rollout 10.10 tests in a way that doesn't impact wait times
Bug 1144206: investigate what is causing frequent talos failures on 10.10
Bug 1125998: Debug tests initially took 1.5-2x longer to complete on Yosemite


Why don't you just run these tests in the cloud?
  1. The Apple EULA severely restricts virtualization on Mac hardware. 
  2. I don't know of any major cloud vendors that offer the Mac as a platform.  Those that claim they do are actually renting racks of Macs on a dedicated per host basis.  This does not have the inherent scaling and associated cost saving of cloud computing.  In addition, the APIs to manage the machines at scale aren't there.
  3. We manage ~350 Mac minis.  We have more experience scaling Apple hardware than many vendors. Not many places run CI at Mozilla scale :-) Hopefully this will change and we'll be able to scale testing on Mac products like we do for Android and Linux in a cloud.

Emma IrwinP2PU Course in a Box & Mozilla Community Education

Last year I created my first course on the P2PU platform, titled ‘Hacking Open Source Participation’, and through that fantastic experience I stumbled across a newer P2PU project called Course in a Box. Built on the Jekyll blogging software, Course in a Box makes it easy to create online educational content powered by Github Pages.

As awesome as this project is, there were a number of challenges I needed to solve before adopting it for Mozilla’s Community Education Platform:

 Hierarchy

Jekyll is a blog-aware, static site generator. It uses template and layout files + markdown + CSS to display posts. Course in a Box comes with a top-level category for content called modules, and within those modules is the content – which works beautifully for a single-course purpose.

The challenge is that we need to write education and training materials on a regular basis, and creating multiple Course in a Box(es) would be a maintenance nightmare.  What I really needed was a way to build multiple courses under one or more topics versus the ‘one course’ model.  To do that, we needed to build out a hierarchy of content.

What I did

Visualized the menu moving from a list of course modules

(screenshot: the menu as a list of course modules)

To a list of course topics.

(screenshot: the menu as a list of course topics)

So Marketpulse, DevRel (for example) are course topics.  Topics are followed by courses, which then contain modules.

On the technical side, I added a new variable called submodules to the courses.yml data file.

(screenshot: the new submodules variable in the courses.yml data file)

Submodules are prefixed with the topic they belong ‘under’, for example: reps_mentor_training is a module in the topic reps.  This is also how module folders are named:

(screenshot: module folders named with their topic prefix)

Using this method of prefixing modules with topics, it was super-simple to create a dropdown menu.

(screenshot: the resulting dropdown menu)

As far as Jekyll is concerned, these are all still ‘modules’, which means that even top level topics can have content associated.  This works great for a ‘landing page’ type of introduction to a topic.

Curriculum Modularity

As mentioned, Jekyll is a blogging platform, so there’s no depth or usability designed into the content architecture, and this is a problem for our goal of writing modular curriculum.  I wanted to make it possible to reuse curriculum not only across our instance of Course in a Box, but across other instances at Mozilla as well.

What I did

I created a separate repository for community curriculum and made this a git submodule  in the _includes folder of Course in a Box.

(screenshot: the community curriculum repository added as a git submodule in the _includes folder)

With this submodule and Jekyll’s include() function, I was able to easily reference our modular content from a post:

{% include community_curriculum/market_pulse/FFOS/en/introduction.md %}

The only drawback is that Jekyll expects all content referenced with include() to be in a specific folder – and so having content in with design files is – gah!  But I can live with it.

And of course we can do this for multiple repositories if we need to.  By using a submodule we can stick to certain versions/releases of curriculum if needed.  Additionally, this makes it easier for contributors to focus on ‘just the content’ (and not get lost in Jekyll code) when they are forking and helping improve curriculum.

Finally

I’m thinking about the bigger picture of curriculum-sharing, in large part thanks to conversations with the amazing Laura Hilliger about how we can both share and remix curriculum across more than one instance of Course in a Box.  The challenge is with remixed curriculum, which is essentially a new version – and whether it should live in a different place than the original repository fork.

My current thinking is that each Course in a Box instance should have its own curriculum repository, included as a git submodule, along with any other submodules it needs that are not unique to that instance.  This repo will contain all curriculum unique to that instance, including remixed versions of content from other repositories.  (IMHO) Remixed content should not live in the original fork, as you risk becoming increasingly out of sync with the original.

So that’s where I am right now, welcoming feedback and suggestions on our Mozilla Community Education platform (with gratitude to P2PU for making it possible).

Doug BelshawWeb Literacy Map v1.5

I’m delighted to announce that, as a result of a process that started back in late August 2014, the Mozilla community has defined the skills and competencies that make up v1.5 of the Web Literacy Map.


Visual design work will be forthcoming with the launch of teach.webmaker.org, but I wanted to share the list of skills and competencies as soon as possible:


EXPLORING

Reading the Web

Navigation

Using software tools to browse the web

  • Accessing the web using the common features of a browser
  • Using hyperlinks to access a range of resources on the web
  • Reading, evaluating, and manipulating URLs
  • Recognizing the common visual cues in web services
  • Exploring browser add-ons and extensions to provide additional functionality

Web Mechanics

Understanding the web ecosystem and Internet stack

  • Using and understanding the differences between URLs, IP addresses and search terms
  • Identifying where data is in the network of devices that makes up the Internet
  • Exporting, moving, and backing up data from web services
  • Explaining the role algorithms play in creating and managing content on the web
  • Creating or modifying an algorithm to serve content from around the web

Search

Locating information, people and resources via the web

  • Developing questions to aid a search
  • Using and revising keywords to make web searches more efficient
  • Evaluating search results to determine if the information is relevant
  • Finding real-time or time-sensitive information using a range of search techniques
  • Discovering information and resources by asking people within social networks

Credibility

Critically evaluating information found on the web

  • Comparing and contrasting information from a number of sources
  • Making judgments based on technical and design characteristics
  • Discriminating between ‘original’ and derivative web content
  • Identifying and investigating the author or publisher of web resources
  • Evaluating how purpose and perspectives shape web resources

Security

Keeping systems, identities, and content safe

  • Recommending how to avoid online scams and 'phishing’
  • Managing and maintaining account security
  • Encrypting data and communications using software and add-ons
  • Changing the default behavior of websites, add-ons and extensions to make web browsing more secure

BUILDING

Writing the web

Composing for the web

Creating and curating content for the web

  • Inserting hyperlinks into a web page
  • Identifying and using HTML tags
  • Embedding multimedia content into a web page
  • Creating web resources in ways appropriate to the medium/genre
  • Setting up and controlling a space to publish on the Web

Remixing

Modifying existing web resources to create something new

  • Identifying remixable content
  • Combining multimedia resources to create something new on the web
  • Shifting context and meaning by creating derivative content
  • Citing and referencing original content

Designing for the web

Enhancing visual aesthetics and user experiences

  • Using CSS properties to change the style and layout of a Web page
  • Demonstrating the difference between inline, embedded and external CSS
  • Improving user experiences through feedback and iteration
  • Creating device-agnostic web resources

Coding / Scripting

Creating interactive experiences on the web

  • Reading and explaining the structure of code
  • Identifying and applying common coding patterns and concepts
  • Adding comments to code for clarification and attribution
  • Applying a script framework
  • Querying a web service using an API

Accessibility

Communicating in a universally-recognisable way

  • Using empathy and awareness to inform the design of web content that is accessible to all users
  • Designing for different cultures which may have different interpretations of design elements
  • Comparing and exploring how different interfaces impact diverse users
  • Improving the accessibility of a web page through the design of its color scheme, structure/hierarchy and markup
  • Comparing and contrasting how different interfaces impact diverse web users

CONNECTING

Participating on the web

Sharing

Providing access to web resources

  • Creating and using a system to distribute web resources to others
  • Contributing and finding content for the benefit of others
  • Creating, curating, and circulating web resources to elicit peer feedback
  • Understanding the needs of audiences in order to make relevant contributions to a community
  • Identifying when it is safe to contribute content in a variety of situations on the web

Collaborating

Creating web resources with others

  • Choosing a Web tool to use for a particular contribution/ collaboration
  • Co-creating Web resources
  • Configuring notifications to keep up-to-date with community spaces and interactions
  • Working towards a shared goal using synchronous and asynchronous tools
  • Developing and communicating a set of shared expectations and outcomes

Community Participation

Getting involved in web communities and understanding their practices

  • Engaging in web communities at varying levels of activity
  • Respecting community norms when expressing opinions in web discussions
  • Making sense of different terminology used within online communities
  • Participating in both synchronous and asynchronous discussions

Privacy

Examining the consequences of sharing data online

  • Debating privacy as a value and right in a networked world
  • Explaining ways in which unsolicited third parties can track users across the web
  • Controlling (meta)data shared with online services
  • Identifying rights retained and removed through user agreements
  • Managing and shaping online identities

Open Practices

Helping to keep the web democratic and universally accessible

  • Distinguishing between open and closed licensing
  • Making web resources available under an open license
  • Contributing to an Open Source project
  • Advocating for an open web

Thanks goes to the dedicated Mozilla contributors who steadfastly worked on this over the last few months. They’re listed here. We salute you!

Any glaring errors? Typos? Let us know! You can file an issue on GitHub.


Questions? Comments? Try and put them in the GitHub repo, but you can also grab me on Twitter (@dajbelshaw) or by email (doug@mozillafoundation.org).

Michael KaplyCCK2 2.0.21 released

I've released a new version of the CCK2. New features include:

  • Setting a lightweight theme
  • Clearing preferences
  • Setting user preference values (versus default or locking)
  • More control over the CA trust string
  • Security devices are loaded at startup and fail gracefully (so multiple platforms can be specified)
  • Redesign of security devices dialog
  • Distribution info on about dialog is no longer bolded
  • Proxy information can be set in the preference page (if you want user values, not default/locked)
  • Better migration of bookmarks between versions
  • Better errors for cert download failures

Bugs fixed include:

  • International characters not working properly
  • CA trust string not being used
  • Unable to set the plugin.disable_full_page_plugin_for_types preference
  • Bookmarks not deleted when migrating from CCK Wizard

If you find bugs, please report them at cck2.freshdesk.com.

Priority support is given to folks with support subscriptions. If the CCK2 is important to your company, please consider purchasing one.