Nicholas Nethercote: Update on reviewing our data practices and Bugzilla development database disclosure

As we indicated in the post titled “MDN Disclosure”, we began several remediation measures, including a review of data practices surrounding user data. We have kicked off a larger project to better our practices around data, including with respect to the various non-Mozilla projects we support. We are implementing immediate fixes for any discovered issues across the organization, and are requiring each business unit to perform a review of their data practices and, if necessary, to implement additional protections based on that review.

As we proceeded through our broader remediation program, we discovered an incident that occurred in the Bugzilla community, one of the community projects we support. A member of the Bugzilla community discovered that development database dump files containing email addresses and encrypted passwords were posted on a publicly accessible server. We were alerted to this incident by a security bug filed by a contributor. It is important to note that the disclosure of this development database does not affect bugzilla.mozilla.org. Nevertheless, we continue to believe that the broader community would benefit from our increased focus on data practices, and we will therefore continue with our plan of including the Bugzilla project, as well as other community projects, in the data practices initiatives described above.

We are committed to continuing to improve our data practices to minimize the likelihood of these and other types of incidents.

Sincerely,

Mozilla Security

Jared Wein: New in Firefox Nightly: Recommended and easy theme switching through Customize mode

We shipped the Australis project with Firefox 29, but the Firefox team hasn’t stopped working on making Firefox the easiest browser to personalize. Firefox allows easy customizing through the new Customize mode, and now in Firefox Nightly people will find a quick and easy way to set the theme of the browser.

After entering Customize mode, a new menu is shown at the footer of the window. Clicking on this menu will show any installed themes as well as a list of five recommended themes.

These recommended themes were picked from the Add-ons for Firefox website by members of the Firefox User Experience team. All of the themes are licensed through Creative Commons. Some are CC-BY and others are CC-BY-SA.

Themes menu

Hovering over a theme in the menu will preview the appearance of the theme. Clicking on one of the themes will change the applied theme.

An applied theme

We haven’t figured out yet what the rotation will be for recommended themes. Any input on how often we should rotate them, or how we should go about putting together the next list, is greatly appreciated.

Full management of themes and add-ons is still available through the Add-ons Manager. Recommended themes that have not been applied will not show up in the Add-ons Manager. Once a recommended theme is applied, it will appear in the Add-ons Manager and can be uninstalled from there.


Tagged: firefox, planet-mozilla, usability

Pete Moore: Weekly review 2014-08-27

Highlights from this week

1. Play Store - armv6

The main goal of the last week has been to enable Fennec builds on the esr31 branch. Last week I updated the build process to use a different mechanism to generate the Play Store version code for armv6 APKs generated from the esr31 branch. This week has been about enabling these builds and the release builders.

This work is tracked in Bug 1040319 – Ensure that Fennec builds from mozilla-esr31 have a buildID to allow for armv6/Android 2.2 users to update to mozilla-esr31 apks.

2. Working with contributors

I’ve been working with kartikgupta0909 this week on IRC - hoping he is going to fix Bug 1020613 - vcs sync should only push tags/heads that have changed since last successful push for us.

I added metadata to the relevant bugs and created a Bugzilla search for them to appear in, which I linked from our contributions wiki page (I also created a sublink to the RelEng contributions page from our main Release Engineering page).

3. Other

The regular type of support work, which can be seen in the bugs below.

Goals for next week:

  • Return to l10n work
  • Prepare for RelEng arch meeting in September

Bugs I created this week:

Other bugs I updated this week:

Julien Vehent: Postgres multicolumn indexes to save the day

I love relational databases. Well designed, they are the most elegant and efficient way to store data, which is why MIG uses PostgreSQL, hosted by Amazon RDS.

This is the first time I have used RDS for anything more than a small website, and I am discovering its capabilities along the way. Over the past few days, I’ve been investigating performance issues: the database was running close to 100% CPU, and the number of DB connections maintained by the Go database package was varying a lot. Something was off.

I have worked as a junior Oracle & Postgres DBA in the past. In my limited experience, database performance problems are almost always due to bad queries or bad schemas. When you wrote the queries yourself, however, that is the last thing you blame, after spending hours looking for a bug in every other component outside of your control.

Eventually, I re-read my queries, and found one that looked bad enough:

// AgentByQueueAndPID returns a single agent that is located at a given queueloc and has a given PID
func (db *DB) AgentByQueueAndPID(queueloc string, pid int) (agent mig.Agent, err error) {
	err = db.c.QueryRow(`SELECT id, name, queueloc, os, version, pid, starttime, heartbeattime,
		status FROM agents WHERE queueloc=$1 AND pid=$2`, queueloc, pid).Scan(
		&agent.ID, &agent.Name, &agent.QueueLoc, &agent.OS, &agent.Version, &agent.PID,
		&agent.StartTime, &agent.HeartBeatTS, &agent.Status)
	if err == sql.ErrNoRows {
		// no matching agent: return the zero value without wrapping the sentinel error
		return
	}
	if err != nil {
		err = fmt.Errorf("Error while retrieving agent: '%v'", err)
		return
	}
	return
}
The query locates an agent using its queueloc and pid values, which are needed to properly identify a single agent. The problem is that neither queueloc nor pid has an index, resulting in a sequential scan of the table:
mig=> explain SELECT * FROM agents WHERE queueloc='xyz' AND pid=1234;
QUERY PLAN                                                   
--------------------------------------------------------------
 Seq Scan on agents  (cost=0.00..3796.20 rows=1 width=161)
   Filter: (((queueloc)::text = 'xyz'::text) AND (pid = 1234))
(2 rows)

This query is called ~50 times per second, and even with only 45,000 rows in the agents table, that is enough to burn all the CPU cycles on my RDS instance.

Postgres supports multicolumn indexes. The fix is simple enough: create an index on the columns queueloc and pid together.

mig=> create index agents_queueloc_pid_idx on agents(queueloc, pid);
CREATE INDEX

This results in an immediate, drastic reduction of the cost of the query, and of the CPU usage of the instance.


mig=> explain SELECT * FROM agents WHERE queueloc='xyz' AND pid=1234;
QUERY PLAN                                                     
---------------------------------------------------------------------------------------
 Index Scan using agents_queueloc_pid_idx on agents  (cost=0.41..8.43 rows=1 width=161)
   Index Cond: (((queueloc)::text = 'xyz'::text) AND (pid = 1234))
(2 rows)

(Graph: CPU usage of the MIG database instance before and after adding the index.)

Immediate performance gain for a limited effort. Gotta love Postgres!

Doug Belshaw: Soliciting feedback on v1.1 of the Web Literacy Map

The Web Literacy Map constitutes the skills and competencies that Mozilla and its community of stakeholders believe to be necessary to read, write and participate effectively on the web.


The Web Literacy Map currently stands at v1.1 but as I blogged recently, a lot has happened since we launched the first version at MozFest last year! That’s why we’re planning to update it to v2.0 by early January 2015.

I’ll be connecting with key people over the coming weeks to ask for a half-hour (recorded) conversation which will then be shared with the community. In the meantime we’d appreciate your feedback. Here’s what Atul Varma had to say:

So I feel like the weblit map is cool as it is, but as has been discussed previously, there are a number of areas that are important but cross-cut through existing competencies, rather than necessarily constituting their own competencies by themselves… what if we created a set of lenses through which the competencies could be viewed?

There are a couple of ways you can give your feedback:

Leaving your name means we can follow up with questions if necessary (for clarification, etc.). I look forward to hearing what you have to say! All opinions are welcome. Pull no punches. :-)


Questions? I’m @dajbelshaw on Twitter or you can email me: doug@mozillafoundation.org

Daniel Stenberg: Going to FOSDEM 2015

Yeps,

I’m going there and I know several friends are going too, so this is just my way of pointing this out to those of you who still haven’t made up your minds! There’s still a lot of time left, as this event is taking place in late January next year.

I intend to try to get a talk to present this time and I would love to meet up with more curl contributors and fans.


Byron Jones: happy bmo push day!

the following changes have been pushed to bugzilla.mozilla.org:

  • [1058479] move the “mozilla employees” warning on bugzilla::admin next to the submit button
  • [1058481] git commits should link to commitdiff not commit
  • [1056087] contrib/merge-users.pl fails if there are no duplicate bug_user_last_visit rows
  • [1058679] new bug API returning a ref where bzexport expects bug data
  • [1057774] bzAPI landing page gives a 404
  • [1056904] Add “Mentored by me” to MyDashboard
  • [1059085] Unable to update a product’s group controls: Can’t use string (“table”) as an ARRAY ref while “strict refs” in use
  • [1059088] Inline history can be shown out-of-order when two changes occur in the same second

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla

Monica Chew: Firefox 32 supports Public Key Pinning

Public Key Pinning helps ensure that people are connecting to the sites they intend. Pinning allows site operators to specify which certificate authorities (CAs) issue valid certificates for them, rather than accepting any one of the hundreds of built-in root certificates that ship with Firefox. If any certificate in the verified certificate chain corresponds to one of the known good certificates, Firefox displays the lock icon as normal.

Pinning helps protect users from man-in-the-middle attacks and rogue certificate authorities. When the root cert for a pinned site does not match one of the known good CAs, Firefox will reject the connection with a pinning error. This type of error can also occur if a CA mis-issues a certificate.

Pinning errors can be transient. For example, if a person is signing into a WiFi network, they may see an error like the one below when visiting a pinned site. The error should disappear if the person reloads the site after the WiFi access is set up.



Firefox 32 and above support built-in pins, which means that the list of acceptable certificate authorities must be set at build time for each pinned domain. Pinning is enforced by default. Sites may advertise their support for pinning with the Public Key Pinning Extension for HTTP, which we hope to implement soon. Pinned domains include addons.mozilla.org and Twitter in Firefox 32, and Google domains in Firefox 33, with more domains to come. That means that Firefox users can visit Mozilla, Twitter and Google domains more safely. For the full list of pinned domains and rollout status, please see the Public Key Pinning wiki.
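
For illustration only: the draft Public Key Pinning Extension for HTTP mentioned above lets a site advertise its own pins in a response header. The hash values below are placeholders rather than real pins, and Firefox 32 does not consume this header yet (only the built-in pin list is used):

  Public-Key-Pins: pin-sha256="<base64 SHA-256 of primary public key>";
                   pin-sha256="<base64 SHA-256 of backup public key>";
                   max-age=5184000; includeSubDomains

Each pin-sha256 value is a base64-encoded SHA-256 hash of a public key that must appear somewhere in the site’s certificate chain for the connection to be accepted.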

Thanks to Camilo Viecco for the initial implementation and David Keeler for many reviews!

Gervase Markham: Email Account Phishers Do Manual Work

For a while now, criminals have been breaking into email accounts and using them to spam the account’s address book with phishing emails or the like. More evil criminals will change the account password, and/or delete the address book and the email to make it harder for the account owner to warn people about what’s happened.

My mother recently received an email, purportedly from my cousin’s husband, titled “Confidential Doc”. It was a mock-up of a Dropbox “I’ve shared an item with you” email, with the “View Document” URL actually being http://proshow.kz/excel/OLE/PPS/redirect.php. This (currently) redirects to http://www.affordablewebdesigner.co.uk/components/com_wrapper/views/wrapper/tmpl/dropbox/, although it redirected to another site at the time. That page says “Select your email provider”, explaining “Now, you can sign in to dropbox with your email”. When you click the name of your email provider, it asks you for your email address and password. And boom – they have another account to abuse.

But the really interesting thing was that my mother, not being born yesterday, emailed back saying “I’ve just received an email from you. But it has no text – just an item to share. Is it real, or have you been hacked?” So far, so cautious. But she actually got a reply! It said:

Hi <her shortened first name>,
I sent it, It is safe.
<his first name>

(The random capital was in the original.)

Now, this could have been a very smart templated autoresponder, but I think it’s more likely that the guy stayed logged into the account long enough to “reassure” people and to improve his hit rate. That might tell us interesting things about the value of a captured email account, if it’s worth spending manual effort trying to convince people to hand over their creds.

Alex Vincent: An insightful statement from a mathematics course

I’m taking a Linear Algebra course this fall.  Last night, my instructor said something quite interesting:

“We are building a model of Euclidean geometry in our vector space. Then we can prove our axioms of geometry (as theorems).”

This would sound like technobabble to me even a week ago, but what he’s really saying is this:

“If you can implement one system’s basic rules or axioms in another system, you can build a model of that first system in the second.”
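
As a concrete illustration (my own example, not from the lecture): in the vector space $\mathbb{R}^2$, “points” are vectors, and the “line” through two distinct points $p$ and $q$ can be defined as

  L(p, q) = \{\, p + t\,(q - p) \;:\; t \in \mathbb{R} \,\}, \qquad p \neq q.

With that definition, the Euclidean axiom that two distinct points determine exactly one line is no longer an assumption; it becomes a theorem you can prove about these sets.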

Programmers and website builders build models of systems all the time, and unconsciously, we build on top of other systems. Think about that when you write JavaScript code: the people who implement JavaScript engines are building a model used by millions of people they’ll never meet. I suppose the same could be said of any modern programming language, compiler, transpiler or interpreter.

The beauty for those of us who work in the model is that we (theoretically) shouldn’t need to care what platform we run on. (In practice, there are differences, which is why we want platforms to implement standards, so we can concentrate on using the theoretical model we depend on.)

On the flip side, that also means that building and maintaining that fundamental system we build on top of has to be done very, very carefully.  If you’re building something for others to use (and chances are, when you’re writing software, you’re doing exactly that), you really have to think about how you want others to use your system, and how others might try to use your system in ways you don’t expect.

It’s really quite a profound duty that we take on when we craft software for others to use.

Chris AtLee: Gotta Cache 'Em All

TOO MUCH TRAFFIC!!!!

Waaaaaaay back in February we identified overall network bandwidth as a cause of job failures on TBPL. We were pushing too much traffic over our VPN link between Mozilla's datacentre and AWS. Since then we've been working on a few approaches to cope with the increased traffic while at the same time reducing our overall network load. Most recently we've deployed HTTP caches inside each AWS region.

Network traffic from January to August 2014

The answer - cache all the things!

Obligatory XKCD

Caching build artifacts

The primary target for caching was downloads of build/test/symbol packages by test machines from file servers. These packages are generated by the build machines and uploaded to various file servers. The same packages are then downloaded many times by different machines running tests. This was a perfect candidate for caching, since the same files were being requested by many different hosts in a relatively short timespan.

Caching tooltool downloads

Tooltool is a simple system RelEng uses to distribute static assets to build/test machines. While the machines do maintain a local cache of files, the caches are often empty because the machines are newly created in AWS. Having the files in local HTTP caches speeds up transfer times and decreases network load.

Results so far - 50% decrease in bandwidth

Initial deployment was completed on August 8th (end of week 32 of 2014). You can see by the graph above that we've cut our bandwidth by about 50%!

What's next?

There are a few more pieces of low-hanging fruit for caching. We have internal PyPI repositories that could benefit from caches. There's a long tail of other miscellaneous downloads that could be cached as well.

There are other improvements we can make to reduce bandwidth as well, such as moving uploads from build machines to be outside the VPN tunnel, or perhaps to S3 directly. Additionally, a big source of network traffic is doing signing of various packages (gpg signatures, MAR files, etc.). We're looking at ways to do that more efficiently. I'd love to investigate more efficient ways of compressing or transferring build artifacts overall; there is a ton of duplication between the build and test packages between different platforms and even between different pushes.

I want to know MOAR!

Great! As always, all our work has been tracked in a bug, and worked out in the open. The bug for this project is 1017759. The source code lives in https://github.com/mozilla/build-proxxy/, and we have some basic documentation available on our wiki. If this kind of work excites you, we're hiring!

Big thanks to George Miroshnykov for his work on developing proxxy.

Byron Jones: happy bmo push day!

the following changes have been pushed to bugzilla.mozilla.org:

  • [1058274] The input field for suggested reviewers when editing a component needs ‘multiple’ to be true for allowing for more than one username
  • [1051655] mentor field updated/reset when a bug is updated as a result of a change on a different bug (eg. see also, duplicate)
  • [1058355] bugzilla.mozilla.org leaks emails to logged out users in “Latest Activity” search URLs

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla

Jess Klein: Remix + Hack the Firefox Home page. No really, we want you to!

If you are a Firefox desktop user, you may have seen the Firefox default home page. This page contains a web search box that uses your default search engine, plus quick links to downloads, bookmarks, history, add-ons, sync and settings. Additionally, if you happened to have tabs open the last time you used the browser, you can restore them from the home page. We often share important news and updates underneath the search bar.


This is what I currently see at the Firefox default home page. Animated gifs FTW.


THE OPPORTUNITY
A few months back, Hive Labs (a new project within the Hive Learning Networks designed to explore the question “how do we use design to transform edupunk ethics into great products?”) was approached by the Mozilla Foundation Engagement team to brainstorm how the space could be used in an innovative way to educate Firefox users about the Maker Party. Maker Party is Mozilla's global campaign to teach the web, uniting educators, organizations and enthusiastic web users with hands-on learning and making. While I have to admit I have never really created something in the realm of owned media, I saw this as an interesting opportunity for Mozilla to show (vs. tell) what Maker Party is all about.


THE CHALLENGE


The team (which included creative individuals from many different projects across the Mozilla Foundation and the Corporation) immediately identified the opportunity space and came up with a few project requirements:
  • use the space in an interactive way to introduce the website visitor to web literacy skills
  • acknowledge that the visitor may not have ever seen code before, and understand that we do not know what web literacy skills they are coming to this space with
  • create something playful


THE SOLUTION


While we tossed around a few different ideas, the solution that we came up with was to create a Webmaker Goggles-like experience that lets the visitor see under the hood of the webpage.




After doing some initial sketches, we realized that we needed to define our learning objectives for the project. Normally this is fairly easy to do - you say that the learner will come away with the ability to remix a paragraph written in HTML and understand what p tags are, or something very basic. Here, the challenge was two-fold: 1. the webpage visitor did not identify as a learner and 2. as I mentioned before, they might have no knowledge of the fact that code is written in order to create a webpage. So, after several false starts, we came up with the goal of having the website visitor walk away understanding that if you look under the hood of a webpage, you will see it is made from code.


Initial sketches for the snippet included replacing the Firefox logo with an image


After the learning objective was defined, we had to interpret what that meant in terms of interaction design. I believe that the most effective way to empower a user is to put the tools in their hands to allow them to directly address and grapple with the thing that they might learn by tinkering with it themselves. We tried out a few different iterations on this. Above is a sketch where the visitor might get instructed to remix the page from a video. The idea was to have a person in the video describe what to do, and then the learner would use the goggles to swap out the video for an image or video of their choosing. This idea was fun, and had a lot of potential community localization opportunities. However, there was a risk that the user would just not click on the video, and miss out on all the fun.


Ultimately, we ended up utilising what Atul Varma calls “cruise control” —that’s where we model the behavior in order to encourage the site visitor to try it out themselves. It looks like someone is typing out all of the words on the screen.  We decided to focus on revealing a little CSS, because you can use real words to represent colors and seeing those colors immediately can have a visceral impact on the site visitor. Here is a screencast of the interaction:



** Update: You can see the actual interactive experience by going to the Firefox homepage or if you can't get to that, check it out here.  **

The crazy and kickass cast of characters who pulled this interactive off are:  Chris Lawrence, Atul Varma, Brian Brennan , Adam Lofting, Hannah Kane, Jean Collings, Mike Kelly, Chris More, Matt Thompson, Aki Rose Braun,  David Ascher, Geoffrey MacDougall, Brett Gaylor, John Slater, Eric Petitt, Mary Ellen Muckerman, Pete Scanlon and Andrea Wood.

We’re really excited about this project, as it represents one of the first interactive uses (if not THE first) of the space of the Firefox home page. We hope that as site visitors dip their toes into understanding the craft of the Web, they’ll be inspired to learn more through Webmaker and Maker Party.  Our ultimate goal is for people to feel empowered to become creators, not just consumers, of the Web.

Daniel Stenberg: Credits in the curl project

Friends!

When we receive patches, improvements, suggestions, advice or anything else that leads to a change in curl or libcurl, I make an effort to log the contributor’s name in association with that change. Ideally, I add a line in the commit message. We use “Reported-by: <full name>” quite frequently, but also other forms of “…-by: <full name>”, such as when there was an original patch by someone, or testing, and similar. It shouldn’t matter what the nature of the contribution is: if it helped us, it is a contribution and we say thanks!
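
As a made-up illustration (the subject line and names below are invented, not an actual curl commit), a commit message following this convention might end like so:

  http: fix a timeout mishandling in the multi interface

  Reported-by: Jane Doe
  Tested-by: John Smith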


I want all patch providers and all of us who have push rights to use this approach so that we give credit where credit is due. Giving credit is the only payment we can offer in this project and we should do it with generosity.

The green bars on the right show the results for the 2014 curl survey question about how good we are at giving credit in the project, where 5 is really good and 1 is really bad. Not too shabby, but I’d say we can do even better! (59% checked the top score, 15% checked 3.)

I have a script called contributors.sh that extracts all contributors since a tag (typically the previous release) and I use that to get a list of names to thank in the RELEASE-NOTES file for the pending curl release. Easy and convenient.

After every release (which means every 8th week) I then copy the list of names from RELEASE-NOTES into docs/THANKS. So all contributors get remembered and honored after having helped us in one way or another.

When there’s no name

When contributors don’t provide a real name but only a nick name like foobar123, user_5678 and so on, I tend to consider that a request to not include the person’s name anywhere, and hence I tend to not include it in THANKS or RELEASE-NOTES. This is also sometimes the result of me not always wanting to bother people by asking over and over again for their real name, in case they do want to be given proper and detailed credit for what they’ve provided to us.

Unfortunately, a notable share of all contributions we get to the project are provided by people “hiding” behind a made-up handle. I’m fine with that as long as it truly is what the helpers actually want.

So please, if you help us out, we will happily credit you, but please tell us your name!


Mozilla Release Management Team: Firefox 32 beta8 to beta9

  • 42 changesets
  • 78 files changed
  • 1175 insertions
  • 782 deletions

Extension  Occurrences
cpp        26
js         20
h          7
html       5
py         4
jsm        2
ini        2
xul        1
xml        1
json       1
in         1
cc         1
build      1

Module      Occurrences
browser     12
layout      10
content     9
toolkit     7
js          6
dom         6
security    5
services    4
netwerk     3
testing     2
config      2
tools       1
modules     1
memory      1
image       1
gfx         1
extensions  1

List of changesets:

Mike Hommey: Bug 1050029 - Improve Makefile checks for deprecated or moz.build variables. r=mshal a=NPOTB - 2a617532286d
Mike Shal: Bug 1047621 - Move link.py to config for importing expandlibs_exec; r=gps a=NPOTB - a09c51fcbd98
Mike Shal: Bug 1047621 - Have link.py import and call expandlibs_exec.py; r=gps a=NPOTB - bd02db1d22d0
Tim Taubert: Bug 1054815 - Fix browser_tabview_bug712203.js to not connect to google.com. r=smacleod, a=test-only - 2309c50ccc6c
Ryan VanderMeulen: No Bug - Change min expected assertions for test_playback_rate.html to 3. a=test-only - 1815786bfc6d
Ryan VanderMeulen: No Bug - Widen the allowable number of asserts in test_bug437844.xul to 19-21 so we don't have to keep adjusting it everytime something randomly perturbs it. a=test-only - 3f100f099542
Martijn Wargers: Bug 1024535 - Fix for failing video test on Windows 7. r=jwwang, a=test-only - d2714b6fc28d
David Rajchenbach-Teller: Bug 1024686 - Add missing return in Sqlite.jsm. r=mak, a=test-only - da78e23cbe3d
Martijn Wargers: Bug 1051783 - Fix test_pointerlock-api.html. r=Enn, a=test-only - 90b5e0b87666
Terrence Cole: Bug 1055219. r=terrence, a=abillings - 7c7145e95cb5
Wes Kocher: Backed out changeset 90b5e0b87666 (Bug 1051783) for an added assert a=backout - ec5427a8e674
Steven MacLeod: Bug 1035557 - Migrate crash checkpoints with the session so that we don't appear to crash during reset. r=ttaubert, a=lmandel - 8d583074b918
Monica Chew: Bug 1055670: Disable remote lookups (r=gcp,a=lmandel) - b554afc480aa
C.J. Ku: Bug 1055040 - Send mouse events base on canvas position and enable this test case on all B2G builds. r=ehsan, a=test-only - fadc34768c8b
Jared Wein: Bug 947574 - Switch browser_426329.js to use tasks to avoid intermittent failure. r=Gijs, a=test-only - 023ef0541072
Michael Wu: Bug 1045977 - Clear heap allocated volatile buffers. r=njn, r=seth, a=sledru - bff13e7445c5
Michal Novotny: Bug 1054425 - cache2: leak in CacheFileMetadata::WriteMetadata. r=jduell, a=sledru - 342c0c26e18d
Shane Caraveo: Bug 1047340 - Fix annotation of marks by using the browser url rather than cannonical url. r=jaws, a=lmandel - 54949d681a14
Aaron Klotz: Bug 1054813 - Add some missing MutexAutoLocks in nsZipReaderCache. r=mwu, a=lmandel - 50590d1557c4
Jim Chen: Bug 1013004 - Fix support for sampling intervals > 1 second. r=BenWa, a=lmandel - 61980c2f6177
Gregory Szorc: Bug 1055102 - Properly handle Unicode in Bagheera payloads. r=bsmedberg, a=lmandel - 4f18903bc230
Steve Workman: Bug 1054418 - Rewrite AppCacheUtils.jsm to use HTTP Cache v2 APIs. r=michal, a=sledru - fa7360fe9779
Michal Novotny: Bug 1054819 - Ensure that the dictionary is released on the main thread. r=ehsan, a=sledru - c06efff91ed3
Honza Bambas: Bug 1053517 - Enable the new HTTP cache during automation testing. r=jduell, a=test-only - f5d4b16203aa
Douglas Crosher: Bug 1013996 - irregexp: avoid unaligned accesses in ARM code. r=bhackett, a=lmandel - 093bfa0f1dee
Joel Maher: Bug 1056199 - Update talos on Fx32 to the latest revision. r=RyanVM, a=test-only - ec3e586813b5
Tim Taubert: Bug 1041527 - Ensure that about:home isn't the initial tab when opening new windows in tabview tests. r=ehsan, a=test-only - c340fefc0fe8
Marco Bonardo: Bug 1002439 - browser_bug248970.js is almost perma fail when run by directory on osx opt. r=mano, a=test-only - 0b44c271f755
Ryan VanderMeulen: Bug 906752 - Disable test_audioBufferSourceNodeOffset.html on debug builds. a=test-only - d94be43c729c
Seth Fowler: Bug 1024454 - Part 1: Eagerly propagate dirty bits so absolute children of table parts get reflowed reliably. r=dbaron, a=lmandel - 8e6b808eed02
Bill McCloskey: Bug 1053999 - Be more conservative in recursion checks before brain transplants. r=bholley, a=lmandel - ac551f43e2b4
Paul Adenot: Bug 1056032 - Make sure COM is initialized when trying to decode an mp3 using decodeAudioData. r=cpearce, a=lmandel - f17ade17a846
Paul Adenot: Bug 1056032 - Test that we can decode an mp3 using decodeAudioData. r=ehsan, a=lmandel - 53d300e03f5b
Markus Stange: Back out Bug 1000875 in order to fix the regression tracked in Bug 1011166. a=backout - 11a5306111d0
Peter Van der Beken: Bug 1036186 - Reset Migration wizard no longer skips the first step to choose a browser. r=smaug, a=lmandel - ac8864d8ecc0
Camilo Viecco: Bug 1047177 - Treat v4 certs as v3 certs (1/2). r=keeler. a=lmandel - 6049537c2510
Camilo Viecco: Bug 1047177 - Treat v4 certs as v3 certs. Tests (2/2). r=keeler. a=lmandel - 74a58e14d1d3
Bill McCloskey: Bug 1008107 - Allow SandboxPrivate to be null in sandbox_finalize. r=bz, a=lmandel - 85318a1536ee
Sami Jaktholm: Bug 1055499 - StyleEditor: Properly wait for the toolbox to be destroyed before ending test run and causing windows to leak. r=harth, a=test-only - 8f49d60bf5c9
Honza Bambas: Bug 1040086 - EV identifier missing when restoring session with HTTP cache v2. r=michal, a=lmandel - 33ea2d7e342e
Shane Caraveo: Bug 1056415 - Fix updating the marks buttons during tabchange. r=jaws, a=lmandel - 2f61f6e44a33
Shane Caraveo: Bug 1047316 - Fix docshell swapping bug by removing usage in marks (unecessary here). r=jaws, a=lmandel - 58eb677e55f3

David Humphrey: Introducing MakeDrive

I've been lax in my blogging for the past number of months (apologies). I've had my head down in a project that's required all of my attention. On Friday we reached a major milestone, and I gave a demo of the work on the weekly Webmaker call. Afterward David Ascher asked me to blog about it. I've wanted to do so for a while, so I put together a proper post with screencasts.

I've written previously about our idea of a web filesystem, and the initial work to make it possible. Since then we've greatly expanded the idea and implementation into MakeDrive, which I'll describe and show you now.

MakeDrive is a JavaScript library and server (node.js) that provides an offline-first, always available, syncing filesystem for the web. If you've used services like Dropbox or Google Drive, you already know what it does. MakeDrive allows users to work with files and folders locally, then sync that data to the cloud and other browsers or devices. However, unlike Dropbox or other similar services, MakeDrive is based purely on JavaScript and HTML5, and runs on the web. You don't install it; rather, a web application includes it as a script, and the filesystem gets created or loaded as part of the web page or app.

Because MakeDrive is a lower-level service, the best way to demonstrate it is by integrating it into a web app that relies on a full filesystem. To that end, I've made a series of short videos demonstrating aspects of MakeDrive integrated into a modified version of the Brackets code editor. I actually started this work because I want to make Brackets work in the browser, and one of the biggest pieces it is missing in the browser is a full-featured filesystem (side-note: Brackets can run in a browser just fine :). This post isn't specifically about Brackets, but I'll return to it in future posts to discuss how we plan to use it in Webmaker. MakeDrive started as a shim for Brackets-in-a-browser, but Simon Wex encouraged me to see that it could and should be a separate service, usable by many applications.

In the first video I demonstrate how MakeDrive provides a full "local," offline-first filesystem in the browser to a web app:

The code to provide a filesystem to the web page is as simple as var fs = MakeDrive.fs();. Applications can then use the same API as node.js' fs module. MakeDrive uses another of our projects, Filer, to provide the low-level filesystem API in the browser. Filer is a full POSIX filesystem (or wants to be, file bugs if you find them!), so you can read and write utf8 or binary data, work with files, directories, links, watches, and other fun things. Want to write a text file? It's done like so:

  var data = '<html>...';
  fs.writeFile('/path/to/index.html', data, function(err) {
    if(err) return handleError();
    // data is now written to disk
  });

The docs for Filer are lovingly maintained, and will show you the rest, so I won't repeat it here.

MakeDrive is offline-first, so you can read/write data, close your browser or reload the page, and it will still be there. Obviously having access to your filesystem outside the current web page is also desirable. Our solution was to rework Filer so it could be used in both the browser and node.js, allowing us to mirror filesystems over the network using Web Sockets. We use a rolling-checksum and differential algorithm (i.e., only sending the bits of a file that have changed) inspired by rsync; Dropbox does the same.
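
To give a rough idea of why a rolling checksum helps (this is an illustrative sketch of the general rsync-style technique, not MakeDrive's actual code): the weak checksum of a block can be updated in constant time as a window slides through a file, which makes it cheap to find blocks the other side already has.

  // Weak rsync-style checksum of bytes[start..end): two 16-bit sums packed
  // into one 32-bit value. (Illustration only.)
  function weakChecksum(bytes, start, end) {
    var a = 0, b = 0;
    for (var i = start; i < end; i++) {
      a = (a + bytes[i]) & 0xffff;
      b = (b + (end - i) * bytes[i]) & 0xffff;
    }
    return { a: a, b: b, sum: ((b << 16) | a) >>> 0 };
  }

  // Slide the window one byte to the right: only the byte leaving and the
  // byte entering are needed, so scanning a whole file for matching blocks
  // is cheap compared to recomputing every block from scratch.
  function roll(a, b, outByte, inByte, blockSize) {
    a = (a - outByte + inByte) & 0xffff;
    b = (b - blockSize * outByte + a) & 0xffff;
    return { a: a, b: b, sum: ((b << 16) | a) >>> 0 };
  }

In rsync's scheme, blocks whose weak checksum matches are then confirmed with a stronger hash, and only unmatched data needs to be sent over the wire.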

In this video I demonstrate syncing the browser filesystem to the server:

Applications and users work with the local browser filesystem (i.e., you read and write data locally, always), and syncing happens in the background. That means you can always work with your data locally, and MakeDrive tries to sync it to/from the server automatically. MakeDrive also makes a user's mirrored filesystem available remotely via a number of authenticated HTTP end points on the server:

  • GET /p/path/into/filesystem - serve the path from the filesystem provided like a regular web server would
  • GET /j/path/into/filesystem - serve the path as JSON (for APIs to consume)
  • GET /z/path/into/filesystem - export the path as export.zip (e.g., zip and send user data)

This means that a user can work on files in one app, sync them, and then consume them in another app that requires URLs. For example: edit a web component in one app and include and use it in another. When I started web development in the 1990s, you worked on files locally, FTP'ed them to a server, then loaded them via your web server and browser. Today we use services like gh-pages and github.io. Both require manual steps. MakeDrive automates the same sort of process, and targets new developers and those learning web development, making it a seamless experience to work on web content: your files are always "on the web."
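
For example (a hypothetical sketch: the host name and path below are invented, and you would need to be authenticated to the MakeDrive server), another app could pull a synced file straight off the /p/ endpoint like any other URL:

  // Fetch a file that was synced from the editor, using the /p/ endpoint
  // described above. Host and path are placeholders.
  var xhr = new XMLHttpRequest();
  xhr.open('GET', 'https://makedrive.example.org/p/projects/site/index.html');
  xhr.onload = function() {
    if (xhr.status === 200) {
      console.log('synced file contents:', xhr.responseText);
    }
  };
  xhr.send();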

MakeDrive supports multiple, simultaneous connections for a user. I might have a laptop, desktop, and tablet all sharing the same filesystem via a web app. This app can be running in any HTML5 compatible browser, app, or device. In this video I demonstrate syncing changes between different HTML5 browsers (Chrome, Firefox, and Opera):

Like Dropbox, each client will have its own "local" version of the filesystem, with one authoritative copy on the server. The server manages syncing to/from this filesystem so that multiple clients don't try to sync different changes to the same data at once. After one client syncs new changes, the server informs other clients that they can sync as well, which eventually propagates the changes across all connected clients. Changes can include updates to a file's data blocks, but also any change to the filesystem nodes themselves: renames, deleting a file, making a new directory, etc.

The code to make this syncing happen is very simple. As long as there is network, a MakeDrive filesystem can be connected to the server and synced. This can be a one-time thing, or the connection can be left open and incremental syncs can take place over the lifetime of the app: offline first, always syncing, always available.

Because MakeDrive allows the same user to connect multiple apps/devices at once, we have to be careful not to corrupt data or accidentally overwrite data when syncing. MakeDrive implements something similar to Dropbox's Conflicted Copy mechanism: if two clients change the same data in different ways, MakeDrive syncs the server's authoritative version, but also creates a new file with the local changes, and lets the user decide how to proceed.

This video demonstrates the circumstances by which a conflicted copy would be created, and how to deal with it:

Internally, MakeDrive uses extended attributes on filesystem nodes to determine automatically what has and hasn't been synced, and what is in a conflicted state. Conflicted copies are not synced back to the server, but remain in the local filesystem. The user decides how to resolve conflicts by deleting or renaming the conflicted file (i.e., renaming clears the conflict attribute).
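
As a hedged sketch of what resolving a conflict might look like from application code (the conflicted file name below is made up, and this relies only on Filer's node-style fs API):

  // Keep the local edits under a new name; per the behaviour described above,
  // renaming the conflicted file clears its conflict attribute so it will
  // sync normally again. File names are placeholders.
  fs.rename('/notes/todo (conflicted copy).txt', '/notes/todo-from-laptop.txt',
    function(err) {
      if (err) return handleError();
      // the renamed file is now a regular, syncable file
    });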

MakeDrive works today, but isn't ready for production quite yet. On Friday we reached the end of our summer work, where we tried hard to follow our initial mockups. Some of the ideas for where to take it next are very cool: if you have a web-first filesystem, you can do some interesting things that might not make sense in a traditional filesystem (i.e., when the scope of your files is limited to web content).

  • Having a filesystem in a web page naturally got me wanting to host web pages from web pages. I wrote nohost to experiment with this idea, an httpd in the browser that uses Blob URLs. It's really easy to load DOM elements from a web filesystem:

    var img = document.createElement('img');
    fs.readFile('/path/into/filesystem/image.png', function(err, data) {
      if(err) return handleError();

      // Create a Blob and wrap in URL Object.
      var blob = new Blob([data], {type: 'image/png'});
      var url = URL.createObjectURL(blob);
      img.src = url;
    });

  • Using this technique, we could create a small bootloader and store entire web apps in the filesystem. For example, all of Brackets loading from disk, with a tiny bootloader web page to get to the filesystem in appcache. This idea has been discussed elsewhere, and adding the filesystem makes it much more natural.
  • The current work on the W3C stream spec is really exciting, since we need a way to implement streaming data in and out of a filesystem, and therefore IndexedDB.
  • Having the ability to move IndexedDB to worker threads for background syncs (bug 701634), and into third-party iframes with postMessage to share a single filesystem instance across origins (bug 912202) would be amazing.
  • Mobile! Being able to sync filesystems in and out of mobile web apps is really exciting. We're going to help get MakeDrive working in Mobile Appmaker this fall.

If any of this interests you, please get in touch (@humphd) and help us. The next 6 months should be a lot of fun. I'll try to blog again before that, though ;)

Daniel Stenberg: My home setup

I work in my home office which is upstairs in my house, perhaps 20 steps from my kitchen and the coffee refill. I have a largish desk with room for a number of computers. The photo below shows the three-meter beauty. My two kids have their two machines on the left side while I use the right side of it for my desktop and laptop.

Daniel's home office

Many computers

The kids use my old desktop computer with a 20″ Dell screen and my old 15.6″ dual-core Asus laptop. My wife has her laptop downstairs and we have a permanent computer installed underneath the TV for media (an Asus VivoPC).

My desktop computer

I’m primarily developing C and C++ code and I’m frequently compiling rather large projects – repeatedly. I use a desktop machine for my ordinary development, equipped with a fairly powerful 3.5GHz quad-core Core i7 CPU. I have my OS, my home dir and all source code on an SSD, and a larger HDD for bigger and slower content. With ccache and friends, this baby can build Firefox really fast. I put the machine together from parts myself, as I couldn’t find a suitable one focused on horsepower but with a “normal” 2D graphics card that works fine with Linux. I use a Radeon HD 5450 based ASUS card, which works fine with fully open source drivers.

I have two basic 24 inch LCD monitors (Benq and Dell) both using 1920×1200 resolution. I like having lots of windows up; nothing runs full-screen. I use KDE as desktop and I edit everything in Emacs. Firefox is my primary browser. I don’t shut down this machine; it runs a few simple servers for private purposes.

My machines (and my kids’) all run Debian Linux, typically of the unstable flavor, allowing me to get new code reasonably fast.

My desktop keyboard is a Func KB-460, a mechanical keyboard with some funky extra candy such as red backlighting and two USB ports. Both my keyboard and my mouse are wired, not wireless, to take away the need for batteries or recharging in this environment. My mouse is a basic and old Logitech MX 310.

I have a crufty old USB headset with a mic that works fine for hangouts and for listening to music when the rest of the family is home. I have a Logitech webcam sitting on the screen too, but I hardly ever use it for anything.

When on the move

I sometimes need to move around and work from other places, going to conferences or to our regular Mozilla work weeks. Hence I also have a laptop that is powerful enough to build Firefox in a sane amount of time: a Lenovo ThinkPad W540 with a 2.7GHz quad-core Core i7, 16GB of RAM and 512GB of SSD. It has the most annoying touch pad on it. I don’t like that it doesn’t have explicit buttons, so for example both-clicking (to simulate a middle-click), as when pasting text in X11, is virtually impossible.

On this machine I also run a VM with Windows 7 and the associated development environment installed, so I can build and debug Firefox for Windows on it.

I have a second portable: a small and lightweight 10.1″ netbook, an Eeepc S101, that I’ve been using when I go somewhere just to do presentations. Recently, though, I’ve started to simply use my primary laptop even for those occasions – primarily because the netbook is too slow to do anything else on.

I do video conferences a couple of times a week and we use Vidyo for that. Its Linux client is shaky to say the least, so I tend to use my Nexus 7 tablet for it since the Vidyo app at least works decently on that. It also allows me to quite easily change location when necessary, which sometimes happens since my meetings tend to occur in the evenings and there are also varying amounts of “family activities” going on!

Backup

For backup, I have a Synology DS211j NAS equipped with 2TB of disk in a RAID, stashed downstairs on the wired in-house gigabit ethernet. I run an rsync job every night that syncs the important stuff to the NAS, and a second rsync that also mirrors relevant data over to a friend’s house, just in case something terribly bad should go down. My NAS backup has already saved me at least once.

Printer

Next to the NAS downstairs is the house printer, an HP Officejet 8500A, also attached to the gigabit network even though it has a wifi interface of its own. I just like increasing reliability by having the “fixed services” in the house on the wired network.

The printer also has scanning capability, which has actually come in handy several times. The thing works nicely from my Linux machines as well as my wife’s Windows laptop.

Internet

I have fiber going directly into my house. It is still “just” a 100/100 connection at the other end of the fiber, since at the time I installed it they didn’t yet have equipment to deliver beyond 100 megabit in my area. I’m sure I’ll upgrade this to something more impressive in the future, but this is a pretty snappy connection already. I also have just a few milliseconds of latency to my primary servers.

Having the fast uplink is perfect for doing good remote backups.

Router and wifi

I have a lowly D-Link DIR 635 router and wifi access point providing wifi for the 2.4GHz and 5GHz bands and gigabit speed on the wired side. It was dead cheap and it just works. It NATs my traffic and port forwards some ports through to my desktop machine.

The router itself can also update the dyndns info, which ultimately allows me to use a fixed name for my home machine even without a fixed IP.

Frequent wifi users in the household include my wife’s laptop, the TV computer and all our phones and tablets.

Telephony

When I installed the fiber I gave up the copper connection to my home, and since then I use IP telephony for the “land line”: basically a little box (a Ping Communication Voice Catcher 201E) that translates IP to old phone tech, and I keep using my old DECT phone. Basically only our parents still call this number, and it has been useful to have the kids use it for outgoing calls up until they’ve gotten their own mobile phones.

It doesn’t cost very much, but the usage is dropping over time so I guess we’ll just give it up one of these days.

Mobile phones and tablets

I have a Nexus 5 as my daily phone. I also have a Nexus 7 and Nexus 10 that tend to be used by the kids mostly.

I have two Firefox OS devices for development/work.

Kaustav Das Modak: Dear smartphone user, it is time to unlearn

Dear smartphone user, You have been used to sophisticated features and cluttered interfaces for a long time. Remember those days when you had used a smartphone for the first time? Do you recollect that extra cognitive overload you had to face to figure out what each gesture does? Why were there so many round and […]

Zack Weinberg: The literary merit of right-wing SF

The results are in for the 2014 Hugo Awards. I’m pleased with the results in the fiction categories—a little sad that “The Waiting Stars” didn’t win its category, but it is the sort of thing that would not be to everyone’s taste.

Now that it’s all over, people are chewing over the politics of this year’s shortlist, particularly the infamous “sad puppy” slate, over on John Scalzi’s blog, and this was going to be a comment there, but I don’t seem to be able to post comments there, so y’all get the expanded version here instead. I’m responding particularly to this sentiment, which I believe accurately characterizes the motivation behind Larry Correia’s original posting of his slate, and the motivations of those who might have voted for it:

I too am someone who likes, and dislikes, works from both groups of authors. However, only one group ever gets awards. The issue is not that you cannot like both groups, but that good works from the PC crowd get rewarded and while those from authors that have been labeled “unacceptable” are shunned, and that this happens so regularly, and with such predictability that it is obviously not just quality being rewarded.

― “BrowncoatJeff”

I cannot speak to the track record, not having followed genre awards closely in the past. But as to this year’s Hugo shortlist, it is my considered opinion that all the works I voted below No Award (except The Wheel of Time, whose position on my ballot expresses an objection to the eligibility rules) suffer from concrete, objective flaws on the level of basic storytelling craft, severe enough that they did not deserve a nomination. This happens to include Correia’s own novels, and all the other works of fiction from his slate that made the shortlist. Below the fold, I shall elaborate.

(If you’re not on board with the premise that there is such a thing as objective (observer-independent) quality in a work of art, and that observers can evaluate that independently from whether a work suits their own taste or agrees with their own politics, you should probably stop reading now. Note that this is not the same as saying that I think all Hugo voters should vote according to a work’s objective quality. I am perfectly fine with, for instance, the people who voted “Opera Vita Aeterna” below No Award without even cracking it open—those people are saying “Vox Day is such a despicable person that no matter what his literary skills are, he should not receive an award for them” and that is a legitimate critical stance. It is simply not the critical stance I am taking right now.)

Let me first show you the basic principles of storytelling craft that I found lacking. I did not invent them; similar sentiments can be found in, for instance, “Fenimore Cooper’s Literary Offenses,” the Turkey City Lexicon, Ursula LeGuin’s Steering the Craft, Robert Schroeck’s A Fanfic Writer’s Guide To Writing, and Aristotle’s Poetics. This formulation, however, is my own.

  1. Above all, a story must not be boring. The reader should care, both about “what happens to these people,” and about the ultimate resolution to the plot.
  2. Stories should not confuse their readers, and should enable readers to anticipate—but not perfectly predict—the consequences of each event.
  3. The description, speech, and actions of each character in a story should draw a clear, consistent picture of that character’s personality and motivations, sufficient for the reader to anticipate their behavior in response to the plot.
  4. Much like music, stories should exhibit dynamic range in their pacing, dramatic tension, emotional color, and so forth; not for nothing is “monotony” a synonym for “tedium.”
  5. Style, language, and diction should be consistent with the tone and content of the story.
  6. Rules 2–5 can be broken in the name of Art, but doing so demands additional effort and trust from the reader, who should, by the end of the story, believe that it was worth it.

With that in hand, I shall now re-review the works that didn’t deserve (IMNSHO) to make the shortlist, in order from most to least execrable.

Opera Vita Aeterna

This is textbook bad writing. The most obvious problem is the padded, purple, monotonously purple prose, which obviously fails point 4, and less obviously fails point 5 because the content isn’t sufficiently sophisticated to warrant the style. The superficial flaws of writing are so severe that it’s hard to see past them, but if you do, you discover that it fails all the other points as well, simply because there wasn’t enough room, underneath all of those purple words, for an actual plot. It’s as if you tried to build a building entirely out of elaborate surface decorations, without first putting up any sort of structural skeleton.

The Butcher of Khardov and Meathouse Man

These are both character studies, which is a difficult mode: if you’re going to spend all of your time exploring one character’s personality, you’d better make that one character interesting, and ideally also fun to be around. In these cases, the authors were trying for tragically flawed antiheroes and overdid the anti-, producing characters who are nothing but flaw. Their failures are predictable; their manpain, tedious; their ultimate fates, banal. It does not help that they are, in many ways, the same extruded antihero product that Hollywood and the comic books have been foisting on us for going on two decades now, just taken up to 11.

Khardov also fails on point 2, being told out of order for no apparent reason, causing the ending to make no sense. Specifically, I have no idea whether the wild-man-in-the-forest scenes are supposed to occur before or after the climactic confrontation with the queen, and the resolution is completely different depending on which way you read it.

Meathouse Man was not on Correia’s slate. It’s a graphic novel adaptation of a story written in the 1970s, and it makes a nice example of point 6. When it was originally written, a story with a completely unlikable protagonist, who takes exactly the wrong lessons from the school of hard knocks and thus develops from a moderate loser into a complete asshole, would perhaps have been … not a breath of fresh air, but a cold glass of water in the face, perhaps. Now, however, it is nothing we haven’t seen done ten billion times, and we are no longer entertained.

The Chaplain’s Legacy and The Exchange Officers

These are told competently, with appropriate use of language, credible series of events, and so on. The plots, however, are formula, the characters are flat, the ideas are not original, and two months after I read them, I’m hard pressed to remember enough about them to criticize!

I may be being more harsh on Torgerson than the median voter, because I have read Enemy Mine and so I recognize The Chaplain’s Legacy as a retread. (DOES NO ONE READ THE CLASSICS?!) Similarly, The Exchange Officers is prefigured by hundreds of works featuring the Space Marines. I don’t recall seeing remotely piloted mecha before, but mecha themselves are cliché, and the “remotely piloted” part sucks most of the suspense out of the battle scenes, which is probably why it hasn’t been done.

The Grimnoir Chronicles

Correia’s own work, this falls just short of good, but in a way that is more disappointing than if it had been dull and clichéd. Correia clearly knows how to write a story that satisfies all of the basic storytelling principles I listed. He is never dull. He comes up with interesting plots and gets the reader invested in their outcome. He’s good at set pieces; I can still clearly envision the giant monster terrorizing Washington DC. He manages dramatic tension effectively, and has an appropriate balance between gripping suspense and calm quiet moments. And he is capable of writing three-dimensional, nuanced, plausibly motivated, sympathetic characters.

It’s just that the only such character in these novels is the principal villain.

This is not to say that all of the other characters are flat or uninteresting; Sullivan, Faye, and Francis are all credible, and most of the other characters have their moments. Still, it’s the Chairman, and only the Chairman, who is developed to the point where the reader feels fully able to appreciate his motivations and choices. I do not say sympathize; the man is the leader of Imperial Japan circa 1937, and Correia does not paper over the atrocities of that period—but he does provide more justification for them than anyone had in real life. There really is a cosmic horror incoming, and the Chairman really does think this is the only way to stop it. And that makes for the best sort of villain, provided you give the heroes the same depth of characterization. Instead, as I said last time, the other characters are all by habit unpleasant, petty, self-absorbed, and incapable of empathizing with people who don’t share their circumstances. One winds up hoping for enough reverses to take them down a peg. (Which does not happen.)

Conclusion

Looking back, does any of that have anything to do with any of the authors’ political stances, either in the real world, or as expressed in their fiction? Not directly, but I do see a common thread which can be interpreted to shed some light on why “works from the PC crowd” may appear to be winning a disproportionate number of awards, if you are the sort of person who uses the term “PC” unironically. It’s most obvious in the Correia, being the principal flaw in that work, but it’s present in all the above.

See, I don’t think Correia realized he’d written all of his Good Guys as unpleasant, petty, and self-absorbed. I think he unconsciously assumed they didn’t need the same depth of character as the villain did, because of course the audience is on the side of the Good Guys, and you can tell who the Good Guys are from their costumes (figuratively speaking). It didn’t register on him, for instance, that a captain of industry who’s personally unaffected by the Great Depression is maybe going to come off as greedy, not to mention oblivious, for disliking Franklin Delano Roosevelt and his policies, even if the specific policy FDR was espousing on stage was a genuinely bad idea because of its plot consequences. In fact, that particular subplot felt like the author had his thumb on the scale to make FDR look bad—but the exact same subplot could have been run without giving any such impression, if the characterization had been more thorough. So if you care about characterization, you’re not likely to care for Correia’s work or anything like it. Certainly not enough to shortlist it for an award honoring the very best the genre has to offer.

Now, from out here on my perch safely beyond the Overton window, “politically correct,” to the extent it isn’t a vacuous pejorative, means “something which jars the speaker out of his daydream of the lily-white suburban 1950s of America (possibly translated to outer space), where everything was pleasant.” (And I do mean his.) Thing is, that suburban daydream is, still, 60 years later, in many ways the default setting for fiction written originally in English. Thanks to a reasonably well-understood bug in human cognition, it takes more effort to write fiction which avoids that default. It requires constant attention to ensure that presuppositions and details from that default are not slipping back in. And most of that extra effort goes into—characterization. It takes only a couple sentences to state that your story is set in the distant future Imperium of Man, in which women and men alike may serve in any position in the military and are considered completely equal; it takes constant vigilance over the course of the entire novel to make sure that you don’t have the men in the Imperial Marines taking extra risks to protect from enemy fire those of their fellow grunts who happen to be women. Here’s another, longer example illustrating how much work can be involved.

Therefore, it seems to me that the particular type of bad characterization I disliked in the above works—writing characters who, for concrete in-universe reasons, are unlikable people, and then expecting the audience to cheer them on anyway because they’ve been dressed up in These Are The Heroes costumes—is less likely to occur in writing that would get labeled “works from the PC crowd.” The authors of such works are already putting extra effort into the characterization, and are therefore less likely to neglect to write heroes who are, on the whole, likable people whom the reader wishes to see succeed.

    Arun K. RanganathanFAQtechism

    What is this?

    Questions and answers, because my friends and I have been doing a lot of asking and answering, in unequal measure, with more asking than answering. Because I’ve been distraught by the incessant stream of reductionist observations about Mozilla, each one like being punched in the heart with the hard fists of righteousness and conviction. Because questions and answers once brought me peace, when I was much younger.

    Who are you?

    A man with no titles. Formerly, one of the first technology evangelists for Mozilla, when it was still a Netscape project. A Mozillian.

    Who is Brendan Eich?

    A man with titles. An inventor. A unifier. A divider. A Mozillian. A friend.

    What has Mozilla done?

    From humble and unlikely beginnings, Mozilla entered a battle seemingly already decided against it, and gradually unseated the entrenched incumbent, user by user by user, through campaigns that were traditional and innovative, and increased consciousness about the open web. It became a beloved brand, standing firmly for open source and the open web, championing the Internet, sometimes advocating politically for these convictions. It relied, and continues to rely, on a community of contributors from all over the world.

    What has Brendan done?

    Many things intrinsic to the open web; he helped shape technologies used by countless numbers of users, including to write and read this very post. Also, a hurtful and divisive thing based on a conviction now at odds with the law of the land, and at odds with my own conviction: in 2008, he donated $1000 to California Proposition 8, which put on a statewide ballot a proposition to define marriage as strictly between a man and a woman in the state, thus eliminating gay marriage, and calling into question pre-existing gay marriages. The amount donated was enough to oblige him to list his employer — Mozilla — for legal reasons.

    What are my convictions?

    That any two people in love should be able to marry, regardless of their genders; that the marriage of two such people affords all legal protections intrinsic to the institution of marriage, including immigration considerations, estate planning considerations, and visitation rights. That this is in fact a civil right. That matters of civil rights should not be put before a population to vote on as a statewide proposition; in short, that exceptions to the Equal Protection Clause cannot be decided by any majority, since it is there to protect minorities from majorities (cf. Justice Moreno).

    How do such convictions become law?

    Often, by fiat. Sometimes, even when the battle is already seemingly decided (with the entrenched weight of history behind it, an incumbent), one state at a time. State by State by State (by States), using campaigns that are traditional and innovative, to increase consciousness about this as a civil right.

    How should people with different convictions disagree?

    Bitterly, holding fast to conviction, so that two individuals quarrel ceaselessly till one yields to the other, or till one retreats from the other, unable to engage any longer.

    For real?

    Amicably, by setting aside those convictions that are unnecessary to the pursuit of common convictions I share with other Mozillians, like the open web. Brendan embodied the Mozilla project; he would have made a promising CEO. My conviction can be governed by reason, and set aside, especially since the issue is decided by courts, of both law and public opinion. His view, only guessable by me, seems antediluvian. Times have changed. I can ask myself to be governed by reason. We need never touch this question.

    But I can do this because my conviction about the law, stated before, has never been tested personally by the specter of suicide or the malevolence of bullying; marriage equality is the ultimate recognition, destigmatizing lifestyles, perhaps helping with suicide and bullying. And, my inability to marry has never disrupted my life or my business. I cannot ask others to lay aside convictions, without recognizing the sources of pain, and calling them out. (Here, Brendan made commitments, and Mozilla did too).

    What will the future hold?

    Brendan has said his non serviam but calls out a mission which I think is the right one: privacy, also a civil right, especially privacy from governments; continued user advocacy; data liberation; a check on walled gardens (and an end to digital sharecropping); the web as mobile platform, even though it is under threat in the mobile arena, the battle seemingly decided, the entrenched incumbent slightly less obvious. This latter — mobile — is reminiscent of the desktop world in 1998. It’s the same story, with smaller machines. Perhaps the same story will have to be told again. I’d like Mozilla to be a major player in that story, just as it always has been a major player on the web. And I’ll be looking forward to seeing what Brendan does next. I’ll miss him as part of Mozilla. This has been crushing.

    Coda: what have wise ones said?

    “I don’t know why we’re talking about tolerance to begin with. We should be at acceptance and love. What’s this tolerance business? What are you tolerating, backpain? ‘I’ve been tolerating backpain, and the gay guy at work?’” — Hari Kondabalu (watch him on Letterman). And blog posts: Mozilla is not Chick-Fil-A; Thinking about Mozilla; The Hounding of a Heretic (Andrew Sullivan); a few others, discussing what a CEO should do, and what qualities a CEO should possess, which are out there for you to discover.

    Will Kahn-GreeneDennis v0.5 released! New lint rules, new template linter, bunch of fixes, and now a service!

    What is it?

    Dennis is a Python command line utility (and library) for working with localization. It includes:

    • a linter for finding problems in strings in .po files like invalid Python variable syntax which leads to exceptions
    • a template linter for finding problems in strings in .pot files that make translators' lives difficult
    • a statuser for seeing the high-level translation/error status of your .po files
    • a translator for strings in your .po files to make development easier

    v0.5 released!

    Since the last release announcement, there have been a handful of new lint rules added:

    • W301: Translation consists of just white space
    • W302: The translation is the same as the original string
    • W303: There are discrepancies in the HTML between the original string and the translated string

    Additionally, there's a new template linter for your .pot files which can catch things like:

    • W500: Strings with variable names like o, O, 0, l, 1 which can be hard to read and are often replaced with a similar looking letter by the translator.
    • W501: One-character variable names which don't give translators enough context about what's being translated.
    • W502: Multiple unnamed variables which can't be reordered because the order the variables are expanded is specified outside of the string.

    Dennis in action

    Want to see Dennis in action, but don't want to install Dennis? I threw it up as a service, though it's configured for SUMO: http://dennis-sumo.paas.allizom.org/

    Note

    I may change the URL and I might create a SUMO-agnostic version. If you're interested, let me know.

    Where to go for more

    For more specifics on this release, see here: http://dennis.readthedocs.org/en/v0.4/changelog.html#version-0-4-may-1st-2014

    Documentation and quickstart here: http://dennis.readthedocs.org/en/v0.4/

    Source code and issue tracker here: https://github.com/willkg/dennis

    Source code and issue tracker for Denise (Dennis-as-a-service): https://github.com/willkg/denise

    3 out of 5 summer interns use Dennis to improve their posture while pranking their mentors.

    Marco ZeheMaintenance complete

    A day later than originally announced, I undertook the much needed maintenance. The site should be much faster now that I've moved it to a more performant web host. I also consolidated all my blogs into a multi-site WordPress installation, which should make it much easier for me in the future to create little blogs for side projects, so I don’t have to use 3rd party services. You know, for the class and such. ;)

    I also use a more modern theme now, using the excellent accessible and responsive Simone theme. This should make it much more reader-friendly. And it, of course, works great with screen readers, too!

    So, enjoy! And I will have more news to share about Mozilla and web accessibility related stuff as always!

    Andy McKayPrivate School

    I've been a bit out of touch recently with holidays, so I'm catching up on the BC teachers situation and what looks like an attempt by the BC Government to destroy public education.

    This week the Minister launched a website giving "some of the options available to you". So what are my options? No public school system? Let's try private school. Here's a preliminary search.

    My daughters are aged 8 and 10 and enjoy an excellent education in the public school system in French Immersion in North Vancouver, despite the Government. I also consider the school an excellent part of the local community.

    Any schooling would ideally be in French and must definitely be non-religious in nature. In North and West Vancouver there are the following private schools and costs to us:

    • Lions Gate Christian Academy: "Moral & Spiritual Development from a Christian Perspective". Cost: $8,720. Distance: 3.8km. French Immersion: No.
    • Brockton School: "a rigorous academic education is balanced by arts and athletics in an environment where merit rather than materialism is the core value". Cost: $29,700. Distance: 10.8km. French Immersion: No.
    • Collingwood School: "Preparing people to thrive in meaningful lives". Cost: Not stated. Distance: 19.2km. French Immersion: No.
    • Mulgrave School: "a caring and supportive school community with high expectations and high levels of achievement". Cost: $35,940. Distance: 20.3km. French Immersion: No.
    • Ecole Francaise Internationale de Vancouver: "where critical thought processes and inter-cultural communication are the determining factors". Cost: $28,500. Distance: 10.4km. French Immersion: Yes.
    • The Vancouver Waldorf School: "educating hearts and minds". Cost: $28,240. Distance: 9.3km. French Immersion: No.

    The highly questionable (if not laughable) Fraser Institute ranking ranks only a couple of these schools: Sherwood Park is just below the average, and the West Vancouver schools Mulgrave and Collingwood are well above it.

    Note that although I searched for schools on the North Shore, none of these are "local" and we would suffer a disconnect from our local community. Only one provides French Immersion. Lions Gate Christian Academy is definitely not going to happen.

    Supposing I can get my children into one of these schools, it would drain my family's resources by somewhere from $28k to $36k at the minimum. The median total income before tax in BC is $71k (source); after tax of 40%, let's say $43k. One of those private schools would consume 65% to 83% of that after-tax income.

    As an extra kicker, since my wife is a teacher in the public school system, we have less money this year.

    Do you have some realistic options for my family?

    Ray KiddyTo encourage civic participation and voting (in US)

    https://petitions.whitehouse.gov/petition/create-national-holidays-voting-consolidating-other-holidays-honor-civic-engagement/wx7xMFCR

    Please consider whether this suggestion makes sense.

    Matt BrubeckLet's build a browser engine! Part 4: Style

    Welcome back to my series on building your own toy browser engine. If you’re just tuning in, you can find the previous episodes here:

    This article will cover what the CSS standard calls assigning property values, or what I call the style module. This module takes DOM nodes and CSS rules as input, and matches them up to determine the value of each CSS property for any given node.

    This part doesn’t contain a lot of code, since I’ve left out all the really complicated parts. However, I think what’s left is still quite interesting, and I’ll also explain how some of the missing pieces can be implemented.

    The Style Tree

    The output of robinson’s style module is something I call the style tree. Each node in this tree includes a pointer to a DOM node, plus its CSS property values:

    /// Map from CSS property names to values.
    type PropertyMap = HashMap<String, Value>;
    
    /// A node with associated style data.
    struct StyledNode<'a> {
        node: &'a Node, // pointer to a DOM node
        specified_values: PropertyMap,
        children: Vec<StyledNode<'a>>,
    }
    

    What’s with all the 'a stuff? Those are lifetimes, part of how Rust guarantees that pointers are memory-safe without requiring garbage collection. If you’re not working in Rust you can ignore them; they aren’t critical to the code’s meaning.

    We could add new fields to the dom::Node struct instead of creating a new tree, but I wanted to keep style code out of the earlier “lessons.” This also gives me an opportunity to talk about the parallel trees that inhabit most rendering engines.

    A browser engine module often takes one tree as input, and produces a different but related tree as output. For example, Gecko’s layout code takes a DOM tree and produces a frame tree, which is then used to build a view tree. Blink and WebKit transform the DOM tree into a render tree. Later stages in all these engines produce still more trees, including layer trees and widget trees.

    The pipeline for our toy browser engine will look something like this, after we complete a few more stages:

    In my implementation, each node in the DOM tree has exactly one node in the style tree. But in a more complicated pipeline stage, several input nodes could collapse into a single output node. Or one input node might expand into several output nodes, or be skipped completely. For example, the style tree could exclude elements whose display property is set to 'none'. (Instead this will happen in the layout stage, because my code turned out a bit simpler that way.)

    Selector Matching

    The first step in building the style tree is selector matching. This will be very easy, since my CSS parser supports only simple selectors. You can tell whether a simple selector matches an element just by looking at the element itself. Matching compound selectors would require traversing the DOM tree to look at the element’s siblings, parents, etc.

    fn matches(elem: &ElementData, selector: &Selector) -> bool {
        match *selector {
            Simple(ref simple_selector) => matches_simple_selector(elem, simple_selector)
        }
    }
    

    To help, we’ll add some convenient ID and class accessors to our DOM element type. The class attribute can contain multiple class names separated by spaces, which we return in a hash table.

    impl ElementData {
        fn get_attribute(&self, key: &str) -> Option<&String> {
            self.attributes.find_equiv(&key)
        }
    
        fn id(&self) -> Option<&String> {
            self.get_attribute("id")
        }
    
        fn classes(&self) -> HashSet<&str> {
            match self.get_attribute("class") {
                Some(classlist) => classlist.as_slice().split(' ').collect(),
                None => HashSet::new()
            }
        }
    }
    

    To test whether a simple selector matches an element, just look at each selector component, and return false if the element doesn’t have a matching class, ID, or tag name.

    fn matches_simple_selector(elem: &ElementData, selector: &SimpleSelector) -> bool {
        // Check type selector
        if selector.tag_name.iter().any(|name| elem.tag_name != *name) {
            return false;
        }
    
        // Check ID selector
        if selector.id.iter().any(|id| elem.id() != Some(id)) {
            return false;
        }
    
        // Check class selectors
        let elem_classes = elem.classes();
        if selector.class.iter().any(|class| !elem_classes.contains(&class.as_slice())) {
            return false;
        }
    
        // We didn't find any non-matching selector components.
        return true;
    }
    

    Rust note: This function uses the any method, which returns true if an iterator contains an element that passes the provided test. This is the same as the any function in Python (or Haskell), or the some method in JavaScript.

    When comparing two rules that match the same element, we need to use the highest-specificity selector from each match. Because our CSS parser stores the selectors from most- to least-specific, we can stop as soon as we find a matching one, and return its specificity along with a pointer to the rule.

    /// A single CSS rule and the specificity of its most specific matching selector.
    type MatchedRule<'a> = (Specificity, &'a Rule);
    
    /// If `rule` matches `elem`, return a `MatchedRule`. Otherwise return `None`.
    fn match_rule<'a>(elem: &ElementData, rule: &'a Rule) -> Option<MatchedRule<'a>> {
        // Find the first (highest-specificity) matching selector.
        rule.selectors.iter().find(|selector| matches(elem, *selector))
            .map(|selector| (selector.specificity(), rule))
    }
    

    To find all the rules that match an element we call filter_map, which does a linear scan through the style sheet, checking every rule and throwing out ones that don’t match. A real browser engine would speed this up by storing the rules in multiple hash tables based on tag name, id, class, etc.

    /// Find all CSS rules that match the given element.
    fn matching_rules<'a>(elem: &ElementData, stylesheet: &'a Stylesheet) -> Vec<MatchedRule<'a>> {
        stylesheet.rules.iter().filter_map(|rule| match_rule(elem, rule)).collect()
    }
    
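
    As a concrete illustration of that last point, here is a hypothetical sketch, not part of robinson and written in current Rust syntax rather than the 2014 dialect used in this article, of a coarse index that buckets rules by each selector's tag name, with a catch-all bucket for selectors that have no tag name. It assumes the Stylesheet, Rule, and Simple/SimpleSelector types shown elsewhere in this series.

    struct RuleIndex<'a> {
        by_tag: HashMap<String, Vec<&'a Rule>>, // rules whose selector names a tag
        universal: Vec<&'a Rule>,               // rules whose selector has no tag name
    }

    impl<'a> RuleIndex<'a> {
        fn new(stylesheet: &'a Stylesheet) -> RuleIndex<'a> {
            let mut index = RuleIndex { by_tag: HashMap::new(), universal: Vec::new() };
            for rule in stylesheet.rules.iter() {
                for selector in rule.selectors.iter() {
                    match *selector {
                        Simple(ref s) => match s.tag_name {
                            // A rule with several selectors lands in several buckets;
                            // a real engine would de-duplicate the resulting matches.
                            Some(ref tag) => index.by_tag.entry(tag.clone()).or_insert_with(Vec::new).push(rule),
                            None => index.universal.push(rule),
                        },
                    }
                }
            }
            index
        }
    }

    Looking up candidate rules for an element would then scan the bucket for the element's tag name plus the universal bucket, instead of the whole style sheet.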

    Once we have the matching rules, we can find the specified values for the element. We insert each rule’s property values into a HashMap. We sort the matches by specificity, so the higher specificity rules are processed after the lower ones and can overwrite their values in the HashMap.

    /// Apply styles to a single element, returning the specified values.
    fn specified_values(elem: &ElementData, stylesheet: &Stylesheet) -> PropertyMap {
        let mut values = HashMap::new();
        let mut rules = matching_rules(elem, stylesheet);
    
        // Go through the rules from lowest to highest specificity.
        rules.sort_by(|&(a, _), &(b, _)| a.cmp(&b));
        for &(_, rule) in rules.iter() {
            for declaration in rule.declarations.iter() {
                values.insert(declaration.name.clone(), declaration.value.clone());
            }
        }
        return values;
    }
    

    Now we have everything we need to walk through the DOM tree and build the style tree. Note that selector matching works only on elements, so the specified values for a text node are just an empty map.

    /// Apply a stylesheet to an entire DOM tree, returning a StyledNode tree.
    pub fn style_tree<'a>(root: &'a Node, stylesheet: &'a Stylesheet) -> StyledNode<'a> {
        StyledNode {
            node: root,
            specified_values: match root.node_type {
                Element(ref elem) => specified_values(elem, stylesheet),
                Text(_) => HashMap::new()
            },
            children: root.children.iter().map(|child| style_tree(child, stylesheet)).collect(),
        }
    }
    

    That’s all of robinson’s code for building the style tree. Next I’ll talk about some glaring omissions.

    The Cascade

    Style sheets provided by the author of a web page are called author style sheets. In addition to these, browsers also provide default styles via user agent style sheets. And they may allow users to add custom styles through user style sheets (like Gecko’s userContent.css).

    The cascade defines which of these three “origins” takes precedence over another. There are six levels to the cascade: one for each origin’s “normal” declarations, plus one for each origin’s !important declarations.

    Robinson’s style code does not implement the cascade; it takes only a single style sheet. The lack of a default style sheet means that HTML elements will not have any of the default styles you might expect. For example, the <head> element’s contents will not be hidden unless you explicitly add this rule to your style sheet:

    head { display: none; }
    

    Implementing the cascade should be fairly easy: Just track the origin of each rule, and sort declarations by origin and importance in addition to specificity. A simplified, two-level cascade should be enough to support the most common cases: normal user agent styles and normal author styles.
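
    To make that concrete, here is a hypothetical sketch of the extended sort key (again in current Rust syntax, reusing the Specificity and Rule types from above and assuming Specificity is an ordered tuple as in Part 3). The !important levels are left out for simplicity.

    use std::cmp::Ordering;

    // Lowest priority first, so that sorting puts author rules after user agent
    // rules and later entries overwrite earlier values in the PropertyMap,
    // exactly as specified_values already does for specificity.
    #[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
    enum Origin {
        UserAgent,
        Author,
    }

    /// Like MatchedRule, but also carrying the rule's origin.
    type CascadedRule<'a> = (Origin, Specificity, &'a Rule);

    fn sort_by_cascade<'a>(rules: &mut Vec<CascadedRule<'a>>) {
        rules.sort_by(|a, b| match a.0.cmp(&b.0) {
            Ordering::Equal => a.1.cmp(&b.1),
            other => other,
        });
    }

    A cascaded version of specified_values would then sort with sort_by_cascade instead of sorting by specificity alone, and insert declarations in that order.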

    Computed Values

    In addition to the “specified values” mentioned above, CSS defines initial, computed, used, and actual values.

    Initial values are defaults for properties that aren’t specified in the cascade. Computed values are based on specified values, but may have some property-specific normalization rules applied.

    Implementing these correctly requires separate code for each property, based on its definition in the CSS specs. This work is necessary for a real-world browser engine, but I’m hoping to avoid it in this toy project. In later stages, code that uses these values will (sort of) simulate initial values by using a default when the specified value is missing.
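
    A minimal sketch of that fallback, as a hypothetical helper on StyledNode (current Rust syntax; robinson's actual code may differ), could look like this:

    impl<'a> StyledNode<'a> {
        /// Return the specified value of a property, or `default` if the
        /// cascade produced nothing; a crude stand-in for initial values.
        fn value_or(&self, name: &str, default: Value) -> Value {
            self.specified_values.get(name).cloned().unwrap_or(default)
        }
    }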

    Used values and actual values are calculated during and after layout, which I’ll cover in future articles.

    Inheritance

    If text nodes can’t match selectors, how do they get colors and fonts and other styles? The answer is inheritance.

    When a property is inherited, any node without a cascaded value will receive its parent’s value for that property. Some properties, like 'color', are inherited by default; others only if the cascade specifies the special value 'inherit'.

    My code does not support inheritance. To implement it, you could pass the parent’s style data into the specified_values function, and use a hard-coded lookup table to decide which properties should be inherited.
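
    Here is a hypothetical sketch of that approach (not robinson's actual code, and in current Rust syntax): seed the map with the parent's values for a hard-coded set of inherited properties, then let the element's own cascaded declarations overwrite them. It ignores the explicit 'inherit' keyword.

    /// Properties inherited by default; a real engine derives this list from
    /// each property's definition in the CSS specs.
    const INHERITED: &[&str] = &["color", "font-family", "font-size", "line-height"];

    fn specified_values_with_inheritance(elem: &ElementData,
                                         stylesheet: &Stylesheet,
                                         parent: Option<&PropertyMap>) -> PropertyMap {
        let mut values = HashMap::new();

        // Start from the parent's values for inherited properties, if any.
        if let Some(parent) = parent {
            for &name in INHERITED {
                if let Some(value) = parent.get(name) {
                    values.insert(name.to_string(), value.clone());
                }
            }
        }

        // The element's own matched declarations win over inherited values.
        values.extend(specified_values(elem, stylesheet));
        values
    }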

    Style Attributes

    Any HTML element can include a style attribute containing a list of CSS declarations. There are no selectors, because these declarations automatically apply only to the element itself.

    <span style="color: red; background: yellow;">
    

    If you want to support the style attribute, make the specified_values function check for the attribute. If the attribute is present, pass it to parse_declarations from the CSS parser. Apply the resulting declarations after the normal author declarations, since the attribute is more specific than any CSS selector.
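
    A hypothetical sketch of that overlay step (current Rust syntax; the exact signature of parse_declarations is an assumption, since this series only describes it in prose):

    /// Overlay declarations from a `style` attribute on top of the values
    /// produced by the matched rules.
    fn apply_style_attribute(elem: &ElementData, values: &mut PropertyMap) {
        if let Some(css) = elem.get_attribute("style") {
            // Assumes parse_declarations takes the raw declaration text and
            // returns a Vec<Declaration>, like the declarations inside a rule.
            for declaration in parse_declarations(css) {
                values.insert(declaration.name.clone(), declaration.value.clone());
            }
        }
    }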

    Exercises

    In addition to writing your own selector matching and value assignment code, for further exercise you can implement one or more of the missing pieces discussed above, in your own project or a fork of robinson:

    1. Cascading
    2. Initial and/or computed values
    3. Inheritance
    4. The style attribute

    Also, if you extended the CSS parser from Part 3 to include compound selectors, you can now implement matching for those compound selectors.

    To Be Continued…

    Part 5 will introduce the layout module. I haven’t finished the code for this yet, so there will be another delay before I can start writing the article. I plan to split layout into at least two articles (one for block layout and one for inline layout, probably).

    In the meantime, I’d love to see anything you’ve created based on these articles or exercises. If your code is online somewhere, feel free to add a link to the comments below! So far I have seen Martin Tomasi’s Java implementation and Pohl Longsine’s Swift version.

    Kaustav Das ModakConnect Firefox OS Spreadtrum devices through adb

    The ultra low-cost Firefox OS devices to be launched in India are built on Spreadtrum chipsets. Here are the quick steps for people running Linux or OS X to connect their Spreadtrum devices through adb: Make sure the device is detected. Connect the device through a USB cable. Enable Remote Debugging on the device […]

    Maja FrydrychowiczSnapshots from my OPW Internship with Mozilla QA

    Throughout my OPW [1] internship with Mozilla QA [2] I've been keeping an informal log in outline form [3]. In it, I briefly describe what I accomplish (or fail to accomplish) each day, problems I encounter, who I talk to about them, which meetings I attend, what I read, useful tricks I learn, etc. So far, I have about 60-days worth of these tiny log entries about Mozilla. Here's what they look like:

    Checkvist Mozilla Log Screenshot

    Day-to-day, the log helps me answer questions like "How did I solve this weird configuration problem three weeks ago?" or "What should I ask about at the next team meeting?" Writing also generally helps me think through a task, and the log is a quick and effective outlet for that. The other major benefit is that I can take a step back and see the overall progress of my projects.

    So, what's it like being an intern with Mozilla QA?

    I'm so glad you asked! First, some context.

    • OPW interns work remotely.
    • The internship position I applied for is called "Bug Wrangler", which refers to tasks like reproducing and triaging incoming Firefox bugs, but I've actually (mostly) been doing Django web development.

    To future interns: as in my case, there can be some flexibility about your internship activities, and during your application process, you'll narrow down what you will work on. The mentor I applied to offered a Django project as an option under the Bug Wrangler umbrella, and that was more in line with my interests and experience than bug triage, so that's what I chose to focus on.

    Based on my handy log, I'll answer a slightly more specific question:

    "What did Maja do during a typical week while working on a Django project for Mozilla QA?"

    Routines

    Often, I start my day by skimming my latest "bug mail" (updates from Bugzilla) and checking my Bugzilla dashboard to see if I need to follow up on anything immediately.

    The other regular occurrence is about 2 hours of video meetings per week. I meet with my mentor once a week to discuss my general progress and my post-internship plans. I lurk at one QA team meeting almost every week, where I mostly don't have enough context to understand much. My mentor filled me in on some things and my understanding gradually improved. There are also two regular meetings for One and Done, the project I'm contributing to: a weekly technical meeting to discuss the design of new features, and a biweekly check-in meeting with project managers, developers and a few key users.

    Week 3

    The early weeks of the internship involved a lot of reading and trying things out, of course. At this point, I was finishing up the official Django tutorial as well as responding to some administrative requests about the internship.

    Just for fun, I used vim throughout my Django learnings to rediscover some handy vim commands. I also applied the tutorial concepts to the One and Done source code as much as I could, and thus discovered what other parts of Django I need to become familiar with, like generic class-based views.

    I gradually became more familiar with how the One and Done code is structured by looking at how its models are used, poking at its URLconf, and populating my local database with example data.

    Week 5

    At this point, I was just about finished with my first substantial pull request to One and Done. My changes broke some unit tests, which caused me to discover that some of our test data was using the wrong data type: a regular Python dictionary instead of a Django QueryDict. Cool.

    I actually spent a bunch of time getting the unit tests to run in my dev environment, which is on a Linux virtual machine. My local copy of the project is stored in a directory that is shared between my Linux guest OS and Windows host OS, which happens to rely on file permissions that the nose testing library doesn't like. In the end, I chose to have a clone of the project in a non-shared directory that I used just for running unit tests.

    My work log also describes in detail how unintended changes to my development branch in git turned my Github pull request into a giant, unreadable mess. Aaah! (Be careful what you branch from and what you merge with, friends.) I had to close my original pull request and make a new, clean one, which was fairly embarrassing. Now I remember that on that day my friend and I were co-working in my apartment to battle the loneliness of remote work, and she generously listened to me venting my misery about the incident. :) In retrospect, I learned a lot about git.

    Later that week, that same pull request got merged and I started investigating a bug I ran into in one of the libraries our project relies on, which involved asking some questions on IRC.

    All around, a good week.

    Week 9

    First I finished up a couple of things I had started earlier.

    I also contributed my first few code reviews: the week before I missed an issue that someone else caught (doh!), but this week I found something that needed to be fixed (yay!). This was cool because I found the problem by simply taking the time to understand code that was mostly mysterious to me. Bonus: I learned a bit about Mock and patch.

    By the end of the week, I was focused on sketching out the functionality and implementation of a new One and Done feature. I enjoyed working with the project managers to define the feature requirements. Figuring out how to implement them required a few more weeks of research and learning on my part, but it all worked out in the end.

    This is why I like work logs!

    Reviewing my work log to write this article was eye-opening for me, especially due to the perspective it offers of the ups and downs I experienced during my internship. On some days, I felt quite frustrated, stuck, discouraged, and all that bad stuff. So, I like how the log shows that feeling crappy for a few days here and there totally doesn't matter overall. I learned a lot in the past couple of months and it's incredibly satisfying to see that itemized in one big list.


    1. Outreach Program for Women 

    2. Quality Assurance 

    3. I write the log using Checkvist. It's fantastic. I did the same while at Hacker School. 

    Clint TalbertThe Odyssey of Per-Push, On-Phone Firefox OS Automation

    When we started automating tests for Firefox OS, we knew that we could do a lot with automated testing on phone emulators–we could run in a very similar environment to the phone, using the same low level instruction set, even do some basic operations like SMS between two emulator processes. Best of all, we could run those in the cloud, at massive scale.

    But, we also knew that emulator based automation wasn’t ever going to be as complete as actually testing on real phones. For instance, you can’t simulate many basic smart phone operations: calling a number, going to voice-mail, toggling airplane mode, taking a picture, etc. So, we started trying to get phones running in automation very early with Firefox OS, almost two years ago now.

    We had some of our very early Unagi phones up and running on a desk in our office. That eventually grew to a second generation of Hamachi based phones. There were several core scalability problems with both of these solutions:

    1. No reliable way to power-cycle a phone without a human walking up to it, pulling out the battery and putting it back in
    2. At the time these were pre-production phones (hence the code names), and were hard to get in bulk from partners. So, we did what we could with about 10 phones that ran smoketests, correctness tests, and performance tests.
    3. All of the automation jobs and results had to be tracked by hand. And status had to be emailed to developers — there was no way to get these reporting to our main automation dashboard, TBPL.
    4. Because we couldn’t report status to TBPL, maintaining the system and filing bugs when tests failed had to be done entirely by a dedicated set of 4 QA folk–not a scalable option, to say the least.

    Because of points 1 and 2, we were unable to truly scale the number of devices. We only had one person in Mountain View, and what we had thought of as a part time job of pulling phone batteries soon became his full time job. We needed a better solution to increase the number of devices while we worked in parallel to create a better dashboard for our automation that would allow a system like this to easily plug in and report its results.

    The Flame reference device solved that first problem. Now, we had a phone whose hardware we could depend on, and Jon Hylands was able to create custom battery harnesses for it so that we could instruct our scripts to automatically detect dead phones and remotely power cycle them (and in the future, monitor power consumption). Because we (Mozilla) commissioned the Flame phone ourselves, there were no partner related issues with obtaining pre-production devices–we could easily get as many as we needed. After doing some math to understand our capacity needs, we got 40 phones to seed our prototype lab to support per-push automation.

    As I mentioned, we were solving the dashboard problem in parallel, and that has now been deployed in the form of Treeherder, which will be the replacement for TBPL. That solves point 3. All that now remains is point 4. We have been hard at work crafting a unified harness to run the Gaia Javascript tests on device, which will also allow us to run the older, existing python tests until they can be converted. This gives us the most flexibility and lets us take advantage of all the automation goodies in the existing python harness, like crash detection, JSON structured logging, etc. Once it is complete, we will be able to run a smaller set of the same tests the developers run locally for each push to b2g-inbound on these Flame devices in our lab. This means that when something breaks, it will break tests that are well known, in a well understood environment, and we can work alongside the developers to understand what broke and why. By enabling developers and QA to work alongside one another, we eliminate the scaling problem in point 4.

    It’s been a very long road to get from zero to where we are today. You can see the early pictures of the “phones on a desk” rack and pictures of the first 20 Flames from Stephen’s presentation he gave earlier this month.

    A number of teams helped get us to this point, and it could not have been done without the cooperation among them: the A*Team, the Firefox OS Performance team, the QA team, and the Gaia team all helped get us to where we are today. You can see the per-push tests showing up on the Treeherder Staging site as we ensure we can meet the stability and load requirements necessary for running in production.

    Last week, James Lal and his new team inherited this project. They are working hard to push the last pieces to completion as well as expanding it even further. And so, even though Firefox OS has had real phone automation for years, that system is now coming into its own. The real-phone automation will finally be extremely visible and easily actionable for all developers, which is a huge win for everyone involved.

    Eric ShepherdThe Sheppy Report: August 22, 2014

    This week looks slower than usual when you look at this list, but the week involved a lot of research.

    What I did this week

    • Reviewed and made (very) minor tweaks to Chris Mills’s doc plan for the Gaia web components and QA documentation.
    • Created an initial stub of a page for the canvas documentation plan.
    • Spent the weekend and a bit of Monday getting my broken server, including this blog, back up and running after a not-entirely-successful (at first) upgrade of the server from OS X 10.6.8 Server to 10.9.4. But most things are working now. I’ll get the rest fixed up over the next few days.
    • Pursued the MDN inbox project, trying to wrap it up.
      • Asked for feedback on the current state of things.
      • Added a subtle background color to the background of pages in the Inbox.
    • Started discussions on dev-mdc and staff mailing list about the documentation process; we’re going to get this thing straightened up and organized.
    • Filed bug 1056026 proposing that the Firefox_for_developers macro be updated to list both newer and older versions of Firefox.
    • Redirected some obsolete pages to their newer, replacement, content in the MDN meta-documentation.
    • Created a Hacker News account and upvoted a post about Hacks on canuckistani’s request.
    • Updated the MDN Administration Guide.
    • Installed various packages and add-ons on my Mac and server in preparation for testing WebRTC code.
    • Forked several WebRTC projects from GitHub to experiment with.
    • Found (after a surprisingly lengthy search) a micro-USB cable so I could charge and update my Geeksphone Peak to Firefox OS 2.0’s latest nightly build.
    • Re-established contact with Piotr at CKSource about continuing work to get our editor updated and improved.
    • Removed a mess of junk from a page in pt-BR; looks like someone used an editor that added a bunch of extra <span>s.
    • Successfully tested a WebRTC connection between my Firefox OS phone and my iMac, using my Mac mini as server. I should be ready to start writing code of my own, now that I know it all works!
    • Filed bug 1057546: we should IMHO strip HTML tags that aren’t part of a string from within a macro call; this would prevent unfortunate errors.
    • Filed bug 1057547 proposing that the editor be updated to detect uses of the style attribute and of undefined classes, and present warnings to the user when they do so.
    • Fixed a page that was incorrectly translated in place, and emailed the contributor a reminder to be careful in the future.

    Meetings attended this week

    Monday

    • MDN dev team meeting on security and improved processes to prevent problems like the email address disclosure we just had happen.
    • MDN developer triage meeting.

    Tuesday

    • Developer Engagement weekly meeting.
    • 1:1 with Jean-Yves Perrier.

    Wednesday

    • 1:1 with Ali.

    Thursday

    • Writers’ staff meeting.

    Friday

    • #mdndev weekly review meeting.
    • MDN bug swat meeting.
    • Web API documentation meeting.

    So… it was a wildly varied week. But I got a lot of interesting things done.

    Gervase MarkhamHSBC Weakens Their Internet Banking Security

    From a recent email about “changes to your terms and conditions”. (“Secure Key” is their dedicated keyfob 2-factor solution; it’s currently required both to log in and to pay a new payee. It’s rather well done.)

    These changes will also enable us to introduce some enhancements to our service over the coming months. You’ll still have access to the full Internet Banking service by logging on with your Secure Key, but in addition, you’ll also be able to log in to a limited service when you don’t use your Secure Key – you’ll simply need to verify your identity by providing other security information we request. We’ll contact you again to let you know when this new feature becomes available to you.

    Full details of all the changes can be found below which you should read carefully. If you choose not to accept the changes, you have the right to ask us to stop providing you with the [Personal Internet Banking] service, before they come into effect. If we don’t hear from you, we’ll assume that you accept the changes.

    Translation: we are lowering the security we use to protect your account information from unauthorised viewing and, as long as you still want to be able to access your account online at all, there’s absolutely nothing you can do about it.

    Amy TsayWhat Healthy Relationships Teach Us About Healthy Communities

    In organizations where communities form (whether around a product, mission, or otherwise), there is often a sense of perplexity or trepidation around how to engage with them. What is the proper way to talk to community members? How do I work with them, and what can I do to keep the community healthy and growing? The good news is, if you know what it takes to have a healthy personal relationship, you already know how to build a healthy community.

    Prioritize them

    In a good relationship, we prioritize the other person. At Mozilla, the QA team makes it a point to respond to volunteer contributors within a day or two. A lack of response is one of the top reasons why people leave online communities, so it’s important not to keep them hanging. It doesn’t feel good to volunteer your time on a project only to be left waiting when you ask questions or request feedback, just as it would if your partner doesn’t return your phone calls.

    Be authentic

    Authenticity and honesty in a relationship are the building blocks of trust. If you make a mistake, admit it and set it right. Your tone and word choice will reflect your state of mind, so be aware of it when composing a message. When you come from a place of caring and desire to do what’s right for the community, instead of a place of fear or insecurity, your words and actions will foster trust.

    Be appreciative

    Strong relationships are formed when both parties value and appreciate each other. It’s a great feeling when you take out the trash or do the dishes, and it’s noticed and praised. Make it a ritual to say thanks to community members who make an impact, preferably on the spot, and publicly if possible and appropriate.

    Be their champion

    Be prepared to go to bat for the community. I was once in a relationship with a partner who would not defend me in situations where I was being mistreated; it didn’t end well. It feels nice to be advocated for, to be championed, and it creates a strong foundation. When you discover a roadblock or grievance, take the time to investigate and talk to the people who can make it right. The community will feel heard and valued.

    Empathize

    The processes and programs that support community participation require an understanding of motivation. To understand motivation, you have to be able to empathize. Everyone views the world from their own unique perspectives, so it’s important to try and understand them, even if they’re different from your own. 

    Set expectations

    Understand your organization’s limitations, as well as your own, and communicate them. If your partner expects you to be home at a certain time and you don’t show up, the anger you encounter likely has more to do with not being told you’re going to be late than with the lateness itself.

    Guidelines and rules for participation are important components as well. I once featured content from a community member and was met by an angry online mob, because although the content was great, the member hadn’t reached a certain level of status. The guidelines didn’t cover eligibility for featuring, and up until then only longer-term participants had been featured, so the community’s expectations were not met.

    Not apples to apples

    I would never want to get anyone in trouble by suggesting they treat their community members exactly the same as their partners. Answering emails from anyone while having dinner with your loved one is not advised. The take-away is there isn’t any mystery to interacting with a community. Many of the ingredients for a healthy community are ones found in healthy relationships, and most reassuring of all, we already know what they are.


    Robert KaiserMirror, Mirror: Trek Convention and FLOSS Conferences

    It's been a while since I did any blogging, but that doesn't mean I haven't been doing anything - on the contrary, I have been too busy to blog, basically. We had a few Firefox releases where I scrambled until the last day of the beta phase to make sure we keep our crash rates as low as our users probably expect by now, I did some prototyping work on QA dashboards (with already-helpful results and more to come) and helped in other process improvements on the Firefox Quality team, worked with different teams to improve stability of our blocklist ping "ADI" data, and finally even was at a QA work week and a vacation in the US. So plenty of stuff done, and I hope to get to blog about at least some pieces of that in the next weeks and months.

    That said, one major part of my recent vacation was the Star Trek Las Vegas Convention, which I attended for the second time after last year. Ever since then, I've wanted to blog about some interesting parallels I found between that event (I can't compare to other conventions, as I've never been to any of those) and some Free, Libre and Open Source Software (FLOSS) conferences I've been to, most notably FOSDEM, but also the larger Mozilla events.
    Of course, there's the big events in the big rooms and the official schedule - on the conferences it's the keynotes and presentations of developers about what's new in their software, what they learned or where we should go, on the convention it's actors and other guests talking about their experiences, what's new in their lives, and entertaining the crowd - both with questions from the audience. Of course, the topics are wildly different. And there's booths at both, also quite a bit different, as it's autograph and sales booths on one side, and mainly info booths on the other, though there are geeky T-shirts sold at both types of events. ;-)

    The largest parallels I found, though, are about the mass of people that are there:
    For one thing, the "hallway track" of talking to and meeting other attendees is definitely a main attraction and big piece of the life of the events on both "sides" there. Old friendships are being revived, new found, and the somewhat geeky commonalities are being celebrated and lead to tons of fun and involved conversations - not just the old fun bickering between vi and emacs or Kirk and Picard fans (or different desktop environments / different series and movies). :)
    For the other, I learned that both types of events are in the end more about the "regular" attendees than the speakers, even if the latter end up being featured at both. Especially the recurring attendees go there because they want to meet and interact with all the other people going there, with the official schedule being the icing on the cake, really. Not that it would be unimportant or unneeded, but it's not as much the main attraction as people on the outside, and possibly even the organizers, might think. Also, going there means you do for a few days not have to hide your "geekiness" from your surroundings and can actively show and celebrate it. There's also some amount of a "do good" atmosphere in both those communities.
    And both events, esp. the Trek and Mozilla ones, tend to have a very inclusive atmosphere of embracing everyone else, no matter their physical appearance, gender or other social components. And actually, given how deeply that inclusive spirit has been anchored into the Star Trek productions by Gene Roddenberry himself, this might even run deeper in the fans there than it does in the FLOSS world. Notably, I saw a much larger number of women and people of color at the Star Trek Conventions than I see at FLOSS conferences - my guess is that at least a third of the Trek fans in Las Vegas were female, for example. I guess we need some more role models in the style of Nichelle Nichols and others in the FLOSS scene.

    All in all, there's a lot of similarities and still quite some differences, but it's quite a twist on an alternate universe like the one depicted in Mirror, Mirror and other episodes - here it's a different crowd with a similar spirit, not the same people with different mindsets and behaviors.
    As a very social person, I love attending and immersing myself in both types of events, and I somewhat wonder if and how we should have some more cross-pollination between those communities.
    I for sure will be seen on more FLOSS and Mozilla events as well as more Star Trek conventions! :)

    Peter Bengtssonpremailer now with 100% test coverage

    One of my most popular GitHub Open Source projects is premailer. It's a python library for combining HTML and CSS into HTML with all its CSS inlined into tags. This is a useful and necessary technique when sending HTML emails because you can't send those with an external CSS file (or even a CSS style tag in many cases).
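
    For reference, a minimal usage sketch looks something like this (the HTML below is just an illustrative snippet, not taken from a real email):

    from premailer import transform

    html = """<html>
    <head><style>p.footer { color: #888; font-size: 11px; }</style></head>
    <body><p class="footer">Unsubscribe at any time.</p></body>
    </html>"""

    # transform() moves the matching CSS rules into style="" attributes
    # on the elements themselves, which is what most email clients need.
    print(transform(html))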

    The project has had 23 contributors so far and, as always, people come in, get some itch they have scratched, and then leave. I really try to get good test coverage and when people come with code I almost always require that it should come with tests too.

    But sometimes you miss things. Also, this project was born as a weekend hack that slowly morphed into an actual package and its own repository and I bet there was code from that day that was never fully test covered.

    So today I combed through the code and plugged all the holes where there wasn't test coverage.
    Also, I set up Coveralls (project page) which is an awesome service that hooks itself up with Travis CI so that on every build and every Pull Request, the tests are run with --with-cover on nosetests and that output is reported to Coveralls.

    The relevant changes you need to make are:

    1) You need to go to coveralls.io (sign in with your GitHub account) and add the repo.
    2) Edit your .travis.yml file to contain the following:

    before_install:
        - pip install coverage
    ...
    after_success:
        - pip install coveralls
        - coveralls
    

    And you need to execute your tests so that coverage is calculated (the coverage module stores everything in a .coverage file which coveralls analyzes and sends). So in my case I changed it to this:

    script:
        - nosetests premailer --with-cover --cover-erase --cover-package=premailer
    

    3) You must also give coveralls some clues so that it reports on only the relevant files. Here's what mine looked like:

    [run]
    source = premailer
    
    [report]
    omit = premailer/test*
    

    Now, I get to have a cute "coverage: 100%" badge in the README and when people post pull requests Coveralls will post a comment to reflect how the pull request changes the test coverage.

    I am so grateful for all these wonderful tools. And it's all free too!

    Mozilla WebDev CommunityBeer and Tell – August 2014

    Once a month, web developers from across the Mozilla Project get together to upvote stories on Hacker News from each of our blogs. While we’re together, we usually end up sharing a bit about our side projects over beers, which is why we call this meetup “Beer and Tell”.

    There’s a wiki page available with a list of the presenters, as well as links to their presentation materials. There’s also a recording available courtesy of Air Mozilla.

    Frederik Braun: Room Availability in the Berlin Office

    freddyb shared (via a ghost presentation by yours truly) a small webapp he made that shows the current availability of meeting rooms in the Mozilla Berlin office. The app reads room availability from Zimbra, which Mozilla uses for calendaring and booking meeting rooms. It also uses moment.js for rendering relative dates to let you know when a room will be free.

    The discussion following the presentation brought up a few similar apps that other Mozilla offices had made to show off their availability, such as the Vancouver office’s yvr-conf-free and the Toronto office’s yyz-conf-free.

    Nigel Babu: hgstats

    nigelb shared (via another ghost presentation, this time split between myself and laura) hgstats, which shows publicly-available graphs of the general health of Mozilla’s mercurial servers. This includes CPU usage, load, swap, and more. The main magic of the app is to load images from graphite, which are publicly visible, while graphite itself isn’t.

    nigelb has offered a bounty of beer for anyone who reviews the app code for him.

    Pomax: Inkcyclopedia

    Pomax shared an early preview of Inkcyclopedia, an online encyclopedia of ink colors. Essentially, Pomax bought roughly 170 different kinds of ink, wrote down samples with all of them, photographed them, and then collected those images along with the kind of ink used for each. Once finished, the site will be able to accept user-submitted samples and analyze them to attempt to identify the color and associate it with the ink used. Unsurprisingly, the site is able to do this using the RGBAnalyse library that Pomax shared during the last Beer and Tell, in tandem with RgbQuant.js.

    Sathya Gunasekaran: screen-share

    gsathya shared a screencast showing off a project that has one browser window running a WebGL game and sharing its screen with another browser window via WebRTC. The demo currently uses Chrome’s desktopCapture API for recording the screen before sending it to the listener over WebRTC.


    Alas, we were unable to beat Hacker News’s voting ring detection. But at least we had fun!

    If you’re interested in attending the next Beer and Tell, sign up for the dev-webdev@lists.mozilla.org mailing list. An email is sent out a week beforehand with connection details. You could even add yourself to the wiki and show off your side-project!

    See you next month!

    Advancing ContentA Call for Trust, Transparency and User Control in Advertising

    Advertising is the Web’s dominant business.  It relies on users for its success, and ironically fails to engage with them in a direct and honest way.  We are advocates of the many benefits that commercial involvement brings to the development of the Internet – it is at our core and part of the Mozilla Manifesto. Advertising is one of those commercial activities, it fuels and grows the Web. But the model has lost its focus by failing to put the user at the center.  We are calling initially on the advertising industry to adopt three core principles of trust, transparency and user control:

    1)  Trust: Do users understand why they are being presented with content? Do they understand what pieces of their data fed into the display decision?

    2)  Transparency: Is it clear to users why advertising decisions are made? Is it clear how their data is being consumed and shared?  Are they aware and openly contributing?

    3)  Control: Do users have the ability to control their own data? Do they have the option to be completely private, completely public or somewhere in between?

    We are re-thinking the model.  We want a world where Chief Marketing Officers, advertising agency executives, industry groups and the advertising technology companies see the real benefits of a user-centric model. These three principles give us the ability to build a strong, long term and more valuable platform for everyone.

    What are we doing?

    Our intention is to improve the experience as a player within the ecosystem. We’ll do this by experimenting and innovating.  All of our work will be designed with trust in mind.  Tiles is our first experiment and we are learning a lot.  Right now, we are showing users tiles from their “frecency” (recent and frequent sites), along with Mozilla information and suggestions and content labeled as sponsored. This experience is pretty basic but will evolve over time. Initial user interactions are positive. Users interacted with content labeled as sponsored that we placed in directory tiles 10x more than Mozilla-based content.

    Our next step will be to give users more transparency and control. Our UP platform will eventually help to power tiles and will help determine which content is displayed to the user.  The platform itself is innovative as it currently allows the interests data to sit client side, completely in the user’s control. The data can still be accessed there without us creating a dossier on the user, outside of the Firefox client.

    We will then put the user first by building an interests dashboard (something that we are already working on) that offers users a way to easily change their interests or participation in enhanced content at any time. The dashboard provides a constant feedback loop with users and will work with all our enhanced content projects.

    What can we promise?

    We will continue to demonstrate that it’s possible to balance commercial interests with public benefit, and to build successful products that respect user privacy and deliver experiences based upon trust, transparency and control.

    • We want to show the world you can do display advertising in a way that respects users’ privacy.
    • We believe that publishers should respect browser signals around tracking and privacy. If they don’t, we’ll take an active role in doing so and all our enhanced content projects will respect DNT.
    • We will respect the Minimal Actionable Dataset, a thought stream pioneered by one of our fellow Mozillians to only collect what’s needed – nothing more – and be transparent about it.
    • We will put users in control to customize, change or turn product features on/off at any time.

    We can’t change the Web from the sidelines, and we can’t change advertising on the Web without being a part of that ecosystem. We are excited about this mission and we’re working hard to achieve our goals. Stay tuned for updates over the coming weeks.

    If this resonates with you and you have ideas or want to help, we’d love to hear from you by leaving comments below or by filling out this form.

    Mozilla Open Policy & Advocacy BlogTrust should be the currency

    At Mozilla, we champion a Web  that empowers people to reach their full potential and be in control of their online lives. In my role at Mozilla this means advocating for products, policies and practices that respect our users and create trusted online environments and experiences.  We believe trust is the most important currency on the Web – and when that trust is violated, the system fails.

    I have been spending a lot of time with our Content Services team as they work on their new initiatives.  Their first challenge is tackling the online advertising ecosystem.  This is hard work but extremely important.  Our core values of trust, transparency and control are just as applicable to the advertising industry as to any other, but they aren’t widely adopted there.

    Today, online advertising is rife with mistrust.  It is opaque for most users because the value exchange is not transparent.  While it should be trust, the prevailing Web currency is user data – much of the content is free because publishers and websites generate revenue through advertising.  At its core, this model is not new or unique, it is common in the media industry (e.g., broadcast television commercials and newspapers that are ad supported).  To improve monetization, online ads are now targeted based on a user’s browsing habits and intentions.  This isn’t a bad thing when done openly or done with consent.  The problem is that this “personalization” is not always transparent, leaving users in the dark about what they have traded for their content.  This breaks the system.

    Our users and our community have told us – through surveys, comments and emails – that transparency and control matter most to them when it comes to online advertising.  They want to know what is happening with their data; they want to control what data is shared, understand how their data is used and what they get for that exchange.  They are willing to engage in the value exchange and allow their data to be used if they understand what happens next.  Our users want trust (and not their data) to be the prevailing currency.  We believe that without this shift in focus, users will limit access to their data and will block ads.

    We want our users to not only trust us but to be able to trust the Web. We want to empower their choices and help them control their online experience. This is why we pioneered the Do Not Track (DNT) initiative.  DNT relies on advertisers, publishers and websites to respect a user’s preference. Unfortunately, many participants in the online advertising ecosystem do not modify their behavior in response to the DNT signal.  In this instance, user choice is not being respected.  So, we must do more for the user and continue to innovate.

    We are doing this by working within the ecosystem to create change.  We are testing our new tiles feature in Firefox and working to ensure that it provides personalization with respect and transparency built in. We are building DNT and other user controls into the tiles experiments and working to establish these foundational elements with our partners.  We are providing users with more information about their Web presence through Lightbeam, and will be testing new privacy initiatives that give users more control over the flow of their data.  We want to bring relevant and personalized content to our users while empowering control that inspires trust.

    We need to see a renewed focus of trust, transparency and control on the Web as a whole.  We can all do better.  We want to see more products and services (and not just in online advertising) developed with those ideals in mind.  For our part, we will continue to do more to innovate and create change so that we deserve your trust.

     

    Aaron KlotzProfile Unlocking in Firefox 34 for Windows

    Today’s Nightly 34 build includes the work I did for bug 286355: a profile unlocker for our Windows users. This should be very helpful to those users whose workflow is interrupted by a Firefox instance that cannot start because a previous Firefox instance has not finished shutting down.

    Firefox 34 users running Windows Vista or newer will now be presented with this dialog box:

    Clicking “Close Firefox” will terminate that previous instance and proceed with starting your new Firefox instance.

    Unfortunately this feature is not available to Windows XP users. To support this feature on Windows XP we would need to call undocumented API functions. I prefer to avoid calling undocumented APIs when writing production software due to the potential stability and compatibility issues that can arise from doing so.

    While this feature adds some convenience to an otherwise annoying issue, please be assured that the Desktop Performance Team will continue to investigate and fix the root causes of long shutdowns so that a profile unlocker hopefully becomes unnecessary.

    Doug BelshawSome preliminary thoughts toward v2.0 of Mozilla's Web Literacy Map

    As we approach the Mozilla Festival 2014, my thoughts are turning towards revisiting the Web Literacy Map. This, for those who haven’t seen it, comprises the skills and competencies Mozilla and a community of stakeholders believe to be important to read, write and participate on the web. Now that we’ve had time to build and iterate on top of the first version, it’s time to start thinking about a v2.0.

    Thinking

    The first thing to do when revisiting something like this is to celebrate the success it’s had: webmaker.org/resources is now structured using the 15 competencies identified in v1.1 of the Web Literacy Map. Each of those competencies now has an associated badge. We’ve published a whitepaper entitled Why Mozilla cares about Web Literacy, in which it features heavily. It’s also been used as the basis of the Boys and Girls Clubs of America’s new technology strategy, and by MOUSE in their work around Privacy. That’s just a few examples amongst the countless other times it’s been shared on social media and by people looking for something more nuanced than the usual new literacies frameworks.

    Deadlines being what they are, the group that were working on the Web Literacy Map had to move a bit more quickly than we would have liked in the final stages of putting it together. As a result, although the 15 competencies are reasonably solid, we were never 100% happy with the description of the skills underpinning each of these. Nevertheless, we decided to roll with it for launch, made a few updates post-MozFest, and then ‘froze’ development so that others could build on top of it.

    At the beginning of 2014, the Open Badges work at Mozilla was moved to a new non-profit called the Badge Alliance. As co-chair of the working group on Digital & Web Literacies, I’ve had a chance to think through web literacy from the perspective of a badged learning pathway with some of the people who helped put together the Web Literacy Map.

    The feeling I get is that with version 2.0 we need to address both the issues we put to one side for the sake of expediency, as well as issues that have cropped up since then. I can name at least five (not listed in any order):

    • Identity
    • Storytelling
    • Protecting the web (e.g. Net Neutrality)
    • Mobile
    • Computer Science

    We’re generally happy with the 15 competencies identified in v1.1 of the Web Literacy Map, and we’ve built resources and badges on top of them. Version 2.0, therefore, is likely to be more about evolution, not revolution.

    If you’ve got any thoughts on this, please do add them to this thread. Alternatively, I’m @dajbelshaw on Twitter and you can email me at doug@mozillafoundation.org

    Adam LoftingOverlapping types of contribution

    Screen Shot 2014-08-21 at 14.02.27TL;DR: Check out this graph!

    Ever wondered how many Mozfest Volunteers also host events for Webmaker? Or how many code contributors have a Webmaker contributor badge? Now you can find out

    The MoFo Contributor dashboard we’re working from at the moment is called our interim dashboard because it combines numbers from multiple data sources, but the number of contributors is not de-duped across systems.

    So if you’re counted as a contributor because you host an event for Webmaker, you will be double counted if you also file bugs in Bugzilla. And until now, we haven’t known what those overlaps look like.
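
    To illustrate the idea with made-up identifiers (not our real data or systems), de-duping is essentially a set union across the different sources:

    # Hypothetical contributor identifiers pulled from two systems.
    webmaker_hosts = {"alice@example.org", "bob@example.org"}
    bugzilla_filers = {"bob@example.org", "carol@example.org"}

    naive_total = len(webmaker_hosts) + len(bugzilla_filers)   # 4 (Bob counted twice)
    deduped_total = len(webmaker_hosts | bugzilla_filers)      # 3 (each person once)
    overlap = webmaker_hosts & bugzilla_filers                 # {"bob@example.org"}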

    This interim solution wasn’t perfect, but it’s given us something to work with while we’re building out Baloo and the cross-org areweamillionyet.org (and by ‘we’, the vast credit for Baloo is due to our hard working MoCo friends Pierros and Sheeri).

    To help with prepping MoFo data for inclusion in Baloo, and by  generally being awesome, JP wired up an integration database for our MoFo projects (skipping a night of sleep to ship V1!).

    We’ve tweaked and tuned this in the last few weeks and we’re now extracting all sorts of useful insights we didn’t have before. For example, this integration database is behind quite a few of the stats in OpenMatt’s recent Webmaker update.

    The downside to this is we will soon have a de-duped number for our dashboard, which will be smaller than the current number. Which will feel like a bit of a downer because we’ve been enthusiastically watching that number go up as we’ve built out contribution tracking systems throughout the year.

    But, a smaller more accurate number is a good thing in the long run, and we will also gain new understanding about the multiple ways people contribute over time.

    We will be able to see how people move around the project, and find that what looks like someone ‘stopping’ contributing, might be them switching focus to another team, for example. There are lots of exciting possibilities here.

    And while I’m looking at this from a metrics point of view today, the same data allows us to make sure we say hello and thanks to any new contributors who joined this week, or to reach out and talk to long running active contributors who have recently stopped, and so on.

    Pete MooreWeekly review 2014-08-21

    Highlights since last review

    • Wrote Android Play Store code, got r+ from Rail
    • Set up staging environment, staging release hopefully today
    • Solved pip install problems

    Goals for next week:

    • Get back to vcs sync work

    Bugs I created since last review:

    Other bugs I updated since last review:

    Marco ZeheBlog maintenance on Saturday

    On Saturday, August 23, starting at 9 AM GMT+02:00 (3 AM Eastern, midnight Pacific), this blog will undergo some much needed maintenance. Afterwards it will hopefully be faster, and also have a new theme. I’ll try to keep the interruption as brief as possible. But just in case, so you know. :)

    Peter BengtssonAggressively prefetching everything you might click

    I just rolled out a change here on my personal blog which I hope will make my few visitors happy.

    Basically; when you hover over a link (local link) long enough it prefetches it (with AJAX) so that if you do click it's hopefully already cached in your browser.

    If you hover over a link and almost instantly hover out it cancels the prefetching. The assumption here is that if you deliberately put your mouse cursor over a link and proceed to click on it you want to go there. Because your hand is relatively slow I'm using the opportunity to prefetch it even before you have clicked. Some hands are quicker than others so it's not going to help for the really quick clickers.

    What I also had to do was set a Cache-Control header of 1 hour on every page so that the browser can learn to cache it.
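
    As a rough sketch of that server-side piece, assuming a Django-style setup (the view name and template path below are made up for illustration and may not match this blog's actual stack), it could look something like this:

    from django.shortcuts import render
    from django.views.decorators.cache import cache_control

    # Let the browser reuse this response for up to an hour, so a page
    # prefetched on hover can be served straight from the local cache.
    @cache_control(max_age=3600, public=True)
    def blog_post(request, slug):
        return render(request, "blog/post.html", {"slug": slug})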

    The effect is that when you do finally click the link, by the time your browser loads it and changes the rendered output it'll hopefully be able to render it from its cache and thus it becomes visually ready faster.

    Let's try to demonstrate this with this horrible animated gif:
    (or download the screencast.mov file)

    Screencast
    1. Hover over a link (in this case the "Now I have a Gmail account" from 2004)
    2. Notice how the Network panel preloads it
    3. Click it after a slight human delay
    4. Notice that when the clicked page is loaded, its served from the browser cache
    5. Profit!

    So the code that does this is quite simple:

    $(function() {
      var prefetched = [];
      var prefetch_timer = null;
      $('div.navbar, div.content').on('mouseover', 'a', function(e) {
        var value = e.target.attributes.href.value;
        if (value.indexOf('/') === 0) {
          if (prefetched.indexOf(value) === -1) {
            if (prefetch_timer) {
              clearTimeout(prefetch_timer);
            }
            prefetch_timer = setTimeout(function() {
              $.get(value, function() {
                // necessary for $.ajax to start the request :(
              });
              prefetched.push(value);
            }, 200);
          }
        }
      }).on('mouseout', 'a', function(e) {
        if (prefetch_timer) {
          clearTimeout(prefetch_timer);
        }
      });
    });
    

    Also, available on GitHub.

    I'm excited about this change because of a couple of reasons:

    1. On mobile, where you might be on a non-wifi data connection, you don't want this. There you don't have the mouse event onmouseover triggering. So people on such devices don't "suffer" from this optimization.
    2. It only downloads the HTML which is quite light compared to static assets such as pictures but it warms up the server-side cache if needs be.
    3. It's much more targeted than a general prefetch meta header.
    4. Most likely content will appear rendered to your eyes faster.

    Nicholas Nethercotemozilla::pkix ships in Firefox!

    In April, we announced an upcoming certificate verification library designed from the ground up to be fast and secure. A few weeks ago, this new library – known as “mozilla::pkix” – shipped with Firefox and is enabled by default. Please see the original announcement for more details.
    Along with using more verifiably secure coding practices, we took the opportunity to closely adhere to the X.509 certificate verification specifications for the Internet. For example, we prevent certificates from being misused in ways that legacy libraries often do not. This protects user data and promotes an overall more secure Web.
    However, this sometimes comes at a compatibility cost. Some certificates issued by certificate authorities not in Mozilla’s Root CA program may no longer work in the same way. We are currently evaluating how we can best balance security with usability with regard to these certificates.
    If you encounter compatibility issues, please read the Certificate Primer which contains information for creating a compatible certificate hierarchy.

    David BoswellQuality over Quantity

    I was in Portland last week for a work week and Michelle recommended that I try the donuts at Blue Star. The blueberry donut was really great. The inside of the bakery was interesting too—right inside the doors was a big mural that said ‘Quality over Quantity’.

    20140812_085436

    That turned out to be a good summary of the work week. We were checking in on progress toward this year’s goal to grow the number of active contributors by 10x and also thinking about how we could increase the impact of our community building work next year.

    One clear take-away was that community building can’t be all about growth. Some teams, like Location Service, do need large numbers of new active contributors, but many teams don’t. For instance, localization needs to develop the active contributors already in the project into core contributors that can take on a bigger role.

    For me, creating a draft framework that would give us more ways to support teams and communities was the most important thing we did—in addition to taking a great team photo :)

    cbt_portland_photo_fun

    Growth is part of this framework, but it includes other factors for us to look at to make sure that we’re building healthy functional and regional communities. The health measures we think we should be focusing on next year are:

    • Retention (how many contributors are staying and leaving)
    • Growth (how many new contributors are joining)
    • Development (how many contributors are getting more deeply involved in a project)
    • Sentiment (how do contributors feel about being involved)
    • Capacity (how are teams increasing their ability to build communities)

    Having this more nuanced approach to community building will create more value because it aligns better with the needs we’re seeing across Mozilla. The growth work we’ve done has been critical to getting us here and we should continue that along with adding more to what we offer.

    scubidiver_video_poster

    There is a video that Rainer just posted that has a story Chris Hofmann told at last year’s summit about one contributor that had a huge impact on the project. This is a great example of how we should be thinking more broadly about community building.

    We should be setting up participation systems that let us help teams build long-lasting relationships with contributors like Scoobidiver as well as helping teams connect with large numbers of people to focus on an issue for a short time when that is what’s needed.

    Moral of this story: Eat more donuts—they help you think :)


    Vladimir VukićevićUpdated Firefox VR Builds

    I’d like to announce the third Firefox Nightly build with experimental VR support. Download links:

    This build includes a number of fixes to CSS VR rendering, as well as some API additions and changes:

    • Fixed CSS rendering (see below for more information)
    • Support for DK2 via 0.4.1 SDK (extended mode only)
    • Experimental auto-positioning on MacOS X — when going fullscreen, the window should move itself to the Rift automatically
    • hmd.setFieldOfView() now takes zNear and zFar arguments
    • New API call: hmd.getRecommendedEyeRenderRect() returns the suggested render dimensions for a given eye; useful for WebGL rendering (see below)

    The DK2 Rift must be in Extended Desktop mode. You will also need to rotate the Rift’s display to landscape. If tracking doesn’t seem to be working, stop the Oculus service using the Configuration Tool first, then launch Firefox.

    CSS Rendering

    Many issues with CSS rendering were fixed in this release. As part of this, the coordinate space when in fullscreen VR is different than normal CSS. When in fullscreen VR mode, the 0,0,0 coordinate location refers to the center of the viewport (and not the top left as is regular in CSS). Additionally, the zNear/zFar values specified to setFieldOfView control the near and far clipping planes.

    The coordinate units are also not rationalized with CSS coordinates. The browser applies a per-eye transform in meters (~ 0.032 meters left/right, or 3.2cm) before rendering the scene; thus the coordinate space ends up being ~1px = ~1m in real space, which is not correct. This will be fixed in the next release.

    Here’s a simple example of showing 4 CSS images on all sides around the viewer, along with some text. The source includes copious comments about what’s being done and why.

    Known issues:

    • The Y axis is flipped in the resulting rendering. (Workaround: add a rotateZ() to the camera transform div)
    • The initial view doesn’t face the same direction as CSS (Workaround: add a rotateY() to the camera transform div)
    • Manual application of the HMD orientation/position is required.
    • Very large CSS elements (>1000px in width/height) may not be rendered properly
    • Units are not consistent when in VR mode

    getRecommendedEyeRenderRect()

    NOTE: This API will likely change (and become simpler) in the next release.

    getRecommendedEyeRenderRect will return the rectangle into which each eye should be rendered, and the best resolution for the given field of view settings. To create an appropriately sized canvas, the size computation should be:

    var leftRect = hmd.getRecommendedEyeRenderRect("left");
    var rightRect = hmd.getRecommendedEyeRenderRect("right");
    var width = leftRect.x + Math.max(leftRect.width, rightRect.x) + rightRect.width;
    var height = Math.max(leftRect.y, rightRect.y) + Math.max(leftRect.height, rightRect.height);
    

    In practice, leftRect.x will be 0, and the y coordinates will both be 0, so this can be simplified to:

    var width = leftRect.width + rightRect.width;
    var height = Math.max(leftRect.height, rightRect.height);
    

    Each eye should be rendered into the leftRect and rightRect coordinates. This API will change in the next release to make it simpler to obtain the appropriate render sizes and viewports.

    Comments and Issues

    As before, issues are welcome via GitHub issues on my gecko-dev repo. Additionally, discussion is welcome on the web-vr-discuss mailing list.

    Christian HeilmannNo more excuses – subtitle your YouTube videos

    I was just very pleasantly surprised that the subtitling interface in YouTube has improved by leaps and bounds since I last looked at it.

    One of the French contributors to Mozilla asked me to get subtitles for the Flame introduction videos and I felt the sense of dread you get when requests like those come in. It seems like a lot of work for not much gain.

    However, using the YouTube auto captioning tool this is quite a breeze:

    subtitling-interface

    I just went to the Subtitles and CC tab and told YouTube that the video is English. Almost immediately (this is kind of fishy – does YouTube already create text from speech for indexing reasons?) I got a nice set of subtitles, time-stamped and all.

    Hitting the edit button I was able to edit the few mistakes the recognition made and it was a simple process of listening as you type. I then turned on the subtitles and exported the SRT files for translation.

    I was very impressed with the auto-captioning as I am not happy with the quality of my talking in those videos (they were rushed and the heartless critic in me totally hears that).

    Of course, there is also Amara as a full-fledged transcribing, captioning and translation tool, but there are not many excuses left for us not to subtitle our short videos.

    Let’s not forget that subtitles are amazing and not only a tool for the hard of hearing:

    • I don’t have to put my headphones in when watching your video in public – I can turn off the sound and not annoy people in the cafe
    • As a non-native speaker they are great to learn a new language (I learned English watching Monty Python’s Flying Circus with subtitles – the only program that did that back then in Germany. This might explain a few things)
    • You can search a video by content without having to know the time stamp and you can provide the subtitles as a transcript in a post
    • You make your work understandable to people with various disabilities.

    Go, hit that Subtitles tab!

    Daniel StenbergThe “right” keyboard layout

    I’ve never considered myself very picky about the particular keyboard I use for my machines. Sure, I work full-time and spare time in front of the same computer and thus I easily spend 2500-3000 hours a year in front of it but I haven’t thought much about it. I wish I had some actual stats on how many key-presses I do on my keyboard on an average day or year or so.

    Then, one of these hot days this summer, I left the roof window above my work place open a little too much while I was away for a brief moment, and a very intense rain storm hit our neighborhood. To put it shortly, the huge amount of water that poured in luckily destroyed only one piece of electronics for me: my trusty old keyboard. The keyboard I just randomly picked from some old computer without any consideration a bunch of years ago.

    So the old was dead, I just picked another keyboard I had lying around.

    But man, very soft rubber-style keys are very annoying to work with. Then I picked another with a weird layout and a control key that required a little too much pressure to be comfortable. So, my race for a good enough keyboard had begun. Obviously I couldn’t just pick a random cheap new one and be happy with it.

    Nordic key layout

    That’s what they call it. It is even a Swedish layout, which among a few other details means it features å, ä and ö keys at a rather prominent place. See illustration. Those letters are used fairly frequently in our language. We have a few peculiarities in the Swedish layout that are downright impractical for programming, like how the {[]} symbols all require AltGr pressed and slash, asterisk and underscore require Shift to be pressed etc. Still, I’ve learned to program on such a layout so I’m quite used to those odd choices by now…

    kb-nordic

    Cursor keys

    I want the cursor keys to be of “standard size”, have the correct location and relative positions. Like below. Also, the page up and page down keys should not be located close to the cursor keys (like many laptop keyboards do).

    keyboard with marked cursorkeys

    Page up and down

    The page up and page down keys should instead be located in the group of six keys above the cursor keys. The group should have a little gap between it and the three keys (print screen, scroll lock and pause/break) above them so that finding the upper row is easy and quick without looking.

    page up and down keys

    Backspace

    I’m not really a good keyboard typist. I do a lot of mistakes and I need to use the backspace key quite a lot when doing so. Thus I’m a huge fan of the slightly enlarged backspace key layout so that I can find and hit that key easily. Also, the return key is a fairly important one so I like the enlarged and strangely shaped version of that as well. Pretty standard.

    kb-backspace

    Further details

    The Escape key should have a little gap below it so that I can find it easily without looking.

    The Caps lock key is completely useless, as locking caps is not something a normal person does, but it can be reprogrammed for other purposes. I’ve still refrained from doing so, mostly to not get accustomed to “weird” setups that make it (even) harder for me to move between different keyboards at different places. Just recently I’ve configured it to work as ctrl – let’s see how that works out.

    The F-keys are pretty useless. I use F5 sometimes to refresh web pages but as ctrl-r works just as well I don’t see a strong need for them in my life.

    Numpad – a completely useless piece of the keyboard that I would love to get rid of – I never use any of those keys. Never. Unfortunately I haven’t found any otherwise decent keyboards without the numpad.

    Func KB-460

    The Func KB-460 is the keyboard I ended up with this time in my search. It has some fun extra cruft such as two USB ports and a red backlight (that can be made to pulse). The backlight gave me extra points from my kids.

    Func KB-460 keyboard

    It is “mechanical” which obviously is some sort of thing among keyboards that has followers and is supposed to be very good. I remain optimistic about this particular model, even if there are a few minor things with it I haven’t yet gotten used to. I hope I’ll just get used to them.

    This keyboard has Cherry MX Red linear switches.

    How it could look

    Based on my preferences and what keys I think I use, I figure an ideal keyboard layout for me could very well look like this:

    my keyboard layout

    Keyfreq

    I have decided to go further and “scientifically” measure how I use my keyboard, which keys I use the most and similar data and metrics. Turns out the most common keylog program on Linux doesn’t log enough details, so I forked it and created keyfreq for this purpose. I’ll report details about this separately – soon.

    Byron Joneshappy bmo push day!

    the following changes have been pushed to bugzilla.mozilla.org:

    • [1047405] Comment tagging GUI not fully localizable because of text in Javascript instead of template
    • [1048712] comment tagging suggestions always returns a single result
    • [1054795] remove ‘Bugzilla Data For Researchers’ link
    • [1050230] Use better icons for the guided bug entry product selection to differentiate Fx, Fx for Android and FxOS
    • [1022707] Duplicate review flags on attachments in Toolkit and Firefox for Metro
    • [1050628] flag state API doesn’t honour bug or attachment security
    • [1055945] splinter generates “Use of uninitialized value” warnings when dealing with public reviews on private attachments

    discuss these changes on mozilla.tools.bmo.


    Filed under: bmo, mozilla

    Benjamin KerensaMozilla and Open Diversity Data

    I have been aware of the Open Diversity Data project for a while. It is the work of the wonderful members of Double Union and their community of awesome contributors. Recently, a Mozillian tweeted that Mozilla should release its Diversity Data. It is my understanding also that a discussion happened internally and for whatever reason a […]

    Mozilla Release Management TeamFirefox 32 beta7 to beta8

    • 20 changesets
    • 52 files changed
    • 363 insertions
    • 162 deletions

    Extension   Occurrences
    cpp         17
    js          9
    h           9
    ini         2
    xul         1
    xml         1
    xhtml       1
    webidl      1
    py          1
    mm          1
    css         1

    Module      Occurrences
    content     15
    js          8
    browser     8
    netwerk     3
    toolkit     2
    testing     2
    dom         2
    modules     1
    mobile      1
    editor      1
    accessible  1

    List of changesets:

    Ryan VanderMeulen: Bug 1023472 - Disable test_bug935876.html on Android for perma-failing when pushed to a different chunk; a=bustage - 1764a68fe1ae
    Ryan VanderMeulen: Bug 1054087 - Disable test_dom_input_event_on_htmleditor.html on Android 2.3 for perma-failing since the number of Android mochitest chunks was increased; a=bustage - ef94af3dd0ad
    Jon Coppeard: Bug 999158 - Keep a spare chunk around to mitigate GGC OOM crashes on tenuring. r=terrence, a=lmandel - 97fd0156fdc2
    Ryan VanderMeulen: Bug 1026805 - Disable frequently-hanging mozapps tests on OSX. a=test-only - 76f7c4f771f5
    Matthew Noorenberghe: Bug 1054411 - Cancel the HTTP requests in browser_keywordSearch.js to avoid making network contact. r=adw, a=test-only - 6dec02f8d0ea
    Florian Quèze: Bug 1048375 - browser_aboutHome.js intermittently causes external requests to snippets.mozilla.com. r=gavin, a=test-only - 8e09aad61a79
    Randell Jesup: Bug 1054166: Mirror Add/RemoveListener in Add/RemoveDirectListener r=roc a=abillings - 6a2810252cf8
    Simon Montagu: Bug 1037641 - Split SetDirectionFromChangedTextNode into TextNodeWillChangeDirection and TextNodeChangedDirection. r=ehsan, a=abillings - 9e94aa2f0ae7
    Brian Hackett: Bug 1053683 - Add overrecursion checks to FillInBMInfo. r=jandem, a=abillings - c6e134b4ed52
    Ed Lee: Bug 1039881 - Use an empty directory tiles data source pref before uplift [r=adw r=bholley a=lmandel] - 6790f9333fec
    Wes Johnston: Bug 910893 - Don't disable the try again button. r=margaret, r=benb, a=lmandel - 7bb962c117df
    Valentin Gosu: Bug 1045886 - Remove Cache directory from Android profiles. r=michal, a=lmandel - 07eb5ce30325
    Valentin Gosu: Bug 1045886 - Increase assertion count in test_bug437844.xul. a=test-only - c444cb84a78b
    Jan de Mooij: Bug 1054359 - Add is-object check to IonBuilder::makeCallHelper. r=efaust, a=lmandel - f5bfa8f3434c
    Jared Wein: Bug 1016434 - Backout Bug 759252 from Firefox 32 and Firefox 33 for causing blurry throbbers. a=lmandel - 3741e9a5c6ca
    Jean-Yves Avenard: Bug 1045591 - Fix media element's autoplay for audio-only stream. r=cpearce, a=lmandel - f595bdcdbd1e
    Alessio Placitelli: Bug 1037214 - Throw OOM to the script instead of aborting in FragmentOrElement::GetTextContentInternal. r=bz, a=lmandel - 353ade05d903
    Ed Morley: Bug 1026987 - Give the MOZ_DISABLE_NONLOCAL_CONNECTIONS error a TBPL-parsable prefix. r=froydnj, a=NPOTB - 92aead6bd5fb
    Andrew McCreight: Bug 1039633 - Always try to set the ASan symbolizer in gtest runs. r=ted, a=test-only - e0e150f31ffe
    Tooru Fujisawa: Bug 1053692 - Do not use optimized stub for spread call with many arguments. r=jandem, a=lmandel - 45953c4613d2

    Mike ShalPGO Performance on SeaMicro Build Machines

    Let's take a look at why our SeaMicro (sm) build machines perform slower than our iX machines. In particular, the extra time it takes to do non-unified PGO Windows builds can cause timeouts in certain cases (on Aurora we have bug 1047621). Since this was a learning experience for me and I hit a few roadblocks along the way, I thought it might be useful to share the experience of debugging the issue. Read on for more details!

    Andrew Overholt“Bootcamp” talks on Air Mozilla

    Thanks to Jonathan Lin and Spencer Hui some of the talks that were presented at the recent “bootcamp” are appearing on Air Mozilla and more will do so as we get them ready. They’re all in Air Mozilla’s engineering channel: https://air.mozilla.org/channels/engineering/

    Gregory SzorcSubmit Feedback about Mercurial

    Are you a Mozillian who uses Mercurial? Do you have a complaint, suggestion, observation, or any other type of feedback you'd like to give to the maintainers of Mercurial? Now's your chance.

    There is a large gathering of Mercurial contributors next weekend in Munich. The topics list is already impressive. But Mozilla's delegation (Mike Hommey, Ben Kero, and myself) would love to advance Mozilla's concerns to the wider community.

    To leave or vote for feedback, please visit https://hgfeedback.paas.allizom.org/e/august-2014-summit before August 29 so your voice may be heard.

    I encourage you to leave feedback about anything, big or small, Mozilla-specific or not. Comparisons to Git, GitHub and other version control tools and services are also welcome.

    If you have feedback that can't be captured in that moderator tool, please email me at gps@mozilla.com.

    Jen Fong-Adwentrevisit.link

    A little over 3 years ago, I was learning node and wanted to try a project with it.

    Michael KaplyWebconverger

    One of projects I've been working on is Webconverger. Webconverger is an open source Linux-based kiosk that uses a customized version of Firefox as the user interface.

    Webconverger is a great choice if you are setting up a kiosk or digital signage. It can be quickly and easily deployed on any type of machine. It works especially well on legacy hardware because of its low resource requirements. It can even be installed onto a USB stick and simply plugged in to an existing machine.

    The configuration for the kiosk is downloaded from a server allowing you to customize your kiosk remotely and it will pick up your latest changes. It has a full featured API that allows you to do things like customize the browser chrome or whitelist certain sites. Plus it even stays updated automatically if you choose by downloading the latest version in the background.

    If you're looking for a kiosk or digital sign solution, I would definitely recommend checking it out. Go to Webconverger.com for more information or email sales@webconverger.com.

    Will Kahn-GreeneInput status: August 19th, 2014

    Development

    High-level summary:

    It's been a slower two weeks than normal, but we still accomplished some interesting things:

    • L Guruprasad finished cleaning up the Getting Started guide--that work helps all future contributors. He did a really great job with it. Thank you!
    • Landed a minor rewrite to rate-limiting/throttling.
    • Redid the Elasticsearch indexing admin page.
    • Fixed some Heartbeat-related things.

    Landed and deployed:

    • cf2e0e2 [bug 948954] Redo index admin
    • f917d41 Update Getting Started guide to remove submodule init (L. Guruprasad)
    • 5eb6d6d Merge pull request #329 from lgp171188/peepify_submodule_not_required_docs
    • c168a5b Update peep from v1.2 to v1.3
    • adf7361 [bug 1045623] Overhaul rate limiting and update limits
    • 7647053 Fix response view
    • f867a2d Fix rulename
    • 8f0c36e [bug 1051214] Clean up DRF rate limiting code
    • 0f0b738 [bug 987209] Add django-waffle (v0.10)
    • b52362a Make peep script executable
    • 461c503 Improvie Heartbeat API docs
    • 8f0ccd3 [bug 1052460] Add heartbeat view
    • d1604f0 [bug 1052460] Add missing template

    Landed, but not deployed:

    • ed2923f [bug 1015788] Cosmetic: flake8 fixes (analytics)
    • afdfc6a [bug 1015788] Cosmetic: flake8 fixes (base)
    • 05e0a33 [bug 1015788] Cosmetic: flake8 fixes (feedback)
    • 2d9bc26 [bug 1015788] Cosmetic: flake8 fixes (heartbeat)
    • dc6e990 Add anonymize script

    Current head: dc6e990

    Rough plan for the next two weeks

    1. Working on Dashboards-for-everyone bits. Documenting the GET API. Making it a bit more functional. Writing up some more examples. (https://wiki.mozilla.org/Firefox/Input/Dashboards_for_Everyone)
    2. Update Input to ElasticUtils v0.10 (bug 1055520)
    3. Land all the data retention policy work (bug 946456)
    4. Gradients (https://wiki.mozilla.org/Firefox/Input/Gradient_Sentiment)
    5. Product administration views (bug 965796)

    Most of that is in some state of half-done, so we're going to spend the next couple of weeks focusing on finishing things.

    What I need help with

    1. (django) Update to django-rest-framework 2.3.14 (bug 934979) -- I think this is straight-forward. We'll know if it isn't if the tests fail.
    2. (django, cookies, debugging) API response shouldn't create anoncsrf cookie (bug 910691) -- I have no idea what's going on here because I haven't looked into it much.
    3. (html) Fixing the date picker in Chrome (bug 1012965) -- The issue is identified. Someone just needs to do the fixing.

    For details, see our GetInvolved page:

    https://wiki.mozilla.org/Webdev/GetInvolved/input.mozilla.org

    If you're interested in helping, let me know! We hang out on #input on irc.mozilla.org and there's the input-dev mailing list.

    Additional thoughts

    We're in the process of doing a Personally Identifiable Information audit on Input, the systems it's running on and the processes that touch and move data around. This covers things like "what data are we storing?", "where is the data stored?", "who/what has access to that data?", "does that data get copied/moved anywhere?", "who/what has access to where the data gets copied/moved to?", etc.

    I think we're doing pretty well. However, during the course of the audit, we identified a few things we should be doing better. Some of them already have bugs, one of them is being worked on already and the others need to be written up.

    Some time this week, I'll turn that into a project and write up missing bugs.

    That's about it!

    Adam LoftingTrendlines and Stacking Logs

    TL;DR

    • Our MoFo dashboards now have trendlines based on known activity to date
    • The recent uptick in activity is partly new contributors, and partly new recognition of existing contributors (all of which is good, but some of which is misleading for the trendline in the short term)
    • Below is a rambling analogy for thinking about our contributor goals and how we answer the question ‘are we on track for 2014?’
    • + if you haven’t seen it, OpenMatt has crisply summarized a tonne of the data and insights that we’ve unpicked during Maker Party

    Stacking Logs

    I was stacking logs over the weekend, and wondering if I had enough for winter, when it struck me that this might be a useful analogy for a post I was planning to write. So bear with me, I hope this works…

    To be clear, this is an analogy about predicting and planning, not a metaphor for contributors* :D

    So the trendline looks good, but…

    Screen Shot 2014-08-19 at 11.47.27

    Trendlines can be misleading.

    What if our task was gathering and splitting logs?

    Vedstapel, Johannes Jansson (1)

    We’re halfway through the year, and the log store is half full. The important question is: ‘will it be full when the snow starts falling?’

    Well, it depends.

    It depends how quickly we add new logs to the store, and it depends how many get used.

    So let’s push this analogy a bit.

    Firewood in the snow

    Before this year, we had scattered stacks of logs here and there, in teams and projects. Some we knew about, some we didn’t. Some we thought were big stacks of logs but were actually stacked on top of something else.

    Vedstapel, Johannes Jansson

    Setting a target was like building a log store and deciding to fill it. We built ours to hold 10,000 logs. There was a bit of guesswork in that.

    It took a while to gather up our existing logs (build our databases and counting tools). But the good news is, we had more logs than we thought.

    Now we need to start finding and splitting more logs*.

    Switching from analogy to reality for a minute…

    This week we added trendlines to our dashboard. These are two linear regression lines: one based on all activity for the year to date, and one based on the most recent 4 weeks. They give a quick feedback mechanism on whether recent actions are helping us towards our targets and whether we’re improving over the year to date.
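
    If it helps to make this concrete, here is a rough sketch of how two such trendlines can be computed from a daily running total of contributors. This is an illustration only, not the actual dashboard code, and all the numbers below are made up:

        # Illustrative sketch only -- not the real dashboard code.
        # Fit two least-squares trendlines to a cumulative contributor count:
        # one over the whole year to date, one over the most recent 4 weeks.
        import numpy as np

        def trendline(days, totals):
            """Return (slope, intercept) of a straight line fitted to the data."""
            slope, intercept = np.polyfit(days, totals, 1)
            return slope, intercept

        # Made-up data: day of the year and cumulative contributor count.
        days = np.arange(1, 228)                     # Jan 1 .. Aug 15
        totals = np.linspace(3000, 5529, days.size)  # pretend running total

        year_to_date = trendline(days, totals)
        last_4_weeks = trendline(days[-28:], totals[-28:])

        # Project both lines forward to day 365 to see where they land.
        for name, (m, b) in (("year to date", year_to_date),
                             ("last 4 weeks", last_4_weeks)):
            print("%s projects %d contributors by year end" % (name, m * 365 + b))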

    These are interesting, but can be misleading given our current working practices. The trendline implies some form of destiny. You do a load of work recruiting new contributors, see the trendline is on target, and relax. But relaxing isn’t an option because of the way we’re currently recruiting contributors.

    Switching back to the analogy…

    We’re mostly splitting logs by hand.

    Špalek na štípání.jpg

    Things happen because we go out and make them happen.

    Hard work is the reason we have 1,800 Maker Party events on the map this year and we’re only half-way through the campaign.

    There’s a lot to be said for this way of making things happen, and I think there’s enough time left in the year to fill the log store this way.

    But this is not mathematical or automated, which makes trendlines based on this activity a bit misleading.

    In this mode of working, the answer to ‘Are we on track for 2014?‘ is: ‘the log store will be filled… if we fill it‘.

    Scaling

    Holzspalter 2

    As we move forward and think about scale… say a hundred thousand logs (or even better, a Million Mozillians)… we need to think about log splitting machines (or ‘systems’).

    Systems can be tested, tuned, modified and multiplied. In a world of ‘systems’ we can apply trendlines to our graphs that are much better predictors of future growth.

    We should be experimenting with systems now (and we are a little bit). But we don’t yet know what the contributor growth system looks like that works as well as the analogous log splitting machines of the forestry industry. These are things to be invented, tested and iterated on, but I wouldn’t bet on them as the solution for 2014 as this could take a while to solve.

    I should also state explicitly that systems are not necessarily software (or hardware). Technology is a relatively small part of the systems of movement building. For an interesting but time-consuming distraction, the talk on Social Machines from last week’s Wikimania conference is worth a ponder.

    Predicting 2014 today?

    Even if you’re splitting logs by hand, you can schedule time to do it. Plan each month, check in on targets and spend more or less time as required to stay on track for the year.

    This boils down to a planning exercise, with a little bit of guess work to get started.

    In simple terms, you list all the things you plan to do this year that could recruit contributors, and how many contributors you think each will recruit. As you complete some of these activities you reflect on your predictions, and modify the plans and update estimates for the rest of the year.
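
    Mechanically, this is nothing more than a running sum compared against the target. A toy sketch (every activity and estimate below is invented purely for illustration):

        # Toy sketch of the planning exercise; every number here is invented.
        contributors_so_far = 5529   # dashboard count as of Aug 15
        target = 10000

        # Remaining planned activities and a guess at how many contributors
        # each will recruit.  Revisit these guesses as activities complete.
        planned = {
            "Maker Party partner events": 1800,
            "Back-to-school wave": 1200,
            "Train-the-trainer sessions": 600,
            "MozFest": 500,
        }

        forecast = contributors_so_far + sum(planned.values())
        print("Forecast for Dec 31: %d" % forecast)
        if forecast < target:
            print("Short by %d -- add or re-scope activities" % (target - forecast))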

    Geoffrey has put together a training workshop for this, along with a spreadsheet structure to make this simple for teams to implement. It’s not scary, and it helps you get a grip on the future.

    From there, we can start to feed our planned activity and forecast recruitment numbers into our dashboard as a trendline rather than relying solely on past activity.

    The manual nature of this splitting-wood-like activity means that what we plan to do is a much more important predictor of the future than extrapolating from what we have done in the past, and that changing the future is something you can go out and do.

    *Contributors are not logs. Do not swing axes at them, and do not under any circumstances put them in your fireplace or wood burning stove.

    Laurent JouanneauRelease of SlimerJS 0.9.2

    A few days ago, I released a minor version of SlimerJS, my scriptable browser based on XulRunner: SlimerJS 0.9.2.

    If you're discovering my project: this is a browser which is controlled by a script, not by a human, so it has no user interface. It is a browser like PhantomJS and offers the same API, but it is based on Gecko, not on WebKit. See my previous post about the start of the project.

    This new version fixes some bugs and is now compatible with Gecko/Firefox/Xulrunner 31.

    Next big work on SlimerJS:

    • fix the last issues that prevent GhostDriver from working well with SlimerJS
    • support Marionette (https://developer.mozilla.org/en-US/docs/Mozilla/QA/Marionette)
    • try to implement remote debugging, so you can debug your scripts from the Firefox Dev Tools
    • try to make it a true headless browser (i.e., a browser with no visible windows)

    Help is welcome. See you on GitHub ;-)

    Christian HeilmannMakethumbnails.com – drop images into the browser, get a zip of thumbnails

    About 2½ years ago I wrote a demo for Mozilla Hacks showing how to use Canvas to create thumbnails. Recently I felt the itch to update it a bit and add more useful functionality. The result is:

    http://makethumbnails.com

    It is very easy to use: Drop images onto the square and the browser creates thumbnails for them and sends them to you as a zip.

    homepage

    Thumbnail settings page

    You can set the size of the thumbnails, choose whether you want them centered on a coloured background of your choice or cropped to their real size, and set the quality. All of this has a live preview.

    If you resize the browser to a very small size (or click the pin icon on the site and open a popup) you can use it as a neat bit of extra functionality alongside Finder:

    resize to simple mode

    All of your settings are stored locally, which means everything will be ready for you when you return.

    As there is no server involved, you can also download the app and use it offline.

    The source, of course, is available on GitHub.

    To see it in action, you can also watch a quick walkthrough of Makethumbnails on YouTube.

    Happy thumbing!

    Chris

    Rizky AriestiyansyahWebmaker with SMK ITACO

    On August 18, 2014 we will carry out the Webmaker event we scheduled previously. The event will be held at SMK ITACO Bekasi, a vocational school for children in difficult economic conditions. We only...

    The post Webmaker with SMK ITACO appeared first on oonlab.

    Doug BelshawFacebook and Twitter: beyond the like/favorite binary?

    There’s been a couple of developments with the social networks Facebook and Twitter that fit together quite nicely this week. The first is the news that Facebook likes make a huge difference in terms of what you see while browsing your news feed:

    Wired writer Mat Honan found out what happens when you like every single thing that shows up in your Facebook feed. The results were dramatic: Instead of his friends’ updates, he saw more and more updates from brands and publishers. And, based on what he had liked most recently, Facebook’s algorithm made striking judgements about his political leanings, giving him huge numbers of extremely right-wing or extremely left-wing posts. What’s more, all that liking made Honan’s own posts show up far more in his friends’ feeds — distorting their view of the world, too.

    But Medium writer Elan Morgan tried the opposite experiment: Not liking anything on Facebook. Instead of pressing like, she wrote a few thoughtful words whenever she felt the need to express appreciation: “What a gorgeous shock of hair” or “Remember how we hid from your grandmother in the gazebo and smoked cigarettes?” The result, as you might guess, is just the opposite of Honan’s experience: Brand messages dwindled away and Facebook became a more relaxed, conversational place for Morgan.

    The second piece of news is that Twitter is experimenting with changes to the way that ‘Favorites’ work:

    Favorites have also been pseudo-private; while you can view a list of favorited tweets from an account’s profile page or on a tweet’s detail page, typically only the “favoriter” and the “favoritee” ever know about it. If Twitter starts surfacing favorited tweets in timelines, they’ve suddenly become far more public. The change — and the backlash — is somewhat similar to Facebook’s attempts to share just about everything “friends” did with Open Graph.

    […]

    For those who have used Twitter for years, the change is so shocking it can seem like the company is completely ignorant to how its customers use the service. But even seasoned Twitter veterans should admit that the service’s core functionality is fairly arcane — it’s far from accessible to new users, and that’s a problem for Twitter.

    What I find interesting is that most sites allow you to ‘love’, ‘like’, ‘favourite’, ‘+1’ or otherwise show your appreciation towards content. You can do this with Mozilla Webmaker too, when browsing the gallery. The trouble is that this is extremely limiting when it comes to data mining. If it’s used in conjunction with an algorithm to serve up content (not currently the case with Webmaker) then it’s a fairly blunt instrument.

    There are some sites that have attempted to go beyond this. I’m thinking specifically of Bit.ly for Feelings, which allows you to share content that you don’t agree with. But there’s not a lot of great examples.

    The trouble, I guess, is that human emotions are complex, changeable and sit along a three-dimensional analogue spectrum. Digital technologies, on the other hand - and particularly like/favorite buttons - are binary.


    Update: after posting this I found that Yahoo! are planning to scan photos you publish on Tumblr to gauge brand sentiment. I’m not sure if that’s better or worse, to be honest!


    Questions? Comments? I’m @dajbelshaw on Twitter, or you can email me at doug@mozillafoundation.org

    Nigel BabuArrrgh! Tracebacks and Exceptions

    My colleague asked me to take a look at a logging issue on a server last week. He noticed that the error logs had way too little information about exceptions. In this particular instance, we had switched to Nginx + gunicorn instead of our usual Nginx + Apache + mod_wsgi (yeah, we’re weird). I took a quick look this morning and everything looked exactly like it should. I’ve read more gunicorn docs today than I ever have before, I think.

    Eventually, I asked my colleague Tryggvi for help. I needed a third person to tell me if I was making an obvious mistake. He asked me if I had tried running gunicorn without supervisor, which I hadn’t. I tried that locally first, and it worked! I was all set to blame supervisor for my woes and tried it on production. Nope. No luck. As any good sysadmin would do, I checked if the versions matched and they did. CKAN itself has its dependencies frozen, which led to more confusion in my brain. It didn’t make sense.

    I started looking at the exception in more detail: there was a note about email not working, plus the actual traceback. Well, since I didn’t actually have a mail server on my local machine, I commented those configs out, and now I just had the right traceback. A few minutes later, it dawned on me. It’s a Pylons “feature”: the full traceback is printed to stdout if and only if there’s no email handling. Our default configs have an email address configured, our servers have postfix installed, and all the errors go to an email alias that’s way too noisy to be useful (Sentry. Soon). I went and commented out the relevant bits of configuration and voilà, it works!
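
    For anyone else bitten by this, the gist of the behaviour is sketched below. This is not the actual Pylons/WebError code, just an illustration of what we observed; send_error_email is a stand-in for the real mail handling.

        # Illustration of the behaviour described above -- NOT real Pylons code.
        import sys
        import traceback

        def send_error_email(address, exc_info):
            # Stand-in for the real mail handling; in production this would go
            # through the configured SMTP server to the (noisy) error alias.
            pass

        def report_exception(exc_info, error_email=None):
            if error_email:
                # Traceback is mailed and never shows up in the error log,
                # which is why our gunicorn logs looked so empty.
                send_error_email(error_email, exc_info)
            else:
                # No email configured: the full traceback lands on stdout.
                traceback.print_exception(*exc_info, file=sys.stdout)

        try:
            1 / 0
        except ZeroDivisionError:
            report_exception(sys.exc_info())                        # printed
            report_exception(sys.exc_info(), "errors@example.com")  # "mailed"

    So if your Pylons error logs look suspiciously quiet, check whether the mail-related error settings in your ini are quietly routing the tracebacks somewhere else.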

    Palm Face

    Image source: Unknown, but provided by Tryggvi :)

    J. Ryan StinnettWebIDE enabled in Nightly

    I am excited to announce that WebIDE is now enabled by default in Nightly (Firefox 34)! Everyone on the App Tools team has been working hard to polish this new tool that we originally announced back in June.

    Features

    While the previous App Manager tool was great, that tool's UX held us back when trying to support more complex workflows. With the redesign into WebIDE, we've already been able to add:

    • Project Editing
      • Great for getting started without worrying about an external editor
    • Project Templates
      • Easy to focus on content from the start by using a template
    • Improved DevTools Toolbox integration
      • Many UX issues arose from the non-standard way that App Manager used the DevTools
    • Monitor
      • Live memory graphs help diagnose performance issues

    Transition

    All projects you may have created previously in the App Manager are also available in WebIDE.

    While the App Manager is now hidden, it's accessible for now at about:app-manager. We do intend to remove it entirely in the future, so it's best to start using WebIDE today. If you find any issues, please file bugs!

    What's Next

    Looking ahead, we have many more exciting things planned for WebIDE, such as:

    • Command line integration
    • Improved support for app frameworks like Cordova
    • Validation that matches the Firefox Marketplace

    If there are features you'd like to see added, file bugs or contact the team via various channels.

    Gregory SzorcMercurial hooks move and testing Mercurial

    Mozilla has a number of source repositories under https://hg.mozilla.org/hgcustom/ that cumulatively define how version control works at Mozilla.

    Back in February, I launched an effort to establish a unified Mercurial repository for all this code. That repository is version-control-tools and it has slowly grown.

    The latest addition to this repository is the import of the hghooks repository. This now-defunct repository contained all the server-side Mercurial hooks that Mozilla has deployed on hg.mozilla.org.
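
    For readers who have never written one, a server-side Mercurial hook is just a Python function that the server calls while processing a push. A minimal sketch (illustrative only, not one of the actual hg.mozilla.org hooks) looks roughly like this:

        # Minimal sketch of a server-side Mercurial hook -- illustrative only,
        # not one of the hooks actually deployed on hg.mozilla.org.
        #
        # Enabled in the server's hgrc with something like:
        #   [hooks]
        #   pretxnchangegroup.require_bug = python:/path/to/require_bug.py:hook
        import re

        def hook(ui, repo, node=None, **kwargs):
            """Reject a push if any incoming commit message lacks a bug number."""
            # 'node' is the first changeset added by the push; everything from
            # there to the repository tip is part of the incoming changegroup.
            for rev in range(repo[node].rev(), len(repo)):
                desc = repo[rev].description()
                if not re.search(r'\bbug \d+', desc, re.IGNORECASE):
                    ui.warn('rejecting %s: no bug number in commit message\n'
                            % repo[rev])
                    return 1  # non-zero aborts the whole transaction
            return 0

    Hooks like this are exactly what the tests described below now exercise end-to-end against a real Mercurial server.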

    Soon after that repository was imported into version-control-tools, we started executing the hooks tests as part of the existing test suite in version-control-tools. This means we get continuous integration, code coverage, and the ability to run tests against multiple versions of Mercurial (2.5.4 through 3.1) in one go.

    This is new for Mozilla and is a big deal. For the first time, we have a somewhat robust testing environment for Mercurial that is testing things we run in production.

    But we still have a long way to go. The ultimate goal is to get everything rolled into the version-control-tools repository and to write tests for everything people rely on. We also want the test environment to look as much like our production environment as possible. Once that's in place, most of the fear and uncertainty around upgrading or changing the server goes away. This will allow Mozilla to move faster and issues like our recent server problems can be diagnosed more quickly (Mercurial has added better logging in newer versions).

    If you want to contribute to this effort, please write tests for behavior you rely on. We're now relying on Mercurial's test harness and test types rather than low-level unit tests. This means our tests are now running a Mercurial server and running actual Mercurial commands. The tests thus explicitly verify that client-seen behavior is exactly as you intend. For an example, see the WebIDL hook test.

    So what are you waiting for? Find some gaps in code coverage and write some tests today!

    Matt ThompsonWebmaker: what is the latest data telling us?

    What are we learning? This post highlights new metrics and some early analysis from Adam, Amira, Geoff, Hannah and many others. The goal: turn our various sources of raw data into some high-level narrative headlines we can learn from.

    Getting to 10K

    Current contributor count: 5,529 (Aug 15)

    • Are we on track to hit 10K? No, not yet. The statistical increase we’re seeing is based on good work to record past contribution. But our current growth-rate isn’t enough.
    • Why is the 4-week trend-line up? Because of Maker Party + bulk capturing historical activity (especially Hive + MVP contribution badges).
    • What can we do to grow faster? Short term, we can focus on (amongst other things):

      • 1) Maker Party partners. Convert more partner commitments into action, through a streamlined on-boarding process.
      • 2) Webmaker users. Try to convert more users into contributors. Ask them to do something more directly.
      • 3) Training. Net Neutrality teach-ins, train the trainer events, MozCamps, etc.
        • + …what else?

    Webmaker users

    Highlights:

    • We now have about 120K Webmaker users. We’re seeing big recent increases, mostly thanks to the snippet.
    • About 2% of those users are currently contributors.
    • ~50% of users have published something.
      • Most of that publishing happens on the user’s first day. (Users who don’t make something on their first day tend not to make anything at all.)
      • There’s very little overlap between tools. Users tend to make with a single tool. (e.g., of the ~46K people who have made something, only 2K have made something with both Thimble and Popcorn.)
      • About 20% have opted in to receive email updates from us. (e.g., via BSD)

    Owned media

    • Snippet
      • Our top snippet performer: “The Web is your playground! See what you can build with Mozilla Webmaker and our global Maker Party.” (+ animated pug icon)
        • CTR = 0.58%. (Other MP variations: 0.15% – 0.49%)
        • The icon and animation have a big influence on CTR. Fun icons and playfulness are the hook.
        • “Teach and learn” language generally performs as well as more playful language.

    • Landing pages
      • A “survey-based approach” is our top performer. Asking people *why* they’re interested in Webmaker. (vs straight email sign-up ask) (+4.7% conversion rate)
      • 80 / 20 split for learning vs. teaching. About 78% of survey respondents express interest in making / learning, with 22% wanting to teach / mentor.
    • Language focused on teaching, learning and education performs well.
      • e.g., “Welcome to Webmaker, Mozilla’s open source education project, where you can teach and learn the web through making.” (+17%)
      • vs. “We believe anyone can be a tinkerer, creator, builder of the Web. Including you.”

    • Mozilla.org referral traffic
      • “Webmaker” out-performs “Maker Party.” Our conversion rate dropped to half when we shifted from “Learn the web” to “Join our Maker Party.”

    “The further away we get from the Mozilla brand, the more work there is to get someone on board.” — Adam

    Maker Party

    • 1,796 events currently entered (Aug 15)
      • That means we’ve already surpassed last year’s total! There were 1,694 total Maker Party events last year, vs. roughly that many in just our first month this year.
      • But: we’ll still need a big event push in second half to hit our contributor target.
    • Key takeaways:
      • Tracking partner activity. Automated tracking has been hard — we’re relying instead on one-to-one calls.
      • We’re gathering great data from those calls. e.g.,
        • Unreported success. Partners are participating in ways that aren’t showing up in our system. Manual badging is filling that gap.
        • Occasional confusion about the ask. Some think “Maker Party” is a “MozFest-level” commitment. They don’t realize the ask is simpler than that.
        • They need easier ways to get started. More simplification and hand-holding. Working on a simplified “Event Wizard” experience now.
        • Some partners see more value in Maker Party than others. Orgs with offerings similar to our own may perceive less value than those in adjacent spaces.
      • We haven’t cracked the earned media nut. Not much coverage. And little evidence of impact from the coverage we got.
      • We don’t have a good way for measuring participation from active Mozillians.
      • Second half. We should gear up for a second “back to school” wave to maximize contributors.

    “There’s the ‘summer wave’ and ‘back to school’ waves. We need to have strategies and actions towards both.” –Hannah

    Next steps

    Short-term focus:

    • 1) Partner conversion. This is probably our best immediate strategy for boosting contribution. Ship a simplified on-ramp for Maker Party partners. A new “Event Wizard,” simple start-up events, and user success support.
    • 2) Convert Webmaker users to contributors. We’ve seen a *big* increase in user numbers. This opens an opportunity to focus on converting those users. Ask them to do something more directly. Try new low-bar CTAs, email optimization, re-activating dormant users, etc.
    • 3) Training. Train the trainer events, MozCamps, MozFest, etc.

    Longer-term questions

    • Year-long engagement. How do we more evenly distribute event creation throughout the entire year?
    • Match-making. How do we identify the teachers? How do we connect those who want to learn with those who want to teach? What are the pathways for teachers / learners?
    • Impact. How many people are learning? How much are they learning? Should we make “number of people learning” Webmaker’s KPI in 2015?

    Jordan LundThis week in Releng - Aug 11th 2014

    Completed work (resolution is 'FIXED'):


    In progress work (unresolved and not assigned to nobody):

    Alex GibsonAnimating the Firefox desktop pages using CSS and SVG

    I recently co-authored a post over on the Mozilla Web Development blog! It's a technical run through of how we did some of the CSS and SVG animations on the new Firefox desktop web pages over on mozilla.org. If that's your sort of thing, you can read the full article here.

    Nigel BabuThe story of hgstats

    tl;dr: I built a thing to see public graphs of hg.mozilla.org called hgstats.

    Lately, we’ve had problems with Mercurial at Mozilla. The Developer Services Team added a bunch of instrumentation to the hg webheads to help us track what is going wrong and when, to give us something of an early indicator of when things get shot to hell. All of these metrics are on the Mozilla Graphite instance, which is behind employee-only LDAP. However, an interesting quirk is that the image rendering is actually available without authentication. As a community Sheriff, I’ve been keeping close watch on hg throughout my shift with images that releng folks or hwine gave me. This gave an indicator of when to close trees so that we don’t end up with everything turning red. On Thursday evening, I was watching the conversation in #vcs on irc.mozilla.org, when bkero mentioned he’d made a dashboard in graphite. It suddenly dawned on me that I could just embed those images onto a page and quickly have a public dashboard!
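
    The detail that makes a public dashboard so cheap is that Graphite's render endpoint returns a finished PNG for whatever target and time window you put in the URL, so a page of images is all you need. Here is a rough sketch of building those URLs; the hostname and metric names below are made up:

        # Rough sketch of building Graphite render URLs for different windows.
        # The hostname and metric targets are made up; substitute real ones.
        try:
            from urllib.parse import urlencode  # Python 3
        except ImportError:
            from urllib import urlencode        # Python 2

        GRAPHITE = "https://graphite.example.com/render/"

        def graph_url(target, hours=2, width=600, height=250):
            params = urlencode({
                "target": target,
                "from": "-%dhours" % hours,  # e.g. -2hours, -4hours, -8hours
                "width": width,
                "height": height,
            })
            return GRAPHITE + "?" + params

        for hours in (2, 4, 8):
            print(graph_url("hgweb.requests.count", hours=hours))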

    Armed with a bunch of images from Ben, I created a github pages repo with a lovely theme that’s available by default. I embedded the images onto a static HTML page and suddenly, we had a minimal dashboard. It wouldn’t auto-refresh or let you alter the duration of the graph, but hey, now we had one place for things! This first step took about 15 minutes.

    There were two features I had in mind as must-haves: a) the page must let me change the hours of the graphs (i.e. last 2 hours, last 4 hours, last 8 hours, etc.), and b) it should auto-refresh. I’ve looked at backbone several times in the past and I figured this was as good a time as any to get cracking on building a backbone.js app.

    I started slowly: the first step was to get everything I had right now rendered with backbone. I spent a lot of frustrating hours trying to get it to work, but couldn’t because of silly mistakes. I haven’t been coding in JS much and it shows :) I think I stayed up until 2 am trying to diagnose it, but I couldn’t. When I woke up in the morning, I spotted the trouble immediately and it was a tiny typo. Instead of <%=, I typed <%$. After that first step, I got the router bit working and I had an app that could dynamically change the range of hours in the graph! I’d met my first goal!

    I talked to mdoglio who took a quick look at the code and thought models might be a good idea if I’m dealing with data. I refactored the code again to use models, which cleaned it up quite well! Overnight, I had a pull request from hwine to add another graph as well, which I also made more dynamic.

    The hardest bit was getting auto-refresh working. I couldn’t figure out an easy way to solve the problem. Eventually, I ended up with setTimer, but the full credit for the right incantation goes to bwinton.

    High Five!

    Working with backbone has been great, but I wish the documentation did more than just tell me what each function does. Python’s documentation often gives you more than a function’s description; it tells you how you would use it in practice. Of course, there are quite a few resources that already fill this gap. I found backbonetutorials.com pretty useful. I got most of the basic idea of backbone from the site.

    I also submitted it to Webdev Beer and Tell (my first submission!). Mike kindly presented it for me (no, he’s not the real nigelb!) and you can watch the video on Air Mozilla if you have some free time :) I would totally recommend watching the whole video, but if you don’t have a lot of time, skip to 6:37.

    This is the first time I’ve built a single-page app, so I’d love feedback (extra points if you can do a code review). The code is on GitHub.

    Hannah KaneMaker Party Engagement Week 5

    Week 5!

    tl;dr highlights of the week:

    • Though we saw significant jumps in Wm accounts and events, our Contributor numbers did not increase accordingly
    • We’re identifying many opportunities from the partner calls
    • Hack the Snippet is coming soon, along with the next iteration of the snippet funnel
    • The TweetChat created a temporary increase in Twitter engagement, but took attention away from press

    Overall stats:

    • Contributors: 5552 (2% increase from last week’s 5441)
    • Webmaker accounts: 124K (17% increase from last week’s 106.3K)
    • Events: 1799 (crazy 50% jump from last week’s 1199)
    • Hosts: 493 (10% increase from last week’s 450)
    • Expected attendees: 76,200  (23% increase from 61,910)
    • Cities: 362 (40% increase from 260 – what caused this jump?)
    • Traffic: here’s the last three weeks. We continue to see the major boost from the snippet.
     

    traffic

    • And the Webmaker user account conversion rate increased a bit further:
     

    conversion

    ——————————————————————–

    Engagement Strategy #1: PARTNER OUTREACH

    We are learning a lot from the partner calls. Here are some of the most salient takeaways (borrowing from Amira and Melissa’s notes during Friday’s call):

    Partner trends
    • Partners see value in badging their event mentors, speakers and volunteers as a form of appreciation. But there is a potential for those who receive the badges to have no idea who is badging them or what it means (lack of connection to MP). Opportunity: We need to better explain to people why they’ve received a badge and why they might want to create a Webmaker account.
    • Partners are doing things but we just haven’t captured them.  Opportunity: We need to offer real value to users in order to increase the amount of sharing/broadcasting/badging that happens through the site. 
    • Some people need way more training — Opportunity: this is where the event wizard might play a role; there also might be an opportunity to run TTT within certain orgs and spaces.
    • We need to clarify our value statement for partners. It may not be in  adding numbers to their events or traction to their programs/site, or getting press for non-Hive partners. Instead it may be in providing resources and curriculum. We can better segment partners into affinity groups (e.g. afterschool programs) and provide content, trainings, resources, CTAs specifically for them.  We can also localize those offerings to reduce hand-holding.
    • People don’t understand how broad our definition of Maker Party is: everyday events, small events, stands/booths/tables within other events — have to push them to realize that and include all of these on the events platform (note from HK: I would argue we have to offer them a reason to)
    • Opportunity: There’s the summer wave and back to school waves. We need to have strategies and actions towards both.
    • Challenges:
      • Age and time continue to be a blocker for new Wm accounts.
      • Mass emails to order swag, upload events, share information just didn’t work. They need 1-to-1.
      • We lost the interest of a lot of people along the way. There’s a good 20-30% we will not be able to bring back in.
      • Parties sound like fun kid-like things (making toys etc.)
      • Getting the Maker Party logo/brand included in event promotion in a meaningful way is not happening, and the meaning behind the brand seems to cause confusion in some cases.

    PROMOTIONAL PARTNERS: We continue to see only a tiny number of referrals from promotional partner URLs with RIDs.

    ——————————————————————–

    Engagement Strategy #2: ACTIVE MOZILLIANS

    Haven’t heard anything this week, but Amira and I are meeting with the FSA Community Manager on Monday of this week.

    ——————————————————————–

    Engagement Strategy #3: OWNED MEDIA

    Snippet Funnel:

    The snippet funnel continues to perform well in terms of driving traffic. We’re aiming to beat a baseline 1.8% conversion rate.

    We were a bit blocked by technical issues this week and weren’t able to release the new tailored account signup pages, but we continue to work on that.

    The “hack the snippet” test was delayed, but will be live soon. We have a comms strategy around it (for after it’s been tested).

    ——————————————————————–

    Engagement Strategy #4: EARNED MEDIA

    Press this week:

    Aside from a cross-post of last week’s Washington Post Magazine story (http://www.tampabay.com/news/business/workinglife/want-a-tech-job-what-to-study-in-a-fast-moving-field/2193050), we didn’t see press this week. We were focused on our Net Neutrality tweetchat instead.

    SOCIAL (not one of our key strategies):

    As expected, the Tweetchat temporarily increased our Twitter engagement for a two-day period—we saw double the usual amount of favorites, retweets, and replies. You can view the Storify here: https://storify.com/mozilla/net-neutrality-tweet-chat-from-mozilla-s-teaminter

    The #MakerParty trendline for this week is back up to where it had been two weeks ago: 

    trend

     

    See #MakerParty tweets here: https://twitter.com/search?q=%23makerparty


    Nigel BabuOKFestival Fringe Events

    The writeup of the OKFestival is very incomplete, because I haven’t mentioned the fringe events! I attended two fringe events and they both were very good.

    First, I attended CKANCon right before OKFestival. It was informal and co-located with CSVConf. My best takeaway has been talking to people from the wider community around CKAN. I often feel blind-sided because we don’t have a good view of CKAN. I want to know how a user of a portal built on CKAN feels about the UX. After all, the actual users of open data portals are citizens who get data that they can do awesome things with. I had a good conversation with folks from DKAN about their work and I’ve been thinking about how we can make that better.

    I finally met Max! (And I was disappointed he didn’t have a meatspace sticker :P)

    The other event I attended was Write the Docs. Ali and Florian came to Berlin to attend the event. It was a total surprise running into them at the Mozilla Berlin office. The discussions at the event were spectacular. The talks by Paul Adams and Jessica Rose were great and a huge learning experience. I missed parts of oncletom’s talk, but the bit I did catch sounded very different to my normal view of documentation.

    We had a few discussions around localization and QA of docs which were pretty eye-opening. At one of the sessions, Paul, Ali, Fabian and I discussed rules of documentation, which turned out pretty well! It was an exercise in patience narrowing them down!

    I was nearly exhausted and unable to think clearly by the time Write the Docs started, but I managed to push through it! Huge thanks to (among others) Mikey and Kristof for organizing the event!

    Francesca CiceriAdventures in Mozillaland #4

    Yet another update from my internship at Mozilla, as part of the OPW.

    An online triage workshop

    One of the most interesting things I've done during the last few weeks has been holding an online Bug Triage Workshop on the #testday channel at irc.mozilla.org.
    That was a first for me: I had been a moderator for a series of training sessions on IRC organized by Debian Women, but never a "speaker".
    The experience turned out to be a good one: creating the material for the workshop had me summarize (not too much, I'm way too verbose!) everything I've learned these past months about triaging in Mozilla, and speaking about it on IRC was a bit of a challenge to my usual shyness.

    And I was so very lucky that a participant was able to reproduce the bug I picked as example, thus confirming it! How cool is that? ;)

    The workshop was about the very basics of triaging for Firefox, and we mostly focused on a simplified lifecycle of bugs, a guided tour of bugzilla (including the quicksearch and the advanced one, the list view, the individual bug view) and an explanation of the workflow of the triager. I still have my notes, and I plan to upload them to the wiki, sooner or later.

    I'm pretty satisfied with the outcome: my only regret is that the promotion wasn't enough, so we had few participants.
    Will try to promote it better next time! :)

    about:crashes

    Another thing that kept me quite busy in the last few weeks was learning more about crashes and stability in general.
    If you are unfortunate enough to experience a crash with Firefox, you're probably familiar with the Mozilla Crash Reporter dialog box asking you to submit the crash report.

    But how does it work?

    On the client side, Mozilla uses Breakpad as the set of libraries for crash reporting. The Mozilla-specific implementation adds to that a crash-reporting UI, a server to collect and process reported crash data (and in particular to convert raw dumps into readable stack traces), and a web interface, Socorro, to view and parse crash reports.

    Curious about your crashes? The about:crashes page will show you a list of the submitted and unsubmitted crash reports. (And by the way, try typing about:about in the location bar to find all the super-secret about pages!)

    For the submitted ones, clicking on the CrashID will take you to the crash report on crash-stats, the website where the reports are stored and analyzed. The individual crash report page on crash-stats is awesome: it shows you the reported bug numbers if any bug summaries match the crash signature, as well as lots of other information. If crash-stats does not show a bug number, you really should file one!

    The CrashKill team works on these reports, tracking the general stability of the various channels, triaging the top crashes, and ensuring that the crash bugs have enough information and are reproducible and actionable by the devs.
    The crash-stats site is a mine of information: take a look at the Top Crashes for Firefox 34.0a1.
    If you click on an individual crash, you will see lots of details about it: just on the first tab ("Signature Summary") you can find a breakdown of the crashes by OS, by graphics vendor or chip, or even by uptime range.
    A very useful one is the number of crashes per install, so that you know how widespread the crashing is for that particular signature. You can also check the comments the users have submitted with the crash report, on the "Comments" tab.

    One and Done tasks review

    Last week I helped the awesome group of One and Done developers, doing some reviewing of the tasks pages.

    One and Done is a brilliant idea to help people contribute to the QA Mozilla teams.
    It's a website offering the user a series of tasks, of varying difficulty and on different topics, for contributing to Mozilla. Each task is self-contained and can take a few minutes or be a bit more challenging. The team has worked hard on developing it and they have definitely done an awesome job! :)

    I'm not a coding person, so I just know that they're using Django for it, but if you are interested in all the dirty details take a look at the project repository. My job has only been to check all the existing tasks and verify that the descriptions and instructions are correct, that the tasks are properly tagged, and so on. My impression is that this is an awesome tool, well written and well thought out, with a lot of potential for helping people take their first steps into Mozilla. Something that other projects should definitely imitate (cough Debian cough).

    What's next?

    Next week I'll be back to working on bugs. I kind of love bugs, I have to admit it. And not squashing them: not being a coder makes me less of a violent person toward digital insects. Herding them is enough for me. I'm feeling extremely non-violent toward bugs.

    I'll try to help Liz with the Test Plan for Firefox 34, on the triaging/verifying bugs part.
    I'll also try to triage/reproduce some accessibility bugs (thanks Mario for the suggestion!).

    Planet Mozilla InternsWillie Cheong: Shutdown: 4A study term

    This term has been very unfruitful. I picked up League of Legends after an abstinence streak from DotA that lasted 4 good years. This kinda makes me sad. I’ve also lost a lot of motivation, especially with books and academia. It really isn’t the gaming that’s causing this. It is more just a lack of willpower to carry on doing something that seems so pointless. There’s a whole new post graduation world out there, with new and relevant things to learn.

    I’ve really taken a liking to software development. It’s funny because in first year I remember believing that I could never picture myself sitting in front of a computer all day typing away. Yet here I am now, not knowing what else I would rather be doing.

    I also remember having long-term plans for myself to run a self-grown start-up right after graduation. It’s not that I haven’t been trying. I have been working hard on these things over the past years but nothing seems to have gained any valuable traction at all. With only 8 months left to graduation, this once long-term goal and deadline is suddenly approaching and hitting the reality of being unattainable. Such a realization kills the motivation to carry on pushing.

    Visions of life after university used to be so bright and optimistic. But as the moment slowly approaches I realize how clueless I really am and that’s OK. Engineers are trained problem solvers; we figure things out, eventually.

    Raniere SilvaMathml August Meeting

    This is a report about the Mozilla MathML August IRC Meeting (see the announcement here). The topics of the meeting can be found in this PAD (local copy of the PAD) and the IRC log (local copy of the IRC log) is also available.

    In the last 4 weeks the MathML team closed 5 bugs, worked on 6 others, and opened one new bug. These are only the ones tracked by Bugzilla.

    The next meeting will be on September 11th at 8pm UTC. Please add topics to the PAD.

    Read more...

    Raniere SilvaGSoC: Pencil Down (August 11 - August 17)

    This is the last report about my GSoC project and covers the thirteenth week of “Students coding”.

    During this last week I worked on auto-capitalization and deployed a landing page for the project: http://r-gaia-cs.github.io/gsoc2014/.

    Below you will find more details about the past week and some thoughts about the project as a whole.

    Read more...