Armen Zambrano: Run mozharness talos as a developer (Community contribution)

Thanks to our contributor Simarpreet Singh from Waterloo, you can now run a talos job through mozharness on your local machine (bug 1078619).

All you have to add is the following:
--cfg developer_config.py 
--installer-url http://ftp.mozilla.org/pub/mozilla.org/firefox/nightly/latest-trunk/firefox-37.0a1.en-US.linux-x86_64.tar.bz2
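
For reference, here is a rough sketch of what a full local invocation could look like. Only the two options above come from the post; the script name and the --suite flag are assumptions for illustration:

    # Sketch only: scripts/talos_script.py and --suite are assumptions, not from the post;
    # the last two options are the documented additions for running as a developer.
    python scripts/talos_script.py \
      --suite <your-talos-suite> \
      --cfg developer_config.py \
      --installer-url http://ftp.mozilla.org/pub/mozilla.org/firefox/nightly/latest-trunk/firefox-37.0a1.en-US.linux-x86_64.tar.bz2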

To read more about running Mozharness locally, go here.


This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Mozilla Open Policy & Advocacy Blog: Spotlight on Free Press: A Ford-Mozilla Open Web Fellow Host Organization

{This is the final in a series of posts highlighting the Ford-Mozilla Open Web Fellows program host organizations. Free Press has been at the forefront of informing tech policy and mobilizing millions to take action to protect the Internet. This year, Free Press has been an instrumental catalyst in the fight to protect net neutrality. We are thrilled to have Free Press as a host organization, and eager to see the impact from their Fellow.}

Spotlight on Free Press: A Ford-Mozilla Open Web Fellow Host Organization
By Amy Kroin, editor, Free Press

In the next few months, the Federal Communications Commission will decide whether to surrender the Internet to a handful of corporations — or protect it as a space that’s shared and shaped by millions of users.

At Free Press, we believe that protecting everyone’s rights to connect and communicate is fundamental to advancing social change. We believe that people should have the opportunities to tell their own stories, hold leaders accountable and participate in policy making. And we know that the freedom to access and share information is essential to this.


But these freedoms are under constant attack.

Take Net Neutrality. In May, FCC Chairman Tom Wheeler released rules that would have allowed discrimination online and destroyed the Internet as we know it. Since then, Free Press has helped lead the movement to push Wheeler to ditch his rules — and safeguard Net Neutrality over the long term. Our nationwide mobilization efforts and our advocacy within the Beltway have prompted the president, leaders in Congress and millions of people to speak out for strong open Internet protections. Wheeler’s had to go back to the drawing board — and plans to release new rules in 2015.

Though we’ve built amazing momentum in our campaign, our opposition — AT&T, Comcast, Verizon and their hundreds of lobbyists — is not backing down. Neither are we. With the help of people like you, we can ensure the FCC enacts strong open Internet protections. And if the agency goes this route, we will do everything we can to defend those rules and fight any legal challenges.

But preserving Net Neutrality is only part of the puzzle. In addition to maintaining open networks for Internet users, we also need to curb government surveillance and protect press freedom.

In the aftermath of the Edward Snowden revelations, we helped launch the StopWatching.Us coalition, which organized the Rally Against Mass Surveillance and is pushing Congress to pass meaningful reforms. In 2015, we’re ramping up our advocacy and will cultivate more champions in Congress.

The widespread spying has had a particular impact on journalists, especially those who cover national security issues. Surveillance, crackdowns on whistleblowers and pressure to reveal confidential sources have made it difficult for many of these reporters to do their jobs.

Free Press has worked with leading press freedom groups to push the government to protect the rights of journalists. We will step up that work in the coming months with the hiring of a new journalism and press freedom program director.

This is just a snapshot of the kind of work we do every day at Free Press. We’re seeking a Ford-Mozilla Open Web Fellow with proven digital skills who can hit the ground running. Applicants should be up to speed on the latest trends in online organizing and should have experience using social media tools to advance policy goals. Candidates should also be accustomed to working within a collaborative workplace.

To join our team of Internet freedom fighters, apply to become a Ford-Mozilla Open Web Fellow at Free Press. We value excellence and diversity in our team. We strongly encourage applications from women, people of color, persons with disabilities, and lesbian, gay, bisexual and transgender individuals.


Be a Ford-Mozilla Open Web Fellow. Application deadline is December 31, 2014. Apply at https://advocacy.mozilla.org/


Tantek Çelik: Happy Winter Solstice 2014! Ready For More Daylight Hours.

The sun has set here in the Pacific Time Zone on the shortest northern hemisphere day of 2014.

Photo of Ocean Beach in the morning, partly cloudy, high tide, with a view of a tiny horizon with people walking along the shore, and the Cliff House shining in the sun.

I spent it at home, starting with an eight mile run at an even pace through Golden Gate Park to Ocean Beach and back with a couple of friends, then cooking and sharing brunch with them and a few more.

It was a good way to spend the minimal daylight hours we had: doing positive things, sharing genuinely with positive people who themselves shared genuinely.

Of all the choices we make day to day, I think these may be the most important we have to make:

  • What we choose to do with our time
  • Who we choose to spend our time with

These choices are particularly difficult because:

  • So many possibilities
  • So many people will tell you what you should do, and who you should spend time with; often only what they’re told, or to their advantage, not yours.
  • You have to explicitly choose, or others will choose for you.

When you find those who have explicitly chosen to spend time with you, doing positive things, and who appreciate that you have explicitly chosen (instead of being pressured by obligation, guilt, entitlement etc.) to spend time with them, hug them and tell them you’re glad they are there.

I’m glad you’re here.

Happy Winter Solstice and may you spend more of your hours doing positive things, and genuinely sharing (without pressures of obligation, guilt, or entitlement) with those who similarly genuinely share with you.

Here’s to more daylight hours, both physically and metaphorically.

Karl Dubost: UA Detection and code libs legacy

A Web site is a mix of technologies with different lifetimes. The HTML markup has been mostly rock-solid for years. CSS is not bad either, apart from the vendor prefixes. JavaScript, PHP, name-your-language are also not that bad. And then there's the full social and business infrastructure of these pieces put together. How do we create Web sites which are resilient and robust over time?

Cyclic Release of Web sites

The business infrastructure of Web agencies is set up for the temporary. They receive a request from a client. They create the Web site with the mood of the moment. After 2 or 3 years, the client decides the Web site no longer matches the new fashions and trends. The new Web agency (because it's often a new one) promises to do a better job. They throw away the old Web site, breaking old URIs at the same time. They create a new Web site with the technologies of the day. Best case scenario, they understand that keeping URIs is good for branding and karma. They release the site for the next 2-3 years.

Basically the full Web site has changed in terms of content and technologies and works fine in current browsers. It's a 2-3 year release cycle of maintenance.

Understanding Legacy and Maintenance

Web browsers are being updated every 6 weeks or so. This is a very fast cycle. They are released with a lot of unstable technologies. Sometimes, they release entirely new browsers with new version numbers and new technologies. Device makers also release new devices very often. This both feeds a consumerism habit and makes it hard for these devices to last.

The Web developers design and focus their code on what is the most popular at the moment. Most of the time it's not their fault: these are the requirements from the business team in the Web agency or from the clients. A lot of small Web agencies do not have the resources to invest in automated testing for the Web site. So they focus on two browsers: the one they develop with (the most popular of the moment) and the one the client said was an absolute minimum bar (the most popular of the past).

Libraries of code rely on User Agent detection to cope with bugs or unstable features of each browser. These libraries know only the past, never the future, not even the now. Libraries of code are often piles of legacy by design. Some are open source, some have license fees attached to them. In both cases, they require a lot of maintenance and testing which is not planned into the budget of a Web site (a budget which has usually already been blown by the development of the new shiny Web site).

UA detection and Legacy

The Web site will break for some users at some point in time. They chose to use a Web browser which didn't fit in the box of the current Web site. Last week, I went through the WPTouch library's Web Compatibility bugs. Basically, Firefox OS was not recognized by the User Agent detection code, and as a result WordPress Web sites didn't send the mobile version to mobile devices. We opened that bug in August 2013. We contacted the BraveNewCode company, which fixed the bug in March 2014. As of today, December 2014, there are still 7 sites in our list of 12 sites which have not switched to the new version of the library.
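
To illustrate the kind of failure involved, here is a minimal, purely illustrative JavaScript sketch of naive UA sniffing (not the actual WPTouch code) that misses a browser it was never written for:

    // Illustrative only: a naive "is this mobile?" check of the kind that breaks Firefox OS.
    // Firefox OS user agents contain "Mobile" but neither "Android" nor "iPhone",
    // so this misclassifies them as desktop and the mobile site is never served.
    var isMobile = /Android|iPhone|iPad/.test(navigator.userAgent);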

These were bugs reported by users of these sites. It means people who can't use their favorite browsers for accessing a particular Web site. I'm pretty sure that there are more sites with the old version of WPTouch. Users either think their browser is broken or just don't understand what is happening.

Eventually these bugs will go away. It's one of my axioms in Web Compatibility: wait long enough and the bug goes away. Usually the Web site doesn't exist anymore, or has been redesigned from the ground up. In the meantime some users had a very bad experience.

We need a better story for code legacy, one with fallback, one which doesn't rely only on the past for making it work.

Otsukare.

François Marier: Mercurial and Bitbucket workflow for Gecko development

While it sounds like I should really switch to a bookmark-based Mercurial workflow for my Gecko development, I figured that before I do that, I should document how I currently use patch queues and Bitbucket.

Starting work on a new bug

After creating a new bug in Bugzilla, I do the following:

  1. Create a new mozilla-central-mq-BUGNUMBER repo on Bitbucket using the web interface and use https://bugzilla.mozilla.org/show_bug.cgi?id=BUGNUMBER as the description.
  2. Create a new patch queue: hg qqueue -c BUGNUMBER
  3. Initialize the patch queue: hg init --mq
  4. Make some changes.
  5. Create a new patch: hg qnew -Ue bugBUGNUMBER.patch
  6. Commit the patch to the mq repo: hg commit --mq -m "Initial version"
  7. Push the mq repo to Bitbucket: hg push ssh://hg@bitbucket.org/fmarier/mozilla-central-mq-BUGNUMBER
  8. Make the above URL the default for pull/push by putting this in .hg/patches-BUGNUMBER/.hg/hgrc:

    [paths]
    default = https://bitbucket.org/fmarier/mozilla-central-mq-BUGNUMBER
    default-push = ssh://hg@bitbucket.org/fmarier/mozilla-central-mq-BUGNUMBER
    

Working on a bug

I like to preserve the history of the work I did on a patch. So once I've got some meaningful changes to commit to my patch queue repo, I do the following:

  1. Add the changes to the current patch: hg qref
  2. Check that everything looks fine: hg diff --mq
  3. Commit the changes to the mq repo: hg commit --mq
  4. Push the changes to Bitbucket: hg push --mq

Switching between bugs

Since I have one patch queue per bug, I can easily work on more than one bug at a time without having to clone the repository again and work from a different directory.

Here's how I switch between patch queues:

  1. Unapply the current queue's patches: hg qpop -a
  2. Switch to the new queue: hg qqueue BUGNUMBER
  3. Apply all of the new queue's patches: hg qpush -a
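
Putting those steps together, a quick sketch of jumping to another bug's queue (the bug number is made up for illustration):

    hg qpop -a          # unapply the current queue's patches
    hg qqueue 1111111   # switch to the other bug's queue
    hg qpush -a         # apply all of its patches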

Rebasing a patch queue

To rebase my patch onto the latest mozilla-central tip, I do the following:

  1. Unapply patches using hg qpop -a
  2. Update the branch: hg pull -u
  3. Reapply the first patch: hg qpush and resolve any conflicts
  4. Update the patch file in the queue: hg qref
  5. Repeat steps 3 and 4 for each patch.
  6. Commit the changes: hg commit --mq -m "Rebase patch"
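
As a consolidated sketch of the same rebase loop (nothing here beyond the steps above):

    hg qpop -a                          # unapply patches
    hg pull -u                          # update the branch to the latest tip
    hg qpush                            # reapply the next patch, resolving any conflicts
    hg qref                             # fold the resolution back into the patch file
    # ...repeat qpush/qref for each remaining patch...
    hg commit --mq -m "Rebase patch"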

Credits

Thanks to Thinker Lee for telling me about qqueue and Chris Pearce for explaining to me how he uses mq repos on Bitbucket.

Of course, feel free to leave a comment if I missed anything useful or if there's an easier way to do any of the above.

Daniel Glazman: Bloomberg

Welcoming Bloomberg as a new customer of Disruptive Innovations. Just implemented the proposed caret-color property for them in Gecko.
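
For the curious, the property is used like any other CSS property; a minimal sketch (the selector and color are just an example):

    /* Color the text-insertion caret independently of the text color. */
    input, textarea {
      caret-color: crimson;
    }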

Patrick Cloke: The so-called IRC "specifications"

In a previous post I had briefly gone over the "history of IRC" as I know it. I'm going to expand on this here, as I've come to understand it a bit more while reading through documentation. (Hopefully it won't sound too much like a rant, as it is all driving me crazy!)

IRC Specifications

So there’s the original specification (RFC 1459) in May 1993; this was expanded and replaced by four different specifications (RFC 2810, 2811, 2812, 2813) in April 2000.  Seems pretty straightforward, right?

DCC/CTCP

Well, kind of…there are also the DCC/CTCP specifications, which describe a separate protocol embedded/hidden within the IRC protocol (e.g. the messages are sent as IRC messages and parsed specially by clients, while the server sees them as normal messages). DCC/CTCP is used to send files as well as other particular messages (ACTION commands for roleplaying, SED for encrypting conversations, VERSION to get client information, etc.). Anyway, this gets a bit more complicated — it starts with the DCC specification.  This was replaced/updated by the CTCP specification (which fully includes the DCC specification) in 1994.  An "updated" CTCP specification was released in February 1997.  There’s also a CTCP/2 specification from October 1998, which was meant to reformulate a lot of the previous three versions.  And finally, there’s the DCC2 specification (two parts: connection negotiation and file transfers) from April 2004.
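
To illustrate how CTCP hides inside IRC: a CTCP message is an ordinary PRIVMSG (or NOTICE) whose payload is wrapped in 0x01 bytes, so servers relay it untouched and only clients give it special meaning. A rough sketch (channel, nick and text are made up):

    PRIVMSG #example :\x01ACTION waves hello\x01    (an ACTION, i.e. the /me command)
    PRIVMSG somenick :\x01VERSION\x01               (a VERSION query; the reply conventionally arrives as a NOTICE)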

But wait!  I lied…that’s not really the end of DCC/CTCP, there’s also a bunch of extensions to it: Turbo DCC, XDCC (eXtended DCC) in 1993, DCC Whiteboard, and a few other variations of this: RDCC (Reverse DCC), SDD (Secure DCC), DCC Voice, etc.  Wikipedia has a good summary.

Something else to note about the whole DCC/CTCP mess…parts of it just don’t have any documentation. There’s none at all for SED (at least that I’ve found; I’d love to be proved wrong) and very little (really just a mention) for DCC Voice.

So, we’re about halfway through now.  There’s a bunch of extensions to the IRC protocol specifications that add new commands to the actual protocol.

Authentication

Originally IRC had no authentication ability except the PASS command, which very few servers seem to use. A variety of mechanisms have replaced this, including SASL authentication (both PLAIN and BLOWFISH methods, although BLOWFISH isn’t documented); and SASL itself is covered by at least four RFCs in this situation. There also seems to be a method called "Auth" which I haven’t been able to pin down, as well as Ident (which is a more general protocol authentication method I haven’t looked into yet).

Extension Support

These include a few extensions that add a way for servers to tell their clients exactly what they support. The first of these was RPL_ISUPPORT, which was defined as a draft specification in January 2004 and updated in January 2005.

A similar concept was defined as IRC Capabilities in March 2005.

Protocol Extensions

IRCX, a Microsoft extension to IRC used (at one point) for some of its instant messaging products, exists as a draft from June 1998.

There’s also:

Services

To fill in some of the missing features of IRC, services were created (Wikipedia has a good summary again). This commonly includes ChanServ, NickServ, OperServ, and MemoServ. Not too hard, but different server packages include different services (or even the same services that behave differently); one of the more common packages is Anope (plus they have awesome documentation, so they get a link).

There was an attempt to standardize how to interact with services called IRC+, which included three specifications: conference control protocol, identity protocol and subscriptions protocol. I don’t believe these are supported widely (if at all).

IRC URL Scheme

Finally this brings us to the IRC URL scheme of which there are a few versions.  A draft from August 1996 defines the original irc: URL scheme.  This was updated/replaced by another draft which defines irc: and ircs: URL schemes.
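
For reference, the general shape described by those drafts looks roughly like this (host and channel are examples, and the exact component syntax differs between the drafts):

    irc://irc.example.org/mozilla         connect to a server and join a channel
    ircs://irc.example.org:6697/mozilla   the same, but over a TLS-secured connection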

As of right now that’s all that I’ve found…an awful lot. Plus it’s not all compatible with each other (and sometimes outright contradicts each other). Often newer specifications say not to support older specifications, but who knows what servers/clients you’ll end up talking to! It’s difficult to know what’s used in practice, especially since there are an awful lot of IRC servers out there. Anyway, if someone does know of another specification, etc. that I missed, please let me know!

Updated [2014-12-20]
Fixed some dead links. Unfortunately some links now point to the Wayback Machine. There are also copies of most, if not all, of these links in my irc-docs repository. Thanks Ultra Rocks for the heads up!

Laura Thomson: 2014: Engineering Operations Year in Review

On the first day of Mozlandia, Johnny Stenback and Doug Turner presented a list of key accomplishments in Platform Engineering/Engineering Operations in 2014.

I have been told a few times recently that people don’t know what my teams do, so in the interest of addressing that, I thought I’d share our part of the list. It was a pretty damn good year for us, all things considered, and especially given the level of organizational churn and other distractions.

We had a bit of organizational churn ourselves. I started the year managing Web Engineering, and between March and September ended up also managing the Release Engineering teams, Release Operations, SUMO and Input Development, and Developer Services. It’s been a challenging but very productive year.

Here’s the list of what we got done.

Web Engineering

  • Migrate crash-stats storage off HBase and into S3
  • Launch Crash-stats “hacker” API (access to search, raw data, reports)
  • Ship fully-localized Firefox Health Report on Android
  • Many new crash-stats reports including GC-related crashes, JS crashes, graphics adapter summary, and modern correlation reports
  • Crash-stats reporting for B2G
  • Pluggable processing architecture for crash-stats, and alternate crash classifiers
  • Symbol upload system for partners
  • Migrate l10n.mozilla.org to modern, flexible backend
  • Prototype services for checking health of the browser and a support API
  • Solve scaling problems in Moztrap to reduce pain for QA
  • New admin UI for Balrog (new update server)
  • Bouncer: correctness testing, continuous integration, a staging environment, and multi-homing for high availability
  • Grew Air Mozilla community contributions from 0 to 6 non-staff committers
  • Many new features for Air Mozilla including: direct download for offline viewing of public events, tear out video player, WebRTC self publishing prototype, Roku Channel, multi-rate HLS streams for auto switching to optimal bitrate, search over transcripts, integration with Mozilla Popcorn functionality, and access control based on Mozillians groups (e.g. “nda”)

DXR

  • Modeless, explorable UI with all-new JS
  • Case-insensitive searching
  • Proof-of-concept Rust analysis
  • Improved C++ analysis, with lots of new search types
  • Multi-tree support
  • Multi-line selection (linkable!)
  • HTTP API for search
  • Line-based searching
  • Multi-language support (Python already implemented, Rust and JS in progress)
  • Elasticsearch backend, bringing speed and features
  • Completely new plugin API, enabling binary file support and request-time analysis

SUMO

  • Offline SUMO app in Marketplace
  • SUMO Community Hub
  • Improved SUMO search with Synonyms
  • Instant search for SUMO
  • Redesigned and improved SUMO support forums
  • Improved support for more products in SUMO (Thunderbird, Webmaker, Open Badges, etc.)
  • BuddyUP app (live support for FirefoxOS) (in progress, TBC Q1 2015)

Input

  • Dashboards for everyone infrastructure: allowing anyone to build charts/dashboards using Input data
  • Backend for heartbeat v1 and v2
  • Overhauled the feedback form to support multiple products, streamline user experience and prepare for future changes
  • Support for Loop/Hello, Firefox Developer Edition, Firefox 64-bit for Windows
  • Infrastructure for automated machine and human translations
  • Massive infrastructure overhaul to improve overall quality

Release Engineering

  • Cut AWS costs by over 70% during 2014 by switching builds to spot instances and using intelligent bidding algorithms
  • Migrated all hardware out of SCL1 and closed datacenter to save $1 million per year (with Relops)
  • Optimized network transfers for build/test automation between datacenters, decreasing bandwidth usage by 50%
  • Halved build time on b2g-inbound
  • Parallelized verification steps in release automation, saving over an hour off the end-to-end time required for each release
  • Decommissioned legacy systems (e.g. tegras, tinderbox) (with Relops)
  • Enabled build slave reboots via API
  • Self-serve arbitrary builds via API
  • b2g FOTA updates
  • Builds for open H.264
  • Built flexible new update service (Balrog) to replace legacy system (will ship first week of January)
  • Support for Windows 64 as a first class platform
  • Supported FX10 builds and releases
  • Release support for switch to Yahoo! search
  • Update server support for OpenH264 plugins and Adobe’s CDM
  • Implement signing of EME sandbox
  • Per-checkin and nightly Flame builds
  • Moved desktop firefox builds to mach+mozharness, improving reproducibility and hackability for devs.
  • Helped mobile team ship different APKs targeted by device capabilities rather than a single, monolithic APK.

Release Operations

  • Decreased operating costs by $1 million per year by consolidating infrastructure from one datacenter into another (with Releng)
  • Decreased operating costs and improved reliability by decommissioning legacy systems (kvm, redis, r3 mac minis, tegras) (with Releng)
  • Decreased operating costs for physical Android test infrastructure by 30% reduction in hardware
  • Decreased MTTR by developing a simplified releng self-serve reimaging process for each supported build and test hardware platforms
  • Increased security for all releng infrastructure
  • Increased stability and reliability by consolidating single point of failure releng web tools onto a highly available cluster
  • Increased network reliability by developing a tool for continuous validation of firewall flows
  • Increased developer productivity by updating windows platform developer tools
  • Increased fault and anomaly detection by auditing and augmenting releng monitoring and metrics gathering
  • Simplified the build/test architecture by creating a unified releng API service for new tools
  • Developed a disaster recovery and business continuation plan for 2015 (with RelEng)
  • Researched bare-metal private cloud deployment and produced a POC

Developer Services

  • Ship Mozreview, a new review architecture integrated with Bugzilla (with A-team)
  • Massive improvements in hg stability and performance
  • Analytics and dashboards for version control systems
  • New architecture for try to make it stable and fast
  • Deployed treeherder (tbpl replacement) to production
  • Assisted A-team with Bugzilla performance improvements

I’d like to thank the team for their hard work. You are amazing, and I look forward to working with you next year.

At the start of 2015, I’ll share our vision for the coming year. Watch this space!

Mozilla Fundraising: Thanks to Our Amazing Supporters: A New Goal

We set a goal for ourselves of $1.75 million to raise during our year-end fundraising campaign this year. I’m excited to report that today—thanks to 213,605 donors representing more than 174 countries who gave an average gift of $8 to …

Chris Pearce: Firefox video playback's skip-to-next-keyframe behavior

One of the quirks of Firefox's video playback stack is our skip-to-next-keyframe behavior. The purpose of this blog post is to document the tradeoffs skip-to-next-keyframe makes.

The fundamental question that skip-to-next-keyframe answers is, "what do we do when the video stream decode can't keep up with the playback speed?"

Video playback is a classic producer/consumer problem. You need to ensure that your audio and video stream decoders produce decoded samples at a rate no less than the rate at which the audio/video streams need to be rendered. You also don't want to produce decoded samples at a rate too much greater than the consumption rate, else you'll waste memory.

For example, if we're running on a low end PC, playing a 30 frames per second video, and the CPU is so slow that it can only decode an average of 10 frames per second, we're not going to be able to display all video frames.

This is also complicated by our video stack's legacy threading model. Our first video decoding implementation did the decoding of video and audio streams in the same thread. We assumed that we were using software decoding, because we were supporting Ogg/Theora/Vorbis, and later WebM/VP8/Vorbis, which are only commonly available in software.

The pseudo code for our "decode thread" used to go something like this:
while (!AudioDecodeFinished() || !VideoDecodeFinished()) {
  if (!HaveEnoughAudioDecoded()) {
    DecodeSomeAudio();
  }
  if (!HaveEnoughVideoDecoded()) {
    DecodeSomeVideo();
  }
  if (HaveLotsOfAudioDecoded() && HaveLotsOfVideoDecoded()) {
    SleepUntilRunningLowOnDecodedData();
  }
}

This was an unfortunate design, but it certainly made some parts of our code much simpler and easier to write.

We've recently refactored our code, so it no longer looks like this, but for some of the older backends that we support (Ogg, WebM, and MP4 using GStreamer on Linux), the pseudocode is still effectively (but not explicitly or obviously) this. MP4 on Windows, MacOSX, and Android in Firefox 36 and later now decode asynchronously, so we are not limited to decoding only on one thread.

The consequence of decoding audio and video on the same thread only really bites on low end hardware. I have an old Lenovo x131e netbook, which on some videos can take 400ms to decode a Theora keyframe. Since we use the same thread to decode audio as video, if we don't have at least 400ms of audio already decoded while we're decoding such a frame, we'll get an "audio underrun". This is where we don't have enough audio decoded to keep up with playback, and so we end up glitching the audio stream. This sounds very jarring to the listener.

Humans are very sensitive to sound; the audio stream glitching is much more jarring to a human observer than dropping a few video frames. The tradeoff we made was to sacrifice the video stream playback in order to not glitch the audio stream playback. This is where skip-to-next-keyframe comes in.

With skip-to-next-keyframe, our pseudo code becomes:

while (!AudioDecodeFinished() || !VideoDecodeFinished()) {
  if (!HaveEnoughAudioDecoded()) {
    DecodeSomeAudio();
  }
  if (!HaveEnoughVideoDecoded()) {
    bool skipToNextKeyframe =
      (AmountOfDecodedAudio < LowAudioThreshold()) ||
      HaveRunOutOfDecodedVideoFrames();
    DecodeSomeVideo(skipToNextKeyframe);
  }
  if (HaveLotsOfAudioDecoded() && HaveLotsOfVideoDecoded()) {
    SleepUntilRunningLowOnDecodedData();
  }
}


We also monitor how long a video frame decode takes, and if a decode takes longer than the low-audio-threshold, we increase the low-audio-threshold.
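
In the same pseudo code style as above, the adaptive part is roughly this (the names are illustrative, not the actual Gecko identifiers):

// After each video frame decode completes:
if (VideoDecodeDuration() > LowAudioThreshold()) {
  // Frame decodes are slow; require more decoded audio in reserve before we
  // decode video without skipping, so audio is less likely to underrun.
  IncreaseLowAudioThreshold(VideoDecodeDuration());
}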

If we pass a true value for skipToNextKeyframe to the decoder, it is supposed to give up and skip its decode up to the next keyframe. That is, don't try to decode anything between now and the next keyframe.

Video frames are typically encoded as a sequence of full images (called "key frames", "reference frames", or I-frames in H.264) and then some number of frames which are "diffs" from the key frame (P-frames in H.264 speak). (H.264 also has B-frames, which are a combination of diffs of frames both before and after the current frame, which can lead the encoded stream to be muxed out-of-order.)

The idea here is that we deliberately drop video frames in the hope that we give time back to the audio decode, so we are less likely to get audio glitches.

Our implementation of this idea is not particularly good.

Often on low end Windows machines playing HD videos without hardware accelerated video decoding, you'll get a run of say half a second of video decoded, and then we'll skip everything up to the next keyframe (a couple of seconds), before playing another half a second, and then skipping again, ad nauseam, giving a slightly weird experience. Or in the extreme, you can end up with only the keyframes decoded, or even no frames if we can't get the keyframes decoded in time. Or if it works well enough, you can still get a couple of audio glitches at the start of playback until the low-audio-threshold adapts to a large enough value, and then playback is smooth.

The FirefoxOS MediaOmxReader also never implemented skip-to-next-keyframe correctly; our behavior there is particularly bad. This is compounded by the fact that FirefoxOS typically runs on lower end hardware anyway. The MediaOmxReader doesn't actually skip decode to the next keyframe, it decodes to the next keyframe. This causes the video decode to hog the decode thread for even longer; this gives the audio decode even less time, which is the exact opposite of what you want to do. What it should do is skip the demux of video up to the next keyframe, but if I recall correctly there were bugs in the Android platform's video decoder library that FirefoxOS is based on that made this unreliable.

All these issues occur because we share the same thread for audio and video decoding. This year we invested some time refactoring our video playback stack to be asynchronous. This enables backends that support it to do their decoding asynchronously, on their own threads. Since audio then decodes on a separate thread to video, we should have glitch-free audio even when the video decode can't keep up, even without engaging skip-to-next-keyframe. We still need to do something like skipping the video decode when the video decode is falling behind, but it can probably engage less aggressively.

I did a quick test the other day on a low end Windows 8.0 tablet with an Atom Z2760 CPU with skip-to-next-keyframe disabled and async decoding enabled, and although the video decode falls behind and gets out of sync with audio (frames are rendered late) we never glitched audio.

So I think it's time to revisit our skip-to-next-keyframe logic, since we don't need to sacrifice video decode to ensure that audio playback doesn't glitch.

When using async decoding we still need some mechanism like skip-to-next-keyframe to ensure that when the video decode falls behind it can catch up. The existing logic to engage skip-to-next-keyframe also performs that role, but often we enter skip-to-next-keyframe and start dropping frames when video decode actually could keep up if we just gave it a chance. This often happens when switching streams during MSE playback.

Now that we have async decoding, we should experiment with modifying the HaveRunOutOfDecodedVideoFrames() logic to be more lenient, to avoid unnecessary frame drops during MSE playback. One idea would be to only engage skip-to-next-keyframe if we've missed several frames. We need to experiment on low end hardware.

Gervase Markham: Global Posting Privileges on the Mozilla Discussion Forums

Have you ever tried to post a message to a Mozilla discussion forum, particularly one you haven’t posted to before, and received back a “your message is held in a queue for the moderator” message?

Turns out, if you are subscribed to at least one forum in its mailing list form, you get global posting privileges to all forums via all mechanisms (mail, news or Google Groups). If you aren’t so subscribed, you have to be whitelisted by the moderator on a per-forum basis.

If this sounds good, and you are looking for a nice low-traffic list to use to get this privilege, try mozilla.announce.

Wladimir Palant: Can Mozilla be trusted with privacy?

A year ago I would have certainly answered the question in the title with “yes.” After all, who else if not Mozilla? Mozilla has been living the privacy principles which we took for the Adblock Plus project and called our own. “Limited data” is particularly something that is very hard to implement and defend against the argument of making informed decisions.

But maybe I’ve simply been a Mozilla contributor way too long and don’t see the obvious signs any more. My colleague Felix Dahlke brought my attention to the fact that Mozilla is using Google Analytics and Optimizely (trusted third parties?) on most of their web properties. I cannot really find a good argument why Mozilla couldn’t process this data in-house, insufficient resources certainly isn’t it.

And then there is Firefox Health Report and Telemetry. Maybe I should have been following the discussions, but I simply accepted the prompt when Firefox asked me — my assumption was that it’s anonymous data collection and cannot be used to track the behavior of individual users. I was all the more surprised to read this blog post explaining how useful unique client IDs are for analyzing data. Mind you, not the slightest sign of concern about the privacy invasion here.

Maybe somebody else actually cared? I opened the bug but the only statement on privacy is far from being conclusive — yes, you can opt out and the user ID will be removed then. However, if you don’t opt out (e.g. because you trust Mozilla) then you will continue sending data that can be connected to a single user (and ultimately you). And then there is this old discussion about the privacy aspects of Firefox Health Reporting, a long and fruitless one it seems.

Am I missing something? Should I be disabling all feedback functionality in Firefox and recommend that everybody else do the same?

Side-note: Am I the only one who is annoyed by the many Mozilla bugs lately which are created without a description and provide zero context information? Are there so many decisions being made behind closed doors or are people simply too lazy to add a link?

Laura Hilliger: Web Literacy Lensing: Identity


Ever since version 1 of the Web Literacy Map came out, I’ve been waiting to see people take it and adjust it or interpret it for specific educational endeavors that are outside the wheelhouse of “teach the web”. As I’ve said before, I think the web can be embedded into anything, and I want to see the anything embedded into the web. I’ve been wanting to see how people put a lens on top of the web literacy map and combine teaching the web with educating a person around Cognitive Skill X. I’ve had ideas, but never put them out into the world. I was kind of waiting for someone to do it for me (ahem, Web Literacy community :P). Lately I’ve been realizing that I work to develop socio-emotional skills while I teach the web, and I wanted to see if I could look at the Web Literacy Map from a personal, but social (e.g. psychosocial) angle. What, exactly, does web literacy mean in the context of Identity?

Theory

First things first - there’s a media education theory (in this book) suggesting that technology has complicated our “identity”. It’s worth mentioning because it’s interesting, and I think it’s worth noting that I didn’t consider all the nuances of these various identities in thinking about how the Web Literacy Map becomes the Web Literacy Map for Identity. We as human beings have multiple, distinct identities we have to deal with in life. We have to deal with who we are with family vs with friends vs alone vs professionally regardless of whether or not we are online, but with the development of the virtual space, the theory suggests that identity has become even more complicated. Additionally, we now have to deal with:
  • The Real Virtual: an anonymous online identity that you try on. Pretending to be a particular identity online because you are curious as to how people react to it? That’s not pretending, really, it’s part of your identity that you need answers to curiosities.
  • The Real IN Virtual: an online identity that is affiliated with an offline identity. My name is Laura offline as well. Certain aspects of my offline personality are mirrored in the online space. My everyday identity is (partially) manifested online.
  • The Virtual IN Real: a kind of hybrid identity that you adopt when you interact first in an online environment and then in the physical world. People make assumptions about you when they meet you for the first time. Technology partially strips us of certain communication mannerisms (e.g. Body language, tone, etc), so those assumptions are quite different if you met through technology and then in real life.
  • The Virtual Real: an offline identity built from a compilation of data about a particular individual. In short: identity theft.
So, back to the Web Literacy Map: Identity - As you can gather from a single theory about the human understanding of “self”, Identity is a complicated topic anyway. But I like thinking about complicated problems. So here’s my first thinking about how Identity can be seen as a lens on top of the Web Literacy Map.

Exploring Identity (and the web)

  • Navigation – Identity is personal, so maybe part of web literacy is about personalizing your experience. Perhaps skills become more granular when we talk about putting a lens on the Map? Example granularity: common features of the browser skill might break down into “setting your own homepage” and “pinning apps and bookmarks”.
  • Web Mechanics – I didn’t find a way to lens this competency. It’s the only one I couldn’t. Very frustrating to have ONE that doesn’t fit. What does that say about Web Mechanics or the Web Literacy Map writ large?
  • Search – Identity is manifested, so your tone and mood might dictate what you search for and how you share it. Are you a satirist? Are you funny? Are you serious or terse? Search is a connective competency under this lens because it connects your mood/tone to your manifestation of identity. Example skill modification/addition: Locating or finding desired information within search results ——> using specialized search machines to find desired emotional expression (e.g. GIPHY!).
  • Credibility – Identity is formed through beliefs and faith, and I wouldn’t have a hard time arguing that those things influence your understanding of credible information. If you believe something and someone confirms your belief, you’ll likely find that person more credible than someone who rejects your belief. Example skill modification/addition: Comparing information from a number of sources to judge the trustworthiness of content ——> Comparing information from a number of sources to judge the trustworthiness of people.
  • Security – Identity is influenced heavily by relationships. Keeping other people’s data secure seems like part of the puzzle, and there’s something about the innate need to keep the people who have influenced your identity positively secure. I don’t have an example for this one off the top of my head, but it’s percolating.

Building Identity (and the web)

  • Composing for the Web, Remixing, and Coding/Scripting – these allow us to be expressive about our identities. The expression is the WHY of any of this, so it is directly connected to your own identity. It connects into your personality, motivations, and a mess of thinking skills we need to function in our world. Skills underneath these competencies could be modified to incorporate the emotional and psychological traits of that expression.
  • Design and Accessibility – Values are inseparable from our identities. I think design and accessibility is a competency that radiates a person’s values. It’s OK to back-burner this if you’re being expressive for the sake of being expressive, but if you have a message, if you are being expressive in an effort to connect with other people (which, let’s face it, is part of the human condition), design and accessibility is a value. Not sure how I would modify the skills…
  • Infrastructure – I was thinking that this one pulled in remembrance as a part of identity. Exporting data, moving data, understanding the internet stack and how to adequately use it so that you can keep a record of your or someone else’s online identity has lots of implications for remembrance, which I think influences who we are as much as anything else. Example skill modification/addition: “Exporting and backing up your data from web services” might lead to “Analyzing historical data to determine identity shifts”.

That’s all for now. I’ve thought a little about the final strand, but I’m going to save it for next year. I would like to hear what you all think. Is this a useful experiment for the Web Literacy Map? Does this kind of thinking help hone in on ways to structure learning activities that use the web? Can you help me figure out what my brain is doing? Happy holidays everyone ;)

Gregory Szorc: Why hg.mozilla.org is Slow

At Mozilla, I often hear statements like Mercurial is slow. That's a very general statement. Depending on the context, it can mean one or more of several things:

  • My Mercurial workflow is not very efficient
  • hg commands I execute are slow to run
  • hg commands I execute appear to stall
  • The Mercurial server I'm interfacing with is slow

I want to spend time talking about a specific problem: why hg.mozilla.org (the server) is slow.

What Isn't Slow

If you are talking to hg.mozilla.org over HTTP or HTTPS (https://hg.mozilla.org/), there should not currently be any server performance issues. Our Mercurial HTTP servers are pretty beefy and are able to absorb a lot of load.

If https://hg.mozilla.org/ is slow, chances are:

  • You are doing something like cloning a 1+ GB repository.
  • You are asking the server to do something really expensive (like generating JSON for 100,000 changesets via the pushlog query interface; see the example below).
  • You don't have a high bandwidth connection to the server.
  • There is a network event.
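
As an example of the second point, the pushlog query interface will happily serialize an arbitrarily large range of pushes to JSON in a single request; something like the following (the repository and ID range are illustrative) is an expensive query for the server:

    https://hg.mozilla.org/mozilla-central/json-pushes?full=1&startID=0&endID=100000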

Previous Network Events

There have historically been network capacity issues in the datacenter where hg.mozilla.org is hosted (SCL3).

During Mozlandia, excessive traffic to ftp.mozilla.org essentially saturated the SCL3 network. During this time, requests to hg.mozilla.org were timing out: Mercurial traffic just couldn't traverse the network. Fortunately, events like this are quite rare.

Up until recently, Firefox release automation was effectively overwhelming the network by doing some clownshoesy things.

For example, gaia-central was being cloned all the time. We had a ~1.6 GB repository being cloned over a thousand times per day. We were transferring close to 2 TB of gaia-central data out of the Mercurial servers per day.

We also found issues with pushlogs sending 100+ MB responses.

And the build/tools repo was getting cloned for every job. Ditto for mozharness.

In all, we identified a few terabytes of excessive Mercurial traffic that didn't need to exist. This excessive traffic was saturating the SCL3 network and slowing down not only Mercurial traffic, but other traffic in SCL3 as well.

Fortunately, people from Release Engineering were quick to respond to and fix the problems once they were identified. The problem is now firmly in control. Although, given the scale of Firefox's release automation, any new system that comes online that talks to version control is susceptible to causing server outages. I've already raised this concern when reviewing some TaskCluster code. The thundering herd of automation will be an ongoing concern. But I have plans to further mitigate risk in 2015. Stay tuned.

Looking back at our historical data, it appears that we hit these network saturation limits a few times before we reached a tipping point in early November 2014. Unfortunately, we didn't realize this because up until recently, we didn't have a good source of data coming from the servers. We lacked the tooling to analyze what we had. We lacked the experience to know what to look for. Outages are effective flashlights. We learned a lot and know what we need to do with the data moving forward.

Available Network Bandwidth

One person pinged me on IRC with the comment "Git is cloning much faster than Mercurial." I asked for timings, and the Mercurial clone wall time for Firefox was much higher than I expected.

The reason was network bandwidth. This person was performing a Git clone between 2 hosts in EC2 but was performing the Mercurial clone between hg.mozilla.org and a host in EC2. In other words, they were partially comparing the performance of a 1 Gbps network against a link over the public internet! When they did a fair comparison by removing the network connection as a variable, the clone times rebounded to what I expected.

The single-homed nature of hg.mozilla.org in a single datacenter in northern California is not only bad for disaster recovery reasons, it also means that machines far away from SCL3 or connecting to SCL3 over a slow network aren't getting optimal performance.

In 2015, expect us to build out a geo-distributed hg.mozilla.org so that connections are hitting a server that is closer and thus faster. This will probably be targeted at Firefox release automation in AWS first. We want those machines to have a fast connection to the server and we want their traffic isolated from the servers developers use so that hiccups in automation don't impact the ability for humans to access and interface with source code.

NFS on SSH Master Server

If you connect to http://hg.mozilla.org/ or https://hg.mozilla.org/, you are hitting a pool of servers behind a load balancer. These servers have repository data stored on local disk, where I/O is fast. In reality, most I/O is serviced by the page cache, so local disks don't come into play.

If you connect to ssh://hg.mozilla.org/, you are hitting a single, master server. Its repository data is hosted on an NFS mount. I/O on the NFS mount is horribly slow. Any I/O intensive operation performed on the master is much, much slower than it should be. Such is the nature of NFS.

We'll be exploring ways to mitigate this performance issue in 2015. But it isn't the biggest source of performance pain, so don't expect anything immediately.

Synchronous Replication During Pushes

When you hg push to hg.mozilla.org, the changes are first made on the SSH/NFS master server. They are subsequently mirrored out to the HTTP read-only slaves.

As is currently implemented, the mirroring process is performed synchronously during the push operation. The server waits for the mirrors to complete (to a reasonable state) before it tells the client the push has completed.

Depending on the repository, the size of the push, and server and network load, mirroring commonly adds 1 to 7 seconds to push times. This is time when a developer is sitting at a terminal, waiting for hg push to complete. The time for Try pushes can be larger: 10 to 20 seconds is not uncommon (but fortunately not the norm).

The current mirroring mechanism is overly simple and prone to many failures and sub-optimal behavior. I plan to work on fixing mirroring in 2015. When I'm done, there should be no user-visible mirroring delay.

Pushlog Replication Inefficiency

Up until yesterday (when we deployed a rewritten pushlog extension), the replication of pushlog data from the master to the slaves was very inefficient. Instead of transferring a delta of pushes since the last pull, we were literally copying the underlying SQLite file across the network!

Try's pushlog is ~30 MB. mozilla-central and mozilla-inbound are in the same ballpark. 30 MB x 10 slaves is a lot of data to transfer. These operations were capable of periodically saturating the network, slowing everyone down.

The rewritten pushlog extension performs a delta transfer automatically as part of hg pull. Pushlog synchronization now completes in milliseconds while commonly only consuming a few kilobytes of network traffic.

Early indications reveal that deploying this change yesterday decreased the push times to repositories with long push history by 1-3s.

Try

Pretty much any interaction with the Try repository is guaranteed to have poor performance. The Try repository is doing things that distributed version control systems weren't designed to do. This includes Git.

If you are using Try, all bets are off. Performance will be problematic until we roll out the headless try repository.

That being said, we've made changes recently to make Try perform better. The median time for pushing to Try has decreased significantly in the past few weeks. The first dip in mid-November was due to upgrading the server from Mercurial 2.5 to Mercurial 3.1 and to converting Try to use generaldelta encoding. The dip this week has come from merging all heads and from deploying the aforementioned pushlog changes. Pushing to Try is now significantly faster than 3 months ago.

Conclusion

Many of the reasons for hg.mozilla.org slowness are known. More often than not, they are due to clownshoes or inefficiencies on Mozilla's part rather than fundamental issues with Mercurial.

We have made significant progress at making hg.mozilla.org faster. But we are not done. We are continuing to invest in fixing the sub-optimal parts and making hg.mozilla.org faster yet. I'm confident that within a few months, nobody will be able to say that the servers are a source of pain like they have been for years.

Furthermore, Mercurial is investing in features to make the wire protocol faster, more efficient, and more powerful. When deployed, these should make pushes faster on any server. They will also enable workflow enhancements, such as Facebook's experimental extension to perform rebases as part of push (eliminating push races and having to manually rebase when you lose the push race).

Roberto A. Vitillo: ClientID in Telemetry submissions

New functionality landed recently that allows grouping Telemetry sessions by client ID. Being able to group sessions by profile turns out to be extremely useful for several reasons. For instance, since some users tend to generate an enormous number of sessions daily, analyses tend to be skewed towards those users.

Take uptime duration; if we just consider the uptime distribution of all sessions collected in a certain timeframe on Nightly we would get a distribution with a median duration of about 15 minutes. But that number isn’t really representative of the median uptime for our users. If we group the submissions by Client ID and compute the median uptime duration for each group, we can build a new distribution that is more representative of the general population:

[Figure: distribution of per-client median uptime]
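
Conceptually the aggregation is simple; here is a minimal sketch in Python (the field names and input format are assumptions for illustration, not the actual Telemetry ping schema):

    from collections import defaultdict
    from statistics import median

    def per_client_median_uptime(sessions):
        """sessions: iterable of dicts with illustrative 'clientID' and 'uptime' fields."""
        by_client = defaultdict(list)
        for session in sessions:
            by_client[session["clientID"]].append(session["uptime"])
        # One value per client: the median uptime of that client's sessions.
        return [median(uptimes) for uptimes in by_client.values()]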

And we can repeat the exercise for the startup duration, which is expressed in ms:

[Figure: distribution of per-client median startup duration]

Our dashboards are still based on the session distributions, but it’s likely that we will provide both session-based and user-based distributions in our next-gen Telemetry dashboard.

edit:

Please keep in mind that:

  • Telemetry does not collect privacy-sensitive data.
  • You do not have to trust us, you can verify what data Telemetry is collecting in the about:telemetry page in your browser and in aggregate form in our Telemetry dashboards.
  • Telemetry is an opt-in feature on release channels and a feature you can easily disable on other channels.
  • The new Telemetry Client ID does not track users, it tracks Telemetry performance & feature-usage metrics across sessions.

Will Kahn-Greene: Input status: December 18th, 2014

Preface

It's been 3 months since the last status report. Crimey! That's not great, but it's even worse because it makes this report crazy long.

First off, lots of great work done by Adam Okoye, L. Guruprasad, Bhargav Kowshik, and Deshraj Yadav! w00t!

Second, we did a ton of stuff and broke 1,000 commits! w00t!

Third, I've promised to do more frequent status reports. I'll go back to one every two weeks.

Onward!

Development

High-level summary:

  • Lots of code quality work.
  • Updated ElasticUtils to 0.10.1 so we can upgrade our Elasticsearch cluster.
  • Heartbeat v2.
  • Overhauled the generic feedback form.
  • remote-troubleshooting data capture.
  • contribute.json file.
  • Upgrade to Django 1.6.
  • Upgrade to Python 2.7!!!!!
  • Improved and added to pre-commit and commit-msg linters.

Landed and deployed:

  • ce95161 Clarify source and campaign parameters in API
  • 286869e bug 788281 Add update_product_details to deploy
  • bbedc86 bug 788281 Implement basic events API
  • 1c0ff9f bug 1071567 Update ElasticUtils to 0.10.1
  • 7fd52cd bug 1072575 Rework smart_timedelta
  • ce80c56 bug 1074276 Remove abuse classification prototype
  • 7540394 bug 1075563 Fix missing flake8 issue
  • 11e4855 bug 1025925 Change file names (Adam Okoye)
  • 23af92a bug 1025925 Change all instances of fjord.analytics.tools to fjord.analytics.utils (Adam Okoye)
  • ae28c60 bug 1025925 Change instances of of util relating to fjord/base/util.py (Adam Okoye)
  • bc77280 Add Adam Okoye to CONTRIBUTORS
  • 545dc52 bug 1025925 Change test module file names
  • fc24371 bug 1041703 Drop prodchan column
  • 9097b8f bug 1079376 Add error-email -> response admin view
  • d3cfdfe bug 1066618 Tweak Gengo account balance warning
  • a49f1eb bug 1020303 Add rating column
  • 55fede0 bug 1061798 Reset page number Resets page number when filter checkbox is checked (Adam Okoye)
  • 1d4fd00 bug 854479 Fix ui-lightness 404 problems
  • a9bf3b1 bug 940361 Change size on facet calls
  • c2b2c2b bug 1081413 Move url validation code into fjord_utils.js Rewrote url validation code that was in generic_feedback.js and added it to fjord_utils.js (Adam Okoye)
  • 4181b5e bug 1081413 Change code for url validation (Adam Okoye)
  • 2cd62ad bug 1081413 Add test for url validation (Adam Okoye)
  • f72652a bug 1081413 Correct operator in test_fjord_utils.js (aokoye)
  • c9b83df bug 1081997 Fix unicode in smoketest
  • cba9a2d bug 1086643 bug 1086650 Redo infrastructure for product picker version
  • e8a9cc7 bug 1084387 Add on_picker field to Product
  • 2af4fca bug 1084387 Add on_picker to management forms
  • 1ced64a bug 1081411 Create format test (Adam Okoye)
  • 00f8a72 Add template for mentored bugs
  • e95d0f1 Cosmetic: Move footnote
  • d0cb705 Tweak triaging docs
  • d5b35a2 bug 1080816 Add A/B for ditching chart
  • fa1a47f Add notes about running tests with additional warnings
  • ddde83c Fix mimetype -> content_type and int division issue
  • 2edb3b3 bug 1089650 Add a contribute.json file (Bhargav Kowshik)
  • d341977 bug 1089650 Add test to verify that the JSON is valid (Bhargav Kowshik)
  • dcb9380 Add Bhargav Kowshik to CONTRIBUTORS
  • 7442513 Fix throttle test
  • f27e31c bug 1072285 Update Django, django-nose and django-cache-machine
  • dd74a3c bug 1072285 Update django-adminplus
  • ececdf7 bug 1072285 Update requirements.txt file
  • 6669479 bug 1093341 Tweak Gengo account balance warning
  • f233aab bug 1094197 Fix JSONObjectField default default
  • 11193d7 Tweak chart display
  • 9d311ca Make journal.Record not derive from ModelBase
  • f778c9d Remove all Heartbeat v1 stuff
  • e5f6f4d Switch test__utils.py to test_utils.py
  • cab7050 bug 1092296 Implement heartbeat v2 data model
  • 5480c42 bug 1097352 Response view is viewable by all
  • 46b5897 bug 1077423 Overhaul generic feedback form dev
  • da31b47 bug 1077423 Update smoke tests for generic feedback form dev
  • e84094b Fix l10n email template
  • d6c8ea9 Remove gettext calls for product dashboards
  • e1a0f74 bug 1098486 Remove under construction page
  • 032a660 Fix l10n_status.py script history table
  • 19cec37 Fix JSONObjectField
  • 430c462 Improve display_history for l10n_status
  • d6c18c6 Windows NT 6.4 -> Windows 10
  • 73a4225 bug 1097847 Update django-grappelli to 2.5.6
  • 4f3b9c7 bug 1097847 Fix custom views in admin
  • 3218ea3 Fix JSONObjectfield.value_to_string
  • 67c6bf9 Fix RecordManager.log regarding instances
  • a5e8610 bug 1092299 Implement Heartbeat v2 API
  • 17226db bug 1092300 Add Heartbeat v2 debugging views
  • 11681c4 Rework env view to show python and django version
  • 9153802 bug 1087387 Add feedback_response.browser_platform
  • f5d8b56 bug 1087387 bug 1096541 Clean up feedback view code
  • c9c7a81 bug 1087391 Fix POST API user-agent inference code
  • 4e93fc7 bug 1103024 Gengo kill switch
  • de9d1c7 Capture the user-agent
  • 4796e4e bug 1096541 Backfill browser_platform for Firefox dev
  • f5fe5cf bug 1103141 Add experiment_version to HB db and api
  • 98c40f6 bug 1103141 Add experiment_version to views
  • 0996148 bug 1103045 Create a menial hb survey view
  • 965b3ee bug 1097204 Rework product picker
  • 6907e6f bug 1097203 Add link to SUMO
  • e8f3075 bug 1093843 Increase length of product fields
  • 2c6d24b bug 1103167 Raise GET API throttle
  • d527071 bug 1093832 Move feedback url documentation
  • 6f4eb86 Abstract out python2.6 in deploy script
  • f843286 Fix compile-linted-mo.sh to take pythonbin as arg
  • 966da77 Add celery health check
  • 1422263 Add space before subject of celery health email
  • 5e07dbd [heartbeat] Add experiment1 static page placeholders
  • 615ccf1 [heartbeat] Add experiment1 static files
  • d8822df [heartbeat] Add SUMO links to sad page
  • 3ee924c [heartbeat] Add twitter thing to happy page
  • d87a815 [heartbeat] Change thank you text
  • 06e73e6 [heartbeat] Remove cruft, fix links
  • 8208a72 [heartbeat] Fix "addons"
  • 2eca74c [heartbeat] Show profile_age always because 0 is valid
  • 4c4598b bug 1099138 Fix "back to picker" url
  • b2e9445 Add note about "Commit to VCS" in l10n docs
  • 9c22705 Heartbeat: instrument email signup, feedback, NOT Twitter (Gregg Lind)
  • 340adf9 [heartbeat] Fix DOCTYPE and ispell pass
  • 486bf65 [heartbeat] Change Thank you text
  • d52c739 [heartbeat] Switch to use Input GA account
  • f07716b [heartbeat] Fix favicons
  • eff9d0b [heartbeat] Fixed page titles
  • 969c4a0 [heartbeat] Nix newsletter form for a link
  • dce6f86 [heartbeat] Reindent code to make it legible
  • dad6d82 bug 1099138 Remove [DEV] from title
  • 4204b43 fixed typo in getting_started.rst (Deshraj Yadav)
  • 7042ead bug 1107161 Fix hb answers view
  • a024758 bug 1107809 Fix Gengo language guesser
  • 808fa83 bug 1107803 Rewrite Response inference code
  • d9e8ffd bug 1108604 Tweak paging links in hb data view
  • 00e8628 bug 1108604 Add sort and display ts better in hb data view
  • 17b908a bug 1108604 Change paging to 100 in hb data view
  • 39dc943 bug 1107083 Backfill versions
  • fee0653 bug 1105512 Rip out old generic form
  • b5bb54c Update grappelli in requirements.txt file
  • f984935 bug 1104934 Add ResponseTroubleshootingInfo model
  • c2e7fd3 bug 950913 Move 'TRUNCATE_LENGTH' and make accessable to other files (Adam Okoye)
  • b6f30e1 bug 1074315 Ignore deleted files for linting in pre-commit hook (L. Guruprasad)
  • 4009a59 Get list of .py files to lint using just git diff (L. Guruprasad)
  • c81da0b bug 950913 Access TRUNCATE_LENGTH from generic_feedback template (Adam Okoye)
  • 9e3cec6 bug 1111026 Fix hb error page paging
  • b89daa6 Dennis update to master tip
  • 61e3e18 Add django-sslserver
  • 93d317b bug 1104935 Add remote.js
  • ad3a5cb bug 1104935 Add browser data section to generic form
  • cc54daf bug 1104935 Add browserdata metrics
  • 31c2f74 Add jshint to pre-commit hook (L. Guruprasad)
  • 68eae85 Pretty-print JSON blobs in hb errorlog view
  • 8588b42 bug 1111265 Restrict remote-troubleshooting to Firefox
  • b0af9f5 Fix sorby error in hb data view
  • 8f622cf bug 1087394 Add browser, browser_version, and browser_platform to response view (Adam Okoye)
  • c4b6f85 bug 1087394 Change Browser Platform label (Adam Okoye)
  • eb1d5c2 Disable expansion of $PATH in the provisioning script (L. Guruprasad)
  • 59eebda Cosmetic test changes
  • aac733b bug 1112210 Tweak remote-troubleshooting capture
  • 6f24ce7 bug 1112210 Hide browser-ask by default
  • 278095d bug 1112210 Note if we have browser data in response view
  • 869a37c bug 1087395 Add fields to CSV output (Adam Okoye)

Landed, but not deployed:

  • 4ee7fd6 Update the name of the pre-commit hook script in docs (L. Guruprasad)
  • d4c5a09 bug 1112084 create requirements/dev.txt (L. Guruprasad)
  • 4f03c48 bug 1112084 Update provisioning script to install dev requirements (L. Guruprasad)
  • 03c5710 Remove instructions for manual installation of flake8 (L. Guruprasad)
  • a36a231 bug 1108755 Add a git commit message linter (L. Guruprasad)

Current head: f0ec99d

Rough plan for the next two weeks

  1. PTO. It's been a really intense quarter (as you can see) and I need some rest. Maybe a nap. Plus we have a deploy freeze through to January, so we can't push anything out anyhow. I hope everyone else gets some rest, too.

That's it!

Schalk NeethlingGetting “Cannot read property ‘contents’ of undefined” with grunt-contrib-less? Read this…

The last day or so was spent pulling my hair from my head and other regions from my facial region. Why, you ask? Because of this line, spewed back at me in the Terminal: Warning: Cannot read property ‘contents’ of undefined. The error showed itself after I set up a new project with much the same set … Continue reading Getting “Cannot read property ‘contents’ of undefined” with grunt-contrib-less? Read this…

Matěj CeplThird Wave and Telecommuting

I have been reading Tim Bray’s blog post on how he started working at Amazon, and I was struck by the comment by len, particularly by this part (he starts by quoting Tim):

“First, I am to­tally sick of working remotely. I want to go and work in rooms with other people working on the same things that I am.”

And that says a lot. Whatever the web has enabled in terms of access, it has proven to be isolating where human emotions matter and exposing where business affairs matter. I can’t articulate that succinctly yet, but there are lessons to be learned worthy of articulation. A virtual glass of wine doesn’t afford the pleasure of wine. Lessons learned.

Although I generally agree with your sentiment (these are really not your Friends, except if they already are), I believe the situation with telecommuting is more complex. I have been telecommuting for the past eight years (or so, yikes, time flies!) and I do like it most of the time. However, it really requires a special type of personality, a special type of environment, a special type of family, and a special type of work to be able to do it well. I know plenty of people who do well working from home (with an occasional stay in a coworking office) and some who just don’t. It has nothing to do with IQ or anything like that; it simply works for some people and not for others. I have some colleagues who left Red Hat just because they could not work from home and the nearest Red Hat office was too far from them.

However, this trivial statement makes me think again about something that is, in my opinion, much more profound. I am a firm believer in the coming of what Alvin and Heidi Toffler called “The Third Wave”: that after the mainly agricultural and the mainly industrial societies, the world is changing again, so that “much that once was is lost” and we don’t know exactly what is coming. One part of this change is a substantial change in the way we organize our work. It really sounds weird, but there were times when there were no factories and no offices, and most people were working from their homes. I am not saying that the future will be like the distant past (it never is), but the difference makes it clear to me that what is now is not the only possible world we could live in.

I believe that the pattern of people leaving their homes in the morning to go to work will be greatly diminished in the future. Probably some parts of the industrial world will remain around us (after all, there are still big parts of the agricultural world around us), but I think it might have as little impact as the agricultural world has on the current one. If the trend of offices dissolving continues (and I don’t see why it wouldn’t; in the end, all those office buildings and all that commuting are a terrible waste of money), we can expect a really massive change in almost everything: the way we build homes (suddenly your home is not just a bedroom where you survive the night between two work shifts), transportation, the way we organize our communities (suddenly it does matter who your neighbor is), and of course a lot of social rules will have to change. I think we are absolutely unprepared for this, and we are not talking about it enough. But we should.

Mozilla Reps CommunityReps Weekly Call – December 18th 2014

Last Thursday we had our regular weekly call about the Reps program, where we talk about what’s going on in the program and what Reps have been doing during the last week.

1minute

Summary

  • Privacy Day & Hello Plan.
  • End of the year! What should we do?
  • Mozlandia videos.

Note: Due to the holidays, the next weekly call will be on January 8.

Detailed notes

AirMozilla video

Don’t forget to comment about this call on Discourse and we hope to see you next year!

Henrik SkupinFirefox Automation report – week 45/46 2014

In this post you can find an overview about the work happened in the Firefox Automation team during week 45 and 46.

Highlights

In our Mozmill-CI environment we had a couple of frozen Windows machines, which were running with 100% CPU load and 0MB of memory used. Those values came from the vSphere client, and didn’t give us that much information. Henrik checked the affected machines after a reboot, and none of them had any suspicious entries in the event viewer either. But he noticed that most of our VMs were running a very outdated version of the VMware tools. So he upgraded all of them, and activated the automatic install during a reboot. Since then the problem has been gone. If you see something similar for your virtual machines, make sure to check which version of the tools is in use!

Further work has been done for Mozmill CI. We were finally able to get rid of all the traces of Firefox 24.0ESR since it is no longer supported. We also set up our new Ubuntu 14.04 (LTS) machines in staging and production, which will soon replace the old Ubuntu 12.04 (LTS) machines. A list of the changes can be found here.

Besides all that, Henrik has started to work on the next Jenkins v1.580.1 (LTS) version bump for the new and more stable release of Jenkins. Lots of work might be necessary here.

Individual Updates

For more granular updates of each individual team member please visit our weekly team etherpad for week 45 and week 46.

Meeting Details

If you are interested in further details and discussions you might also want to have a look at the meeting agenda, the video recording, and notes from the Firefox Automation meetings of week 45 and week 46.

Adam OkoyeOPW Week Two – Coming to a Close

Ok so technically it’s the first full week because OPW started last Tuesday, December 9th, but either way – my first almost two weeks of OPW are coming to a close. My experience has been really good so far and I think I’m starting to find my groove in terms of how and where I work best location-wise. I also think I’ve been pretty productive considering this is the first internship or job I’ve had (not counting the Ascend Project) in over 5 years.

So far I’ve resolved three bugs (some of which were actually feature requests) and have a fourth pull request waiting. I’ve been working entirely in Django so far primarily editing model and template files. One of the really nice things that I’ve gotten out of my project is learning while and via working on bugs/features.

Four weeks ago I hadn’t done any work with Django and realized that it would likely behoove me to dive into some tutorials (originally my OPW project with Input was going to be primarily in JavaScript, but in reality doing things in Python and Django makes a lot more sense, and thus the switch despite the fact that I’m still working with Input). I started three or four basic tutorials and completed one and a half of them – in short, I didn’t have a whole lot of experience with Django ten days ago. Despite that, all of the looking through and editing of files that I’ve done has really improved my skills, both in terms of syntax and also in terms of being able to find information – both where to take information from and also where to put it. I look forward to all of the new things that I will learn and put into practice.

Mozilla FundraisingA/B Testing: ‘Sequential form’ vs ‘Simple PayPal’

This post is one of a series where we’re sharing things we’ve learned while running A/B tests during our End of Year fundraising campaign. At Mozilla we strive to ‘work open’, to make ourselves more accountable, and to encourage others … Continue reading

Dietrich AyalaRemixable Quilts for the Web

Atul Varma set up a quilt for MozCamp Asia 2012 which I thought was a fantastic tool for that type of event. It provided an engaging visualization, was collaboratively created, and allowed a quick and easy way to dive further into the details about the participating groups.

I wanted to use it for a couple of projects, but the code was tied pretty closely to that specific content and layout.

This week I finally got around to moving the code over to Mozilla Webmaker, so it could be easily copied and remixed. I made a couple of changes:

  • Update font to Open Sans
  • Make it easy and clear how to re-theme the colors
  • Allow arbitrary content in squares

The JS code is still a bit too complex for what’s needed, but it works on Webmaker now!


View my demo quilt. Hit the “remix” button to clone it and make your own.

The source for the core JS and CSS is at https://github.com/autonome/quilt.


Aaron ThornburghSelf Iteration

How the ultimate insider/outsider found his heart at Mozilla.


Change can be either scary or inspiring. It’s scary when it’s all happening to “you”; when you’re on the inside. Every new inundation of the unexpected can feel like an outright, personal attack. But when you’re an outsider, it’s much easier to see the stage, the actors, and your part in the overall story.

The difference is perspective.

1.0

I joined Mozilla in the summer of this year mostly as an outsider. Professionally, my career was birthed, built and cultivated within the “agency” world. In other words, I worked for clients who paid me to sell their stuff to other people. Sometimes businesses were trying to sell services to other businesses. Other times, they were trying to sell things to consumers. Either way, I was the designer responsible for creating the tools that would help them do so. With few exceptions, all this work happened within a team of other agency people, all held accountable to the same bottom line. And while we may have fought and argued over “user value” or the integrity of the “experience”, each of us was paid to further our client’s goals – not our personal ones. This is the very definition of an insider.

Yet even though I worked on the agency side, I was also very much an outsider. For one thing, agencies are never cause-driven; they’re revenue-driven. For the kind of guy who asks “why” too much, the blind pursuit of profit isn’t exactly enticing. Up until I joined The Project, the best I could do was defend the interests of users to clients who wanted to make their bosses happy. Their bosses would all want to know if they would profit. And, to be perfectly honest, I just didn’t care much about making yet another nameless executive even MORE money. Eventually, the entire agency/advertising world had totally gutted my sense of purpose in life.

But what else was I supposed to do that still allowed me to practice the craft I loved so dearly?

2.0

Like most other folks, I joined Mozilla because I believed in the mission, the values, and the products. The fact that I now have a real opportunity to do good — for the sake of doing good — absolutely helps with the “purpose” part. Of course, this motivation alone doesn’t qualify me to be an official “Mozillian”. Why? Because, like at any other office, I wasn’t on the inside just because I showed up to work. Perhaps a few had overheard that I was the guy working with the Content Services (CS) Team to design ad tiles. Otherwise, I was utterly unknown; an outsider.

So, when the CS Team asked me to explore how advertising could work on New Tab, I immediately thought of two things. First were the hundreds of millions of Firefox users it would affect. Second was my swift death if I screwed it up. Naturally, therefore, the proposition sounded like Mission Impossible. It’s one thing to make big decisions that fundamentally change a popular product; it’s another thing entirely to implement something that truly makes users happier than they were before. And when you start talking about advertising, corporate partnerships, and revenue generation, people tend to react strongly to the very idea of change.

3.0

I had a lot of time to think since those first conversations. I also learned a lot more about the industry. In short, advertising on the Web is generally a terrible experience, and yet it pays for the majority of the free content and services we all enjoy. It’s a game rigged to reward those with the deepest pockets and access to troves of user data, and yet it’s all completely invisible to the users themselves. It’s not a system everyone particularly likes (or agrees with), but it’s the one we’ve currently got, and it’s not going anywhere anytime soon.

To change the game, however, you have to get into it. Mozilla can’t just sit on the sidelines and say “you’re doing it wrong.” From the beginning, we all knew that advertising on New Tab would stir emotions and invite criticism. But the more I thought about things, the more I saw it all as an extraordinary opportunity, and less of a problem to solve.

After all, the organization does have a mission to fulfill and users around the world who count on Mozilla to make meaningful change on the Web. It takes time, people, and a lot of money to achieve any measurable success. Although money itself is never our bottom line, the more of it Mozilla can generate, the more we can invest directly in our Firefox products and the Web community as a whole. More importantly, if we can do that through an advertising model that respects users’ privacy, allows for more user control, and rewards content providers for the actual value they provide… then why not try? In doing so, we might even be able to make the Web a more equal place for everybody – ordinary users, content providers, and business or technology partners alike.

4.0

I understood enough about the organization and what it stood for to plant a flag, but that’s about it. It was very unclear, especially in the beginning, how this experiment would be received by Firefox users, my peers, or the community. As it turned out, my fears were unfounded. Nobody at Mozilla kicked me out of the clubhouse. Members of the community filed a few bugs. Most crucially, users seemed to be okay with the first release of Enhanced Tiles on New Tab. While there was certainly some blood left on the mat, nobody lost anything important… like an eye, or our values.

In fact, it’s a shared obsession for a Web that’s open to all – one that respects user control and sovereignty – that binds so many different people together at Mozilla. Six months ago, I felt like an awkward middle-schooler trying to blend into the wallpaper. Today, I’m actually a part of something much, much bigger than “me”. Now it’s “us”. And when everyone is in the game for the same reasons, big challenges quickly become new opportunities.

Of course I want to win. I want Firefox, Mozilla, and the mission to succeed. The competitive instinct in me will never die. Only now, there’s so much more at stake, and there’s so much more to be done. The path forward is going to be a serious battle (our competitors and the ad industry writ large aren’t exactly rooting for us), and anyone with skin in the game is going to bleed a little more. Personally, I might even get my ass kicked and my teeth knocked out trying to create a New Tab experience that redefines the relationship users have with advertisers. But that’s okay. I finally have a purpose worthy of the scars.

5.0 – Release

The real test is yet to come. As more of the experiences we design make their way onto the New Tab page in Firefox, hundreds of millions of users around the world will judge for themselves whether or not Mozilla has real vision, or has their back. With the power of their fingertips, ordinary people we’ll never meet will determine what’s valuable to them – not a client.

Honestly, I wouldn’t want it any other way. Because that’s what being a Mozillian is all about: Ensuring the Web is an open platform for opportunity and choice.

More change is coming. Only this time, users will have a more direct role in shaping the future of Firefox, and perhaps even how the Web itself works.

For once, it feels good to be on the inside.


Aaron ThornburghUnclear Intentions

Starting something is always the hardest part.


Anybody who’s built a technology product intimately understands the law of inertia: getting something started is always more difficult than sustaining momentum.

In the beginning of any new endeavor, who you are and what you’re trying to accomplish is entirely unknown. Until you’ve declared your intentions – or can point to something of value that you’ve produced – it’s difficult to generate much interest. People within your own organization are reluctant to cooperate. Business partners or vendors have all the bargaining power. And the average user has no reason to care. Meanwhile, you need human and financial capital, supplies, services and whatever else is required to build something truly awesome.

Personal stuff can be equally challenging. Starting out a career, a business, or even recovery from a major injury, takes a lot of time and determination to build enough momentum before it “feels” like any progress has been made. (Sometimes the effort only seems worth it in retrospect.)

Then there’s the mundane stuff; things that are important, but not critical. We may know that something needs to happen — at some point — but just thinking about it summons immediate fatigue and a twinge of depression. They tend to be the kinds of things we put off for as long as possible, like working out and eating healthier.

Or starting a blog.

*****

As a designer, the biggest challenges I face often start with a blank canvas – literally, albeit a digital one. By this point in my career, designing is the easy part; thinking through all the dependencies is the real challenge. Long gone are the days of simple websites. Today, creative folks are working with engineers, data scientists and business development consultants to build entire technology platforms. Things get complicated quickly.

But for various reasons, I’m infinitely more confident launching a new project than I am starting a personal blog.

For many years, I resisted writing for general consumption on the Web. Does the world need yet another voice shouting into the mob? What would I have to say that’s any different than what everyone else has already said? What if I don’t want to write about just one topic, or write very often? How would I even promote a blog since my online life is decidedly unsocial? More importantly, what would be my raison d’être should people actually start reading posts?

It’s never been clear to me why my voice would matter, or to whom.

*****

Last month, my boss encouraged me to write a post for our team’s blog.

After spending several days trying to write something technical, I gave up on the first draft. Any given feature had a back story just as important as the feature itself. It wasn’t possible to write something about the specifics without getting into the “why” behind the “what”.

As it turned out, the Why was far more important to me, personally, than the particulars of anything I happened to be working on. So I wrote something much more personal instead: my journey at Mozilla, and how I’ve changed as a person and as a designer.

Only now, there wasn’t anywhere to post it because official websites and related blogs are for official announcements – not for introspective posts about things that are entirely unofficial. This is, of course, very understandable. However, my quandary remains: where the hell do I post something that’s work related, but not about actual work?

*****

My job affords me the opportunity to work with some of the most talented, intelligent people on the planet. One such individual recently asked me how I felt about blogging. He shares many of my hangups about writing for an audience on the Web, but still manages to be the most effective communicator I’ve met. So I asked him why he writes, how often, and what about.

Surprisingly, he only writes on occasion, and about diverse topics that range from Web technology to sports. Sometimes it’s a shout out to hard working bus drivers. But he doesn’t write because there’s some underlying expectation or pressure to do so. He writes because he wants to. (This is crazy talk to somebody like me, who’s been trained to think about the commercial value of everything.)

All feelings of acute narcissism aside, he discovered for himself that expressing a unique, personal perspective – even if only on occasion – makes him that much more knowable. It’s not about “personal branding”, social influence scoring, or networking with industry insiders in the hopes of securing a new job one day. It’s far more pragmatic. As Communications Director, his success is dependent on his ability to work with others. Simply by putting himself “out there”, folks are often more willing to share and listen to new ideas because there’s something to relate to. Meanwhile, he has a platform that allows him to be himself without the limitations inherent in writing for official channels. It seemed reasonable.

Regarding the story I’d written, there still wasn’t a place to post it or an audience to address. Nevertheless, he gave me the only advice he could:

Start a personal blog.

*****

Hello world. Dammit.


Sean McArthurhyper

Rust is a shiny new systems language that the lovely folks at Mozilla are building. It focuses on complete memory-safety, and being very fast. Its speed is equivalent to C++ code, but you don’t have to manage pointers and the like; the language does that for you. It also catches a lot of irritating runtime errors at compile time, thanks to its fantastic type system. That should mean fewer crashes.

All of this sounds fantastic, so let’s use it to make server software! It will be faster, and crash less. One speed bump: there’s no real Rust HTTP library.

rust-http and Teepee

There were 2 prior attempts at HTTP libraries, but the former (rust-http) has been ditched by its creator, and isn’t very “rust-like”. The latter, Teepee, started in an excellent direction, but life has gotten in the way for the author.[1]

For the client-side only, there exists curl-rust, which is just bindings to libcurl. Ideally, we’d like to have all of the code written in Rust, so we don’t have to trust that the curl developers have written perfectly memory-safe code.

So I started a new one. I called it hyper, cause, y’know, hyper-text transfer protocol.

embracing types

The type system in Rust is quite phenomenal. Wait, what? Did I just say that? Huh, I guess I did. I know, I know, we hate wrestling with type systems. I can’t touch any Java code without cursing the type system. Thanks to Rust’s type inference, though, it’s not irritating at all.

In contrast, I’ve gotten tired of stringly-typed languages; chief among them is JavaScript. Everything is a string. Even property lookups. document.onlood = onload; is perfectly valid, since it just treats onlood as a string. You know a big problem with strings? Typos. If you write JavaScript, you will write typos that aren’t caught until your code is in production, and you see that an event handler is never triggered, or undefined is not a function.

I’m done with that. But if you still want to be able to use strings in your rust code, you certainly can. Just use something else besides hyper.

Now then, how about some examples. It’s most noticeable when using headers. In JavaScript, you’d likely do something like:

req.headers['content-type'] = 'application/json';

Here’s how to do the same using hyper:

req.headers.set(ContentType(Mime(Application, Json, vec![])));

Huh, interesting. Looks like more code. Yes, yes it is. But it’s also code that has been checked by the compiler. It has made sure there are no typos. It also has made sure you didn’t try to set the wrong format for a header. To get the header back out:

match req.headers.get() {
    Some(&ContentType(Mime(Application, Json, _))) => "its json!",
    Some(&ContentType(Mime(top, sub, _))) => "we can handle top and sub",
    None => "le sad"
}

Here’s an example that makes sure the format is correct:

req.headers.set(Date(time::utc_now()));
// ...
match req.headers.get() {
    Some(&Date(ref tm)) => {
        // tm is a Tm instance, without you dealing with
        // the various allowed formats in the HTTP spec.
    }
    // ...
}

Yea, yea, there is a stringly-typed API, for those rare cases you might need it, but it’s purposefully not easy to use. You shouldn’t use it. Maybe you think you have a good reason; no, you don’t. Don’t use it. Let the compiler check for errors before you hit production.

Let’s look at status codes. Can you tell me what exactly this response means, without looking it up?

res.status = 307;

How about this instead:

res.status = StatusCode::MovedTemporarily;

Clearly better. You’ve seen code like this:

if res.status / 100 == 4 {}

What if we could make it better:

if res.status.is_client_error() {}

Message WriteStatus

I’ve been bitten by this before, and I can only bet you have been too: trying to write headers after they’ve already been sent. Hyper makes this a compile-time check. If you have a Request<Fresh>, then there is a headers_mut() method to get a mutable reference to the headers, so you can add some. You can’t accidentally write to a Request<Fresh>, since it doesn’t implement Writer. When you are ready to start writing the body, you must specifically convert to a Request<Streaming> using req.start().

Likewise, a Request<Streaming> does not contain a headers_mut() accessor. You cannot change the headers once streaming has started. You can still inspect them, if that’s needed, but no setting! The compiler will make sure you don’t have that mistake in your code.

NetworkStreams

Both the Server and the Client are generic over NetworkStreams. The default is to use HttpStream, which can handle HTTP over TCP, and HTTPS using openssl. This design also allows something like Servo to implement a ServoStream or something, which could handle HTTPS using NSS instead.

Goals

These are some high level goals for the library, so you can see the direction:

  • Be fast!
    • The benchmarks preach that we’re already faster than both rust-http and libcurl. And we all know science doesn’t lie.
  • Embrace types.
    • See the above post for how we’re doing this.
  • Provide an excellent http server library for rust webdev.
    • Currently used by Iron, Rustless, Sserve, and others
  • Provide an excellent http client that can be used in place of curl.

The first step for hyper was to get the streams and types working correctly and quickly. With the factory working underneath, it allows others to write specific implementations without re-doing all of HTTP, such as implementing the XHR spec[2] in Servo. Work since then has been on providing ergonomic Client and Server implementations.

It looks increasingly likely that hyper will be available to use on Rust-1.0-day.[3] There will be an HTTP library for Rust 1.0!


  1. Teepee provided excellent inspiration in some of the design, and all that credit should go to its creator, Chris Morgan. He’s continued to provide insight into the development of hyper, so <3!

  2. Yes, it differs. It’s been a delight to see that developers are never content with an existing spec

  3. Rust 1.0 will ship with only stable APIs and features. Some features will only be accessible by the nightlies, and not likely to be stabilized for 1.0. Hyper doesn’t depend on these, and so should be compilable using rustc v1.0.0

Gervase MarkhamGoogle Concedes Google Code Not Good Enough?

Google recently released an update to End-to-End, their communications security tool. As part of the announcement, they said:

We’re migrating End-To-End to GitHub. We’ve always believed strongly that End-To-End must be an open source project, and we think that using GitHub will allow us to work together even better with the community.

They didn’t specifically say how it was hosted before, but a look at the original announcement tells us it was here – on Google Code. And indeed, when you visit that link now, it says “Project “end-to-end” has moved to another location on the Internet”, and offers a link to the Github repo.

Is Google admitting that Google Code just doesn’t cut it any more? It certainly doesn’t have anything like the feature set of Github. Will we see it in the next round of Google spring-cleaning in 2015?

Mozilla Open Policy & Advocacy BlogThe Benefits of Fellowship

In just a few weeks, the application window to be a 2015 Ford-Mozilla Open Web Fellow will close. In its first year, the Fellows program will place emerging tech leaders at five of the world’s leading nonprofits fighting to keep the Internet as a shared, open and global resource.

We’ve already seen hundreds of applicants from more than 70 countries apply, and we wanted to answer one of the primary questions we’ve heard: why should I be a Fellow?

Fellowships offer unique opportunities to learn, innovate and gain credentials.

Fellowships offer unique opportunities to learn. Representing the notion that ‘the community is the classroom’, Ford-Mozilla Open Web Fellows will have a set of experiences in which they can learn and have an impact while working in the field. They will be at the epicenter of informing how public policy shapes the Internet. They will be working and collaborating together with a collection of people with diverse skills and experiences. They will be learning from other fellows, from the host organizations, and from the broader policy and advocacy ecosystem.

Fellowships offer the ability to innovate in policy and technology. The Fellowship offers the ability to innovate, using technology and policy as your toolset. We believe that the phrase ‘Move fast. Break things.’ is not reserved for technology companies – it is a way of being that Fellows will get to experience first-hand at our host organizations and working with Mozilla.

The Ford-Mozilla Fellowship offers a unique and differentiating credential. Our Fellows will be able to reference this experience as they continue in their career. As they advance in their chosen fields, alums of the program will be able to draw upon their experience leading in the community and working in the open. This experience will also enable them to expand their professional network as they continue to practice at the intersection of technology and policy.

We’ve also structured the program to remove barriers and assemble a Fellowship class that reflects the diversity of the entire community.

This is a paid fellowship with benefits to allow Fellows to focus on the challenging work of protecting the open Web through policy and technology work. Fellows will receive a $60,000 stipend for the 10-month program. In addition, we’ve created a series of supplements including support for housing, relocation, childcare, healthcare, continuing education and technology. We’re also offering visa assistance in order to ensure global diversity in participants.

In short, the Ford-Mozilla Open Web Fellowship is a unique opportunity to learn, innovate and gain credentials. It’s designed to enable Fellows to focus on the hard job of protecting the Internet.

More information on the Fellowship benefits can be found at https://advocacy.mozilla.org/. Good luck to the applicants of the 2015 Fellowship class.


The Ford-Mozilla Open Web Fellows application deadline is December 31, 2014. Apply at https://advocacy.mozilla.org/.

Daniel GlazmanBulgaria Web Summit

I will be speaking at the Bulgaria Web Summit 2015 in Sofia, Bulgaria, on the 18th of April.

Henrik SkupinFirefox Automation report – week 43/44 2014

In this post you can find an overview about the work happened in the Firefox Automation team during week 43 and 44.

Highlights

In preparation for the QA-wide demonstration of Mozmill-CI, Henrik reorganized our documentation to allow everyone a simple local setup of the tool. Along that we did the remaining deployment of latest code to our production instance.

Henrik also worked on the upgrade of Jenkins to the latest LTS version 1.565.3, and we were able to push this upgrade to our staging instance for observation. Further, he got Pulse Guardian support implemented.

Mozmill 2.0.9 and Mozmill-Automation 2.0.9 have been released, and if you are curious about what is included, you may want to check this post.

One of our major goals over the next 2 quarters is to replace Mozmill as test framework for our functional tests for Firefox with Marionette. Together with the A-Team Henrik got started on the initial work, which is currently covered in the firefox-greenlight-tests repository. More to come later…

Besides all that work, we had to say goodbye to one of our SoftVision team members. October 29th was the last day for Daniel on the project. So thanks for all your work!

Individual Updates

For more granular updates of each individual team member please visit our weekly team etherpad for week 43 and week 44.

Meeting Details

If you are interested in further details and discussions you might also want to have a look at the meeting agenda, the video recording, and notes from the Firefox Automation meetings of week 43 and week 44.

Mike HommeyInitial support for git pushes to mercurial, early testers needed

This push to try was not created by mercurial.

I just landed initial support for pushing to mercurial from git. Considering the scary fact that it’s possible to screw up a repository with bundles with missing content (and, guess what, I figured that out the hard way), I have restricted it to local mercurial repositories until I am more confident.

As such, I would need volunteers to use and test it on local mercurial repositories. On top of being limited to local mercurial repositories, it doesn’t support pushing merges that would have been created by git, nor does it support pushing a root commit (one with no parent).

Here’s how you can use it:

$ git clone https://github.com/glandium/git-remote-hg
$ export PATH=$PATH:$(pwd)/git-remote-hg
$ git clone hg::/path/to/mercurial-repository
$ # work work, commit, commit
$ git push

[ Note: you can still pull from remote mercurial repositories ]

This will push to your local repository, where it would be useful if you could check the push didn’t fuck things up.

$ cd /path/to/mercurial-repository
$ hg verify

That’s the long, thorough version. You may just want to simply do this:

$ cd /path/to/mercurial-repository
$ hg log --stat

Hopefully, you won’t see messages like:

abort: data/build/mozconfig.common.override.i@56d6fdb13666: no match found!

Update: You can also add the following to /path/to/mercurial-repository/.hg/hgrc, which should prevent corruptions from entering the mercurial repository at all:

[server]
validate = True

Update 2: The above setting is now unnecessary, git-remote-hg will set it itself for its push session.

Then you can push with mercurial.

$ hg push

Please note that this is integrated in git in such a way that it’s possible to pass refspecs to git push and do other fancy stuff. Be aware that there are still rough edges on that part, but that your commits will be pushed, even if the resulting state under refs/remotes/ is not very consistent.

I’m planning a replay of several repositories to fully validate pushes don’t send broken bundles, but it’s going to take some time before I can set things up. I figured I’d rather crowdsource until then.

Gregory Szorcmach sub-commands

mach - the generic command line dispatching tool that powers the mach command to aid Firefox development - now has support for sub-commands.

You can now create simple and intuitive user interfaces involving sub-actions. e.g.

mach device sync
mach device run
mach device delete

Before, to do something like this would require a universal argument parser or separate mach commands. Both constitute a poor user experience (confusing array of available arguments or proliferation of top-level commands). Both result in mach help being difficult to comprehend. And that's not good for usability and approachability.

Nothing in Firefox currently uses this feature. Although there is an in-progress patch in bug 1108293 for providing a mach command to analyze C/C++ build dependencies. It is my hope that others write useful commands and functionality on top of this feature.
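For illustration, registering something like the hypothetical mach device sub-commands above might look roughly like the sketch below. The @CommandProvider and @Command decorators follow mach's existing style, but treat the @SubCommand decorator name, its signature, and the category value as assumptions rather than documentation.

# Rough sketch only: a provider exposing "mach device <sub-command>" actions.
# The "device" command and its sub-commands are hypothetical examples.
from mach.decorators import CommandProvider, Command, SubCommand

@CommandProvider
class DeviceCommands(object):
    @Command('device', category='misc',
             description='Interact with an attached device.')
    def device(self):
        # Invoked when no sub-command is given; a real command might print help.
        pass

    @SubCommand('device', 'sync', description='Sync files to the device.')
    def device_sync(self):
        print('syncing...')

    @SubCommand('device', 'run', description='Run a program on the device.')
    def device_run(self):
        print('running...')

Each sub-command then appears under a single top-level entry in mach help instead of adding yet more top-level commands.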

The documentation for mach has also been rewritten. It is now exposed as part of the in-tree Sphinx documentation.

Everyone should thank Andrew Halberstadt for promptly reviewing the changes!

Advancing ContentGetting Tiles Data From Firefox

Following the launch of Tiles in November, I wanted to provide more information on how data is transmitted into and from Firefox.  Last week, I described how we get Tiles data into Firefox differently from the usual cookie-identified requests.  In this post, I will describe how we report on users’ interactions with Tiles.

As a reminder, we have three kinds of Tiles: the History Tiles, which were implemented in Firefox in 2012, Enhanced Tiles, where we have a custom creative design for a Tile for a site that a user has an existing relationship with, and Directory Tiles, where we place a Tile in a new tab page for users with no browsing history in their profile.  Enhanced and Directory Tiles may both be sponsored, involving a commercial relationship, or they may be Mozilla projects or causes, such as our Webmaker initiative.

 

We need to be able to report data on users’ interactions with Tiles for two main reasons:

  • to determine if the experience is a good one
  • to report to our commercial partners on volumes of interactions by Firefox users

And we do these things in accordance with our data principles both to set the standards we would like the industry to follow and, crucially, to maintain the trust of our users.

 

Unless a user has opted out by switching to Classic or Blank, Firefox currently sends a list of the Tiles on a user’s new tab page to Mozilla’s servers, along with data about the user’s interaction with the Tiles, e.g., view, click, or pin.

Directory and Enhanced Tiles are identified by a Tile id, (e.g., “Firefox for Android” Tile has an id of 499 for American English-speaking users while “Firefox pour Android” has an id of 510 for French-speaking users).  History Tiles do not have an id, so we can only know that the user saw a history screenshot but not what page — except for early release channel Telemetry related experiments, we do not currently send URL information for Tiles, although of course we are able to infer it for the Directory and Enhanced Tiles that we have sent to Firefox.

 

Our implementation of Tiles uses the minimal actionable dataset, and we protect that data with multiple layers of security.  This means:

  • cookie-less requests
  • encrypted transmission
  • aggressive cleaning of data

We also break up the data into smaller pieces that cannot be reconstructed to the original data.  When our server receives a list of seen Tiles from an IP address, we record that the specific individual Tiles were seen and not the whole list.

Sample POST graphic

Sample POST from opening a new tab

 

With the data aggregated across many users, we can now calculate how many total times a given Tile has been seen and visited.  By dividing the number of clicks by the number of views, we get a click-through-rate (CTR) that represents how valuable users find a particular tile, as well as a pin-rate and a block-rate.  This is sufficient for us to determine both if we think a Tile is useful for a user and also for us to report to a commercial partner.
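As a back-of-the-envelope illustration (not Input/Tiles production code), the per-Tile arithmetic described above is simply:

# Illustrative sketch only: compute the rates described above from aggregated
# per-Tile counts. The counts in the example call are made up.
def tile_rates(views, clicks, pins, blocks):
    if not views:
        return {'ctr': 0.0, 'pin_rate': 0.0, 'block_rate': 0.0}
    views = float(views)
    return {
        'ctr': clicks / views,
        'pin_rate': pins / views,
        'block_rate': blocks / views,
    }

print(tile_rates(views=1000000, clicks=1800, pins=140, blocks=25))

For example, the Webmaker Tile numbers quoted further down (roughly 1 billion views and 183 thousand clicks) work out to a CTR of about 0.02%.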

 

Calculating the CTR for each tile and comparing them helps us decide if a Tile is useful to many users.  We can already see that the most popular tiles are “Customize Firefox” and “Firefox for Android” (Tile 499, remember) both in terms of clicks and pins.

For an advertiser, we create reports from our aggregated data, and they in turn can see the traffic for their URLs and are able to measure goal conversions on their back end.  Since the Firefox 10th anniversary announcement, which included Tiles and the Firefox Developer Edition, we ran a Directory Tile for the Webmaker initiative.  After 25 days, it had generated nearly 1 billion views, 183 thousand clicks, and 14 thousand pins.

Webmaker Tile

The Webmaker Tile (static and rollover states)

The Webmaker team, meanwhile, are able to see the traffic coming in (as the Tile directs traffic to a distinct URL), and they are able to give attribution to the Tile and track conversions from there:

Webmaker Dashboard

Webmaker.org’s Analytics dashboard: 182,488 sessions and 3,551 new Webmaker users!

 

We started with a relatively straightforward implementation to be able to measure how users are interacting with Tiles.  But we’ve already gotten some good ideas on how to make things even better for improved accuracy with less data.  For example, we currently cannot accurately measure how many unique users have seen a given Tile, and traditionally unique identifiers are used to measure that, but HyperLogLog has been suggested as a privacy-protecting technique to get us that data.  A separate idea is that we can use statistical random sampling that doesn’t require all Firefox users to send data while still getting the numbers we need. We’ll test sampling through Telemetry experiments to measure site popularity, and we’ll share more when we get those results.
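As a rough sketch of that sampling idea (purely illustrative; this is not how Firefox or Telemetry actually implements it, and the rate is an assumption), each client would report with some fixed probability and the server would scale the sampled counts back up:

import random

SAMPLE_RATE = 0.01  # assumed 1% of clients report; a real rate would be tuned

def should_report():
    # Client side: decide whether this client participates in the sample.
    return random.random() < SAMPLE_RATE

def estimate_total(sampled_count, sample_rate=SAMPLE_RATE):
    # Server side: scale the sampled count up to an estimate of the total.
    return sampled_count / sample_rate

The trade-off is a small amount of statistical error in exchange for collecting data from far fewer users.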

We would love to hear your thoughts on how we treat users’ data to find the Tiles that users want.  And if you have ideas on how we can improve our data collection, please send them over as well!

Ed Lee on behalf of the Tiles team.

Mark SurmanDavid, Goliath and empires of the web

People in Mozilla have been talking a lot about radical participation recently. As Mitchell said recently, participation will be key to our success as we move into ‘the third era of Mozilla’ — the era where we find ways to be successful beyond the desktop browser.


This whole conversation has prompted me to reflect on how I think about radical participation today. And about what drew me to Mozilla in the first place more than five years ago.

For me, a big part of that draw was an image in my mind of Mozilla as the David who had knocked over Microsoft’s Goliath. Mozilla was the successful underdog in a fight I really cared about. Against all odds, Mozilla shook the foundation of a huge empire and changed what was possible with the web. This was magnetic. I wanted to be a part of that.

I started to think about this more the other day: what does it really mean for Mozilla to be David? And how do we win against future Goliaths?

Malcolm Gladwell wrote a book last year that provides an interesting angle on this. He said: we often take the wrong lesson from the David and Goliath story, thinking that it’s surprising that such a small challenger could fell such a large opponent.

Gladwell argues that Goliath was much more vulnerable than we think. He was large. But he was also slow, lumbering and had bad eyesight. Moreover, he used the most traditional fighting techniques of his time: the armour and brute force of infantry.

David, on the other hand, actually had a significant set of strategic advantages. He was nimble and good with a sling. A sling used properly, by the way, is a real weapon: it can project a rock at the speed of a .45 caliber pistol. Instead of confronting Goliath with brute force, he used a different and surprising technique to knock over his opponent. He wasn’t just courageous and lucky, he was smart.

Most other warriors would have seen Goliath as invincible. Not David: he was playing the game by his own rules.

In many ways, the same thing happened when we took on Microsoft and Internet Explorer. They didn’t expect the citizens of the web to rally against them: to build — and then choose by the millions — an unknown browser. Microsoft didn’t expect the citizens of the web to sling a rock at their weak spot, right between their eyes.

[chart]

As a community, radical participation was our sling and our rock. It was our strategic advantage and our element of surprise. And it is what shook the web loose from Microsoft’s imperial grip on the web.

Of course, participation still is our sling. It is still part of who we are as an organization and a global community. And, as the chart above shows, it is still what makes us different.

But, as we know, the setting has changed dramatically since Mozilla first released Firefox. It’s not just — or even primarily — the browser that shapes the web today. It’s not just the three companies in this chart that are vying for territorial claim. With the internet growing at breakneck speed, there are many Goliaths on many fronts. And these Goliaths are expanding their scope around the world. They are building empires.


This has me thinking a lot about empire recently: about how the places that were once the subjects of the great European empires are by and large the same places we call “emerging markets”. These are the places where billions of people will be coming online for the first time in coming years. They are also the places where the new economic empires of the digital age are most aggressively consolidating their power.

Consider this: In North America, Android has about 68% of smartphone market share. In most parts of Asia and Africa, Android market share is in the 90% range – give or take a few points by country. That means Google has a near monopoly not only on the operating system on these markets, but also on the distribution of apps and how they are paid for. Android is becoming the Windows 98 of emerging economies, the monopoly and the control point; the arbiter of what is possible.

Also consider that Facebook and WhatsApp together control 80% of the messaging market globally, and are owned by one company. More scary: when we do market research with new smartphone users in countries like Bangladesh and Kenya, we usually ask people: do you use the internet? Do you use the internet on your phone? The response is often: “What’s the Internet?” “What do you use your phone for?”, we ask. The response: “Oh, Facebook and WhatsApp.” Facebook’s internet is the only internet these people know of or can imagine.

It’s not the Facebooks and Googles of the world that concern me, per se. I use their products and in many cases, I love them. And I also believe they have done good in the world.

What concerns me is that, like the European powers in the 18th and 19th centuries, these companies are becoming empires that control both what is possible and what is imaginable. They are becoming monopolies that exert immense control over what people can do and experience on the web. And over what the web – and human society as a whole – may become.

One thing is clear to me: I don’t want this sort of future for the web. I want a future where anything is possible. I want a future where anything is imaginable. The web can be about these kinds of unlimited possibilities. That’s the web that I want everyone to be able to experience, including the billions of people coming online for the first time.

This is the future we want as Mozilla. And, as a community, we are going to need to take on some of these Goliaths. We are going to need to reach down into our pockets and pull out that rock. And we are going to need to get some practice with our sling.

The truth is: Mozilla has become a bit rusty with it. Yes, participation is still a key part of who we are. But, if we’re honest, we haven’t relied on it as much of late.

If we want to shake the foundations of today’s digital empires, we need to regain that practice and proficiency. And find new and surprising ways to use that power. We need to aim at new weak spots in the giant.

We may not know what those new and surprising tactics are yet. But there is an increasing consensus that we need them. Chris Beard has talked recently about thinking differently about participation and product, building participation into the actual features and experience of our software. And we have been talking for the last couple of years about the importance of web literacy — and the power of community and participation to get people teaching each other how to wield the web. These are the kinds of directions we need to take, and the strategies we need to figure out.

It’s not only about strategy, of course. Standing up to Goliaths and using participation to win are also about how we show up in the world. The attitude each of us embodies every day.

Think about this. Think about the image of David. The image of the underdog. Think about the idea of independence. And, then think of the task at hand: for all of us to bring more people into the Mozilla community and activate them.

If we as individuals and as an organization show up again as a challenger — like David — we will naturally draw people into what we’re doing. It’s a part of who we are as Mozillians, and it’s magnetic when we get it right.


Filed under: mozilla, poetry, webmakers

Yunier José Sosa VázquezFirefox and Thunderbird channels updated

Updates are available for Firefox and Thunderbird. This includes version 15 of the Adobe Flash Player plugin and the Android versions of Firefox.

Release: Firefox 34.0.5, Thunderbird 31.3.0, Firefox Mobile 34.0

Beta: Firefox 35.0b4, Firefox Mobile 35.0b4

Aurora/Developer Edition: Firefox 36.0a2, Firefox Mobile 36.0a2 (located in the Nightly channel)

Nightly: Firefox 37 (with separate processes thanks to Electrolysis) and Thunderbird 36

This is an ideal time to update before the end of the year and bring the latest Firefox and Thunderbird to our friends where we live.

Go to Downloads

Andrew HalberstadtHow to Consume Structured Test Results

You may not know that most of our test harnesses are now outputting structured logs (thanks in large part to :chmanchester's tireless work). Saying a log is structured simply means that it is in a machine readable format; in our case, each log line is a JSON object. When streamed to a terminal or treeherder log, these JSON objects are first formatted into something that is human readable, aka the same log format you're already familiar with (which is why you may not have noticed this).
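To make that concrete, here is a minimal sketch of reading one of these structured logs offline. The file name and the exact field names ('action', 'status', 'test') are assumptions based on the general shape of mozlog records, not an exact schema.

import json

# Minimal sketch: walk a structured log file and print non-passing results.
# 'mochitest_structured.log' is a hypothetical file name; the keys below follow
# the general mozlog shape but may differ by harness and version.
with open('mochitest_structured.log') as f:
    for line in f:
        record = json.loads(line)
        if record.get('action') == 'test_status' and record.get('status') != 'PASS':
            print('%s: %s' % (record.get('test'), record.get('status')))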

While this might not seem all that exciting it lets us do many things, such as change the human readable formats and add metadata, without needing to worry about breaking any fragile regex based log parsers. We are now in the process of updating much of our internal tooling to consume these structured logs. This will let us move faster and provide a foundation on top of which we can build all sorts of new and exciting tools that weren't previously possible.

But the benefits of structured logs don't need to be constrained to the Tools and Automation team. As of today, anyone can consume structured logs for use in whatever crazy tools they can think of. This post is a brief guide on how to consume structured test results.

A High Level Overview

Before diving into code, I want to briefly explain the process at a high level.

  1. The test harness is invoked in such a way that it streams a human formatted log to stdout, and a structured log to a file.
  2. After the run is finished, mozharness uploads the structured log to a server on AWS using a tool called blobber. Mozharness stores a map of uploaded file names to blobber urls as a buildbot property. The structured logs are just one of several files uploaded via blobber.
  3. The pulse build exchange publishes these buildbot properties, though the messages are based on raw buildbot events and can be difficult to consume directly.
  4. A tool called pulsetranslator consumes messages from the build exchange, cleans them up a bit and re-publishes them on the build/normalized exchange.
  5. Anyone creates a NormalizedBuildConsumer in pulse, finds the url to the structured log and downloads it.

Sound complicated? Don't worry, the only step you're on the hook for is step 5.

Creating a Pulse Consumer

For anyone not aware, pulse is a system at Mozilla for publishing and subscribing to arbitrary events. Pulse has all sorts of different applications, one of which is receiving notifications whenever a build or test job has finished.

The Setup

First, head on over to https://pulse.mozilla.org/ and create an account. You can sign in with Persona, and then create one or more pulse users. Next you'll need to install the mozillapulse python package. First make sure you have pip installed, then:

$ pip install mozillapulse

As usual, I recommend doing this in a virtualenv. That's it, no more setup required!

The Execution

Creating a pulse consumer is pretty simple. In this example we'll download all logs pertaining to mochitests on mozilla-inbound and mozilla-central. This example depends on the requests package; you'll need to pip install it if you want to run it locally:

import json
import sys
import traceback

import requests

from mozillapulse.consumers import NormalizedBuildConsumer

def run(args=sys.argv[1:]):
    pulse_args = {
        # a string to identify this consumer when logged into pulse.mozilla.org
        'applabel': 'mochitest-log-consumer',

        # each message contains a topic. Only messages that match the topic specified here will
        # be delivered. '#' is a wildcard, so this topic matches all messages that start with
        # 'unittest'.
        'topic': 'unittest.#',

        # durable queues will store messages inside pulse even if your consumer goes offline for
        # a bit. Otherwise, any messages published while the consumer is not explicitly
        # listening will be lost forever. Keep it set to False for testing purposes.
        'durable': False,

        # the user you created on pulse.mozilla.org
        'user': 'ahal',

        # the password you created for the user
        'password': 'hunter1',

        # a callback that will get invoked on each build event
        'callback': on_build_event,
    }


    pulse = NormalizedBuildConsumer(**pulse_args)

    while True:
        try:
            pulse.listen()
        except KeyboardInterrupt:
            # without this ctrl-c won't work!
            raise
        except IOError:
            # sometimes you'll get a socket timeout. Just call listen again and all will be
            # well. This was fairly common and probably not worth logging.
            pass
        except:
            # it is possible for rabbitmq to throw other exceptions. You likely
            # want to log them and move on.
            traceback.print_exc()


def on_build_event(data, message):
    # each message needs to be acknowledged. This tells the pulse queue that the message has been
    # processed and that it is safe to discard. Normally you'd want to ack the message when you know
    # for sure that nothing went wrong, but this is a simple example so I'll just ack it right away.
    message.ack()

    # pulse data has two main properties, a payload and metadata. Normally you'll only care about
    # the payload.
    payload = data['payload']
    print('Got a {} job on {}'.format(payload['test'], payload['tree']))

    # ignore anything not from mozilla-central or mozilla-inbound
    if payload['tree'] not in ('mozilla-central', 'mozilla-inbound'):
        return

    # ignore anything that's not mochitests
    if not payload['test'].startswith('mochitest'):
        return

    # ignore jobs that don't have the blobber_files property
    if 'blobber_files' not in payload:
        return

    # this is a message we care about, download the structured log!
    for filename, url in payload['blobber_files'].iteritems():
        if filename == 'raw_structured_logs.log':
            print('Downloading a {} log from revision {}'.format(
                   payload['test'], payload['revision']))
            r = requests.get(url, stream=True)

            # save the log
            with open('mochitest.log', 'wb') as f:
                for chunk in r.iter_content(1024):
                    f.write(chunk)
            break

    # now time to do something with the log! See the next section.

if __name__ == '__main__':
    sys.exit(run())

A Note on Pulse Formats

Each pulse publisher can have its own custom topics and data formats. The best way to discover these formats is via a tool called pulse-inspector. To use it, type in the exchange and routing key, click Add binding then Start Listening. You'll see messages come in which you can then inspect to get an idea of what format to expect. In this case, use the following:

Pulse Exchange: exchange/build/normalized
Routing Key Pattern: unittest.#

Consuming Log Data

In the last section we learned how to obtain a structured log. Now we learn how to use it. All structured test logs follow the same structure, which you can see in the mozlog documentation. A structured log is a series of line-delimited JSON objects, so the first step is to decode each line:

lines = [json.loads(l) for l in log.splitlines()]
for line in lines:
    # do something

If you have a large number of log lines, you'll want to use a generator. Another common use case is registering callbacks on specific actions. Luckily, mozlog provides several built-in functions for dealing with these common cases. There are two main approaches, registering callbacks or creating log handlers.
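
As a minimal sketch of that generator approach (assuming the mochitest.log file downloaded earlier, and relying only on the standard json module), you could write something like this:

import json

def read_structured_log(path):
    # lazily yield one parsed JSON object per log line instead of loading
    # the whole file into memory at once
    with open(path, 'r') as log:
        for line in log:
            line = line.strip()
            if line:
                yield json.loads(line)

for item in read_structured_log('mochitest.log'):
    # every entry carries an 'action' field, e.g. 'test_start' or 'test_end'
    if item['action'] == 'test_end':
        print(item['test'])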

Examples

The rest depends on what you're trying to accomplish. It now becomes a matter of reading the docs and figuring out how to do it. Below are several examples to help get you started.

List all failed tests by registering callbacks:

from mozlog.structured import reader

failed_tests = []
def append_if_failed(log_item):
    if 'expected' in log_item:
        failed_tests.append(log_item['test'])

with open('mochitest.log', 'r') as log:
    iterator = reader.read(log)
    action_map = { 'test_end': append_if_failed }
    reader.each_log(iterator, action_map)

print('\n'.join(failed_tests))

List the time it took to run each test using a log handler:

import json

from mozlog.structured import reader

class TestDurationHandler(reader.LogHandler):
    test_duration = {}
    start_time = None

    def test_start(self, item):
        self.start_time = item['timestamp']

    def test_end(self, item):
        duration = item['timestamp'] - self.start_time
        self.test_duration[item['test']] = duration

handler = TestDurationHandler()
with open('mochitest.log', 'r') as log:
    iterator = reader.read(log)
    reader.handle_log(iterator, handler)

print(json.dumps(handler.test_duration, indent=2))

How to consume the log is really up to you. The built-in methods can be helpful, but are by no means required. Here is a more complicated example that receives structured logs over a socket, and spawns an arbitrary number of threads to process and execute callbacks on them.

If you have questions, comments or suggestions, don't hesitate to speak up!

Finally, I'd also like to credit Ahmed Kachkach, an intern who not only worked on structured logging in mochitest over the summer, but also created the system that manages pulse users and queues.

William ReynoldsRemoving “Legacy” vouches on Mozillians.org

We announced changes to our vouching system on mozillians.org on July 29. These changes require you to receive a new vouch by December 18 to keep your vouched status. On that day we will remove “Legacy” vouches, which are vouches that do not have a description and were made before July 29. This is the last step in having the site fully transition to the improved vouching system that gives a shared understanding of vouching and describes each vouch.

Being “vouched” means you have made a meaningful contribution to the Project and, because of that, have access to special content like all profiles on mozillians.org, certain content on Air Mozilla and Mozilla Moderator, and messages that are sent to vouched Mozillians. Having to get re-vouched makes our community directory, and vouching overall, more meaningful.

Since we first announced this change, 3,600 out of the 6,000 Mozillians have been re-vouched. Cheers! That also means about 2,400 will lose their vouched status unless they get vouched by a Mozillian who has vouching permissions by December 18.

Here’s what you need to do:

– Check your profile to see if you have a new vouch (anything other than a “Legacy vouch”). All Summit 2013 participants and paid staff have already received a new vouch. If you don’t have a new vouch, ask someone who knows your contributions to vouch for you.

– Help those who have made meaningful contributions get a new vouch (if they need one). You can vouch for others if you have three or more vouches on your profile.

All “Legacy vouches” (those before July 29) will be removed on December 18, and only contributors with a new (non-Legacy) vouch will remain vouched. Losing your vouched status means you will not be able to access vouched Mozillians content or get Mozillians email communications until someone vouches for you.

You can learn more on the Vouching FAQ wiki page.

PomaxLet's make a Firefox Extension, the painless way

Ever had a thing you really wanted to customise about Firefox, but you couldn't because it wasn't in any regular menu, advanced menu, or about:config?

For instance, you want to be able to delete elements on a page for peace of mind from the context menu. How the heck do you do that? Well, with the publication of the new node-based jpm, the answer to that question is "pretty dang simply"...

Let's make our own Firefox extension with a "Delete element" option added to the context menu:

a screenshot of the Firefox page context menu with a 'delete element' option

We're going to make that happen in five steps.

  1. Install jpm -- in your terminal simply run: npm install -g jpm (make sure you have node.js installed) and done (this is mostly prerequisite to developing an extension, so you only have to do this once, and then never again. For future extensions, you start at step 2!)
  2. Create a dir for working on your extension wherever you like, navigate to it in the terminal and run: jpm init to set up the standard files necessary to build your extension. Good news: it's very few files!
  3. Edit the index.js file that command generated, writing whatever code you need to do what you want to get done,
  4. Turn your code into an .xpi extension by running : jpm xpi,
  5. Install the extension by opening the generated .xpi file with Firefox

Of course, step (3) is the part that requires some effort, but let's run through this together. We're going to pretty much copy/paste the code straight from the context menu API documentation:

      // we need to make sure we have a hook into "things" we click on:
  1:  var self = require("sdk/self");

      // and we'll be using the context menu, so let's make sure we can:
  2:  var contextMenu = require("sdk/context-menu");

      // let's add a menu item!
  3:  var menuItem = contextMenu.Item({
        // the label is pretty obvious...
  4:    label: "Delete Element",

        // the context tells Firefox which things should have this in their context
        // menu, as there are quite a few elements that get "their own" menu,
        // like "the page" vs "an image" vs "a link". .. We pretty much want
        // everything on a page, so we make that happen:
  5:    context: contextMenu.PredicateContext(function(data) { return true; }),

        // and finally the script that runs when we select the option. Delete!
  6:    contentScript: 'self.on("click", function (node, data) { node.outerHTML = ""; });'
      });

The only changes here are that we want "delete" for everything, so the context is simply "for anything that the context menu opens up on, consider that a valid context for our custom script" (which we do by using the widest context possible on line 5), and of course the script itself is different because we want to delete nodes (line 6).

The contentScript property is a string, so we're a little restricted in what we can do without all manner of fancy postMessages, but thankfully we don't need it: the addon mechanism will always call the contentScript function with two arguments, "node" and "data", and the "node" argument is simply the HTML element you clicked on, which is what we want to delete. So we do! We don't even try to be clever here, we simply set the element's .outerHTML property to an empty string, and that makes it vanish from the page.

If you expected more work, then good news: there isn't any, we're already done! Seriously: run jpm run yourself to test your extension, and after verifying that it indeed gives you the new "Delete element" option in the context menu and deletes nodes when used, move on to steps (4) and (5) for the ultimate control of your browser.

Because here's the most important part: the freedom to control your online experience, and Firefox, go hand in hand.

Mark CôtéSearching Bugzilla

BMO currently supports five—count ‘em, five—ways to search for bugs. Whenever you have five different ways to perform a similar function, you can be pretty sure the core problem is not well understood. Search has been rated, for good reason, one of the least compelling features of Bugzilla, so the BMO team want to dig in there and make some serious improvements.

At our Portland get-together a couple weeks ago, we talked about putting together a vision for BMO. It’s a tough problem, since BMO is used for so many different things. We did, however, manage to get some clarity around search. Gerv, who has been involved in the Bugzilla project for quite some time, neatly summarized the use cases. People search Bugzilla for only two reasons:

  • to find a set of bugs, or
  • to find a specific bug.

That’s it. The fact that BMO has five different searches, though, means either we didn’t know that, or we just couldn’t find a good way to do one, or the other, or both.

We’ve got the functionality of the first use case down pretty well, via Advanced Search: it helps you assemble a set of criteria of almost limitless specificity that will result in a list of bugs. It can be used to determine what bugs are blocking a particular release, what bugs a particular person has assigned to them, or what bugs in a particular Product have been fixed recently. Its interface is, admittedly, not great. Quick Search was developed as a different, text-based approach to Advanced Search; it can be quicker to use but definitely isn’t any more intuitive. Regardless, Advanced Search fulfills its role fairly well.

The second use of Search is how you’d answer the question, “what was that bug I was looking at a couple weeks ago?” You have some hazy recollection of a bug. You have a good idea of a few words in the summary, although you might be slightly off, and you might know the Product or the Assignee, but probably not much else. Advanced Search will give you a huge, useless result set, but you really just want one specific bug.

This kind of search isn’t easy; it needs some intelligence, like natural-language processing, in order to give useful results. Bugzilla’s solutions are the Instant and Simple searches, which eschew the standard Bugzilla::Search module that powers Advanced and Quick searches. Instead, they do full-text searches on the Summary field (and optionally in Comments as well, which is super slow). The results still aren’t very good, so BMO developers tried outsourcing the feature by adding a Google Search option. But despite Google being a great search engine for the web, it doesn’t know enough about BMO data to be much more useful, and it doesn’t know about new or confidential bugs at all.

Since Bugzilla’s search engines were originally written, however, there have been many advances in the field, especially in FLOSS. This is another place where we need to bring Bugzilla into the modern world; MySQL full-text searches are just not good enough. In the upcoming year, we’re going to look into new approaches to search, such as running different databases in tandem to exploit their particular abilities. We plan to start with experiments using Elasticsearch, which, as the name implies, is very good at searching. By standing up an instance beside the main MySQL db and mirroring bug data over, we can refer specific-bug searches to it; even though we’ll then have to filter based on standard bug-visibility rules, we should have a net win in search times, especially when searching comments.
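
To make that a bit more concrete, here is a rough sketch (not BMO's actual implementation) of what referring a specific-bug search to a mirrored Elasticsearch index could look like, using the elasticsearch Python client; the index name, document fields and example query are all hypothetical:

from elasticsearch import Elasticsearch

# hypothetical local Elasticsearch instance mirroring bug summaries
es = Elasticsearch()  # defaults to localhost:9200

# mirror a bug over from the main database (the fields here are made up)
es.index(index='bugs', doc_type='bug', id=123456, body={
    'summary': 'Crash when opening a new tab while downloading',
    'product': 'Firefox',
})

# a "what was that bug I saw a couple weeks ago?" style search:
# a forgiving full-text match on the summary field
results = es.search(index='bugs', body={
    'query': {'match': {'summary': 'crash new tab download'}}
})

for hit in results['hits']['hits']:
    # standard bug-visibility rules would still have to be applied here
    print('{} {}'.format(hit['_id'], hit['_source']['summary']))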

In sum, Mozilla developers, we understand your tribulations with Bugzilla search, and we’re on it. After all, we all have a reputation to maintain as the Godzilla of Search Engines!

Henrik SkupinFirefox Automation report – week 41/42 2014

In this post you can find an overview about the work happened in the Firefox Automation team during week 41 and 42.

With the beginning of October we also have some minor changes in responsibilities. While our team members from SoftVision mainly take care of any kind of Mozmill test related requests and CI failures, Henrik does all the rest, including the framework and the maintenance of Mozmill CI.

Highlights

With support for testing all locales in Mozmill CI for any Firefox beta and final release, Andreea finished her blacklist patch. With it we can easily mark locales not to be tested, and get rid of the long whitelist entries.

We spun up our first OS X 10.10 machine in our staging environment of Mozmill CI for testing the new OS version. We hit a couple of issues, especially some incompatibilities with mozrunner, which need to be fixed before we can start running our tests on 10.10.

In the second week of October Teodor Druta joined the Softvision team, and he will assist all the others with working on Mozmill tests.

But we also had to fight a lot with Flash crashes on our testing machines. We have seen about 23 crashes per day on Windows machines, all with the regular release version of Flash, which we re-installed because a crash we had seen before was fixed. But the healthy period did not last long, and we had to revert back to the debug version without the protected mode. Let's see how long we have to keep the debug version active.

Individual Updates

For more granular updates of each individual team member please visit our weekly team etherpad for week 41 and week 42.

Meeting Details

If you are interested in further details and discussions you might also want to have a look at the meeting agenda, the video recording, and notes from the Firefox Automation meetings of week 41 and week 42.

Andy McKayDevelopers First

A while back we developed Marketplace Payments. The first version of those was for Firefox OS and it was tough. There were lots of things happening at once: building out a custom API with a payment provider, a backend to talk to our payment provider through multiple security hoops, integrating the relatively new Persona, working on the Trusted UI and mozPay and so on.

At the moment we are prototyping and shipping desktop payments as part of our final steps in Marketplace Payments. One thing that became clear a while ago was that desktop payments are much, much, much easier to use, test and debug.

Desktop payments are easier for the developers who work on payments. That means they are easier to get team members working on, easier to demo, easier to record, easier to debug, easier to test and so on. That dramatically decreases the development time.

In the meantime we've also built out things that make this much easier: a Docker development environment that sets things up correctly and a fake backend so you don't need to process money to test things out.

Hindsight is a wonderful thing, but at the time we were actively discouraged from doing desktop development. "Mobile first" and "Don't slow down mobile development".

But inadvertently we slowed down mobile development by not being developer first.

Nicholas NethercoteUsing Gmail filters to identify important Bugzilla mail in 2014

Many email filtering systems are designed to siphon each email into a single destination folder. Usually you have a list of rules which get applied in order, and as soon as one matches an email the matching process ends.

Gmail’s filtering system is different; it’s designed to add any number of labels to each email, and the rules don’t get applied in any particular order. Sometimes it’s really useful to be able to apply multiple labels to an email, but if you just want to apply one in a fashion that emulates folders, it can be tricky.

So here’s a non-trivial example of how I filter bugmail into two “folders”. The first “folder” contains high-priority bugmail.

  • Review/feedback/needinfo notifications.
  • Comments in bugs that I filed, am assigned to, or am CC’d on.
  • Comments in secure bugs.
  • Comments in bugs in the DMD and about:memory components.

For the high priority bugmail, on Gmail’s “Create a Filter” screen, in the “From:” field I put:

bugzilla-daemon@mozilla.org

and in the “Has the words:” field I put:

"you are the assignee" OR "you reported" OR "you are on the CC list" OR subject:"granted:" OR subject:"requested:" OR subject:"canceled:" OR subject:"Secure bug" OR "Product/Component: Core :: DMD" OR "Product/Component: Toolkit :: about:memory" OR "Your Outstanding Requests"

For the low priority bugmail, on Gmail’s “Create a Filter” screen, in the “From:” field put:

bugzilla-daemon@mozilla.org

and in the “Doesn’t have:” field put:

("you are the assignee" OR "you reported" OR "you are on the CC list" OR subject:"granted:" OR subject:"requested:" OR subject:"canceled:" OR subject:"Secure bug" OR "Product/Component: Core :: DMD" OR "Product/Component: Toolkit :: about:memory" OR "Your Outstanding Requests")

(I’m not certain if the parentheses are needed here. It’s otherwise identical to the contents in the previous case.)

I’ve modified them a few times and they work very well for me. Everyone else will have different needs, but this might be a useful starting point.

This is just one way to do it. See here for an alternative way. (Update: Byron Jones pointed out that my approach assumes that the wording used in email bodies won’t change, and so the alternative is more robust.)

Finally, if you’re wondering about the “in 2014” in the title of this post, it’s because I wrote a very similar post four years ago, and my filters have evolved slightly since then.

Will Kahn-GreeneDennis v0.6 released! Line numbers, double vowels, better cli-fu, and better output!

What is it?

Dennis is a Python command line utility (and library) for working with localization. It includes:

  • a linter for finding problems in strings in .po files, like invalid Python variable syntax, which leads to exceptions (see the short illustration after this list)
  • a template linter for finding problems in strings in .pot files that make translators' lives difficult
  • a statuser for seeing the high-level translation/error status of your .po files
  • a translator for strings in your .po files to make development easier
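
To illustrate the first point above, here is a tiny stand-alone example (not Dennis itself) of why an invalid or renamed Python variable in a translated string blows up at runtime:

english = "Hello %(name)s!"
translated = "Bonjour %(nom)s!"   # the translator renamed the variable

print(english % {"name": "Alice"})      # works fine
print(translated % {"name": "Alice"})   # raises KeyError: 'nom'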

v0.6 released!

Since v0.5, I've done the following:

  • Rewrote the command line handling using click and added an exception handler.
  • Merged the lint and linttemplate commands. Why should you care which file you're linting when the linter can figure it out for you?
  • Added the whimsical double vowel transform.
  • Added line numbers in the lint output. This will make it possible to find those pesky problematic strings in your .po/.pot files.
  • Added a line reporter to the linter.

Getting pretty close to what I want for a 1.0, so I'm pretty excited about this version.

Denise update

I've updated Denise with the latest Dennis and moved it to a better url. Lint your .po/.pot files via web service using http://denise.paas.allizom.org/.

Where to go for more

For more specifics on this release, see here: http://dennis.readthedocs.org/en/latest/changelog.html#version-0-6-december-16th-2014

Documentation and quickstart here: http://dennis.readthedocs.org/en/v0.6/

Source code and issue tracker here: https://github.com/willkg/dennis

Source code and issue tracker for Denise (Dennis-as-a-service): https://github.com/willkg/denise

6 out of 8 employees said Dennis helps them complete 1.5 more deliverables per quarter.

Nathan Froydwhat’s new in xpcom

I was talking to somebody at Mozilla’s recent all-hands meeting in Portland, and in the course of attempting to provide a reasonable answer for “What have you been doing lately?”, I said that I had been doing a lot of reviews, mostly because of my newfound duties as XPCOM module owner. My conversational partner responded with some surprise that people were still modifying code in XPCOM at such a clip that reviews would be a burden. I responded that while the code was not rapidly changing, people were still finding reasons to do significant modifications to XPCOM code, and I mentioned a few recent examples.

But in light of that conversation, it’s good to broadcast some of the work I’ve had the privilege of reviewing this year.  I apologize in advance for not citing everybody; in particular, my reviews email folder only goes back to August, so I have a very incomplete record of what I’ve reviewed in the past year.  In no particular order:

Michael KaplyManaging Firefox with Group Policy and PolicyPak

A lot of people ask me how to manage Firefox using Windows Group Policy. To that end, I have been working with a company called PolicyPak to help enhance their product to have more of the features that people are asking for (not just controlling preferences.) It's taken about a year, but the results are available for download now.

You can now manage the following things (and more) using PolicyPak, Group Policy and Firefox:

  • Set and lock almost all preference settings (homepage, security, etc) plus most settings in about:config
  • Set site specific permissions for pop-ups, cookies, camera and microphone
  • Add or remove bookmarks on the toolbar or in the bookmarks folder
  • Blacklist or whitelist any type of add-on
  • Add or remove certificates
  • Disable private browsing
  • Turn off crash reporting
  • Prevent access to local files
  • Always clear saved passwords
  • Disable safe mode
  • Remove Firefox Sync
  • Remove various buttons from Options

If you want to see it in action, you can check out these videos.

And if you've never heard of PolicyPak, you might have heard of the guy who runs it - Jeremy Moskowitz. He's a Group Policy MVP and literally wrote the book on Group Policy.

On a final note, if you decide to purchase, please let them know you heard about it from me.

Jennie Rose HalperinLeaving Mozilla as staff

December 31 will be my last day as paid staff on the Community Building Team at Mozilla.

One year ago, I settled into a non-stop flight from Raleigh, NC to San Francisco and immediately fell asleep. I was exhausted; it was the end of my semester and I had spent the week finishing a difficult databases final, which I emailed to my professor as soon as I reached the hotel, marking the completion of my coursework in Library Science and the beginning of my commitment to Mozilla.

The next week was one of the best of my life. While working, hacking, and having fun, I started on the journey that has carried me through the past exhilarating months. I met more friendly faces than I could count and felt myself becoming part of the Mozilla community, which has embraced me. I’ve been proud to call myself a Mozillian this year, and I will continue to work for the free and open Web, though currently in a different capacity as a Rep and contributor.

I’ve met many people through my work and have been universally impressed with your intelligence, drive, and talent. To David, Pierros, William, and particularly Larissa, Christie, Michelle, and Emma, you have been my champions and mentors. Getting to know you all has been a blessing.

I’m not sure what’s next, but I am happy to start on the next step of my career as a Mozillian, a community mentor, and an open Web advocate. Thank you again for this magical time, and I hope to see you all again soon. Let me know if you find yourself in Boston! I will be happy to hear from you and pleased to show you around my hometown.

If you want to reach out, find me on IRC: jennierose. All the best wishes for a happy, restful, and healthy holiday season.

Mike HommeyOne step closer to git push to mercurial

In case you missed it, I’m working on a new tool to use mercurial remotes in git. Since my previous post, I landed several fixes making clone and pull more reliable:

  • Of 247316 unique changesets in the various mozilla-* repositories, now only two (but both in fact come from the same patch, one of the changesets being a backport to aurora of the other) are “corrupted” because their mercurial date has a timezone with a seconds component.
  • Of 23542 unique changesets in the canonical mercurial repository, only three are “corrupted” because their raw mercurial data contains, for an unknown reason, a whitespace after the timezone.

By corrupted, here, I mean that the round-trip hg->git->hg doesn’t lead to matching their sha1. They will be fixed eventually, but I haven’t decided how yet, because they’re really edge cases. They’re old enough that they don’t really matter for push anyways.

Pushing to mercurial, however, is still not there, but it’s getting closer. It involves several operations:

  • Negotiating with the mercurial server what it doesn’t have that we do.
  • Creating mercurial changesets, manifests and files for local git commits that were not imported from mercurial.
  • Creating a bundle of the mercurial changesets, manifests and files that we have that the server doesn’t.
  • Pushing that bundle to the server.

The first step is mostly covered by the pull code, which does a similar negotiation. I now have the third step covered (although I cheated around the “corruptions” mentioned above):

$ git clone hg::http://selenic.com/hg
Cloning into 'hg'...
(...)
Checking connectivity... done.
$ cd hg
$ git hgbundle > ../hg.hg
$ mkdir ../hg2
$ cd ../hg2
$ hg init
$ hg unbundle ../hg.hg
adding changesets
adding manifests
adding file changes
added 23542 changesets with 44305 changes to 2272 files
(run 'hg update' to get a working copy)
$ hg verify
checking changesets
checking manifests
crosschecking files in changesets and manifests
checking files
2272 files, 23542 changesets, 44305 total revisions

Note: that hgbundle command won’t actually exist. It’s just an intermediate step allowing me to work incrementally.

In case you wonder what happens when the bundle contains bad data, mercurial fortunately rejects it:

$ cd ../hg
$ git hgbundle-corrupt > ../hg.hg
$ mkdir ../hg3
$ cd ../hg3
$ hg unbundle ../hg.hg
adding changesets
transaction abort!
rollback completed
abort: integrity check failed on 00changelog.i:3180!

Andrea MarchesiniPriv8 is out!

Download page: click here

What is priv8? This is a Firefox addon that uses part of the security model of Firefox OS to create sandboxed tabs. Each sandbox is a completely separated world: it doesn’t share cookies, storage, and a lot of other stuff with the rest of Firefox, but just with other tabs from the same sandbox.

Each sandbox has a name and a color, therefore it will always be easy to identify which tab is sandboxed.

Also, these sandboxes are permanent! So, when you open one of them a second time, maybe after a restart, that sandbox will still have the same cookies, same storage, etc., just as you left them the previous time.

You can also switch between sandboxes using the context menu for the tab.

Here’s an example: with priv8 you can read your gmail webmail in a tab, and another gmail webmail in another tab at the same time. Still, you can be logged in on Facebook in a tab and not in the others. This is nice!

Moreover, if you are a web developer and you want to test a website using multiple accounts, priv8 gives you the opportunity to have each account in a sandboxed tab. Much easier than having multiple profiles or logging in and out manually every time!

Is it stable? I don’t know :) It works, but more testing must be done. Help needed!

Known issues?

  • window.open() doesn’t work from a sandbox
  • e10s is not supported yet.
  • The UI must be improved.

Screenshots:

The manager

This is the manager, where you can “manage” your sandboxes.

The panel

The panel is always accessible from the firefox toolbar.

Context menu

The context menu allows you to switch between sandboxes for the current tab. This will reload the tab after the switch.

3 gmail tabs

3 separate instances of Gmail at the same time.

License: Priv8 is released under Mozilla Public License.
Source code: bakulf :: priv8

Byron Joneshappy bmo push day!

the following changes have been pushed to bugzilla.mozilla.org:

  • [1063818] Updates to form.dev-engagement-event
  • [1111954] Updates to Spreadsheet Data in form.dev-engagement-event
  • [1092578] Decide if an email needs to be encrypted at the time it is generated, not at the time it is sent
  • [1107275] Include Build.PL file for bmo/4.2 to install Perl dependencies (useful for Travis CI, etc.)
  • [829358] Changing the name of a private attachment in an unhidden bug results in the name change being sent unencrypted
  • [1104291] The form.web.bounty page does not say it’s a bounty form
  • [1105585] Fix bug bounty form to validate its input more and relax the restriction on the paid field to include -+? suffix
  • [1105155] Indicate that an existing comment has been modified for tracking flags with prefill text
  • [1105745] changes made via the bounty form are not emailed immediately
  • [1111862] HTML code injection in review history page

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla

Cameron McCormackSubmission

254 pages, eleven and a half years of my life.

Me submitting my thesis at the Monash Institute of Graduate Research office.

My thesis front cover: Authoring and Publishing Adaptive Diagrams.

Now for the months-long wait for the examiners to review it.

Paul RougetFirefox.html screencast

Firefox.html screencast. Contribute: http://github.com/paulrouget/firefox.html.

Youtube video: https://www.youtube.com/watch?v=IBzrCmGVDkA

David HumphreyVideo killed the radio star

One of the personal experiments I'm considering in 2015 is a conscious movement away from video-based participation in open source communities. There are a number of reasons, but the main one is that I have found the preference for "realtime," video-based communication media inevitably leads to ever narrowing circles of interaction, and eventually, exclusion.

I'll speak about Mozilla, since that's the community I know best, but I suspect a version of this is happening in other places as well. At some point in the past few years, Mozilla (the company) introduced a video conferencing system called Vidyo. It's pretty amazing. Vidyo makes it trivial to setup a virtual meeting with many people simultaneously or do a 1:1 call with just one person. I've spent hundreds of hours on Vidyo calls with Mozilla, and other than the usual complaints one could level against meetings in general, I've found them very productive and useful, especially being able to see and hear colleagues on the other side of the country or planet.

Vidyo is so effective that for many parts of the project, it has become the default way people interact. If I need to talk to you about a piece of code, for example, it would be faster if we both just hopped into Vidyo and spent 10 minutes hashing things out. And so we do. I'm guilty of this.

I'm talking about Vidyo above, but substitute Skype or Google Hangouts or appear.in or some cool WebRTC thing your friend is building on Github. Video conferencing isn't a negative technology, and provides some incredible benefits. I believe it's part of what allows Mozilla to be such a successful remote-friendly workplace (vs. project). I don't believe, however, that it strengthens open source communities in the same way.

It's possible on Vidyo to send an invitation URL to someone without an account (you need an account to use it, by the way). You have to be invited, though. Unlike irc, for example, there is no potential for lurking (I spent years learning about Mozilla code by lurking on irc in #developers). You're in or you're out, and people need to decide which it will be. Some people work around this by recording the calls and posting them online. The difficulty here is that doing so converts what was participation into performance--one can watch what happened, but not engage it, not join the conversation and therefore the decision making. And the more we use video, the more likely we are to have that be where we make decisions, further making it difficult for those not in the meeting to be part of the discussion.

Even knowing that decisions have been made becomes difficult in a world where those decisions aren't sticky, and go un-indexed. If we decided in a mailing list, bug, irc discussion, Github issue, etc. we could at least hope to go back and search for it. So too could interested members of the community, who may wish to follow along with what's happening, or look back later when the details around how the decision came to be become important.

I'll go further and suggest that in global, open projects, the idea that we can schedule a "call" with interested and affected parties is necessarily flawed. There is no time we can pick that has us all, in all timezones, able to participate. We shouldn't fool ourselves: such a communication paradigm is necessarily geographically rooted; it includes people here, even though it gives the impression that everyone and anyone could be here. They aren't. They can't be. The internet has already solved this problem by privileging asynchronous communication. Video is synchronous.

Not everything can or should be open and public. I've found that certain types of communication work really well over video, and we get into problems when we do too much over email, mailing lists, or bugs. For example, a conversation with a person that requires some degree of personal nuance. We waste a lot of time, and cause unnecessary hurt, when we always choose open, asynchronous, public communication media. Often scheduling an in person meeting, getting on the phone, or using video chat would allow us to break through a difficult impasse with another person.

But when all we're doing is meeting as a group to discuss something public, I think it's worth asking the question: why aren't we engaging in a more open way? Why aren't we making it possible for new and unexpected people to observe, join, and challenge us? It turns out it's a lot easier and faster to make decisions in a small group of people you've pre-chosen and invited; but we should consider what we give up in the name of efficiency, especially in terms of diversity and the possibility of community engagement.

When I first started bringing students into open source communities like Mozilla, I liked to tell them that what we were doing would be impossible with other large products and companies. Imagine showing up at the offices of Corp X and asking to be allowed to sit quietly in the back of the conference room while the engineers all met. Being able to take them right into the heart of a global project, uninvited, and armed only with a web browser, was a powerful statement; it says: "You don't need permission to be one of us."

I don't think that's as true as it used to be. You do need permission to be involved with video-only communities, where you literally have to be invited before taking part. Where most companies need to guard against leaks and breaches of many kinds, an open project/company needs to regularly audit to ensure that its process is porous enough for new things to get in from the outside, and for those on the inside to regularly encounter the public.

I don't know what the right balance is exactly, and as with most aspects of my life where I become unbalanced, the solution is to try swinging back in the other direction until I can find equilibrium. In 2015 I'm going to prefer modes of participation in Mozilla that aren't video-based. Maybe it will mean that those who want to work with me will be encouraged to consider doing the same, or maybe it will mean that I increasingly find myself on the outside. Knowing what I do of Mozilla, and its expressed commitment to working open, I'm hopeful that it will be the former. We'll see.

Daniel StenbergCan curl avoid to be in a future funnily named exploit that shakes the world?

During this year we’ve seen heartbleed and shellshock strike (and a few more big flaws that I’ll skip for now). Two really eye-opening recent vulnerabilities in projects with many similarities:

  1. Popular corner stones of open source stacks and internet servers
  2. Mostly run and maintained by volunteers
  3. Mature projects that have been around since “forever”
  4. Projects believed to be fairly stable and relatively trustworthy by now
  5. A myriad of features, switches and code that build on many platforms, with some parts of code only running on a rare few
  6. Written in C in a portable style

Does it sound like the curl project to you too? It does to me. Sure, this description also matches a slew of other projects but I lead the curl development so let me stay here and focus on this project.

Are we in jeopardy? I honestly don’t know, but I want to explain what we do in our project in order to minimize the risk and maximize our ability to find problems on our own before they become serious attack vectors somewhere!

previous flaws

There’s no secret that we have let security problems slip through at times. We’re right now working toward our 143rd release, during our roughly 16 years of life-time. We have found and announced 28 security problems over the years. Looking at these found problems, it is clear that very few security problems are discovered quickly after introduction. Most of them linger around for several years until found and fixed. So, realistically speaking based on history: there are security bugs still in the code, and they have probably been present for a while already.

code reviews and code standards

We try to review all patches from people without push rights in the project. It would probably be a good idea to review all patches before they go in for real, but that just wouldn’t work with the (lack of) man power we have in the project while we at the same time want to develop curl, move it forward and introduce new things and features.

We maintain code standards and formatting to keep code easy to understand and follow. We keep individual commits smallish for easier review now or in the future.

test cases

As simple as it is, we test that the basic stuff works. We don’t and can’t test everything, but having test cases for most things gives us the confidence to change code when we see problems, as we then remain fairly sure things keep working the same way as long as the tests pass. In projects with much less test coverage, you become much more conservative with what you dare to change, and that also makes you more vulnerable.

We always want more test cases. We want to improve on how we add test cases when we add new features, and ideally we should also add new test cases when we fix bugs, so that we know we don’t introduce the same bug again in the future.

static code analysis

We regularly scan our code base using static code analyzers. Both clang-analyzer and coverity are good tools, and they help us by pointing out code that looks wrong or suspicious. By making sure we have very few or no such flaws left in the code, we minimize the risk. A static code analyzer is better than run-time tools for cases like these, since it can check code flows that are hard to repeat in my local environment.

valgrind

Valgrind is an awesome tool to detect memory problems at run-time: leaks or just stupid uses of memory or related functions. We have our test suite automatically use valgrind when it runs tests in case it is present, and it helps us make sure that all situations we test for are also error-free from valgrind’s point of view.

autobuilds

Building and testing curl on a plethora of platforms non-stop is also useful to make sure we don’t depend on behaviors of particular library implementations or non-standard features and more. Testing it all is basically the only way to make sure everything keeps working over the years while we continue to develop and fix bugs. We would of course be even better off with more platforms that would test automatically and with more developers keeping an eye on problems that show up there…

code complexity

Arguably, one of the best ways to avoid security flaws and bugs in general is to keep the source code as simple as possible. Complex functions need to be broken down into smaller functions that are possible to read and understand. A good way to identify functions suitable for such fixing is pmccabe.

essential third parties

curl and libcurl are usually built to use a whole bunch of third party libraries in order to perform all the functionality. In order to not have any of those uses turn into a source for trouble we must of course also participate in those projects and help them stay strong and make sure that we use them the proper way that doesn’t lead to any bad side-effects.

You can help!

All this takes time, energy and system resources. Your contributions and help will be appreciated wherever among these tasks you can pitch in. We could do more of all this, more often and more thoroughly, if we only were more people involved!

Julien VehentStripe's AWS-Go and uploading to S3

Yesterday, I discovered Stripe's AWS-Go library, and the magic of auto-generated API clients (which is one fascinating topic that I'll have to investigate for MIG).

I took on the exercise of writing a simple file upload tool using aws-go. It was fairly easy to achieve, considering the complexity of AWS's APIs. I would have to evaluate aws-go further before recommending it as a comprehensive AWS interface, but so far it seems complete. Check out http://godoc.org/github.com/stripe/aws-go/gen for a detailed doc.

The source code is below. It reads credentials from ~/.awsgo:

$ cat ~/.awsgo
[credentials]
    accesskey = "AKI...."
    secretkey = "mw0...."

It takes a file to upload as the only argument, and returns the URL where it is posted.

$ ./s3up s3up
https://s3.amazonaws.com/testawsgo/s3up

AWS-Go is not revolutionary compared to python & boto, but benefits from Go's very clean approach to programming. And getting rid of install dependencies, pip and python{2{6,7},3} hell is kinda nice!

package main

import (
	"code.google.com/p/gcfg"
	"fmt"
	"github.com/stripe/aws-go/aws"
	"github.com/stripe/aws-go/gen/s3"
	"os"
)

// conf takes an AWS configuration from a file in ~/.awsgo
// example:
//
// [credentials]
//    accesskey = "AKI...."
//    secretkey = "mw0...."
//
type conf struct {
	Credentials struct {
		AccessKey string
		SecretKey string
	}
}

func main() {
	var (
		err         error
		conf        conf
		bucket      string = "testawsgo" // change to your convenience
		fd          *os.File
		contenttype string = "binary/octet-stream"
	)
	// obtain credentials from ~/.awsgo
	credfile := os.Getenv("HOME") + "/.awsgo"
	_, err = os.Stat(credfile)
	if err != nil {
		fmt.Println("Error: missing credentials file in ~/.awsgo")
		os.Exit(1)
	}
	err = gcfg.ReadFileInto(&conf, credfile)
	if err != nil {
		panic(err)
	}

	// create a new client to S3 api
	creds := aws.Creds(conf.Credentials.AccessKey, conf.Credentials.SecretKey, "")
	cli := s3.New(creds, "us-east-1", nil)

	// open the file to upload
	if len(os.Args) != 2 {
		fmt.Printf("Usage: %s <inputfile>\n", os.Args[0])
		os.Exit(1)
	}
	fi, err := os.Stat(os.Args[1])
	if err != nil {
		fmt.Printf("Error: no input file found in '%s'\n", os.Args[1])
		os.Exit(1)
	}
	fd, err = os.Open(os.Args[1])
	if err != nil {
		panic(err)
	}
	defer fd.Close()

	// create a bucket upload request and send
	objectreq := s3.PutObjectRequest{
		ACL:           aws.String("public-read"),
		Bucket:        aws.String(bucket),
		Body:          fd,
		ContentLength: aws.Integer(int(fi.Size())),
		ContentType:   aws.String(contenttype),
		Key:           aws.String(fi.Name()),
	}
	_, err = cli.PutObject(&objectreq)
	if err != nil {
		fmt.Printf("Error: %v\n", err)
	} else {
		fmt.Printf("%s\n", "https://s3.amazonaws.com/"+bucket+"/"+fi.Name())
	}

	// list the content of the bucket
	listreq := s3.ListObjectsRequest{
		Bucket: aws.StringValue(&bucket),
	}
	listresp, err := cli.ListObjects(&listreq)
	if err != nil {
		fmt.Printf("Error: %v\n", err)
	} else {
		fmt.Printf("Content of bucket '%s': %d files\n", bucket, len(listresp.Contents))
		for _, obj := range listresp.Contents {
			fmt.Println("-", *obj.Key)
		}
	}
}

Mozilla Open Policy & Advocacy BlogSpotlight on Public Knowledge: A Ford-Mozilla Open Web Fellow Host

(This is the fourth in our series spotlighting host organizations for the 2015 Ford-Mozilla Open Web Fellowship. For years, Public Knowledge has been at the forefront of fighting for citizens and informing complex telecommunications policy to protect people. Working at Public Knowledge, the Fellow will be at the center of emerging policy that will shape the Internet as we know it. Apply to be a Ford-Mozilla Open Web Fellow and use your tech skills at Public Knowledge to protect the Web.)

Spotlight on Public Knowledge: A Ford-Mozilla Open Web Fellow Host
by Shiva Stella, Communications Manager of Public Knowledge

This year has been especially intense for policy advocates passionate about protecting a free and open internet, user protections, and our digital rights. Make no mistake: From net neutrality to the Comcast/Time Warner Cable merger, policy makers will continue to have an outsized influence over the web.

In order to enhance our advocacy efforts, Public Knowledge is hosting a Ford-Mozilla Open Web Fellow. We are looking for a leader with technical skills and drive to defend the internet, focusing on fair-use copyright and consumer protections. There’s a lot of important work to be done, and we know the public could use your help.

Public Knowledge Long

Public Knowledge works steadfastly in the telecommunications and digital rights space. Our goal is to inform the public of key policies that impact and limit a wide range of technology and telecom users. Whether you’re the child first responders fail to locate accurately because you dial 911 from a cell phone or the small business owner who can’t afford to “buy into” the internet “fast lane,” these policies affect your digital rights – including the ability to access, use and own communications tools like your set-top box (which you currently lease forever from your cable company, by the way) and your cell phone (which your carrier might argue can’t be used on a competing network due to copyright law).

There is no doubt that public policy impacts people’s lives, and Public Knowledge is advocating for the public interest at a critical time when special interests are attempting to shape policy that benefits them at our cost or that overlooks an issue’s complexity.

Indeed, in this interconnected world, the right policy outcome isn’t always immediately clear. Location tracking, for example, can impact people’s sense of privacy; and yet, when deployed in the right way, can lead to first responders swiftly locating someone calling 911 from a mobile device. Public Knowledge sifts through the research and makes sure consumers have a seat at the table when these issues are decided.

Public policy in this area can also impact the broader economy, and raises larger questions: Should we have an internet with a “fast lane“ for the relatively few companies that can afford it, and a slow lane for the rest of us? What would be the impact on innovation and small business if we erase net neutrality as we know it?

The answers to these questions require a community of leaders to advocate for policies that serve the public interest. We need to state in clear language the impact of ill-informed policies and how they affect people’s digital rights —including the ability to access, use and own communications tools, as well as the ability to create and innovate.

Even as the U.S. Federal Communications Commission reviews millions of net neutrality comments and considers approving huge mergers that put consumers at risk, the cable industry is busy hijacking satellite bills (STAVRA), stealthily slipping “pro-cable” provisions into legislation that must be passed so 1.5 million satellite subscribers may continue receiving their (non-cable!) service. Public Knowledge shines a light on these policies to prevent them from harming innovation or jeopardizing our creative and connected future. To this end we advocate for an open internet and public access to affordable technologies and creative works, engaging policy makers and the public in key policy decisions that affect us all.

Let us be clear: private interests are hoping you won’t notice or just don’t care about these issues. We’re betting that’s not the case. Please apply today to join the Public Knowledge team as a Ford-Mozilla Open Web Fellow to defend the internet we love.


Apply to be a Ford-Mozilla Open Web Fellow. Application deadline for the 2015 Fellowship is December 31, 2014.

Gervase MarkhamFirefoxOS 3 Ideas: Hack The Phone Call

People are brainstorming ideas for FirefoxOS 3, and how it can be more user-centred. Here’s one:

There should be ways for apps to transparently be hooked into the voice call creation and reception process. I want to use the standard dialer and address book that I’m used to (and not have to use replacements written by particular companies or services), and still e.g.:

  • My phone company can write a Firefox OS extension (like TU Go on O2) such that when I’m on Wifi, all calls transparently use that
  • SIP or WebRTC contacts appear in the standard contacts app, but when I press “Call”, it uses the right technology to reach them
  • Incoming calls can come over VoIP, the phone network or any other way and they all look the same when ringing
  • When I dial, I can configure rules such that calls to certain prefixes/countries/numbers transparently use a dial-through operator, or VoIP, or a particular SIM
  • If a person has 3 possible contact methods, it tries them in a defined order, or all simultaneously, or best quality first, or whatever I want

These functions don’t have to be there by default; what I’m arguing for is the necessary hooks so that apps can add them – an app from your carrier, an app from your SIP provider, an app from a dial-through provider, or just a generic app someone writes to define call routing rules. But the key point is, you don’t have to use a new dialer or address book to use these features – they can be UI-less (at least when not explicitly configuring them.)

In other words, I want to give control over the phone call back to the user. At the moment, doing SIP on Android requires a new app. TU Go requires a new app. There’s no way to say “for all international calls, when I’m in the UK, use this dial-through operator”. I don’t have a dual-SIM Android phone, so I’m not sure if it’s possible on Android to say “all calls to this person use SIM X” or “all calls to this network (defined by certain number prefixes) use SIM Y”. But anyway, all these things should be possible on FirefoxOS 3. They may not be popular with carriers, because they will all save the user money. But if we are being user-centric, we should do them.

Benjamin KerensaGive a little

Give by Time Green (CC-BY-SA)

The year is coming to an end and I would encourage you all to consider making a tax-deductible donation (if you live in the U.S.) to one of the following great non-profits:

Mozilla

The Mozilla Foundation is a non-profit organization that promotes openness, innovation and participation on the Internet. We promote the values of an open Internet to the broader world. Mozilla is best known for the Firefox browser, but we advance our mission through other software projects, grants and engagement and education efforts.

EFF

The Electronic Frontier Foundation is the leading nonprofit organization defending civil liberties in the digital world. Founded in 1990, EFF champions user privacy, free expression, and innovation through impact litigation, policy analysis, grassroots activism, and technology development.

ACLU

The ACLU is our nation’s guardian of liberty, working daily in courts, legislatures and communities to defend and preserve the individual rights and liberties that the Constitution and laws of the United States guarantee everyone in this country.

Wikimedia Foundation

The Wikimedia Foundation, Inc. is a nonprofit charitable organization dedicated to encouraging the growth, development and distribution of free, multilingual, educational content, and to providing the full content of these wiki-based projects to the public free of charge. The Wikimedia Foundation operates some of the largest collaboratively edited reference projects in the world, including Wikipedia, a top-ten internet property.

Feeding America

Feeding America is committed to helping people in need, but we can’t do it without you. If you believe that no one should go hungry in America, take the pledge to help solve hunger.

Action Against Hunger

ACF International, a global humanitarian organization committed to ending world hunger, works to save the lives of malnourished children while providing communities with access to safe water and sustainable solutions to hunger.

These six non-profits are just a few of the many causes you could support, but these ones specifically are playing a pivotal role in protecting the internet, protecting liberties, educating people around the globe or helping reduce hunger.

Even if you cannot support one of these causes, consider sharing this post to give them visibility among your friends and family and help support these causes in the new year!

 

Erik VoldHow is the Jetpack/Add-on SDK used?

This is a follow-up post to my What is the Jetpack/Add-on SDK post, in which I wish to discuss the ways Jetpack/Add-on SDK is currently being used (that I know about).

Firefox DevTools

The first obvious place to mention is Firefox DevTools. Not long after the Add-on SDK team was merged with the Firefox DevTools team, Dave Camp began a process of molding the Firefox DevTools code to use the CommonJS module structure supported by the Jetpack SDK loader. Additionally, new DevTools features have been prototyped with CFX/JPM (the SDK’s associated CLIs), such as Valence (aka the Firefox Tools Adapter) (source code), Firebug Next (source code), and the WebIDE (source code).

Firefox OS Simulator

The Firefox OS Simulator (source code) was built using the Jetpack/Add-on SDK. One core feature it utilized was the subprocess module (now called child_process for NodeJS parity). It even used third-party modules to add UI like toolbar buttons and menuitems. Finally, it used the SDK test framework.

Click here to find the Firefox OS Simulator on AMO

Firefox Testpilot

Firefox Test Pilot is an opt-in program by which feedback on things like features is collected. With Test Pilot, Firefox can experiment with new features (alone or in an A/B scenario) to see if and how they are used, how much they are used, and to determine whether or not they need more work.

You can explore the experiments here

Australis

Australis was the codename for Firefox’s UI redesign project. It was prototyped with the Jetpack SDK; the source code is here

New Tab Tiles

The new tab page, which uses tiles (some of which are ads), was also prototyped with the Jetpack/Add-on SDK; the source code is here

Mozilla Labs

Lightbeam (aka Collusion)

Before the Mozilla Labs project ended, parts of the team used the Jetpack/Add-on SDK to develop ideas for Firefox. One highlight was Lightbeam (formerly known as Collusion) (source code), which was the topic of a TED talk that has currently been viewed more than 1.5 million times.

Click here to find Lightbeam on AMO

Prospector

There was also a sub-project of Mozilla Labs called Prospector, which also used the Jetpack/Add-on SDK to build feature prototypes such as:

Click here for the full AMO list

Old School Add-ons

  • Scriptish, which is a fork of Greasemonkey, uses the Jetpack/Add-on SDK loader even though it is still technically an old school add-on.

A few old school add-ons, such as Firebug 2, are using the DevTools loader.

New School Add-ons

Mozilla Add-ons
Community Add-ons

Finally I’d like to mention some of the add-ons that were developed outside of Mozilla, which was one of the primary goals that led to the project’s conception. These are some of my favorites:

Click here to see the full AMO list

  • Note: user counts were taken on Oct 15th 2014 and the numbers will obviously change over time

Addons.Mozilla.Org (AMO)

It should be no surprise that there is a fast track on AMO for extensions built with the Add-on SDK: there is less code to review, and the reviews are generally easier. This is good news for the Mozilla community in three ways: first, there are more people developing add-ons; second, the review times are shorter than they would otherwise be; and last (but not least), reviewing add-ons is easier, which results in more reviewers.

Summary

There are many ways, not all of them obvious, in which the Jetpack/Add-on SDK is being used by, adds value to, and is an essential part of the Mozilla mission and community. Furthermore, all of these use cases now depend on the Jetpack/Add-on SDK and all have to be factored into the team’s decision making, because bugs come from all of these important sources. So the team can no longer merely focus on new school add-on metrics, imho.

Next I want to describe areas the project could work on in the future.

Related Links

Nigel BabuMozlandia - Arrival

Portland. The three words that come to mind are overwhelmed, cold, and exhilarating. Getting there was a right pain, I’d have to admit. Though, flying around the US the weekend after Black Friday isn’t the best idea anyway. According to my rough calculations, it took about 25 hours from takeoff in Delhi to wheels down in Portland. That’s a heck of a lot of time on planes and at airports. But hey, I’ve been doing this for weeks in a row at this point.

At the airport, I ran into people holding up the Mozilla board. As I waited for the shuttle, I was very happy to run into Luke, from the MDN team. We met at the summit and he was a familiar face. We were chatting all the way to the hotel about civic hacking.

This work week is the most exciting Mozilla event that I’ve attended. I’m finally getting to meet a lot of people I know and renewing friendships from the last few events. I started contributing to Mozilla by contributing to the Webdev team. My secret plan at this work week was to meet all the folks from the old Webdev team in person. I’ve known them for more than 3 years and never quite managed to meet everyone in person.

After a quick shower, I decided to step out to the Mozilla PDX. According to Google Maps, it was a quick walk away and I was trying not to sleep all day despite my body trying to convince me it was a good idea. At the office, I met Fred’s team and we sat around talking for a while. It was good to meet Christie again too! That’s when a wave of exhaustion hit. I didn’t see it coming. Suddenly, I felt sluggish and a warm bed seemed very tempting. After lunch with Jen, Sole, and Matt, I quickly retired to bed.

Sole and the Whale

When I got down after the nap, there was a small group headed to the opening event. This was good, because I got very confused with Google Maps (paper maps were much more helpful).

Whoa, people overload. I walked a few rounds, meeting lots of people. It was fun running into a lot of people from IRC in the flesh. I enjoyed meeting the folks from the Auckland office (I often back them out :P). And I finally met Laura and her team. For a change, I’m visiting bkero’s town this time instead of him visiting mine ;)

The crowd

The rest of the evening is a bit of a blur. Eventually, I was exhausted and walked back to the hotel for a good night’s sleep before the fun really started!

Andy McKaySelf Examination

A few weeks ago we had the Mozilla Mozlandia meetup in Portland. I had a few things on my agenda going into that meeting. My biggest was to critically examine the project my team and I have been working on for almost two years.

That project is Marketplace Payments, which we provide through the Firefox Marketplace for developers. We don't limit what kind of payment system you use in Web Apps, unlike Google or Apple.

In Mozlandia, I was arguing (along with some colleagues) that there really is little point in working on this much anymore. There are many reasons for this, but here's the high level:

  • Providing a payments service that competes against every other web based payment service in existence is outside of our core goals

  • We can't actually compete against every other web based payment service without significant investment

  • Developer uptake doesn't support further investment in the project.

There was mostly agreement on this, so we've agreed to complete our existing work on it and then leave it as it is for a while. We'll watch the metrics, see what happens and make some decisions based on that.

But really the details of this are not that important. What I believe is really, really important is the ability to critically examine your job and projects and examine their worth.

What normally happens is that you get a group of people and tell them to work on project X. They will iterate through features and complete features. And repeat and keep going. And if you don't stop at some point and critically examine what is going on, it will keep repeating. People will find new features, new enhancements, new areas to add to the project. Just as they have been trained to do so. And the project will keep growing.

That's a perfectly normal thing for a team to do. It's harder to call a project done, the features complete and realize that there might be an end.

Normally that happens externally. Sometimes it's done in a positive way, sometimes it's done negatively. In the latter case, people get upset and recriminations and accusations fly. It's not a fun time.

But being able to step aside and declare the project done internally can be hard for one main reason: people fear for their job.

That's what some people said to me in Mozlandia "Andy you've just talked yourself out of a job" or "You've just thrown yourself under a bus".

Maybe, but so be it. I have no fear: there's important stuff to be doing at Mozilla, and my awesome team will have plenty to do.

Right, next project.

Update: Marketplace Payments are still there and we are completing the last projects we have for them. But we aren't going to be doing development beyond that on them for a while. Let's see what the data shows.

Doug BelshawBittorrent's Project Maelstrom is 'Firecloud' on steroids

Earlier this week, BitTorrent, Inc. announced Project Maelstrom. The idea is to apply BitTorrent technologies and approaches to more of the web.

Project Maelstrom

Note: if you can’t read the text in the image, it says: “This is a webpage powered by 397 people + You. Not a central server.” So. Much. Win.

The blog post announcing the project doesn’t have lots of details, but a follow-up PC World article includes an interview with a couple of the people behind it.

I think the key thing comes in this response from product manager Rob Velasquez:

We support normal web browsing via HTTP/S. We only add the additional support of being able to browse the distributed web via torrents

This excites me for a couple of reasons. First, I’ve thought on-and-off for years about how to build a website that’s untakedownable. I’ve explored DNS based on the technology powering Bitcoin, experimented with the PirateBay’s now-defunct blogging platform Baywords, and explored the dark underbelly of the web with sites available only through Tor.

Second, Vinay Gupta and I almost managed to get a project off the ground called Firecloud. This would have used a combination of interesting technologies such as WebRTC, HTML5 local storage and DHT to provide distributed website hosting through a Firefox add-on.

I really, really hope that BitTorrent turn this into a reality. I’d love to be able to host my website as a torrent. :-D

Update: People pay more attention to products than technologies, but I’d love to see Webtorrent get more love/attention/exposure.


Comments? Questions? Email me: doug@mozillafoundation.org

Mozilla FundraisingPrivacy-Forward Fundraising

There are a lot of ways that fundraising at Mozilla is very different than the fundraising I’ve done at other non-profit organizations. One of the most striking differences is how our Privacy Principles guide our donor experience, our fundraising systems, … Continue reading

Fabien Cazenave"pip install" & "gem install" without sudo

Following yesterday’s post about using “npm install -g” without root privileges, here are the Python and Ruby counterparts for your beloved OSX or Linux box.

By default, pip install and gem install try to install stuff in /usr/, which requires root privileges. Hence, most users will “naturally” do a sudo to perform the install — which is, in my opinion at least, a very bad idea (do you really want to give root privileges to packages that haven’t been reviewed?). Fortunately, there’s more than the default setting.

Python: pip install --user

With Python 2.6 and later you can avoid “sudoing” your pip install by using the --user argument (thanks @cmdevienne for the tip!). Let’s test this with html-linter:

$ pip install --user html-linter

By default on Linux and OSX (non-framework builds) this will install your package into ~/.local, which is just fine for me. All executables are in ~/.local/bin/, which is included in my $PATH, and all Python libraries are in ~/.local/lib/python2.7/. The world couldn’t be any better.

You can specify a custom destination by setting the PYTHONUSERBASE environment variable:

$ export PYTHONUSERBASE=/myappenv
$ pip install --user html-linter

Of course, you’ll have to add that to your $PATH to make it work. You can add the following lines to your ~/.profile like that:

export PYTHONUSERBASE=/myappenv
PATH="$PYTHONUSERBASE/bin:${PATH}"

The only downside (compared to npm) is that you’ll have to remember to use the --user argument when installing Python packages. If there’s a way to make it the default mode, please let me know.

EDIT: a good workaround is to define a custom pip function in your ~/.bash_aliases (or bashrc, zshrc, whatever), as suggested in comment #1.

Ruby: gem install --user-install

gem’s --user-install argument is quite similar. One good thing is that you can easily make it the default mode:

$ echo "gem: --user-install" >> ~/.gemrc

Now let’s try that with the most valuable gem I know:

$ gem install vimgolf
Fetching: vimgolf-0.4.6.gem (100%)
WARNING:  You don't have /home/kaze/.gem/ruby/1.8/bin in your PATH,
          gem executables will not run.

As you can see, gem installs everything in ~/.gem by default; unfortunately, the file structure does not allow putting executables in the same ~/.local/bin/ directory. Never mind, we’ll add those ~/.gem/ruby/*/bin/ directories to the $PATH manually by adding these lines to the ~/.profile:

for dir in $HOME/.gem/ruby/*; do
  [ -d "$dir/bin" ] && PATH="${dir}/bin:${PATH}"
done

Source your ~/.profile, you’re done.

Joel MaherTracking Firefox performance as we uplift – the volume of alerts we get

For the last year, I have been focused on ensuring we look at the alerts generated by Talos.  For the last 6 months I have also looked a bit more carefully at the uplifts we do every 6 weeks.  In fact we wouldn’t generate alerts when we uplifted to beta because we didn’t run enough tests to verify a sustained regression in a given time window.

Let's look at the data, specifically the volume of alerts:

Trend of improvements/regressions from Firefox 31 to 36 as we uplift to Aurora

This is a stacked graph; you can interpret it as showing that Firefox 32 had a lot of improvements and Firefox 33 had a lot of regressions. I think what is more interesting is how many performance regressions are fixed or added when we go from Aurora to Beta. There is minimal data available for Beta. This next image compares alert volume for the same release on Aurora and then on Beta:

Side by side stacked bars for the regressions going into Aurora and then going onto Beta.

One way to interpret the graph above is that we fixed a lot of regressions on Aurora while Firefox 33 was there, but for Firefox 34 we introduced a lot of regressions.

The above is just my interpretation of the data. Here are links to a more fine-grained view of the data:

As always, if you have questions, concerns, praise, or other great ideas- feel free to chat via this blog or via irc (:jmaher).


Mozilla Reps CommunityReps Weekly Call – December 11th 2014

Last Thursday we had our regular weekly call about the Reps program, where we talk about what’s going on in the program and what Reps have been doing during the last week.

whosjoining

Summary

  • FOSDEM update.
  • Portland Work Week.
  • ReMo/Mozillians websites testing.
  • End of year receipts campaign.
  • Remo challenges.
  • Stumbling in a box events.
  • Reps Monthly newsletter.

Detailed notes

AirMozilla video

Don’t forget to comment about this call on Discourse and we hope to see you next week!

Blair McBrideUX Design Day, Dunedin 2014

Things I’ve been saying for a long time: I need to blog more. I haven’t been very good at achieving that.

So, recently I was at UX Design Day – a one-day conference focused on UX and design. It’s the only conference of its kind in NZ, and it started here in Dunedin. Working remotely and not really part of the design community, I don’t often get a chance to sit down and talk UX/design in-person with people. This year the conference was back in Dunedin, so I jumped at the chance to attend.

UX Design Day intro slide

I was impressed by the diverse turnout this year. Interaction design, visual design, content strategy, marketing, education, user research, and software development were all represented. I had tried to drum up support from the local developer community to attend, and that seemed to have worked well. Too often do I see developers ignoring UX/design issues – either being very dismissive, or claiming it’s another person’s job; so this felt like a good sign.

Along those lines, one of the things that stuck with me was the talk around not having UX teams separate from everything else. The largest example talked about was UX and content strategy, but I think it applies equally to software development teams too. Having these two groups work closely together, not segregated, helps bring so much context to both teams.

The other important take-away for me was the importance of not accepting crap. That is, experiences or systems that are, intentionally or not, lacking in design forethought and therefore lead to unnecessarily difficult experiences, or a design that by default leads to harm. The primary concrete example here was physical safety in various workplaces, where people were put at needless risk due to the lack of safety-by-default design. I think this is a very relevant point for those of us building software, given that we so often experience design in software that feels broken, but too often don’t do anything constructive to help fix it.

Obligatory wall of Post-It notes

On the whole, I enjoyed the conference. However, since the talks covered such a wide corpus, I feel it didn’t provide enough time for any one area. Diversity is an asset, but I would have liked time for more in-depth explorations of topics.

Guillaume DestuynderVPN and DNS

The problem

Note

TLDR

Split-view DNS over a VPN makes your web browsing and whatnot slower due to slower DNS resolution. This is a “solution” mainly for Linux and OSX.

When connecting to a VPN, it’s usually going to push its own DNS name servers. It does this because many, or dare I say most, networks behind the VPN actually have hostnames that are “internal” and will only resolve on the internal name server. This situation is also called “split-view DNS”.

The internal name server also resolves public hostnames - but because of the VPN round-trip this is slower. In some cases, it can be much slower (for example if your company’s VPN is in the USA and you live in Europe... hint).

dnsmasq to the rescue

dnsmasq is a well-known DNS caching, DHCP, TFTP, PXE (and recently even RA) server. You can configure it so that requests for certain domains are resolved with a specific name server.

For example, you would want to forward all internal domains to the DNS name server that is provided by the VPN:

File: /etc/resolv.conf

nameserver 127.0.0.1

File: /etc/resolv2.conf

#Your local/ISP nameserver(s)
nameserver 192.168.0.1
nameserver 8.8.8.8

File: /etc/dnsmasq.conf

server=/scl3.mozilla.com/10.0.0.1/
server=/phx1.mozilla.com/10.0.0.1/

resolv-file=/etc/resolv2.conf

Note

In this example, *.scl3.mozilla.com will resolve through the name server at 10.0.0.1

If you use openresolv (if you don’t know, you probably do...) you’ll have to instruct it to always use your local DNS cache (dnsmasq) as well so that it doesn’t override your settings.

File: /etc/resolvconf.conf

#Optional, if you use openresolv
name_servers=127.0.0.1

And off you go! Don’t forget to restart dnsmasq ;)

systemctl restart dnsmasq
# or..
/etc/init.d/dnsmasq restart

Jeff WaldenIntroducing the JavaScript Internationalization API

(also cross-posted on the Hacks blog — comment over there if you have anything to say)

Firefox 29 issued half a year ago, so this post is long overdue. Nevertheless I wanted to pause for a second to discuss the Internationalization API first shipped on desktop in that release (and passing all tests!). Norbert Lindenberg wrote most of the implementation, and I reviewed it and now maintain it. (Work by Makoto Kato should bring this to Android soon; b2g may take longer due to some b2g-specific hurdles. Stay tuned.)

What’s internationalization?

Internationalization (i18n for short — i, eighteen characters, n) is the process of writing applications in a way that allows them to be easily adapted for audiences from varied places, using varied languages. It’s easy to get this wrong by inadvertently assuming one’s users come from one place and speak one language, especially if you don’t even know you’ve made an assumption.

function formatDate(d)
{
  // Everyone uses month/date/year...right?
  var month = d.getMonth() + 1;
  var date = d.getDate();
  var year = d.getFullYear();
  return month + "/" + date + "/" + year;
}

function formatMoney(amount)
{
  // All money is dollars with two fractional digits...right?
  return "$" + amount.toFixed(2);
}

function sortNames(names)
{
  function sortAlphabetically(a, b)
  {
    var left = a.toLowerCase(), right = b.toLowerCase();
    if (left > right)
      return 1;
    if (left === right)
      return 0;
    return -1;
  }

  // Names always sort alphabetically...right?
  names.sort(sortAlphabetically);
}

JavaScript’s historical i18n support is poor

i18n-aware formatting in traditional JS uses the various toLocaleString() methods. The resulting strings contained whatever details the implementation chose to provide: no way to pick and choose (did you need a weekday in that formatted date? is the year irrelevant?). Even if the proper details were included, the format might be wrong e.g. decimal when percentage was desired. And you couldn’t choose a locale.

As for sorting, JS provided almost no useful locale-sensitive text-comparison (collation) functions. localeCompare() existed but with a very awkward interface unsuited for use with sort. And it too didn’t permit choosing a locale or specific sort order.

These limitations are bad enough that — this surprised me greatly when I learned it! — serious web applications that need i18n capabilities (most commonly, financial sites displaying currencies) will box up the data, send it to a server, have the server perform the operation, and send it back to the client. Server roundtrips just to format amounts of money. Yeesh.
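
To make those limitations concrete, here is a minimal sketch of the legacy approach (the data is made up, and the outputs are whatever the implementation and its default locale choose, which is exactly the problem):

// Pre-Intl formatting: no way to pick a locale or the components included.
var when = new Date(Date.UTC(2014, 6, 17));
print(when.toLocaleDateString());      // some implementation-chosen format
print((1234567.891).toLocaleString()); // grouping and decimals unspecified

// Pre-Intl collation: localeCompare must be wrapped in a comparator,
// offers no choice of locale or sort order, and redoes its locale work
// on every single comparison.
var people = ["Émile", "Anna", "Zoë"];
people.sort(function(a, b) { return a.localeCompare(b); });
print(people.join(", ")); // order depends on the default locale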

A new JS Internationalization API

The new ECMAScript Internationalization API greatly improves JavaScript’s i18n capabilities. It provides all the flourishes one could want for formatting dates and numbers and sorting text. The locale is selectable, with fallback if the requested locale is unsupported. Formatting requests can specify the particular components to include. Custom formats for percentages, significant digits, and currencies are supported. Numerous collation options are exposed for use in sorting text. And if you care about performance, the up-front work to select a locale and process options can now be done once, instead of once every time a locale-dependent operation is performed.
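
As a quick sketch of that last point (the prices here are made up, and the exact strings are as unspecified as ever): the locale and options processing happens once, in the constructor, and the resulting formatter can then be reused cheaply.

// Do the locale and options work once...
var formatPrice =
  new Intl.NumberFormat("en-US",
                        { style: "currency", currency: "USD" }).format;

// ...then reuse the cached formatter as often as needed.
var prices = [2.5, 13, 999.999];
print(prices.map(formatPrice).join(", ")); // e.g. $2.50, $13.00, $1,000.00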

That said, the API is not a panacea. The API is “best effort” only. Precise outputs are almost always deliberately unspecified. An implementation could legally support only the oj locale, or it could ignore (almost all) provided formatting options. Most implementations will have high-quality support for many locales, but it’s not guaranteed (particularly on resource-constrained systems such as mobile).

Under the hood, Firefox’s implementation depends upon the International Components for Unicode library (ICU), which in turn depends upon the Unicode Common Locale Data Repository (CLDR) locale data set. Our implementation is self-hosted: most of the implementation atop ICU is written in JavaScript itself. We hit a few bumps along the way (we haven’t self-hosted anything this large before), but nothing major.

The Intl interface

The i18n API lives on the global Intl object. Intl contains three constructors: Intl.Collator, Intl.DateTimeFormat, and Intl.NumberFormat. Each constructor creates an object exposing the relevant operation, efficiently caching locale and options for the operation. Creating such an object follows this pattern:

var ctor = "Collator"; // or the others
var instance = new Intl[ctor](locales, options);

locales is a string specifying a single language tag or an arraylike object containing multiple language tags. Language tags are strings like en (English generally), de-AT (German as used in Austria), or zh-Hant-TW (Chinese as used in Taiwan, using the traditional Chinese script). Language tags can also include a “Unicode extension”, of the form -u-key1-value1-key2-value2..., where each key is an “extension key”. The various constructors interpret these specially.

options is an object whose properties (or their absence, by evaluating to undefined) determine how the formatter or collator behaves. Its exact interpretation is determined by the individual constructor.

Given locale information and options, the implementation will try to produce the closest behavior it can to the “ideal” behavior. Firefox supports 400+ locales for collation and 600+ locales for date/time and number formatting, so it’s very likely (but not guaranteed) the locales you might care about are supported.
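
As a rough illustration (using a deliberately unsupported locale; the negotiated result depends on the implementation and its configuration), an unsupported requested locale simply falls back to the next supported one in the list:

// "tlh" (Klingon) is unlikely to be supported, so the implementation
// falls back to the next requested locale that it does support.
var collator = new Intl.Collator(["tlh", "en"]);
print(collator.resolvedOptions().locale); // most likely "en"

(resolvedOptions() is covered near the end of this article.)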

Intl generally provides no guarantee of particular behavior. If the requested locale is unsupported, Intl allows best-effort behavior. Even if the locale is supported, behavior is not rigidly specified. Never assume that a particular set of options corresponds to a particular format. The phrasing of the overall format (encompassing all requested components) might vary across browsers, or even across browser versions. Individual components’ formats are unspecified: a short-format weekday might be “S”, “Sa”, or “Sat”. The Intl API isn’t intended to expose exactly specified behavior.

Date/time formatting

Options

The primary options properties for date/time formatting are as follows:

weekday, era
"narrow", "short", or "long". (era refers to typically longer-than-year divisions in a calendar system: BC/AD, the current Japanese emperor’s reign, or others.)
month
"2-digit", "numeric", "narrow", "short", or "long"
year
day
hour, minute, second
"2-digit" or "numeric"
timeZoneName
"short" or "long"
timeZone
Case-insensitive "UTC" will format with respect to UTC. Values like "CEST" and "America/New_York" don’t have to be supported, and they don’t currently work in Firefox.

The values don’t map to particular formats: remember, the Intl API almost never specifies exact behavior. But the intent is that "narrow", "short", and "long" produce output of corresponding size — “S” or “Sa”, “Sat”, and “Saturday”, for example. (Output may be ambiguous: Saturday and Sunday both could produce “S”.) "2-digit" and "numeric" map to two-digit number strings or full-length numeric strings: “70” and “1970”, for example.

The final used options are largely the requested options. However, if you don’t specifically request any weekday/year/month/day/hour/minute/second, then year/month/day will be added to your provided options.
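
A small sketch of those defaults (the formatted string itself is, as usual, unspecified):

// No weekday/year/month/day/hour/minute/second requested, so
// year, month, and day get added to the options for us.
var basic = new Intl.DateTimeFormat("en-US");
print(basic.format(new Date(Date.UTC(2014, 6, 17))));
// something like "7/16/2014" or "7/17/2014", depending on your time zone

var used = basic.resolvedOptions();
print([used.year, used.month, used.day].join(" ")); // numeric numeric numeric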

Beyond these basic options are a few special options:

hour12
Specifies whether hours will be in 12-hour or 24-hour format. The default is typically locale-dependent. (Details such as whether midnight is zero-based or twelve-based and whether leading zeroes are present are also locale-dependent.)

There are also two special properties, localeMatcher (taking either "lookup" or "best fit") and formatMatcher (taking either "basic" or "best fit"), each defaulting to "best fit". These affect how the right locale and format are selected. The use cases for these are somewhat esoteric, so you should probably ignore them.

Locale-centric options

DateTimeFormat also allows formatting using customized calendaring and numbering systems. These details are effectively part of the locale, so they’re specified in the Unicode extension in the language tag.

For example, Thai as spoken in Thailand has the language tag th-TH. Recall that a Unicode extension has the format -u-key1-value1-key2-value2.... The calendaring system key is ca, and the numbering system key is nu. The Thai numbering system has the value thai, and the Chinese calendaring system has the value chinese. Thus to format dates in this overall manner, we tack a Unicode extension containing both these key/value pairs onto the end of the language tag: th-TH-u-ca-chinese-nu-thai.

For more information on the various calendaring and numbering systems, see the full DateTimeFormat documentation.

Examples

After creating a DateTimeFormat object, the next step is to use it to format dates via the handy format() function. Conveniently, this function is a bound function: you don’t have to call it on the DateTimeFormat directly. Then provide it a timestamp or Date object.

Putting it all together, here are some examples of how to create DateTimeFormat options for particular uses, with current behavior in Firefox.

var msPerDay = 24 * 60 * 60 * 1000;

// July 17, 2014 00:00:00 UTC.
var july172014 = new Date(msPerDay * (44 * 365 + 11 + 197));

Let’s format a date for English as used in the United States. Let’s include two-digit month/day/year, plus two-digit hours/minutes, and a short time zone to clarify that time. (The result would obviously be different in another time zone.)

var options =
  { year: "2-digit", month: "2-digit", day: "2-digit",
    hour: "2-digit", minute: "2-digit",
    timeZoneName: "short" };
var americanDateTime =
  new Intl.DateTimeFormat("en-US", options).format;

print(americanDateTime(july172014)); // 07/16/14, 5:00 PM PDT

Or let’s do something similar for Portuguese — ideally as used in Brazil, but in a pinch Portugal works. Let’s go for a little longer format, with full year and spelled-out month, but make it UTC for portability.

var options =
  { year: "numeric", month: "long", day: "numeric",
    hour: "2-digit", minute: "2-digit",
    timeZoneName: "short", timeZone: "UTC" };
var portugueseTime =
  new Intl.DateTimeFormat(["pt-BR", "pt-PT"], options);

// 17 de julho de 2014 00:00 GMT
print(portugueseTime.format(july172014));

How about a compact, UTC-formatted weekly Swiss train schedule? We’ll try the official languages from most to least popular to choose the one that’s most likely to be readable.

var swissLocales = ["de-CH", "fr-CH", "it-CH", "rm-CH"];
var options =
  { weekday: "short",
    hour: "numeric", minute: "numeric",
    timeZone: "UTC", timeZoneName: "short" };
var swissTime =
  new Intl.DateTimeFormat(swissLocales, options).format;

print(swissTime(july172014)); // Do. 00:00 GMT

Or let’s try a date in descriptive text by a painting in a Japanese museum, using the Japanese calendar with year and era:

var jpYearEra =
  new Intl.DateTimeFormat("ja-JP-u-ca-japanese",
                          { year: "numeric", era: "long" });

print(jpYearEra.format(july172014)); // 平成26年

And for something completely different, a longer date for use in Thai as used in Thailand — but using the Thai numbering system and Chinese calendar. (Quality implementations such as Firefox’s would treat plain th-TH as th-TH-u-ca-buddhist-nu-latn, imputing Thailand’s typical Buddhist calendar system and Latin 0-9 numerals.)

var options =
  { year: "numeric", month: "long", day: "numeric" };
var thaiDate =
  new Intl.DateTimeFormat("th-TH-u-nu-thai-ca-chinese", options);

print(thaiDate.format(july172014)); // ๒๐ 6 ๓๑

Calendar and numbering system bits aside, it’s relatively simple. Just pick your components and their lengths.

Number formatting

Options

The primary options properties for number formatting are as follows:

style
"currency", "percent", or "decimal" (the default) to format a value of that kind.
currency
A three-letter currency code, e.g. USD or CHF. Required if style is "currency", otherwise meaningless.
currencyDisplay
"code", "symbol", or "name", defaulting to "symbol". "code" will use the three-letter currency code in the formatted string. "symbol" will use a currency symbol such as $ or £. "name" typically uses some sort of spelled-out version of the currency. (Firefox currently only supports "symbol", but this will be fixed soon.)
minimumIntegerDigits
An integer from 1 to 21 (inclusive), defaulting to 1. The resulting string is front-padded with zeroes until its integer component contains at least this many digits. (For example, if this value were 2, formatting 3 might produce “03”.)
minimumFractionDigits, maximumFractionDigits
Integers from 0 to 20 (inclusive). The resulting string will have at least minimumFractionDigits, and no more than maximumFractionDigits, fractional digits. The default minimum is currency-dependent (usually 2, rarely 0 or 3) if style is "currency", otherwise 0. The default maximum is 0 for percents, 3 for decimals, and currency-dependent for currencies.
minimumSignificantDigits, maximumSignificantDigits
Integers from 1 to 21 (inclusive). If present, these override the integer/fraction digit control above to determine the minimum/maximum significant figures in the formatted number string, as determined in concert with the number of decimal places required to accurately specify the number. (Note that in a multiple of 10 the significant digits may be ambiguous, as in “100” with its one, two, or three significant digits.)
useGrouping
Boolean (defaulting to true) determining whether the formatted string will contain grouping separators (e.g. “,” as English thousands separator).

NumberFormat also recognizes the esoteric, mostly ignorable localeMatcher property.

Locale-centric options

Just as DateTimeFormat supported custom numbering systems in the Unicode extension using the nu key, so too does NumberFormat. For example, the language tag for Chinese as used in China is zh-CN. The value for the Han decimal numbering system is hanidec. To format numbers for these systems, we tack a Unicode extension onto the language tag: zh-CN-u-nu-hanidec.

For complete information on specifying the various numbering systems, see the full NumberFormat documentation.

Examples

NumberFormat objects have a format function property just as DateTimeFormat objects do. And as there, the format function is a bound function that may be used in isolation from the NumberFormat.

Here are some examples of how to create NumberFormat options for particular uses, with Firefox’s behavior. First let’s format some money for use in Chinese as used in China, specifically using Han decimal numbers (instead of much more common Latin numbers). Select the "currency" style, then use the code for Chinese renminbi (yuan), grouping by default, with the usual number of fractional digits.

var hanDecimalRMBInChina =
  new Intl.NumberFormat("zh-CN-u-nu-hanidec",
                        { style: "currency", currency: "CNY" });

print(hanDecimalRMBInChina.format(1314.25)); // ¥ 一,三一四.二五

Or let’s format a United States-style gas price, with its peculiar thousandths-place 9, for use in English as used in the United States.

var gasPrice =
  new Intl.NumberFormat("en-US",
                        { style: "currency", currency: "USD",
                          minimumFractionDigits: 3 });

print(gasPrice.format(5.259)); // $5.259

Or let’s try a percentage in Arabic, meant for use in Egypt. Make sure the percentage has at least two fractional digits. (Note that this and all the other RTL examples may appear with different ordering in RTL context, e.g. ٤٣٫٨٠٪ instead of ٤٣٫٨٠٪.)

var arabicPercent =
  new Intl.NumberFormat("ar-EG",
                        { style: "percent",
                          minimumFractionDigits: 2 }).format;

print(arabicPercent(0.438)); // ٤٣٫٨٠٪

Or suppose we’re formatting for Persian as used in Afghanistan, and we want at least two integer digits and no more than two fractional digits.

var persianDecimal =
  new Intl.NumberFormat("fa-AF",
                        { minimumIntegerDigits: 2,
                          maximumFractionDigits: 2 });

print(persianDecimal.format(3.1416)); // ۰۳٫۱۴

Finally, let’s format an amount of Bahraini dinars, for Arabic as used in Bahrain. Unusually compared to most currencies, Bahraini dinars divide into thousandths (fils), so our number will have three places. (Again note that apparent visual ordering should be taken with a grain of salt.)

var bahrainiDinars =
  new Intl.NumberFormat("ar-BH",
                        { style: "currency", currency: "BHD" });

print(bahrainiDinars.format(3.17)); // د.ب.‏ ٣٫١٧٠

Collation

Options

The primary options properties for collation are as follows:

usage
"sort" or "search" (defaulting to "sort"), specifying the intended use of this Collator. (A search collator might want to consider more strings equivalent than a sort collator would.)
sensitivity
"base", "accent", "case", or "variant". This affects how sensitive the collator is to characters that have the same “base letter” but have different accents/diacritics and/or case. (Base letters are locale-dependent: “a” and “ä” have the same base letter in German but are different letters in Swedish.) "base" sensitivity considers only the base letter, ignoring modifications (so for German “a”, “A”, and “ä” are considered the same). "accent" considers the base letter and accents but ignores case (so for German “a” and “A” are the same, but “ä” differs from both). "case" considers the base letter and case but ignores accents (so for German “a” and “ä” are the same, but “A” differs from both). Finally, "variant" considers base letter, accents, and case (so for German “a”, “ä, “ä” and “A” all differ). If usage is "sort", the default is "variant"; otherwise it’s locale-dependent.
numeric
Boolean (defaulting to false) determining whether complete numbers embedded in strings are considered when sorting. For example, numeric sorting might produce "F-4 Phantom II", "F-14 Tomcat", "F-35 Lightning II"; non-numeric sorting might produce "F-14 Tomcat", "F-35 Lightning II", "F-4 Phantom II".
caseFirst
"upper", "lower", or "false" (the default). Determines how case is considered when sorting: "upper" places uppercase letters first ("B", "a", "c"), "lower" places lowercase first ("a", "c", "B"), and "false" ignores case entirely ("a", "B", "c"). (Note: Firefox currently ignores this property.)
ignorePunctuation
Boolean (defaulting to false) determining whether to ignore embedded punctuation when performing the comparison (for example, so that "biweekly" and "bi-weekly" compare equivalent).

And there’s that localeMatcher property that you can probably ignore.

Locale-centric options

The main Collator option specified as part of the locale’s Unicode extension is co, selecting the kind of sorting to perform: phone book (phonebk), dictionary (dict), and many others.

Additionally, the keys kn and kf may, optionally, duplicate the numeric and caseFirst properties of the options object. But they’re not guaranteed to be supported in the language tag, and options is much clearer than language tag components. So it’s best to only adjust these options through options.

These key-value pairs are included in the Unicode extension the same way they’ve been included for DateTimeFormat and NumberFormat; refer to those sections for how to specify these in a language tag.

Examples

Collator objects have a compare function property. This function accepts two arguments x and y and returns a number less than zero if x compares less than y, 0 if x compares equal to y, or a number greater than zero if x compares greater than y. As with the format functions, compare is a bound function that may be extracted for standalone use.

Let’s try sorting a few German surnames, for use in German as used in Germany. There are actually two different sort orders in German, phonebook and dictionary. Phonebook sort emphasizes sound, and it’s as if “ä”, “ö”, and so on were expanded to “ae”, “oe”, and so on prior to sorting.

var names =
  ["Hochberg", "Hönigswald", "Holzman"];

var germanPhonebook = new Intl.Collator("de-DE-u-co-phonebk");

// as if sorting ["Hochberg", "Hoenigswald", "Holzman"]:
//   Hochberg, Hönigswald, Holzman
print(names.sort(germanPhonebook.compare).join(", "));

Some German words conjugate with extra umlauts, so in dictionaries it’s sensible to order ignoring umlauts (except when ordering words differing only by umlauts: schon before schön).

var germanDictionary = new Intl.Collator("de-DE-u-co-dict");

// as if sorting ["Hochberg", "Honigswald", "Holzman"]:
//   Hochberg, Holzman, Hönigswald
print(names.sort(germanDictionary.compare).join(", "));

Or let’s sort a list Firefox versions with various typos (different capitalizations, random accents and diacritical marks, extra hyphenation), in English as used in the United States. We want to sort respecting version number, so do a numeric sort so that numbers in the strings are compared, not considered character-by-character.

var firefoxen =
  ["FireFøx 3.6",
   "Fire-fox 1.0",
   "Firefox 29",
   "FÍrefox 3.5",
   "Fírefox 18"];

var usVersion =
  new Intl.Collator("en-US",
                    { sensitivity: "base",
                      numeric: true,
                      ignorePunctuation: true });

// Fire-fox 1.0, FÍrefox 3.5, FireFøx 3.6, Fírefox 18, Firefox 29
print(firefoxen.sort(usVersion.compare).join(", "));

Last, let’s do some locale-aware string searching that ignores case and accents, again in English as used in the United States.

// Comparisons work with both composed and decomposed forms.
var decoratedBrowsers =
  [
   "A\u0362maya",  // A͢maya
   "CH\u035Brôme", // CH͛rôme
   "FirefÓx",
   "sAfàri",
   "o\u0323pERA",  // ọpERA
   "I\u0352E",     // I͒E
  ];

var fuzzySearch =
  new Intl.Collator("en-US",
                    { usage: "search", sensitivity: "base" });

function findBrowser(browser)
{
  function cmp(other)
  {
    return fuzzySearch.compare(browser, other) === 0;
  }
  return cmp;
}

print(decoratedBrowsers.findIndex(findBrowser("Firêfox"))); // 2
print(decoratedBrowsers.findIndex(findBrowser("Safåri")));  // 3
print(decoratedBrowsers.findIndex(findBrowser("Ãmaya")));   // 0
print(decoratedBrowsers.findIndex(findBrowser("Øpera")));   // 4
print(decoratedBrowsers.findIndex(findBrowser("Chromè")));  // 1
print(decoratedBrowsers.findIndex(findBrowser("IË")));      // 5

Odds and ends

It may be useful to determine whether support for some operation is provided for particular locales, or to determine whether a locale is supported. Intl provides supportedLocalesOf() functions on each constructor, and resolvedOptions() functions on each prototype, to expose this information.

var navajoLocales =
  Intl.Collator.supportedLocalesOf(["nv"], { usage: "sort" });
print(navajoLocales.length > 0
      ? "Navajo collation supported"
      : "Navajo collation not supported");

var germanFakeRegion =
  new Intl.DateTimeFormat("de-XX", { timeZone: "UTC" });
var usedOptions = germanFakeRegion.resolvedOptions();
print(usedOptions.locale);   // de
print(usedOptions.timeZone); // UTC

Legacy behavior

The ES5 toLocaleString-style and localeCompare functions previously had no particular semantics, accepted no particular options, and were largely useless. So the i18n API reformulates them in terms of Intl operations. Each method now accepts additional trailing locales and options arguments, interpreted just as the Intl constructors would do. (Except that for toLocaleTimeString and toLocaleDateString, different default components are used if options aren’t provided.)

For brief use where precise behavior doesn’t matter, the old methods are fine to use. But if you need more control or are formatting or comparing many times, it’s best to use the Intl primitives directly.
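
Here is a small sketch of the reformulated legacy methods in action (the exact phrasing of the output will vary across implementations and versions):

var d = new Date(Date.UTC(2014, 6, 17));

// The trailing locales/options arguments are interpreted just as the
// Intl constructors would interpret them.
print(d.toLocaleDateString("de-DE",
                           { weekday: "long", year: "numeric",
                             month: "long", day: "numeric",
                             timeZone: "UTC" }));
// something like "Donnerstag, 17. Juli 2014"

print((1234.5).toLocaleString("en-US",
                              { style: "currency", currency: "USD" }));
// something like "$1,234.50"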

Conclusion

Internationalization is a fascinating topic whose complexity is bounded only by the varied nature of human communication. The Internationalization API addresses a small but quite useful portion of that complexity, making it easier to produce locale-sensitive web applications. Go use it!

(And a special thanks to Norbert Lindenberg, Anas El Husseini, Simon Montagu, Gary Kwong, Shu-yu Guo, Ehsan Akhgari, the people of #mozilla.de, and anyone I may have forgotten [sorry!] who provided feedback on this article or assisted me in producing and critiquing the examples. The English and German examples were the limit of my knowledge, and I’d have been completely lost on the other examples without their assistance. Blame all remaining errors on me. Thanks again!)

(and to reiterate: comment on the Hacks post if you have anything to say)

Sriram RamasubramanianCentered Buttons

How can we use the same hack as Multiple Text Layout in some UI we need most of the time? Let’s take buttons, for example. If we want the glyph in the button to be centered along with the text, we cannot use compound drawables — as they are always drawn along the edges of the container.

Centered Buttons

We could use our getCompoundPaddingLeft() to pack the glyph with the text.

    @Override
    public int getCompoundPaddingLeft() {
        // Ideally we should be overriding getTotalPaddingLeft().
        // However, android doesn't make use of that method,
        // instead uses this method for calculations.
        int paddingLeft = super.getCompoundPaddingLeft();
        paddingLeft += mDrawableWidth + getCompoundDrawablePadding();
        return paddingLeft;
    }

This offsets the space on the left and Android will take care of placing the text accordingly. Now we can place the Drawable in the space we created.

    @Override
    protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec) {
        super.onMeasure(widthMeasureSpec, heightMeasureSpec);

        int paddingLeft = getPaddingLeft();
        int paddingRight = getPaddingRight();

        int drawableVerticalHeight = mDrawableHeight + getPaddingTop() + getPaddingBottom();
        int width = getMeasuredWidth();
        int height = Math.max(drawableVerticalHeight, getMeasuredHeight());
        setMeasuredDimension(width, height);

        int compoundPadding = getCompoundDrawablePadding();
        float totalWidth = mDrawableWidth + compoundPadding + getLayout().getLineWidth(0);
        float offsetX = (width - totalWidth - paddingLeft - paddingRight)/2.0f;
        mTranslateX = offsetX + paddingLeft;
        mTranslateY = (height - mDrawableHeight)/2.0f;
    }

mTranslateX and mTranslateY hold how far to translate when drawing the drawable. Either the Drawable’s bounds can be shifted inside onMeasure() to reflect the translation, or the Canvas can be translated inside onDraw(). This lets us draw the glyph centered along with the text as part of a Button!


Kim MoirReleng 2015 CFP now open

Florence, Italy.  Home of beautiful architecture.

Il Duomo di Firenze by ©runner310, Creative Commons by-nc-sa 2.0


Delicious food and drink.

Panzanella by © Pete Carpenter, Creative Commons by-nc-sa 2.0

Caffè ristretto by © Marcelo César Augusto Romeo, Creative Commons by-nc-sa 2.0


And next May, release engineering :-)

The CFP for Releng 2015 is now open.  The deadline for submissions is January 23, 2015.  It will be held on May 19, 2015 in Florence Italy and co-located with ICSE 2015.   We look forward to seeing your proposals about the exciting work you're doing in release engineering!

If you have questions about the submission process or anything else, please contact any of the program committee members. My email is kmoir and I work at mozilla.com.

Naoki HirataEinstein Quote for Mozillians

“Out of clutter, find simplicity. From discord find harmony. In the middle of difficulty lies opportunity.” – Albert Einstein

From : http://www.folderarchy.com/albert-einstein/



Fabien CazenaveKompoZer 0.8b2

KompoZer logo

KompoZer 0.8b2 is finally ready. Few visible changes, but a lot of bugfixes and code cleaning under the hood.

You can grab KompoZer 0.8b2 here: http://kompozer.net/download.php

Enjoy, and please report bugs!

Bug Fixes

We’ve tried to solve the most frequently reported bugs:

  • the CSS Editor shouldn't add those annoying “*|” strings in the selectors any more
  • the preview in the “Image Properties” box now works properly
  • better FTP support (right-click in the Site Manager context menu)
  • the markup cleaner doesn't crash on nested lists any more
  • Enter in a paragraph now creates a new paragraph
  • the “Credits” panel in the About box is back ;-)

KompoZer 0.8b2 is now a more reliable editor: the regressions in the CSS editor were a complete blocker for myself, so I guess it’s been a real nightmare for most users. We’ve fixed a lot of small bugs and I think the overall user experience should be much better than with the previous versions.

18*4 Localized Binaries

Cédric Corazza, our l10n lead, has done a great job releasing localized binaries for all the supported languages at once. This time he’s had much more work than for the previous beta:

  • we had 9 locales for the 0.8b1 release, there are 18 locales for 0.8b2:
    • Catalan, Dutch, Hungarian, Japanese got ready after the 0.8b1 release
    • Simplified Chinese, Esperanto, Finnish, Portuguese, Upper Sorbian have been added for the 0.8b2
  • Cédric has made Windows™ installers, which should put an end to one of the most frequent feature requests
  • he’s built all binaries manually, as we don’t have any kind of script to ease this task (I considered that a typical “l10n lead job”)

Cédric, congrats! and go get some sleep, the Korean and Bulgarian locales are getting ready. ;-) I’ll definitely write a few scripts to ease your work for the next release.

Inline Spell Checker

The inline spell checker in KompoZer 0.7.10 was inherited from Nvu, it was implemented with a specific patch against the Gecko 1.7 core and it caused a lot of freezes and crashes. As a result, most users (including myself) disabled it and I didn’t see it as an important feature to bring back in KompoZer 0.8.

As you can guess, a lot of users had a very different opinion on this. :-)

Unlike Gecko 1.7, Gecko 1.8.1 has a very good built-in inline spell checker. I’ve had a look at Thunderbird’s code and I found out enabling the inline spell checker in KompoZer was a snap. I’m sorry I didn’t do it sooner — but now it’s done, and it’s working fine as far as I know.

DOM Explorer Sidebar

I’m working with Fabien ’Kasparov’ Rocu on the next version of the DOM Explorer. As Fabien is implementing his ideas in an extension, I had to clean up the DOM Explorer and add a few hooks for it. To ease the development of his add-on, we’ve decided to implement part of his work directly in KompoZer 0.8b2:

  • the DOM Explorer now shows the HTML attributes of the current element
  • a double-click on an element in the DOM Explorer brings up its “Property” dialog

The real improvement will come with Fabien’s extension, which should be released in April 2010. I’ll come back to this in another blog post.

New Keyboard Shortcuts

I’m known to be dangerously obsessive about computer keyboards: I admit I hate having to use a mouse when I’m editing text. These new keyboard shortcuts aren’t documented; you can see them as a hidden bonus:

  • Ctrl+(Up|Down) moves the caret to the (beginning|end) of the current element
  • Ctrl(+Shift)+Enter adds a new line after (before) the current element
  • Alt+Shift+Enter switches to “Source” view

The Ctrl+Up/Down shortcut is more than a productivity booster. One of the known problems of the Mozilla editor component is that in some situations, it can be difficult to put the caret where you want it: for instance, there’s no easy way to put the caret right after a <div> block if it’s the last block in the page. With KompoZer 0.7.10 you had to select the <div> in the status bar, press the right arrow and hit Return; now all you need to do is press Ctrl+Down.

The “Source” View Still Sucks…

…and I’m aware of that. Please configure KompoZer to use your favorite text editor to work on the HTML source; there’s a dedicated “HTML” button in the main toolbar for that by default. I can’t help it: I hate the “Source” view in Nvu and KompoZer 0.7:

  • I don’t see much point in pseudo syntax highlighting that doesn’t update as you type
  • I don’t see any point in showing line numbers that don’t match the *real* line numbers in the HTML file
  • nobody understands why the “Source” view hides the document tabs
  • it was the main source of crashes for KompoZer 0.7

The SeaMonkey-like plaintext editor, in my opinion, is much better at the moment — and on my first trunk builds (KompoZer 0.9a1pre / Gecko 1.9.3), Bespin is already working quite well.

Again, I understand a lot of users have a very different opinion on this, so I’ve tried an interesting experiment with this “Source” view: basically, I’ve re-written the main <tabeditor> element so it includes its own source editor. This embedded source editor could be used either for the “Split” view or for the “Source” view, and I could switch to “Source” mode without losing the document tabs.

Unfortunately, this new <tabeditor> element raised a few problems that I couldn’t solve easily for this 0.8b2 release, so I’ve had to revert to the good old plaintext editor. For the 0.8b3 I’ll probably re-implement an Nvu-like “Source” view, rather than spending too much time on a feature that won’t work as well as Bespin: I prefer to release KompoZer 0.8 sooner in order to propose a Bespin-based KompoZer 0.9 as soon as possible.

The HTML Serializer Still Sucks…

…but we’re working on it. As you may have noticed, the HTML output of KompoZer 0.8 is already much cleaner than the one we had in KompoZer 0.7, especially if you check the “reformat HTML source” option: the most visible improvement is that there are (almost) no empty lines left in the output files. But your well-defined indentation is still destroyed by KompoZer, which is a real pain when switching to “Source” mode.

Of course, you can use HTML Tidy as a workaround; I even designed an Nvu extension for that at one point. But this means dealing with temp files, serializing the files twice (once with KompoZer, then reformatting with Tidy), and risking data loss (especially with UTF-8, don’t ask me why). And the HTML code in the “Source” view is still a mess.

The great news is, Laurent Jouanneau has backported his XHTML serializer to Gecko 1.8.1 so I could use it for KompoZer 0.8 — and the first results look great! See this small example I saved with KompoZer 0.7.10, KompoZer 0.8b2 and KompoZer 0.8b3pre. Looks like we can finally get rid of HTML Tidy!

Almost Done

There are four main points to address before we can release a third (and hopefully last) beta:

  • adapt KompoZer 0.8 to the new HTML serializer;
  • get some kind of colored source view working;
  • fix the bugs in the “Split” view so people start using it;
  • work on FTP support to replace the current “Publish” button.

Please test this new version and report bugs. Many thanks to all users who donated or gave some time to keep this project running!

Fabien CazenaveKompoZer 0.8b3

KompoZer logo

We’ve just released KompoZer 0.8b3.

Localized binaries are available on the official download page: http://kompozer.net/download.php.

This maintenance release fixes two regressions that were introduced in the previous beta:

  • bug #2957813, the "Source" mode was not applying modifications properly
  • bug #2959534, the "class" drop-down list was broken by a dirty attempt to make it UTF-8-friendly

I didn’t want to take the risk of addressing other bugs, but I did work on bug 1831943 by disabling line wrapping for Asian users. The relevant preference (editor.htmlWrapColumn) is now set to zero for Chinese (zh-CN, zh-TW) and Japanese (ja) builds, and it should be read properly by KompoZer, both when switching to “Source” mode and when saving HTML documents. This is still experimental, so your feedback is welcome.
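For reference, Mozilla-based applications declare this kind of default with the pref() syntax and let users override it with user_pref() in their profile. A minimal sketch, assuming only that standard syntax (where exactly the KompoZer builds ship this default is not shown here):

    // shipped default in the zh-CN, zh-TW and ja builds (0 = never wrap long lines)
    pref("editor.htmlWrapColumn", 0);
    // equivalent manual override in a profile's user.js
    user_pref("editor.htmlWrapColumn", 0);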

We’ve spent a few hours designing a bash/python script to make localized binaries for the 18 languages that are currently supported by KompoZer. This script works fine on Linux and OS X, and it can build win32 installers by launching the InnoSetup compiler through Wine. It also checks that I haven’t forgotten to include the MSVC7 DLLs in the win32 binaries, which should spare us a few bad surprises in the next releases…
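To give an idea of the shape of such a tool, here is only a sketch, not the actual script: the locale list, directory layout, repackaging command, InnoSetup project name and DLL names are all assumptions. The Python side could look roughly like this:

    # Rough sketch of a localized-binary build loop (all names are placeholders).
    import os
    import subprocess
    import sys

    LOCALES = ["en-US", "fr", "de", "ja", "zh-CN"]      # 18 locales in the real project
    MSVC7_DLLS = ["msvcr71.dll", "msvcp71.dll"]         # assumed MSVC7 runtime DLLs

    def build_locale(locale, platform):
        # Repackage the en-US build with the given language pack (placeholder command).
        subprocess.check_call(["./repackage.sh", locale, platform])

    def check_msvc7_dlls(build_dir):
        # Refuse to ship a win32 build that is missing the MSVC7 runtime DLLs.
        missing = [dll for dll in MSVC7_DLLS
                   if not os.path.exists(os.path.join(build_dir, dll))]
        if missing:
            sys.exit("missing DLLs in %s: %s" % (build_dir, ", ".join(missing)))

    def build_win32_installer(locale):
        # Drive the InnoSetup compiler through Wine (hypothetical .iss project file).
        subprocess.check_call(["wine", "ISCC.exe", "kompozer-%s.iss" % locale])

    for locale in LOCALES:
        for platform in ("linux", "mac", "win32"):
            build_locale(locale, platform)
            if platform == "win32":
                check_msvc7_dlls(os.path.join("dist", "win32", locale))
                build_win32_installer(locale)

The point is the overall shape: build every locale on every platform, and never let a win32 package through without its runtime DLLs.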

For the next beta we’ll focus on the “Source” view and the FTP module. We’ll do our best to release it in March.

EDIT: In case you’ve downloaded a Windows build with missing MSVC7 DLLs, I’ve just changed the path of all Windows binaries on SourceForge. Please download KompoZer 0.8b3 again; the problem should be solved. Sorry for the trouble. :-/

Chris IliasMy Installed Add-ons – Context Search

I love finding new extensions that do things I never even thought to search for. One of the best ways to find them is through word of mouth. In this case, I guess you can call it “word of blog”. I’d like to start a series of blog posts about the extensions I use, and maybe you’ll see one that you want to use.

The first one is Context Search, one of those extensions I think should be part of Firefox. It lets you choose which search engine to use for each search. If it's a word you aren't familiar with, you can choose the Webster's search engine. If it's an acronym you aren't familiar with, you can choose the Acronym Finder search engine.

Without the extension, when you highlight text and right-click, the menu contains a single item to search your preferred search engine for the highlighted text. With Context Search, you instead get a list of your installed search engines, so you can pick which one to use. The search results open in a new tab. I find myself using it more than the search bar.

Here’s a screenshot:

You can install it via the Mozilla Add-ons site.