Emma Irwin: Passports for Community Leadership

This is #1 of 5 posts I identified as perhaps being worth finishing and sharing.  Writing never feels finished, and it’s a vulnerable thing to share ideas – but perhaps better than never sharing them at all?

I wrote most of this post in April of this year (which makes it somewhat outdated relative to the current work of the Participation Team), thinking about ways the learning format of the Leadership Summit in Singapore could evolve into a valuable tool for community leadership development and credentialing.  Community Leadership Passport(s), perhaps…


At the Participation Leadership Summit in Singapore, we designed the schedule in time blocks sorted by the Leadership Framework.  This meant that everyone attended at least one session identified under each of the building blocks.  The schedule was structured something like this…


As you can see, the structure ensured that everyone experienced learning outcomes across the entire framework, while still providing choice in what felt most relevant, exciting or interesting for their personal development.  You can find some of this content here.

I started wondering…

How might we evolve the schedule design and content into a format for leadership development that also provides real world credentials?

I don’t think the answer is to take this schedule and make it a static ‘course’ or offering, and I don’t think it is about an ‘event in a box’, but I do think there’s something in using the framework to enforce quality leadership development, while giving power to what people want to learn, and how they prefer to learn.

I imagine merging this idea with my previous work on participation ‘steps & ladders’ into something like a passport, or a series of passports, for leadership.



Really, this is about creating a mechanism for helping people build leadership credentials in a way that intersects what they want to learn and do with what the project needs. It could be used for anything from developing strong mentors, to project leads in areas like IoT and Rust, to governance and diversity & inclusion. Imagining passports with three attributes:

Experience – Taking action, completing tasks, generating experiences associated with learning and project outcomes. Should be clear, and feel doable without too much detail.

Mozilla Content – Completing a course either developed by Mozilla, or approved as Mozilla content.  These could be online courses, or in-person events.

Learner Choice – Encouraging exploration, and learning that feels valuable, interesting and fun – but with some guidelines for topics, outcomes and likely recommendations to make things easier.  For example, some people might want to complete a Coursera course on IoT and embedded systems, while others might prefer a ‘learning by doing’ approach via YouTube channels.

Something like a Leadership Passport would obviously require more thought around implementation, tracking and issuing certification. It could also be used to test and evolve the Leadership Framework. I prefer it over a participation ladder because it feels less prescriptive about ‘how’ we step up as leaders and more supportive of the ways we want to learn and lead, ultimately helping us recognize and invest in emerging leaders sooner.

Image Credit:  Kate Harding – Quilt of Nations.


Air Mozilla: Connected Devices Weekly Program Update, 26 Jul 2016

Weekly project updates from the Mozilla Connected Devices team.

Andreas Tolfsen: Update from WebDriver WG meeting in July 2016

The W3C Browser Testing and Tools Working Group met again on 13-14 July 2016 in Redmond, WA to discuss the progress of the WebDriver specification. I will try to summarise the discussions, but if you’re interested in all the details, the meetings have been meticulously scribed.

I wrote about the progress from our TPAC 2015 meeting previously, and we appear to have made good progress since then. The specification text is nearing completion, although it is missing a few important chapters: Some particularly obvious omissions are the complete lack of input handling, and a big, difficult void where advanced user actions are meant to be.


James has been hard at work drafting a proposal for action semantics, which we went over in great detail. I think it’s fair to say there had been conceptual agreement in the working group on what the actions were meant to accomplish, but that the details of how they were going to work were extremely scarce.

WebDriver tries to innovate on the actions as they appear in Selenium. Actions in Selenium were originally meant to provide a way to pipeline a sequence of interactions—such as pressing down a mouse button, moving the mouse, and releasing it—through a complex data structure to a single command endpoint. The idea was that this would help address some of the race conditions that are intrinsically part of the one-directional design of the protocol, and reduce latency which may be critical when interacting with a document.

Unfortunately the pipelining design to reduce the number of HTTP requests was never quite implemented in Selenium, and the API design suffered from over-specialisation of different types of input devices and actions. The specification attempts to rectify this by generalising the range of input device classes, and by associating the actions that can be performed with a certain class. This means we are moving away from a flat sequence of types, such as [{type: "mouseDown"}, {type: "mouseMove"}, {type: "mouseUp"}] to a model where each input device has its own “track”. This limits the actions you can perform with each device, which makes some conceptual sense because it would be impossible to i.e. type keys with a mouse or press a mouse button with a stylus/pen input device.

This means we are moving away from a flat sequence of types, such as [{type: "mouseDown"}, {type: "mouseMove"}, {type: "mouseUp"}], to a model where each input device has its own “track”. This limits the actions you can perform with each device, which makes some conceptual sense because it would be impossible to, say, type keys with a mouse or press a mouse button with a stylus/pen input device. The side-effect of this design is that it allows for parallelisation of actions from one or more types of input devices. This is an important development, as it makes it possible to combine primitives for input methods such as touch: in reality, a device cannot determine whether two fingers are “associated” with the same hand. So instead of defining high-level actions such as pinch and flick, it gives you the right level of granularity to combine actions from two or more touch “finger” devices to synthesise more complex movements. We believe this is a good approach that doesn’t try to over-specify or shoehorn in primitives that might not make sense in a cross-browser automation setting.
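A pinch, for instance, could be synthesised by running two touch “finger” tracks in parallel. The following is a hypothetical sketch; the field names are illustrative rather than taken from the spec:

```python
# A hypothetical pinch gesture composed from two touch "finger" tracks
# moving toward each other in parallel. The spec deliberately defines
# no high-level "pinch" primitive; it emerges from the combination.
def finger_track(device_id, start_x, end_x, y):
    return {
        "type": "pointer",
        "id": device_id,
        "parameters": {"pointerType": "touch"},
        "actions": [
            {"type": "pointerMove", "x": start_x, "y": y},
            {"type": "pointerDown", "button": 0},
            {"type": "pointerMove", "x": end_x, "y": y, "duration": 200},
            {"type": "pointerUp", "button": 0},
        ],
    }

# Two fingers starting 200px apart and converging to 20px apart:
pinch = {"actions": [finger_track("finger1", 100, 190, 300),
                     finger_track("finger2", 300, 210, 300)]}
assert len(pinch["actions"]) == 2
```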

I’m looking forward to seeing James’ work land in the specification text. I think some explanatory notes and examples will probably be required to fully explain this concept to both implementors and users.

Input locality

A known limitation of Selenium that we are not proud of is that it does not have a good story for input with alternative keyboard layouts. We have explicitly phrased the specification in such a way that it doesn’t preclude retrofitting support for multiple layouts in the future. But right now we want to finish the baseline of the specification before we move on to this.

The current design ideas floating around are to have some way of setting a keyboard layout, either through a command or a capability. This would allow / to generate key events for Shift and ? on an American layout, and Shift and 7 on a Norwegian layout. The biggest reason this is hard is that we need to find the right key code conversion tables for what would happen when typing a given character.
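Such a conversion table might look something like the following sketch. The table contents here are hypothetical; real tables would have to come from the platform’s keyboard layout data, which is exactly the hard part:

```python
# A hypothetical per-layout conversion table mapping a character to the
# key sequence that produces it. Real tables would have to come from
# the platform's keyboard layout data.
LAYOUTS = {
    "en-US": {"/": ("/",), "?": ("Shift", "/")},
    "nb-NO": {"/": ("Shift", "7"), "?": ("Shift", "+")},
}

def keys_for(char, layout):
    """Return the (modifier, key) sequence needed to type char."""
    return LAYOUTS[layout][char]

# Typing "/" requires different key events depending on the layout:
assert keys_for("/", "en-US") == ("/",)
assert keys_for("/", "nb-NO") == ("Shift", "7")
```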

Untrusted SSL certificates

We had a big discussion on invalid, self-signed, and untrusted SSL certificates. The general agreement in the WG is that it would be good to have functionality to allow a WebDriver session to bypass the security checks associated with them, as WebDriver may be run in an environment where it is difficult or even impossible to instrument the browser/environment in such a way that they are accepted implicitly (e.g. by modifying the root store).

Different browser vendors raised questions over whether this would pass security review as implementing such a feature increases the attack surface in one of the most critical components in web browsers. A counterargument is that by the point your browser has WebDriver enabled, you probably have bigger things to worry about than the fact that untrusted certificates are implicitly accepted.

We also found that this is highly inconsistently implemented in Selenium. For the two drivers that support it, FirefoxDriver (written and maintained by Selenium) has an acceptSslCerts capability that takes a boolean to switch off security checks, and chromedriver (by Google) by contrast accepts all certificates by default. The remaining drivers have no support for it.

This leaves the working group free to decide on a new and consistent approach. One point of concern is that a boolean to disable all security checks seems like an overly coarse design. A suggested alternative is to provide a list of domains to disable the checks for, where wildcards can be expanded to cover every domain or every subdomain, so that, for example, ["*"] would be equivalent to setting acceptSslCerts to true in today’s Firefox implementation, but ["*.sny.no"] would only disable certificate checks for that domain.
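A sketch of how such an allow-list might be matched; the exact matching semantics here are an assumption, since the working group had not settled on any:

```python
from fnmatch import fnmatch

def insecure_cert_allowed(host, patterns):
    """Would certificate security checks be disabled for this host?"""
    return any(fnmatch(host, pattern) for pattern in patterns)

# ["*"] behaves like acceptSslCerts=true: every host matches.
assert insecure_cert_allowed("example.org", ["*"])
# ["*.sny.no"] only disables checks for subdomains of sny.no.
assert insecure_cert_allowed("www.sny.no", ["*.sny.no"])
assert not insecure_cert_allowed("example.org", ["*.sny.no"])
```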

Navigation and URLs

Because WebDriver taps into the browser’s navigation algorithm at a much later point than when a user interacts with the address bar, we decided that malformed URLs should consistently return an error. We have also changed the prose to no longer mislead users to think that navigating in effect means the same as using the address bar; the address bar is not a concept of the web platform.

There was a proposal from Mozilla to allow navigation to relative URLs, so that one could navigate to, say, "/foo" to go to that path on the current domain, similar to how window.location = "/foo" works. This was unfortunately voted down. I feel it would be useful, if only for consistency, for the WebDriver navigation command to mirror the platform API, modulo security checks.
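The resolution semantics being proposed would presumably match ordinary URL resolution, which can be sketched with Python’s standard library:

```python
from urllib.parse import urljoin

# Resolving "/foo" against the current page, the way
# window.location = "/foo" does in the browser:
current = "https://example.org/some/page"
assert urljoin(current, "/foo") == "https://example.org/foo"
```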

Desired vs. required capabilities

A big discussion during the meeting was around the continuing confusion over capabilities: many feel they are an intermediary node concept that is best left undefined in the core specification text itself, because the specification explicitly does not define any qualities or expectations of local ends (client bindings) or intermediary nodes (the Selenium server, or a proxy that gives you a session).

There was however consensus around the fact that having a way to pick a browser configuration from some matrix was a good idea. The uncertainty, I think, comes largely from driver implementors who feel that once capabilities reach the driver there is very little that can be done about the sort of conflict resolution that required- and desired capabilities warrant.

For example, what does it mean to desire a profile and how do you know if the provided profile is valid? We were unable to reach any agreement on this and decided to punt the topic for our next meeting in Lisbon.

Test coverage

In order to push the specification to “Rec” (short for Recommendation), one must have at least two interoperable implementations from two separate vendors. To determine that they are interoperable, one needs a test suite. I’ve written previously about the test harness I wrote for the Web Platform Tests that integrates WebDriver spec tests with wptrunner.

We have a few exhaustive tests for a couple of chapters, but I hope to continue this work this quarter.

Next meeting

The working group is meeting again at TPAC, which this year is in Lisbon (how civilised!), in late September. I’m enormously looking forward to visiting, as I’ve never been.

We hope to resolve the outstanding capabilities discussion and make final decisions on a few more minor outstanding issues then.

Tim Taubert: The Evolution of Signatures in TLS

This post takes a look at the evolution of signature algorithms and schemes in the TLS protocol since version 1.0. I started taking notes for myself at first, but then decided to polish and publish them, hoping that others will benefit as well.

(Let’s ignore client authentication for simplicity.)

Signature algorithms in TLS 1.0 and TLS 1.1

In TLS 1.0 as well as TLS 1.1 there are only two supported signature schemes: RSA with MD5/SHA-1 and DSA with SHA-1. The RSA here stands for the PKCS#1 v1.5 signature scheme, naturally.

select (SignatureAlgorithm) {
    case rsa:
        digitally-signed struct {
            opaque md5_hash[16];
            opaque sha_hash[20];
        };
    case dsa:
        digitally-signed struct {
            opaque sha_hash[20];
        };
} Signature;

An RSA signature signs the concatenation of the MD5 and SHA-1 digests; the DSA signature covers only the SHA-1 digest. The hashes are computed as follows:

h = Hash(ClientHello.random + ServerHello.random + ServerParams)

The ServerParams are the actual data to be signed, the *Hello.random values are prepended to prevent replay attacks. This is the reason TLS 1.3 puts a downgrade sentinel at the end of ServerHello.random for clients to check.
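As a sketch in Python, with placeholder values standing in for the real handshake fields:

```python
import hashlib
import os

# Placeholder values standing in for the real handshake fields:
client_random = os.urandom(32)             # ClientHello.random
server_random = os.urandom(32)             # ServerHello.random
server_params = b"<serialized DH params>"  # the data actually being signed

data = client_random + server_random + server_params

# RSA signatures cover MD5 || SHA-1 of the data...
rsa_signed_input = hashlib.md5(data).digest() + hashlib.sha1(data).digest()
# ...while DSA signatures cover only the SHA-1 digest.
dsa_signed_input = hashlib.sha1(data).digest()

assert len(rsa_signed_input) == 16 + 20
assert len(dsa_signed_input) == 20
```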

The ServerKeyExchange message containing the signature is sent only when static RSA/DH key exchange is not used, that means we have a DHE_* cipher suite, an RSA_EXPORT_* suite downgraded due to export restrictions, or a DH_anon_* suite where neither party authenticates.

Signature algorithms in TLS 1.2

TLS 1.2 brought bigger changes to signature algorithms by introducing the signature_algorithms extension. This is a ClientHello extension allowing clients to signal supported and preferred signature algorithms and hash functions.

enum {
    none(0), md5(1), sha1(2), sha224(3), sha256(4), sha384(5), sha512(6)
} HashAlgorithm;

enum {
    anonymous(0), rsa(1), dsa(2), ecdsa(3)
} SignatureAlgorithm;

struct {
    HashAlgorithm hash;
    SignatureAlgorithm signature;
} SignatureAndHashAlgorithm;

If a client does not include the signature_algorithms extension then it is assumed to support RSA, DSA, or ECDSA (depending on the negotiated cipher suite) with SHA-1 as the hash function.

Besides adding all SHA-2 family hash functions, TLS 1.2 also introduced ECDSA as a new signature algorithm. Note that the extension does not allow restricting the curve used for a given scheme; P-521 with SHA-1 is therefore perfectly legal.

A new requirement for RSA signatures is that the hash has to be wrapped in a DER-encoded DigestInfo sequence before passing it to the RSA sign function.

DigestInfo ::= SEQUENCE {
    digestAlgorithm DigestAlgorithm,
    digest OCTET STRING
}
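For example, the wrapping for a SHA-256 digest (a hash choice TLS 1.2 permits) can be sketched with the fixed 19-byte DER header that PKCS#1 defines for SHA-256:

```python
import hashlib

# The fixed DER-encoded DigestInfo header for SHA-256, from PKCS#1:
SHA256_DIGESTINFO_PREFIX = bytes.fromhex(
    "3031300d060960864801650304020105000420"
)

def digest_info_sha256(message):
    """Wrap a SHA-256 digest in its DER-encoded DigestInfo sequence."""
    return SHA256_DIGESTINFO_PREFIX + hashlib.sha256(message).digest()

# 19 bytes of ASN.1 header followed by the 32-byte digest:
assert len(digest_info_sha256(b"hello")) == 19 + 32
```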

This unfortunately led to attacks like Bleichenbacher’06 and BERserk because it turns out handling ASN.1 correctly is hard. As in TLS 1.1, a ServerKeyExchange message is sent only when static RSA/DH key exchange is not used. The hash computation did not change either:

h = Hash(ClientHello.random + ServerHello.random + ServerParams)

Signature schemes in TLS 1.3

The signature_algorithms extension introduced by TLS 1.2 was revamped in TLS 1.3 and MUST now be sent if the client offers a single non-PSK cipher suite. The format is backwards compatible and keeps old code points.

enum {
    /* RSASSA-PKCS1-v1_5 algorithms */
    rsa_pkcs1_sha1 (0x0201),
    rsa_pkcs1_sha256 (0x0401),
    rsa_pkcs1_sha384 (0x0501),
    rsa_pkcs1_sha512 (0x0601),

    /* ECDSA algorithms */
    ecdsa_secp256r1_sha256 (0x0403),
    ecdsa_secp384r1_sha384 (0x0503),
    ecdsa_secp521r1_sha512 (0x0603),

    /* RSASSA-PSS algorithms */
    rsa_pss_sha256 (0x0700),
    rsa_pss_sha384 (0x0701),
    rsa_pss_sha512 (0x0702),

    /* EdDSA algorithms */
    ed25519 (0x0703),
    ed448 (0x0704),

    /* Reserved Code Points */
    private_use (0xFE00..0xFFFF)
} SignatureScheme;

Instead of SignatureAndHashAlgorithm, a code point is now called a SignatureScheme and tied to a hash function (if applicable) by the specification. TLS 1.2 algorithm/hash combinations not listed here are deprecated and MUST NOT be offered or negotiated.

New code points for RSA-PSS schemes, as well as Ed25519 and Ed448-Goldilocks were added. ECDSA schemes are now tied to the curve given by the code point name, to be enforced by implementations. SHA-1 signature schemes SHOULD NOT be offered, if needed for backwards compatibility then only as the lowest priority after all other schemes.

The current draft-13 still lists RSASSA-PSS as the only valid signature algorithm allowed to sign handshake messages with an RSA key. The rsa_pkcs1_* values solely refer to signatures which appear in certificates and are not defined for use in signed handshake messages. There is hope.

To prevent various downgrade attacks like FREAK and Logjam the computation of the hashes to be signed has changed significantly and covers the complete handshake, up until CertificateVerify:

h = Hash(Handshake Context + Certificate) + Hash(Resumption Context)

This includes amongst other data the client and server random, key shares, the cipher suite, the certificate, and resumption information to prevent replay and downgrade attacks. With static key exchange algorithms gone the CertificateVerify message is now the one carrying the signature.

Giorgos Logiotatidis: User friendly website analytics with Sandstorm Oasis and Piwik

Piwik is a great FLOSS website analytics platform. I've been self-hosting it for different small websites I've managed through the years. Although it's fairly easy to set up and maintain at this level of use, I want to avoid having another service on my maintenance list.

While looking for user- and web-respecting alternatives (read: something other than Google Analytics), I realized that Sandstorm Oasis supports Piwik.

I logged in and set up my Piwik instance, or Grain as Sandstorm calls instances, in less than 30 seconds. The tricky part is to copy the code provided by Sandstorm instead of using the code in the Piwik documentation, since it's customized to work with Sandstorm's special API interface. Paste it in the HTML and you're done!

So if you're looking for decent solutions that respect your users and the web, give Sandstorm a try. They are on a "mission to make open source and indie web applications viable as an ecosystem" and they are doing so by developing a platform which makes it super easy to run many open source web apps, like Piwik, Rocket.Chat, Ghost, GitLab, Wordpress and others. Their hosted Oasis platform also comes with a free plan.

This Week in Rust: This Week in Rust 140

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us an email! Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

New Crates & Project Updates

Crate of the Week

In what seems to be becoming a tradition, user gsingh93 suggested his trace crate, a syntax extension that inserts print! statements into functions to help trace execution. Thanks, gsingh93!

Submit your suggestions for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

76 pull requests were merged in the last two weeks.

New Contributors

  • Evgeny Safronov
  • Matt Horn

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

fn work(on: RustProject) -> Money

No jobs listed for this week.

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

you have a problem. you decide to use Rust. now you have a Rc<RefCell<Box<Problem>>>

kmc on #rust.

Thanks to Alex Burka for the tip. Submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and brson.

Nicholas Nethercote: Firefox 64-bit for Windows can take advantage of more memory

By default, on Windows, Firefox is a 32-bit application. This means that it is limited to using at most 4 GiB of memory, even on machines that have more than 4 GiB of physical memory (RAM). In fact, depending on the OS configuration, the limit may be as low as 2 GiB.

Now, 2–4 GiB might sound like a lot of memory, but it’s not that unusual for power users to use that much. This includes:

  • users with many (dozens or even hundreds) of tabs open;
  • users with many (dozens) of extensions;
  • users of memory-hungry web sites and web apps; and
  • users who do all of the above!

Furthermore, in practice it’s not possible to totally fill up this available space because fragmentation inevitably occurs. For example, Firefox might need to make a 10 MiB allocation and there might be more than 10 MiB of unused memory, but if that available memory is divided into many pieces all of which are smaller than 10 MiB, then the allocation will fail.

When an allocation does fail, Firefox can sometimes handle it gracefully. But often this isn’t possible, in which case Firefox will abort. Although this is a controlled abort, the effect for the user is basically identical to an uncontrolled crash, and they’ll have to restart Firefox. A significant fraction of Firefox crashes/aborts are due to this problem, known as address space exhaustion.

Fortunately, there is a solution to this problem available to anyone using a 64-bit version of Windows: use a 64-bit version of Firefox. Now, 64-bit applications typically use more memory than 32-bit applications. This is because pointers, a common data type, are twice as big; a rough estimate for 64-bit Firefox is that it might use 25% more memory. However, 64-bit applications also have a much larger address space, which means they can access vast amounts of physical memory, and address space exhaustion is all but impossible. (In this way, switching from a 32-bit version of an application to a 64-bit version is the closest you can get to downloading more RAM!)
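The 4 GiB ceiling and the enormous 64-bit address space both fall directly out of pointer width, which is easy to check:

```python
# The address-space ceilings implied by pointer width:
GiB = 1024**3
EiB = 1024**6

assert 2**32 == 4 * GiB    # 32-bit pointers: at most 4 GiB addressable
assert 2**64 == 16 * EiB   # 64-bit pointers: effectively inexhaustible
```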

Therefore, if you have a machine with 4 GiB or less of RAM, switching to 64-bit Firefox probably won’t help. But if you have 8 GiB or more, switching to 64-bit Firefox probably will help the memory usage situation.

Official 64-bit versions of Firefox have been available since December 2015. If the above discussion has interested you, please try them out. But note the following caveats.

  • Flash and Silverlight are the only supported 64-bit plugins.
  • There are some Flash content regressions due to our NPAPI sandbox (for content that uses advanced features like GPU acceleration or microphone APIs).

On the flip side, as well as avoiding address space exhaustion problems, a security feature known as ASLR works much better in 64-bit applications than in 32-bit applications, so 64-bit Firefox will be slightly more secure.

Work is ongoing to fix or minimize the caveats mentioned above, and 64-bit Firefox is expected to be rolled out to increasing numbers of users in the not-too-distant future.

UPDATE: Chris Peterson gave me the following measurements about daily active users on Windows.

  • 66.0% are running 32-bit Firefox on 64-bit Windows. These users could switch to a 64-bit Firefox.
  • 32.3% are running 32-bit Firefox on 32-bit Windows. These users cannot switch to a 64-bit Firefox.
  • 1.7% are running 64-bit Firefox already.

UPDATE 2: Also from Chris Peterson, here are links to 64-bit builds for all the channels:

Mozilla Localization (L10N): L20n in Firefox: A Summary for Developers

L20n is a new localization framework for Firefox and Gecko. Here’s what you need to know if you’re a Firefox front-end developer.

Gecko’s current localization framework hasn’t changed in the last two decades. It is based on file formats which weren’t designed for localization. It offers crude APIs. It tasks developers with things they shouldn’t have to do. It doesn’t allow localizers to use the full expressive power of their languages.

L20n is a modern localization and internationalization infrastructure created by the Localization Engineering team in order to overcome these limitations. It was successfully used in Firefox OS. We’ve put parts of it on the ECMA standardization path. Now we intend to integrate it into Gecko and migrate Firefox to it.

Overview of How L20n Works

For Firefox, L20n is most powerful when it’s used declaratively in the DOM. The localization happens at runtime and gracefully falls back to the next language in case of errors. L20n doesn’t force developers to programmatically create string bundles, request raw strings from them and manually interpolate variables. Instead, L20n uses a Mutation Observer which is notified about changes to data-l10n-* attributes in the DOM tree. The complexity of the language negotiation, resource loading, error fallback and string interpolation is hidden in the mutation handler. It is still possible to use the JavaScript API to request a translation manually in rare situations when the DOM is not available (e.g. OS notifications).

What problems does L20n solve?

The current localization infrastructure is tightly coupled: it touches many different areas of the codebase. It also requires many decisions from the developer. Every time someone wants to add a new string, they need to go through the following mental checklist:

  1. Is the translation embedded in HTML or XUL? If so, use the DTD format. Be careful to only use valid entity references or you’ll end up with a Yellow Screen of Death. Sure enough, the list of valid entities is different for HTML and for XUL. (For instance, &hellip; is valid in HTML but not in XUL.)
  2. Is the translation requested dynamically from JavaScript? If so, use the .properties format.
  3. Does the translation use interpolated variables? If so, refer to the documentation on good practices and use #1, %S, %1$S, {name} or &name; depending on the use-case. (That’s five different ways of interpolating data!) For translations requested from JavaScript, replace the interpolation placeables manually with String.prototype.replace.
  4. Does the translation depend on a number in any of the supported languages? If so, use the PluralForm.jsm module to choose the correct variant of the translation. Specify all variants on a single line of the .properties file, separated by semicolons.
  5. Does the translation comprise HTML elements? If so, split the copy into smaller parts surrounding the HTML elements and put each part in its own translation. Remember to keep them in sync in case of changes to the copy. Alternatively write your own solution for replacing interpolation specifiers with HTML markup.

What a ride! All of this just to add a simple You have no new notifications message to the UI.  How do we fix this tight coupling?

L20n is designed around the principle of separation of concerns. It introduces a single syntax for all use-cases and offers a robust fallback mechanism in case of missing or broken translations.

Let’s take a closer look at some of the features of L20n which mitigate the headaches outlined above.

Single syntax

In addition to DTD and .properties files Gecko currently also uses .ini and .inc files for a total of four different localization formats.

L20n introduces a single file format based on ICU’s MessageFormat. It’s designed to look familiar to people who have previous experience with .properties and .ini. If you’ve worked with .properties or .ini before you already know how to create simple L20n translations.


Fig. 1. A primer on the FTL syntax

A single localization format greatly reduces the complexity of the ecosystem. It’s designed to keep simple translations simple and readable. At the same time it allows for more control from localizers when it comes to defining and selecting variants of translations for different plural categories, genders, grammatical cases etc. These features can be introduced only in translations which need them and never leak into other languages. You can learn more about L20n’s syntax in my previous blog post and at http://l20n.org/learn. An interactive editor is also available at https://l20n.github.io/tinker.

Separation of Concerns: Plurals and Interpolation

In L20n all the logic related to selecting the right variant of the translation happens inside of the localization framework. Similarly L20n takes care of the interpolation of external variables into the translations. As a developer, all you need to do is declare which translation identifier you are interested in and pass the raw data that is relevant.


Fig. 2. Plurals and interpolation in L20n

In the example above you’ll note that in the BEFORE version the developer had to manually call the PluralForm API. Furthermore, the calling code is also responsible for replacing #1 with the relevant datum. There is no error checking: if the translation contains an error (perhaps a typo in #1), the replace() will silently fail and the final message displayed to the user will be broken.

Separation of Concerns: Intl Formatters

L20n builds on top of existing standards like ECMA 402’s Intl API (itself based in large part on Unicode’s ICU). The Localization team has also been active in advancing proposals and specifications for new formatters.

L20n provides an easy way to use Intl formatters from within translations. Often, the Intl API removes the need to go through the localization layer entirely. In the example below, the logic for displaying relative time (“2 days ago”) has been replaced by a single call to a new Intl formatter, Intl.RelativeTimeFormat.


Fig. 3. Intl API in use

Separation of Concerns: HTML in Translations

L20n allows for some semantic markup in translations. Localizers can use safe text-level HTML elements to create translations which obey the rules of typography and punctuation. Developers can also embed interactive elements inside of translations and attach event handlers to them in HTML or XUL. L20n will overlay translations on top of the source DOM tree preserving the identity of elements and the event listeners.


Fig. 4. Semantic markup in L20n

In the example above the BEFORE version must resort to splitting the translation into multiple parts, each for a possible piece of translation surrounding the two <label> elements.  The L20n version only defines a single translation unit and the localizer is free to position the <label> elements as they see fit.

Resilient to Errors

L20n provides a graceful and robust fallback mechanism in case of missing or broken translations. If you’re a Firefox front-end developer you might be familiar with this image:


Fig. 5. Yellow Screen of Death

This error happens whenever a DTD file is broken. The cause might be as subtle as a translation using the &hellip; entity, which is valid in HTML but not in XUL.

In L20n, broken translations never break the UI. L20n tries its best to display a meaningful message to the user in case of errors. It may try to fall back to the next language preferred by the user if it’s available. As the last resort L20n will show the identifier of the message.

New Features

L20n allows us to re-think major design decisions related to localization in Firefox. The first area of innovation that we’re currently exploring is the experience of changing the browser’s UI language. A runtime localization framework allows the change to happen seamlessly on the fly without restarts. It will also become possible to go back and forth between languages for just a part of the UI, a feature often requested by non-English users of Developer Tools.

Another innovation that we’re excited about is the ability to push updates to the existing translations independent of the software updates which currently happen approximately every 6 weeks. We call this feature Live Updates to Localizations.

We want to decouple the release schedule of Firefox from the release schedule of localizations. The whole release process can then become more flexible and new translations can be delivered to users outside of regular software updates.


L20n’s goal is to improve Mozilla’s ability to create quality multilingual user interfaces, simplify the localization process for developers, improve error recovery and allow us to innovate.

The migration will result in a cleaner and easier-to-maintain code base. It will improve the quality and the security of Firefox. It will provide a resilient runtime fallback, loosening the ties between code and localizations. And it will open up many new opportunities to innovate.

Daniel StenbergA workshop Monday

I decided I’d show up a little early at the Sheraton, as I’ve been handling the interactions with the hotel locally here in Stockholm, where the workshop will run for the coming three days. Things were on track, if we ignore how they got the wrong name of the workshop on the info screens in the lobby, instead saying “Haxx Ab”…

Mark welcomed us with a quick overview of what we’re here for and quick run-through of the rough planning for the days. Our schedule is deliberately loose and open to allow for changes and adaptations as we go along.

Patrick talked about the 1½ years of HTTP/2 work in Firefox so far, and we discussed a lot around the numbers and telemetry: what they mean, why they look like this, and so on. HTTP/2 is now at 44% of all HTTPS requests, and connections using HTTP/2 carry more than 8 requests at the median (compared to slightly over 1 in the HTTP/1 case). What’s almost not used at all? HTTP/2 server push, Alt-Svc and HTTP 308 responses. Patrick’s presentation triggered a lot of good discussions. His slides are here.

RTT distribution for Firefox running on desktop and mobile, from Patrick’s slide set:


The lunch was lovely.

Vlad then continued with experiences from implementing and providing server push at Cloudflare. His talk and the associated discussions helped emphasize that users need better guidance on how to use server push, and that there might be reasons for browsers to change how pushed resources are stored in the current “secondary cache”. Discussions around how to access pushed resources and get information about pushes from JavaScript were also briefly touched on.

After a break with some sweets and coffee, Kazuho continued by describing cache digests and how this concept can help servers make better or more accurate pushes. Back to more discussions around push: what it actually solves, how much complexity it is worth, and so on. I thought I could sense hesitation in the room on whether this is really something to proceed with.

We intend to have a set of lightning talks after lunch each day, and we already have twelve such talks suggested in the workshop wiki, but the discussions were so lively and extensive that we missed them today and even had to postpone the last talk of today until tomorrow. I can already sense how these three days will not be enough for us to cover everything we have listed and planned…

We ended the evening with a great dinner sponsored by Mozilla. I’d say it was a great first day. I’m looking forward to day 2!

Air MozillaMozilla Weekly Project Meeting, 25 Jul 2016

Mozilla Weekly Project Meeting The Monday Project Meeting

The Mozilla BlogSusan Chen, Promoted to Vice President of Business Development

I’m excited to announce that Susan Chen has been appointed Vice President of Business Development at Mozilla, a new role we are creating to recognize her achievements.

Susan joined Mozilla in 2011 as Head of Strategic Development. During her five years at Mozilla, Susan has worked with the Mozilla team to conceive and execute multiple complex negotiations and concluded hundreds of millions of dollars in revenue and partnership deals for Mozilla products and services.

As Vice President of Business Development, Susan is now responsible for planning and executing major business deals and partnerships for Mozilla across its product lines including search, commerce, content, communications, mobile and connected devices. She is also in charge of managing the business development team working across the globe.

We are pleased to recognize Susan’s achievements and expanded scope with the title of Vice President. Please join me in welcoming Susan to the leadership team at Mozilla!


Susan’s bio & Mozillians profile

LinkedIn profile

High-resolution photo

Mitchell BakerUpdate on the United Nations High Level Panel on Women’s Economic Empowerment

It is critical to ensure that women are active participants in digital life. Without this we won’t reach full economic empowerment. This is the perspective and focus I bring to the UN High Level Panel for Women’s Economic Empowerment (HLP), which met last week in Costa Rica, hosted by President Luis Guillermo Solis.

(Here is the previous blog post on this topic.)

Many thanks to President Solis, who led with both commitment and authenticity. Here he shows his prowess with selfie-taking:


Members of the High Level Panel – From Left to Right: Tina Fordham, Citi Research; Laura Tyson, UC Berkeley; Alejandra Mora, Government of Costa Rica; Ahmadou Ba, AllAfrica Global Media; Renana Jhabvala, WIEGO; Elizabeth Vazquez, WeConnect; Jeni Klugman, Harvard Business School; Mitchell Baker, Mozilla; Gwen Hines, DFID-UK; Phumzile Mlambo, UN Women; José Manuel Salazar Xirinachs, International Labour Organization; Simona Scarpaleggia, Ikea; Winnie Byanyima, Oxfam; Fiza Farhan, Buksh Foundation; Karen Grown, World Bank; Margo Thomas, HLP Secretariat.

Photo Credit: Luis Guillermo Solis, President, Costa Rica

In the meeting we learned about actions the Panel members have initiated, and provided feedback and guidelines on the first draft of the HLP report. The goal for the report is to be as concrete as possible in describing actions in women’s economic empowerment which have shown positive results so that interested parties could adopt these successful practices. An initial version of the report will be released in September, with the final report in 2017.  In the meantime, Panel members are also initiating, piloting and sometimes scaling activities that improve women’s economic empowerment.

As Phumzile Mlambo-Ngcuka, the Executive Director of UN Women often says, the best report will be one that points to projects that are known to work. One such example is a set of new initiatives, interventions and commitments to be undertaken in the Punjab, announced by the Panel Member and Deputy from Pakistan, Fiza Farhan and Mahwish Javaid.

Mozilla, too, is engaged in a set of new initiatives. We’ve been tuning our Mozilla Clubs program, a series of on-going events to teach Web Literacy, to be more interesting and accessible to women and girls. We’ve entered into a partnership with UN Women to deepen this work, and the pilots are underway. If you’d like to participate, consider applying your organizational, educational, or web skills to start a Mozilla Club for women and girls in your area. Here are examples of existing clubs for women in Nairobi and Cape Town.

Mozilla is also involved in digital inclusion as a cross-cutting, overarching theme of the HLP report. This is where Anar Simpson, my official Deputy for the Panel, focuses her work. We are liaising with companies in Silicon Valley who work in the fields of connectivity and distribution of access, to explore if, when and how their projects can empower women economically. We’re looking to gather everything they have learned about what has been effective. In addition to this information-gathering task, Mozilla is working with the Panel on the advocacy and publicity efforts for the report.

I joined the Panel because I see it as a valuable mechanism for driving both visibility and action on this topic. Women’s economic empowerment combines social justice, economic growth benefits and the chance for more stability in a fragile world. I look forward to meeting with the UN Panel again in September and reporting back on practical and research-driven initiatives.

QMOFirefox 49.0 Aurora Testday Results

Hello mozillians!

Last week on Friday (July 22nd), we held another successful event – Firefox 49.0 Aurora Testday.

Thank you all for helping us make Mozilla a better place – Moin Shaikh, Georgiu Ciprian, Marko Andrejić, Dineesh Mv, Iryna Thompson.

From Bangladesh: Rezaul Huque Nayeem, Nazir Ahmed Sabbir, Hossain Al Ikram, Azmina Akter Papeya, Md. Rahimul Islam, Forhad Hossain, Akash, Roman Syed, Niaz Bhuiyan Asif, Saddam Hossain, Sajedul Islam, Md.Majedul islam, Fahim, Abdullah Al Jaber Hridoy, Raihan Ali, Md.Ehsanul Hassan, Sauradeep Dutta, Mohammad Maruf Islam, Kazi Nuzhat Tasnem, Maruf Rahman, Fatin Shahazad, Tanvir Rahman, Rakib Rahman, Tazin Ahmed, Shanjida Tahura Himi, Anika Nawar and Md. Nazmus Shakib (Robin).

From India: Nilima, Paarttipaabhalaji, Ashly Rose Mathew M, Selva Makilan R, Prasanth P, Md Shahbaz Alam and Bhuvana Meenakshi.K

A big thank you goes out to all our active moderators too!


I strongly advise every one of you to reach out to us, the moderators, via #qa during the events whenever you encounter any kind of failure. Keep up the great work!

Keep an eye on QMO for upcoming events! 😉

Mike HommeyAnnouncing git-cinnabar 0.4.0 beta 2

Git-cinnabar is a git remote helper to interact with mercurial repositories. It allows you to clone, pull and push from/to mercurial remote repositories, using git.

Get it on github.

These release notes are also available on the git-cinnabar wiki.

What’s new since 0.4.0b1?

  • Some more bug fixes.
  • Updated git to 2.9.2 for cinnabar-helper.
  • Now supports `git push --dry-run`.
  • Added a new `git cinnabar fetch` command to fetch a specific revision that is not necessarily a head.
  • Some improvements to the experimental native wire protocol support.

The Servo BlogThis Week In Servo 72

In the last week, we landed 79 PRs in the Servo organization’s repositories.

The team working on WebBluetooth in Servo has launched a new site! It has two demo videos and very detailed instructions and examples on how to use standards-based Web Platform APIs to connect to Bluetooth devices.

We’d like to especially thank UK992 this week for their AMAZING work helping us out with Windows support! We are really eager to get the Windows development experience from Servo up to par with that of other platforms, and UK992’s work has been essential.

Connor Brewster (cbrewster) has also been on an incredible tear, working with Alan Jeffrey, on figuring out how session history is supposed to work, clarifying the standard and landing some great fixes into Servo.

Planning and Status

Our overall roadmap is available online and now includes the initial Q3 plans. From now on, we plan to include the quarterly plan with a high-level breakdown in the roadmap page.

This week’s status updates are here.

Notable Additions

  • UK992 added support for tinyfiledialogs on Windows, so that we can prompt there, too!
  • UK992 uncovered the MINGW magic to get AppVeyor building again after the GCC 6 bustage
  • jdm made it possible to generate the DOM bindings in parallel, speeding up some incremental builds by nearly a minute!
  • aneesh restored better error logging to our BuildBot configuration and provisioning steps
  • canaltinova fixed the reference test for text alignment in input elements
  • larsberg fixed up some issues preventing the Windows builder from publishing nightlies
  • upsuper added support for generating bindings for MSVC
  • heycam added FFI glue for 1-arg CSS supports() in Stylo
  • manish added Stylo bindings for calc()
  • johannhof ensured we only expose Worker interfaces to workers
  • cbrewster implemented joint session history
  • shinglyu optimized dirty flags for viewport percentage units based on viewport changes
  • stshine blockified some children of flex containers, continuing the work to flesh out flexbox support
  • creativcoder integrated a service worker manager thread
  • izgzhen fixed Blob type-strings
  • ajeffrey integrated logging with crash reporting
  • malisas allowed using ByteString types in WebIDL unions
  • emilio ensured that transitions and animations can be tested programmatically

New Contributors

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!


See the aforementioned demos from the team at the University of Szeged.

The Rust Programming Language BlogThe 2016 Rust Conference Lineup

The Rust Community is holding three major conferences in the near future, and we wanted to give a shout-out to each, now that all of the lineups are fully announced.

Sept 9-10: RustConf

RustConf is a two-day event held in Portland, OR, USA on September 9-10. The first day offers tutorials on Rust given directly by members of the Rust core team, ranging from absolute basics to advanced ownership techniques. The second day is the main event, with talks at every level of expertise, covering core Rust concepts and design patterns, production use of Rust, reflections on the RFC process, and systems programming in general. We offer scholarships for those who would otherwise find it difficult to attend. Join us in lovely Portland and hear about the latest developments in the Rust world!

Follow us on Twitter @rustconf.

Sept 17-18: Rust Fest

Join us at RustFest, Europe’s first conference dedicated to the Rust programming language. Over the weekend of 17-18th September we’ll gather in Berlin to talk Rust, its ecosystem and community. All day Saturday will have talks, with topics ranging from hardware and testing, through concurrency and disassemblers, all the way to important topics like community, learning and empathy. Sunday has a focus on learning and connecting, either at one of the many workshops we are hosting or in the central meet-n-greet-n-hack area provided.

Thanks to the many awesome sponsors, we are able to offer affordable tickets to go on sale this week, with an optional combo—including both Viewsource and RustFest. Keep an eye on http://www.rustfest.eu/, get all the updates on the blog and don’t forget to follow us on Twitter @rustfest

Oct 27-28: Rust Belt Rust

Rust Belt Rust is a two-day conference in Pittsburgh, PA, USA on October 27 and 28, 2016, and people with any level of Rust experience are encouraged to attend. The first day of the conference has a wide variety of interactive workshops to choose from, covering topics like an introduction to Rust, testing, code design, and implementing operating systems in Rust. The second day is a single track of talks covering topics like documentation, using Rust with other languages, and efficient data structures. Both days are included in the $150 ticket! Come learn Rust in the Rust Belt, and see how we’ve been transforming the region from an economy built on manufacturing to an economy built on technology.

Follow us on Twitter @rustbeltrust.

Karl Dubost[worklog] Edition 028. appearance on Enoshima

Each time I "set up my office" (moved to a new place for the next 3 months, construction work on the main home), I'm mesmerized by how easy it is to set up a work environment. Laptop, wifi and electricity are the main things needed to start. A table and a chair are useful but non-essential. And eventually an additional screen to have more working surface to be comfortable. Basically in 5 minutes we are ready to work. And that's one advantage of our line of work. How long does it take before you can start working?

Working with a view on Enoshima for the next 3 months. Tune of the week: Omoide no Enoshima.

Webcompat Life

Progress this week:

Today: 2016-07-25T06:21:33.702789
296 open issues
needsinfo       5
needsdiagnosis  71
needscontact    14
contactready    34
sitewait        164

You are welcome to participate

Webcompat issues

(a selection of some of the bugs worked on this week).

  • From time to time, people report usability issues which are more or less cross-browser. They basically hinder every browser. It's out of scope for the Web Compatibility project, but hints at something interesting about browsers and users' perception. Often, I wonder if browsers should do more than just support legacy Web sites (aka making them display), and also adjust the content to a more palatable experience. Somewhat in the way Reader mode works on user request: a beautify button for legacy content.
  • Google Image Search and black arrow. A kind of cubist arrow for Firefox. Modern design?
  • I opened an issue on Tracking Protection and Webcompat. Adam pointed me this morning to a project on moving tracking protection to a Web extension.
  • Because we have more issues on Firefox Desktop and Firefox Android, we focus our energy there, so we need someone in the community to focus on Firefox OS issues.
  • When I test Web sites on Firefox Android, I usually do it through remote debugging in WebIDE, and instead of typing a long URI on the device itself, I go to the console and paste the address I want: window.location = 'http://example.com/long/path/with/strange/356374389dgjlkj36s'.
  • Starting to test a bit more in depth what appearance means in different browsers. Specifically to determine what is needed for Web compatibility and/or Web standards.
  • a WONTFIX which is good news. Bug 1231829 - Implement -webkit-border-image quirks for compatibility. It means it has been fixed by the site owners.
  • On this Find my phone issue on Google search, the wrong order of CSS properties creates a layout issue where the XUL -moz-box was finally interpreted, but it triggered a good question from Xidorn: should we expose the XUL display values to Web content? Add to that the fact that some properties in the CSS never existed.
  • hangout doesn't work the same way for Chrome and Firefox. There's something happening either on the Chrome side or the servers, which creates the right path of actions. I haven't determined it yet.

WebCompat.com dev

Reading List

Follow Your Nose


  • Document how to write tests on webcompat.com using test fixtures.
  • ToWrite: Amazon prefetching resources with <object> for Firefox only.


Patrick ClokeWindows Mobile (or Windows Phone) and FastMail

I’ve been a big fan of Windows Phone (now Windows Mobile) for a while and have had a few phones across versions 7, 8, and now 10. A while ago I switched to FastMail as my e-mail provider [1], but had been stuck using Google as my calendar provider still (and my contacts were on my Windows Live account). I had a desire to move all these onto a single account, but Windows 10 Mobile only officially supports e-mail from arbitrary providers. Calendar and contacts are limited to a few special providers.

Below I’ve outlined how I’ve gotten all three services (email, contacts, and calendar) from my FastMail account onto my Windows Mobile device.


Email is the easy one; FastMail even has a guide to setting up email on Windows Phone. That guide does not handle sending email with a custom domain name, so if you don’t have that situation, probably just use the FastMail guide.

  1. Add a new account, choose “other account”.
  2. Type in your email address (e.g. you@yourcustomdomain.com) and password.
  3. It will complain about being unable to find proper account settings. Click “try again”.
  4. It will complain again, but this time give you an option for “advanced”; click it.
  5. Choose “Internet email account”.
  6. Enter any “Account name” and “Your name” that you want.
  7. Choose “IMAP4” as the “Account type”.
  8. Change the incoming mail server to mail.messagingengine.com.
  9. Change the username to your FastMail username (e.g. you@fastmail.com).
  10. Change the outgoing mail server to mail.messagingengine.com.

Now when you send email it should show up properly as you@yourcustomdomain.com, but be sent via FastMail’s servers!


FastMail added support for CardDAV last year and Windows Phone added support back in 2013, so why is this hard? Well… it turns out that there isn’t a way to create a standalone CardDAV account on Windows Mobile; it’s just used behind certain special account types. Luckily, there is a forum post about hooking up CardDAV via a hack. The steps are reproduced below:

  1. Add a new account, choose “iCloud”.
  2. Type in your FastMail username, but add +Default before the @ (e.g. you+Default@fastmail.com), note that this isn’t anything special, just the scheme FastMail uses for CardDAV usernames.
  3. Put in your password. [2]
  4. Click “sign in”, it will fail.
  5. Go back into the account settings (click “Manage”) and modify the advanced settings (“Change mailbox sync settings”). Choose manually for when to download new email. Disable syncing of email and calendar.
  6. Go to “Advanced account settings”. Change the “Incoming email server”, “Outgoing (SMTP) email server” and “Calendar server (CalDAV)” to localhost. [3]
  7. Change “Contacts server (CardDAV)” to carddav.messagingengine.com:443/dav/addressbooks/user/you@fastmail.com/Default, changing you@fastmail.com to your FastMail username.
  8. Click “Done”!

Your contacts should eventually appear in your address book! I couldn’t figure out a way to force my phone to sync contacts, but they appeared fairly quickly.


FastMail added support for CalDAV back in the beginning of 2014 [4]. These steps are almost identical to the Contacts section above, but using information from the guide for setting up Calendar.app.

  1. Add a new account, choose “iCloud”.
  2. Type in your FastMail username (e.g. you@fastmail.com).
  3. Put in your password.
  4. Click “sign in”, it will fail.
  5. Go back into the account settings (click “Manage”) and modify the advanced settings (“Change mailbox sync settings”). Choose manually for when to download new email. Disable syncing of email and contacts.
  6. Go to “Advanced account settings”. Change the “Incoming email server”, “Outgoing (SMTP) email server” and “Contacts server (CardDAV)” to localhost.
  7. Change “Calendar server (CalDAV)” to caldav.messagingengine.com/dav/principals/user/you@fastmail.com/, changing you@fastmail.com to your FastMail username.
  8. Click “Done”!

My default calendar appeared very quickly, but additional calendars took a bit to sync onto my phone.

Good luck and let me know if there are any errors, easier ways, or other tricks to getting the most of FastMail on a Windows Mobile device!

[1]There are a variety of reasons why I switched: I had recently bought a domain name to get better control over my online presence (email, website, etc.), and I was also tired of my email being used to serve me advertisements, plus various other issues with free webmail. I highly recommend FastMail: they have awesome security and privacy policies, amazing support, give back (a lot) to open source, and a whole slew of other things.
[2]I put a dummy one in and then changed it after I updated the servers in step 6. This was to not send my password to iCloud servers. The password is hopefully encrypted and hashed, but I don’t know for sure.
[3]We’re just ensuring that our credentials for these other services will not hit Apple servers for any reason.
[4]That article talks about beta.fastmail.fm, but this is now available on the production FastMail servers too!

Daniel StenbergHTTP Workshop 2016, day -1

The HTTP Workshop 2016 will take place in Stockholm starting tomorrow, Monday, as I’ve mentioned before. Today we’ll start off slowly by having a few pre-workshop drinks and saying hello to old and new friends.

I did a casual count, and out of the 40 attendees coming, I believe slightly less than half are newcomers who didn’t attend the workshop last year. We’ll see browser people, more independent HTTP implementers, CDN representatives, server and intermediary developers, as well as some friends from large HTTP operators/sites. I personally view my attendance as being primarily with my curl hat on rather than my Firefox one. Firmly standing in the client-side trenches anyway.

Visitors to Stockholm these days are also lucky enough to arrive when the weather is possibly as good as it gets here: the warmest period of the summer so far, with lots of sun and really long, bright summer days.

News this year includes the @http_workshop twitter account. If you have questions or concerns for HTTP workshoppers, do send them that way and they might get addressed or at least noticed.

I’ll try to take notes and post summaries of each workshop day here. Of course I will fully respect our conference rules about what to reveal or not.


Cameron KaiserTenFourFox 45 is more of a thing

Since the initial liftoff of TenFourFox 45 earlier this week, much progress has been made and this blog post, ceremonially, is being typed in it. I ticked off most of the basic tests including printing, YouTube, social media will eat itself, webcam support, HTML5 audio/video, canvas animations, font support, forums, maps, Gmail, blogging and the major UI components and fixed a number of critical bugs and assertions, and now the browser is basically usable and able to function usefully. Still left to do is collecting the TenFourFox-specific strings into their own DTD for the localizers to translate (which will include the future features I intend to add during the feature parity phase) and porting our MP3 audio support forward, and then once that's working compiling some opt builds and testing the G5 JavaScript JIT pathways and the AltiVec acceleration code. After that it'll finally be time for the first beta once I'm confident enough to start dogfooding it myself. We're a little behind on the beta cycle, but I'm hoping to have 45 beta 1 ready shortly after the release of 38.10 on August 2nd (the final 38 release, barring a serious showstopper with 45), a second beta around the three week mark, and 45 final ready for general use by the next scheduled release on September 13th.

A couple folks have asked if there will still be a G3 version and I am pleased to announce the answer will very likely be yes; the JavaScript JIT in 45 does not mandate SIMD features in the host CPU, so I don't see any technical reason why not (for that matter, the debug build I'm typing this on isn't AltiVec accelerated either). Still, if you're bravely rocking a Yosemite in 2016 you might want to think about a G4 for that ZIF socket.

I've been slack on some other general interest posts such as the Power Mac security rollup and the state of the user base, but I intend to write them when 45 gets a little more stabilized since there have been some recurring requests from a few of you. Watch for those soon also.

Support.Mozilla.OrgSUMO Show & Tell: How I Got Involved With Mozilla

Hey SUMO Nation!

During the Work Week in London we had the utmost pleasure of hanging out with some of you (we’re still a bit sad about not everyone making it… and that we couldn’t organize a meetup for everyone contributing to everything around Mozilla).

Among the numerous sessions, working groups, presentations, and demos we also had a SUMO Show & Tell – a story-telling session where everyone could showcase one cool thing they think everyone should know about.

I have asked those who presented to help me share their awesome stories with everyone else – and here you go, with the second one presented by Andrew, a jack-of-all-trades and Bugzilla tamer.

Take a look below and relive the origin story of a great Mozillian – someone just like you!

It all started… with an issue that I had with Firefox on my desktop computer running Windows XP, back in 2011. Firefox wouldn’t stop crashing! I then discovered the support site for Firefox. There I found help with my issue through support articles, and at the same time, I was also intrigued by the ability to help other users through the very same site.

As I looked into the available opportunities to contribute to the support team, I landed upon live chat. Live chat was a 1-on-1 chat to help out users with the issues they had. Unfortunately, after I joined the team, the live chat was placed on hiatus. It was recommended that I move on to the forums and knowledge base, because rather than helping just one user, with only them benefiting, on the forums I could help many more people through a single suggestion. For some this worked well, and for others it didn’t, because we weren’t taking care of the user personally (like on the chat).

It definitely took some time for me to adjust to this new setting, as things were (and are) handled differently on the forum and on the knowledge base. Users on the forum sometimes do respond immediately, but most of the time they respond later, and some actually don’t respond at all. This is one of the differences between helping out through live chat and through the forums.

The knowledge base on the other hand, can be really complex. There is markup being used to present text in a different way to different users. We must be as clear and precise as possible when writing the article, since although we may know really well what we are talking about, the article reader (usually a user in need of helpful information) may not. It is definitely challenging for some Mozillians to get involved with writing, but once you do, you get the hang of it and truly enjoy it.

From there on, I kept contributing to the forum and knowledge base, but I also went to find out how I could contribute to other areas of Mozilla. I landed upon triaging bugs within Mozilla sites thanks to the help of Liz Henry and Tyler Downer. Furthermore, as Firefox OS rolled out, I started to provide support to the users, write more articles and file bugs in regards to the OS.

As things moved forward so did life – at the moment I am contributing through the Social Support team. Contributing through Social helps our users on social media realise that we are listening to them and that their comments and woes are not falling on deaf ears. We respond to all types of concerns, be they praises or complaints. Helping users on Twitter while being restricted to 140 characters is difficult, whereas on Facebook we can provide a more detailed explanation and response. With Social Support, a single response from us sometimes reaches only a single person – other times it can reach thousands through re-sharing.

Social media makes it easy to identify issues, crises, and hot topics – it is where people nowadays go to seek assistance, rant, and share their experiences. Also, as posts and tweets can spread easily on social media, it is a double-edged sword: if something positive is spreading, we hope it spreads more. However, if something negative is spreading, we must contain it, and identify and address the root cause of the issue. The bottom line is: we must help our users while keeping everything in balance and being constantly vigilant.

In 2013, I was very thankful to be able to attend the Summit that was held in 3 places across the world. I was invited to Toronto, where I held a session called “What does ‘Mozillian’ mean?” In that session, we discussed what the term “Mozillian” meant, who was included or not included, and what roles and capabilities were necessary to classify an individual as a Mozillian. At the end of the session, we touched base via email to finalize our thoughts and gather the necessary information to pass along to others. Although we made some progress, defining who a Mozillian is, who can (or can’t) be one, and setting specific criteria is somewhat impossible. We must be accepting of those who come and go, those with different backgrounds, personal preferences regarding getting things done, and (sometimes highly) different opinions. All that said, we are a huge family – a huge Mozilla family.

Thank you Andrew for sharing your story with us. I personally appreciate your relaxed and flexible perspective on (sometimes inevitable) changes and challenges we all face when trying to make Mozilla work for the users of the web.

Here’s to many more great chances for you to rock the (helpful, but not only) web with Mozilla and others!

David BurnsWebDriver F2F - July 2016

Last week saw the latest WebDriver F2F to work on the specification. We held the meeting at the Microsoft campus in Redmond, Washington.

The agenda for the meeting was placed, as usual, on the W3 Wiki. We had quite a lot to discuss and, as always, it was a very productive meeting.

The meeting notes are available for Wednesday and Thursday. The most notable items are:

  • Finalising Actions in the specification
  • newSession
  • Certificate handling on navigation
  • Specification tests

We also welcomed Apple to their first WG meeting. You may have missed it, but there is going to be a Safari Driver built into macOS.

Honza BambasIllusion of atomic reference counting

Most people believe that having an atomic reference counter makes it safe to use RefPtr on multiple threads without any further synchronization.  The opposite may be true, though!

Imagine simple code using our commonly used helper classes: RefPtr<> and an object Type with a ThreadSafeAutoRefCnt reference counter and standard AddRef and Release implementations.

That sounds safe, but there is a glitch most people may not realize.  See an example where one piece of code is doing this, with no additional locks involved:

RefPtr<Type> local = mMember; // mMember is RefPtr<Type>, holding an object

And another piece of code, presumably on a different thread:

mMember = new Type(); // mMember's value is rewritten with a new object

Usually, people believe this is perfectly safe.  But it’s far from it.

Just break this down into the actual atomic operations and put the two threads side by side:

Thread 1

local.value = mMember.value;
/* context switch */

Thread 2

Type* temporary = new Type();
temporary->AddRef();
Type* old = mMember.value;
mMember.value = temporary;
old->Release(); // if this was the last reference, the object is destroyed here
/* context switch */

Thread 1

local.value->AddRef(); // may run on an already dead object

Similar for clearing a member (or a global, when we are here) while some other thread may try to grab a reference to it:

RefPtr<Type> service = sService;
if (!service) {
  return; // service being null is our 'after shutdown' flag
}
And another thread doing, usually during shutdown:

sService = nullptr; // while sService was holding an object

And here is what actually happens:

Thread 1

local.value = sService.value;
/* context switch */

Thread 2

Type* old = sService.value;
sService.value = nullptr;
old->Release(); // the last reference may be released here
/* context switch */

Thread 1

local.value->AddRef(); // AddRef on a possibly dead object

And where is the problem?  Clearly, if the Release() call on the second thread is the last one on the object, the AddRef() on the first thread will do its job on a dying or already dead object.  The only correct way is to have both the in- and out-assignments protected by a mutex, or to ensure that nobody can be trying to grab a reference from a globally accessed RefPtr while it’s being finally released or re-assigned. The latter may not always be easy or even possible.
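The dangerous window can be replayed deterministically in a small sketch. Here is a Python rendition of the interleaving, where Obj, add_ref, release, and the destroyed flag are illustrative stand-ins for the C++ machinery, not real Mozilla APIs:

```python
class Obj:
    """Stand-in for a refcounted Type; no locking beyond the counter itself."""
    def __init__(self):
        self.refcnt = 1        # the creating reference (held by the member)
        self.destroyed = False

    def add_ref(self):
        if self.destroyed:
            raise RuntimeError("AddRef on a dead object")
        self.refcnt += 1

    def release(self):
        self.refcnt -= 1
        if self.refcnt == 0:
            self.destroyed = True   # the destructor would run here

member = Obj()       # mMember holds the only reference

# Thread 1: copies the raw pointer value...
local = member
# ...and is preempted before it can AddRef.

# Thread 2: swaps in a new object and releases the old one.
old = member
member = Obj()
old.release()        # refcount hits zero; the old object is destroyed

# Thread 1 resumes: its AddRef now targets a dead object.
try:
    local.add_ref()
except RuntimeError as e:
    print(e)
```

Protecting both assignments with a mutex closes the window, because the pointer copy plus AddRef then happens as one step relative to the reassignment.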

Anyway, if somebody has a suggestion for how to solve this universally without using an additional lock, I would be really interested!

The post Illusion of atomic reference counting appeared first on mayhemer's blog.

Gervase MarkhamSamsung’s L-ish Model Numbers

A slow hand clap for Samsung, who have managed to create versions of the S4 Mini phone with model numbers (among others):

  • GT-i9195
  • GT-i9195L (big-ell)
  • GT-i9195i (small-eye)
  • GT-i9195l (small-ell)

And of course, the small-ell variant, as well as being case-confusable with the big-ell variant and visually confusable with the small-eye variant if it’s written with a capital I (as, say, here), is in fact an entirely different phone with a different CPU, and doesn’t support the same aftermarket firmware images that all of the other variants do.

See this post for the terrible details.

Cameron KaiserTenFourFox 45 is a thing

The browser starts. Lots of problems but it boots. More later.

Armen ZambranoMozci and pulse actions contributions opportunities

We've recently finished a season of feature development adding TaskCluster support to add new jobs to Treeherder on pulse_actions.

I'm now looking at what optimizations or features are left to complete. If you would like to contribute feel free to let me know.

Here's some highligthed work (based on pulse_action issues and bugs):
This will help us save money in Heroku since using Buildapi + buildjson files is memory hungry and requires us to use bigger Heroku nodes.
This is important to help us change the behaviour of the Heroku app without having to commit any code. I've used this in the past to modify the logging level when debugging an issue.

This is also useful if we want to have different pipelines in Heroku. 
Having Heroku pipelines helps us to test different versions of the software.
This is useful if we want to have a version running from 'master' against the staging version of Treeherder.
It would also help contributors to have a version of their pull requests running live.
We don't have any tests running. We need to determine how to run a minimum set of tests to have some confidence in the product.

This needs integration tests of Pulse messages.
The comment in the bug is rather accurate and it shows that there are many small things that need fixing.
Manual backfilling uses Buildapi to schedule jobs. If we switched to scheduling via TaskCluster/Buildbot-bridge we would get better results since we can guarantee proper scheduling of a build + associated dependent jobs. Buildapi does not give us this guarantee. This is mainly useful when backfilling PGO test and talos jobs.

If instead you're interested in contributing to mozci, you can have a look at the issues.

Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Support.Mozilla.OrgWhat’s Up with SUMO – 21st July

Hello, SUMO Nation!

Chances are you have noticed that we had some weird temporal issues, possibly caused by a glitch in the spacetime continuum. I don’t think we can pin the blame on the latest incarnation of Dr Who, but you never know… Let’s see what the past of the future brings then, shall we?

Welcome, new contributors!

If you just joined us, don’t hesitate – come over and say “hi” in the forums!

Contributors of the week

Don’t forget that if you are new to SUMO and someone helped you get started in a nice way you can nominate them for the Buddy of the Month!

Most recent SUMO Community meeting

The next SUMO Community meeting

  • …is happening on the 27th of July!
  • If you want to add a discussion topic to the upcoming meeting agenda:
    • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Wednesday (this will make it easier to have an efficient meeting).
    • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).
    • If you can, please attend the meeting in person (or via IRC), so we can follow up on your discussion topic during the meeting with your feedback.



Support Forum

Knowledge Base & L10n

  • If you’re an active localizer in one of the top 20+ locales, expect a list of high priority articles coming your way within the next 24 hours. Please make sure that they are localized as soon as possible – our users rely on your awesomeness!
  • Final reminder: remember the discussion about the frequency & necessity of KB updates and l10n notifications? We’re trying to address this for KB editors and localizers alike. Give us your feedback!
  • Reminder: L10n hackathons everywhere! Find your people and get organized! If you have questions about joining, contact your global locale team.


  • for Android
    • Version 48 is still on track – release in early August.
  • for Desktop
    • Version 48 is still on track – release in early August.

Now that we’re safely out of the dangerous vortex of a spacetime continuum loop, I can only wish you a great weekend. Take it easy and keep rocking the helpful web!

Mozilla Addons BlogNew WebExtensions Guides and How-tos on MDN

The official launch of WebExtensions is happening in Firefox 48, but much of what you need is already supported in Firefox and AMO (addons.mozilla.org). The best place to get started with WebExtensions is MDN, where you can find a trove of helpful information. I’d like to highlight a couple of recent additions that you might find useful:

Thank you to Will Bamberg for doing the bulk of this work. Remember that MDN is a community wiki, so anyone can help!

Air MozillaWeb QA Team Meeting, 21 Jul 2016

Web QA Team Meeting They say a Mozilla Web QA team member is the most fearless creature in the world. They say their jaws are powerful enough to crush...

Air MozillaReps weekly, 21 Jul 2016

Reps weekly This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Mozilla Reps CommunityRep of the Month – June 2016

Please join us in congratulating Alex Lakatos, Rep of the Month for June 2016!

Alex is a Mozilla Rep based in London, Great Britain, originally from Romania. He is also a Mozilla TechSpeaker, giving talks all around Europe.

In the last 2 months Alex held several technical talks all over Europe (CodeCamp Cluj, OSCAL in Albania, DevTalks in Bucharest and DevSum in Sweden, just to name a few) to promote Mozilla’s mission and the Open Web. With his enthusiasm for tech he is a crucial force in promoting our mission and educating developers all around Europe about new Web technologies. He covered both the transition we are making from Firefox OS to a more innovative area with Connected Devices, and also changes in Firefox and why you should consider the improvements made on the DevTools side.

Please don’t forget to congratulate him on Discourse!

Adam StevensonCompatibility Screenshots

I’ve been trying to learn more about how screenshots can help us identify compatibility issues in Firefox. It started with the question:

How does Firefox compare to Chrome in the top 100 websites?

Pretty good, it turns out, on the front pages at least; you can view them yourself [Some images are offensive and NSFW]. You can also check out the same list of sites, but comparing Firefox to Firefox with tracking protection. I made some scripts to capture the screens in OSX. They make use of the screencapture utility and this other cool little utility called GetWindowID. GetWindowID determines which window ID is associated with a program on the screen, Firefox or Chrome in this case.

Let’s look at how these utilities work together.

Running the GetWindowID command requires that we specify which program we are looking for and which tab is active as well. I’ve made sure that my version of Firefox starts up with the Mozilla Firefox Start Page. If we execute this command:

./GetWindowID "Firefox" "Mozilla Firefox Start Page";

It returns a numeric value like:

1072

This is great because the screencapture utility needs to know which window ID to look at.
So let’s take that same GetWindowID command from earlier and store the result into a variable called ‘gcwindow’.

gcwindow=$(./GetWindowID "Firefox" "Mozilla Firefox Start Page");

Now gcwindow has the value 1072 from before. Let’s feed that into the screencapture utility:

screencapture -t jpg -T 40 -l $gcwindow -x ~/Desktop/screens/firefoxtest/$site.jpg;

When this runs, the program will wait 40 seconds (from the "-T 40" parameter) and then take a screenshot of window ID 1072, which is our Firefox instance. The JPG file will be stored in a folder on my desktop under screens/firefoxtest. The rest of the script loops through each website name that we’ve entered, opening a new browser window, opening the website we want to capture, and killing the browser process after each screenshot, with some sleep commands in between that give the computer time to execute each step.
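As a sketch of how such a per-site invocation can be assembled (the helper name, folder, and defaults below are my own illustration; the actual scripts are shell scripts):

```python
def capture_cmd(window_id, site, out_dir="screens/firefoxtest", delay=40):
    """Build the screencapture argument list for one site: wait `delay`
    seconds, then capture window `window_id` into <out_dir>/<site>.jpg."""
    return ["screencapture", "-t", "jpg", "-T", str(delay),
            "-l", str(window_id), "-x", "%s/%s.jpg" % (out_dir, site)]

# The resulting list can be handed to subprocess.call() in a loop that
# opens the browser, captures the screen, and kills the process per site.
```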

There are some browser preferences and considerations that you will want to be aware of before running these scripts.

Why do all this in OSX? Because I like to work on a Mac, I guess. OK, I don’t have a good reason, but if you want to make it work on Linux or Docker or something cool, that’d be super sweet. The other thing to keep in mind is that I’m looking at viewport screenshots right now; full page would be nice, but we’ll get there.

So the side by side comparison of popular sites is pretty useful but looking at things is a lot of work. It would be cool if we could automate some or all of that looking, right? Luckily there are image comparison tools that can help with this. I decided to try out Yahoo’s blink-diff tool which is built using node.js.

First off, only PNGs are supported with this tool, but that’s easy to change using the screencapture command line tool.

So we use 'screencapture -t png' instead of 'screencapture -t jpg'.

Let’s go through setting this up for a single test. You’ll need to have node.js installed first.
We need to create a new folder; the name isn’t important.

mkdir onetime-diff

Then download this javascript file from Github and put it in that folder. Now let’s initialize our project:

npm init

And just accept all the defaults. Next let’s install the dependencies:

npm install blink-diff
npm install pngjs-image

Great, it’s ready to run now. The index.js file we downloaded looks for two files in the same folder called firefox.png and chrome.png and will generate a file called output.png. If you need a couple files to test with:


Note that if you provide your own PNG files, you may need to adjust the cropping parameters. I’ve configured the script to work best for Firefox and Chrome screenshots captured on a retina display; if you aren’t using a retina display, divide those numbers by 2. You can see here y:160 and y:144; this is cropping out the top portion of the screenshot where the browser’s “chrome” is.

cropImageA: { x:0, y:160, width:0, height:0 }, // Firefox
cropImageB: { x:0, y:144, width:0, height:0 }, // Chrome

Once you’re ready to run the test, execute:

node index.js

After a minute, it should generate an output.png file that looks like this and the script will return a result to the command prompt:

Found 1116908 differences.

So this is a good start, we have an image comparison program and an automated screenshot utility. To make it more useful I created another script that combines these together. On a high level it works like this:

First site > Screenshot Firefox > Screenshot Chrome > Compare images in background process > Next Site...

It has the same dependencies as before, but now we run it like this:


After giving this a few runs and playing with the settings, I started to see some issues.

  • Advertisements placed in different positions, sizes, style or even amount
  • Regional site redirects
  • Different home page, providing a ‘fresh look’ or they are A/B testing
  • Site surveys or other pop ups
  • Large image sliders
  • Random overlay pop up ads
  • Rotating background images
  • Very slow process when using one computer

We want each site to have a decent amount of time to load; I normally use between 30 and 40 seconds. But that adds up over 1000 or more sites. I decided to hack something basic together to allow multiple computers in my house to split the load. It helps, but it would be much better to have this running on Linux virtual machines or Docker containers.

So what’s next?

  • More sample runs to find a decent set of parameters for the baseline
  • Identifying in the top 1000 sites, which ones will continue to fail
  • Can we set higher thresholds and still detect when something breaks?
  • Can the tool ignore areas that are constantly changing?
  • Get the results out in the open for others to look at

If any of this interests you and want to get involved, I’d love to hear from you. Or if you have advice on how to make this better, please reach out as well.

Matjaž HorvatImproving in-page localization in Pontoon

We’re improving the way in-page localization works in Pontoon by dropping a feature instead of introducing a new one. Translating text on the web page itself using the contentEditable attribute has been turned off.

That means the actual translation (typing) always takes place in the translation editor, which gives you access to all the relevant information you need for the translation.

The sidebar is always visible, allowing you to select strings from the list and then translate them. Additionally, you can still use the Inspector-like tool to select any localizable string on the page, which will then open in the translation editor in the sidebar to be translated.

Translation within the web page has turned out to be suboptimal for various reasons:

  • The original string is not always presented unambiguously, e.g. if it contains markup,
  • Additional string details like comments and file paths are not displayed,
  • Suggestions from history, machinery and other locales are not available,
  • Only the first plural form can be translated,
  • It’s hard to control markup or new lines on various sites if they’re part of the string.

Mozilla Addons BlogCompleting Firefox Accounts on AMO

In February we rolled out Firefox Accounts on addons.mozilla.org (AMO). That first phase created a migration flow from old AMO accounts over to Firefox Accounts. Since then, 84% of developers who have logged in have transitioned over to a Firefox Account.

The next step is to remove the ability to log in using an old AMO account. Once this is complete, the only way to log in to AMO is by using Firefox Accounts.

If you have an old account on AMO and have not gone through the migration flow, you can still access your account if the email you use to log in through Firefox Accounts is the same as the one previously registered on AMO.

We expect that the removal of old logins will be completed in a couple of weeks, unless any unforeseen problems occur.

Frequently asked questions

What happens to the add-ons I develop when I convert to a new Firefox Account?

All the add-ons are accessible to the new Firefox Account.

Why do I want a Firefox Account?

Firefox Accounts is the identity system that is used to synchronize Firefox across multiple devices. Many Firefox products and services will soon begin migrating over, simplifying your sign-in process and making it easier for you to manage all your accounts.

Where do I change my password?

Once you have a Firefox Account, you can go to accounts.firefox.com, sign in, and click on Password.

If you have forgotten your current password:

  1. Go to the AMO login page
  2. Click on I forgot my password
  3. Proceed to reset the password

QMOFirefox 49.0 Aurora Testday, July 22nd

Hello Mozillians,

Good news! We are having another testday for you 😀 This time we will take a swing at Firefox 49.0 Aurora, this Friday, 22nd of July.  The main focus during the testing will be around Context Menu, PDF Viewer and Browser Customization. Check out the detailed instructions via this etherpad.

No previous testing experience is required, so feel free to join us on #qa IRC channel where our moderators will offer you guidance and answer your questions.

I know this is short notice but we hope you will join us in the process of making Firefox a better browser. See you on Friday!

Dustin J. MitchellRecovering from TaskWarrior Corruption

I use TaskWarrior along with TaskWarrior for Android to organize my life. I use FreeCinc to synchronize all of my desktops, VPS, and phone, using a crontask. Most of the time, it works pretty well.

FreeCinc Fail

However, yesterday, all of FreeCinc’s keys expired. There’s a big red warning on the home page instructing users to download new keys. Since my syncs operate on a crontask, I didn’t notice this until I discovered that tasks I remembered modifying in one place did not appear in another. By that time, I had modified tasks everywhere – a few things to buy on my phone, some work stuff on the laptop, some more work stuff on the VPS, and some personal stuff on the desktop.

So, downloading new keys is easy. However, TaskWarrior doesn’t magically take four different sets of tasks and combine them into a single coherent set just by syncing to a server. No, in fact, since there are no changes to sync, it does nothing – it just leaves the different sets of tasks in place on different machines. So basically everything I had modified in 24 hours, across four machines, was now unsynchronized. And I use this to run my life, so it was probably 100 or so changes.

What Was I Doing Again?

Here’s how I fixed this:

I copied pending.data and completed.data from all four hosts onto a single host. These files are in a pretty simple one-task-per-line format, with a uuid and modification timestamp embedded in each line. The rough approach was to take all of the tasks in all of these files and select the most recent instance of each uuid. There’s a little bit of extra complication to handle whether a task is completed or not. I used the following script to do this calculation:

import re

uuid_re = re.compile(r'uuid:"([^"]*)"')
modified_re = re.compile(r'modified:"([0-9]*)"')

def read(filename):
	with open(filename) as f:
		for l in f:
			uuid = uuid_re.search(l).group(1)
			try:
				modified = modified_re.search(l).group(1)
			except AttributeError:
				modified = 0
			yield uuid, int(modified), l

def add_to(uuid, modified, completed, line, coll):
	if uuid in coll:
		ex_modified, ex_completed, _ = coll[uuid]
		# keep the existing entry if it is newer..
		if ex_modified >= modified:
			return
		# ..or if it is completed and the new one is not
		if ex_completed and not completed:
			return
	coll[uuid] = (modified, completed, line)

by_uuid = {}
for c, fn in [
	(True, "rama-completed.data"),
	(True, "hopper-completed.data"),
	(True, "dorp-completed.data"),
	(True, "android-completed.data"),
	(False, "rama-pending.data"),
	(False, "hopper-pending.data"),
	(False, "android-pending.data"),
]:
	for uuid, modified, line in read(fn):
		add_to(uuid, modified, c, line, by_uuid)

with open("completed-result.data", "w") as f:
	for _, completed, line in by_uuid.itervalues():
		if completed:
			f.write(line)

with open("pending-result.data", "w") as f:
	for _, completed, line in by_uuid.itervalues():
		if not completed:
			f.write(line)
As it turns out, I might have simplified this a little by looking at the status field: completed and deleted tasks are in completed.data, and the rest are in pending.data.
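That status-based split could look something like this (a hypothetical helper written after the fact, not part of the script above):

```python
import re

status_re = re.compile(r'status:"([^"]*)"')

def target_file(line):
    """Route a task line by its status field: completed and deleted tasks
    belong in completed.data, everything else in pending.data."""
    m = status_re.search(line)
    status = m.group(1) if m else "pending"
    return "completed.data" if status in ("completed", "deleted") else "pending.data"
```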

Once I was happy with the results (approximately the right number of pending tasks, basically), I copied them into ~/.task on one machine, and ran some task queries to check everything looked good (looking for tasks I recalled adding on various machines). Satisfied with this, I downloaded yet another set of keys from FreeCinc and installed them on that same machine. I deleted ~/.task/backlog.data on that machine (just in case) and ran task sync init which appeared to upload all pending tasks. Great!

Next, I deleted ~/.task/*.data on all of the other machines, installed the new FreeCinc keys, and ran task sync. On these machines, it happily downloaded the pending tasks. And we’re back in business!

I chose not to just copy ~/.task/*.data between systems because I run slightly different versions of TaskWarrior on different systems, so the data format might be different. I might have used task export and task import with some success, but I didn’t think of it in time.

Julia ValleraMozilla Clubs end of year goals

Mozilla Clubs are excited to share our goals for the rest of 2016. We’ve come a long way since the program’s launch in 2015. What lies ahead for us is exciting and challenging. Below is what we will be working on and information about how you can join in the fun.

Curious to learn more about Mozilla Clubs? Check out our website, facebook page, event gallery, and discussion forum.


Mozilla Club leaders came together in June 2016 at Mozilla all-hands. Photo by Randy Macdonald

Our process

In June 2016, eight Mozilla Club leaders came together in London, UK for Mozilla’s bi-annual All Hands gathering. They participated in many conversations, one of which was a 90 minute deep dive session to identify objectives for clubs over the next six months. During the session we brainstormed topics, ideated in pairs and had a group share out. In addition to informing our goals for the rest of 2016, this session gave club leaders the opportunity to learn more about each other’s work and regional challenges.

In July, we shared the results of our deep dive session more broadly during our monthly call for club leaders and internal clubs info session. This allowed us to gather more feedback and ultimately votes on what goals we should focus on for Mozilla Clubs between now and January 2017.

Here is the list of goals that resulted, why they are important to our work, and how we plan to approach them.

Six Month Goals

Curate and/or create new resources for running clubs offline
  • Why: We want to build and curate more web literacy curriculum that can be used without internet access so that club participants can learn offline.
  • How: We will make our current offline activities and curriculum easier to locate, curate new resources and build new ones.
Connect the community through a global gathering
  • Why: Club participants learn from each other and feel connected to a global community when they have the opportunity to see each other face-to-face.
  • How: We will draw from event models across Mozilla like global sprints, state of the Hive and Mozilla Festival to connect club participants (virtually and/or in person) to work on challenges, share experiences and exchange knowledge.
Continue to localize content and resources
  • Why: As we translate more curriculum, activities and club guides into languages other than English more people can access and learn from them.
  • How: We will work with Mozilla volunteers, staff and partners to build localization into the process of content creation and start with translating current activities and creating new location-specific resources.
Reward and recognize club leaders
  • Why: Club leaders need rewards and recognition for their work so that they feel empowered to grow and spread web literacy in their communities.
  • How: We will recognize club leaders for their work through a formal rewards process and develop an agreement policy to create more clarity around the responsibilities of being a club leader.
Strengthen clubs as an organizing model for Mozilla campaigns
  • Why: Mozilla Club participants should continue to have an active role in Mozilla campaigns like Maker Party, Copyright, Take back the Web, Encryption, etc.
  • How: We will leverage club calls, office hours, the discussion forum, etc. to get input from club participants as campaigns take shape and will share campaign related activities that can be incorporated into their offerings.
Connect club participants across Mozilla
  • Why: Mozilla program participants have a lot of expertise to share and they should be able to connect with each other easily and frequently.
  • How: Create opportunities for community members in Clubs, Hives, Open Science and Advocacy to share work with each other, get feedback, build networks and more.
Assess club activity
  • Why: It is important that we maintain an accurate and up-to-date list of active clubs so that we can provide support where it is needed most.
  • How: We will identify which clubs are active by holding individual meetings, checking in via email and reviewing the club event reporter.

Join in the fun!

Here are some ways you can contribute to our work over the next six months and beyond.

  1. Connect with a Mozilla Club in your area. Don’t see any clubs in your area? Apply to start your own!
  2. Help us translate one of our web literacy activities into your preferred language.
  3. Use our offline activities, tell us what you think and suggest new ones.
  4. Join our facebook group to get updates about upcoming events and campaigns.

Jen Kagandraggable min-vid, part 1

since merging john’s and my css PR, i’ve been digging into min-vid again. lots has changed! dave rewrote min-vid in react.js to make it easier for contributors to plug in.

why react.js? because we won’t have to write a thousand different platform checks anymore. for example, we used to have to trigger one set of behaviors if the platform was youtube.com and another set of behaviors if the platform was vimeo.com. this wasn’t scalable and it wasn’t very contributor-friendly. now, to add support for additional video-streaming platforms, contributors will just have to construct the URL to access the platform’s video files (hopefully via a well-documented API) and add the new URL-constructing code to min-vid’s /lib folder, in a file called get-[platform]-url.js.

so that’s awesome!

right now, i’m working on how to make the video panel draggable within the browser window so you’re not just limited to watching yr vids in the lower left-hand corner:

Screen Shot 2016-07-20 at 12.23.26 PM

john came up with a hacky idea for draggability where, on mouseDown, we’ll:

  1. create an invisible container the size of the entire browser window
  2. as long as mouseDown is true, drag the panel wherever we want within the invisible container
  3. onMouseUp, snap the container to be the size of the panel again.

the idea is to make dragging less glitchy by changing our dragging process so we’re no longer sending data back and forth between react, the add-on, and the window.

how to get started? jared broke down the task into smaller pieces for me. here’s the first piece:

Screen Shot 2016-07-20 at 12.25.41 PM

the function for setting up the panel size is in the index.js file. we determine how and when to panel.show() and panel.hide() based on the block of code below. the code tells the panel to listen for

  1. a message being emitted, and
  2. the content of that message, in this case from the controls.js file:

// require the Panel element from the Mozilla SDK
var panel = require('sdk/panel').Panel({
// set the panel content using the /default.html file
  contentURL: './default.html',
// set the panel functionality using the /controls.js file
  contentScriptFile: './controls.js',
// set the panel dimensions and position
  width: 320,
  height: 180,
  position: {
    bottom: 10,
    left: 10
  }
});
then, do different stuff based on what the message said.

// turn the panel port on to listen for a 'message' being emitted
panel.port.on('message', opts => {
// assign title to be whatever 'opts' were emitted
  var title = opts.action;

  if (title === 'send-to-tab') {
    const pageUrl = getPageUrl(opts.domain, opts.id);
    if (pageUrl) require('sdk/tabs').open(pageUrl);
    else console.error('could not parse page url for ', opts); // eslint-disable-line no-console
  } else if (title === 'close') {
    panel.hide();
  } else if (title === 'minimize') {
    panel.hide();
    panel.show({
      height: 40,
      position: {
        bottom: 0,
        left: 10
      }
    });
  } else if (title === 'maximize') {
    panel.hide();
    panel.show({
      height: 180,
      position: {
        bottom: 10,
        left: 10
      }
    });
  }
});

i added another little chunk in there which says: if the title is drag, hide the panel and then show it again with these new dimensions. the whole new block of code looks like this:

panel.port.on('message', opts => {
  var title = opts.action;

  if (title === 'send-to-tab') {
    const pageUrl = getPageUrl(opts.domain, opts.id);
    if (pageUrl) require('sdk/tabs').open(pageUrl);
    else console.error('could not parse page url for ', opts); // eslint-disable-line no-console
  } else if (title === 'close') {
    panel.hide();
  } else if (title === 'minimize') {
    panel.hide();
    panel.show({
      height: 40,
      position: {
        bottom: 0,
        left: 10
      }
    });
  } else if (title === 'maximize') {
    panel.hide();
    panel.show({
      height: 180,
      position: {
        bottom: 10,
        left: 10
      }
    });
  } else if (title === 'drag') {
    // hide the panel, then show it again with the new, larger dimensions
    panel.hide();
    panel.show({
      height: 360,
      width: 640,
      position: {
        bottom: 0,
        left: 0
      }
    });
  }
});
So we have some new instructions for the panel, but how do we trigger them? We trigger them by creating the drag function within the PlayerView component and then rendering it. This code says: on a custom event, send a message whose content is an object with the format {detail: obj} (in this case, {action: 'drag'}). Then, render the trigger as an <a> tag inside a <div>.

function sendToAddon(obj) {
  window.dispatchEvent(new CustomEvent('message', {detail: obj}));
}

const PlayerView = React.createClass({
  getInitialState: function() {
    return {showVolume: false, hovered: false};
  },
  drag: function() {
    sendToAddon({action: 'drag'});
  },
  render: function() {
    return (
      <div className={'right'}>
        <a onClick={this.drag} className={'drag'} />
      </div>
    );
  }
});
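One thing these snippets don't show is how the CustomEvent dispatched by sendToAddon actually reaches panel.port over in index.js: something in controls.js has to listen for the event and forward its detail over the SDK port. Here is a minimal sketch of that bridge; the stand-ins for window and self.port are mine (so the sketch runs anywhere), and the listener shape is an assumption rather than code from this post:

```javascript
// Hypothetical controls.js bridge (names assumed, not from the post).
// Tiny stand-ins for the page's window and the SDK's self.port, so this
// sketch is runnable outside Firefox:
const listeners = {};
const window = {
  addEventListener: (type, fn) => { (listeners[type] = listeners[type] || []).push(fn); },
  dispatchEvent: (evt) => { (listeners[evt.type] || []).forEach(fn => fn(evt)); }
};
const received = [];
const self = { port: { emit: (name, data) => received.push({ name, data }) } };

// The actual bridge: forward the CustomEvent's detail to index.js,
// where panel.port.on('message', ...) picks it up.
window.addEventListener('message', event => self.port.emit('message', event.detail));

// Simulate sendToAddon({action: 'drag'}):
window.dispatchEvent({ type: 'message', detail: { action: 'drag' } });
```

With the real SDK, only the single addEventListener line would live in controls.js; the stubs are just here to make the message flow visible.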

And we style the class in our CSS file:

.drag {
    background: red;
}

So we get something like this, before clicking the red square:

[Screenshot: the panel before clicking the red square]

And after clicking the red square:

[Screenshot: the panel after clicking the red square]

Next, I have to see if I can make the panel fill the page, then drag only the video element inside the panel, then snap the panel back to its position on the window and return it to its original size, 320 x 180.

Mozilla WebDev CommunityBeer and Tell – July 2016

Once a month, web developers from across the Mozilla Project get together to talk about our side projects and drink, an occurrence we like to call “Beer and Tell”.

There’s a wiki page available with a list of the presenters, as well as links to their presentation materials. There’s also a recording available courtesy of Air Mozilla.

Moby von Briesen: Jam Circle

This week’s only presenter was mobyvb, who shared Jam Circle, a webapp that lets users play music together. Users who connect join a shared room and see each other as circles connected to a central node. Using the keyboard (or, in browsers that support it, any MIDI-capable device), users can play notes that all other users in the channel hear and see as colored lines on each circle’s connection to the center.

The webapp also includes the beginnings of an editor that will allow users to write chord progressions and play them alongside live playback.

An instance of the site is up and running at jam-circle.herokuapp.com. Check it out!

If you’re interested in attending the next Beer and Tell, sign up for the dev-webdev@lists.mozilla.org mailing list. An email is sent out a week beforehand with connection details. You could even add yourself to the wiki and show off your side-project!

See you next month!

Daniel PocockHow many mobile phone accounts will be hijacked this summer?

Summer vacations have been getting tougher in recent years. Airlines cut into your precious vacation time with their online check-in procedures and a dozen reminder messages, there is growing concern about airport security and Brexit has already put one large travel firm into liquidation leaving holidaymakers in limbo.

If that wasn't all bad enough, now there is a new threat: while you are relaxing in the sun, scammers fool your phone company into issuing a replacement SIM card or transferring your mobile number to a new provider and then proceed to use it to take over all your email, social media, Paypal and bank accounts. The same scam has been appearing around the globe, from Britain to Australia and everywhere in between. Many of these scams were predicted in my earlier blog SMS logins: an illusion of security (April 2014) but they are only starting to get publicity now as more aspects of our lives are at risk, scammers are ramping up their exploits and phone companies are floundering under the onslaught.

With the vast majority of Internet users struggling to keep their passwords out of the wrong hands, many organizations have started offering their customers the option of receiving two-factor authentication codes on their mobile phone during login. Rather than making people safer, this has simply given scammers an incentive to seize control of telephones, usually by tricking the phone company to issue a replacement SIM or port the number. It also provides a fresh incentive for criminals to steal phones while cybercriminals have been embedding code into many "free" apps to surreptitiously re-route the text messages and gather other data they need for an identity theft sting.

Sadly, telephone networks were never designed for secure transactions. Telecoms experts have made this clear numerous times. Some of the largest scams in the history of financial services exploited phone verification protocols as the weakest link in the chain, including a $150 million heist reminiscent of Ocean's 11.

For phone companies, SMS messaging came as a side-effect of digital communications for mobile handsets. It is less than one percent of their business. SMS authentication is less than one percent of that. Phone companies lose little or nothing when SMS messages are hijacked so there is little incentive for them to secure it. Nonetheless, like insects riding on an elephant, numerous companies have popped up with a business model that involves linking websites to the wholesale telephone network and dressing it up as a "security" solution. These companies are able to make eye-watering profits by "purchasing" text messages for $0.01 and selling them for $0.02 (one hundred percent gross profit), but they also have nothing to lose when SIM cards are hijacked and therefore minimal incentive to take any responsibility.

Companies like Google, Facebook and Twitter have thrown more fuel on the fire by encouraging and sometimes even demanding users provide mobile phone numbers to "prove they are human" or "protect" their accounts. Through these antics, these high profile companies have given a vast percentage of the population a false sense of confidence in codes delivered by mobile phone, yet the real motivation for these companies does not appear to be security at all: they have worked out that the mobile phone number is the holy grail in cross-referencing vast databases of users and customers from different sources for all sorts of creepy purposes. As most of their services don't involve any financial activity, they have little to lose if accounts are compromised and everything to gain by accurately gathering mobile phone numbers from as many users as possible.

Can you escape your mobile phone while on vacation?

Just how hard is it to get a replacement SIM card or transfer/port a user's phone number while they are on vacation? Many phone companies will accept instructions through a web form or a phone call. Scammers need little more than a user's full name, home address and date of birth: vast lists of these private details are circulating on the black market, sourced from social media, data breaches (99% of which are never detected or made public), marketing companies and even the web sites that encourage your friends to send you free online birthday cards.

Every time a company has asked me to use mobile phone authentication so far, I've opted out, and I'll continue to do so. Even if somebody does hijack my phone account while I'm on vacation, the consequences for me are minimal, as it will not give them access to any other account or service. Can you and your family members say the same thing?

What can be done?

  • Opt out of mobile phone authentication schemes.
  • Never give your mobile phone number to web sites unless there is a real and pressing need for them to call you.
  • Tell firms you don't have a mobile phone, or that you share your phone with your family and can't use it for private authentication.
  • If you need to use two-factor authentication, only use technical solutions such as smart cards or security tokens that have been engineered exclusively for computer security. Leave them in a locked drawer or safe while on vacation. Be wary of anybody who insists on SMS and doesn't offer these other options.
  • Rather than seeking to "protect" accounts, simply close some or all social media accounts to reduce your exposure and eliminate the effort of keeping them "secure" and updating "privacy" settings.
  • If your bank provides a relationship manager or other personal contact, this can also provide a higher level of security, as they get to know you.

See my previous blogs on SMS messaging, security and two-factor authentication, including my earlier blog SMS Logins: an illusion of security.

Air MozillaThe Joy of Coding - Episode 64

mconley livehacks on real Firefox bugs while thinking aloud.

Air MozillaThe Invention Cycle: Going From Inspiration to Implementation with Tina Seelig

Bringing fresh ideas to life and ultimately to market is not a well charted course. In July, our guest Tina Seelig will share a new...

Daniel Stenbergcurl wants to QUIC

The interesting Google transfer protocol known as QUIC is being passed through the IETF grinding machines, hopefully to end up with a proper "spec" that has been reviewed and agreed to by many peers, and to become a protocol that is thoroughly documented with broad consensus among protocol people. Follow the IETF QUIC mailing list for all the action.

I’d like us to join the fun

Similarly to how we implemented HTTP/2 support early on for curl, I would like us to get "on the bandwagon" early for QUIC: to aid the protocol development, to serve as a testing tool for both the protocol and the server implementations, and of course to get a solid implementation for users who'd like a proper QUIC-capable client for data transfers.


The current version of the QUIC protocol (made entirely by Google, not the output of the work they're now doing on it within the IETF) is already widely used, as Chrome speaks it with Google's services in preference to HTTP/2 and other protocol options. Only a few implementations of QUIC exist outside of the official ones Google offers as open source; Caddy offers a separate server implementation, for example.

the Google code base

For curl's sake, we can't use the Google code as the basis for a QUIC implementation: it is C++, and code used within the Chrome browser is too entangled with the browser and its particular environment to become very good when converted into a library. (There's a libquic project doing exactly this conversion.)

for curl and others

The ideal way to implement QUIC for curl would be to create an "nghttp2" alternative that does QUIC. An "ngquic", if you will! A library that handles the low-level protocol fiddling, the binary framing, etc. Done that way, a QUIC library could be used by other projects that would like QUIC support, and everyone who'd like to see this protocol supported in those tools and libraries could join in and make it happen. Such a library would need to be written in plain C and be suitably licensed for it to be really interesting for curl's use.

a needed QUIC library

I’m hoping my post here will inspire someone to get such a project going. I will not hesitate to join in and help it get somewhere! I haven’t started such a project myself because I think I already have enough projects on my plate so I fear I wouldn’t be a good leader or maintainer of a project like this. But of course, if nobody else will do it I will do it myself eventually. If I can think of a good name for it.

some wishes for such a library

  • Written in C, to offer the same level of portability as curl itself and to allow it to get used as extensions by other languages etc
  • FOSS-licensed suitably
  • It should preferably not “own” the socket but also work in-memory and to allow applications to do many parallel connections etc.
  • Non-blocking. It shouldn’t wait for things on its own but let the application do that.
  • Should probably offer both client and server functionality for maximum use.
  • What else?
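The "shouldn't own the socket" and non-blocking wishes describe an I/O-free protocol core: the library parses and produces bytes, while the application does all the reading and writing on sockets it owns. A toy sketch of that shape, in JavaScript for brevity (the wished-for library would be plain C, and every name here is invented for illustration):

```javascript
// Toy I/O-free protocol core: it never touches a socket. The application
// feeds in bytes it read, and asks for bytes it should write.
function createProtocolCore() {
  const outgoing = [];
  return {
    // application pushes bytes it received from its own socket
    receive(bytes) {
      // a real core would parse frames here; we just echo, uppercased
      outgoing.push(bytes.toUpperCase());
    },
    // application drains whatever should be written to its own socket;
    // this never blocks, it simply returns what is currently queued
    bytesToSend() {
      return outgoing.splice(0);
    }
  };
}

const core = createProtocolCore();
core.receive('ping');               // app read this from the network
const toWrite = core.bytesToSend(); // app now writes ['PING'] back out
```

Because the core only transforms buffers, one application can drive many parallel connections with many cores, and the design stays testable entirely in memory.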

Air MozillaConnected Devices Weekly Program Update, 19 Jul 2016

Weekly project updates from the Mozilla Connected Devices team.

Mozilla Localization (L10N)Localization Hackathon in Berlin

After many delays, we collectively picked the balmy first weekend of June, and Berlin as our host city, for a localization hackathon. We had four people representing each of the Dutch/Frisian and Ukrainian communities, three from the German community, and one from South African English. Most of them had not been to an l10n hackathon before; many had never met in person within the community even though they had been collaborating for years.

[Photo: group shot]

As with the other hackathons this year, we let each team plan how they spent their time together and set team goals for what they wanted to accomplish over the weekend. The localization drivers would lead some group discussions. As a group, we split the weekend covering the following topics:

We ran a series of spectrograms, where attendees answer yes/no or agree/disagree questions by physically standing along a line from one side of the room to the other. We learned a lot about our group's views on recognition, on the web in their language, and on participation patterns. As we think about how to improve localization of Firefox, gaining insight into localizers' hearts and lives is always helpful.

Axel shared some organizational updates from the Orlando All-Hands: we recapped the status of Firefox OS and the new focus on Connected Devices. We also covered the release schedules of Firefox for iOS and Android.

We spent a bit more time talking about the upcoming changes to localization of Firefox, with L20n and repository changes coming up. In the meantime, we have a dedicated blog post on l20n for localizers, so read up on l20n there. Alongside, we’ll stop using individual repositories and workflows for localizing Firefox Nightly, Developer Edition, Beta, and release. Instead the strings needed for all of them will be in a single place. That’s obviously quite a few changes coming up, and we got quite a few questions in the conversations. At least Axel enjoys answering them.


Our renewed focus on translation quality resulted in the development of a style guide template as a guideline for localization communities to emulate. We went through all the categories and sub-categories and explained what was expected: each community should elaborate on them and provide locale-specific examples. We stressed the importance of having a style guide, as it helps with consistency between multiple contributors to a single product, and across all products and projects. This exercise encouraged the communities who thought they had a guide to review and update it, and those who didn't have one to create one. The Ukrainian community created a draft version soon after they returned home. Having an established style guide also helps with training and onboarding new contributors.
We also went over the categories and definitions specified in MQM. We immediately used that knowledge to review, through a live demo in a Pontoon-like tool, some inconsistencies in the strings extracted from Ukrainian projects. To me, this was one of the highlights of the weekend:

  1. how to give constructive feedback using one of the defined categories;
  2. recurring types of mistakes, whether by a particular contributor or across a locale;
  3. terminology consistency within a project, product or group of products, especially with multiple contributors;
  4. the importance of peer review.

For the rest of the weekend, each community had their own breakout sessions, reviewed their to-do lists, fixed bugs, completed some projects, and spent one-on-one time with the l10n drivers.

[Photo: Brandenburg Gate and the team]

We were incredibly blessed with great weather: the unusually heavy rain that flooded many parts of Germany stopped during our visit. A meetup like this would not be complete without experiencing some local culture. Axel, a Berlin native, was tasked with showing us around. We walked, walked and walked, with occasional public transportation in between. We covered several landmarks such as the Berlin Wall, the Brandenburg Gate, several memorials, and the landmark Gedächtniskirche, as well as parks and streets crowded with locals. Of course we sampled cuisines that reflect the diverse culture Berlin has: we had great kebabs and the best kebabs, Chinese fusion, the seasonal asparagus and, of course, German beer. For some of us, this was not our first visit to Berlin, but as a group activity, with Axel as our guide, the visit was that much more memorable. Before we said goodbye, the thought of next year's hackathon came to mind. Our Ukrainian community volunteered to host it in Lviv, a beautiful city in the western part of the country. We shall see.

Air MozillaMartes mozilleros, 19 Jul 2016

Bi-weekly meeting to talk (in Spanish) about Mozilla status, community and projects.

David LawrenceHappy BMO Push Day!

The following changes have been pushed to bugzilla.mozilla.org:

  • [1283323] Rename “Triage Report” link on Reports page.
  • [1286650] Allow explicit specification of an API key in scripts/issue-api-key.pl
  • [1287039] Please add Katharina Borchert and CIO to recruiting lists
  • [1286960] certain github commit messages are not being auto-linkified properly
  • [1254882] develop a nightly script to revoke access to legal bugs from ex-employees

Discuss these changes on mozilla.tools.bmo.

Armen ZambranoUsability improvements for Firefox automation initiative - Status update #1

The developer survey conducted by Engineering Productivity last fall indicated that debugging test failures reported by automation is a significant frustration for many developers; in fact, it was the biggest deficit identified by the survey. As a result, the Engineering Productivity team (aka the A-Team) is working on improving the user experience of debugging test failures in our continuous integration and on speeding up the turnaround for Try server jobs.

This quarter's main focus is on:
  • Debugging tests on interactive workers (Linux on TaskCluster only)
  • Improving end-to-end times on Try (the Thunder Try project)

For all bugs and priorities you can check out the project management page for it:

In this email you will find the progress we've made recently. In future updates you will see a delta from this email.

PS: These status updates will be fortnightly.

Debugging tests on interactive workers
Accomplished recently:
  • Landed support for running reftest and xpcshell via tests.zip
  • Many UX improvements to the interactive loaner workflow

Upcoming:
  • Make sure Xvfb is running so you can actually run the tests!
  • Mochitest support + all other harnesses

Thunder Try - Improve end to end times on try

Project #1 - Artifact builds on automation

Accomplished recently:
  • Landed prerequisites for Windows and OS X artifact builds on try.
  • Identified which tests should be skipped with artifact builds

Upcoming:
  • Provide a try syntax flag to trigger only artifact builds instead of full builds, starting with opt Linux 64.

Project #2 - S3 Cloud Compiler Cache

Accomplished recently:
  • Sccache’s Rust re-write has reached feature parity with Python’s sccache
  • Now testing sccache2 on Try

Upcoming:
  • We want to roll out a two-tier sccache for Try, which will enable it to benefit from cache objects from integration branches.

Project #3 - Metrics

Accomplished recently:

  • Putting Mozharness steps’ data inside Treeherder’s database for aggregate analysis

Upcoming:
  • TaskCluster Linux builds are currently built using a mix of m3/r3/c3 2xlarge AWS instances, depending on pricing and availability. We're going to assess the effect of more powerful AWS instance types on build speeds, as one potential way of reducing end-to-end Try times.

Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

This Week In RustThis Week in Rust 139

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us an email! Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

New Crates & Project Updates

Crate of the Week

This week has a belated Crate of the Week with Vincent Esche's self-submitted cargo-modules, which gives us the cargo modules subcommand that shows the module structure of our crates in a tree view, optionally warning of orphans. Thanks, Vincent!

Submit your suggestions for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

105 pull requests were merged in the last two weeks.

New Contributors

  • abhi
  • Aravind Gollakota
  • Ben Boeckel
  • Ben Stern
  • David
  • Dridi Boukelmoune
  • Isaac Andrade
  • Zhen Zhang

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

  • 7/20. Rust Community Team Meeting at #rust-community on irc.mozilla.org.
  • 7/21. Rust Hack & Learn Karlsruhe.
  • 7/27. Rust Community Team Meeting at #rust-community on irc.mozilla.org.

If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

fn work(on: RustProject) -> Money

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

fzammetti: Am I the only one that finds highly ironic the naming of something that's supposed to be new and cutting-edge after a substance universally synonymous with old, dilapidated and broken down?

paperelectron: Rust is as close to the bare metal as you can get.

On /r/programming.

Submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and brson.

Karl Dubost[worklog] Edition 027. Tracking protection and a week of boxes.

Tracking protection is an interesting beast: a feature meant to help users, yet users think the site is broken. I guess it's something similar to habits. If you put a mask on your face and have forgotten about it, you may be surprised that people do not want to talk to you.

Webcompat Life

Progress this week:

Today: 2016-07-19T11:32:54.030052
316 open issues
needsinfo       5
needsdiagnosis  76
needscontact    20
contactready    41
sitewait        168

You are welcome to participate

Webcompat issues

(a selection of some of the bugs worked on this week).

  • An issue with the MLB site displaying the plays.
  • An interesting CSS issue about display: table and max-height behaving differently in Chrome and Firefox, maybe related to a known issue. To be confirmed.
  • Enabling Tracking Protection in Firefox creates a lot of issues which are not completely understood by users. We are starting to get a set of Web Compatibility reports where a site breaks or crashes when tracking protection is enabled. Usually, the site's JavaScript didn't take into account that some people might want to block some of the page's assets, and this creates unintended consequences. There is probably something to improve around UX here, so that users really understand their choices.
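The failure pattern behind many of these reports is page code that calls into a third-party global without checking whether the (possibly blocked) script ever loaded. A defensive sketch; window.ga here is just a stand-in for any tracker global, and the function is my illustration rather than code from an actual report:

```javascript
// Defensive pattern: never assume a third-party script actually loaded.
function trackEvent(name) {
  // window.ga stands in for any tracker global a page might expect
  if (typeof window !== 'undefined' && typeof window.ga === 'function') {
    window.ga('send', 'event', name);
  }
  // otherwise: skip analytics silently instead of throwing and
  // breaking every script that runs after this one
}

// with the tracker blocked (or in Node, where window is undefined),
// this is a harmless no-op:
trackEvent('play-clicked');
```

A page written this way degrades gracefully when tracking protection removes an asset, instead of crashing its own player or navigation code.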

WebCompat.com dev

Reading List

  • CSS Containment.

    the contain property, which indicates that the element’s subtree is independent of the rest of the page.

    If I understand correctly, this seems like something that would answer many of the complaints we hear from web developers about CSS isolation, specifically the layout value: contain: layout.

    This value turns on layout containment for the element. This ensures that the containing element is totally opaque for layout purposes; nothing outside can affect its internal layout, and vice versa.

    Implemented in Blink. I didn't find an issue on the WebKit project (Safari), and I didn't find a bug in Mozilla's Bugzilla either. Can I use? Probably not.

Follow Your Nose


  • Document how to write tests on webcompat.com using test fixtures.
  • ToWrite: Amazon prefetching resources with <object> for Firefox only.


Roberto A. VitilloData Analysis Review Checklist

Writing good code is hard; writing a good analysis is harder. Peer review is an essential tool to fight repetitive errors and omissions and, more generally, to spread knowledge. I have found a checklist invaluable for remembering the most important things to watch out for during a review. It's far too easy to focus on a few details and ignore others, which might be caught (or not) in a later round.

I don’t religiously apply every bullet point of the following checklist to every analysis, nor is this list complete; more items would have to be added depending on the language, framework, libraries, models, etc. used.

  • Is the question the analysis should answer clearly stated?
  • Is the best/fastest dataset that can answer the question being used?
  • Do the variables used measure the right thing (e.g. submission date vs activity date)?
  • Is a representative sample being used?
  • Are all data inputs checked (for the correct type, length, format, and range) and encoded?
  • Do outliers need to be filtered or treated differently?
  • Is seasonality being accounted for?
  • Is sufficient data being used to answer the question?
  • Are comparisons performed with hypotheses tests?
  • Are estimates bounded with confidence intervals?
  • Should the results be normalized?
  • If any statistical method is being used, are the assumptions of the model met?
  • Is correlation confused with causation?
  • Does each plot communicate an important piece of information or address a question of interest?
  • Are legends and axes labelled, and do they start from 0?
  • Is the analysis easily reproducible?
  • Does the code work, i.e. does it perform its intended function?
  • Is there a more efficient way to solve the problem, assuming performance matters?
  • Does the code read like prose?
  • Does the code conform to the agreed coding conventions?
  • Is there any redundant or duplicate code?
  • Is the code as modular as possible?
  • Can any global variables be replaced?
  • Is there any commented out code and can it be removed?
  • Is logging missing?
  • Can any of the code be replaced with library functions?
  • Can any debugging code be removed?
  • Where third-party utilities are used, are returning errors being caught?
  • Is any public API commented?
  • Is any unusual behavior or edge-case handling described?
  • Is there any incomplete code? If so, should it be removed or flagged with a suitable marker like ‘TODO’?
  • Is the code easily testable?
  • Do tests exist and do they actually test that the code is performing the intended functionality?
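One of the checklist items, bounding estimates with confidence intervals, is concrete enough to sketch. A minimal, illustrative JavaScript example using the normal approximation (1.96 standard errors for a 95% interval); a real analysis would reach for a proper statistics library:

```javascript
// Bound an estimate (here, the sample mean) with a normal-approximation
// 95% confidence interval.
function meanWithCI(values) {
  const n = values.length;
  const mean = values.reduce((a, b) => a + b, 0) / n;
  // sample variance (n - 1 in the denominator)
  const variance = values.reduce((a, b) => a + (b - mean) ** 2, 0) / (n - 1);
  const se = Math.sqrt(variance / n); // standard error of the mean
  return { mean, low: mean - 1.96 * se, high: mean + 1.96 * se };
}

// e.g. eight daily measurements of some metric
const { mean, low, high } = meanWithCI([12, 15, 11, 14, 13, 16, 12, 14]);
console.log(mean, low, high); // the interval brackets the mean
```

Reporting the (low, high) pair alongside the point estimate is exactly the kind of thing a reviewer can check mechanically against the list above.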

Christian HeilmannA great time and place to ask about diversity and inclusion

whiteboard code

Hardly a day goes by right now without a post or a talk about diversity and inclusiveness in our market. And that's a great thing. Most complain about the lack of both. And that's a very bad thing.

It has been proven over and over that diverse teams create better products. Our users are all different and have different needs. If your product team's structure reflects that, you're already one up on the competition. You're also much less likely to build a product just for yourself, and we are not our end users.

Let’s assume we are pro-diversity and pro-inclusiveness. And it should be simple for us – we come from a position of strength:

  • We’re expert workers and we get paid well.
  • We are educated and we have companies courting us and looking after our needs once we have been hired.
  • We’re not worried about being able to pay our bills or random people taking our jobs away.

I should say "yet", because automation is on the rise and even our jobs can be optimised away sooner or later. Some of us are even working on that.

For now, though, we are in a very unique position of power. There are not enough expert workers to fill the jobs. We have job offers thrown at us, and hiring bonuses, perks and extra offers are reaching ridiculous levels. When you tell someone outside our world about them, you get shocked looks. We're like the investment bankers and traders of the eighties, and we should help ensure that our image doesn't turn into what theirs is now.

If we really want to change our little world and become a shining beacon of inclusion, we need to not only talk about it but demand it. A large part of the lack of diversity in our market is that it is not part of our hiring practices. The demands we make of new hires make it very hard for someone without a privileged background, or without a degree from a university of standing, to get into our market. And that makes no sense. The people who can change that are us: the people in the market who tick all the boxes.

To help the cause and make the things we demand in blog posts and keynotes happen, we should bring our demands to the table when and where they matter: in job interviews and application processes.

Instead of asking about our hardware, share options and perks like free food and dry cleaning, we should ask for the things that really matter:

  • What is the maternity leave process in the company? Can paternity leave be matched? We need to make it impossible for an employer to pick a man over a woman because of this biological reason.
  • Why is a degree part of the job? I have none and had lots of jobs that required one. This seems like an old requirement that just got copied and pasted because of outdated reasons.
  • What is the long-term plan the company has for me? We kept getting asked where we see ourselves in five years; that question has become a cliché by now. Showing that the company knows what to do with you in the long term shows commitment, and it means you are not just a young, gifted person to be burned out and expected to leave within a year.
  • Is there a chance of a 4-day week or flexible work hours? For a young person it is no problem doing an 18-hour shift in an office where everything is provided for you. As soon as you have children, all kinds of other things are added to your calendar that can't be moved.
  • What does this company do to ensure diversity? This might be a bit direct, but it is easy to weed out those that pay lip service.
  • What is the process to move in between departments in this company? As you get older and you stay around for longer, you might want to change career. A change in your life might make that necessary. Is the company supporting this?
  • Is there a way to contribute to hiring and resourcing even when you are not in HR? This could give you the chance to ask the right questions to weed out applicants that are technically impressive but immature or terrible human beings.
  • What is done about accessibility in the internal company systems? I worked for a few companies where internal systems were inaccessible to visually impaired people. Instead of giving them extra materials we should strive for making internal systems available out-of-the-box.
  • What is the policy on moving to other countries or working remotely? Many talented people cannot move or don't want to start a new life somewhere else. And they shouldn't have to. This is the internet we work on.
  • What do you do to prevent ageism in the company? A lot of companies have an environment that is catering to young developers. Is the beer-pong table really a good message to give?

I’ve added these questions to a repo on GitHub, please feel free to add more questions if you find them.

FWIW, I started where I am working right now because I got good answers to questions like these. My interviews consisted of talking to mixed groups of people who shared their findings as teams, not one very aggressive person asking me to out-code them. It was such a great experience that I started here, and it wasn’t just a first impression. The year I’ve worked here since has proved that even in interviewing, diversity very much matters.

Photo Credit: shawncplus

Mozilla WebDev Community: Extravaganza – July 2016

Once a month, web developers from across Mozilla get together to talk about the work that we’ve shipped, share the libraries we’re working on, meet new folks, and talk about whatever else is on our minds. It’s the Webdev Extravaganza! The meeting is open to the public; you should stop by!

You can check out the wiki page that we use to organize the meeting, or view a recording of the meeting in Air Mozilla. Or just read on for a summary!

Shipping Celebration

The shipping celebration is for anything we finished and deployed in the past month, whether it be a brand new site, an upgrade to an existing one, or even a release of a library.

Basket switch to Salesforce

First up was pmac, who shared the news that Basket, the email newsletter subscription service, has switched to using Salesforce as the backend for storing newsletter subscriptions. In addition, the service now has a nifty public DataDog metrics dashboard showing off statistics about how the service is performing.

Engagement Engineering Status Board

Next was giorgos, who shared status.mozmar.org, a status page listing the current status of all the services that Engagement Engineering maintains. The status board pulls monitoring information from Dead Man’s Snitch as well as New Relic‘s application and Synthetics monitoring. The app runs a worker using AWS Lambda that pulls the information and writes it to a YAML file in the repo‘s gh-pages branch, and the status page itself reads the YAML file via JavaScript to build the display.
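For a sense of the data flow, here is a minimal sketch of the worker's output step, with invented service names and an invented output shape; the real worker's code may look quite different:

```python
# Hypothetical sketch: serialize per-service states as YAML for the
# static status page's JavaScript to read. Names and structure are made up.
def render_status_yaml(statuses):
    lines = ["services:"]
    for name, state in sorted(statuses.items()):
        lines.append(f"  - name: {name}")
        lines.append(f"    status: {state}")
    return "\n".join(lines) + "\n"

print(render_status_yaml({"basket": "up", "bedrock": "up"}))
```

In the real setup the generated file is committed to the repo’s gh-pages branch, and the page fetches and parses it client-side.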


DXR

ErikRose stopped by to share more cool things that shipped in DXR this month:

  • Indexing for XBL and JavaScript.
  • Indexing 32+ new projects.
  • Added a 3rd build server.
  • Several performance optimizations that cut down build times by roughly 25%.
  • C++ macro definitions, method overrides, pure virtuals, substructs, and more are all now indexed. In addition, you can now easily jump between header files and their implementations.
  • UI improvements, including contrast improvements, a new filename filter, and jumping directly to files that are the only result of a query.

Special thanks to intern new_one and contributors twointofive and abbeyj. Also special thanks to MXR for being shut down due to security bugs and allowing DXR to flourish in its wake.

Fathom 1.0 and 1.1

Erik also brought up Fathom, an experimental framework for extracting meaning from webpages. Fathom allows you to write declarative rules that score and classify DOM nodes, and then extract those nodes from a DOM that it analyzes.

This month we shipped the 1.0 version of Fathom, as well as a 1.1 release with a bug fix for Firefox support and an optimization fix. It’s available as an NPM module for use as a library.


Roundtable

The Roundtable is the home for discussions that don’t fit anywhere else.

Engagement Engineering Hiring – Senior Webdev and Site Reliability Engineer

Last up was pmac again, who wanted to mention that the Mozilla Engagement Engineering team is hiring a Senior Web Developer and a Site Reliability Engineer. If you’re interested in working at Mozilla, click those links to apply on our careers site!

If you’re interested in web development at Mozilla, or want to attend next month’s Extravaganza, subscribe to the dev-webdev@lists.mozilla.org mailing list to be notified of the next meeting, and maybe send a message introducing yourself. We’d love to meet you!

See you next month!

Chris H-C: Units and Data Follow-Up: Pokémon GO in the United Kingdom

Hit augmented-reality mobile gaming sensation Pokémon GO is now available in the UK, so it’s time to test my hypothesis about searches for 5km converted to miles in that second bastion of “Let’s use miles as distance units in defiance of basically every other country”:


Results are consistent with hypothesis.

(( Now if only I could get around how the Google Play Store is identifying my Z10 as incompatible with the game… ))


Mozilla Addons Blog: A Better Add-on Discovery Experience

People who personalize Firefox like their Firefox better. However, many people don’t know that they can, and for those who do, it isn’t particularly easy. So a few months ago, we began rethinking our entire add-on discovery experience—from helping people understand the benefits of personalization, to making it easier to install an add-on, to putting the right add-on in front of people at the right time.

The first step we’ve taken towards a better discovery experience is in the redesign of our Add-on Discovery Pane. This is typically the first page users see when they launch the Add-on Manager at about:addons.

Add-on Discovery Pane before Firefox 48


We updated this page to target people who are just getting started with add-ons, by simplifying add-on installation to just one click and using clean images and text to quickly orient a new user.

Disco Pane One Click Install

It features a tightly curated list of add-ons that provide customizations that are easy for new users to understand.

Add-on Discovery Pane starting with Firefox 48


We started with a small list of add-ons and collaborated with their developers to ensure the best possible experience for users. For future releases, we will refresh the featured content more frequently and open up the nomination process for inclusion.

Our community of developers creates awesome add-ons, and we want to help users discover them and love their Firefox even more. In the coming months, we are going to continue improving the experience by making recommendations that are as uniquely helpful to users as possible.

In the meantime, this first step toward improving the Firefox personalization experience will land in Firefox 48 on August 1, and is available in Firefox Beta now. So download Firefox Beta, go to about:addons and give it a try! (You can also reach this page by going to the Tools menu and choosing “Add-ons”). We would love to hear your feedback in the forums.

Hal Wine: Legacy vcs-sync is dead! Long live vcs-sync!


tl;dr: No need to panic - modern vcs-sync will continue to support the gecko-dev & gecko-projects repositories.

Today’s the day to celebrate! No more bash scripts running in screen sessions providing dvcs conversion experiences. Woot!!!

I’ll do a historical retrospective in a bit. Right now, it’s time to PARTY!!!!!

Kartikaya Gupta: Bitcoin mining as an ad replacement?

The web as we know it basically runs on advertising. Which is not really great, for a variety of reasons. But charging people outright for content doesn't work that great either. How about bitcoin mining instead?

Webpages can already run arbitrary computation on your computer, so instead of funding themselves through ads, they could instead include a script that does some mining client-side and submits the results back to their server. Instead of paying with dollars and cents you're effectively paying with electricity and compute cycles. Seems a lot more palatable to me. What do you think?
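To make the idea concrete, here is a toy proof-of-work loop in Python (the function and token are invented; real Bitcoin mining hashes block headers and is vastly harder, so this only illustrates the shape of paying with compute cycles):

```python
import hashlib

def mine(page_token: bytes, difficulty: int) -> int:
    """Find a nonce whose SHA-256 digest starts with `difficulty` zero hex
    digits. A page script would run this and submit the nonce to the server."""
    nonce = 0
    while True:
        digest = hashlib.sha256(page_token + str(nonce).encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1
```

Each extra zero digit multiplies the expected work by 16, so a site could tune the "price" of its content.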

Shing Lyu: Identify Performance Regression in Servo

Performance has always been a key focus of the Servo browser engine project. But just measuring performance through profilers and benchmarks is not enough. The first impression for a real user is the page load time. Although many internal, non-visible optimizations are important, we still want to make sure our page load time is doing well.

Back in April, I opened this bug #10452 to start planning the page load test. With kind advice from the Servo community and the Treeherder people, we finally settled on a test design similar to the Talos test suite, and decided to use Perfherder for visualization.

Test Design

Talos is a performance test suite designed for Gecko, the browser engine for Firefox. It has many different kinds of tests, covering user-level UI testing and benchmarking. But what we really care about is the TP5 page load test suite. As the wiki says, TP5 uses Firefox to load 51 scraped websites selected from the Alexa Top 500 sites of its time. Those sites are hand-picked, then downloaded and cleaned to remove all external web resources. These web pages are then hosted on a local server to reduce the impact of network latency.

Each page is tested three times for Servo, and we take the median of the three. (We should test more times, but it would take too long.) All the medians are then averaged using a geometric mean. The geometric mean has a great property: even if two test results are on different scales (e.g. 500 ms vs. 10000 ms), a 10% change in either of them has an equal impact on the average.


Talos test results for Gecko have been using Treeherder and Perfherder for a while. The former is a dashboard for test results per commit; the latter is a line plot visualization for the Talos results. With the help from the Treeherder team, we were able to push Servo performance test results to the Perfherder dashboard. I had a blog post on how to do this. You’ll see screenshots for Treeherder and Perfherder in the following sections.


We created a Python test runner to execute the test. To minimize the effect of hardware differences, we run the tests in the Vagrant (VirtualBox backend) virtual machine used in Servo’s CI infrastructure. (You can find the Vagrantfile here.) The test is scheduled by buildbot and runs every midnight.

The test results are collected into a JSON file, then consumed by the test result uploader script. The uploader script formats the test results, calculates the average and pushes the data to Treeherder/Perfherder through the Python client.

The 25% Speedup!

A week before the Mozilla London Workweek, we found a big gap in the Perfherder graph. The average page load time dropped from about 2000 ms to 1500 ms on June 10th.

Improvement graph

We were very excited about the significant improvement. Perfherder conveniently links to the commits in that build, but there are 26 commits in between.

Link to commits

GitHub commits

You may notice that there are many commits by the “bors-servo” bot, our automatic CI bot that does the pre-commit testing and auto-merging. Those commits are the merge commits generated when a pull request is merged. The other commits come from the contributors’ branches, so they may appear earlier than the corresponding merge commit. Since we only care about when a commit gets merged to the master branch, not when a contributor commits to their own branch, we’ll only bisect the merge commits by bors-servo.
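The search over those merge commits is then an ordinary bisection. A sketch of the idea, with a hypothetical shows_change predicate standing in for "force a Buildbot build of this commit and compare the page load number":

```python
def first_changed(merge_commits, shows_change):
    """Binary-search a chronological list of merge commits for the first one
    that shows the change, assuming the change persists once introduced."""
    lo, hi = 0, len(merge_commits) - 1  # the last commit is known to show it
    while lo < hi:
        mid = (lo + hi) // 2
        if shows_change(merge_commits[mid]):
            hi = mid
        else:
            lo = mid + 1
    return merge_commits[lo]

# With 26 commits, about 5 builds suffice instead of testing every commit.
commits = [f"sha{i:02d}" for i in range(26)]
assert first_changed(commits, lambda c: int(c[3:]) >= 17) == "sha17"
```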

Buildbot provides a convenient web interface for forcing a build on certain commits.

Buildbot force build

You can simply type the commit hash in the “Revision” field and buildbot will check out that commit, build it and run all the tests.

Buildbot force build zoom in

You can track the progress on the Buildbot waterfall dashboard.

Buildbot waterfall

Finally, you’ll be able to see the test result on Treeherder and Perfherder.


Perfherder with bisects

The performance improvement turned out to be the result of this patch by Florian Duraffourg, which uses a hashmap to replace a slow list search.

Looking Forward

In the near future, we’ll focus on improving the framework’s stability to support Servo’s performance optimization endeavor. We’ll also work closely with the Treeherder team to expand the flexibility of Treeherder and Perfherder to support more performance frameworks.

If you are interested in the framework, you can find open bugs here, or join the discussion in the tracking bug.

Thanks to William Lachance for his help with the Treeherder and Perfherder work and for helping me a lot in setting it up on the Treeherder staging server. Thanks also to Lars Bergstrom and Jack Moffit for their advice throughout the planning process, and to Adrian Utrilla for contributing many good features to this project.

Air Mozilla: Introducing Mozilla Tech Speakers

Havi Hoffman introduces the Mozilla Tech Speakers series.

The Servo Blog: These Weeks In Servo 71

In the last two weeks, we landed 173 PRs in the Servo organization’s repositories.

We have gotten great feedback and many new contributors from the release of initial Servo Nightly builds. Hopefully we can continue that as we launch Windows builds this week!

In addition to the list of CSS properties that Servo supports, we now also automatically generate a list of DOM APIs that are implemented.

Planning and Status

Our overall roadmap is available online and now includes the initial Q3 plans. From now on, we plan to include the quarterly plan with a high-level breakdown in the roadmap page.

This week’s status updates are here.

Notable Additions

  • xidorn ensured our Rust bindings generator works better with MSVC
  • Andrew Mackenzie added keyboard shortcuts to quit
  • jdm improved our inlining on some DOM bindings - twice!
  • shinglyu added page-break-before/after for Stylo
  • stshine fixed the treatment of flex-flow during float calculation
  • emilio got our Rust bindings generator building with stable Rust
  • emilio also implemented dirtyness tracking for Stylo
  • SimonSapin got geckolib building with stable Rust
  • aneesh added tests of the download code on our arm32 builder
  • cbrewster made network listener runnables cancellable
  • imperio added the final infrastructure bits required for video tag metadata support
  • izgzhen implemented FileID validity checking for blob URLs
  • Steve Melia added basic support for the :active selector
  • Aravind Gollakota added origin and same-origin referrer policies, as well as the Referrer-Policy header.
  • johannhof switched Servo to use the faster Brotli crate
  • manish ensured we don’t panic when <img> fails to parse its src
  • cbrewster made Servo on macOS properly handle case-sensitive file systems
  • canaltinova implemented the referrer property on the Document object
  • Ms2ger implemented the missing Exposed WebIDL annotation
  • jdm fixed keyboard input for non-QWERTY layouts
  • emilio implemented basic CSS keyframe animation
  • notriddle added support for CSS animations using rotation

New Contributors

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!


Servo now supports CSS keyframe animations.

Dustin J. Mitchell: Chapter in 500 Lines or Less

I wrote a chapter in the latest book in the “Architecture of Open Source Applications”, “500 Lines or Less”.

The theme of the book is to look in detail at the decisions software engineers make “in the small”. This isn’t about large-scale system design, community management, or working on huge codebases (like, say, Firefox). Nor is it about the design and implementation of “classic” computer science algorithms that a student might learn in school. The focus is in the middle ground: given a single real-world problem, how do we approach solving it and implementing the solution in an elegant, instructive form?

My chapter is on distributed consensus. I chose the topic because I was not already familiar with it, and I felt that I might produce a more instructive result if I experienced the struggles of solving a novel problem first-hand. Indeed, distributed consensus delivered! Building on some basic, published algorithms, I worked to build a practical library to provide distributed state to an application. In the process, I ran into issues from data structure aliasing to livelock (Paxos promises not to reach more than one decision. It does not promise to reach more than zero!)

The line limit (500 lines) was an interesting constraint. Many of my attempts to work around fundamental issues in distributed consensus, such as detecting failed nodes, quickly ran hundreds of lines too long. Where in a professional setting I might have produced a library of many thousands of lines and great complexity, here I produced a simple, instructive implementation with some well-understood limitations.

The entire book is available for reading online, or you can buy a PDF or printed copy. All proceeds go to charity, so please do consider buying, but at any rate please have a look and let me know what you think!

I haven’t yet read the other chapters, aside from a few early drafts. My copy is being printed, and once it arrives I’ll enjoy reading the remainder of the book.

Mozilla Localization (L10N): Localization Hackathon in Ljubljana

Earlier this week I came back from the Ljubljana Localization Hackathon which took place over the weekend. It was an inspiring meetup focused on translating Mozilla projects. I left full of energy and ideas and happy to have met many amazing people contributing to Mozilla.

Group photo

Almost thirty participants from Armenia, Bulgaria, Greece, Hungary, Macedonia, Romania, Serbia and Slovenia gathered in Ljubljana for three days. We discussed the current state of localization, the future of the localization process and technology at Mozilla. There was time for each of the communities to work on their goals as well as time for everyone to talk to each other and have fun.


The morning of the first day was dedicated to a series of updates from the Localization Drivers team. I started out by announcing the team’s updated mission statement, which is all about becoming an efficient localization provider for Mozilla. I also summarized the recent changes in the team, which is now divided into two working groups: the Technical Project Management group (Delphine, Francesco, Jeff and Peiying) and the Technical group (Axel, Matjaž, Staś and Zibi).

Francesco (flod) explained the thinking behind the new release process and the plan to use a single repository per locale for all release channels. Right now there are five different versions of Firefox: Nightly, Dev Edition Aurora, Beta, Release, and ESR, and each localization exists across four different repositories. After the migration there will only be one canonical repository for each locale. This will greatly simplify the setup for the localization teams.

Delphine then took the stage to introduce MQM which was well received. MQM is a framework for evaluating translation issues. It provides structure to the process of reviewing localizations and makes it easier to give constructive feedback to the localizers as well as track the quality of the localization over time. A central piece of MQM is a good and up-to-date style guide. Many localization teams spent the Saturday and Sunday afternoon working on their style guides. Delphine also mentioned new Transvision features available to the localizers: the Translation Consistency view and the Unchanged Strings view.

The main theme of the morning updates was simplicity and quality. We’re making a lot of effort to reduce the complexity of the localization process at Mozilla and to create approachable quality benchmarks. We want to close the feedback loop between the current localizers and new contributors; help them connect, discuss and encourage participation.  The reviewers should be able to explain why a suggestion was rejected.  There needs to be an easy and contextual communication layer between new localizers and the reviewers.

An important part of this strategy is careful planning of the work load for the localization teams. In the past we ran an experiment involving Firefox for iOS: the localizers weren’t required to manually sign off on any particular changeset. It turned out to be a big success. Delphine announced that going forward there will be no more sign-off requests done by localizers themselves. Instead the Localization Drivers will verify that the changesets work and sign off on them, thus reducing the overhead for the localizers.

Without signoffs we’ll need a better way of tracking progress and understanding the state of completion for each locale. Francesco briefed everyone on the plan to group strings into so-called buckets. Buckets will be based on how visible strings are or who their target audience is. The likely candidates for individual buckets include: the main browser UI, Developer Tools, DOM/CSS Parser errors etc. Buckets will have priorities and will allow us to track progress separately per bucket.


Saturday morning is also when we took it outside to do an exercise called a spectrogram. If you’ve followed the blog posts about the previous hackathons or have been to one, you’re likely familiar with the concept. Spectrograms allow us to have laid-back conversations on different topics. We start off by asking a question. Depending on how strongly they feel about the answer, the participants then move along an imaginary line. The line allows us to capture the subtleties in opinions and feelings. That is, the whole spectrum of responses.

Spectrograms in Ljubljana

A number of responses stood out to me during the exercise. We talked about the communication inside the localization teams and between the localization teams and the Drivers team. People were generally happy with how quickly they got their answers on IRC and the mailing list. However, a few contributors expressed a concern that it is not clear to newcomers how to contact their locale’s team.

We asked how people felt about the amount of work they did and the number of projects there were to localize. Everyone ended up in the middle of the spectrum and Stoyan from Bulgaria offered an interesting explanation: it’s because localizers like what they do and they choose to do it themselves. Related to this was a question about deadlines: are they too short? A sentiment that resonated with a lot of participants was that they didn’t mind the deadlines—but rather the amount of time it takes for a translation to reach the users. This turned out to be an important insight closely related to Live Updates to Localizations that we’re working on as part of the effort to port Firefox to L20n (more on that later).

Goce from Macedonia recalled an experience which I feel we should all remember and try not to repeat in the future. Back in the Firefox OS days there was a big rush to get a lot of content localized before a launch. In the end the release was delayed and canceled. It’s important for all involved stakeholders to have good visibility into the release planning. Precise time estimates help prioritize community work and make sure the efforts beyond the call of duty that we often see from the community aren’t in vain.

The question about mentoring new contributors vs. localizing alone spurred a long conversation with many take-aways. Fredy from Greece admitted that reviewing someone’s translations can sometimes be the same amount of work as doing the whole translation from scratch himself. He wondered if translations could have their own Discussion pages, similar to definitions in Wikipedia, which would then be publicly accessible and easy to reference in the future. Matjaž noted that reviewing might in fact be rather easy; it’s giving meaningful feedback to the translators that is hard and time-consuming. MQM will definitely help in this respect. We’re working on prototyping the UI for Pontoon to make it easy to use for everyone.

Balázs from Hungary then quipped that there are two kinds of people: those who are willing to contribute and those who are able to do it. Usually these two kinds don’t overlap. The role of authoring tools like Pontoon and Pootle is to help those who are willing bridge the gap of not being able to.

The conversation about mentoring and evaluating new contributors ended on a high note with a great story from Slovenia’s very own Lan. Lan is a SUMO localizer and he was able to attract a new contributor to the Slovenian localization a year ago. When he started reviewing their contributions he realized the translations weren’t as good as he had hoped. Lan didn’t get discouraged though. He took his time to review all translations, corrected them and then asked the new contributor to go through the fixes to get an idea of how to improve. This turned out to be a good investment of Lan’s time; today the new contributor is still active and their translations are much better!

(Lan’s story is even more impressive if you consider that he is still in high school. In fact two very active members of the Slovenian community, Lan and Amadej, are both very young. I can’t wait to see what Mozilla will have become when they’re my age!)

While it’s clear that chaperoning new contributors could help foster the community growth, the group was divided with respect to how to actually do it. Some prefer to give out small independent assignments to localize real existing projects and nurture the sense of ownership. Others would rather create a single testing project with a known good translation and evaluate new contributions in reference to it.


The question about the preferred frequency of contribution always leads to interesting findings. The two extremes of the spectrum are “I want to localize small numbers of strings daily” and “I want to localize a big number of strings once a year”. In Ljubljana most of the participants chose to stand somewhere in the middle. A smaller group preferred daily assignments that could be completed during a coffee break. Another group would rather see the frequency of localization aligned with the frequency of releases of the software. One person in the middle was Marko from Serbia who summarized his choice by saying that he liked seeing the results of his work on a regular basis.

The last question that I would like to highlight here was about testing. We wanted to know how the localizers test their localizations. The responses covered the entire spectrum. Some localization communities have dedicated QA teams while other localizers dogfood their own work by using the software themselves. Another group wasn’t sure how to test nor where to find the nightly builds. This suggests that we can do better with documenting the best practices for testing. Matjaž also suggested that this kind of information for each project could be displayed by Pontoon.

The difficulties in testing often stem from the fact that some translations are hard to come by in regular usage and only show up in rare situations. The experiences from localizing Firefox for iOS prove that automated screenshots are an invaluable tool in such situations, in addition to being useful overall.

Pontoon and L20n

On Sunday morning Matjaž and I took the stage to represent the Technical group of the Localization Drivers team. Matjaž presented Pontoon which had already been used by some of the localization teams.  There have been many improvements to performance and usability of Pontoon as of late: faster sync, bulk actions, more readable diffs, better filtering, suggestions from other languages, support for L20n’s syntax and more!  Looking into the future, one of the most important tasks ahead of us is the merger of Pontoon and l10n.mozilla.org. We want to make sure all information relevant to localization is in one place.

The L20n presentation was divided into two parts.  I introduced L20n’s new syntax, FTL, and recommended L20n by Example and the FTL Tinker as learning resources.  I then showed a few use-cases where L20n really shines. The first one was making past participles agree with the gender of the subject. The second one was about particles governing the grammatical case of nouns.  My audience could easily relate—their native languages are among the ones with the most complex grammars in the world. In fact, it was an incredibly diverse gathering of languages!  We had a strong group of Slavic languages (Bulgarian, Macedonian, Serbian, Slovenian), followed by a Romance language (Romanian) as well as two of the oldest languages in the world: Greek and Armenian. And don’t forget Hungarian, which is one of the few European languages that isn’t part of the Indo-European family.

In the second part I showed a build of Firefox ported to L20n and demoed Live Updates to Localization. It seems like the ability to push translation updates and fixes almost live without having to wait for the next software update really is a game changer. The feedback I got afterwards was very positive.

Working in Groups

Both afternoons on Saturday and Sunday were dedicated to working in groups. All the participating localization teams had set goals leading up to the hackathon. The goals ranged from catching up with localizations to reviewing suggestions to discussing the health of the community to creating style guides.  The list of goals is available in the wiki.

Working in groups in Ljubljana

The Bulgarian community deserves a special mention here.  During the hackathon Ognyan transferred the leadership of the localization team to Stoyan (first from the left in the picture below).  Congratulations, Stoyan!  Ognyan and Stoyan then quickly proceeded to update the Bulgarian localization of Firefox Desktop to 100% complete.

Stoyan and Ognyan

Notice the Mozilla l10n photo frame above?  Make sure to check out more pictures with it taken by Nino. We also had custom-made name badges and other visual accessories.  All design materials were created by Mozilla Slovenija community designer Rok.  They are available on GitHub for other teams and hackathons to reuse.

Fun and Rest

Ljubljana is a beautiful city. It’s very friendly to pedestrians; almost everything was in walking distance from the venue. It’s also very green and inviting when it comes to spending time outdoors. After a period of focused effort it was great to take a short stroll across the city.

Thanks to the amazing organizer and host, Gašper, each evening was full of activities and opportunities to get to know each other. We tasted traditional dishes from Prekmurje, a region of Slovenia close to the Hungarian border. We visited the Ljubljana castle which offers fantastic views on the mountains surrounding the city. We even competed at a kart racing circuit!


Helping Gašper was Nino, also from Slovenia, who managed the hackathon’s presence on social media. He took over the Mozillagram Instagram account for the weekend which resulted in a 10% increase in followers. Nino’s work was highlighted during the Weekly Meeting on Monday. I would also like to give a special shout-out to Jobava from Romania who did a great job taking notes during the whole weekend.  Having a dedicated person in charge of the note taking was instrumental to making everyone’s time productive. Jobava managed to capture a lot of details and insights. I often looked into his notes when writing this blog post. Hvala, Nino and Jobava!

The hackathon was an astounding success. It was diverse, educational and inspiring. A huge Thank you! to everyone who participated and helped organize it. I couldn’t be more excited for the future of localization at Mozilla!

Mozilla WebDev Community: Auto-generating a changelog from git history

At Mozilla we share a lot of open source libraries. When you’re using someone else’s library, you might find that an upgrade breaks something in your application — but why? Glancing at the library’s changelog can help. However, manually maintaining a changelog while building features for a library can be a challenge.

We’ve been experimenting with auto-generating a changelog from the commit history itself, and so far it makes changelogs easy and painless. Here’s how to set it up. The tools require NodeJS, so this approach is best suited to JavaScript libraries.

First, you need to write commit messages in a way that allows you to extract metadata for a changelog. We use the Angular conventions which specify simple prefixes like feat: for new features and fix: for bug fixes. Here’s an example of a commit message that adds a new feature:

feat: Added a `--timeout` option to the `run` command

Here’s an example of a bug fix:

fix: Fixed `TypeError: runner is undefined` in the `run` command

The nice thing about this convention is that tools such as Greenkeeper, which sends pull requests for dependency updates, already support it.
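To make the convention concrete, here is a small sketch (not part of any of these tools — the function name and regex are my own) of how a prefixed commit message breaks down into the metadata a changelog generator needs:

```javascript
// Hypothetical illustration of the Angular commit convention:
// extract the change type, optional scope, and subject from the
// first line of a commit message.
function parseCommit(message) {
  const firstLine = message.split('\n')[0];
  const match = /^(feat|fix|docs|chore)(\([^)]*\))?:\s*(.+)/.exec(firstLine);
  if (!match) {
    return null; // not a conventional commit; a linter would reject it
  }
  return {
    type: match[1],            // e.g. "feat" or "fix"
    scope: match[2] || null,   // e.g. "(run)" in "fix(run): ..."
    subject: match[3],         // the human-readable description
  };
}
```

A generator can then group commits by `type` to build the Features and Bug fixes sections.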

The first problem with this is a social one; all your contributors need to follow the convention. We chose to solve this with automation by making the tests fail if they don’t follow the conventions 🙂 It’s also documented in our CONTRIBUTING.md file. We use the conventional-changelog-lint command as part of our continuous integration to trigger a test failure:

conventional-changelog-lint --from master

One gotcha: TravisCI only does a shallow clone, which doesn’t create a local master branch. This will probably be fixed in the linter soon, but until then we had to add a workaround to our .travis.yml:
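The exact snippet didn’t survive here; an assumed workaround (not necessarily the post’s exact config) is to fetch master explicitly before the linter runs, so it has a branch to diff against:

```yaml
# Assumed .travis.yml addition: recreate the missing master branch
# in Travis's shallow clone before running conventional-changelog-lint.
before_script:
  - git remote set-branches origin master
  - git fetch origin master
  - git checkout -b master origin/master || true
```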

Alright, everybody is writing semantic commits now! We can generate a changelog before each release using the conventional-changelog tool. Since we adopted the Angular conventions, we run it like this before tagging to get the unreleased changes:

conventional-changelog -p angular -u

This scrapes our commit log, ignores merges, ignores chores (such as dependency updates), ignores documentation updates, and makes a Markdown list of features and fixes linked to their git commit. Example:

### Bug fixes
* Fixed `TypeError: runner is undefined` in the `run` command ([abc1abcd](https://github.com/.../))

### Features
* Added a `--timeout` option to the `run` command ([abc1abcd](https://github.com/.../))

As you can see, we also make sure to write commit messages in past tense so that it reads more naturally as a historic changelog. You can always edit the auto-generated changelog to make it more readable though.

The conventional-changelog tool can also update a CHANGELOG.md file but we just paste the Markdown into our GitHub releases so that it shows up next to each release tag.

That’s it! There are a lot of options in the tools to customize linting commits or changelog generation.

Air MozillaWebdev Beer and Tell: July 2016

Webdev Beer and Tell: July 2016 Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on in...

Mozilla Addons BlogWriting an opt-in UI for an extension

Our review policies for addons.mozilla.org (AMO) require transparency when it comes to an add-on’s features and how they might affect a user’s privacy and security.

These policies are particularly relevant in cases where an add-on includes monetization features that are unrelated to its main function. For example, a video downloader add-on could include a shopping recommendation feature that injects offers from commercial websites. In cases like this, to be listed on AMO the feature would need to be opt-in. Opt-in means the add-on must present the user with the option to enable the feature, with the default action keeping it disabled.

We’re often asked for examples of add-ons that do this, so I decided to create a sample WebExtension for this purpose. You can find the code on GitHub, and the built and signed example can be downloaded here.

The extension implements two opt-in prompts:

  1. A new tab (the tab opt-in).
  2. A panel in the main add-on button (the panel opt-in).

(If you don’t recognize the dark theme in the screenshots, it’s because I’m using Firefox Developer Edition—currently Firefox 49—for testing.)

An add-on would normally implement one prompt or the other. Opening a new tab has the advantage of getting the user’s attention right away, and has more room for content. The main disadvantage is that it can annoy users and feel too pushy. The pop-up approach is more user-friendly because it appears when the user is ready to engage with the add-on and is better integrated with the add-on UI. However, it wouldn’t work if the add-on doesn’t include buttons.

This example is completely minimal, hence the almost complete lack of styling. However, it includes the elements that AMO policies deem necessary:

  • Text explaining clearly to the user what the opt-in feature does and why it’s being offered. In this case, the extra feature is a variation of the borderify example on GitHub.
  • A link to a privacy policy and/or more information about the feature. That page should spell out its privacy and security implications.
  • A clear choice between enabling the feature and keeping it disabled, defaulting to disabled. I set autofocus="true" on the Cancel button, which can be clearly seen in the popup screenshot.

Hitting the Return key in either case, or closing the opt-in tab, should be assumed to mean that the user is choosing not to accept the feature (the tab-closing case isn’t implemented in this example, to make it easier to test). The example uses the storage API to keep track of two flags: one that indicates the user has clicked on either button, and one that indicates whether the user has enabled the feature. After the opt-in is registered as shown, the tab won’t show up again, and the content changes to something else.
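As a minimal sketch (the flag names and helper are assumptions, not the example extension’s actual code), the two-flag logic the storage API tracks looks like this:

```javascript
// Hypothetical helper: given the two flags persisted via
// browser.storage.local, decide whether to show the opt-in prompt
// and whether the extra feature should run.
function optInState(stored) {
  const answered = Boolean(stored && stored.answered); // user clicked a button
  const enabled = Boolean(stored && stored.enabled);   // user chose to opt in
  return {
    showPrompt: !answered,               // keep prompting until answered
    featureActive: answered && enabled,  // default is disabled
  };
}
```

Nothing is stored on first run, so `optInState({})` shows the prompt and keeps the feature off, matching the required disabled-by-default behavior.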

Note: I couldn’t find a way to look at the extension’s storage in the developer tools (I suppose it’s not implemented yet). You can clear the storage to reset the state of the extension by deleting the browser-extension-data folder in the profile.

Remember that sneaking in unwanted features is bad for your users and your add-on’s reputation, so make sure you give your users control and give them clear information to make a decision. Happy coding!

Mozilla Reps CommunityRep of the Month – May 2016

Please join us in congratulating Konstantina Papadea as Rep of the Month for May.

Konstantina is a long-time Mozilla Rep from Greece. She is also responsible for the budget and swag requests in the Reps program.

In the past months Konstantina has helped organize and chair the Reps weekly call together with Ioana. That means they are in weekly contact with many Mozillians to find new interesting topics and prepare the agenda and the notifications. She is also helping the Council with the formation of the Review Team we are implementing. This was already announced here and will give the Council more time to spend on mission and strategy. She became a mentor and will help inspire the new people applying to the program.

Please don’t forget to congratulate her on Discourse!

Anthony HughesReducing the NVIDIA Blacklist

We recently relaxed our graphics blocklist for users with NVIDIA hardware on certain older graphics drivers. The original blocklist entry blocked all versions less than v8.17.11.8265 due to stability issues with NVIDIA 182.65 and earlier. However, we recently learned that the first two numbers in the version string indicate the platform version, while the latter numbers refer to the actual driver version. As a result we were inadvertently blocking newer drivers on older platforms.
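To illustrate the version-string structure described above (the function is mine, purely illustrative — not the blocklist’s actual parsing code):

```javascript
// Hypothetical illustration: split an NVIDIA driver version string
// into its platform prefix and the underlying driver version digits.
// Per the post, 8.17.11.8265 corresponds to NVIDIA driver 182.65.
function splitNvidiaVersion(version) {
  const parts = version.split('.');
  return {
    platform: parts.slice(0, 2).join('.'), // e.g. "8.17"
    driver: parts.slice(2).join('.'),      // e.g. "11.8265"
  };
}
```

Comparing the whole string lexically, as the old entry effectively did, conflates the platform prefix with the driver version, which is why newer drivers on older platforms were caught.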

We have since opened up the blocklist for newer driver versions on both platforms (Win XP and Vista/Win7) via bug 1284322, effectively allowing drivers released after mid-2009. This change currently exists only on Nightly, but we expect it to ride the trains unless some critical regression is discovered.

If you are triaging bugs and user feedback, or are engaging with users on social media, please keep an eye out for users with NVIDIA hardware. If a user does have NVIDIA hardware, please have them check the Graphics section of about:support to confirm whether they are using a driver version that was previously blocked. If they are, try to help them update to the most recent driver version. If the issue persists, have them disable hardware acceleration to see if the issue goes away.

The same goes if you are a user experiencing quality issues (crashes, hangs, black screening, checkerboarding, etc) on NVIDIA hardware with these drivers. Please make sure your drivers are up to date.

In either case, if the issue persists please file a bug so we can investigate what is happening.

Feel free to email me if you have any questions.

Thank you for your help!

Air MozillaRelease Promotion: Keeping Tabs on Automation

Release Promotion: Keeping Tabs on Automation Reviewing Release Engineering, Release Promotion, and the Pulse-Notify microservice developed by Connor Sheehan during his internship at Mozilla.

Air MozillaRamping Up the Web

Ramping Up the Web Nancy Pang has not yet given us a description of this presentation, nor any keyword tags to make searching for this event easier.

Air MozillaIntern Presentations 2016

Intern Presentations 2016 Group 1 of our interns are going to be presenting on what they worked on this summer.

Air MozillaGod Bless FxA

God Bless FxA Sai Chandramouli talks about what was accomplished in a summer internship at Mozilla: 1. Password Hints for weak passwords 2. Adding Geolocation data to emails...

Air MozillaEverything on the Side…

Everything on the Side… Erica Wright has not yet given us a description of this presentation, nor any keyword tags to make searching for this event easier.