Chris Cooper: RelEng & RelOps Weekly Highlights - October 9, 2015

The beginning of October means autumn in the Northern hemisphere. Animals get ready for winter as the leaves change colour, and managers across Mozilla struggle with deliverables for Q4. Maybe we should just investigate that hibernating thing instead.

Modernize infrastructure: Releng, Taskcluster, and A-team sat down a few weeks ago to hash out an updated roadmap for the buildbot-to-taskcluster migration. As you can see from the document, our nominal goal this quarter is to have 64-bit linux builds *and* tests running side-by-side with the buildbot equivalents, with a stretch goal to actually turn off the buildbot versions entirely. We’re still missing some big pieces to accomplish this, but Morgan and the Taskcluster team are tackling some key elements like hooks and coalescing schedulers over the coming weeks.

Aside from Taskcluster, the most pressing releng concern is release promotion. Release promotion entails taking an existing set of builds that have already been created and passed QA and “promoting” them to be used as a release candidate. This represents a fundamental shift in how we deliver Firefox to end users, and as such is both very exciting and terrifying at the same time. Much of the team will be involved in this in Q4 because it will greatly simplify a future transition of the release process to Taskcluster.

Improve CI pipeline: Vlad and Alin have 10.10.5 tests running on try and are working on greening up tests.

Kim started a discussion on dev.planning regarding reducing the frequency of linux32 builds and tests.

Windows tests take a long time, in case you hadn’t noticed. This is largely due to e10s, which has effectively doubled the number of tests we need to run per-push. We’ve been able to absorb this extra testing on other platforms, but Windows 7 and Windows 8 have been particularly hard hit by the increased demand, often taking more than 24 hours to work through backlog from the try server. While e10s is a product decision and ultimately in the best interest of Firefox, we realize the current situation is terrible in terms of turnaround time for developer changes. Releng will be investigating updating our hardware pool for Windows machines in the new year. In the interim, please be considerate with your try usage, i.e. don’t test on Windows unless you really need to. If you can help fix e10s bugs to make it the default on beta/release ASAP, that would be awesome.

Release: The big “moment-in-time” release of Firefox 42 approaches. Rail is on the hook for releaseduty for this cycle, and is overseeing beta 5 builds currently.

Operational: Kim increased the size of the tst-emulator64 spot pool, so we’ll be able to enable additional Android 4.3 tests on debug once we have SETA data for them.

Coop (me) spent last week in Romania getting to know our Softvision contractors in person. Everyone was very hospitable and took good care of me. Alin and Vlad took full advantage of the visit to get better insight into how the various releng systems are interconnected. Hopefully this will pay off with them being able to take on more challenging bugs to advance the state of buildduty. Already they’re starting to investigate how they could help contribute to the slave loan tool. Alin and Vlad will also be joining us for Mozlando in December, so look forward to more direct interaction with them there.

See you next week!

Nick Cameron: Macros

We're currently planning an overhaul of the syntax extension and macro systems in Rust. I thought it would be a good opportunity to cover some background on macros and some of the issues with the current system. (Note that we're not considering anything really radical for the new systems, but hopefully the improvements will be a little bit more than incremental.) In this blog post I'd like to talk a bit about macros in general. In later posts I'll try and cover some more Rust-specific things and some areas (like hygiene) in more detail. If you're a Lisp (or Rust macro) expert, this post will probably be very dull.

What are macros?

Macros are a syntactic programming language feature. A macro use is expanded according to a macro definition. Macros usually look somewhat like functions, however, macro expansion happens entirely at compile-time (never at runtime), and usually in the early stages of compilation - sometimes as a preprocessing step (as in C), sometimes after parsing but before further analysis (as in Rust).

Macro expansion is usually a completely syntactic operation. That is, it operates on the program text (or the AST) without knowledge about the meaning of that text (such as type analysis).

At its simplest, macro expansion is textual substitution. For example (in C):

#define FOO 42
int x = FOO;  

is expanded by the preprocessor to

int x = 42;  

by simply replacing FOO with 42.

Likewise, with arguments, we just substitute the actual arguments into the macro definition, and then the macro into the source:

#define MIN(X, Y)  ((X) < (Y) ? (X) : (Y))
int x = MIN(10, 20);  

expands to

int x = ((10) < (20) ? (10) : (20));  

After expansion, the expanded program is compiled just like a regular program.

What is macro hygiene?

The naive implementation of macros described above can easily go wrong, for example:

static int a = 42;  
#define ADD_A(X)  ((X) + a)

void foo() {
    int a = 0;
    int x = ADD_A(10);
}

You might expect x to be 52 at runtime, but it isn't, it is 10. That's because the expansion is:

static int a = 42;

void foo() {
    int a = 0;
    int x = ((10) + a);
}

There is nothing special about a, it is just a name, so the usual scoping rules apply and we get the a in scope at the macro use site, not the macro definition site as you might expect.

This kind of unexpected result occurs because C macros are unhygienic. A hygienic macro system (as in Lisp or Rust) would preserve the scoping of the macro definition, so post-expansion, the a from the macro would still refer to the global a rather than the a in foo.
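For contrast, here is a small, runnable sketch of hygiene in today's Rust (the macro name is mine, not from the post). The a declared inside the macro lives in its own hygiene context, so it neither clashes with nor captures the caller's a:

macro_rules! add_forty_two {
    ($x:expr) => {{
        let a = 42;  // internal to the macro; distinct from any caller's `a`
        $x + a
    }};
}

fn main() {
    let a = 0;
    // The caller's `a` is used for `$x`, the macro's `a` for the `+ a`:
    assert_eq!(add_forty_two!(a), 42);
    // And the caller's binding is untouched by the macro's `let a = 42`:
    assert_eq!(a, 0);
}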

This is the simplest kind of macro hygiene. Once we get into the complexities of hygiene, it turns out there is no great definition. Hygiene applies to variables declared inside a macro (can they be referenced outside it?) as well as applying in some sense to aspects such as privacy. Implementing hygiene gets complex when macro definitions can include macro uses and further macro definitions. To make things even worse, sometimes perfect hygiene is too strong and you want to be able to bend the rules in a (hopefully) safe way.

How can macros be implemented?

Macros can be implemented as simple textual substitution, by manipulating tokens after lexing, or by manipulating the AST after parsing. Conceptually though we simply replace a macro use with the definition, whether the use and definition are represented by text, tokens, or AST nodes. There are some interesting details about exactly how lexing, parsing, and macro expansion interact. But, the most interesting implementation aspect is the algorithm used to maintain hygiene (which I'll cover in a later post).

How macros are implemented also depends on how macros are defined. The simple examples I gave above just substitute the macro definition for the macro use. macro_rules macros in Rust and syntax-rules macros in Scheme allow for pattern matching of arguments in the macro definition, so different code is substituted for the macro use depending on the arguments.
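As a small, runnable illustration (mine, not from the post), here is a macro_rules macro where a different rule is selected depending on the shape of the arguments at expansion time; the first rule matches a single expression, the second matches two or more and recurses:

macro_rules! min {
    ($x:expr) => { $x };
    ($x:expr, $($rest:expr),+) => {
        std::cmp::min($x, min!($($rest),+))
    };
}

fn main() {
    assert_eq!(min!(3), 3);
    assert_eq!(min!(10, 20, 5), 5);
}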

How macros are defined also affects how and when they are lexed and parsed. (C macros are not parsed until substitution is completely finished. Rust macros are lexed into tokens before expansion and parsed afterwards.)

Procedural macros

The macros described so far simply replace a macro use with macro definition. The macro expander might manipulate the macro definition (to implement hygiene or pattern matching), but the macro definition does not affect the expansion other than providing input. In a procedural macro system, each macro is defined as a program. When a macro use is encountered, the macro is executed (at compile time still) with the macro arguments as input. The macro use is replaced by the result of execution.

For example (using a made up macro language, which should be understandable to Rust programmers. Note though that Rust procedural macros work nothing like this):

proc_macro! foo(x) {
    let mut result = String::new();
    for i in 0..10 { result.push_str(x); }
    result
}

fn main() {
    let a = "foo!(bar)"; // Hand-waving about string literals.
}

will expand to

fn main() {
    let a = "barbarbarbarbarbarbarbarbarbar";
}

A procedural macro is a generalisation of the syntactic macros described so far. One could imagine implementing a syntactic macro as a procedural macro by returning the text of the syntactic macro after manually substituting the arguments.

John O'Duinn: The “Distributed” book-in-progress: Early Release #1 now available!

My previous post described how O’Reilly does rapid releases, instead of waterfall-model releases, for book publishing. Since then, I’ve been working with the folks at O’Reilly to get the first milestone of my book ready.

As this is the first public deliverable of my first book, I had to learn a bunch of mechanics, asking questions and working through many, many details. Very time consuming, and all new-to-me, hence my recent silence. The level of detailed coordination is quite something – especially when you consider how many *other* books O’Reilly has in progress at the same time.

One evening, while in the car to a social event with friends, I looked up the “not-yet-live” page to show to friends in the car – only to discover it was live. Eeeeek! People could now buy the 1st milestone drop of my book. Exciting, and scary, all at the same time. Hopefully, people like it, but what if they don’t? What if I missed an important typo in all the various proof-reading sessions? I barely slept at all that night.

In O’Reilly language, this drop is called “Early Release #1 (ER#1)”. Now that ER#1 is out, and I have learned a bunch about the release mechanics involved, the next milestone drop should be more routine. Which is good, because we’re doing these every month. Oh, and like software: anyone who buys ER#1 will be prompted to update when ER#2 is available later in Oct, and prompted again when ER#3 is available in Nov, and so on.

You can buy the book-in-progress by clicking here, or clicking on the thumbnail of the book cover. And please, do let me know what you think – Is there anything I should add/edit/change? Anything you found worked for you, as a “remotie” or person in a distributed team, which you wish you knew when you were starting? If you were going to set up a distributed team today, what would you like to know before you started?

To make sure that any feedback doesn’t get lost or caught in spam filters, I’ve set up a special email address (feedback at oduinn dot com), although I’ve already been surprised by feedback via twitter and linkedin. Thanks again to everyone for their encouragement, proof-reading help and feedback so far.

Now, it’s time to brew more coffee and get back to typing.


Nathan Froyd: gecko include file statistics

I was inspired to poke at which files were most heavily #include‘d and which files contributed the most text as a result of their #include‘ing after seeing the simplicity of Libre Office’s script for doing so. I had to rewrite it in Python, as the obvious modifications to the awk script weren’t working, and I had no taste for debugging awk code. I’ve put the script up as a gist:

It’s intended to be run from a newly built objdir on Linux like so:

python .

The ability to pick a subdirectory of interest:

python dom/bindings/

was useful to me when I was testing the script, so I wasn’t groveling through several thousand files at a time.

The output lines are formatted like so:

total_size file_size num_of_includes filename

and are intended to be manipulated further via sort, etc. The script might work on Mac and Windows, but I make no promises.

The results were…interesting, if not especially helpful at suggesting modifications for future work. I won’t show the entirety of the script’s output, but here are the top twenty files by total size included (size of the file on disk multiplied by number of times it appears as a dependency), done by filtering the script’s output through sort -n -k 1 -r | head -n 20 | cut -f 1,4 -d ' ':

332478924 /usr/lib/gcc/x86_64-linux-gnu/4.9/include/avx512fintrin.h
189877260 /home/froydnj/src/gecko-dev.git/js/src/jsapi.h
161543424 /usr/include/c++/4.9/bits/stl_algo.h
141264528 /usr/include/c++/4.9/bits/random.h
113475040 /home/froydnj/src/gecko-dev.git/xpcom/glue/nsTArray.h
105880002 /usr/include/c++/4.9/bits/basic_string.h
92449760 /home/froydnj/src/gecko-dev.git/xpcom/glue/nsISupportsImpl.h
86975736 /usr/include/c++/4.9/bits/random.tcc
76991387 /usr/include/c++/4.9/type_traits
72934768 /home/froydnj/src/gecko-dev.git/mfbt/TypeTraits.h
68956018 /usr/include/c++/4.9/bits/locale_facets.h
68422130 /home/froydnj/src/gecko-dev.git/js/src/jsfriendapi.h
66917730 /usr/include/c++/4.9/limits
66625614 /home/froydnj/src/gecko-dev.git/xpcom/glue/nsCOMPtr.h
66284625 /usr/include/x86_64-linux-gnu/c++/4.9/bits/c++config.h
63730800 /home/froydnj/src/gecko-dev.git/js/public/Value.h
62968512 /usr/include/stdlib.h
57095874 /home/froydnj/src/gecko-dev.git/js/public/HashTable.h
56752164 /home/froydnj/src/gecko-dev.git/mfbt/Attributes.h
56126246 /usr/include/wchar.h

How does avx512fintrin.h get included so much? It turns out <algorithm> drags in a lot of code, despite people usually only needing min, max, or swap. In this case, <algorithm> includes <random> because std::shuffle requires std::uniform_int_distribution from <random>. This include chain is responsible for essentially all of the /usr/include/c++/4.9-related files in the above list.

If you are compiling with SSE2 enabled (as is the default on x86-64 Linux), then <random> includes <x86intrin.h> because <random> contains a SIMD Mersenne Twister implementation. And <x86intrin.h> is a clearinghouse for all sorts of x86 intrinsics, even though all we need is a few typedefs and intrinsics for SSE2 code. Minus points for GCC header cleanliness here.

What about the top twenty files by number of times included (filter the script’s output through sort -n -k 3 -r | head -n 20 | cut -f 3,4 -d ' ')?

2773 /home/froydnj/src/gecko-dev.git/mfbt/Char16.h
2268 /home/froydnj/src/gecko-dev.git/mfbt/Attributes.h
2243 /home/froydnj/src/gecko-dev.git/mfbt/Compiler.h
2234 /home/froydnj/src/gecko-dev.git/mfbt/Types.h
2204 /home/froydnj/src/gecko-dev.git/mfbt/TypeTraits.h
2132 /home/froydnj/src/gecko-dev.git/mfbt/Likely.h
2123 /home/froydnj/src/gecko-dev.git/memory/mozalloc/mozalloc.h
2108 /home/froydnj/src/gecko-dev.git/mfbt/Assertions.h
2079 /home/froydnj/src/gecko-dev.git/mfbt/MacroArgs.h
2002 /home/froydnj/src/gecko-dev.git/xpcom/base/nscore.h
1973 /usr/include/stdc-predef.h
1955 /usr/include/x86_64-linux-gnu/gnu/stubs.h
1955 /usr/include/x86_64-linux-gnu/bits/wordsize.h
1955 /usr/include/x86_64-linux-gnu/sys/cdefs.h
1955 /usr/include/x86_64-linux-gnu/gnu/stubs-64.h
1944 /usr/lib/gcc/x86_64-linux-gnu/4.9/include/stddef.h
1942 /home/froydnj/src/gecko-dev.git/mfbt/Move.h
1941 /usr/include/features.h
1921 /opt/build/froydnj/build-mc/js/src/js-config.h
1918 /usr/lib/gcc/x86_64-linux-gnu/4.9/include/stdint.h

Not a lot of surprises here. A lot of these are basic definitions for C++ and/or Gecko (<stdint.h>, mfbt/Move.h).

There don’t seem to be very many obvious wins, aside from getting GCC to clean up its header files a bit. Getting us to the point where we can use <type_traits> instead of our homegrown mfbt/TypeTraits.h would be a welcome development. Making js/src/jsapi.h less of a mega-header might help some, but brings a burden of “did I remember to include the correct JS header files”, which probably devolves into people cutting-and-pasting complete lists, which isn’t a win. Splitting up nsISupportsImpl.h seems like it could help a little bit, though with unified compilation, I suspect we’d likely wind up including all the split-up files at once anyway.

QMO: Firefox 42 Beta 7 Testday, October 16th

Greetings Mozillians!

We are holding the Firefox 42.0 Beta 7 Testday next Friday, October 16th. The main focus of this event is the Control Center feature. As usual, there will be unconfirmed bugs to triage and resolved bugs to verify. :)

Detailed participation instructions are available in this etherpad.

No previous testing experience is required so feel free to join us on the #qa IRC channel and our moderators will make sure you’ve got everything you need to get started.

Hope to see you all on Friday!

Let’s make Firefox better together!

Vladan Djeric: Update from the Content Performance program #2

During Q3, Avi Halachmi, Aaron Klotz and I compared Firefox's scrolling & page-loading performance against other Windows browsers on several popular sites using a low-end HP Pavilion 14t i3-5010u laptop. This post describes our findings.

First, a refresher…

In my previous post from June, I published our initial Q2 findings:

  • bug 1213413, bug 715376 related: Process-per-tab e10s is necessary to prevent heavy activity in a background tab (e.g. loading GMail) from affecting scrolling smoothness in a foreground tab
  • bug 1213425: Firefox scrolling smoothness badly deteriorates when the laptop is in power-saver mode
  • bug 1174899: Aaron Klotz found a 100% CPU usage bug while scrolling a Facebook profile containing many HTML5 videos
  • Scrolling a Twitter feed with YouTube HTML5 videos is jankier in Firefox, but Twitter newsfeed changed to no longer autoplay videos

The June post also outlined other scenarios needing testing which we studied in Q3.

Our more recent discoveries

NOTE: All phase 1 content-perf findings are in meta bug 1213469, and all content perf bugs we have on file are tagged with [content perf]

  • bug 1213434: Chrome navigates a lot faster to from a Google search result page. The difference is particularly noticeable with a cold network cache. Video showing the differences:
  • bug 1199468: Our smooth-scrolling parameters might not be optimal, but this is still being studied (and debated!)
  • We learned that some graphics acceleration configurations actually harm Firefox scrolling performance:
    • bug 1213429: Scrolling a Facebook profile page with D3D9 acceleration (and D2D disabled) is actually worse than scrolling without any gfx acceleration at all (no D3D + no D2D)
    • bug 1213432: Similarly, D3D11 Warp acceleration (without D2D) provides worse scrolling performance than no gfx acceleration at all (no D3D + no D2D)
    • Direct3D 11 and Direct2D acceleration (either D2D 1.0 or 1.1) did not produce better scrolling performance on our reference pages
      • bug 1213440: In particular, on a Yahoo search page with a few image results embedded, scrolling performance and consistency will be worse with D2D 1.1 than D2D 1.0

During our investigations, we also discovered other user-visible issues that did not directly relate to page scrolling & page loading:

  • bug 1213435: Firefox content-process memory usage is significantly worse than Chrome's and is lagging IE as well
  • bug 1172205: Firefox's page-loading tab throbber spins erratically while loading
  • bug 1213438: Scrolling through a Facebook profile triggers additional network requests, but only Firefox repeatedly changes the tab's title between its real title and “Connecting…”. This is distracting and draws unnecessary attention to any network delays

We tested many more use cases since the previous progress update, but we mostly found Firefox's performance on par or better:

  • Firefox's scrolling performance did not regress on Windows 10 as compared to Windows 8 (for the 3 reference sites: Yahoo, Facebook and Twitter)
  • Most gfx configurations did not hurt Firefox scrolling performance: external monitor connected, DPI scaling enabled, e10s enabled (comparing non-APZ e10s vs non-e10s), accessibility technology enabled, etc
    • In tests with external monitors, we noticed that Firefox consistently chooses the correct refresh rate, unlike other tested browsers
    • Unfortunately, this finding did not generalize to better gfx performance overall on all multi-monitor setups
  • We also found page-loading times on Facebook, Twitter, and Yahoo comparable across browsers
  • As an aside, APZC has a noticeably positive impact on scrolling smoothness, but there are issues with checkerboarding and correctness (bug 1178298), so it might be a while before we see APZC riding the trains.

We didn't have time to get as far as we wanted to with content-perf tooling in Q3, but Wander Costa did contribute a prototype of a tool for measuring browser responsiveness during page-loads:

Content Performance in Q4

The Perf team's top priority in Q4 is to verify e10s performance using our many measurement systems and generally help get e10s performance ready for release.

As a result, Avi will be the only developer working on content-perf this quarter. However, Avi will be working on content-perf full-time in Q4 and he has already covered additional ground (including Fennec). He will soon be blogging about additional content-perf findings over at his blog.

Mozilla Fundraising: Rebuilding Mozilla’s donate form using ReactJS

Last year’s fundraising campaign was a success thanks to the team who worked on it. After the campaign ended our team met together to see what worked well and what didn’t go as planned. The engineers who worked on the …

Julien Vehent: SSL/TLS analysis of the Internet's top 1,000,000 websites

It seems that evaluating different SSL/TLS configurations has become a hobby of mine. After publishing Server Side TLS back in October, my participation in discussions around cipher preferences, key sizes, elliptic curve security, etc. has significantly increased (ironically so, since the initial, naive, goal of "Server Side TLS" was to reduce the amount of discussion on this very topic).

More guides are being written on configuring SSL/TLS server side. One that is quickly gaining traction is Better Crypto, which we discussed quite a bit on the dev-tech-crypto mailing list.

People are often passionate about these discussions (and I am no exception). But one item that keeps coming back, is the will to kill deprecated ciphers as fast as possible, even if that means breaking connectivity for some users. I am absolutely against that, and still believe that it is best to keep backward compatibility to all users, even at the cost of maintaining RC4 or 3DES or 1024 DHE keys in our TLS servers.

update: this blog post was posted almost two years ago, and thankfully we've managed to almost kill RC4 and 1024 DHE keys in the meantime. I still believe that backward compatibility is critical, but I certainly do not want to have to enable RC4 or weak keys ever again :)

One question that came up recently, on dev-tech-crypto, is "can we remove RC4 from Firefox entirely?". One would think that, since Firefox supports all of these other ciphers (AES, AES-GCM, 3DES, Camellia, ...), surely we can remove RC4 without impacting users. But without numbers, it is not an easy decision to make.

Challenge accepted: I took my cipherscan arsenal for a spin, and decided to scan the Internet.

Scanning methodology

The scanning scripts are on github, and the results dataset is available for download. Uncompressed, the dataset is around 1.2GB, but XZ does an impressive job at compressing that to a 17MB archive.

I use Alexa's list of top 1,000,000 websites as a source. One script scans targets in parallel, with some throttling to limit the number of simultaneous scans to around 100, and writes the results into the "results" directory. Each target's results are stored in a json file named after the target. Another script walks through the results directory and computes the stats. It's quite basic, really.

It took a little more than 36 hours to run the entire scan. A total of 451,470 websites have been found to have TLS enabled. Out of 1,000,000, that's a 45% ratio.

While not a comprehensive view of the Internet, it carries enough data to estimate the state of SSL/TLS in the real world.

SSL/TLS survey of 451,470 websites from Alexa's top 1 million websites


Cipherscan retrieves all supported ciphers on a target server. The listing below shows which ciphers are typically supported, and which ciphers are only supported by some websites. This last item is the most interesting, as it appears that 1.23% of websites only accept 3DES, and 1.56% of websites only accept RC4. This is important data for developers who are considering dropping support for 3DES and RC4.

Noteworthy: there are two people, out there, who, for whatever reason, decided to only enable Camellia on their sites. To you, Sirs, I raise my glass.

The battery of unusual ciphers, prefixed with a 'z' to be listed at the bottom, is quite impressive. The fact that 28% of websites support DES-CBC-SHA clearly outlines the need for better TLS documentation and education.

Supported Ciphers         Count     Percent
3DES                      422845    93.6596
3DES Only                 5554      1.2302
AES                       411990    91.2552
AES Only                  404       0.0895
CAMELLIA                  170600    37.7877
CAMELLIA Only             2         0.0004
RC4                       403683    89.4152
RC4 Only                  7042      1.5598
z:ADH-DES-CBC-SHA         918       0.2033
z:ADH-SEED-SHA            633       0.1402
z:AECDH-NULL-SHA          3         0.0007
z:DES-CBC-MD5             55824     12.3649
z:DES-CBC-SHA             125630    27.8269
z:DHE-DSS-SEED-SHA        1         0.0002
z:DHE-RSA-SEED-SHA        77930     17.2614
z:ECDHE-RSA-NULL-SHA      3         0.0007
z:EDH-DSS-DES-CBC-SHA     11        0.0024
z:EDH-RSA-DES-CBC-SHA     118684    26.2883
z:EXP-ADH-DES-CBC-SHA     611       0.1353
z:EXP-DES-CBC-SHA         98680     21.8575
z:EXP-EDH-DSS-DES-CBC-SHA 11        0.0024
z:EXP-EDH-RSA-DES-CBC-SHA 87490     19.3789
z:EXP-RC2-CBC-MD5         105780    23.4301
z:IDEA-CBC-MD5            7300      1.6169
z:IDEA-CBC-SHA            53981     11.9567
z:NULL-MD5                379       0.0839
z:NULL-SHA                377       0.0835
z:NULL-SHA256             9         0.002
z:RC2-CBC-MD5             63510     14.0674
z:SEED-SHA                93993     20.8193
Key negotiation

A pleasant surprise is the percentage of deployment of ECDHE. 21% is not a victory, but an encouraging number for an algorithm that will hopefully replace RSA soon (at least for key negotiation).

DHE, supported since SSLv3, is close to 60% deployment. We need to bump that number up to 100%, and soon!

Supported Handshakes      Count     Percent
DHE                       267507    59.2524
ECDHE                     97570     21.6116

Perfect Forward Secrecy is all the rage, so evaluating its deployment is most interesting. I am actually triple checking my results to make sure that the percentage below, 75% of websites supporting PFS, is accurate, because it seems so large to me. Even more surprising, is the fact that 61% of tested websites, either prefer, or let the client prefer, a PFS key exchange (DHE or ECDHE) to other ciphers.

As expected, the immense majority, 98%, of DHE keys are 1024 bits. Several reasons to this:

  • In Apache 2.4.6 and before, the DH parameter is always set to 1024 bits and is not user configurable. Future versions of Apache will automatically select a better value for the DH parameter.
  • Java 6, and probably other libraries as well, do not support a DHE key size larger than 1024 bits.

So, while everyone agrees that requiring a 2048-bit RSA modulus while using 1024-bit DHE keys effectively reduces TLS security, there is no solution to this problem right now, other than breaking backward compatibility with old clients.

On ECDHE's side, handshakes almost always use the P-256 curve. Again, this makes sense, since Internet Explorer, Chrome and Firefox only support P256 at the moment. But according to recent research published by DJB & Lange, this might not be the safest choice.

The curve stats below are to take with a grain of salt: Cipherscan uses OpenSSL under the hood, and I am not certain of how OpenSSL elects the curve during the Handshake. This is an area of cipherscan that needs improvement, so don't run away with these numbers just yet.

Supported PFS             Count     Percent  PFS Percent
Support PFS               342725    75.9131
Prefer PFS                279430    61.8934

DH,1024bits               262561    58.1569  98.1511
DH,1539bits               1         0.0002   0.0004
DH,2048bits               3899      0.8636   1.4575
DH,3072bits               2         0.0004   0.0007
DH,3248bits               2         0.0004   0.0007
DH,4096bits               144       0.0319   0.0538
DH,512bits                76        0.0168   0.0284
DH,768bits                825       0.1827   0.3084

ECDH,P-256,256bits        96738     21.4273  99.1473
ECDH,B-163,163bits        37        0.0082   0.0379
ECDH,B-233,233bits        295       0.0653   0.3023
ECDH,B-283,282bits        1         0.0002   0.001
ECDH,B-571,570bits        329       0.0729   0.3372
ECDH,P-224,224bits        4         0.0009   0.0041
ECDH,P-384,384bits        108       0.0239   0.1107
ECDH,P-521,521bits        118       0.0261   0.1209

A few surprises in the Protocol scanning: there are still 18.7% of websites that support SSLv2! Seriously, guys, we've been repeating it for years: SSLv2 is severely broken, don't use it!

I particularly appreciate the 38 websites that only accept SSLv2. Nice job.

Also of interest, is the 2.6% of websites that support TLSv1.2, but not TLSv1.1. This would make sense, if the number of TLSv1.2 websites was actually larger than 2.6%, but it isn't (0.001%). So I can only imagine that, for some reason, websites use TLSv1 and TLSv1.2, but not 1.1.

Update: ''harshreality'', on HN, dug up a changelog in OpenSSL that could explain this behavior:

Changes between 1.0.1a and 1.0.1b 26 Apr 2012

- OpenSSL 1.0.0 sets SSL_OP_ALL to 0x80000FFFL and OpenSSL 1.0.1 and 1.0.1a set SSL_OP_NO_TLSv1_1 to 0x00000400L which would unfortunately mean any application compiled against OpenSSL 1.0.0 headers setting SSL_OP_ALL would also set SSL_OP_NO_TLSv1_1, unintentionally disabling TLS 1.1 also. Fix this by changing the value of SSL_OP_NO_TLSv1_1 to 0x10000000L. Any application which was previously compiled against OpenSSL 1.0.1 or 1.0.1a headers and which cares about SSL_OP_NO_TLSv1_1 will need to be recompiled as a result.

Unsurprisingly, however, the immense majority supports SSLv3 and TLSv1. Respectively 99.6% and 98.7%. The small percentage of websites that support TLSv1.1 and 1.2 is worrisome, but not surprising.

Systems administrators are hardly to blame, considering the poor support of recent TLS versions in commercial products. Vendors could definitely use a push, so before you renew your next contract, make sure to add TLSv1.2 to your wishlist.

Supported Protocols       Count     Percent
SSL2                      85447     18.9264
SSL2 Only                 38        0.0084
SSL3                      449864    99.6443
SSL3 Only                 4443      0.9841
TLS1                      446575    98.9158
TLS1 Only                 736       0.163
TLS1.1                    145266    32.1762
TLS1.1 Only               1         0.0002
TLS1.2                    149921    33.2073
TLS1.2 Only               5         0.0011
TLS1.2 but not 1.1        11888     2.6332

What isn't tested

This is not a comprehensive test. RSA key sizes are not evaluated. Nor are TLS extensions, OCSP Stapling support, and a bunch of features that could be interesting to look at. Maybe next time.

Educate, and be backward compatible

If this little experiment showed something, it is that old ciphers and protocols are far from dead. Sure, you can decide to kill RC4 and 3DES in your client today, but be aware that a small percentage of the internet will be unreachable to you, and your users.

What can we do about it? Education is key: TLS is a complex subject, and most administrators and website owners don't have the time and knowledge to dig through dozens of mailing lists and blog posts to find the best configuration choices.

It is the primary motivation for documents such as Server Side TLS and Better Crypto. Some of us are working on improving these documents. But we need an army to broadcast the message, teach administrators in conferences, mailing lists and user groups, and push website owners to apply more secure configurations to their websites.

We could use some help: go out there and teach TLS!

Air Mozilla: German speaking community bi-weekly meeting

Bi-weekly meeting of the German-speaking community.

Yunier José Sosa Vázquez: Mozilla's vision for a healthy and sustainable Web

This is a translation of the original article published on The Mozilla Blog, written by Denelle Dixon-Thayer.

It is not at all surprising that the recent discussions around content blocking have given rise to a polarized debate between users who choose to block content as a way to control their Web experience and the commercial interests that monetize that content. This inevitably leads us to a discussion about which content is good, which content is bad, and which content should be blocked.

Instead of focusing on the symptoms of the problem, we should ask ourselves why users have turned to blunt instruments, such as content blockers, to help them navigate their online lives. We don't yet know the full answer to this question. What we can see is that motivations differ among users and may depend on the device they use (for example, desktop users may focus more on privacy, with performance as a secondary benefit, while performance and mobile data usage would be the priority for a phone user). We, as an industry, must understand what the user needs.

User needs and commercial interests are not a zero-sum game; the two are complementary parts of a thriving, resilient Web. Striking a balance between commercial benefits and user benefits is critically important to the health of the Web.

One area that requires more balance is user data. Collecting and using data is not inherently harmful. It helps power personalized features, keep products up to date, provide user support, and improve the way products work. Giving users value through data collection is a healthy and necessary way to help create experiences they find engaging. However, when user data is collected without giving users value or control in return, the exchange between users and the industry becomes opaque and confusion sets in. That is when users begin to distrust the entire system, including those who do honor their relationship with them.

We are trying to get to the root of the problem, and not only through research. We are also working to develop products, features, and engagement that support users in having a great experience without losing sight of commercial sustainability.

We need your help to find this delicate balance and chart a path toward a Web built on trust. In addition, tools such as Lightbeam, Smart On Privacy, and Web Literacy are programs that educate users and offer insight into how the Web works under the hood.

On the commercial end of the equation, we are playing a leadership role (EN) in publisher initiatives so that publishers take charge of the experiences delivered on their sites and offer an advertising experience that is more acceptable to users.

As an industry, we have to keep the user at the center of our product vision rather than seeing them merely as a target to be acquired. This is the only way to honor user choice and deliver the best and most trustworthy experience possible.

Source: Mozilla Hispano

Mozilla Open Policy & Advocacy Blog: 4 Days in NYC for the Open Web Fellows

The inaugural cohort of the Ford-Mozilla Open Web Fellows met in New York last week for only the second time face to face.  Working remotely from Lima, Washington DC, Boston and London, the 6 fellows meet weekly with Melissa Romaine from Mozilla’s San Francisco office, and with me from my home office in Victoria, British Columbia. This was an In Real Life™ meeting we were all looking forward to, if for nothing else than the important reminder that we aren’t squares on a video conference call – we are talented and complicated humans.

Mozilla NYC

The six fellows are placed within Internet Freedom organizations, working on a mixture of team and individual projects.

      • Paola Villarreal, American Civil Liberties Union, Massachusetts.
        Paola is working on Data for Justice, a data-driven advocacy tool that visualizes information critical for eliminating injustice in communities.
      • Tim Sammut, Amnesty International. Tim’s projects are:
        Secure Communications Framework: An approachable framework for human rights researchers that helps them understand how to communicate with contacts around the world safely in the context of varying threats and information sensitivity.
        Community Incident Response: Help human rights organizations in Amnesty’s worldwide network access technical assistance during active digital attacks.
      • Andrea Del Rio, Association for Progressive Communications
        Andrea is creating the web version of the Feminist Principles of The Internet, which aims to inspire people not only to imagine a Feminist Internet but actually build one that is fair, inclusive, empowering and safe for everyone.
      • Drew Wilson, Free Press
        Drew is embedded in Free Press’ Internet2016 campaign and is building tools that internet rights advocates can use to bootstrap their own activism projects.
      • Gem Barrett, Open Technology Institute
        Gem is a member of the MLab team at OTI, helping to build the largest collection of open Internet performance data on the planet.
      • Tennyson Holloway, Public Knowledge
        Tennyson is working on projects that inspire and educate future web advocates. “What can i do for the” is a website that represents a vision of a story-based platform that educates, inspires, and assists users to join the open web movement. His other projects involve creating web games that explain tech policy issues in Washington, such as copyright and patent trolls.

The Weather Report

Being the first cohort, the 2015 fellows have their fair share of challenges and opportunities.  The challenge: we’re living a plan that is being executed for the first time.  Almost everything needs to be answered by “I don’t know. Let me get back to you”.  On the plus side, this cohort will likely play the largest role in shaping the program and will have the highest degree of input on where we need to make adjustments.  This day was about navigating that tension and also identifying where we are starting to win.


A random sample of substantive issues we discussed:

-How do we design a fellowship program that serves both established and emerging careers?

-What’s the right balance of individual projects and independent research within a fellowship year?

-How do we identify our mentors? Can these people be found for us, or is it in fact something we need to find time to do? (spoiler alert – that’s on us)

Some key takeaways for the Mozilla program team:

-The Mozilla network is a key asset. We need to present the “menu” of potential contacts and access to people that we can provide

-We need to find a way to bring the work of the fellows to Mozilla audiences

-We can assist fellows in finding mentors – those individuals that fellows can go to for advice and that have their best interests at heart

We ended the day with a Q & A with Mozilla’s Executive Director, Mark Surman.  Mark shared with the fellows his vision for leadership development at Mozilla, which he’s previously blogged about here.   He left with two invitations for the cohort – be demanding, and make sure Mozilla is doing all it can to advance your goals.  But also, be generous – give to each other and the program.

Mapping Collaboration

The 2015 cohort is impressive.  They’ve advised governments, settled refugees, built movements and shipped products.  One thing we needed to accomplish together was an identification of the believable ways that the cohort could collaborate together – from running workshops with one another to building a shared project, we spent time mapping this landscape and committing to some next steps. We were joined by Mozilla’s Internet Policy manager Jochai Ben-Avie, who will be working with the cohort during their fellowship year.


Some things we committed to producing together

-5 Lightning Talks we’ll give within the cohort about skills we want to share or an issue we are passionate about

-A Mozilla Wiki page about the fellowship cohort – You can now refer to this page to stay up to date on the 2015 cohort.

-Collaborating with the larger Mozilla Advocacy team to help develop advocacy campaigns

-Net Posi, a podcast about activism started by the cohort – listen to the first episode below and subscribe here.

We headed to midtown for a meeting with Jenny Toomey, Lori McGlinchey and Michael Brennan from Ford’s Internet Rights program.  We were also joined by Joshua Cinelli, who manages Ford’s strategic communications. It was a great chance for us all to learn more about why Internet Rights has been a strategic focus for Ford, and how they see field building and talent development fitting into their strategy.  As Lori McGlinchey, the Internet Rights Program officer expressed – “we need civil society orgs to see technologists not as the cherry on top of a cake they already are having trouble paying for – technologists need to be thought of as essential to these teams”.  It was also a chance for Ford to internalize the diversity and talent of our cohort and the projects we’ve undertaken.  This was the first time that the fellows and Ford staff had met, and we all left with a heightened understanding of not only our role within the Internet Freedom ecosystem, but the opportunities for us to make an impact.


From there we headed to Civic Hall for our closing event.  We hosted 30 activists and technologists for social change in a conversation designed to learn more about the projects of our cohort. We also met with several organizations hoping to place fellows within their organizations in 2016, and were fortunate to be able to dedicate some 1-1 time to these allies in the field.  We split into small groups where fellows lead discussions around their projects.

We finished the evening by braving the rainy ripple effects of Hurricane Joaquin to have a final meal together.  Exhausted but productive, the trains, planes and automobiles took us out of New York to reflect on, internalize, and act on what we’d learned.

A HUGE thank you to Misty Avila who joined us from Aspiration Technology to facilitate our days together.  We couldn’t have accomplished so much without her talent and spirit!


Niko Matsakis: Virtual Structs Part 4: Extended Enums And Thin Traits

So, aturon wrote this interesting post on an alternative virtual structs approach, and, more-or-less since he wrote it, I’ve been wanting to write up my thoughts. I finally got them down.

Before I go any further, a note on terminology. I will refer to Aaron’s proposal as the Thin Traits proposal, and my own previous proposal as the Extended Enums proposal. Very good.

(OK, I lied, one more note: starting with this post, I’ve decided to disable comments on this blog. There are just too many forums to keep up with! So if you want to discuss this post, I’d recommend doing so on this Rust internals thread.)


Let me lead with my conclusion: while I still want the Extended Enums proposal, I lean towards implementing the Thin Traits proposal now, and returning to something like Extended Enums afterwards (or at some later time). My reasoning is that the Thin Traits proposal can be seen as a design pattern lying latent in the Extended Enums proposal. Basically, once we implement specialization, which I want for a wide variety of reasons, we almost get Thin Traits for free. And the Thin Traits pattern is useful enough that it’s worth taking that extra step.

Now, since the Thin Traits and Extended Enums proposal appear to be alternatives, you may wonder why I would think there is value in potentially implementing both. The way I see it, they target different things. Thin Traits gives you a way to very precisely fashion something that acts like a C++ or Java class. This means you get thin pointers, inherited fields and behavior, and you even get open extensibility (but, note, you thus do not get downcasting).

Extended Enums, in contrast, is targeting the fixed domain use case, where you have a defined set of possibilities. This is what we use enums for today, but (for the reasons I outlined before) there are various places that we could improve, and that was what the extended enums proposal was all about. One advantage of targeting the fixed domain use case is that you get additional power, such as the ability to do match statements, or to use inheritance when implementing any trait at all (more details on this last point below).

To put it another way: with Thin Traits, you write virtual methods whereas with Extensible Enums, you write match statements – and I think match statements are far more common in Rust today.
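To make that contrast concrete, here is a minimal, runnable sketch in today's Rust (the Shape type is invented for illustration): a fixed-domain enum handled with a match statement, the style that Extended Enums targets. Under Thin Traits, area would instead be a virtual method that each concrete type overrides.

enum Shape {
    Circle { radius: f64 },
    Square { side: f64 },
}

fn area(shape: &Shape) -> f64 {
    match shape {
        // Exhaustive match: the compiler knows the full set of variants.
        &Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
        &Shape::Square { side } => side * side,
    }
}

Because the set of variants is closed, the compiler can check that the match is exhaustive, which is exactly the fixed-domain property discussed above.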

Still, Thin Traits will be a very good fit for various use cases. They are a good fit for Servo, for example, where they can be put to use modeling the DOM. The extensibility here is probably a plus, if not a hard requirement, because it means Servo can spread the DOM across multiple crates. Another place that they might (maybe?) be useful is if we want to have a stable interface to the AST someday (though for that I think I would favor something like RFC 757).

But I think there are a bunch of use cases for extensible enums that thin traits don’t cover at all. For example, I don’t see us using thin traits in the compiler very much, nor do I see much of a role for them in LALRPOP, etc. In all these cases, the open-ended extensibility of Thin Traits is not needed and being able to exhaustively match is key. Refinement types would also be very welcome.

Which brings me to my final thought. The Extended Enums proposal, while useful, was not perfect. It had some rough spots we were not happy with (which I’ll discuss later on). Deferring the proposal gives us time to find new solutions to those aspects. Often I find that when I revisit a troublesome feature after letting it sit for some time, I find that either (1) the problem I thought there was no longer bothers me or (2) the feature isn’t that important anyway or (3) there is now a solution that was either previously not possible or which just never occurred to me.

OK, so, with that conclusion out of the way, the post continues by examining some of the rough spots in the Extended Enums proposal, and then looking at how we can address those by taking an approach like the one described in Thin Traits.

Thesis: Extended Enums

Let’s start by reviewing a bit of the Extended Enums proposal. Extended Enums, as you may recall, proposed making types for each of the enum variants, and allowing them to be structured in a hierarchy. It also proposed permitting enums to be declared as unsized, which meant that the size of the enum type varies depending on what variant a particular instance is.

In that proposal, I used a syntax where enums could have a list of common fields declared in the body of the enum:

enum TypeData<'tcx> {
    // Common fields:
    id: u32,
    flags: u32,

    // Variants:
    Int { },
    Uint { },
    Ref { referent_ty: Ty<'tcx> },
}

One could also declare the variants out of line, as in this example:

unsized enum Node {
  position: Rectangle, // <-- common fields, but no variants
}

enum Element: Node {
  ...
}

struct TextElement: Element {
  ...
}


Note that in this model, the variants, or leaf nodes in the type hierarchy, are always structs. The inner nodes of the hierarchy (those with children) are enums.

In order to support the abstraction of constructors, the proposal includes a special associated type that lets you pull out a struct containing the common fields from an enum. For example, Node::struct would correspond to a struct like

struct NodeFields {
    position: Rectangle,
}

Complications with common fields

The original post glossed over certain complications that arise around common fields. Let me outline some of those complications. To start, the associated struct type has always been a bit odd. It’s just an unusual bit of syntax, for one thing. But also, the fact that this struct is not declared by the user raises some thorny questions. For example, are the fields declared as public or private? Can we implement traits for this associated struct type? And so forth.

There are similar questions raised about the common fields in the enum itself. In a struct, fields are private by default, and must be declared as public (even if the struct is public):

pub struct Foo { // the struct is public...
   f: i32        // ...but its fields are private.
}

But in an enum, variants (and their fields) are public if the enum is public:

pub enum Foo { // the enum is public...
    Variant1 { f: i32 }, // ...and so are its variants, and their fields.
}

This default matches how enums and structs are typically used: public structs are used to form abstraction barriers, and public enums are exposed in order to allow the outside world to match against the various cases. (We used to make the fields of public structs be public as well, but we found that in practice the overwhelming majority were just declared as private.)

However, these defaults are somewhat problematic for common fields. For example, let’s look at that DOM example again:

unsized pub enum Node {
  position: Rectangle,
}

This field is declared in an enum, and that enum is public. So should the field position be public or private? I would argue that this enum is more struct-like in its usage pattern, and the default should be private. We could arrive at this by adjusting the defaults based on whether the enum declares its variant inline or out of line. I expect this would actually match pretty well with actual usage, but you can see that this is a somewhat subtle rule.

Antithesis: Thin Traits

Now let me pivot for a bit and discuss the Thin Traits proposal. In particular, let’s revisit the DOM hierarchy that we saw before (Node, Element, etc), and see how that gets modeled. In the thin traits proposal, every logical class consists of two types. The first is a struct that defines its common fields and the second is a trait that defines any virtual methods. So, the root of a DOM might be a Node type, modeled like so:

struct NodeFields {
    id: u32
}

trait Node: NodeFields {
    fn something(&self);
    fn something_else(&self);
}

The struct NodeFields here just represents the set of fields that all nodes must have. Because it is declared as a superbound of Node, that means that any type which implements Node must have NodeFields as a prefix. As a result, if we have a &Node object, we can access the fields from NodeFields at no overhead, even without knowing the precise type of the implementor.

(Furthermore, because Node was declared as a thin trait, a &Node pointer can be a thin pointer, and not a fat pointer. This does mean that Node can only be implemented for local types. Note though that you could use this same pattern without declaring Node as a thin trait and it would still work, it’s just that &Node references would be fat pointers.)
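As an illustration of what an implementor would look like, here is a sketch in the proposal's hypothetical syntax (TextNode is an invented example type, not from the post). Because the struct inherits from NodeFields, its layout begins with the common fields, which is what lets a &Node reach id without knowing the concrete type:

struct TextNode: NodeFields {
    text: String,
}

impl Node for TextNode {
    fn something(&self) {
        // Both the inherited field (`id`) and the local field are available.
        println!("node {}: {}", self.id, self.text);
    }

    fn something_else(&self) { }
}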

The Node trait shown had two virtual methods, something() and something_else(). Using specialization, we can provide a default impl that lets us give some default behavior there, but also allows subclasses to override that behavior:

partial impl<T:Node> Node for T {
    fn something(&self) {
        // ... default behavior for something() ...
    }
    // Here something_else() is not defined, so it is "pure virtual".
}

Finally, if we have some methods that we would like to dispatch statically on Node, we can do that by using an inherent method:

impl Node {
    fn get_id(&self) -> u32 { self.id }
}

This impl looks similar to the partial impl above, but in fact it is not an impl of the trait Node, but rather adding inherent methods that apply to Node objects. So if we call node.get_id() it doesn’t go through any virtual dispatch at all.
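For example, a caller might mix the two kinds of dispatch (a sketch using the types above; process is an invented function):

fn process(node: &Node) {
    let id = node.get_id(); // inherent method: resolved statically, no vtable lookup
    node.something();       // trait method: dispatched virtually
    println!("processed node {}", id);
}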

You can continue this pattern to create subclasses. So adding an Element subclass might look like:

struct ElementFields: NodeFields {
    ...
}

trait Element: Node + ElementFields {
    ...
}

and so forth.

Synthesis: Extended Enums as a superset of Thin Traits

The Thin Traits proposal addresses common fields by creating explicit structs, like NodeFields, that serve as containers for the common fields, and by adding struct inheritance. This is an alternative to the special Node::struct we used in the Extended Enums proposal. There are pros and cons to using struct inheritance over Node::struct. On the pro side, struct inheritance sidesteps the various questions about privacy, visibility, and so forth that arose with Node::struct. On the con side, using structs requires a kind of parallel hierarchy, which is something we were initially trying to avoid. A final advantage for using struct inheritance is that it is a reusable mechanism. That is, whereas adding common fields to enums only affects enums, using struct inheritance allows us to add common fields to enums, traits, and other structs. Considering all of these things, it seems like struct inheritance is a better choice.

If we were to convert the DOM example to use struct inheritance, it would mean that an enum may inherit from a struct, in which case it gets the fields of that struct. For out-of-line enum declarations, then, we can simply create an enum with an empty body:

struct NodeFields {
  position: Rectangle, // <-- common fields, but no variants
}

#[repr(unsized)]
enum Node: NodeFields;

struct ElementFields: NodeFields {
  ...
}

#[repr(unsized)]
enum Element: Node + ElementFields;

(I’ve also taken the liberty of changing from the unsized keyword to an annotation, #[repr(unsized)]. Given that making an enum unsized doesn’t really affect its semantics, just the memory layout, using a #[repr] attribute seems like a good choice. It was something we considered before; I’m not really sure why we rejected it anymore.)

Method dispatch

My post did not cover how virtual method dispatch was going to work. Aaron gave a quick summary in the Thin Trait proposal. I will give an even quicker one here. It was a goal of the proposal that one should be able to use inheritance to refine the behavior over the type hierarchy. That is, one should be able to write a set of impls like the following:

impl<T> MyTrait for Option<T> {
    default fn method1() { ... }
    default fn method2() { ... }
    default fn method3();
}

impl<T> MyTrait for Option::Some<T> {
    fn method1() { /* overrides the version above */ }
    fn method3() { /* must be implemented */ }
}

impl<T> MyTrait for Option::None<T> {
    fn method2() { /* overrides the version above */ }
    fn method3() { /* must be implemented */ }
}

This still seems like a very nice feature to me. As the Thin Traits proposal showed, specialization makes this kind of refinement possible, but it requires a variety of different impls. The example above, however, didn’t have quite so many impls – why is that?

What we had envisioned to bridge the gap was that we would use a kind of implicit sugar. That is, the impl for Option<T> would effectively be expanded to two impls. One of them, a partial impl, provides the defaults for the variants, and the other, a concrete impl, effectively implements the virtual dispatch, by matching and dispatching to the appropriate variant:

// As originally envisioned, `impl<T> MyTrait for Option<T>`
// would be sugar for the following two impls:

partial impl<T> MyTrait for Option<T> {
    default fn method1() { ... }
    default fn method2() { ... }
    default fn method3();
}

impl<T> MyTrait for Option<T> {
    fn method1(&self) {
        match self {
            this @ &Some(..) => Option::Some::method1(this),
            this @ &None => Option::None::method1(this),
        }
    }
    ... // as above, but for the other methods
}

Similar expansions are needed for inherent impls. You may be wondering why it is that we expand the one impl (for Option<T>) into two impls in the first place. Each plays a distinct role:

  • The partial impl handles the defaults part of the picture. That is, it supplies default impls for the various methods that impls for Some and None can reuse (or override).
  • The impl itself handles the virtual dispatch part of things. We want to ensure that when we call method1() on a variable o of type Option<T>, we invoke the appropriate method1 depending on what variant o actually is at runtime. We do this by matching on o and then delegating to the proper place. If you think about it, this is roughly equivalent to loading a function pointer out of a vtable and dispatching through that, though the performance characteristics are interesting (in a way, it resembles a fully expanded builtin PIC).

Overall, this kind of expansion is a bit subtle. It’d be nice to have a model that did not require it. In fact, in an earlier design, we DID avoid it. We did so by introducing a new shorthand, called match impl. This would basically create the downcasting impl that we added implicitly above, so that the correct pattern would look as follows:

partial impl<T> MyTrait for Option<T> { // <-- this is now partial
    default fn method1() { ... }
    default fn method2() { ... }
    default fn method3();
}

match impl<T> MyTrait for Option<T>; // <-- this is new

impl<T> MyTrait for Option::Some<T> {
    fn method1() { /* overrides the version above */ }
    fn method3() { /* must be implemented */ }
}

impl<T> MyTrait for Option::None<T> {
    fn method2() { /* overrides the version above */ }
    fn method3() { /* must be implemented */ }
}

At first glance, this bears a strong resemblance to how the Thin Trait proposal handled virtual dispatch. In the Thin Trait proposal, we have a partial impl as well, and then concrete impls that override the details. However, there is no match impl in the Thin Trait proposal. It is not needed because, in that proposal, we were implementing the Node trait for the Node type – and in fact the compiler supplies that impl automatically, as part of the object safety notion.

Expression problem, I know thee well—a serviceable villain

But there is another difference between the two examples, and it’s important. In the code I am showing above, there is in fact no connection between MyTrait and Option. That is, under the Extended Enums proposal, I can implement foreign traits and use inheritance to refine the behavior depending on what variant I have. The Thin Traits pattern, however, only works for implementing the main traits (e.g., Node, Element, etc) – and the reason is that you can’t write match impls under the Thin Traits proposal, since the set of types is open-ended. (Instead we lean on the compiler-generated virtual impl of Node for Node, etc.)

What you can do in the Thin Traits proposal is to add methods to the main traits and just delegate to those. So I could do something like:

trait MyTrait {
    fn my_method(&self);
}

trait Node {
    // ... the usual Node methods, plus:
    fn my_trait_my_method(&self);
}

impl MyTrait for Node {
    fn my_method(&self) {
        // delegate to the method in the `Node` trait
        self.my_trait_my_method()
    }
}

Now you can use inheritance to refine the behavior of my_trait_my_method if you like. But note that this only works if the MyTrait trait is defined in the same crate as Node or in some ancestor crate.

The reason for this split is precisely the open-ended nature of the Thin Trait pattern. Or, to give this another name, it is the famous expression problem. With Extensible Enums, we enumerated all the cases, so other, downstream crates can now implement traits against those cases. We’ve fixed the set of cases, but we can extend the set of operations infinitely. In contrast, with Thin Traits, we enumerated the operations (as the contents of the master traits), but we allow downstream crates to implement new cases for those operations.
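
To ground that in something you can compile today, here is a small sketch in ordinary Rust (the Node enum, its variants and the Serialize trait are invented for illustration; none of the proposal's new syntax is used):

// The enum fixes the set of cases, so any crate that can see it may add a
// brand new operation by writing a trait and matching over the variants.
enum Node {
    Text(String),
    Element(Vec<Node>),
}

trait Serialize {
    fn serialize(&self) -> String;
}

impl Serialize for Node {
    fn serialize(&self) -> String {
        match *self {
            Node::Text(ref t) => t.clone(),
            Node::Element(ref children) => children.iter().map(|c| c.serialize()).collect(),
        }
    }
}

// What no downstream crate can do is add a new *variant* to Node. With the
// Thin Traits pattern the trade flips: new cases (types implementing the
// master trait) can appear in any crate, so an exhaustive match over them is
// impossible and new operations have to be funnelled through the master trait.

fn main() {
    let doc = Node::Element(vec![Node::Text("hello".to_string())]);
    println!("{}", doc.serialize());
}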

So method dispatch proves to be pretty interesting:

  • It gives further evidence that Extensible Enums represent a useful entity in their own right.
  • It seems like a case where we may find that the tradeoffs change over time. That is, maybe match impl is not such a bad solution after all, particularly if the Thin Trait pattern is covering some share of the object-like use cases, in which case one of the main bits of magic in the Extensible Enums proposal goes away.


Oh, wait, I already gave it. Well, the most salient points are:

  • Extensible Enums are about a fixed set of cases, open-ended set of operations. Thin Traits are not. This matters.
  • Thin Traits are (almost) a latent pattern in the Extensible Enums proposal, requiring only #[repr(thin)] and struct inheritance.
    • Struct inheritance might be nicer than associated structs anyway.
  • We could consider doing both, and if so, it would probably make sense to implement Specialization, then Thin Traits, and only then consider Extensible Enums.

Air MozillaWeb QA Weekly Meeting

Web QA Weekly Meeting This is our weekly gathering of Mozilla's Web QA team filled with discussion on our current and future projects, ideas, demos, and fun facts.

Air MozillaReps weekly

Reps weekly This is a weekly call with some of the Reps council members to discuss all matters Reps, share best practices and invite Reps to share...

Andreas TolfsenThe case against visibility checks in WebDriver

The WebDriver specification came to life as a blueprint description of Selenium’s behaviour to turn what is the de facto browser automation solution, into a specification that would become a de jure standard. Along the way we have rectified and corrected quirks and oddities in the existing work to make commands cohesive units that form part of a larger, more consistent protocol.

Now that almost all of the formal remote end steps are defined, we are looking closer at the relationship among different commands. A part of this is questioning the command primitives, and a current burning issue is the approximation to visibility.

For looking up and interacting with elements, a series of precondition steps must be met, one of which is that the element is deemed visible. The visibility of an element is said to be guided by what is “perceptually visible to the human eye”. This is a tough mark to hit, since the other web standards we rely on refuse to go anywhere near this topic. What it means for an element to be visible to the user turns out to be extremely difficult to define exhaustively.

Tree traversal and relationships

From Selenium the specification has inherited a long and complex algorithm that gives a crude approximation of the element’s nature and its relationships in the tree. This was further developed to take into account more things that we knew Selenium was missing.

The specification gives a highly non-normative summary of what it means by element visibility:

An element is in general to be considered visible if any part of it is drawn on the canvas within the [boundaries] of the viewport.

Because many HTML features carry special meaning, and because the ancestral relationships between elements affect their visibility, it goes on to describe the steps of an algorithm that traverses the tree up and down, starting from the element whose visibility it is trying to determine.

Practically, it looks at certain computed (resolved) style properties that are generally known to make an element invisible, such as display: none, width, height, &c. The visibility of certain elements, such as an <option> element, depends upon the characteristics of its parent: for these it traverses up the document until it finds the containing <select> element, then applies the same checks there.

Because many HTML elements need special treatment, each has separate rules defined for it. Among the elements with such rules are <map>, <area>, textual nodes, and <img>, but there are many more.

Following explicit hiding rules, if the element has more than a single direct descendant element, its own visibility will depend upon the visibility of its children. The same is true if any of its direct ancestral elements in tree order fail the visibility test, in which case the visibility will cascade, or trickle down, to all child elements.

What we arrive at is a recursive algorithm that, although extremely inefficient, tells us whether a node is visible within the constraints of the tree it is part of. But because it looks solely at the part of the tree the element is in, overlapping elements from other trees are not considered.
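
As a rough illustration of the shape of this check, here is a minimal sketch in Rust (purely illustrative: the real implementation is a JavaScript atom, and the Element type and style fields below are invented stand-ins for computed style lookups):

struct Element {
    display: &'static str,     // computed `display` value
    visibility: &'static str,  // computed `visibility` value
    width: f64,
    height: f64,
}

// An element hides itself if its own computed style collapses it.
fn self_visible(e: &Element) -> bool {
    e.display != "none" && e.visibility != "hidden" && e.width > 0.0 && e.height > 0.0
}

// The element is visible only if it and every ancestor pass the check:
// hiding an ancestor "trickles down" and hides the element too.
fn visible(element: &Element, ancestors: &[Element]) -> bool {
    self_visible(element) && ancestors.iter().all(self_visible)
}

fn main() {
    let hidden_parent = Element { display: "none", visibility: "visible", width: 100.0, height: 20.0 };
    let child = Element { display: "block", visibility: "visible", width: 50.0, height: 10.0 };
    println!("{}", visible(&child, &[hidden_parent])); // prints "false"
}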

The tree-traversal approach also entirely avoids addressing issues around absolute positioning, custom elements, overflow, and device concepts such as initial- and actual viewport. Owing to some of the provisions it makes around how border widths influence a parent element’s effectual dimensions, or preconceived ideas on how input widgets are styled by user agents, it furthermore ascribes meaning, or interpretation, to areas where the web platform is bad at exposing the primitives.


A suggested alternative to tree traversal is a form of hit-testing that involves testing which element is under the cursor, and then doing this for each coordinate of the element’s bounding box that is inside the viewport. This has the upside of avoiding the problems associated with tree traversal altogether, but the downside of being extremely inefficient, and in the worst case it can be as bad as O(n).

It is also complicated by the fact that the element inside the bounding box does not necessarily fill the entire rectangle. This is true if the element is clipped by border-radius, has a degree of rotation, or similar. The primitives offered to us give little guidance for tracking the exact path of an element’s shape.

SVG, canvas, Shadow DOM, and transforms

The tree traversal algorithm is also limited to HTML. Other document types such as SVG and canvas have entirely different visibility primitives, and the idea of implicit ancestor visibility simply makes no sense in these contexts.

Shadow DOM is similarly problematic because it introduces the concept of multiple DOMs. Elements that have a Shadow DOM attached do not expose the same standard DOM traversal and query APIs (such as Node, ParentNode, Element) as regular elements.

They also have the idea of scoped stylesheets, whereby it’s possible to style implementation details with a <style> element that just applies to the local scope. Matching rules are also constrained to the same DOM, meaning style selectors in the host document do not match inside a Shadow DOM.

CSS Transforms in itself isn’t an issue, but since the tree traversal algorithm is not a single atomic action, it does not take into account that the state of the document may change whilst it is executing. It should be possible to work around this issue by evaluating the entire algorithm on a sandboxed snapshot of the tree, but again, this exposes us to non-standardised realms of the web platform.


By this point it should almost go without saying that providing a consistent, future-proof method of ascertaining visibility is futile. Whilst WebDriver does have access to the browser internals required to solve this problem, the solution likely lies outside the scope of specifying a remote control protocol.

As we cannot guarantee that new platform features will be taken into account, the tree-traversal approach offered by Selenium is not a building block for the future. It’s a hacky approach that may work reasonably well given the naïve narrative of the simple and largely static web document world of 10 years ago.

To provide a future-proof notion of naked-eye visibility, there’s a need to push for separate, foundational platform APIs that other standards, in turn, must relate to. WebDriver’s primary role is exposing a privileged remote control interface that enables introspection and control of user agents, not to define element or textual visibility. When such primitives are made available, WebDriver should be an obvious consumer of them.

The good news is that the current Selenium behaviour, given the set of restrictions pointed out, is possible to replicate using existing, public APIs. This will allow consumers to inject automation atoms (chunks of browser-independent JavaScript implementations) to reproduce the behaviour using WebDriver primitives.

The Mozilla BlogProposed Principles for Content Blocking

Content blocking has become a hot issue across the Web and mobile ecosystems. It was already becoming pervasive on desktop, and now Apple’s iOS has made it possible to develop iOS applications whose purpose is to block content. This caused the most recent flurry of activity, concern and focus. We need to pay attention.

Content blocking is not going away – it is now part of our online experience. But the landscape isn’t well understood, making it harder to know how best to advance a healthy, open Web. Users want it – whether to avoid the display of ads, protect against unwanted tracking, improve load speed, or reduce data consumption – and we need to address how we as an industry should respond. We wanted to start by hacking on proposed principles for content blocking. The growing availability and use of content blockers tells us that users want to control their experience.

This is a good thing. But some content blocking could be harmful in ways that may not be obvious. For example, if content blocking creates new gatekeepers who can pick winners and losers in the publishing space or who favor their own content over others’, it ultimately harms competition and innovation. In the long run, users could lose as much control as they gain. The same happens if the commercial model of the Web is not part of the content blocking debate.

In my last post, I conveyed our intention to engage with this landscape, not solely through analysis and research, but also through experimentation, product development, and advocacy.

To help guide our efforts and hopefully inform others, we’ve developed three proposed “content blocking principles” that would help advance the beneficial effects of content blocking while minimizing the risks. We want your help hacking on them. Just as our data privacy principles help guide our data practices, these content blocking principles will help guide what we build and what we support across the industry.

Content is not inherently good or bad – with some notable exceptions, such as malware. So these principles aren’t about what content is OK to block and what isn’t. They speak to how and why content can be blocked, and how the user can be maintained at the center through that process.

At Mozilla, our mission is to ensure a Web that is open and trusted and that puts our users in control. For content blocking, here is what we think that means:

  • Content Neutrality: Content blocking software should focus on addressing potential user needs (such as performance, security, and privacy) instead of blocking specific types of content (such as advertising).
  • Transparency & Control: The content blocking software should provide users with transparency and meaningful controls over the needs it is attempting to address.
  • Openness: Blocking should maintain a level playing field and should block under the same principles regardless of source of the content. Publishers and other content providers should be given ways to participate in an open Web ecosystem, instead of being placed in a permanent penalty box that closes off the Web to their products and services.

Tell us what you think of these proposed principles on your social channels using #contentblocking and join us on Friday October 9 at 11am PT for our #BlockParty, a conversation around the problems and possible solutions to the content blocking question. We look forward to working with our users, our partners and the rest of the Web ecosystem to advance our shared goal of a healthy, open Web.

Air MozillaProduct Coordination Meeting

Product Coordination Meeting This is a weekly status meeting, every Wednesday, that helps coordinate the shipping of our products (across 4 release channels) in order to ensure that...

David BurnsA new Marionette version available for Selenium Users with Java, .NET and Ruby support

If you have been wanting to use Marionette but couldn't because you don't work in Python, now is your chance to do so! Well, if you are a Java, .NET, or Ruby user, you can use it too! All the latest downloads of the Marionette executable are available from our development GitHub repository's releases page. We will be moving this to the Mozilla organization as we get closer to a full release.

There is also a new page on MDN that walks you through the process of setting up Marionette and using it. There are examples for all the language bindings currently supported.

Since you are awesome early adopters, it would be great if you could raise bugs.

I am not expecting everything to work, but below is a quick list of things that I know don't work.

  • No support for self-signed certificates
  • No support for actions
  • No support for the logging endpoint
  • getPageSource not available. This will be added at a later stage; it was a slightly contentious part of the specification.
  • I am sure there are other things we don't remember

Switching of Frames needs to be done with either a WebElement or an index. Windows can only be switched by window handles. This is currently how it has been discussed in the specification.

If in doubt, raise bugs!

Thanks for being an early adopter and thanks for raising bugs as you find them!

Nick CameronRustfmt-ing Rust

Rustfmt is a tool for formatting Rust code. It has seen some rapid and impressive development over the last few months, thanks to some awesome contributions from @marcusklaas, @cassiersg, and several others. It is far from finished, but it is a powerful and useful tool. I would like Rustfmt to be a standard part of every Rustacean's toolkit. In particular, I would like Rustfmt to be used on every check-in of the Rust repo (and other large projects). For this to be possible, running Rustfmt on Rust must work without crashes, without generating poor formatting, and the whole repo must be pre-formatted so that future changes are not polluted with tonnes of formatting churn.

To work towards this, I've been running Rustfmt on some crates and modules. I would love to have some help doing this! It should be fairly easy to do with no experience with Rust or Rustfmt necessary (and you certainly don't need to know about compiler or library implementation). Hopefully you'll learn a fair bit about the Rust source code and Rustfmt in the process. This blog post is all about how to help.

For this blog post, I'll assume you don't know much about the world of Rust and go over the background etc. in a little detail. Feel free to ping me on irc or GitHub (nrc in both places) if you need any help.


The Rust repo

Can be found at It contains the source code for the Rust compiler and the standard library. When people talk about contributing to the Rust project, they often mean this repo. There are also other important repos in the rust-lang org.


Rustfmt is a tool for formatting Rust code. Rustfmt is not finished by a long shot, and there are plenty of bugs, as well as some code it doesn't even try to reformat yet. You can find the rustfmt source at

The problem

We would like to be able to run Rustfmt on the Rust repo. Ideally, we'd like to run it as part of the test suite to make sure it is properly styled. However, there are two problems with this: having not run Rustfmt on it before, there will be a lot of changes the first time it is run; and there are lots of bugs in Rustfmt which we haven't found yet and which prevent us from using it on a project the size of Rust.

The solution

Pick a module or sub-module (best to start small, I wouldn't try whole crates), run Rustfmt on it and inspect the result. If there are problems (or Rustfmt crashes while running), file an issue against Rustfmt. If it succeeds (possibly with some manual fixups), make a PR of the changes and land it!


Running rustfmt

First off, you'll need to build Rustfmt. For that you'll need the source code and an up-to-date version of the nightly compiler (scroll down to the bottom). Then, to build, just run cargo build in the directory where you cloned Rustfmt. You can check it worked by running cargo test.

The most common reasons for a build failing are not using the nightly version of the compiler, or using one which is not new enough - Rustfmt lives on the bleeding edge of Rust development!

You'll also need a clone of the rust repo. Once you have that, pick a module to work on. You'll have to identify the main file for that module which will be or Then to reformat, run something like target/debug/rustfmt --write-mode=overwrite ~/rust/src/librustc_trans/save/ from the directory you installed Rustfmt in (you will need to change the path to reflect the module you want to reformat).

Rustfmt issues

If Rustfmt crashes during formatting, please get a backtrace by re-running with RUST_BACKTRACE=1. File an issue on the Rustfmt repo. If you can make a minimal test case, that is very much appreciated. The crash should tell you which file Rustfmt failed on. My technique for getting a test case is to copy that file to a temporary and cut code until I have the smallest program which still gives the same crash. Note that input to Rustfmt does not have to be valid Rust, it only needs to parse.

If formatting succeeds, then take a look at the diff to see what has been changed. You can use git diff. I find it easier to push to GitHub and look at their diff. If there are places which you think should have been re-formatted, but weren't, or which were formatted poorly, please submit an issue to the Rustfmt repo. Include the diff of the change which you think is poor, or the code which was not reformatted.

Most cases of poor formatting should block landing the changes to the Rust repo. Some cases will be acceptable, but we could do better. In those cases please file an issue and submit a PR too, but leave a reference to the issue with the PR.


Some code will need fixing up after Rustfmt is done. There are two reasons for this: in some places Rustfmt won't reformat the code yet, but it moves around surrounding code in a way which makes this a problem. In other places you might want non-standard formatting, for example if you have a 3x3 array which represents a matrix, you might want this on three lines even though rustfmt can fit it on one line.

For the first case you can manually edit the code. Check that Rustfmt preserves the changed code by running Rustfmt again and checking there are no changes.

For the second case, you can use the #[rustfmt_skip] attribute. This can be placed on functions, modules, and most other items. Again, after fixing up the source code and adding the attribute, check that Rustfmt does not make any further changes (and report as a bug if it does).
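
For example, the 3x3 matrix case mentioned earlier might end up looking like the sketch below (illustrative only: the attribute spelling follows the description above, and depending on your compiler version it may need to sit behind a feature gate or a cfg_attr wrapper):

#[rustfmt_skip]
const IDENTITY: [[f64; 3]; 3] = [
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0], // deliberately kept as one line per matrix row
];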

Rust PRs

And if it all works out, then submit a PR to Rust! Do this from your branch on GitHub. When submitting the PR, you can put r? @nrc in the first PR comment (or any subsequent comment) to ensure I get pinged for review. Or, if you know the files you worked on are frequently reviewed by a particular person, you can use the same method to request review from them. In that case, please cc @nrc so that I can keep track of what is getting formatted.


Some example PRs I've done and the issues filed on Rustfmt:

Mozilla WebDev CommunityUsing peep on Heroku

I recently moved a Django app to Heroku which was using peep, rather than pip, for package installation. By default Heroku will use pip to install your required packages when it sees a requirements.txt file in the root of your project, and the option to use peep instead does not exist.

Luckily one of my colleagues, pmac, was kind enough to create and share a Heroku buildpack which uses peep instead of pip for package installation. All you need to do to make use of this buildpack is issue the following command using the Heroku CLI for your app:

heroku buildpacks:set

Of course, make sure your requirements.txt is peep-compatible.
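
If you haven't seen one, a peep-compatible requirements.txt pins each package and precedes it with a hash comment; the sketch below is only indicative (the package pin is arbitrary and the hash is a placeholder, not a real digest):

# sha256: <digest-of-the-exact-package-archive-you-vetted>
requests==2.7.0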

Happy peeping!

Air MozillaWebdev Extravaganza: October 2015

Webdev Extravaganza: October 2015 Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on.

Patrick McManusBrotli Content-Encoding for Firefox 44

The best way to make data appear to move faster over the Web is to move less of it, and lossless compression has always been a core tenet of good web design.

Sometimes that is done via over the top gzip of text resources (html, js, css), but other times it is accomplished via the compression inherent in the file format of media elements. Modern sites apply gzip to all of their text as a best practice.

Time marches on, and it turns out we can often do a better job than the venerable gzip. Until recently, new formats struggled with matching the decoding rates of gzip, but lately a new contender named brotli has shown impressive results. It has been able to improve on gzip anywhere from 20% to 40% in terms of compression ratios while keeping up on the decoding rate. Have a look at the author's recent comparative results.

The deployed WOFF2 font file format already uses brotli internally.

If all goes well in testing, Firefox 44 (ETA January 2016) will negotiate brotli as a content-encoding for https resources. The negotiation will be done in the usual way via the Accept-Encoding request header and the token "br". Servers that wish to encode a response with brotli can do so by adding "br" to the Content-Encoding response header. Firefox won't decode brotli outside of https - so make sure to use the HTTP content negotiation framework instead of doing user agent sniffing.
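
Concretely, the negotiation is the usual content-coding dance, just with the new token; the exchange below is a sketch with placeholder host and resource names, and remember it only applies over https:

GET /style.css HTTP/1.1
Host: example.com
Accept-Encoding: gzip, deflate, br

HTTP/1.1 200 OK
Content-Encoding: br
Content-Type: text/css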

[edit note - around Oct 6 2015 the token was changed to br from brotli. The token brotli was only ever deployed on nightly builds of firefox 44.]

We expect Chrome will deploy something compatible in the near future.

The brotli format is defined by this document working its way through the IETF process. We will work with the authors to make sure the IANA registry for content codings is updated to reference it.

You can get tools to create brotli compressed content here, and there is a Windows executable (which I can't vouch for) linked here.

Yunier José Sosa VázquezXavier Martorell: “Mozilla gives me humility, respect and free thinking”

It’s interview Monday once again, and it is a pleasure for us to introduce one of the youngest members of our community. We are talking about Xavier Martorell from Spain, or rather Yunito, as we all know him.

Q: What is your name and what do you currently do?

A: My name is Xavier Martorell. I currently live in the Balearic Islands (located off the east coast of Spain), and I am studying Business Administration and Management at the University of the Balearic Islands (UIB), although I should say that next year I am thinking of going abroad for a year to study, though I have not yet decided which country.

Q: How did you find out about Mozilla and the community?

A: Back in the day, a friend of mine who was much more aware than I was of privacy and data protection issues insisted that I use Mozilla Firefox because it was a more secure, free browser that was more respectful of the user’s privacy. I listened to him, read up a little and decided to try it. I have to say I liked it from the very first moment because of its variety of extensions, how smooth it was, and all of Mozilla’s concern for privacy, security and a free Internet. So when one day at school, browsing the Internet, I came across the opportunity to collaborate with Mozilla, I did not hesitate for a second and decided to do it.

Q: What work do you currently do in the community?

A: I am currently responsible for the Quality Assurance area, together with my friend and mentor Gabriela Montagu, and my job is to check that the pre-release versions of the browser work correctly (both on desktop and on Android), as well as to make sure that Firefox OS works properly. I also take part, along with other Mozilla members, in the so-called “Testdays”, which are testing days devoted to some new browser feature, as well as to checking that everything already in Firefox works perfectly. In addition, I am carrying out the project to revamp the outreach page together with Berni, Jusai, Javfox and Miguel Useche, among others.

Q: Most of us know you as Yunito and I, at least, thought that was your name. Could you tell me more about that?

A: It is a really curious story. It comes from the fact that some time ago I used to play GTA San Andreas Multiplayer, and there were servers where you could simulate another life, like the game Second Life. To join, I had to make up a name, and the first name that came to mind was Yunito Betancourt. After a while I got used to people on the Internet calling me Yunito, so I have used that nickname ever since.

Q: That is a very interesting story. So you like video games; do you have any other hobbies?

A: Yes. Leaving aside my studies in Business Administration and Management, as well as the work I do at Mozilla, I would say my hobbies are the study of social psychology and Krav Maga. I like social psychology because it lets me better understand how society behaves, it also helps me improve my verbal and non-verbal communication skills, and it is a way of learning to understand and listen to people better. As for Krav Maga, I decided to try it when I read that it was a defence system used by the Israeli army, created relatively recently (around 50 years ago, considering that Judo is more than 1000 years old...), and also very effective. From the first moment I fell in love with this defence system for its simplicity and effectiveness, and I am still practising it today.

Q: What do you value most, or what is the most positive thing, about Mozilla and the community? What do Mozilla and the community give you?

A: What I value most about Mozilla, as well as its community, is its attitude. I would describe it as a positive, friendly attitude towards everyone, strongly focused on the privacy and security of the users of Mozilla’s different products. The community gives me the opportunity to take part in something really big, where I am able to help millions of Internet users have a browser that better respects their rights. It also gives me a set of values such as humility, respect, free thinking, individual freedom, and so on. Since I have been at Mozilla, I could say my attitude towards the world has gone from being a little self-centred to trying to help as much as I can and caring about others.

Q: What do you think Mozilla will be like in the future?

A: I think that in the future many more people will realise how important privacy is for users and will want to defend it, and that is why many people will come to collaborate with Mozilla and make us a great community that I hope to still be a part of.

Q: A few words for people who would like to join the community.

A: I would tell them that if they care about their privacy, their security on the Internet, or their rights, or if they like Mozilla’s philosophy, they should not hesitate to sign up, because they will be made very welcome here. You do not need technical knowledge to help the community: you can help by spreading Mozilla’s message, by helping new users with the various Mozilla products, and much more.

Many thanks, Xavier “Yunito”, for agreeing to the interview.

Source: Mozilla Hispano

The Mozilla BlogMozilla Boosts Leadership Team With Connected Devices Appointment

Today, we are pleased to announce that Ari Jaaksi will be joining the Mozilla leadership team next month as our new Senior Vice President of Connected Devices.

In this role, Ari will be responsible for Firefox OS and broader exploration of opportunities to advance our mission across the ever-increasing range of connection points of the modern Internet, i.e. phones, TVs, IoT, etc.

His deep understanding of Open Source projects and mobile leadership experience at Intel, HP, and Nokia developing platforms and products make him ideally positioned to lead our Firefox OS and Connected Devices strategy.

Firefox OS is an important part of our mobile strategy, in addition to Firefox for Android and iOS and other initiatives. We believe that building an open, independent alternative to proprietary, single-vendor platforms is critical to the future of a healthy mobile ecosystem. And it is core to our mission to promote openness, innovation and opportunity in online life.

We believe Mozilla’s role in the world is more important today than it has ever been. Issues of digital rights, privacy, online safety and security are real and impact our lives daily. The pace and complexity of online life continues to accelerate from here.

Over the last year we’ve focused on building out our team to complement the vibrant Mozillian community, adding the necessary know-how to continue to bring choice, control and opportunity to everyone on the Web.

Please join me in welcoming Ari to Mozilla!

Ari’s LinkedIn Profile and Bio

Byron Joneshappy bmo push day!

the following changes have been pushed to

  • [1209332] Make the master kick-off bug “Confidential Mozilla Employee Bug” by default
  • [1209971] Suggested reviewers should exclude the current user from the list displayed
  • [1200958] group owners should always be able to view group membership reports for their groups
  • [1196620] support automatic removal of inactive users from groups
  • [1210654] “MozReview Requests” is not shown in the page after submitting a change
  • [1210762] User stories aren’t saved as part of the “remember values as bookmarkable template”
  • [1198519] Add link to bug history page to the top-right drop-down menu
  • [1210246] help link is busted
  • [1210690] Display only commits and only relevant data
  • [1205748] Can’t mark inaccessible bug dependent on a regression it caused
  • [1164063] show a warning near the attachments table for sec-high/sec-crit bugs without sec-approval? on patches
  • [1211750] Changing password with MFA turned on will not work

discuss these changes on

Filed under: bmo, mozilla

Alex ClarkI Reinstalled Again

A while back I wrote about reinstalling OS X. This is another one of those posts.

I like to reinstall OS X, a lot. So much so, you'd think I'd find some way to automate the process. There must be something soothing about it, though, because I keep doing it.

I'm writing this post now because since my last post, I've begun storing snippets on to help automate the process. This way, I get "the best of both worlds":

  • Automation of the tedious parts &
  • Interaction with the fun parts.

Specifically, with El Capitan I've settled on these 4 snippets:





Next, I perform various additional steps manually either because I've not figured out how to automate them or the automation prospects are not attractive:

  • Security & Privacy → Allow apps downloaded from Anywhere
  • Drag /opt to Finder Favorites for easy access to Homebrew Casks, then:
    • Users & Groups → Login items → Jumpcut
  • Keyboard → Shortcuts → Mission Control → Move left a space → ⌘ ←
  • Keyboard → Shortcuts → Mission Control → Move right a space → ⌘ →
  • Dock → Terminal → Keep in Dock
  • Dock → Firefox → Keep in Dock

Still, I'd trade all these steps for full automation if I could find an approach that's not more tedious than cut & pasting the above.

Lastly, I hope this helps someone. Please add a comment below if you have a better approach.

Air MozillaParticipation Demos

Participation Demos Watch the Participation Team share what we've learned and worked on in Q3 2015.

Selena Deckelmann[berlin] TaskCluster Platform: A Year of Development

Back in September, the TaskCluster Platform team held a workweek in Berlin to discuss upcoming feature development, focus on platform stability and monitoring and plan for the coming quarter’s work related to Release Engineering and supporting Firefox Release. These posts are documenting the many discussions we had there.

Jonas kicked off our workweek with a brief look back on the previous year of development.

Prototype to Production

In the last year, TaskCluster went from an idea with a few tasks running to running all of FirefoxOS aka B2G continuous integration, which is about 40 tasks per minute in the current environment.

Architecture-wise, not a lot of major changes were made. We went from CloudAMQP to Pulse (in-house RabbitMQ). And shortly, Pulse itself will be moving its backend to CloudAMQP! We introduced task statuses, and then simplified them.

On the implementation side, however, a lot changed. We added many features and addressed a ton of docker worker bugs. We killed Postgres and added Azure Table Storage. We rewrote the provisioner almost entirely, and moved to ES6. We learned a lot about babel-node.

We introduced the first alternative to the Docker worker, the Generic worker. For the first time, we had Release Engineering create a worker, the Buildbot Bridge.

We have several new users of TaskCluster! Brian Anderson from Rust created a system for testing all Cargo packages for breakage against release versions. We’ve had a number of external contributors create builds for FirefoxOS devices. We’ve had a few Github-based projects jump on taskcluster-github.

Features that go beyond BuildBot

One of the goals of creating TaskCluster was to not just get feature parity, but go beyond and support exciting, transformative features to make developer use of the CI system easier and fun.

Some of the features include:

Features coming in the near future to support Release

Release is a special use case that we need to support in order to take on the Firefox production workload. The focus of development work in Q4 and beyond includes:

  • Secrets handling to support Release and ops workflows. In Q4, we should see go into production and UI for roles-based management.
  • Scheduling support for coalescing, SETA and cache locality. In Q4, we’re focusing on an external data solution to support coalescing and SETA.
  • Private data hosting. In Q4, we’ll be using a roles-based solution to support these.

Dave TownsendDelivering Firefox features faster

Over time Mozilla has been trying to reduce the amount of time between developing a feature and getting it into a user’s hands. Some time ago we would do around one feature release of Firefox every year; more recently we’ve moved to doing one feature release every six weeks. But it still takes at least 12 weeks for a feature to get to users. In some cases we can speed that up by landing new things directly on the beta/aurora branches, but the more we do this, the harder it is for release managers to track the risk of shipping a given release.

The Go Faster project is investigating ways that we can speed up getting changes to users. System add-ons are one piece of this that will let us deliver updates to core Firefox features more often than the regular six week releases. Instead of being embedded in the rest of the code certain features will be developed as standalone system add-ons.

Building features as add-ons gives us more flexibility in how we deliver the features to users. System add-ons will ship in two different ways. First every Firefox release will include a default set of system add-ons. These are the latest versions of the features at the time the Firefox build was produced. Later during runtime Firefox will contact Mozilla’s update servers to ask for the current list of system add-ons. If there are new or updated versions listed Firefox will download and install them giving users access to the newest features without needing to update the entire application.

Building a feature as an add-on gives developers a lot of benefits too. Developers will be able to work on and test new features without doing custom Firefox builds. Users can even try out new features by just installing the add-ons. Once the feature is ready to ship it ships as an add-on with no code changes necessary for integration into Firefox. This is something we’ve attempted to do before with things like Test Pilot and pdf.js, but system add-ons make this process much smoother and reduces the differences between how the feature runs as an add-on and how it runs when shipped in the application.

The basic support for system add-ons is already included in current nightly builds and Firefox 44 should be the first release that we could use to deliver features like this if we choose. If you’re interested in the details you can read the client implementation plan or follow along the tracking bug for the client side of the feature.

Air MozillaMozilla Weekly Project Meeting

Mozilla Weekly Project Meeting The Monday Project Meeting

Selena DeckelmannTaskCluster Platform: 2015Q3 Retrospective

Welcome to TaskCluster Platform’s 2015Q3 Retrospective! I’ve been managing this team this quarter and thought it would be nice to look back on what we’ve done. This report covers what we did for our quarterly goals. I’ve linked to “Publications” at the bottom of this page, and we have a TaskCluster Mozilla Wiki page that’s worth checking out.

High level accomplishments

  • Dramatically improved stability of TaskCluster Platform for Sheriffs by fixing TreeHerder ingestion logic and regexes, adding better logging and fixing bugs in our taskcluster-vcs and mozilla-taskcluster components
  • Created and Deployed CI builds on three major platforms:
    • Added Linux64 (CentOS), Mac OS X cross-compiled builds as Tier2 CI builds
    • Completed and documented a prototype of Windows 2012 builds in AWS and their task configuration
  • Deployed, enabling better security, better support for self-service authorization and easier contributions from outside our team
  • Added region biasing based on cost and availability of spot instances to our AWS provisioner
  • Managed the workload of two interns, and significantly mentored a third
  • Onboarded Selena as a new manager
  • Held a workweek to focus attention on bringing our environment into production support of Release Engineering

Goals, Bugs and Collaborators

We laid out our Q3 goals in this etherpad. Our chosen themes this quarter were:

  • Improve operational excellence — focus on sheriff concerns, data collection,
  • Facilitate self-serve consumption — refactoring auth and supporting roles for scopes, and
  • Exploit opportunities to differentiate from other platforms — support for interactive sessions, docker images as artifacts, github integration and more blogging/docs.

We had 139 Resolved FIXED bugs in TaskCluster product.

Link to graph of resolved bugs

We also resolved 7 bugs in FirefoxOS, TreeHerder and RelEng products/components.

We received significant contributions from other teams: Morgan (mrrrgn) designed, created and deployed taskcluster-github; Ted deployed Mac OS X cross compiled builds; Dustin reworked the Linux TC builds to use CentOS, and resolved 11 bugs related to TaskCluster and Linux builds.

An additional 9 people contributed code to core TaskCluster, in-tree build scripts and task definitions: aus, rwood, rail, mshal, gerard-majax, htsai, cmanchester, and echen.

The Big Picture: TaskCluster integration into Platform Operations

Moving from B2G to Platform was a big shift. The team had already made a goal of enabling Firefox Release builds, but it wasn’t entirely clear how to accomplish that. We spent a lot of this quarter learning things from RelEng and prioritizing. The whole team spent the majority of our time supporting others’ use of TaskCluster through training and support, developing task configurations and resolving infrastructure problems. At the same time, we shipped docker-worker features, provisioner biasing and a new authorization system. One tricky infra issue that John and Jonas worked on early in the quarter was a strange AWS Provisioner failure that came down to an obscure missing dependency. We had a few git-related tree closures that Greg worked closely on and ultimately committed fixes to taskcluster-vcs to help resolve. Everyone spent a lot of time responding to bugs filed by the sheriffs and requests for help on IRC.

It’s hard to overstate how important the Sheriff relationship and TreeHerder work was. A couple teams had the impression that TaskCluster itself was unstable. Fixing this was a joint effort across TreeHerder, Sheriffs and TaskCluster teams.

When we finished, useful errors were finally being reported by tasks and starring became much more specific and actionable. We may have received a partial compliment on this from philor. The extent of artifact upload retries, for example, was made much clearer and we’ve prioritized fixing this in early Q4.

Both Greg and Jonas spent many weeks meeting with Ed and Cam, designing systems, fixing issues in TaskCluster components and contributing code back to TreeHerder. These meetings also led to Jonas and Cam collaborating more on API and data design, and this work is ongoing.

We had our own “intern” who was hired on as a contractor for the summer, Edgar Chen. He did some work with the docker-worker, implementing Interactive Sessions, and did analysis on our provisioner/worker efficiency. We made him give a short, sweet presentation on the interactive sessions. Edgar is now at CMU for his sophomore year and has referred at least one friend back to Mozilla to apply for an internship next summer.

Pete completed a Windows 2012 prototype build of Firefox that’s available from Try, with documentation and a completely automated process for creating AMIs. He hasn’t created a narrated video with dueling, British-English accented robot voices for this build yet.

We also invested a great deal of time in the RelEng interns. Jonas and Greg worked with Anhad on getting him productive with TaskCluster. When Anthony arrived, we also onboarded him. Jonas worked closely to get him working on a new project. To take these two bits of work from RelEng on, I pushed TaskCluster’s roadmap for generic-worker features back a quarter and Jonas pushed his stretch goal of getting the big graph scheduler into production to Q4.

We worked a great deal with other teams this quarter on taskcluster-github, supporting new Firefox and B2G builds, RRAs for the workers and generally telling Mozilla about TaskCluster.

Finally, we spent a significant amount of time interviewing, and then creating a more formal interview process that includes a coding challenge and structured-interview type questions. This is still in flux, but the first two portions are being used and refined currently. Jonas, Greg and Pete spent many hours interviewing candidates.

Berlin Work Week

TaskCluster Platform Team in Berlin

Toward the end of the quarter, we held a workweek in Berlin to focus our next round of work on critical RelEng and Release-specific features as well as production monitoring planning. Dustin surprised us with delightful laser cut acrylic versions of the TaskCluster logo for the team! All team members reported that they benefited from being in one room to discuss key designs, get immediate code review, and demonstrate work in progress.

We came out of this with 20+ detailed documents from our conversations, greater alignment on the priorities for Platform Operations and a plan for trainings and tutorials to give at Orlando. Dustin followed this up with a series of ‘TC Topics’ Vidyo sessions targeted mostly at RelEng.

Our Q4 roadmap is focused on key RelEng features to support Release.


Our team published a few blog posts and videos this quarter:

Yunier José Sosa VázquezHow to: Change your default browser in Windows 10

The arrival of the latest version of Windows caused quite a stir among users, who found that their default browser had been changed. For its part, Mozilla reacted, and its CEO Chris Beard sent a letter to his Microsoft counterpart Satya Nadella asking them not to take a step backwards on user choice and control.

Current versions of the red panda should take care of this, but if for some reason it doesn’t happen and you would like to get Firefox (or another browser) back as your default, I recommend you follow these steps:

  1. Click the menu button, then select Options.
  2. In the General panel, click Make Default.
  3. The Windows Settings app will open the Choose default apps screen.
  4. Scroll down and click the Web browser entry. The icon will show either Microsoft Edge or Choose your default browser.
  5. On the Choose an app screen, click Firefox to set it as the default browser.
  6. Firefox now appears as your default browser. Close the window to save your changes.

And that’s it: Firefox is back as your default and preferred browser.

Source: Mozilla Support

The Servo BlogThis Week In Servo 36

In the last week, we landed 69 PRs in the Servo repository!

Glenn wrote a short report on how webrender is coming along. Webrender is a new renderer for Servo which is specialized for web content. The initial results are quite promising!

Notable additions

New Contributors


Snazzy new form widgets:


At last week’s meeting, we discussed webrender, and pulling app units out into a separate crate.

Daniel StenbergTalked HTTP/2 at ApacheCon

I was invited as one of the speakers at the ApacheCon core conference in Budapest, Hungary on October 1-2, 2015.


I was once again spreading the news about HTTP/2, why it was made and how it works and of course: updated numbers on adoption right now.

The talk was unfortunately not filmed, but I’ve put my slides for this version of my talk online. Readers of this blog and those who’ve seen my presentations before will recognize large parts of it.

Following my talk was talks about mod_http2, the Apache module for HTTP/2 that will be coming in the upcoming 2.4.17 release of Apache Httpd, explained by its author Stefan Eissing. The name of the module was actually a bit of a surprise to me since it has been known as just mod_h2 for its entire life time up until now.

William A Rowe took us through the state of TLS for the main Apache servers and yeah, the state seems to be pretty good and they’re coming along really well. TLS, and then HTTPS, is important as that’s really a prerequisite for HTTP/2.

I also got to listen to Mark Thomas explain the agonies of making Tomcat support HTTP/2, and then perhaps especially how ALPN and a good set of ciphers are hard to get in Java.

Jean-Frederic Clere then explained how to activate HTTP/2 on all the Apache servers (Tomcat, httpd and Traffic Server) and a little about their HTTP/2 state, followed by an explanation of how they worked on Tomcat to make it use OpenSSL for the TLS layer (including ALPN) to avoid the deadlock of decent TLS support in Java.

All in all, a great track and splendid talks with deep technical content. Exactly the way I like it. Thanks everyone. Apachecon certainly delivered for me! Twas fun.

Julien VehentSHA1/SHA256 certificate switching with HAProxy

SHA-1 certificates are on their way out, and you should upgrade to a SHA-256 certificate as soon as possible... unless you have very old clients and must maintain SHA-1 compatibility for a while.

If you are in this situation, you need to either force your clients to upgrade (difficult) or implement some form of certificate selection logic: we call that "cert switching".

The most deterministic selection method is to serve SHA-256 certificates to clients that present a TLS1.2 CLIENT HELLO that explicitly announces their support for SHA256-RSA (0x0401) in the signature_algorithms extension.


Modern web browsers will send this extension. However, I am not aware of any open source load balancer that is currently able to inspect the content of the signature_algorithms extension. It may come in the future, but for now the easiest way to achieve cert switching is to use HAProxy SNI ACLs: if a client presents the SNI extension, direct it to a backend that presents a SHA-256 certificate. If it doesn't present the extension, assume that it's an old client that speaks SSLv3 or some broken version of TLS, and present it a SHA-1 cert.

This can be achieved in HAProxy by chaining frontend and backends:



frontend https-in
        mode tcp
        tcp-request inspect-delay 5s
        tcp-request content accept if { req_ssl_hello_type 1 }
        use_backend jve_https if { req.ssl_sni -i }

        # fallback to backward compatible sha1
        default_backend jve_https_sha1

backend jve_https
        mode tcp
        server jve_https
frontend jve_https
        bind ssl no-sslv3 no-tlsv10 crt /etc/haproxy/certs/jve_sha256.pem tfo
        mode http
        option forwardfor
        use_backend jve

backend jve_https_sha1
        mode tcp
        server jve_https
frontend jve_https_sha1
        mode http
        option forwardfor
        use_backend jve

backend jve
        rspadd Strict-Transport-Security:\ max-age=15768000
        server jve maxconn 128

The configuration above receives inbound traffic in the frontend called "https-in". That frontend is in TCP mode and inspects the CLIENT HELLO coming from the client for the value of the SNI extension. If that value exists and matches our target site, it sends the connection to the backend named "jve_https", which redirects to a frontend also named "jve_https" where the SHA256 certificate is configured and served to the client.

If the client fails to present a CLIENT HELLO with SNI, or presents a SNI that doesn't match our target site, it is redirected to the "jve_https_sha1" backend, then to its corresponding frontend where a SHA-1 certificate is served. That frontend also supports an older ciphersuite to accommodate older clients.

Both frontends eventually redirect to a single backend named "jve" which sends traffic to the destination web servers.

This is a very simple configuration, and eventually it could be improved using better ACLs (HAProxy regularly adds new ones), but for a basic cert switching configuration, it gets the job done!
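
One way to sanity-check a setup like this (example.net stands in for your own hostname) is to compare the certificate served with and without SNI using openssl s_client:

# With SNI: the jve_https frontend should hand out the SHA-256 certificate
openssl s_client -connect example.net:443 -servername example.net </dev/null 2>/dev/null \
    | openssl x509 -noout -text | grep 'Signature Algorithm'

# Without SNI: the jve_https_sha1 fallback should hand out the SHA-1 certificate
openssl s_client -connect example.net:443 </dev/null 2>/dev/null \
    | openssl x509 -noout -text | grep 'Signature Algorithm'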

Julian SewardDr Memory: a memory-checking tool for Windows

Valgrind’s Memcheck tool works on Linux and MacOS, but not on Windows. Interestingly, there is something like it for Windows: “Dr Memory”.  Similar in style to Memcheck, Dr Memory is an open source memory checking tool built on top of a JIT-based instrumentation framework called DynamoRIO. It provides essentially identical functionality: detection of invalid memory accesses, uninitialised value uses and memory leaks. Dr Memory claims to be considerably faster than Memcheck, so I was curious to see how it performed.

I recently tried Dr Memory 1.9.0-RC1 on Windows 7, running 32-bit Firefox builds, to see to what extent it can provide coverage for the Windows-specific parts of Gecko.

Installing and getting started isn’t difficult. There are command line flags to direct the output, control the level of instrumentation, specify files listing errors to hide, and so on. As you’d expect.

Despite considerable efforts with Dr Memory, I came away feeling it was a promising tool, but just a bit too hard to use. I encountered two kinds of problems.

Firstly, about half of my Firefox startups ended up spinning. Some of the time, Firefox would start (slowly, of course) and be usable after a couple of minutes. Other runs would spin for an hour or more and still not produce a usable browser. I never figured out why. This seems to be related to the instrumentation, because if I run Firefox uninstrumented on the DynamoRIO core, like Valgrind’s –tool=none, it works reliably.

A second problem was the considerable number of uninitialised memory read errors. I tried out both non-optimised (“/Zi /Od”) and optimised (“/Zi /O2 /Oy- /Ob0”) builds of Firefox.

For the non-optimised builds, Dr Memory reports no invalid accesses and a few uninitialised memory reads, which is what I’d expect. But it’s unusably slow, because the unoptimised build lacks reasonable register allocation, which easily doubles the number of memory accesses that have to be checked.

So my next step was to try an optimised build. This runs a great deal faster. There’s a down side, though: the number of uninitialised memory accesses goes way up. Most of these must be false positives, because they weren’t reported in the unoptimised runs.

I investigated further. It is likely that one source of false positives is Dr Memory’s incomplete description of the Windows system call interface. Valgrind’s description of the Linux syscall interface is itself complex, and it is said that the Windows interface makes the Linux interface look simple. Given that, I’m impressed that Dr Memory works as well as it does.

The other source of false positives appears to be bitfields. Dr Memory tracks the definedness state of each byte of memory using one bit for each byte. Consequently it has no way to accurately model partially initialised bytes, and so must unavoidably either report false positives, or miss real errors, depending on which of the two available shadow states partially initialised bytes are mapped to.

One way to detect probable false-positive bitfield errors in cross platform Gecko code is to check whether Memcheck reports errors at the same places. In many cases it doesn’t. I created a suppressions file, which tells Dr Memory to hide errors I identified as clearly false. A second line of defense is to add extra initialisation code for bitfields purely in order to keep Dr Memory happy. Neither of these are really what one wants to do, though.

The false positive problem seriously compromises Dr Memory’s usefulness on optimised Gecko code, compared to Memcheck. The effect is to create a lot more undefined value errors needing investigation. The situation is exacerbated because Dr Memory doesn’t have an equivalent to Memcheck’s origin-tracking feature, which makes it more difficult to analyse the errors and to determine where, if anywhere, dummy initialisations should be placed.

Dr Memory does have a “light” mode, which restricts it to invalid-address and leak checking only. This increases usability at the expense of losing undefined value checking. If you’re looking for possible heap corruption on Windows, this would be worth a try.

QMOFirefox 42.0 Beta 3 Testday Results

Hello Mozillians!

As you may already know, last Friday – October 2nd – we held a new Testday event, for Firefox 42 Beta 3.

I must admit that this testday was by far one of our most successful events. Besides having a large number of participants, we also had an impressive number of verified bugs. Congratulations to all participants!


We’d like to take this opportunity to thank Bolaram Paul, Mohammed Adam, Ionce Stelian, Ruwan Ranganath, Jayesh KR, Arshad Abid, Moin Shaikh, Syed Muhammad Mahmudul Haque (Yamin), Nischaytv, Jyotsna Gupta, PreethiDhinesh, Kevin Le, Gunjan Tank and the people from our Bangladesh Community: Hossain Al Ikram, Khalid Syfullah Zaman, Ashickur Rahman, Md. Asiful Kabir, Rezaul Huque Nayeem, Kazi Nuzhat Tasnem, Nazir Ahmed Sabbir, Saheda Reza Antora, Md. Ehsanul Hassan, Mohammad Maruf Islam, Sayed Mohammad Amir, Meraj Kazi, Forhad Hossain, T.M. Sazzad Hossain and Towkir Ahmed for getting involved in this event and making Firefox the best it can be.

Also a big thank you goes to all our active moderators.

Keep an eye on QMO for upcoming events!

Andy McKayAm I better off?

It seems to be a common theme in elections for the incumbent party to ask the question, "Are you better off?". The Conservatives have been in power now since the 2006 election.

Financially? I'm better off, due to having a decent paying job in a company not based in Canada. The economy is moving along despite a Conservative government running multiple deficits in a row. Really, I don't place too much faith in the Government to do a huge amount for the economy, no matter what the Conservative attack adverts say.

Privacy? Much worse off. The Government has introduced multiple bills to reduce our privacy, none worse than bill C-51, which greatly increases the Government's spying powers and reduces the amount of oversight.

Rights? Much worse off. The Government introduced bill C-24, which means that my citizenship (and that of most of my friends and colleagues) can now be revoked. We are all second class citizens and not "old stock canadians". The reduction of our charter of rights and freedoms is breathtaking.

Environment? We are the only country in the world to withdraw from Kyoto, leaving Canada isolated on the world stage. There's the oil sands turning Alberta into Mordor, there have been major oil spills, the tragic Lac-Mégantic incident, the proposed pipelines and so on. And of course environmentalists are terrorists.

Science? The census was gutted and decisions are not made on data, but ideology.

Right now? I am worse off. I have much less security and privacy than before. So no, I won't be voting Conservative and I'm not sure why anyone would.

This Week In RustThis Week in Rust 99

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us an email! Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

This week's edition was edited by: nasa42, brson, and llogiq.

Updates from Rust Community

News & Blog Posts

Notable New Crates & Projects

  • Redox. A Rust Operating System.
  • Webrender. An experimental renderer for Servo that aims to draw web content like a modern game engine.
  • Coroutine I/O. Coroutine scheduling with work-stealing algorithm.
  • Rustation. PlayStation emulator in Rust.

Updates from Rust Core

102 pull requests were merged in the last week.

See the subteam report for 2015-10-02 for details.

Notable changes

New Contributors

  • Andreas Sommer
  • Dato Simó
  • James Bell
  • Jethro Beekman
  • Seeker14491
  • Ted Mielczarek
  • Will Speak
  • Willy Aguirre

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week!

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

fn work(on: RustProject) -> Money

No jobs listed for this week. Tweet us at @ThisWeekInRust to get your job offers listed here!

Crate of the Week

This week, Crate of the Week is Itertools. Thanks go to llogiq for the suggestion. In his own words:

So today I'll write about Itertools. Because iterators in Rust are awesome, and this crate makes them even awesome-r. If you want to do something with iterators that seems to be slightly impossible using the std APIs, chances are Itertools already implements a way that is both fast and elegant. Knowing your itertools APIs will level up your Rust-fu.

For a (very small and simple) example, haven't you ever wished to zip two iterators without stopping after the shorter one runs out? With Itertools you can just say x.zip_longest(y) and get an iterator of EitherOrBoth<X, Y>.

Quote of the Week

In programming (as opposed to politics), safety=freedom. (llogiq on /r/rust)

Thanks to birkenfeld for the tip. Submit your quotes for next week!

Mike ConleyThe Joy of Coding – Ep’s 23 – 29

Wow! I’ve been away from this blog for too long. I also haven’t posted any new episodes of The Joy of Coding, and I haven’t been keeping up with my Things I’ve Learned posts.

Time to get back in the saddle. First things first, here are 6 episodes of The Joy of Coding that have aired. Unfortunately, I haven’t put together summaries for any of them, but I’ve put their agendas near the videos, so that might give some clues.

Here we go!

Episode 23


Episode 24


Episode 25


Episode 26


Episode 27


Episode 28


Episode 29


Yunier José Sosa VázquezVI Taller Internacional de Tecnologías de Software Libre y Código Abierto

As part of the XVI Convención y Feria Internacional Informática Habana 2016, to be held in our capital from March 14 to 18, 2016, the VI Taller Internacional de Tecnologías de Software Libre y Código Abierto (6th International Workshop on Free and Open Source Software Technologies) will take place. It is an event where people can debate different topics, such as the adoption of free and open source software technologies, the development and customization of operating systems, and economic, legal and social aspects.

The free and open source software workshop will cover the following topics:

  1. Adoption of free and open source software technologies: experiences in leading and carrying out migrations to free and open source applications. Deployment of free technologies in the public and private sectors. Maturity models for selecting free technologies. The role of free software in sustainable development.
  2. Building / customizing operating systems based on open source: development of custom free operating systems for desktop computers, servers and mobile phones. Embedded free software. Building GNU/Linux distributions.
  3. Economic, legal and social aspects: economic feasibility studies for the adoption of free software. Applicable business models. The social impact of free software. Intellectual property, copyright and licensing of free technologies. Regulatory frameworks for the use of free software.
  4. Free software technologies: free software for the cloud, security in free technologies. Open standards in solution development. OpenData. OpenCloud. Open source data management technologies. Free software as a service, open source enterprise applications. Open source mobile technologies.

The event's website also provides useful information for people who wish to participate, including:

Important Dates

  • Submission of abstracts and papers: October 20, 2015
  • Notification of acceptance: November 20, 2015
  • Submission of the final paper for publication: December 7, 2015

Writing Guidelines

Papers should address the main topics of the different events taking place as part of the Convention, and must follow these specifications:

  • They must be submitted in files compatible with open document formats
  • Limit of 10 pages
  • Letter-size pages (8.5” x 11” or 21.59 cm x 27.94 cm), with 2 cm margins on each side, written in two columns
  • Arial typeface, 11 points for headings and 10 points for body text, with single line spacing
  • Written in one of the languages of the event (Spanish or English)

Paper structure:

  • Title
  • Title (in English)
  • Author and co-authors
  • Affiliation and contact details
  • Abstract and keywords (in Spanish and English)
  • Introduction
  • Content
  • Conclusions
  • Acknowledgements (optional)
  • Bibliographic references

To make it easier to prepare your paper according to the Convention's specifications, download the template (.doc) and replace its text with your own.

You must submit your paper through the Plataforma para la Gestión de Ponencias (the paper management platform), and remember to also attach a short CV of the main author. Presenters must deliver the slides for papers to be presented at the in-person event to the Audiovisual Media Reception Office at the Palacio de Convenciones one day before their session. The Convention's talks, presentations and other materials will be published on a compact disc with its own ISBN.

Luis VillaSoftware that liberates people: feels about FSF@30 and OSFeels@1

tl;dr: I want to liberate people; software is a (critical) tool to that end. There is a conference this weekend that understands that, but I worry it isn’t FSF’s.

Feelings are facts, CC BY 2.0

This morning, social network chatter reminded me of FSF’s 30th birthday celebration. These travel messages were from friends for whom I have a great deal of love and respect, friends who represent a movement to which I essentially owe my adult life.

Despite that, I had lots of mixed feels about the event. I had a hard time capturing why, though.

While I was still processing these feelings, late tonight, Twitter reminded me of a new conference also going on this weekend, appropriately called Open Source and Feelings. (I badly wanted to submit a talk for it, but a prior commitment kept me from both it and FSF@30.)

I saw the OSFeels agenda for the first time tonight. It includes:

  • Design and empathy (learning to build open software that empowers all users, not just the technically sophisticated)
  • Inclusive development (multiple talks about this, including non-English, family, and people of color) (so that the whole planet can access, and participate in developing, open software)
  • Documentation (so that users understand open software)
  • Communications skills (so that people feel welcome and engaged to help develop open software)

This is an agenda focused on liberating human beings by developing software that serves their needs, and engaging them in the creation of that software. That is incredibly exciting. I’ve long thought (following Sen and Nussbaum’s capability approach) that it is not sufficient to free people; they must be empowered to actually enjoy the benefits of that freedom. This is a conference that seems to get that, and I can’t wait to go (and hopefully speak!) next year.

The Free Software Foundation event’s agenda:

  • licenses
  • crypto
  • boot firmware
  • federation

These are important topics. But there is clearly a difference in focus here — technology first, not people. No mention of community, or of design.

This difference in focus is where this morning’s conflicted feels came from. On the one hand, I support FSF, because they’ve done an incredible amount to make the world a better place. (OSFeels can take open development for granted precisely because FSF fought so many battles about source code.) But precisely because I support FSF, I’d challenge it, in the next 15 years, to become more clearly and forcefully dedicated to liberating people. In this world, FSF would talk about design, accessibility, and inclusion as much as licensing, and talk about community-building protocols as much as communication protocols. This is not impossible: LibrePlanet had at least some people-focused talks (e.g.), and inclusion and accessibility are a genuine concern of staff, even if they didn’t rise to today’s agenda. But it would still be a big change, because at the deepest level, it would require FSF to see source code as just one of many requirements for freedom, rather than “the point of free software“.

At the same time, OSFeels is clearly filled with people who see the world through a broad, thoughtful ethical lens. It is a sad sign, both for FSF and how it is perceived, that such a group uses the deliberately apolitical language of openness rather than the language of a (hopefully) aligned ethical movement — free software. I’ll look forward to the day (maybe FSF’s 45th (or 31st!) birthday) that both groups can speak and work together about their real shared concern: software that liberates people. I’d certainly have no conflicted feelings about signing up for a conference on that :)

David HumphreyHow to become a Fool Stack Programmer

At least once in your career as a programmer, and hopefully more than once and with deliberate regularity, it is important to leave the comfort of your usual place along the stack and travel up or down it. While you usually fix bugs and add features using a particular application, tool, or API and work on top of some platform, SDK, or operating system, this time you choose to climb down a rung and work below. In doing so you are working on instead of with something.

There are a number of important outcomes of changing levels. First, what you previously took for granted, and simply used as such, suddenly comes into view as a thing unto itself. This layer that you've been standing on, the one that felt so solid, turns out to also have been built, just like the things you build from above! It sounds obvious, but in my experience it drastically changes your approach from this point forward. Second, and very much related to the first, you become less likely to lash out when you encounter bugs or performance and implementation issues. You gain empathy and understanding.

If all you ever do is use an implementation, tool, or API, and never build or maintain one, it's easy to take them for granted, and speak about them in detached ways: Why would anyone do it this way? Why is this so slow? Why won't they fix this bug after 12 years! And then, Why are they so stupid as to have done this thing?

Now, the same works in the other direction, too. Even though there are more people operating at the higher level, and therefore more who need to descend to do what I'm talking about, those underneath must also venture above ground. If all you've ever done is implement things, and never used such implementations to build real things, you're just as guilty, and equally, if not more, likely to snipe and complain about the people above you, who clearly don't understand how things really work. It's tantalizingly easy to dismiss people who haven't worked at your level: it's absolutely true that most of them don't understand your work or point of view. The way around this problem is not to wait and hope that they will come to understand you, but to go yourself, and understand them.

Twitter, HN, reddit, etc. are full of people at both levels making generalizations, lobbing frustration and anger at one another, and assuming that their level is the only one that actually matters (or exists). Fixing this problem globally will never happen; but you can do something at the personal level.

None of us enjoys looking foolish or revealing our own ignorance. And one of the best ways to avoid both is to only work on what we know. What I'm suggesting is that you purposefully move up or down the stack and work on code, and with tools, people, and processes that you don't know. I'm suggesting that you become a Fool, at least in so far as you allow yourself to be humbled by this other world, with its new terminology, constraints, and problems. By doing this you will find that your ability to so easily dismiss the problems of the other level will be greatly reduced. Their problems will become your problems, and their concerns your concerns. You will know that you've correctly done what I'm suggesting when you start noticing yourself referring to "our bug" and "how we do this" instead of "their" and "they."

Becoming a Fool Stack Programmer is not about becoming an expert at every level of the stack. Rather, its goal is to erase the boundary between the levels such that you can reach up or down in order to offer help, even if that help is only to offer a kind word of encouragement when the problem is particularly hard: these are our problems, after all.

I'm grateful to this guy who first taught me this lesson, and encouraged me to always keep moving up the stack.

Jan de MooijBye Wordpress, hello Jekyll!

This week I migrated this blog from Wordpress to Jekyll, a popular static site generator. This post explains why and how I did this, maybe it will be useful to someone.


Wordpress powers thousands of websites, is regularly updated, and has a ton of features. Why did I abandon it?

I still think Wordpress is great for a lot of websites and blogs, but I felt it was overkill for my simple website. It had so many features I never used and this came at a price: it was hard to understand how everything worked, it was hard to make changes and it required regular security updates.

This is what I like most about Jekyll compared to Wordpress:

  • Maintenance, security: I don't blog often, yet I still had to update Wordpress every few weeks or months. Even though the process is pretty straight-forward, it got cumbersome after a while.
  • Setup: Setting up a local Wordpress instance with the same content and configuration was annoying. I never bothered so the little development I did was directly on the webserver. This didn't feel very good or safe. Now I just have to install Jekyll, clone my repository and generate plain HTML files. No database to setup. No webserver to install (Jekyll comes with a little webserver, see below).
  • Transparency: With Wordpress, the blog posts were stored somewhere in a MySQL database. With Jekyll, I have Markdown files in a Git repository. This makes it trivial to backup, view diffs, etc.
  • Customizability: After I started using Jekyll, customizing this blog (see below) was very straight-forward. It took me less than a few hours. With Wordpress I'm sure it'd have taken longer and I'd have introduced a few security bugs in the process.
  • Performance: The website is just some static HTML files, so it's fast. Also, when writing a blog post, I like to preview it after writing a paragraph or so. With Wordpress it was always a bit tedious to wait for the website to save the blog post and reload the page. With Jekyll, I save the markdown file in my text editor and, in the background, jekyll serve immediately updates the site, so I can just refresh the page in the browser. Everything runs locally.
  • Hosting: In the future I may move this blog to GitHub Pages or another free/cheaper host.

Why Jekyll?

I went with Jekyll because it's widely used, so there's a lot of documentation and it'll likely still be around in a year or two. Octopress is also popular but under the hood it's just Jekyll with some plugins and changes, and it seems to be updated less frequently.


I decided to use the default template and customize it where needed. I made the following changes:

  • Links to previous/next post at the end of each post, see post.html
  • Pagination on the homepage, based on the docs. I also changed the home page to include the contents instead of just the post title.
  • Archive page, a list of posts grouped by year, see archive.html
  • Category pages. I wrote a small plugin to generate a page + feed for each category. This is based on the example in the plugin documentation. See _plugins/category-generator.rb and _layouts/category.html
  • List of categories in the header of each post (with a link to the category page), see post.html
  • Disqus comments and number of comments in the header of each post, based on the docs, see post.html. I was able to export the Wordpress comments to Disqus.
  • In _config.yml I changed the post URL format ("permalink" option) to not include the category names. This way links to my posts still work.
  • Some minor tweaks here and there.

I still want to change the code highlighting style, but that can wait for now.


After using Jekyll for a few hours, I'm a big fan. It's simple, it's fun, it's powerful. If you're tired of Wordpress and Blogger, or just want to experiment with something else, I highly recommend giving it a try.

Mozilla Addons BlogAdd-on Compatibility for Firefox 42

Firefox 42 will be released on November 3rd. Here’s the list of changes that went into this version that can affect add-on compatibility. There is more information available in Firefox 42 for Developers, so you should also give it a look.




Please let me know in the comments if there’s anything missing or incorrect on these lists. If your add-on breaks on Firefox 42, I’d like to know.

The automatic compatibility validation and upgrade for add-ons on AMO will happen in the coming weeks, so keep an eye on your email if you have an add-on listed on our site with its compatibility set to Firefox 41.

Joel MaherHacking on a defined length contribution program

Contribution takes many forms where each person has different reasons to contribute or help people contribute.  One problem we saw a need to fix was when a new contributor came to Mozilla and picked up a “good first bug”, then upon completion was left not knowing what to do next and picking up other random bugs.  The essential problem is that we had no clear path defined for someone to start making more substantial improvements to our projects.  This can easily lead to a lack of clear mentorship as well as a lot of wasted time setting up and learning new things.  In response to this, we decided to launch the Summer of Contribution program.

Back in May we announced two projects to pilot this new program: perfherder and developer experience.  In the announcement we asked that interested hackers commit to dedicating 5-10 hours/week for 8 weeks to one of these projects. In return, we would act as a dedicated mentor and do our best to ensure success.

I want to outline how the program was structured, what worked well, and what we want to do differently next time.

Program Structure

The program worked well enough, with some improvising, here is what we started with:

  • we created a set of bugs that would be good to get new contributors started and working for a few weeks
  • anybody could express interest via email/irc, we envisioned taking 2-3 participants based on what we thought we could handle as mentors.

That was it, we improvised a little by doing:

  • accepting more than 2-3 people to start (4-6)- we had a problem saying no
  • folks got ramped up and just kept working (there was no official start date)
  • blogging about who was involved and what they would be doing (intro to the perfherder team, intro to the dx team)
  • setting up communication channels with contributors like etherpad, email, wunderlist, bugzilla, irc
  • setting up regular meetings with contributors
  • picking an end date
  • summarizing the program (wlach’s perfherder post, jmaher’s dx post)

What worked well

A lot worked very well, specifically advertising by blog post and newsgroup post and then setting the expectation of a longer contribution cycle rather than a couple weeks.  Both :wlach and myself have had a good history of onboarding contributors, and feel that being patient, responding quickly, communicating effectively and regularly, and treating contributors as team members goes a long way.  Onboarding is easier if you spend the time to create docs for setup (we have the ateam bootcamp).  Without mentors being ready to onboard, there is no chance for making a program like this work.

Setting aside a pile of bugs to work on was successful.  The first contribution is hard as there is so much time required for setup, so many tools and terms to get familiar with, and a lot of process to learn.  After the first bug is completed, what comes next?  Assuming it was enjoyable, one of two paths usually take place:

  • Ask what is next to the person that reviewed your code or was nice to you on IRC
  • Find another bug and ask to work on it

Both of these are OK models, but there is a trap where you could end up with a bug that is hard to fix, not well defined, outdated/irrelevant, or requires a lot of new learning/setup.  This trap is something to avoid where we can build on the experience of the first bug and work on the same feature but on a bug that is a bit more challenging.

A few more thoughts on the predefined set of bugs to get started:

  • These should not be easily discoverable as “good first bugs”, because we want people who are committed to this program to work on them, rather than people just looking for an easy way to get involved.
  • They should all have a tracking bug, tag, or other method for easily seeing the entire pool of bugs
  • All bugs should be important to have fixed, but not urgent: think about “we would like to fix this later this quarter or next quarter”.  If we do not have some form of urgency around getting the bugs fixed, our eagerness to help out with mentoring and reviewing will be low.  A lot of times while working on a feature there are followup bugs; those are good candidates!
  • There should be an equal amount (5-10) of starter bugs, next bugs, and other bugs
  • Keep in mind this is a starter list, imagine 2-3 contributors hacking on this for a month, they will be able to complete them all.
  • This list can grow as the project continues

Another thing that worked is we tried to work in public channels (irc, bugzilla) as much as possible, instead of always private messaging or communicating by email. Also communicating to other team members and users of the tools that there are new team members for the next few months. This really helped the contributors see the value of the work they are doing while introducing them to a larger software team.

Blog posts were successful at communicating and helping keep things public while giving more exposure to the newer members on the team.  One thing I like to do is ensure a contributor has a Mozillians profile as well as links to other discoverable things (bugzilla id, irc nick, github id, twitter, etc.) and some information about why they are participating.  In addition to this, we also highlighted achievements in the fortnightly Engineering Productivity meeting and any other newsgroup postings we were doing.

Lastly I would like to point out a dedicated mentor was successful.  As a contributor it is not always comfortable to ask questions, or deal with reviews from a lot of new people.  Having someone to chat with every day you are hacking on the project is nice.  Being a mentor doesn’t mean reviewing every line of code, but it does mean checking in on contributors regularly, ensuring bugs are not stuck waiting for needinfo/reviews, and helping set expectations of how work is to be done.  In an ideal world after working on a project like this a contributor would continue on and try to work with a new mentor to grow their skills in working with others as well as different code bases.

What we can do differently next time?

A few small things are worth improving on for our next cycle, here is a few things we will plan on doing differently:

  • Advertising 4-5 weeks prior and having a defined start/end date (e.g. November 20th – January 15th)
  • Really limiting this to a specific number of contributors, ideally 2-3 per mentor.
  • Setting acceptance criteria up front.  This could be solving 2 easy bugs prior to the start date.
  • Posting an announcement welcoming the new team members, posting another announcement at the halfway mark, and posting a completion announcement highlighting the great work.
  • Setting up a weekly meeting schedule that includes status per person, great achievements, problems, and some kind of learning (guest speaker, Q&A, etc.).  This meeting should be unique per project.
  • Have a simple process for helping folks transition out if they have less time than they thought. This will happen, and we need to account for it so that the remaining contributors get the most out of the program.

In summary we found this to be a great experience and are looking to do another program in the near future.  We named this Summer of Contribution for our first time around, but that is limiting as to when it can take place and doesn’t respect the fact that the southern hemisphere is experiencing winter during that time.  With that in mind, :maja_zf suggested calling it Quarter of Contribution, and we plan to announce our next iteration in the coming weeks!

Will Kahn-GreeneDennis v0.7 released! New lint rules and more tests!

What is it?

Dennis is a Python command line utility (and library) for working with localization. It includes:

  • a linter for finding problems in strings in .po files, like invalid Python variable syntax that leads to exceptions (see the sketch after this list)
  • a template linter for finding problems in strings in .pot files that make translators' lives difficult
  • a statuser for seeing the high-level translation/error status of your .po files
  • a translator for strings in your .po files to make development easier
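To make the first bullet concrete, here is a tiny illustration (my own toy example, not Dennis code) of why a typo in a Python format variable in a translated string is worth catching before it ships: the failure only surfaces at runtime, when the string is interpolated.

msgid = "%(count)d files were deleted."
good_translation = "%(count)d archivos fueron eliminados."
bad_translation = "%(cuont)d archivos fueron eliminados."   # typo in the variable name

print(good_translation % {"count": 3})    # works fine

try:
    print(bad_translation % {"count": 3})
except KeyError as exc:
    # This is the class of runtime failure the linter flags ahead of time.
    print("broken translation, missing variable:", exc)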

v0.7 released!

It's been 10 months since the last release. In that time, I:

  • Added a lot more tests and fixed bugs discovered with those tests.
  • Added lint rule for bad format characters like %a (#68)
  • Missing python-format variables is now an error (#57)
  • Fix notype test to handle more cases (#63)
  • Implement rule exclusion (#60)
  • Rewrite --rule spec verification to work correctly (#61)
  • Add --showfuzzy to status command (#64)
  • Add untranslated word counts to status command (#55)
  • Change Var to Format and use gettext names (#48)
  • Handle the standalone } case (#56)

I thought I was close to 1.0, but now I'm less sure. I want to unify the .po and .pot linters and generalize them so that we can handle other l10n file formats. I also want to implement a proper plugin system so that it's easier to add new rules and it'd allow other people to create separate Python packages that implement rules, tokenizers and translators. Plus I want to continue fleshing out the tests.

At the (glacial) pace I'm going at, that'll take a year or so.

If you're interested in dennis development, helping out or have things you wish it did, please let me know. Otherwise I'll just keep on keepin on at the current pace.

Where to go for more

For more specifics on this release, see here:

Documentation and quickstart here:

Source code and issue tracker here:

Source code and issue tracker for Denise (Dennis-as-a-service):

47 out of 80 Silicon Valley companies say their last round of funding depended solely on having dennis in their development pipeline and translating their business plan into Dubstep.

Daniel PocockWant to be selected for Google Summer of Code 2016?

I've mentored a number of students in 2013, 2014 and 2015 for Debian and Ganglia and most of the companies I've worked with have run internships and graduate programs from time to time. GSoC 2015 has just finished and with all the excitement, many students are already asking what they can do to prepare and be selected for Outreachy or GSoC in 2016.

My own observation is that the more time the organization has to get to know the student, the more confident they can be selecting that student. Furthermore, the more time that the student has spent getting to know the free software community, the more easily they can complete GSoC.

Here I present a list of things that students can do to maximize their chance of selection and career opportunities at the same time. These tips are useful for people applying for GSoC itself and related programs such as GNOME's Outreachy or graduate placements in companies.


There is no guarantee that Google will run the program again in 2016 or any future year until the Google announcement.

There is no guarantee that any organization or mentor (including myself) will be involved until the official list of organizations is published by Google.

Do not follow the advice of web sites that invite you to send pizza or anything else of value to prospective mentors.

Following the steps in this page doesn't guarantee selection. That said, people who do follow these steps are much more likely to be considered and interviewed than somebody who hasn't done any of the things in this list.

Understand what free software really is

You may hear terms like free software and open source software used interchangeably.

They don't mean exactly the same thing and many people use the term free software for the wrong things. Not all projects declaring themselves to be "free" or "open source" meet the definition of free software. Those that don't, usually as a result of deficiencies in their licenses, are fundamentally incompatible with the majority of software that does use genuinely free licenses.

Google Summer of Code is about both writing and publishing your code and it is also about community. It is fundamental that you know the basics of licensing and how to choose a free license that empowers the community to collaborate on your code well after GSoC has finished.

Please review the definition of free software early on and come back and review it from time to time. The The GNU Project / Free Software Foundation have excellent resources to help you understand what a free software license is and how it works to maximize community collaboration.

Don't look for shortcuts

There is no shortcut to GSoC selection and there is no shortcut to GSoC completion.

The student stipend (USD $5,500 in 2014) is not paid to students unless they complete a minimum amount of valid code. This means that even if a student did find some shortcut to selection, it is unlikely they would be paid without completing meaningful work.

If you are the right candidate for GSoC, you will not need a shortcut anyway. Are you the sort of person who can't leave a coding problem until you really feel it is fixed, even if you keep going all night? Have you ever woken up in the night with a dream about writing code still in your head? Do you become irritated by tedious or repetitive tasks and often think of ways to write code to eliminate such tasks? Does your family get cross with you because you take your laptop to Christmas dinner or some other significant occasion and start coding? If some of these statements summarize the way you think or feel you are probably a natural fit for GSoC.

An opportunity money can't buy

The GSoC stipend will not make you rich. It is intended to make sure you have enough money to survive through the summer and focus on your project. Professional developers make this much money in a week in leading business centers like New York, London and Singapore. When you get to that stage in 3-5 years, you will not even be thinking about exactly how much you made during internships.

GSoC gives you an edge over other internships because it involves publicly promoting your work. Many companies still try to hide the potential of their best recruits for fear they will be poached or that they will be able to demand higher salaries. Everything you complete in GSoC is intended to be published and you get full credit for it. Imagine a young musician getting the opportunity to perform on the main stage at a rock festival. This is how the free software community works. It is a meritocracy and there is nobody to hold you back.

Having a portfolio of free software that you have created or collaborated on and a wide network of professional contacts that you develop before, during and after GSoC will continue to pay you back for years to come. While other graduates are being screened through group interviews and testing days run by employers, people with a track record in a free software project often find they go straight to the final interview round.

Register your domain name and make a permanent email address

Free software is all about community and collaboration. Register your own domain name as this will become a focal point for your work and for people to get to know you as you become part of the community.

This is sound advice for anybody working in IT, not just programmers. It gives the impression that you are confident and have a long term interest in a technology career.

Choosing the provider: as a minimum, you want a provider that offers DNS management, static web site hosting, email forwarding and XMPP services all linked to your domain. You do not need to choose the provider that is linked to your internet connection at home and that is often not the best choice anyway. The XMPP foundation maintains a list of providers known to support XMPP.

Create an email address within your domain name. The most basic domain hosting providers will let you forward the email address to a webmail or university email account of your choice. Configure your webmail to send replies using your personalized email address in the From header.

Update your ~/.gitconfig file to use your personalized email address in your Git commits.

Create a web site and blog

Start writing a blog. Host it using your domain name.

Some people blog every day, other people just blog once every two or three months.

Create links from your web site to your other profiles, such as a Github profile page. This helps reinforce the pages/profiles that are genuinely related to you and avoid confusion with the pages of other developers.

Many mentors are keen to see their students writing a weekly report on a blog during GSoC so starting a blog now gives you a head start. Mentors look at blogs during the selection process to try and gain insight into which topics a student is most suitable for.

Create a profile on Github

Github is one of the most widely used software development web sites. Github makes it quick and easy for you to publish your work and collaborate on the work of other people. Create an account today and get in the habit of forking other projects, improving them, committing your changes and pushing the work back into your Github account.

Github will quickly build a profile of your commits and this allows mentors to see and understand your interests and your strengths.

In your Github profile, add a link to your web site/blog and make sure the email address you are using for Git commits (in the ~/.gitconfig file) is based on your personal domain.

Start using PGP

Pretty Good Privacy (PGP) is the industry standard in protecting your identity online. All serious free software projects use PGP to sign tags in Git, to sign official emails and to sign official release files.

The most common way to start using PGP is with the GnuPG (GNU Privacy Guard) utility. It is installed by the package manager on most Linux systems.

When you create your own PGP key, use the email address involving your domain name. This is the most permanent and stable solution.

Print your key fingerprint using the gpg-key2ps command; it is in the signing-party package on most Linux systems. Keep copies of the fingerprint slips with you.

This is what my own PGP fingerprint slip looks like. You can also print the key fingerprint on a business card for a more professional look.

Using PGP, it is recommended that you sign any important messages you send, but you do not have to encrypt the messages you send, especially if some of the people you send messages to (like family and friends) do not yet have the PGP software to decrypt them.

If using the Thunderbird (Icedove) email client from Mozilla, you can easily send signed messages and validate the messages you receive using the Enigmail plugin.

Get your PGP key signed

Once you have a PGP key, you will need to find other developers to sign it. For people I mentor personally in GSoC, I'm keen to see that you try and find another Debian Developer in your area to sign your key as early as possible.

Free software events

Try and find all the free software events in your area in the months between now and the end of the next Google Summer of Code season. Aim to attend at least two of them before GSoC.

Look closely at the schedules and find out about the individual speakers, the companies and the free software projects that are participating. For events that span more than one day, find out about the dinners, pub nights and other social parts of the event.

Try and identify people who will attend the event who have been GSoC mentors or who intend to be. Contact them before the event, if you are keen to work on something in their domain they may be able to make time to discuss it with you in person.

Take your PGP fingerprint slips. Even if you don't participate in a formal key-signing party at the event, you will still find some developers to sign your PGP key individually. You must take a photo ID document (such as your passport) for the other developer to check the name on your fingerprint but you do not give them a copy of the ID document.

Events come in all shapes and sizes. FOSDEM is an example of one of the bigger events in Europe; linux.conf.au is a similarly large event in Australia. There are many, many more local events such as the Debian UK mini-DebConf in Cambridge, November 2015. Many events are either free or free for students but please check carefully if there is a requirement to register before attending.

On your blog, discuss which events you are attending and which sessions interest you. Write a blog during or after the event too, including photos.

Quantcast generously hosted the Ganglia community meeting in San Francisco, October 2013. We had a wild time in their offices with mini-scooters, burgers, beers and the Ganglia book. That's me on the pink mini-scooter and Bernard Li, one of the other Ganglia GSoC 2014 admins is on the right.

Install Linux

GSoC is fundamentally about free software. Linux is to free software what a tree is to the forest. Using Linux every day on your personal computer dramatically increases your ability to interact with the free software community and increases the number of potential GSoC projects that you can participate in.

This is not to say that people using Mac OS or Windows are unwelcome. I have worked with some great developers who were not Linux users. Linux gives you an edge though and the best time to gain that edge is now, while you are a student and well before you apply for GSoC.

If you must run Windows for some applications used in your course, it will run just fine in a virtual machine using Virtual Box, a free software solution for desktop virtualization. Use Linux as the primary operating system.

Here are links to download ISO DVD (and CD) images for some of the main Linux distributions:

If you are nervous about getting started with Linux, install it on a spare PC or in a virtual machine before you install it on your main PC or laptop. Linux is much less demanding on the hardware than Windows so you can easily run it on a machine that is 5-10 years old. Having just 4GB of RAM and 20GB of hard disk is usually more than enough for a basic graphical desktop environment although having better hardware makes it faster.

Your experiences installing and running Linux, especially if it requires some special effort to make it work with some of your hardware, make interesting topics for your blog.

Decide which technologies you know best

Personally, I have mentored students working with C, C++, Java, Python and JavaScript/HTML5.

In a GSoC program, you will typically do most of your work in just one of these languages.

From the outset, decide which language you will focus on and do everything you can to improve your competence with that language. For example, if you have already used Java in most of your course, plan on using Java in GSoC and make sure you read Effective Java (2nd Edition) by Joshua Bloch.

Decide which themes appeal to you

Find a topic that has long-term appeal for you. Maybe the topic relates to your course or maybe you already know what type of company you would like to work in.

Here is a list of some topics and some of the relevant software projects:

  • System administration, servers and networking: consider projects involving monitoring, automation, packaging. Ganglia is a great community to get involved with and you will encounter the Ganglia software in many large companies and academic/research networks. Contributing to a Linux distribution like Debian or Fedora packaging is another great way to get into system administration.
  • Desktop and user interface: consider projects involving window managers and desktop tools or adding to the user interface of just about any other software.
  • Big data and data science: this can apply to just about any other theme. For example, data science techniques are frequently used now to improve system administration.
  • Business and accounting: consider accounting, CRM and ERP software.
  • Finance and trading: consider projects like R, market data software like OpenMAMA and connectivity software (Apache Camel)
  • Real-time communication (RTC), VoIP, webcam and chat: look at the JSCommunicator or the Jitsi project
  • Web (JavaScript, HTML5): look at the JSCommunicator

Before the GSoC application process begins, you should aim to learn as much as possible about the theme you prefer and also gain practical experience using the software relating to that theme. For example, if you are attracted to the business and accounting theme, install the PostBooks suite and get to know it. Maybe you know somebody who runs a small business: help them to upgrade to PostBooks and use it to prepare some reports.

Make something

Make some small project, less than two weeks' work, to demonstrate your skills. It is important to make something that somebody will use for a practical purpose; this will help you gain experience communicating with other users through Github.

For an example, see the servlet Juliana Louback created for fixing phone numbers in December 2013. It has since been used as part of the Lumicall web site and Juliana was selected for a GSoC 2014 project with Debian.

There is no better way to demonstrate to a prospective mentor that you are ready for GSoC than by completing and publishing some small project like this yourself. If you don't have any immediate project ideas, many developers will also be able to give you tips on small projects like this that you can attempt, just come and ask us on one of the mailing lists.

Ideally, the project will be something that you would use anyway even if you do not end up participating in GSoC. Such projects are the most motivating and rewarding and usually end up becoming an example of your best work. To continue the example of somebody with a preference for business and accounting software, a small project you might create is a plugin or extension for PostBooks.

Getting to know prospective mentors

Many web sites provide useful information about the developers who contribute to free software projects. Some of these developers may be willing to be a GSoC mentor.

For example, look through some of the following:

Getting on the mentor's shortlist

Once you have identified projects that are interesting to you and developers who work on those projects, it is important to get yourself on the developer's shortlist.

Basically, the shortlist is a list of all students who the developer believes can complete the project. If I feel that a student is unlikely to complete a project or if I don't have enough information to judge a student's probability of success, that student will not be on my shortlist.

If I don't have any student on my shortlist, then a project will not go ahead at all. If there are multiple students on the shortlist, then I will be looking more closely at each of them to try and work out who is the best match.

One way to get a developer's attention is to look at bug reports they have created. Github makes it easy to see complaints or bug reports they have made about their own projects or other projects they depend on. Another way to do this is to search through their code for strings like FIXME and TODO. Projects with standalone bug trackers like the Debian bug tracker also provide an easy way to search for bug reports that a specific person has created or commented on.
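If you prefer to script that kind of search over a local checkout, a short sketch along these lines works (the ./project path and the extension list below are placeholder choices, and plain grep does the job just as well):

import pathlib

markers = ("FIXME", "TODO")
extensions = {".c", ".h", ".cpp", ".py", ".js", ".java"}

# Walk the checkout and print every line containing one of the markers.
for path in pathlib.Path("project").rglob("*"):
    if not path.is_file() or path.suffix not in extensions:
        continue
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
        if any(marker in line for marker in markers):
            print(f"{path}:{lineno}: {line.strip()}")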

Once you find some relevant bug reports, email the developer. Ask if anybody else is working on those issues. Try and start with an issue that is particularly easy and where the solution is interesting for you. This will help you learn to compile and test the program before you try to fix any more complicated bugs. It may even be something you can work on as part of your academic program.

Find successful projects from the previous year

Contact organizations and ask them which GSoC projects were most successful. In many organizations, you can find the past students' project plans and their final reports published on the web. Read through the plans submitted by the students who were chosen. Then read through the final reports by the same students and see how they compare to the original plans.

Start building your project proposal now

Don't wait for the application period to begin. Start writing a project proposal now.

When writing a proposal, it is important to include several things:

  • Think big: what is the goal at the end of the project? Does your work help the greater good in some way, such as increasing the market share of Linux on the desktop?
  • Details: what are specific challenges? What tools will you use?
  • Time management: what will you do each week? Are there weeks where you will not work on GSoC due to vacation or other events? These things are permitted but they must be in your plan if you know them in advance. If an accident or death in the family cut a week out of your GSoC project, which work would you skip and would your project still be useful without that? Having two weeks of flexible time in your plan makes it more resilient against interruptions.
  • Communication: are you on mailing lists, IRC and XMPP chat? Will you make a weekly report on your blog?
  • Users: who will benefit from your work?
  • Testing: who will test and validate your work throughout the project? Ideally, this should involve more than just the mentor.

If your project plan is good enough, could you put it on Kickstarter or another crowdfunding site? This is a good test of whether or not a project is going to be supported by a GSoC mentor.

Learn about packaging and distributing software

Packaging is a vital part of the free software lifecycle. It is very easy to upload a project to Github but it takes more effort to have it become an official package in systems like Debian, Fedora and Ubuntu.

Packaging and the communities around Linux distributions help you reach out to users of your software and get valuable feedback and new contributors. This boosts the impact of your work.

To start with, you may want to help the maintainer of an existing package. Debian packaging teams are existing communities that work in a team and welcome new contributors. The Debian Mentors initiative is another great starting place. In the Fedora world, the place to start may be in one of the Special Interest Groups (SIGs).

Think from the mentor's perspective

After the application deadline, mentors have just 2 or 3 weeks to choose the students. This is actually not a lot of time to be certain if a particular student is capable of completing a project. If the student has a published history of free software activity, the mentor feels a lot more confident about choosing the student.

Some mentors have more than one good student while other mentors receive no applications from capable students. In this situation, it is very common for mentors to send each other details of students who may be suitable. Once again, if a student has a good Github profile and a blog, it is much easier for mentors to try and match that student with another project.

GSoC logo generic


Getting into the world of software engineering is much like joining any other profession or even joining a new hobby or sporting activity. If you run, you probably have various types of shoe and a running watch and you may even spend a couple of nights at the track each week. If you enjoy playing a musical instrument, you probably have a collection of sheet music, accessories for your instrument and you may even aspire to build a recording studio in your garage (or you probably know somebody else who already did that).

The things listed on this page will not just help you walk the walk and talk the talk of a software developer, they will put you on a track to being one of the leaders. If you look over the profiles of other software developers on the Internet, you will find they are doing most of the things on this page already. Even if you are not selected for GSoC at all or decide not to apply, working through the steps on this page will help you clarify your own ideas about your career and help you make new friends in the software engineering community.

Will Kahn-GreeneInput: Trigger rule project Phase 1


Last quarter, I finished up the suggester framework for Input. When a user leaves feedback, registered suggester modules would look at the feedback metadata and text and return suggested links. The suggested links would then show up on the Thank You page. Users could then read a bit about the link and click on it if it was appealing.

The first suggester I wrote does a search against SUMO kb articles to see if any of the kb articles seemed relevant to the feedback. Users frequently leave feedback about problems they're having that could be known issues with known solutions or even problems Firefox solves with features the user wasn't aware of. Because of this, it behooves us greatly to guide these users to the solutions that make their Firefox experience better. I wrote a post about that.

This project covers adding a new suggester that allows analyzers to set up trigger rules for suggestions, which are stored in the database. When feedback matches the criteria for a trigger rule, the suggestion is shown.

I pushed out the last code changes on September 9th, 2015. On September 25th, we created a trigger rule for feedback talking about Norton's addon and suggested a link for a SUMO kb article that talks about the problem. In the 5 days since then, 22 people saw the suggestion and 6 clicked on the link.

This blog post is a write-up for the Trigger rule project phase 1.

Read more… (10 mins to read)

Support.Mozilla.OrgWhat’s up with SUMO – 2nd October

What have you been up to this week, SUMO warriors? We have some news and announcements for you, so start reading before the weekend sweeps us all away from our screens!

Welcome, New Contributors!

If you joined us recently, don’t hesitate – come over and say “hi” in the forums!
Don’t forget that if you are new to SUMO and someone helped you get started in a nice way you can nominate them for the Buddy of the Month!

Contributors of the week

  • A group nomination to all the people who keep being awesome and supporting millions of users around the world :-) If you are reading these words… this most likely means YOU!

Last SUMO Community meeting

Reminder: the next SUMO Community meeting…

  • …is going to take place on Monday, 5th of October. Join us!
  • If you want to add a discussion topic to the upcoming live meeting agenda:
    • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Monday (this will make it easier to have an efficient meeting).
    • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).



Support Forum

  • Some details from the last SUMO Day:
    • 45 participants: 32 for Firefox, 5 for Firefox for Android, 2 for Firefox for iOS
    • 93% issues replied to in 24 hours, 96% in 72 hours (FF, FF OS and FF iOS)
    • 37% issues replied to in 24 hours and 87% in 72 hours for FF for Android
    • Once again – HUGE thanks to all participants! You rock!
  • Save the date: the next SUMO Questions Day is coming up on the 8th of October.

Knowledge Base

  • The tracking protection articles will be revised and finalized by mid-October, depending on the coordination with Legal and Product teams.
  • If you want to work with us directly on the Knowledge Base as an intern, take a look at this Outreachy opportunity.


  • (for Android) The documentation for version 42 has started, and a dot release for 41 took place as well.
  • (for Desktop) Version 41 has landed! Head over to our forums to learn more and discuss.
    • There has been a general increase in the number of browser hanging cases, so expect to see users reporting problems of that type. Yes, Farmville 2 is unfortunately affected, as well.
That’s all for now. May you have a great weekend! We all hope to see you refreshed, rested, and relaxed on Monday.
P.S. Don’t forget that we’re on Twitter… where we recently passed the 500 follower mark! Thank you!

Mark FinkleFun With Telemetry: URL Suggestions

Firefox for Android has a UI Telemetry system. Here is an example of one of the ways we use it.

As you type a URL into Firefox for Android, matches from your browsing history are shown. We also display search suggestions from the default search provider. We also recently added support for displaying matches to previously entered search history. If any of these are tapped, with one exception, the term is used to load a search results page via the default search provider. If the term looks like a domain or URL, Firefox skips the search results page and loads the URL directly.


  1. This suggestion is not really a suggestion. It’s what you have typed. Tagged as user.
  2. This is a suggestion from the search engine. There can be several search suggestions returned and displayed. Tagged as engine.#
  3. This is a special search engine suggestion. It matches a domain, and if tapped, Firefox loads the URL directly. No search results page. Tagged as url.
  4. This is a matching search term from your search history. There can be several search history suggestions returned and displayed. Tagged as history.#

Since we only recently added the support for search history, we want to look at how it’s being used. Below is a filtered view of the URL suggestion section of our UI Telemetry dashboard. Looks like history.# is starting to get some usage, and following a similar trend to engine.# where the first suggestion returned is used more than the subsequent items.

Also worth pointing out that we do get a non-trivial amount of url situations. This should be expected. Most search keyword data released by Google show that navigational keywords are the most heavily used keywords.

An interesting observation is how often people use the user suggestion. Remember, this is not actually a suggestion. It’s what the person has already typed. Pressing “Enter” or “Go” would result in the same outcome. One theory for the high usage of that suggestion is that it provides a clear outcome: Firefox will search for this term. Other ways of triggering the search might be more ambiguous.


Yunier José Sosa VázquezPablo Pérez: “Mozilla will keep watching over users’ rights on the Internet”

To get to know the people who make up Mozilla Hispano a little better, the community member interviews are back. This space serves to recognize the work of our contributors.

This time our guest is Pablo Pérez, from Spain.

What is your name and what do you do?
My name is Pablo Pérez Diez, and I was born in a town in the north of Spain. I later moved to the city of Valladolid (also in the north of Spain) to study for my Higher Technician and Technical Engineering degrees in Telematics. After a few years working there as a programmer, I now live in Madrid.

Why did you decide to contribute to Mozilla?
I decided to contribute in order to do my bit in defending users’ rights online (net neutrality, privacy, the right to be forgotten, etc.), all of which are increasingly under threat.

What do you currently do in the community?
I currently coordinate the support area, which covers both the forum and Twitter.

What do you value most, or find most positive, about Mozilla / the community?
What I value most is how people from different countries, beliefs, ideologies, etc. are able to work together towards a common goal, which in the end is the common good.

What does Mozilla / the community give you?
It gives me the experience of collaborating with many other people and learning the best from all of them, both from Spanish-speaking countries and from many others.

What do you think Mozilla will be like in the future?
In my opinion, Mozilla will keep watching over users’ rights on the Internet, relying more and more on the community.

A few words for people who would like to join the community.
If you share the values of both Mozilla and Mozilla Hispano, this is the perfect place to help ensure those values are respected, while gaining great experiences with the rest of the community.

Many thanks, Pablo, for agreeing to this interview.

Source: Mozilla Hispano

Air MozillaWebmaker Demos Oct 2 2015

Webmaker Demos, Friday, October 2, 2015.

Hal Wineduo MFA & viscosity no-cell setup


The Duo application is nice if you have a supported mobile device, and it’s usable even when you have no cell connection, via TOTP. However, getting Viscosity to allow both choices took some work for me.

For various reasons, I don’t want to always use the Duo application, so I would like Viscosity to always prompt for a password. (I had already saved a password - a fresh install likely would not have that issue.) That took a bit of work, and some web searches.

  1. Disable any saved passwords for Viscosity. On a Mac, this means opening the “Keychain Access” application, searching for “Viscosity” and deleting any associated entries.

  2. Ask Viscosity to save the “user name” field (optional). I really don’t need this, as my setup uses a certificate to identify me. So it doesn’t matter what I type in the field. But, I like hints, so I told Viscosity to save just the user name field:

    defaults write com.viscosityvpn.Viscosity RememberUsername -bool true

With the above, you’ll be prompted every time. You have to put “something” in the user name field, so I chose to put “push or TOTP” to remind me of the valid values. You can put anything there, just do not check the “Remember details in my Keychain” toggle.

Julien VehentIntroducing SOPS: a manager of encrypted files for secrets distribution

Automating the distribution of secrets and credentials to components of an infrastructure is a hard problem. We know how to encrypt secrets and share them between humans, but extending that trust to systems is difficult. Particularly when these systems follow devops principles and are created and destroyed without human intervention. The issue boils down to establishing the initial trust of a system that just joined the infrastructure, and providing it access to the secrets it needs to configure itself.

The initial trust

In many infrastructures, even highly dynamic ones, the initial trust is established by a human. An example is seen in Puppet by the way certificates are issued: when a new system attempts to join a Puppetmaster, an administrator must, by default, manually approve the issuance of the certificate the system needs. This is cumbersome, and many puppetmasters are configured to auto-sign new certificates to work around that issue. This is obviously not recommended and far from ideal.

AWS provides a more flexible approach to trusting new systems. It uses a powerful mechanism of roles and identities. In AWS, it is possible to verify that a new system has been granted a specific role at creation, and it is possible to map that role to specific resources. Instead of trusting new systems directly, the administrator trusts the AWS permission model and its automation infrastructure. As long as AWS keys are safe, and the AWS API is secure, we can assume that trust is maintained and systems are who they say they are.

KMS, Trust and secrets distribution

Using the AWS trust model, we can create fine grained access controls to Amazon's Key Management Service (KMS). KMS is a service that encrypts and decrypts data with AES_GCM, using keys that are never visible to users of the service. Each KMS master key has a set of role-based access controls, and individual roles are permitted to encrypt or decrypt using the master key. KMS helps solve the problem of distributing keys, by shifting it into an access control problem that can be solved using AWS's trust model.

Since KMS's inception a few months ago, a number of projects have popped up to use its capabilities to distribute secrets: credstash and sneaker are such examples. Today I'm introducing sops: a secrets editor that uses KMS and PGP to manage encrypted files.

SOPS: Secrets OPerationS

A few weeks ago, Mozilla's Services Operations team started revisiting the issue of distributing secrets to EC2 instances, with a goal to store these secrets encrypted until the very last moment, when they need to be decrypted on target systems. Not unlike many other organizations that operate sufficiently complex automation, we found this to be a hard problem with a number of prerequisites:

  1. Secrets must be stored in YAML files for easy integration into hiera
  2. Secrets must be stored in GIT, and when a new CloudFormation stack is built, the current HEAD is pinned to the stack. (This allows secrets to be changed in GIT without impacting the current stack that may autoscale).
  3. Encrypt entries separately. Encrypting entire files as blobs makes git conflict resolution almost impossible. Encrypting each entry separately is much easier to manage.
  4. Secrets must always be encrypted on disk (admin laptop, upstream git repo, jenkins and S3) and only be decrypted on the target systems

Daniel Thornton and I brainstormed a number of ideas, and eventually ended up with a workflow similar to the one described below.


The idea behind SOPS is to provide a wrapper around a text editor that takes care of the encryption and decryption transparently. When creating a new file, sops generates a data encryption key "Kd" that is itself encrypted with one or more KMS master keys and PGP public keys. Kd is used to encrypt the content of the file with AES256-GCM. In order to decrypt the files, sops must have access to any of the KMS or PGP master keys.
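
To make the model concrete, here is a rough sketch of the envelope-encryption idea in Node.js. This is an illustration only, not sops's actual code (sops is not written in JavaScript), and the helper names are made up:

// Illustration of SOPS-style envelope encryption (not sops's real code).
var crypto = require('crypto');

// 1. Generate a fresh data key Kd for this file.
var Kd = crypto.randomBytes(32); // 256-bit AES key

// 2. Each value in the file is encrypted with AES-256-GCM under Kd.
function encryptValue(plaintext) {
  var iv = crypto.randomBytes(12); // fresh nonce per value
  var cipher = crypto.createCipheriv('aes-256-gcm', Kd, iv);
  var data = Buffer.concat([cipher.update(plaintext, 'utf8'), cipher.final()]);
  return {
    data: data.toString('base64'),
    iv: iv.toString('base64'),
    tag: cipher.getAuthTag().toString('base64')
  };
}

// 3. Kd itself is then encrypted with each KMS master key and each PGP
//    public key, and those wrapped copies are stored under the file's
//    `sops` section -- so access to any one master key recovers Kd.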

SOPS can be used to encrypt YAML, JSON and TEXT files. In TEXT mode, the content of the file is treated as a blob, the same way PGP would encrypt an entire file. In YAML and JSON modes, however, the content of the file is manipulated as a tree where keys are stored in cleartext, and values are encrypted. hiera-eyaml does something similar, and over the years we learned to appreciate its benefits, namely:

  • diffs are meaningful. If a single value of a file is modified, only that value will show up in the diff. The diff is still limited to only showing encrypted data, but that information is already more granular than indicating that an entire file has changed.
  • conflicts are easier to resolve. If multiple users are working on the same encrypted files, as long as they don't modify the same values, changes are easy to merge. This is an improvement over the PGP encryption approach where unsolvable conflicts often happen when multiple users work on the same file.
# edit a file
$ sops example.yaml
file written to example.yaml

# take a look at the diff
$ git diff example.yaml
diff --git a/example.yaml b/example.yaml
index 00fe479..5f40330 100644
--- a/example.yaml
+++ b/example.yaml
@@ -1,5 +1,5 @@
 # The secrets below are unreadable without access to one of the sops master key
-myapp1: ENC[AES256_GCM,data:Tr7oo=,iv:1vw=,aad:eo=,tag:ka=]
+myapp1: ENC[AES256_GCM,data:krm,iv:0Y=,aad:KPyE=,tag:oIA==]
 app2:
     db:
         user: ENC[AES256_GCM,data:YNKE,iv:H4JQ=,aad:jk0=,tag:Neg==]

Below are two examples of SOPS encrypted files. The first one in YAML, the second one in JSON:



# The secrets below are unreadable without access to one of the sops master key
myapp1: t00m4nys3cr3tzupdated
app2:
    db:
        user: eve
        password: c4r1b0u
    # private key for secret operations in app2
    key: |
        -----BEGIN RSA PRIVATE KEY-----
        -----END RSA PRIVATE KEY-----
number: 1234567890
an_array:
- secretuser1
- secretuser2
- some other value


# The secrets below are unreadable without access to one of the sops master key
myapp1: ENC[AES256_GCM,data:krwEdH2fxWRexFuvZHS816Wz46Lm,iv:0STqWePc0HOPuDn2EizQdNepx9ksx0guHGeKrshlYSY=,aad:Krl8HyPGQmnIWIZh74Ib+y0OdiVEvRDBv3jTdMGSPyE=,tag:oI2THtQeUX4ZLNnbrdel2A==]
app2:
    db:
        user: ENC[AES256_GCM,data:YNKE,iv:H9CDb4aUHBJeF2MSTKHQuOwlLxQVdx12AhT0+Dob4JQ=,aad:jlF2KvytlQIgyMpOoO/BiQbukiMwrh1j94Oys+YMgk0=,tag:NeDysIHV9CGtMAQq9i4vMg==]
        password: ENC[AES256_GCM,data:p673JCgHYw==,iv:EOOeivCp/Fd80xFdMYX0QeZn6orGTK8CeckmipjKqYY=,aad:UAhi/SHK0aCzptnFkFG4dW8Vv1ASg7TDHD6lui9mmKQ=,tag:QE6uuhRx+cGInwSVdmxXzA==]
    # private key for secret operations in app2
    key: |-
number: ENC[AES256_GCM,data:XMrBalgZ9tvBxQ==,iv:XyEAAaIzVy/2trnJhLrjMInLg8tMI4CAX9+ccnj3T1Y=,aad:JOlAkP159UxDjL1CrumTuQDqgW2+VOIwz7bdfaJIIn4=,tag:WOHOMJS4nhSdj/aQcGbU1A==]
an_array:
- ENC[AES256_GCM,data:td1aAv4s4cOzSo0=,iv:ErVqte7GpQ3JfzVpVRf7pWSQZDHn6W0iAntKWFsMqio=,aad:RiYy8fKX/yVY7KRgXSOIzydT0+TwK7WGzSFSy+1GmVM=,tag:aSGLCmNZsGcBjxEGvNQRwA==]
- ENC[AES256_GCM,data:2K8C418jef8zoAY=,iv:cXE4Hwdl4ZHzAHHyyXqaIMFs0mn65JUehDdaw/aM0WI=,aad:RlAgUZUZ1DvxD9/lZQk9KOHKl4L+fYETaAdpDVekCaA=,tag:CORSBzis6Vy45dEvT/UtMg==]
- ENC[AES256_GCM,data:hbcOBbsaWmlnrpeuwLfh1ttsi8zj/pxMc1LYqhdksT/oQb80g2z0FE4QwUVb7VV+x98LAWHofVyV8Q==,iv:/sXHXde82r2FyG3Z3vC5x8zONB14RwC0GmtkiYEUNLI=,aad:BQb8l5fZzF/aa/EYnrOQvRfGUTq9QmJOAR/zmgOfYDA=,tag:fjNeg3Manjl6B2U2oflRhg==]
- ENC[AES256_GCM,data:LLHkzGobqL53ws6E2zglkA==,iv:g9z3zz4DUzJr4Cim0SVqKF736w2mZoItqbB0TcsGrQU=,aad:Odrvz0loqFdd9wKJz0ULMX/lyEQcX8WaHE59MgeXkcI=,tag:V+rV/AeZ4uEgtwGhlamTag==]
sops:
    kms:
    -   enc: CiC6yCOtzsnFhkfdIslYZ0bAf//gYLYCmIu87B3sy/5yYxKnAQEBAQB4usgjrc7JxYZH3SLJWGdGwH//4GC2ApiLvOwd7Mv+cmMAAAB+MHwGCSqGSIb3DQEHBqBvMG0CAQAwaAYJKoZIhvcNAQcBMB4GCWCGSAFlAwQBLjARBAyGdRODuYMHbA8Ozj8CARCAO7opMolPJUmBXd39Zlp0L2H9fzMKidHm1vvaF6nNFq0ClRY7FlIZmTm4JfnOebPseffiXFn9tG8cq7oi
        enc_ts: 1439568549.245995
        arn: arn:aws:kms:us-east-1:656532927350:key/920aff2e-c5f1-4040-943a-047fa387b27e
    pgp:
    -   fp: 85D77543B3D624B63CEA9E6DBC17301B491B3F21
        enc: |
            -----BEGIN PGP MESSAGE-----
            Version: GnuPG v1

            -----END PGP MESSAGE-----
        created_at: 1443203323.058362



    "address": {
        "city": "New York", 
        "postalCode": "10021-3100", 
        "state": "NY", 
        "streetAddress": "21 2nd Street"
    "age": 25, 
    "firstName": "John", 
    "lastName": "Smith", 
    "phoneNumbers": [
            "number": "212 555-1234", 
            "type": "home"
            "number": "646 555-4567", 
            "type": "office"


    "address": {
        "city": "ENC[AES256_GCM,data:2wNRKB+Sjjw=,iv:rmATLCPii2WMzcT80Wp9gOpYQqzx6juRmCf9ioz2ZLM=,aad:dj0QZW0BvZVjF1Dn25hOJpcwcVB0qYvEIhGWgxq6YzQ=,tag:wOoPYU+8BA9DiNFlsal3Aw==]", 
        "postalCode": "ENC[AES256_GCM,data:xwWZ/np9Gxv3CQ==,iv:OLwOr7iliPyWWBtKfUUH7E1wQlxJLA6aFxIfNAEC/M0=,aad:8mw5NU8MpyBlrh7XaUqa642jeyJWGqKvduaQ5bWJ5pc=,tag:VFmnc4Ay+yKzyHcrKeEzZQ==]", 
        "state": "ENC[AES256_GCM,data:3jY=,iv:Y2bEgkjdn91Pbf5RgJMbyCsyfhV7XWdDhe8wVwTQue0=,aad:DcA5kW1rrET9TxQ4kn9jHSpoMlkcPKs5O5n9wZjZYCQ=,tag:ad1xdNnFwkqx/8EOKVVHIA==]", 
        "streetAddress": "ENC[AES256_GCM,data:payzP57DGPl5S9Z7uQ==,iv:UIz34fk9zH4z6hYfu0duXmAnI8CqnoRhoaIUqg1YoYA=,aad:hll9Baw40lMjwj7HePQ1o1Lsuh1LCwrE6+bkG4025sg=,tag:FDBhYxMmJ1Wj/uxYxdvVZg==]"
    "age": "ENC[AES256_GCM,data:4Y4=,iv:hi1iSH19dHSgG/c7yVbNj4yzueHSmmY46yYqeNCoX5M=,aad:nnyubQyaWeLTcz9k9cMHUlgTwVDMyHf32sWCBm7KWAA=,tag:4lcMjstadzI8K40BoDEfDA==]", 
    "firstName": "ENC[AES256_GCM,data:KVe8Dw==,iv:+eg+Rjvaqa2EEp6ufw9c4hwWwObxRLPmxx3fG6rkyps=,aad:3BdHcorHfbvM2Jcs96zX0JY2VQL5dBNgy7zwhqLNqAU=,tag:5OD6MN9SPhBmXuA81hyxhQ==]", 
    "lastName": "ENC[AES256_GCM,data:1+koqsI=,iv:b2kBxSW4yOnLFc8qoeylkMtiO/6qr4cZ5VTntXTyXO8=,aad:W7HXukq3lUUMj9i57UehILG2NAp8XCgJMYbvgflWJIY=,tag:HOrgi1L+IRP+X5JGMnm7Ig==]", 
    "phoneNumbers": [
            "number": "ENC[AES256_GCM,data:Oo0IxdtBrnfE+bTf,iv:tQ1E/JQ4lHZvj1nQnGL2sKE30sCctjiMCiagS2Yzch8=,aad:P+m5gD3pKfNEOy6t61vbKhEpPtMFI2NZjBPrD/m8T9w=,tag:6iRMUVUEx3UZvUTGTjCdwg==]", 
            "type": "ENC[AES256_GCM,data:M3zOKQ==,iv:pD9RO4BPUVu6AWPo2DprRsOqouN+0HJn+RXQAXhfB2s=,aad:KFBBVEEnSjdmah3i2XmPx7wWEiFPrxpnfKYW4BSolhk=,tag:liwNnip/L6SZ9srn0N5G4g==]"
            "number": "ENC[AES256_GCM,data:BI2f/qFUea6UHYQ+,iv:jaVLMju6h7s+AlF7CsPbpUFXO2YtYAqYsCIsyHgfrfI=,aad:N+8sVpdTlY5I+DcvnY018Iyh/QesD7bvwfKHRr7q2L0=,tag:hHPPpQKP4cUIXfh9CFe4dA==]", 
            "type": "ENC[AES256_GCM,data:EfKAdEUP,iv:Td+sGaS8XXRqzY98OK08zmdqsO2EqVGK1/yDTursD8U=,aad:h9zi8s+EBsfR3BQG4r+t+uqeChK4Hw6B9nJCrValXnI=,tag:GxSk1LAQIJNGyUy7AvlanQ==]"
    "sops": {
        "kms": [
                "arn": "arn:aws:kms:us-east-1:656532927350:key/920aff2e-c5f1-4040-943a-047fa387b27e", 
                "created_at": 1443204393.48012, 
                "enc": "CiC6yCOtzsnFhkfdIslYZ0bAf//gYLYCmIu87B3sy/5yYxKnAQEBAgB4usgjrc7JxYZH3SLJWGdGwH//4GC2ApiLvOwd7Mv+cmMAAAB+MHwGCSqGSIb3DQEHBqBvMG0CAQAwaAYJKoZIhvcNAQcBMB4GCWCGSAFlAwQBLjARBAwBpvXXfdPzEIyEMxICARCAOy57Odt9ngHHyIjVU8wqMA4QszXdBglNkr+duzKQO316CRoV5r7bO8JwFCb7699qreocJd+RhRH5IIE3"
                "arn": "arn:aws:kms:ap-southeast-1:656532927350:key/9006a8aa-0fa6-4c14-930e-a2dfb916de1d", 
                "created_at": 1443204394.74377, 
                "enc": "CiBdfsKZbRNf/Li8Tf2SjeSdP76DineB1sbPjV0TV+meTxKnAQEBAgB4XX7CmW0TX/y4vE39ko3knT++g4p3gdbGz41dE1fpnk8AAAB+MHwGCSqGSIb3DQEHBqBvMG0CAQAwaAYJKoZIhvcNAQcBMB4GCWCGSAFlAwQBLjARBAwag3w44N8+0WBVySwCARCAOzpqMpvzIXV416ycCJd7mn9dBvjqzkUDag/zHlKse57uNN7P0S9GeRVJ6TyJsVNM+GlWx8++F9B+RUE3"
        "pgp": [
                "created_at": 1443204394.748745, 
                "enc": "-----BEGIN PGP MESSAGE-----\nVersion: GnuPG v1\n\nhQIMA0t4uZHfl9qgAQ//dpZVlRD9WGvz6Pl+PRKvBf661IHLkCeOq5ubzqLIJZu7\nJMNu0KBoO0qX+rgIQtzMU+04QlbIukw01q9ELSDYjBDQPRQJ+6OAeauawxf5mPGa\nZKOaSuoCuPbfOmGj8AENdSCpDaDz+KvOPvo5NNe16kC8BeerFJGewyEwbnkx5dxZ\ngk+LJBOuWRVUEzjsB1pzGfGRzvuzHcrUzWAoA8N936hDFIpoeDYC/8KLc0CWTltA\nYYGaKh5cZxC0R0TgQ5S9GjcU2nZjhcL94XRxZ+9BZDLCDRnjnRfUpPSTHoxr9wmR\nAuLtgyCIolrPl3fqRLJSLUH6FyTo2CO+2mFSx7y9m2OXkKQd1z2tkOlpC9PDTjGT\nVfGvy9nMUsmrgWG35soEmk0nNJaZehiscvZfomBnnHQgqx7DMSMxAnBneFqjsyOQ\nGK7Jacs/tigxe8NZcYhx+usITeQzVLmuqZ2pO5nEGyq0XJhJjxce9YVaeC4QOft0\nlm6qq+m6oABOdKTGh6zuIiWxU1r417IEgV8mkwjlraAvNNPKowQq5j8dohG4HaNK\nOKoOt8aIZWvD3HE9szuH+uDRXBBEAbIvnojQIyrqeIYv1xU8hDTllJPKw/kYD6nx\nMKrw4UAFand5qAgN/6QoIrOPXC2jhA2VegXkt0LXXSoP1ccR4bmlrGRHg0x6Y8zS\nXAE+BVEMYh8l+c86BNhzVOaAKGRor4RKtcZIFCs/Gpa4FxzDp5DfxNn/Ovrhq/Xc\nlmzlWY3ywrRF8JSmni2Asxet31RokiA0TKAQj2Q7SFNlBocR/kvxWs8bUZ+s\n=Z9kg\n-----END PGP MESSAGE-----\n", 
                "fp": "85D77543B3D624B63CEA9E6DBC17301B491B3F21"

As you can see on each key/value pair, only the values are encrypted and the keys are kept in the clear. It can be argued that this approach leaks sensitive information, but it’s a tradeoff we’re willing to accept in exchange for increased usability.

Simplifying key management

OpenPGP gets a lot of bad press for being an outdated crypto protocol, and while that may be true, what really made us look for alternatives is the difficulty of managing and distributing keys to systems. With KMS, we manage permissions to an API, not keys, and that's a lot easier to do.

But PGP is not dead yet, and we still rely on it heavily as a backup solution: all our files are encrypted with KMS and with one PGP public key, with its private key stored securely for emergency decryption in the event that we lose all our KMS master keys.

That said, nothing prevents you from using SOPS the same way you would use an encrypted PGP file: by referencing the pubkeys of each individual who has access to the file. It can easily be done by providing sops with a comma-separated list of public keys when creating a new file:

$ sops --pgp "E60892BB9BD89A69F759A1A0A3D652173B763E8F, 84050F1D61AF7C230A12217687DF65059EF093D3, 85D77543B3D624B63CEA9E6DBC17301B491B3F21" mynewfile.yaml
Updating the master keys

GnuPG can be a little obscure when it comes to managing the keys that have access to a file. While its command line is powerful, it takes a few minutes to find the right commands and figure out how to provide access to a new member of the team.

In SOPS, managing master keys is easy: they are simply stored as entries of the document under sops->{kms,pgp}. By default, that information is hidden during editing, but calling sops with the "-s" flag will display the master keys in the editor. From there, add new keys by creating an entry in the document, or remove them by deleting the lines.


Rotating data keys is also trivial: sops provides a rotation flag "-r" that will generate a new data key Kd and re-encrypt all values in the file with it. Coupled with in-place encryption/decryption, it is easy to rotate all the keys on a group of files:

for file in $(find . -type f -name "*.yaml"); do
        sops -d -i $file
        sops -e -i -r $file
done

Something that should be done every few months for good practice ;)

Assuming roles

SOPS has the ability to use KMS in multiple AWS accounts by assuming roles in each account. Being able to assume roles is a nice feature of AWS that allows administrators to establish trust relationships between accounts, typically from the most secure account to the least secure one. In our use-case, we use roles to indicate that a user of the Master AWS account is allowed to make use of KMS master keys in development and staging AWS accounts. Using roles, a single file can be encrypted with KMS keys in multiple accounts, thus increasing reliability and ease of use.

Check it out, and contribute!

SOPS is available on Github and on Pypi. We are progressively reaching a stable stage, with a goal to support Python 2.6.6 to 3.4.


Mozilla Addons BlogOctober 2015 Featured Add-ons

Pick of the Month: New Tab Tools

by Geoff Lankow
Customize the new tab page, add more tiles, add the launcher from Firefox start. Set new tile images and titles, see recently closed tabs, and more!

“I was trying to figure out how to get more tiles without zooming the page out. I stumbled on this and it’s perfect.”

Featured: Privacy Settings

by Jeremy Schomery
Alter Firefox’s built-in privacy settings easily with a toolbar panel.

Featured: PriceZombie

by Price Zombie
Price Zombie is a price tracker and price comparison browser extension. PriceZombie lets you see full price history on millions of products across hundreds of stores such as Amazon, BestBuy, Bloomingdale’s, and many more.

Nominate your favorite add-ons

Featured add-ons are selected by a community board made up of add-on developers, users, and fans. Board members change every six months, so there’s always an opportunity to participate. Stay tuned to this blog for the next call for applications.

If you’d like to nominate an add-on for featuring, please send it to for the board’s consideration. We welcome you to submit your own add-on!

Jonathan GriffinEngineering Productivity Update, Oct 1, 2015

We’ve said good-bye to Q3, and are moving on to Q4. Planning for Q4 goals and deliverables is well underway; I’ll post a link to the final versions next update.

Last week, a group of 8-10 people from Engineering Productivity gathered in Toronto to discuss approaches to several aspects of developer workflow. You can look at the notes we took; next up is articulating a formal Vision and Roadmap for 2016, which incorporates both this work as well as other planning which is ongoing separately for things like MozReview and Treeherder.


Bugzilla: Support for 2FA has been enhanced.


  • The automatic starring backend, along with related database changes, is now in production. In Q4 we’ll be developing a simple UI for this, and by the end of quarter, automatic starring for at least simple failures should be a reality.
  • Treeherder will soon stop posting bug comments for each intermittent failure. Instead OrangeFactor will post periodic summaries on bugs – see:
  • Job Ingestion via Pulse Exchanges is in the final review stages. This will allow projects like Task Cluster to send JSON Schema-validated job data to Treeherder via a Pulse Exchange, rather than our APIs. It also gives developers and testers the ability to ingest production jobs from Task Cluster to their local machines. Blog post:
  • :Goma’s line highlighting and linking in the log viewer are now live. See this blog post for details.
  • Jonathan French, our awesome contractor and contributor, has landed onscreen shortcuts; see this blog post. Jonathan will be moving on to other things soon, and we’ll sorely miss him!

Perfherder and Performance Automation:

  • Work is underway to prototype a UI in Perfherder which can be used for performance sheriffing sans Alert Manager or Graphserver; follow bug 1201154 for more details. Separately, work has been started to allow other performance harnesses (besides Talos) submit data to Perfherder; bug 1175295.
  • Talos on linux32 has been turned off; the machines that had been used for this are being repurposed as Windows 7 and Windows 8 test workers, in order to reduce overall wait times on those platforms.
  • The dromaeo DOM Talos test has been enabled on linux64.

MozReview and Autoland: mcote posted a blog post detailing some of the rough edges in MozReview, and explaining how the team intends on tackling these. dminor blogged about the state of autoland; in short, we’re getting close to rolling out an initial implementation which will work similarly to the current “checkin-needed” mechanism, except, of course, it will be entirely automated. May you never have to worry about closed trees again!

Mobile Automation: gbrown made some additional improvements to mach commands on Android; bc has been busy with a lot of Autophone fixes and enhancements.

Firefox Automation: maja_zf has enabled MSE playback tests on trunk, running per-commit. They will go live at the next buildbot reconfig.

Developer Workflow: numerous enhancements have been made to |mach try|; see the list below in the Details section. run-by-dir has been applied to mochitest-plain on most platforms, and to mochitest-chrome-opt, by kaustabh93, one of the team’s contributors. This reduces test bleedthrough, a source of intermittent failures, and improves our ability to change job chunking without breaking tests.

Build System: gps has improved test package generation, which results in significantly faster builds – a savings of about 5 minutes per build on OSX and Windows in automation; about 90s on linux.

TaskCluster Migration: linux64 debug builds are now running, so ahal is unblocked on getting linux64 debug tests running in TaskCluster.  armenzg has landed mozharness code to support running buildbot jobs via TaskCluster scheduling, via buildbot bridge.

The Details


Perfherder/Performance Testing

TaskCluster Support

Mobile Automation

  • mach cppunittest now supports Firefox for Android
  • mach test commands now download host utilities for Firefox for Android
  • [bc] Autophone
  • Bug 1202826 – Autophone – 2015-09-09 deployment
  • Bug 1202833 – Autophone – CHARGING state should not prevent Autophone shutdown/restart
  • Bug 1201061 – Autophone – deploy robocop_adobe_flash.html
  • Bug 1196115 – Intermittent Crash Autophone S1S2Test beginning 2015-08-18
  • Bug 1207836 – Autophone – 2015-09-23 deployment
  • Bug 1205864 –  Autophone – collects duplicate messages
  • Bug 1206954 – Autophone – better handle failures to submit results to PhoneDash
  • Bug 1209796 – Autophone – next deployment (In progress)
  • Bug 1205836 – Autophone – investigate orange for remote nytimes s1s2
  • Bug 1208782 – Autophone – do not attempt to get response json during Treeherder submission error if response is None
  • Bug 1209647 – Autophone – eliminate startup check for network connectivity
  • Bug 1209651 – Autophone – do not allow logcat device error to prevent setup_job initialization
  • Bug 1209653 – Autophone – after clearing logcat, specifying -b main can hang
  • Bug 1209675 – Autophone – Logcat should use PhoneTest loggerdeco
  • Bug 1209691 – Autophone – handle incorrect logcat dates emitted by devices.
  • jmaher/wlach working to get Autophone Talos reporting results to PerfHerder

Firefox and Media Automation

  • [maja_zf] MSE Video Playback buildbot jobs will be deployed to run per-commit on mozilla-inbound any day now…

General Automation

  • [ahal] started work on reftest using structured logging
  • [ahal] consolidate mochitest + xpcshell’s StructuredLog.jsm
  • [jgraham] Landed new |mach try| implementation that passes test paths rather than manifest paths; this adds support for web-platform-tests in |mach try|
  • [jgraham] Added support for saving and reusing try strings in |mach try|
  • [jgraham] Added Talos support to |mach try|
  • [jgraham] reftest and xpcshell test harnesses now take paths to multiple test locations on the command line and expose more functionality through mach
  • [jmaher] Kaustabh93 has runbydir live for mochitest-plain osx debug, and mochitest-chrome opt;  All that is left is mochitest-chrome debug and linux64 ASAN e10s.
  • [ato] Support for running Marionette tests using `mach try` in review


WebDriver (highlights)

  • [ato] Defined remote end steps for Element Clear command
  • [ato] Element location strategies have been outlined
  • [ato] Added steps to Base64 encode screen capture results
  • [ato] Because implementors have relied on prose from outdated sections, warnings were added to those sections which have yet to be redefined
  • + a ton of various fixes and rewording


  • [ato] findChildElement and findChildElements commands removed


  • [bc] Have been keeping the system running, helping triage bugs
  • [tomcat] Has been filing bugs, sent a September status report to an internal set of people.


  • bugs 924405/1199788 – Bugherder now uses Bugzilla’s native REST API and can use bugzilla api keys for authentication even when 2FA is enabled.

Firefox build system

  • [gps] Test packaging is now drastically faster in automation. 50% reduction across all platforms. This is a 5+ minute decrease on OS X build jobs!

Will Kahn-GreeneInput: Moving to Django 1.8

Over the course of 2015, we've been reworking large parts of the Fjord codebase to do the following:

  1. ditch jingo and friends and other libraries that deviate from typical Django and aren't active projects
  2. reduce complexity by moving closer to a "default/typical Django project"
  3. upgrade to Django 1.8

This blog post covers many grueling details, including the order we did things in, the design decisions we made, and some anecdotes.

Read more… (13 mins to read)

Ben HearsumImprovements to updates for Foxfooders

We've been providing on-device updates (that is to say: no flashing required) to users in the Foxfood program for nearly 6 months now. These updates are intended for users who are officially part of the Foxfooding program, but the way our update system works means that anyone who puts themselves on the right update channel can receive them. This makes things tough for us, because we'd like to be able to provide official Foxfooders with some extra bits and we can't do that while these populations are on the same update channel. Thanks to work that Rob Wood and Alexandre Lissy are doing, we'll soon be able to resolve this and get Foxfooders the bits they need to do the best possible testing.

To make this possible, we've implemented a short term solution that lets us only serve updates to official Foxfooders. When landed, they will send a hashed version of their IMEI as part of their update request. A list of the acceptable IMEI hashes will be maintained in Balrog (the update server), which lets us only serve an update if the incoming one matches one of the whitelisted ones.
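
As a rough illustration of the client-side idea (the hash algorithm and the request field below are assumptions, not the actual Gaia/Balrog implementation):

// Hypothetical sketch: hash the device IMEI before attaching it to the
// update request so the raw identifier never leaves the device.
var crypto = require('crypto');

function hashedImei(imei) {
  return crypto.createHash('sha256').update(imei).digest('hex');
}

// The update server would then compare this digest against its whitelist.
var deviceImei = '490154203237518';            // example IMEI
var updateQuery = '?imeiHash=' + hashedImei(deviceImei);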

To really make this work we need to detangle the current “dogfood” update channel. As I mentioned, it's currently being used by two distinct populations of users: those who are part of the official program, and those who aren't. In order to support both populations of users we'll be splitting the “dogfood” update channel into two:

  1. The new "foxfood" channel will be for users who are officially part of the Foxfooding program. Users on this channel will be part of the IMEI whitelist, and could receive FOTA or OTA updates.
  2. The “dogfood” channel will continue to serve OTA updates to anyone who puts themselves on it.

To transition, we will be asking folks who are officially part of the Foxfooding program to flash with a new image that switches them to the "foxfood" update channel. When this is ready to go, it will be announced and communicated appropriately.

Big thanks to everyone who was involved in this effort, particularly Rob Wood, who implemented the new whitelisting feature in Balrog, and Alexandre Lissy and Jean Gong, who went through multiple rounds of back and forth before we settled on this solution.

It's worth noting that this solution isn't ideal: sending IMEIs (even hashed versions) isn't something we prefer to do for both reasons of user privacy and protection of the bits. In the longer term, we'd like to look at a solution that wouldn't require IMEIs to be sent to us. This could come in the form of embedding or asking for credentials, and using those to access the updates. This type of solution would enhance user privacy and make it harder to get around the protections by brute forcing.

Air MozillaWeb QA Weekly Meeting

This is our weekly gathering of Mozilla's Web QA team, filled with discussion on our current and future projects, ideas, demos, and fun facts.

Dan MinorAutoland to Inbound

We’re currently putting the finishing touches on Autoland to Inbound from MozReview. We have a relatively humble goal for the initial version of this: we’d like to replace the current manual “checkin-needed” process done by a sheriff. Basically, if a patch is reviewed and a try run has been done, the patch author can flag “checkin-needed” in Bugzilla and at some point in the future, a sheriff will land the changes.

In the near future, if a “ship-it” has been granted and a try run has been made, a button to land to Inbound will appear in the Automation menu in MozReview. At the moment, the main benefit of using Autoland is that it will retry landings in the event that the tree is closed. Enforcing a try run should cut down on tree closures; analysis of closure data in the past has shown that a not-insignificant number of patches which end up breaking the tree had no try run performed.

A number of workflow improvements should be easy to implement once this is in place, for instance automatically closing MozReview requests to keep people’s incoming review requests sane and rewriting any r? in the commit summary to be r=.

James LongImmutable Data Structures and JavaScript

A little while ago I briefly talked about my latest blog rewrite and promised to go more in-depth on specific things I learned. Today I'm going to talk about immutable data structures in JavaScript, specifically two libraries immutable.js and seamless-immutable. There are other libraries, but the choice is conceptually between truly persistent data structures or copying native JavaScript objects, and comparing these two highlights the tradeoffs, no matter what specific library you choose [1]. I'll also talk a little about transit-js, which is a great way to serialize anything.

Very little of this applies specifically to Redux. I talk about using immutable data structures generally, but provide pointers for using it specifically in Redux. In Redux, you have a single app state object and update it immutably, and there are various ways to achieve this, each with tradeoffs. I explore this below.

One thing to think about with Redux is how you combine reducers to form the single app state atom; the default method that Redux provides (combineReducers) assumes that you are combining multiple values into a single JavaScript object. If you really want to combine them into a single Immutable.js object, for example, you would need to write your own combineReducers that does so. This might be necessary if you need to serialize your app state and you assume that it's entirely made up of Immutable.js objects.
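
For example, a hand-rolled combineReducers that keeps the app state in an Immutable.js Map might look roughly like this (a sketch, not Redux's actual API):

var Immutable = require('immutable');

// Sketch of a combineReducers variant whose combined state is an
// Immutable.js Map instead of a plain JavaScript object.
function combineImmutableReducers(reducers) {
  return function(state, action) {
    state = state || Immutable.Map();
    return state.withMutations(function(next) {
      Object.keys(reducers).forEach(function(key) {
        next.set(key, reducers[key](state.get(key), action));
      });
    });
  };
}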

Most of this applies to using immutable objects in JavaScript in general. It's a bit awkward sometimes because you're fighting the default semantics, and it can feel like you're juggling types. However, depending on your app and how you set things up, you can get a lot out of it.

Currently there is a proposal for adding immutable data structures to JavaScript natively, but it's not clear if it will work out yet. It would certainly remove most problems with using them in JavaScript currently.


Immutable.js comes from Facebook and is one of the most popular implementations of immutable data structures. It's the real deal; it implements fully persistent data structures from scratch using advanced things like tries to implement structural sharing. All updates return new values, but internally structures are shared to drastically reduce memory usage (and GC thrashing). This means that if you append to a vector with 1000 elements, it does not actually create a new vector 1001-elements long. Most likely, internally only a few small objects are allocated.
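
For example (the numbers here are just for illustration):

var Immutable = require('immutable');

var v1 = Immutable.List(Array.from({ length: 1000 }, function(_, i) { return i; }));
var v2 = v1.push(1000);

v1.size; // 1000 -- the original is untouched
v2.size; // 1001 -- a new value, but most internal nodes are shared with v1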

The advancement of structural-sharing data structures, greatly helped by the groundbreaking work of Okasaki, has all but shattered the myth that immutable values are too slow for Real Apps. In fact, it's surprising how many apps can be made faster with them. Apps which read and copy data structures heavily (to avoid being mutated by someone else) will easily benefit from immutable data structures (simply copying a large array once will diminish your performance wins from mutability).

Another example is how ClojureScript discovered that UIs are given a huge performance boost when backed by immutable data structures. If you're mutating a UI, you commonly touch the DOM more than necessary (because you don't know whether the value needs updating or not). React will minimize DOM mutations, but you still need to generate the virtual DOM for it to work with. When components are immutable, you don't even have to generate the virtual DOM; a simple === equality check tells you if it needs to update or not.

Is it Too Good to Be True? You might wonder why we don't use immutable data structures all the time with the benefits they provide. Well, some languages do, like ClojureScript and Elm. It's harder in JavaScript because they are not the default in the language, so we need to weigh the pros and cons.

Space and GC Efficiency

I already explained why structural sharing makes immutable data structures efficient. Nothing is going to beat mutating an array at an index, but the overhead of immutability isn't large. If you need to avoid mutations, they are going to beat copying objects hands-down.

In Redux, immutability is enforced. You won't see any updates on the screen unless you return a new value. There are big wins because of this, and if you want to avoid copying you might want to look at Immutable.js.

Reference & Value Equality

Let's say you internally stored a reference to an object, and called it obj1. Later on, obj2 comes down the pipe. If you never mutate objects, and obj1 === obj2 is true, you know absolutely nothing has changed. In many architectures, like React, this allows you to easily do powerful optimizations.

That's called "reference equality," where you can simply just compare pointers. But there's also the concept of "value equality," where you can check if two objects are identical by doing obj1.equals(obj2). When things are immutable, you treat objects as just values.

In ClojureScript everything is a value, and even the default equality operator performs the value equality check (as if === would). If you actually wanted to compare instances you would use identical?. The benefit of value equality with immutable data structures is that it can usually do the checks more performantly than a full recursive scan (if it shares structure it can skip that part).

So where does this come into play? I already explained how it makes optimizing React trivial. Just implement shouldComponentUpdate and check if the state is identical, and skip rendering if so.
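
A minimal sketch, assuming the component's item prop is an Immutable.js value:

var React = require('react');

var Item = React.createClass({
  shouldComponentUpdate: function(nextProps) {
    // Identical reference means nothing changed, so skip re-rendering.
    return this.props.item !== nextProps.item;
  },
  render: function() {
    return React.createElement('div', null, this.props.item.get('title'));
  }
});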

I also discovered that while using === with Immutable.js does not perform a value equality check (obviously, you can't override JavaScript's semantics), Immutable.js uses value equality for identities of objects. Anywhere that it wants to check if objects are the same, it uses value equality.

For example, keys of a Map object are value equality checked. This means I can store an object in a Map, and retrieve it later just by supplying an object of the same shape:

let map = Immutable.Map();
map = map.set(Immutable.Map({ x: 1, y: 2}), "value");
map.get(Immutable.Map({ x: 1, y: 2 })); // -> "value"

This has a lot of really nice implications. For example, let's say I have a function that takes a query object that specifies fields to pull from a server:

function runQuery(query) {
  // pseudo-code: somehow pass the query to the server and
  // get some results
  return fetchFromServer(serialize(query));
}

runQuery(Immutable.Map({
  select: 'users',
  filter: { name: 'James' }
}));

If I wanted to implement query caching, this is all I would have to do:

let queryCache = Immutable.Map();
function runQuery(query) {
  let cached = queryCache.get(query);
  if(cached) {
    return cached;
  } else {
    let results = fetchFromServer(serialize(query));
    queryCache = queryCache.set(query, results);
    return results;
  }
}
I can treat the query object as a value, and store the results with it as a key. Later on, if something runs the same query, I'll get back the cached results even if the query object isn't the same instance.

There are all sorts of patterns that value equality simplifies. In fact, I do the exact same technique when querying for posts.

JavaScript Interop

The major downside to Immutable.js data structures is the reason that they are able to implement all the above features: they are not normal JavaScript data structures. An Immutable.js object is completely different from a JavaScript object.

That means you must do map.get("property") instead of, and array.get(0) instead of array[0]. While Immutable.js goes to great lengths to provide JavaScript-compatible APIs, even they are different (push must return a new array instead of mutating the existing instance). You can feel it fighting the default mutation-heavy semantics of JavaScript.

The reason this makes things complicated is that unless you're really hardcore and are starting a project from scratch, you can't use Immutable objects everywhere. You don't really need to anyway for local objects of small functions. Even if you create every single object/array/etc as immutable, you're going to have to work with 3rd party libraries which use normal JavaScript objects/arrays/etc.

The result is that you never know if you are working with a JavaScript object or an Immutable one. This makes reasoning about functions harder. While it's possible to be clear where you are using immutable objects, you still pass them through the system into places where it's not clear.

In fact, sometimes you might be tempted to put a normal JavaScript object inside an Immutable map. Don't do this. Mixing immutable and mutable state in the same object will reap confusion.

I see two solutions to this:

  1. Use a type system like TypeScript or Flow. This removes the mental burden of remembering where immutable data structures are flowing through the system. Many projects are not willing to take this step though, as it requires quite a different coding style.

  2. Hide the details about data structures. If you are using Immutable.js in a specific part of your system, don't make anything outside of it access the data structures directly. A good example is Redux and it's single atom app state. If the app state is an Immutable.js object, don't force React components to use Immutable.js' API directly.

    There are two ways to do this. The first is to use something like typed-immutable and actually type your objects. By creating records, you get a thin wrapper around an Immutable.js object that provides an interface by defining getters based on the fields provided by the record type. Everything that just reads from the object can treat it like a normal JavaScript object. You still can't mutate it, but that's something you actually want to enforce.

    The second method is to provide a way to query objects and force anything that wants to read to perform a query. This doesn't work in general, but it works really well in the case of Redux because we have a single app state object, and you want to hide the data layout anyway. Forcing all React components to depend on the data layout means you can never change the actual structure of the app state, which you'll probably want to do over time.

    Queries don't have to be a sophisticated engine for deep object querying; they can just be simple functions. I'm not doing this in my blog yet, but imagine if I had a bunch of functions like getPost(state, id) and getEditorSettings(state), as sketched below. These all take state and return what I am “querying” just by using the function. I no longer care about where it lives within the state. The only problem is that I might still return an immutable object, so I might need to coerce that into a JavaScript object first or use a record type as described above.
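
Here is a sketch of what those query functions might look like (getPost and getEditorSettings are hypothetical names, and the state layout is assumed):

// Hypothetical selectors over an Immutable.js app state; components call
// these instead of reaching into the state layout directly.
function getPost(state, id) {
  var post = state.getIn(['posts', id]);
  // coerce to a plain object so callers don't need the Immutable.js API
  return post ? post.toJS() : null;
}

function getEditorSettings(state) {
  return state.get('editorSettings').toJS();
}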

To sum it all up: JavaScript interop is a real issue. Never reference JavaScript objects from Immutable ones. Interop issues can be mitigated with record types as provided with typed-immutable, which have other interesting benefits like throwing errors when mutating or reading invalid fields. Finally, if you're using Redux, don't force everything to depend on the app state structure, as you'll want to change it later. Abstract the data implementation away, which solves the problem with immutable interop.


There's another way to enforce immutability. The seamless-immutable project is a much lighter-weight solution that uses normal JavaScript objects. It does not implement new data structures, so there is no structural sharing, which means you will copy objects as you update them (however, you only need a shallow copy). You don't get any of the performance or value equality benefits explained above.

However, in return you get excellent JavaScript interop. All the data structures are quite literally JavaScript data structures. The difference is that seamless-immutable calls Object.freeze on them, so you cannot mutate them (and strict mode, which is the default with ES6 modules, will throw errors on mutation). Additionally, it adds a few methods to each instance to aid in updating the data, like merge, which returns a new object with the supplied properties merged in.
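
A small sketch of what that looks like in practice:

var Immutable = require("seamless-immutable");

var user = Immutable({ name: "James", prefs: { theme: "dark" } });

// user.prefs.theme = "light";   // frozen: throws a TypeError in strict mode

var updated = user.merge({ name: "J." }); // returns a new frozen object; // "James" -- the original is untouched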

It's missing a few common methods for updating immutable data structures, like Immutable.js' setIn and mergeIn methods, which make it easy to update a deeply nested object. But these are easily implemented and I plan to contribute them to the project.
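
For example, a setIn helper can be layered on top of merge in a few lines; here's a rough sketch (not the library's API):

// Rough sketch of a setIn helper for seamless-immutable objects,
// built on the merge() method the library already provides.
function setIn(obj, path, value) {
  var key = path[0];
  var patch = {};
  patch[key] = path.length === 1 ? value : setIn(obj[key], path.slice(1), value);
  return obj.merge(patch);
}

// setIn(Immutable({ a: { b: 1 } }), ["a", "b"], 2) -> Immutable({ a: { b: 2 } })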

It's impossible to mix immutable and mutable objects. seamless-immutable will deeply convert all objects to be immutable when wrapping an instance with it, and any added values are automatically wrapped. In practice Immutable.js works very similarly, where Immutable.fromJS deeply converts, as well as various methods like obj.merge. But obj.set does not automatically coerce, so you can store any data type you like. This is not possible with seamless-immutable, so you cannot accidentally store a mutable JavaScript object.

In my opinion, I would expect each library to behave the way they currently do; they have different goals. For example, because seamless-immutable automatically coerces, you cannot store any type that it is not aware of, so it won't play nicely with anything but basic builtin types (in fact, it does not even support Map or Set types right now).

seamless-immutable is a tiny library with big wins, but it also loses out on some fundamental advantages of immutable data structures, like value equality. If JavaScript interop is a huge concern for you, it's a fantastic solution. It's especially helpful if you're migrating existing code, as you can slowly make things immutable without rewriting every piece of code that touches them.

The Missing Piece: Serializing with transit-js

There's one last piece to consider: serialization. If you're using custom data types, JSON.stringify is no longer an option. But JSON.stringify was never very good anyway, you can't even serialize ES6 Map or Set instances.

transit-js is a great library written by David Nolen that defines an extensible data transfer format. By default you cannot throw Map or Set instances into it, but the crucial difference is that you can easily transcribe custom types into something that transit understands. In fact, the full code for serializing and deserializing the entire set of Immutable.js types is less than 150 lines long.

Transit is also much smarter about how it encodes types. For example, it knows that map keys might be complex types as well, so it's easy to tell it how to serialize Map types. Using the transit-immutable-js library (referenced above) to support Immutable.js, now we can do things like this:

let { toJSON, fromJSON } = require('transit-immutable-js');

let map = Immutable.Map();
map = map.set(Immutable.Map({ x: 1, y: 2 }), "value");

let newMap = fromJSON(toJSON(map));
newMap.get(Immutable.Map({ x: 1, y: 2 })); // -> "value"

Value equality combined with transit's easy-breezy map serialization gives us a simple way to use these patterns consistently across any system. In fact, my blog builds the query cache on the server when server-rendering and then sends that cache to the client, so the cache is still fully intact. This use case was actually the main reason I switched to transit.

It would be easy to serialize ES6 Map types as well, but if you have complex keys I'm not sure how you would use the unserialized instance without value equality. There are still probably uses for serializing them though.

If you have mixed normal JavaScript objects and Immutable.js objects, serializing with transit will also keep all those types intact. While I recommend against mixing them, transit will deserialize each object into the appropriate type, whereas using raw JSON means you'd convert everything to an Immutable.js type when deserializing (assuming you do Immutable.fromJS(JSON.parse(str))).

You can extend transit to serialize anything, like Date instances or any custom types. Check out transit-format for how it encodes types.
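
As a rough sketch of a custom handler (based on transit-js's write and read handler API; treat the details as approximate):

var transit = require("transit-js");

function Point(x, y) { this.x = x; this.y = y; }

// Tell transit how to write Points...
var writer = transit.writer("json", {
  handlers: transit.map([
    Point, transit.makeWriteHandler({
      tag: function() { return "point"; },
      rep: function(v) { return [v.x, v.y]; },
      stringRep: function() { return null; }
    })
  ])
});

// ...and how to read them back.
var reader = transit.reader("json", {
  handlers: {
    point: function(rep) { return new Point(rep[0], rep[1]); }
  }
});

reader.read(writer.write(new Point(1, 2))); // -> Point { x: 1, y: 2 }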

If you use seamless-immutable, you are already restricting yourself to only use builtin JavaScript (and therefore JSON-compatible) types, so you can just use JSON.stringify. While simpler, you lose out on the extensibility; it's all about tradeoffs.


Immutability provides a lot of benefits, but whether or not you need to use full-blown persistent data structures provided by Immutable.js depends on the app. I suspect a lot of apps are fine copying objects, as most of them are relatively small.

You win simplicity at the cost of features though; not only is the API a lot more limited, you don't get value equality either. Additionally, it may be hard later on to switch to Immutable.js if you find out you need the performance gains of structural sharing.

Generally I would recommend hiding the data structure details, especially if you use Immutable.js, from the outside world. Try to conform to JavaScript's default protocols for objects and arrays, i.e. and arr[0]. It should be possible to quickly wrap Immutable objects with these interfaces, but more research is needed.

This is especially true in Redux, where you will want to change how the app state is structured in the future. You have this problem even if your app state is a normal JavaScript object. Outside users shouldn't break if you move things around in the app state. Provide a way to query the app state structure instead, at least just by abstracting out data accesses with functions. More complex solutions like Relay and Falcor solve this too because a query language is the default way to access data.

[1] mori is another persistent data structure implementation (pulled out from ClojureScript), and React's immutability helpers is another library that simply shallow copies native JavaScript objects

About:CommunityParticipation Lab Notes: Volunteer vrs Contributor


As part of the Participation Lab’s efforts, we recently began conducting experiments on the Get Involved page, seeking to better understand how people navigate and connect (or fail to connect) to contribution opportunities. In preparation for this experiment we looked at some of the research that the team had conducted in recent years, and a number of their key learnings led us to a deeper conversation about the language and labels we use to invite contribution to Mozilla. Some of those learnings were:

  • People need to understand the what and the why before they’ll be interested in understanding how they can contribute to Mozilla.
  • We must make it immediately apparent that the Get Involved page is seeking volunteers and not employees.
  • We need to set clear expectations of the investment/journey needed to get involved or become a Mozillian.

This matched some other feedback, and as a result we decided to conduct a series of interviews to discover more of the ideas and prejudices that exist around the terms volunteer and contributor. Eighteen interviews covering diverse perspectives were conducted; these included core contributors, project leaders, alumni, community project leads, people working in open science and advocacy, and randomly selected people in my life who had never contributed to Mozilla. We discovered four interesting insights, shared below.

Project preference is ‘Contributor’


Overall, people working in, or already volunteering with, Mozilla were more comfortable with ‘contributor’, but agreed that unless your background is in a field like software engineering or science, where the term is already part of the language ecosystem, it might be challenging to grasp. I also noticed a trend in feedback acknowledging that once you’re regularly involved in a project you might no longer be objective, and that we may, in fact, be skewing even the most common understanding of these terms. One example given was the use of ‘paid contributor’ and ‘volunteer contributor’, which made no sense to most people outside of Mozilla.

The term ‘Volunteering’ is more universally associated with lending time and skills but…


While people seemed to generally understand that volunteering is about lending time and skills, I encountered sensitivities: the word ‘volunteer’ invoked feelings of being ‘charitable’, versus the more empowered feeling of being a ‘contributor’. I heard that ‘contribution’ lends a feeling of being ‘part of something’, while ‘volunteering’ felt more detached. One core contributor felt very, very strongly that volunteering was not the term for what they do at Mozilla.

‘Contribution’ feels more like giving a gift or donation


Feedback from non-technical contributors, and those I spoke with outside the Mozilla community, indicated that the term “contribution” is easy to misinterpret as being about donating funds, or something of greater significance than some people felt they could offer. When asked, a couple of people cited political campaigns and fundraisers as the most common association they had with the word ‘contribution’.

What’s in a Name?


It was also suggested that at Mozilla we should stop labouring over generalized terms like volunteer and contributor, and instead focus our energies on clarifying the ways people can help. One person felt that this opportunity exists in more explicit ‘role titles’, e.g. ‘Android Community Ambassador’. The hypothesis is that by providing role titles we can help people connect to opportunities that are resume-worthy, with the recognition that contribution is an opportunity. Of course, there are already examples of success with role names, demonstrated by the Mozilla Reps program and, most recently, Club Captains and Regional Leads in Webmaker Clubs.


We had an interesting suggestion that we make up our own name! Create a Mozilla-fied name for volunteers that makes volunteering at Mozilla a unique version of both terms. An inspiring example was the London Olympics, which called its volunteers ‘Games Makers’. What a Mozilla-fied version would be remains unclear :) but I’m sure we could come up with something. What do you think?

An additional lure of a Mozilla-fied name is the chance to help people recognize the amazingness of the community they would be joining, which MDN reported to be a factor in repeat contribution in their area – similar to how Olympic volunteers resonated with a name describing their impact.

So where from here?


There is an opportunity for continued experimentation and testing using the Get Involved page, and we would love to hear from you – contributor, volunteer, or a Mozilla-fied name?

What experiment do you think the Participation Lab should design next with these new insights?



Image: “Mozilla Summit Day 2” by Roland Tanglao is licensed under CC BY 2.0

About:CommunityMeet an MDN contributor: klez

Photo of Federico klez Culloca

Federico klez Culloca made his first few edits to MDN in 2013, but started contributing in earnest in 2014, as a localizer for Mozilla Italia. After a couple of months, he started attending the bi-weekly MDN Community meeting, and later the Learning Area meetings. After that, he concentrated his efforts on the Learning Area of MDN, especially the Glossary.

He says that working on MDN gives him a good idea of how an organization as big as Mozilla actually works, bringing together paid staff and volunteers.

His advice for MDN newcomers?

Don’t be afraid to make mistakes in good faith. Someone is always able and willing to correct them and help you learn from your errors. It’s a wiki, after all :-)

Thanks, klez, for all your contributions to MDN!

About:CommunityParticipation Lab Notes: The Power of Swag

It doesn’t take long, once you’ve entered the Mozilla community, to notice that swag is a big part of Mozilla. Stickers, t-shirts, and lanyards are everywhere, and for many Mozillians these things have become a kind of currency with emotional and physical value.

Photo by Doug Belshaw on Flickr / CC BY 2.0

In 2014, Mozilla spent over $150,000 on swag to engage contributors across four major initiatives: Maker Party, MozFest, Mozilla Reps, and Firefox Student Ambassadors (FSAs).

However we rarely stop to examine what we are learning about the results, benefits and challenges of this investment.

In order to surface and capture these insights the Participation Team interviewed four groups at Mozilla, for whom swag is a core part of their activities, and identified the most interesting insights and challenges faced by each group.

As a result, we discovered that many of the groups face similar challenges but have found distinct solutions and strategies for managing them. The two major insights were that by encouraging local production of swag, and creating swag that is tailored specifically to the needs of the community, costs can be minimized and value to the community increased.

Maker Party

In the past year, swag has become a much smaller part of Maker Party as the campaign has become shorter (17 days vs. the previous 2 months) and more contained. In 2014, however, thousands of people spent the summer throwing events, and swag was an integral part of growing and motivating the Maker Party community.

Insight: Swag Legitimizes Hosts & Events

Much more than a form of recognition for event hosts, in many communities swag is perceived as a vote of confidence from Mozilla that legitimizes both the host and the event. Many communities feel that if we are willing to support an event host and their event with physical things, it marks them as “officially sanctioned” by Mozilla, and this alignment with the brand dramatically increases the influence and reputation of the contributor and the event.

For example, in South America, a Maker Party host created Mozilla-branded mouse pads to legitimize their events in the eyes of local internet cafe owners, who let them use their space for free in exchange for the mouse pads.

Challenge: Cost of Shipping

For Maker Party, shipping swag across the world, often to extremely remote areas, was very expensive and problematic. Certain countries charge enormous taxes on clothing and have been known to detain parcels with t-shirts – to the detriment of volunteers who often cannot pay the high customs fees.


MozFest

While MozFest, as a short-term festival, is a bit different from the other examples, it identifies another way in which we use swag to build and support community.

Insight: Swag is Key to Partnerships

Every year MozFest partners with other like-minded organizations to put on the event. As part of this relationship, partners are offered the opportunity to distribute swag and some promotional material to attendees. As a result Mozilla can produce a small amount of swag like a tote bag and water bottle, and have partners add their swag to create fun gifts for participants that also act as promotional pieces for partners and Mozilla.

Challenges: The Right Swag

Finding swag that is re-usable and has value outside of the event is challenging, but water bottles and tote bags have proven popular and effective, and have the added benefit of reducing the event’s environmental footprint.

Mozilla Reps

Unlike Maker Party or MozFest, Mozilla Reps is a community where individuals participate in multiple ways over a sustained period of time. For this group, it is often the variety rather than quantity of swag that drives excitement.

Insight: Creating a Collector’s Culture

Within Reps, swag is a great way to acknowledge contributors and support events. However, in some circumstances, swag can come to be seen as a symbol that represents value and status in the community. Therefore, as more of a given swag item is produced, the value of each item diminishes, and a collector’s culture has developed. While rare swag is a very powerful tool for driving engagement and recognizing achievement in the Reps community, it is important to be aware of the number and variety of each item produced, and to carefully manage expectations to prevent swag from becoming an end in and of itself.

Challenge: Mitigating Expectations

As swag is a large part of Reps culture, it is important to be careful about the expectations that are set around its value. Limiting the kinds of official swag produced to t-shirts, stickers, and posters, and attaching a clear value to each, may be one way to keep expectations low and guard against them increasing.


Firefox Student Ambassadors

The Firefox Student Ambassadors program has many parallels to the Mozilla Reps program in its structure and its relationship to swag. However, by carefully controlling the value of swag and encouraging local production, many of the challenges faced by other groups have been avoided.

Insight: Careful Curation & Local Production

The FSAs have solved problems related to shipping swag, and reduced the “freebie” quality, by having FSAs create their own swag locally and then be reimbursed for the cost. Like the Reps program, they also have a collector’s culture, but they set formal expectations on the “value” of different kinds of swag, i.e. t-shirts are something you have to earn, stickers are freebies you can give away at your event, and posters are something you produce yourself.

Challenge: Tracking Designs

Because unique designs are a large part of what gives t-shirt swag its value, there is a struggle to find and keep track of the many ways t-shirts and designs are being used across Mozilla. To improve coordination, the FSA program suggested creating a central repository of t-shirt designs, along with guidance on what should be distributed and when, so that the use of swag can be better aligned across all of Mozilla.

Overall, across Mozilla a great deal is being learned and experimented with around swag, and there are many areas for growth and improvement. Our hope is that by surfacing these lessons and insights, we’ll spark new conversations and gain more insight into swag processes and how they can be improved. If you have experience or thoughts around swag at Mozilla, please share them in the comments here, or on the Participation Team Discourse page.

Air MozillaQuality Team (QA) Public Meeting

Quality Team (QA) Public Meeting This is the meeting where all the Mozilla quality teams meet, swap ideas, exchange notes on what is upcoming, and strategize around community building and...

Mark CôtéFixing MozReview's sore spots

MozReview was intentionally released early, with a fairly minimal feature set, and some ugly things bolted onto a packaged code-review tool. The code-review process at Mozilla hasn’t changed much since the project began—Splinter, a slightly fancier UI than dealing with raw diffs, notwithstanding. We knew this would be a controversial subject, with a variety of (invariably strong) opinions. But we also knew that we couldn’t make meaningful progress on a number of long-desired features, like autolanding commits and automatic code analysis, without moving to a modern repository-based review system. We also knew that, with the incredible popularity of GitHub, many developers expect a workflow that involves them pushing up commits for review in a rich web UI, not exporting discrete patches and looking at almost raw diffs.

Rather than spending a couple years off in isolation developing a polished system that might not hit our goals of both appealing to future Mozillians and increasing productivity overall, we released a very early product—the basic set of features required by a push-based, repository-centric code-review system. Ironically, perhaps, this has decreased the productivity of some people, since the new system is somewhat raw and most definitely a big change from the old. It’s our sincere belief that the pain currently experienced by some people, while definitely regrettable and in some cases unexpected, will be balanced, in the long run, by the opportunities to regularly correct our course and reach the goal of a world-class code review-and-submission system that much faster.

And so, as expected, we’ve received quite a bit of feedback. I’ve been noticing a pattern, which is great, because it gives us insight into classes of problems and needs. I’ve identified four categories, which interestingly correspond to levels of usage, from basic to advanced.

Understanding MozReview’s (and Review Board’s) models

Some users find MozReview very opaque. They aren’t sure what many of the buttons and widgets do, and, in general, are confused by the interface. This caught us a little off-guard but, in retrospect, is understandable. Review Board is a big change from Splinter and much more complex. I believe one of the sources of most confusion is the overall review model, with its various states, views, entry points, and exit points. Splinter has the concept of a review in progress, but it is a lot simpler.

We also had to add the concept of a series of related commits to Review Board, which on its own has essentially a patch-based model, similar to Splinter’s, that’s too limited to build on. The relationship between a parent review request and the individual “child” commits is the source of a lot of bewilderment.

Improving the overall user experience of performing a review is a top priority for the next quarter. I’ll explore the combination of the complexity of Review Board and the commit-series model we added in a follow-up post.

Inconveniences and lack of clarity around some features

For users who are generally satisfied by MozReview, at least enough to use it without getting too frustrated, there are a number of paper cuts and limitations that can be worked around but generate some annoyance. This is an area we knew we were going to have to improve. We don’t yet have parity with Splinter/Bugzilla attachments, e.g. reviewers can’t delegate review requests, nor can they mark specific files as reviewed. There are other areas where we can go beyond Bugzilla, such as being able to land parts of a commit series (this is technically possible in Bugzilla by having separate patches, but it’s difficult to track). And there are specific things that Review Board has that aren’t as useful for us as they could be, like the dashboard.

This will also be a big part of the work in the next quarter (at least).

Inability to use MozReview at all due to technological limitations

The single biggest item here is lack of support for git, particularly a git interface for hg repos like mozilla-central. There are many people interested in using MozReview, but their workflows are based around git using git-cinnabar. gps and kanru did some initial work around this in bug 1153053; fleshing this out into a proper solution isn’t a small task, but it seems clear that we’ll have to finish it regardless before too long, if we want MozReview to be the central code-review tool at Mozilla. We’re still trying to decide how this fits into the above priorities; more users is good, but making existing users happier is as well.

Big-ticket items

As mentioned at the beginning of this post, the main reason we’re building a new review tool is to make it repository-centric, that is, based around commits, not isolated patches. This makes a lot of long-desired tools and features much more feasible, including autoland, automatic static analysis, commit rewriting to automatically include metadata like reviewers, and a bunch of other things.

This has been a big focus for the last few months. We’ve had autoland-to-try for a little while now, and autoland-to-inbound is nearly complete. We have a generic library for static analysis with which we’ll be able to build various review bots. And, of course, there’s the one big feature we started with: the ability to push commits to MozReview instead of exporting standalone patches, which by itself is both more convenient and preserves more information.

After autoland-to-inbound we’ll be putting aside other big features for a little while to concentrate on general user experience so that people enjoy using MozReview, but rest assured we’ll be back here to build more powerful workflows for everyone.

Air MozillaProduct Coordination Meeting

Product Coordination Meeting Duration: 10 minutes This is a weekly status meeting, every Wednesday, that helps coordinate the shipping of our products (across 4 release channels) in order...

Christian HeilmannOf impostor syndrome and running in circles (part 3)

These are the notes of my talk at SmartWebConf in Romania. Part 1 covered how Impostor Syndrome cripples us in using what we hear about at conferences. It covered how our training and onboarding focuses on coding instead of human traits. In Part 2 I showed how many great things browsers do for us we don’t seem to appreciate. In this final part I’ll explain why this is and why it is a bad idea. This here is a call to action to make yourself feel better. And to get more involved without feeling inferior to others.

This is part 3 of 3.

Part 2 of this series ended with the explanation that JavaScript is not fault tolerant, and yet we rely on it for almost everything we do. The reason is that we want to control the outcome of our work. It feels dangerous to rely on a browser to do our work for us. It feels great to be in full control. We feel powerful being able to tweak things to the tiniest detail.

JavaScript is the duct-tape of the web

There is no doubt that JavaScript is the duct-tape of the web: you can fix everything with it. It is a programming language and not a descriptive language or markup. We have all kinds of logical constructs to write our solutions in. This is important. We seem to crave programmatic access to the things we work with. That explains the rise of CSS preprocessors like Sass. These turn CSS into JavaScript. Lately, PostCSS even goes further in merging these languages and ideas. We like detailed access. At the same time we complain about complexity.

No matter what we do – the problem remains that on the client side JavaScript is unreliable, because it is fault intolerant. Any single error – even those not caused by you – results in our end users getting an empty screen instead of the solution they came for. There are many ways JavaScript can fail. Stuart Langridge maintains a great flow chart on that called “Everyone has JavaScript, right?”.
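To make that concrete, here is a small sketch of my own (not from the talk) showing the difference between letting a script error take the page down and guarding an enhancement so the underlying HTML keeps working:

// enhance.js – an illustrative sketch, not code from the talk.
// Unguarded: if anything below throws (a typo, a missing API, a third-party
// script mangled in transit), nothing after the error runs, and a page that
// depends on this script for rendering shows the user a blank screen.
// Guarded: the enhancement is wrapped, so a failure leaves the plain,
// working HTML and CSS underneath untouched.
try {
  var menus = document.querySelectorAll('.menu');
  for (var i = 0; i < menus.length; i++) {
    menus[i].classList.add('enhanced');
  }
} catch (e) {
  console.error('Enhancement failed; the basic page still works', e);
}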

There is a bigger issue with fixing browser issues with JavaScript. It makes you responsible and accountable for things browsers do. You put it onto yourself to fix the web, and now it is your job to keep doing that – for ever, and ever and ever…

Taking over things like page rendering, CSS support and page loading with JavaScript feels good as it fixes issues. Instead of a slow page load we can show a long spinner. This makes us feel good, but doesn’t help our end users much. Especially when the spinner has no timeout error case – like browser loading has.

Fixing a problem with JavaScript is fun. It looks simple enough and it removes an unknown browser support issue. It allows us to concentrate on building bigger and better things. And we don’t have to worry about browser issues.

It is an inspiring feeling to be the person who solved a web-wide issue. It boosts our ego to see that people rely on our brains to solve issues for them. It is great to see them become more effective and faster and free to build the next Facebook.

It gets less amazing when you want to move on and do something else. And when people have outrageous demands or abuse your system. Remy Sharp lately released a series of honest and important blog posts on that matter. “The toxic side of free” is a great and terrifying read.

Publishing something new in JavaScript as “open source” is easy these days. GitHub made it more or less a one-step process. And we get a free wiki, issue tracker and contribution process with it to boot. That, of course, doesn’t mean we can’t make this much more complex if we wanted to. And we do, as Eric Douglas explains.

open source is free as in puppy

Releasing software or solutions as open source is not the same as making it available for free. It is the start of a long conversation with users and contributors. And that comes with all the drama and confusion that is human interaction. Open Source is free as in puppy. It comes with responsibilities. Doing it wrong results in a badly behaving product and community around it.

Help stop people falling off the bleeding edge

Image: 300 cliff

If you embrace the idea that open source and publishing on the web is a team effort, you realise that there is no need to be on the bleeding edge. On the contrary – any “out there” idea needs a group of peers to review and use it to get data on how sensible the idea really is. We tend to skip that part all too often. Instead of giving feedback or contributing to a solution we discard it and build our own. This means all we have to do is deal with code and not people. It also means we pile on to the already unloved and unused “amazing solutions” for problems of the past that litter the web.

The average web page is 2MB with over 100 HTTP requests. The bulk of this is images, but there are also a lot of magical JS and CSS solutions in the mix.

If we consider that the next growth of the internet is not in the countries we are in, but in emerging places with shaky connectivity, our course of action should be clear: clean up the web.

Of course we need to innovate and enhance our web technology stack. At the same time it is important to understand that the web is an unprecedented software environment. It is not only about what we put in, it is also about what we can’t afford to lose. And the biggest part of this is universal access. That also means it needs to remain easy to turn from consumer into creator on the web.

If you watch talks about internet usage in emerging countries, you’ll learn about amazing numbers and growth predictions.

You also learn about us not being able to control what end users see. Many of our JS solutions will get stripped out. Many of our beautiful, crafted pictures optimised into a blurry mess. And that’s great. It means the users of the web of tomorrow are as empowered as we were when we stood up and fought against browser monopolies.

So there you have it: you don’t have to be the inventor of the next NPM module to solve all our issues. You can be, but it shouldn’t make you feel bad that you’re not quite interested in doing so. As Bill Watterson of Calvin and Hobbes fame put it:

We all have different desires and needs, but if we don’t discover what we want from ourselves and what we stand for, we will live passively and unfulfilled.

So, be active. Don’t feel intimidated by how clever other people appear to be. Don’t be discouraged if you don’t get thousands of followers and GitHub stars. Find what you can do, how you can help the merging of bleeding edge technologies and what goes into our products. Above all – help the web get leaner and simpler again. This used to be a playground for us all – not only for the kids with the fancy toys.

You do that by talking to the messy kids. Those who build too complex and big solutions for simple problems. Those doing that because clever people told them they have to use all these tools to build them. The people on the bleeding edge are too busy to do that. You can. And I promise, by taking up teaching you end up learning.

John FordTaskcluster Component Loader

Taskcluster is the new platform for building Automation at Mozilla.  One of the coolest design decisions is that it's composed of a bunch of limited scope, interchangeable services that have well defined and enforced apis.  Examples of services are the Queue, Scheduler, Provisioner and Index.  In practice, the server-side components roughly map to a Heroku app.  Each app can have one or more web worker processes and zero or more background workers.

Since we're building our services with the same base libraries we end up having a lot of duplicated glue code.  During a set of meetings in Berlin, Jonas and I were lamenting about how much copied, pasted and modified boilerplate was in our projects.

Between the API definition file and the command line to launch a program invariably sits a bin/server.js file for each service.  This script basically loads up our config system, loads our Azure Entity library, loads a Pulse publisher, a JSON Schema validator and a Taskcluster-base App.  Each background worker has its own bin/something.js which basically has a very similar loop.  Services with unit tests have a test/helper.js file which initializes the various components for testing.  Furthermore, we might have things initialize inside of a given before() or beforeEach().

The problem with having so much boilerplate is twofold.  First, each time we modify one service's boilerplate, we add maintenance complexity and risk because of that subtle difference from the other services.  We'd eventually end up with hundreds of glue files which do roughly the same thing, but accomplish it completely differently depending on which service they're in.  The second problem is that within a single project, we might load the same component ten ways in ten places, including in tests.  Having a single codepath that we can test ensures that we're always initializing the components properly.

During a little downtime between sessions, Jonas and I came up with the idea to have a standard component loading system for taskcluster services.  Being able to rapidly iterate and discuss in person made the design go very smoothly and in the end, we were able to design something we were both happy with in about an hour or so.

The design we took is to have two 'directories' of components.  One is the project-wide set of components, which has all the logic about how to build complex things like validators and entities.  These components can optionally have dependencies.  In order to support different values for different environments, we force the main directory to declare which 'virtual dependencies' it requires.  They are declared as a list of strings.  The second level of component directory is where these 'virtual dependencies' are given their values.

Both virtual and concrete dependencies can either be 'flat' values or objects.  If a dependency is a string, number, function, Promise or an object without a setup property, we just give that exact value back as a resolved Promise.  If the component is an object with a setup property, we initialize the dependencies specified by its 'requires' list property and pass those values as properties on an object to the function at the 'setup' property.  The value that function returns is stored as a resolved Promise.  Only non-flat components can declare dependencies on other components.

Using code is a good way to show how this loader works:

// lib/components.js

let loader = require('taskcluster-base').loader;
let fakeEntityLibrary = require('fake');

module.exports = loader({
  fakeEntity: {
    requires: ['connectionString'],
    setup: async deps => {
      let conStr = await deps.connectionString;
      return fakeEntityLibrary.create(conStr);
    },
  },
});
In this file, we're building a really simple component directory which only contains a contrived 'fakeEntity'.  This component depends on having a connection string to fully configure itself.  Since we want to use this code in production, development and testing, we don't want to bake configuration into this file, so we force the thing using this directory to give us a way to configure what the connection string is.

// bin/server.js
let config = require('taskcluster-base').config('development');
let loader = require('../lib/components.js');

let load = loader({
  connectionString: config.entity.connectionString,
});

let configuredFakeEntity = await load('fakeEntity');
In this file, we're providing a simple directory that satisfies the 'virtual' dependencies we know need to be fulfilled before initialization can happen.

Since we're creating a dependency tree, we want to avoid having cyclic dependencies.  I've implemented a cycle checker which ensures that you cannot configure a cyclical dependency.  It doesn't rely on the call stack being exceeded from infinite recursion either!
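For the curious, the check can be a straightforward depth-first walk over the 'requires' lists; the sketch below is my own illustration, not the actual taskcluster-base implementation:

// An illustrative cycle check over {name: {requires: [...]}} entries;
// not the real taskcluster-base code.
function assertNoCycles(components) {
  let visiting = new Set(); // names on the current walk
  let done = new Set();     // names already proven cycle-free

  let visit = (name, path) => {
    if (done.has(name)) {
      return;
    }
    if (visiting.has(name)) {
      throw new Error('cyclic dependency: ' + path.concat(name).join(' -> '));
    }
    visiting.add(name);
    let component = components[name] || {};
    (component.requires || []).forEach(dep => visit(dep, path.concat(name)));
    visiting.delete(name);
    done.add(name);
  };

  Object.keys(components).forEach(name => visit(name, []));
}

// assertNoCycles({a: {requires: ['b']}, b: {requires: ['a']}}); // throws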

This is far from being the only thing that we figured out improvements for during this chat.  Two other problems that we were able to talk through were splitting out taskcluster-base and having a background worker framework.

Currently, taskcluster-base is a monolithic library.  If you want our Entities at version 0.8.4, you must take our config at 0.8.4 and our rest system at 0.8.4.  This is great because it forces services to move forward together.  This is also awful because sometimes we might need a new stats library but can't afford the time to upgrade a bunch of Entities.  It also means that if someone wants to hack on our stats module, they'll need to learn how to get our Entities unit tests to work to get a passing test run on their stats change.

Our plan here is to make taskcluster-base a 'meta-package' which depends on a set of taskcluster components that we support working together.  Each of the libraries (entities, stats, config, api) will be split out into its own package using git filter-branch to maintain history.  This is just a bit of simple legwork to ensure that the splitting out goes smoothly.

The other thing we decided on was a standardized background looping framework.  A lot of background workers follow the pattern "do this thing, wait one minute, do this thing again".  Instead of each service implementing this in its own special way for each background worker, what we'd really like is to have a library which does all the looping magic itself.  We can even have nice things like a watchdog timer to ensure that the loop doesn't get stuck.
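As a sketch of what such a helper could look like (my own illustration, not an existing taskcluster library; names like checkPendingTasks are made up):

// A hypothetical background-looping helper with a watchdog – illustrative only.
function loop({task, intervalMs, watchdogMs}) {
  let iterate = async () => {
    // If a single iteration hangs past watchdogMs, give up loudly so the
    // process can be restarted rather than silently wedging.
    let watchdog = setTimeout(() => {
      console.error('watchdog fired: iteration is stuck');
      process.exit(1);
    }, watchdogMs);

    try {
      await task();
    } catch (err) {
      console.error('iteration failed', err);
    } finally {
      clearTimeout(watchdog);
    }

    // "do this thing, wait, do this thing again"
    setTimeout(iterate, intervalMs);
  };
  iterate();
}

// loop({task: checkPendingTasks, intervalMs: 60 * 1000, watchdogMs: 5 * 60 * 1000});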

Once the PR has landed for the loader, I'm going to be converting the provisioner to use this new loader.  This is a part of a new effort to make Taskcluster components easy to implement.  Once a bunch of these improvements have landed, I intend to write up a couple blog posts on how you can write your own Taskcluster service.

Joel MaherSay hi to Gabriel Machado - a newer contributor on the Perfherder project

Earlier this summer, I got an email from Gabriel asking how he could get involved in Automation and Tools projects at Mozilla.  This was really cool and I was excited to see Gabriel join the Mozilla community.  Gabriel is known as :goma on IRC, and based on his interests and the projects with mentors available, hacking on Treeherder was a good fit.  Gabriel also worked on some test cases for the Firefox-UI-Tests.  You can see the list of bugs he has been involved with and check him out on Github.

While it is great to see a contributor become more comfortable in their programming skills, it is even better to get to know the people you are working with.  As I have done before, I would like to take a moment to introduce Gabriel and let him do the talking:

Tell us about where you live –

I lived in Durham-UK since last year, where I was doing an exchange program. About Durham I can say that is a lovely place, it has a nice weather, kind people, beautiful castles and a stunning “medieval ambient”.  Besides it is a student city, with several parties and cultural activities.

I moved back to Brazil 3 days ago, and next week I’ll move to the city of Ouro Preto to finish my undergrad course. Ouro Preto is another beautiful historical city, very similar to Durham in some sense. It is a small town with a good university and stunning landmarks. It’s a really great place, designated a world heritage site by UNESCO.

Tell us about your school –

In 2 weeks I’ll begin my third year in Computer Science at UFOP(Federal University of Ouro Preto). It is a really good place to study computer science, with several different research groups. In my second year I earned a scholarship from the Brazilian Government to study in the UK. So, I studied my second year at Durham University. Durham is a really great university, very traditional and it has a great infra-structure. Besides, they filmed several Harry Potter scenes there :P

Tell us about getting involved with Mozilla –

In 2014 I was looking for some open source project to contribute to when I found the Mozilla Contributing Guide. It is a really nice guide and helped me a lot. I worked on some minor bugs during the year. In July of 2015, as part of my scholarship to study in the UK, I was supposed to do a small final project and I decided to work with some open source project, instead of academic research. I contacted jmaher by email and asked him about it. He answered me really kindly and guided me to contribute to Treeherder. Since then, I’ve been working with the A-Team folks, working with Treeherder and Firefox-Ui-Tests.

I think Mozilla does a really nice job helping new contributors, even the new ones without experience like me. I used to think that I should be a great hacker, with tons of programming knowledge, to contribute to an open source project. Now, I think that contributing to an open source project is a nice way to become a great hacker with tons of programming knowledge.

Tell us what you enjoy doing –

I really enjoy computers. Usually I spent my spare time testing new operating systems, window managers or improving my Vim. Apart from that, I love music. Specially instrumental.  I play guitar, bass, harmonica and drums and I really love composing songs. You can listen some of my instrumental musics here:

Besides, I love travelling and meeting people from different cultures. I really like talking about small particularities of different languages.

Where do you see yourself in 5 years?

I hope to be a software engineer, working with great and interesting problems and contributing to a better (and free) Internet.

If somebody asked you for advice about life, what would you say?

Peace and happiness comes from within, do not seek it without.

Please say hi to :goma on irc in #treeherder or #ateam.

Byron Joneshappy bmo push day!

the following changes have been pushed to

  • [1207926] change treeherder@bots.tld to orangefactor@bots.tld
  • [1204683] Add whoami endpoint
  • [1208135] security not being mailed when bugs change core-security-release state
  • [1199090] add printable recovery 2fa codes
  • [1204623] timestamp on flags should reference the latest updated activity, not the first
  • [1209745] Update get_permissions.html.tmpl to reflect new self-canconfirm process

discuss these changes on

Filed under: bmo, mozilla

Daniel Stenberglibbrotli is brotli in lib form

Brotli is this new cool compression algorithm that Firefox now has support for in Content-Encoding, that Chrome will support soon too, and that Eric Lawrence wrote up this nice summary about.

So I’d love to see brotli supported as a Content-Encoding in curl too, and then we just basically have to write some conditional code to detect the brotli library, add the adaption code for it and we should be in a good position. But…

There is (was) no brotli library!

It turns out the brotli team just writes their code to be linked with their tools, without making any library nor making it easy to install and use for third party applications.

Image: an unmotivated circle saw

We can’t have it like that! I rolled up my imaginary sleeves (imaginary since my swag tshirt doesn’t really have sleeves) and I now offer libbrotli to the world. It is just a bunch of files and a build system that sucks in the brotli upstream repo as a submodule and then it builds a decoder library (brotlidec) and an encoder library (brotlienc) out of them. So there’s no code of our own here. Just building on top of the great stuff done by others.

It’s not complicated. It’s nothing fancy. But you can configure, make and make install two libraries and I can now go on and write a curl adaption for this library so that we can get brotli support for it done. Ideally, this (making a library) is something the brotli project will do on their own at some point, but until they do I don’t mind handling this.

As always, dive in and try it out, file any issues you find and send us your pull-requests for everything you can help us out with!

Sara HaghdoostiA bridge between learning and advocacy

How do we bring together learning and advocacy? I wanted to start this conversation by giving an example of a campaign I’ve worked on in the past.

Case Study 1: How much do you know about Iran quiz


This campaign happened at (which means ‘let’s go’ in Farsi, and whose mission was to support Iranian social innovators). We were running a campaign to avoid war with Iran. What we found was that we were getting a lot of emails from our members who were confusing Iran with other groups or countries around the world. For example, our members would write to us and say they wouldn’t take action anymore until women in Iran were allowed to drive. It was a clear case of misinformation, as women in Saudi Arabia are restricted from driving, not women in Iran.


Our goal was to shatter misinformation about Iran in a way that was fun and that didn’t make our members – 90% of whom were not from an Iranian background – defensive.

The result:

As a result we launched a 10-question, BuzzFeed-style quiz titled ‘How much do you know about Iran?’ We sent the quiz to our email list of 70,000+ people. The quiz was taken about 20,000 times and had a completion rate close to 90%. After the success of the quiz in our network, it came to the notice of Upworthy, who shared the quiz, and it then reached over 100,000 people.

The majority of people got about 50% of the questions in the quiz right. This was intentional, as we wanted our members to be challenged and to realize that some of their assumptions were wrong. Our assumption was that doing so in the format of a fun quiz would be less confronting and more likely to sink in than doing it through a ‘mythbusting checklist’. The feedback we received from members tended to indicate we were right:

Great!  Offerings like this which let us see the people and culture and humanity of Iran are the way.  Thanks! – Pat

Sara, very nice introduction to Iran. It’s a good way to begin the process to break down barriers.  -Dan

Great quiz…I missed two.  Sent it on to about 30 other folks. – Bob

The quiz was a great way to stimulate my thinking about Iran and correcting my misconceptions and adding to my knowledge about its products, way of life, historical events, etc.  Would love to see more.  – Sondra

How does this relate to our work at Mozilla?

I wanted to use this case study to illustrate one tactic for using online organizing to educate people at scale. Right now we’re only conceptualizing the advocacy list as a way to mobilize people around legislative action – but in order to really build relationships and deep connections we can’t just ask people to take action; we need to think about how we can serve our community.

That’s why I think it would be great for us to come together and create some learning goals for our list. For example – after a year of being on the list, do we want people to be able to articulate what net neutrality is? Do we want them to know 5 things they can do to secure their privacy?

While the strategy and benchmarks are something we need to develop together, here are some tactical ideas to help illustrate the potential of this collaboration:

– Sending people a list of fun facts/anecdotes that relate to the open web that they can talk about with their families during Thanksgiving

– Creating a list of gift ideas that will help people learn more about the open web during the holiday season.

– Running a campaign to ask people to make privacy a new year’s resolution and creating small things they can do each week to realize that resolution.