Laura de Reynal: Snapchat

“If Snapchat was a phone I would definitely buy it. It’s like an iPhone, it’s really fast, you can see everyone’s life and see what is going on in your world.”

16-year-old girl, Chicago.


Filed under: Mozilla, Quotes, Research

Arky: Autonomous Mozilla Stumbler with Android

Mozilla Location Service (MLS) is an open source service that determines location based on network infrastructure like WiFi access points and cell towers. The project has released client applications to crowdsource a large dataset of GSM, cell tower, and WiFi data. In this blog post I'll explore an idea: re-purposing an old Android mobile phone as an autonomous MozStumbling device that could easily be deployed in public transport, in taxis, or with a friend who is driving across the country.

Bill of Materials

  1. Android Mobile phone.
  2. Mozstumbler Android App.
  3. Taskbomb Android App.
  4. Mini-USB cable.
  5. GSM SIM (With mobile data).
  6. Car lighter socket power adapter.
  7. Powerbank (optional).

Putting it together

In this setup, I am using a rugged Android phone running Android Gingerbread. It can take a lot of punishment; leaving a phone in an overheated car is a recipe for disaster.

From the Android settings I enabled installation of non-market applications ('Unknown Sources'), connected the phone to my computer using the Mini-USB cable, transferred the previously downloaded app (.apk) packages to the phone, and installed the Mozstumbler and Taskbomb applications.
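
If you prefer the command line, the transfer-and-install step can also be done with adb. This is only a rough sketch: it assumes USB debugging is enabled on the phone, adb is installed on the computer, and the .apk filenames stand in for whatever builds you actually downloaded.

  adb devices                  # confirm the phone is visible over USB
  adb install MozStumbler.apk  # install the Mozstumbler package
  adb install TaskBomb.apk     # install the Taskbomb package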

Android homescreen showing Mozstumbler and Taskbomb icons

I configured the Mozstumbler application to start on boot using the Taskbomb app, and also configured Mozstumbler to upload over the mobile data connection. The phone has a GSM SIM card with a data plan. I made sure WiFi, GPS, and cellular data were all enabled.

To prevent the phone from downloading software updates and using up all the data, I disabled all software updates. I also disabled all notifications, both audio and LED. Finally, I locked the phone by setting a secret code. Now the device is ready for deployment. The phone is plugged into the car's lighter charging socket to keep it powered up; you can also use a power bank where charging options are not available.

I'm planning to use this autonomous Mozstumbler hack soon. Perhaps I should ask Thejesh to use it on his epic trip across India.

Hannes Verschore: Keep on growing

I haven’t had the time to create a blog post about the last year, with numbers, describing the changes that have happened over the months. Hopefully soon, but here’s a small teaser:

mysql dump size (in gb)

Dave Hunt: Custom Firefox Cufflinks

If you’re interested in a pair of custom Firefox logo cufflinks for $25 plus postage then please get in touch. I’ve been in contact with CuffLinks.com, and if I can place an initial order of at least 25 pairs then the mold and tooling fee will be waived. Check out their website for examples of their work. The Firefox cufflinks would be contoured to the outline of the logo, and the logo itself would be screen printed in full colour. If these go well, I’d also love a pair of dino head cufflinks to complement them!

Unless I can organise with CuffLinks.com to ship to multiple recipients and take payments separately, I will accept payments via Paypal and take care of sending the cufflinks out. As an example of postage costs, sending to USA would cost me approximately $12 per parcel. Let me know if you’re interested via a comment on this post, e-mail (find me in the phonebook), or IRC (:davehunt).

Karl Dubost: Vendor Prefixes And Market Reality

Through the Web Compat Twitter account, I happened to read a thread about Apple introducing a new vendor prefix. 🎳 The message by Alfonso Martínez L. starts out a bit rough:

The mess caused by vendor prefixes on the wild is not enough, so we have new -apple https://www.webkit.org/blog/3709/using-the-system-font-in-web-content/ … @jonathandavis

Going to Apple's blog post before reading the rest of the thread gives a bit more background.

Web content is sometimes designed to fit in with the overall aesthetic of the underlying platform which it is being rendered on. One of the ways to achieve this is by using the platform’s system font, which is possible on iOS and OS X by using the “-apple-system” CSS value for the “font-family” CSS property. On iOS 9 and OS X 10.11, doing this allows you to use Apple’s new system font, San Francisco. Using “-apple-system” also correctly interacts with the font-weight CSS property to choose the correct font on Apple’s latest operating systems.

Here I understand the desire to use the system font, but I don't understand the need for the new -apple-system, especially when the next paragraph says:

On platforms which do not support “-apple-system” the browser will simply fall back to the next item in the font-family fallback list. This provides a great way to make sure all your users get a great experience, regardless of which platform they are using.

I wonder what the font-family cascade is not already doing that requires a new prefix. They explain later on by providing this information:

Going beyond the system font, iOS has dynamic type behavior, which can provide an additional level of fit and finish to your content.

font: -apple-system-body
font: -apple-system-headline
font: -apple-system-subheadline
font: -apple-system-caption1
font: -apple-system-caption2
font: -apple-system-footnote
font: -apple-system-short-body
font: -apple-system-short-headline
font: -apple-system-short-subheadline
font: -apple-system-short-caption1
font: -apple-system-short-footnote
font: -apple-system-tall-body

What I smell here is pushing the semantics of the text into the font declaration; I believe it will not end well. But that's not what I want to talk about here.

Vendor Prefixes Principle

Vendor prefixes were created to give vendors a safe place to experiment with new features. It's a good idea on paper. It can work well, specifically when the technology is not yet mature and details need to be ironed out. It would be perfectly acceptable if the feature were only available in beta and alpha versions of rendering engines. That would de facto stop the proliferation of these properties on everyday Web sites, and it would still leave room for experimenting.

Here the feature is not proposed as an experiment but as a way for Web developers and designers to use a new feature on Apple platforms. It's proposed as a competitive advantage and a marketing tool for enticing developers to the cool new thing. And before I'm accused of blaming only Apple: all vendors do this in some fashion.

Let's assume that Apple acts in good faith. The real issue is not easy to understand unless you work daily on Web compatibility across the world.

Enter the market reality field.

Flexbox And Gradients In China And Japan

tori in Kamakura

With the Web Compat team, we have lately been working a lot on Chinese and Japanese mobile Web site compatibility issues. The current mobile market in China and Japan is a smartphone ecosystem largely dominated by iOS and Android. It means that if your site uses -webkit- vendor prefixes, you are basically on the safe side for most users, but not all of them.

What is happening here is interesting. Gradients and flexbox went through syntax changes, and the standard syntax is really different from the original -webkit- syntax. These are two very useful and very powerful features of the Web platform, flexbox especially. In near-monopolistic markets such as China and Japan, the end result was Web developers jumping on the initial version of these shiny new and useful features to create their Web sites.

Fast forward a couple of years, and the economic reality of the Web starts playing its cards. Other vendors have caught up with the features, the standards process took place, and the new world of interoperability is all rosy, with common implementations in all rendering engines apart from a couple of minor details.

Web developers should all be rushing to adjust their Web sites to add at least the standard properties. This is not happening. Why? Because the benefits are not perceived by Web developers, project managers, and site owners. Adjusting a Web site has a cost in editing and testing. Who bears this cost, and for what reasons?

When we mention that it would allow users with different browsers to use the Web site, the answer is straightforward: "Browser X is not in our targeted list of browsers." or "Browser Y doesn't appear in our stats." We all know that browser Y can't appear in the stats because it's not usable on the site (a good example of that is MBGA).

mbga rendering on Gecko mobile

Dropping Vendor Prefixes

Adding prefixless versions of properties to rendering engines helps, but it does not magically fix everything for the Web compatibility story. That's the mistake Timothy Hatcher (WebKit Developer Experience Manager at Apple) makes in:

@AlfonsoML We also unprefixed 4 dozen properties this year. https://developer.apple.com/library/mac/releasenotes/General/WhatsNewInSafari/Articles/Safari_9.html#//apple_ref/doc/uid/TP40014305-CH9-SW28

This is cool, and I applaud Apple for it. I wish it had happened a bit earlier. Why doesn't it solve the Web compatibility issue? Because the prefixed versions of the properties still exist and are supported. Altogether, we then sing the tune "Yeah, Apple (and Google), let's drop the prefixed versions of these properties!" Ooooh, hear me, I so wish it were possible. But Apple and Google can't do that, for the exact same reason that non-WebKit browsers can't exist in the Chinese and Japanese markets: they would instantly break a large number of high-profile Web sites.

We have reached the point where browser vendors have to start implementing or aliasing these WebKit prefixes just to let their users browse the Web; see Mozilla in Gecko and Microsoft in Edge. The same thing is happening all over again: in the past, browser vendors had to implement the quirks of IE to be compatible with the Web. As much as I hate it, we will have to specify the current -webkit- prefixes so they can be implemented uniformly.

Web Compatibility Responsibility

Microsoft is involved in the Web Compatibility project. I would like Apple and Google to be fully involved and committed to this project too. The mess we are all in is due to WebKit prefixes, and the leading position Apple and Google hold in the mobile market means they can really help. This mess killed Opera Presto on mobile, which had to switch to Blink.

Let's all create a better story for the Web and understand fully the consequences of our decisions. It's not only about technology, but also economic dynamics and market realities.

Otsukare!

Jesse Ruderman: Releasing jsfunfuzz and DOMFuzz

Today I'm releasing two fuzzers: jsfunfuzz, which tests JavaScript engines, and DOMFuzz, which tests layout and DOM APIs.

Over the last 11 years, these fuzzers have found 6450 Firefox bugs, including 790 bugs that were rated as security-critical.

I had to keep these fuzzers private for a long time because of the frequency with which they found security holes in Firefox. But three things have changed that have tipped the balance toward openness.

First, each area of Firefox has been through many fuzz-fix cycles. So now I'm mostly finding regressions in the Nightly channel, and the severe ones are fixed well before they reach most Firefox users. Second, modern Firefox is much less fragile, thanks to architectural changes to areas that once oozed with fuzz bugs. Third, other security researchers have noticed my success and demonstrated that they can write similarly powerful fuzzers.

My fuzzers are no longer unique in their ability to find security bugs, but they are unusual in their ability to churn out reliable, reduced testcases. Each fuzzer alternates between randomly building a JS string and then evaling it. This construction makes it possible to make a reproduction file from the same generated strings. Furthermore, most DOMFuzz modules are designed so their functions will have the same effect even if other parts of the testcase are removed. As a result, a simple testcase reduction tool can reduce most testcases from 3000 lines to 3-10 lines, and I can usually finish reducing testcases in less than 15 minutes.

The ease of getting reduced testcases lets me afford to report less severe bugs. Occasionally, one of these turns out to be a security bug in disguise. But most importantly, these bug reports help me establish positive relationships with Firefox developers, by frequently saving them time.

A JavaScript engine developer can easily spend a day trying to figure out why a web site doesn't work in Firefox. If instead I can give them a simple testcase that shows an incorrect result with a new JS optimization enabled, they can quickly find the source of the bug and fix it. Similarly, they much prefer reliable assertion testcases over bug reports saying "sometimes, Google Maps crashes after a while".

As a result, instead of being hostile to fuzzing, Firefox developers actively help me fuzz their code. They've added numerous assertions to their code, allowing fuzzers to notice as soon as the smallest thing goes wrong. They've fixed most of the bugs that impede fuzzing progress. And several have suggested new ways to test their code, even (especially) ways that scare them.

Developers working on the JavaScript engine have been especially helpful. First, they ensured I could test their code directly, apart from the rest of the browser. They already had a JavaScript shell for running regression tests, and they added a --fuzzing-safe option to disable the more dangerous testing functions.

The JS team also created a large set of testing functions to let me control things that would normally be based on heuristics. Fuzzers can now choose when garbage collection happens and even how much. They can make expensive JITs kick in after 2 loop iterations rather than 100. Fuzzers can even simulate out-of-memory conditions. All of these things make it possible to create small, reliable testcases for nasty classes of bugs.

Finally, the JS team has supported differential testing, a form of fuzzing where output is checked for correctness against some oracle. In this case, the oracle is the same JavaScript engine with most of its optimizations disabled. By fixing inconsistencies quickly and supporting --enable-more-deterministic, they've ensured that differential testing doesn't get stuck finding the same problems repeatedly.

Andreas Gal, a developer working on Firefox's JavaScript engine, once commented on Bugzilla: 'From this day forward, I shall never write a JIT again without Jesse.'

Please join us on IRC, or just dive in and contribute! Your suggestions and patches can have a large impact: fuzzer modules often act together to find complex interactions within the browser. For example, bug 893333 was found by my designMode module interacting with a <table> module contributed by a Firefox developer, Mats Palmgren. Likewise, bug 1158427 was found by Christoph Diehl's WebAudio module combined with my reflection-based API-discovery modules.

If your contributions result in me finding a security bug, and I think I wouldn't have found it otherwise, I'll make sure you get a bug bounty as if you had reported it yourself.

To the next 6450 browser bug fixes!

Christian Heilmann: Got something to say? Write a post!

Tweet button

Here’s the thing: Twitter sucks for arguments:

  • It is almost impossible to follow conversation threads
  • People favouriting quite aggressive tweets leaves you puzzled as to their reasons
  • People retweeting parts of the conversation out of context leads to wrong messages and questionable quotes
  • 140 characters are great to throw out truisms but not to make a point.
  • People consistently copying you in on their arguments floods your notifications tab even when you no longer want to weigh in

This morning was a great example: Peter Paul Koch wrote yet another incendiary post asking for a one-year hiatus of browser innovation. I tweeted about the post saying it has some good points. Paul Kinlan of the Chrome team disagreed strongly with the post. I opted to agree with some of it, as a lot of features we created and thought we had discarded tend to linger on the web longer than we want them to.

A few of those back-and-forth conversations later, Alex Russell dropped the mic:

@Paul_Kinlan: good news is that @ppk has articulated clearly how attractive failure can be. @codepo8 seems to agree. Now we can call it out.

Now, I am annoyed about that. It is accusatory, paints me as reactionary, and frames criticism of innovation as embracing failure. It also very aggressively hints that Alex will now always quote that to show that PPK was wrong and keeps us from evolving. Maybe. Probably. Who knows, as it is only 140 characters. But I am keeping my mouth shut, as there is no point in this aggressive back and forth. It results in a lot of rushed arguments that can and will be quoted out of context. It results in assumed subtext that can break good relationships. It – in essence – is not helpful.

If you truly disagree with something – make your point. Write a post, based on research and analysis. Don’t throw out a blanket approval or disapproval of the work of other people to spark a “conversation” that isn’t one.

Well-written thoughts lead to better quotes and deeper understanding. It takes more effort to read a whole post than to quote a tweet and add your sass.

In many cases, whilst writing the post you realise that you really don’t agree or disagree as much as you thought you did with the author. This leads to much less drama and more information.

And boy do we need more of that and less drama. We are blessed with jobs where people allow us to talk publicly, research and innovate and to question the current state. We should celebrate that and not use it for petty bickering and trench fights.

Photo Credit: acidpix

Daniel Stenberg: HTTP Workshop, second day

All 37 of us gathered again on the 3rd floor in the Factory hotel here in Münster. Day two of the HTTP Workshop.

Jana Iyengar (from Google) kicked off this morning with his presentations on HTTP and the transport layer, and on QUIC. Very interesting area if you ask me – if you’re interested in this, you really should check out the video recording from the bar BoF they did on this topic at the recent Prague IETF. It is clear that a team with dedication, a clear use-case, a fearless approach to not necessarily maintaining “layers” and a handy control of widely used servers and clients can do funky experiments with new transport protocols.

I think there was general agreement with Jana’s statement that “Engagement with the transport community is critical” for us to really be able to bring better web protocols now and in the future. Jana’s excellent presentations were interrupted a countless number of times with questions, elaborations, concerns and sub-topics from attendees.

Gaetano Carlucci followed up with a presentation of their QUIC evaluations, showing how it performs in comparison to HTTP/2 under various conditions, such as packet loss. Lots of transport-related discussion followed.

We rounded off the afternoon with a walk through the city (the rain stopped just minutes before we took off) to the town center where we tried some of the local beers while arguing their individual qualities. We then took off in separate directions and had dinner in smaller groups across the city.

snackstation

Honza Bambas: String parsing made simple with mozilla::Tokenizer

 


 

I can see FindChar, Substring, ToInteger and even atoi, strchr, strstr and sscanf craziness all over the Mozilla code base. There are, though, much better and, more importantly, safer ways to parse even very simple input.

I wrote a parser class with an API derived from lexical analyzers that makes parsing simple input very easy. Just include mozilla/Tokenizer.h and use the class mozilla::Tokenizer. It implements a subset of the features of a lexical analyzer. It also nicely hides boundary checks of the input buffer from the consumer.

To describe the principle briefly: Tokenizer recognizes tokens like whole words, integers, white spaces and special characters. The consumer never works directly with the string or its characters, only with pre-parsed parts (identified tokens) returned by this class.

 

There are two main methods of Tokenizer:

  • bool Next(Token& result);

If there is anything to read from the input at the current internal read position, including the EOF, it returns true and result is filled with a token type and an appropriate value, easily accessible via a simple variant-like API.  The internal read cursor is shifted to the start of the next token in the input before this method returns.

  • bool Check(const Token& tokenToTest);

If a token at the current internal read position is equal (by the type and the value) to what has been passed in the tokenToTest argument, true is returned and the internal read cursor is shifted to the next token.  Otherwise (token is different than expected) false is returned and the read cursor is left unaffected.

A few usage examples:

Construction

  #include "mozilla/Tokenizer.h"

  mozilla::Tokenizer p(NS_LITERAL_CSTRING("Sample string 2015."));

Reading a single token, examining it

  mozilla::Tokenizer::Token t;
  bool read = p.Next(t);
  // read == true, we have read something and t has been filled
  // Following our example string...
  if (t.Type() == mozilla::Tokenizer::TOKEN_WORD) {
    t.AsString(); // returns "Sample"
  }

Checking on a token value and automatically skipping on a positive test

  if (!p.CheckChar('\x20')) {
    throw "I expect a space here!";
  }

  read = p.Next(t);
  // read == true
  t.Type() == mozilla::Tokenizer::TOKEN_WORD;
  t.AsString() == "string";

  if (!p.CheckWhite()) {
    throw "A white space is expected here!";
  }

Reading numbers

  read = p.Next(t);
  // read == true
  t.Type() == mozilla::Tokenizer::TOKEN_INTEGER;
  t.AsInteger() == 2015;

Reaching the end of the input

  read = p.Next(t);
  // read == true
  t.Type() == mozilla::Tokenizer::TOKEN_CHAR;
  t.AsChar() == '.';

  read = p.Next(t);
  // read == true
  t.Type() == mozilla::Tokenizer::TOKEN_EOF;

  read = p.Next(t);
  // read == false, we are behind the EOF
  // t is here undefined!

More features

To learn about the more advanced features of the Tokenizer – there are not that many, don’t be scared ;) – look at the well-documented Tokenizer.h file under xpcom/ds.

As a teaser you can go through this more advanced example or check the gtest for Tokenizer:

#include "mozilla/Tokenizer.h"

using namespace mozilla;

{
  // A simple list of key:value pairs delimited by commas
  nsCString input("message:parse me,result:100");

  // Initialize the parser with an input string
  Tokenizer p(input);
  // A helper var keeping type and value of the token just read
  Tokenizer::Token t;

  // Loop over all tokens in the input
  while (p.Next(t)) {
    if (t.Type() == Tokenizer::TOKEN_WORD) {
      // A 'key' name found
      if (!p.CheckChar(':')) {
        // Must be followed by a colon
        return; // unexpected character
      }

      // Note that here the input read position is just after the colon
      // Now switch by the key string
      if (t.AsString() == "message") {
        // Start grabbing the value
        p.Record();
        // Loop until EOF or comma
        while (p.Next(t) && !t.Equals(Tokenizer::Token::Char(',')))
          ;
        // Claim the result
        nsAutoCString value;
        p.Claim(value);
        MOZ_ASSERT(value == "parse me");

        // We must revert the comma so that the code below recognizes the flow correctly
        p.Rollback();
      } else if (t.AsString() == "result") {
        if (!p.Next(t) || t.Type() != Tokenizer::TOKEN_INTEGER) {
          return; // expected a value and that value must be a number
        }

        // Get the value, here you know it's a valid number
        uint32_t number = t.AsInteger();
        MOZ_ASSERT(number == 100);
      } else {
        // Here t.AsString() is any key but 'message' or 'result', ready to be handled
      }

      // On comma we loop again
      if (p.CheckChar(',')) {
        // Note that now the read position is after the comma
        continue;
      }
      // No comma?  Then only EOF is allowed
      if (p.CheckEOF()) {
        // Cleanly parsed the string
        break;
      }
    }

    return; // The input is not properly formatted
  }
}

 

It currently works only with ASCII input but can easily be enhanced to also support UTF-8/16 encodings or even specific code pages if needed.

The post String parsing made simple with mozilla::Tokenizer appeared first on mayhemer's blog.

Aaron Klotz: Interesting Win32 APIs

Yesterday I decided to diff the export tables of some core Win32 DLLs to see what’s changed between Windows 8.1 and the Windows 10 technical preview. There weren’t many changes, but the ones that were there are quite exciting IMHO. While researching these new APIs, I also stumbled across some others that were added during the Windows 8 timeframe that we should be considering as well.

Volatile Ranges

While my diff showed these APIs as new exports in Windows 10, the MSDN docs claim that they are actually new as of the Windows 8.1 Update. Using the OfferVirtualMemory and ReclaimVirtualMemory functions, we can now specify ranges of virtual memory that are safe to discard under memory pressure. Later on, should we request that access be restored to that memory, the kernel will either return that virtual memory to us unmodified, or advise us that the associated pages have been discarded.

A couple of years ago we had an intern on the Perf Team who was working on bringing this capability to Linux. I am pleasantly surprised that this is now offered on Windows.

madvise(MADV_WILLNEED) for Win32

For the longest time we have been hacking around the absence of a madvise-like API on Win32. On Linux we will do a madvise(MADV_WILLNEED) on memory-mapped files when we want the kernel to read ahead. On Win32, we were opening the backing file and then doing a series of sequential reads through the file to force the kernel to cache the file data. As of Windows 8, we can now call PrefetchVirtualMemory for a similar effect.

Operation Recorder: An API for SuperFetch

The OperationStart and OperationEnd APIs are intended to record access patterns during a file I/O operation. SuperFetch will then create prefetch files for the operation, enabling prefetch capabilities above and beyond the use case of initial process startup.

Memory Pressure Notifications

This API is not actually new, but I couldn’t find any invocations of it in the Mozilla codebase. CreateMemoryResourceNotification allocates a kernel handle that becomes signalled when physical memory is running low. Gecko already has facilities for handling memory pressure events on other platforms, so we should probably add this to the Win32 port.

Mozilla IT & Operations: CVS & BZR services decommissioned on mozilla.org

tl;dr: CVS & BZR (aka Bazaar) version control systems have been decommissioned at Mozilla. See https://ftp.mozilla.org/pub/mozilla.org/vcs-archive/README for final archives.

As part of our ongoing efforts to ensure that the services operated by Mozilla continue to meet Mozilla's current needs, the CVS and BZR (Bazaar) version control systems have been decommissioned.

This work took coordinated effort between IT, Developer Services, and the remaining active users of those systems. Thanks to all of them for their contributions!

Final archives of the public repositories are currently available at https://ftp.mozilla.org/pub/mozilla.org/vcs-archive/. The README file has instructions for retrieval of non-public repositories.
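
For example, the retrieval instructions in that README can be fetched directly from the archive URL given above:

  curl -sL https://ftp.mozilla.org/pub/mozilla.org/vcs-archive/README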

NOTE: These URLs are subject to change; please refer back to this blog post for the up-to-date link.

For any questions or concerns, please contact the Developer Services team.

Mozilla Open Policy & Advocacy Blog: Experts develop cybersecurity recommendations

Today, we’re excited to publish the output of our “Cybersecurity Delphi 1.0” research process, tapping into a panel of 32 cybersecurity experts from diverse and mutually reinforcing backgrounds.

Mozilla Cybersecurity Delphi 1.0

Securing our communications and our data is hard. Every month seems to bring new stories of mistakes and attacks resulting in our personal information being made available – bit by bit harming trust online, and making ordinary Internet users feel fear. Yet, cybersecurity public policy often seems stuck in yesterday’s solution space, focused exclusively on well known terrain, around issues such as information sharing, encryption, and critical infrastructure protection. These “elephants” of cybersecurity policy are significant issues – but too much focus on them eclipses other solutions that would allow us to secure the Internet for the future.

So, working with Camille François & DHM Research we’ve spent the past year engaging the panel of cybersecurity experts through a tailored research process to try to extract public policy ideas and see what consensus can be found around them. We weren’t aiming for full consensus (an impossible task within the security community!). Our goal was to foment ideation and exchange, to develop a user-focused and holistic cybersecurity policy agenda.

Mozilla Cybersecurity Delphi Process

Our experts collectively generated 36 distinct policy suggestions for government action in cybersecurity. We then asked them to identify and rank their top choices of policy options by both feasibility and desirability. The result validated the importance of the “cyberelephants.” Privacy-respecting information sharing policies, effective critical infrastructure protection, and widespread availability and understanding of secure encryption programs are all important goals to pursue: they ranked high on desirability, but were generally viewed as hard to achieve.

More important are the ideas that emerged that aren’t on the radar screens of policymakers today. First and foremost was a proposal that stood out above the others as both highly desirable and highly feasible: increased funding to maintain the security of free and open source software. Although not high on many security policy agendas, the issue deserves attention. After all, 2014’s major security incidents around Poodle, Heartbleed, and Shellshock all centered on vulnerabilities in open source software. Moreover, open source software libraries are built into countless noncommercial and commercial products.

Many other good proposals and priorities surfaced through the process, including: developing and deploying alternative authentication mechanisms other than passwords; improving the integrity of public key infrastructure; and making secure communications tools easier to use. Another unexpected policy priority area highlighted by all segments of our expert panel as highly feasible and desirable was norm development, including norms concerning governments’ and corporations’ behavior in cyberspace, guided by human rights and communicated with maximum clarity in national and international contexts.

This report is not meant to be a comprehensive analysis of all cybersecurity public policy issues. Rather, it’s meant as a first, significant step towards a broader, collaborative policy conversation around the real security problems facing Internet users today.

At Mozilla, we will build on the ideas that emerged from this process, and hope to work with policymakers and others to develop a holistic, effective, user-centric cybersecurity public policy agenda going forward.

This research was made possible by a generous grant from the John D. and Catherine T. MacArthur Foundation.

Mozilla Cybersecurity Delphi 1.0

Chris Riley
Jochai Ben-Avie
Camille François

Emily Dunham: Good times


People sometimes say “morning” or “evening” on IRC from a time zone unlike my own. Here’s a bash one-liner that emits the correct time-of-day generalization based on the datetime settings of the machine you run it on.

case $((10#$(date +%H)/6)) in 0|1)m="morning";;2)m="afternoon";;3)m="night";;esac; echo good $m
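
The same logic, split out with comments purely for readability (the 10# prefix forces base 10 so that zero-padded hours like 08 are not read as invalid octal numbers):

hour=$(date +%H)             # current hour, zero-padded 00-23
case $((10#$hour / 6)) in    # divide the day into four 6-hour buckets
  0|1) m="morning" ;;        # 00:00-11:59
  2)   m="afternoon" ;;      # 12:00-17:59
  3)   m="night" ;;          # 18:00-23:59
esac
echo "good $m"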


Byron Jones: happy bmo push day!

the following changes have been pushed to bugzilla.mozilla.org:

  • [1185823] add additional [audit] syslog entries
  • [1187184] Minor updates to gear form
  • [1184828] api searches should honour the same fields in its “order” parameter as the web UI
  • [1186803] remove %product_sec_groups from bmo/lib/data.pm
  • [1186776] allow users to set keywords on bug creation (via API/internally only)
  • [1181453] Amend https://bugzilla.mozilla.org/form.fxos.feature form
  • [1186788] disabling an account should always disable bugmail
  • [1171806] add the ability for a user to disable/”remove” their own account
  • [1187498] Disable SiteMapIndex extension if running on under development site instead of production

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla

Nicholas Nethercote: A work-around for Tree Style Tab breakage on Firefox Nightly caused by mozRequestAnimationFrame removal

This post is aimed at Firefox Nightly users who also use the Tree Style Tab extension. Bug 909154 landed last week. It removed support for the prefixed mozRequestAnimationFrame function, and broke Tree Style Tab. The GitHub repository that hosts Tree Style Tab’s code has been updated, but that has not yet made it into the latest Tree Style Tab build, which has version number 0.15.2015061300a003855.

Fortunately, it’s fairly easy to modify your installed version of Tree Style Tab to fix this problem. (“Fairly easy”, at least, for the technically-minded users who run Firefox Nightly.)

  • Find the Tree Style Tabs .xpi file. On my Linux machine, it’s at ~/.mozilla/firefox/ndbcibpq.default-1416274259667/extensions/treestyletab@piro.sakura.ne.jp.xpi. Your profile name will not be exactly the same. (In general, you can find your profile with these instructions.)
  • That file is a zip archive. Edit the modules/lib/animationManager.js file within it, and change the two occurrences of mozRequestAnimationFrame to requestAnimationFrame. Save the change.

I did the editing in vim, which was easy because vim has the ability to edit zip files in place. If your editor does not support that, it might work if you unzip the code, edit the file directly, and then rezip, but I haven’t tried that myself. Good luck.
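
For the unzip/edit/rezip route, an untested sketch along these lines should work (back up the .xpi first; the profile directory name is a placeholder and will differ on your machine):

  profile=~/.mozilla/firefox/xxxxxxxx.default   # substitute your own profile directory
  xpi=$profile/extensions/treestyletab@piro.sakura.ne.jp.xpi
  cp "$xpi" "$xpi.bak"                          # keep a backup copy
  unzip "$xpi" modules/lib/animationManager.js -d /tmp/tst
  sed -i 's/mozRequestAnimationFrame/requestAnimationFrame/g' /tmp/tst/modules/lib/animationManager.js
  (cd /tmp/tst && zip "$xpi" modules/lib/animationManager.js)   # write the edited file back into the archive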

Chris Cooper: The changing face of buildduty, Summer 2015 edition

Buildduty is the Mozilla release engineering (releng) equivalent of front-line support. It’s made up of a multitude of small tasks, none of which on their own are particularly complex or demanding, but which taken in aggregate can amount to a lot of work.

It’s also non-deterministic. One of the most important buildduty tasks is acting as information brokers during tree closures and outages, making sure sheriffs, developers, and IT staff have the information they need. When outages happen, they supersede all other work. You may have planned to get through the backlog of buildduty tasks today, but congratulations, now you’re dealing with a network outage instead.

Releng has struggled to find a sustainable model for staffing buildduty. The struggle has been two-fold: finding engineers to do the work, and finding a duration for a buildduty rotation that doesn’t keep the engineer out of their regular workflow for too long.

I’m a firm believer that engineers *need* to be exposed to the consequences of the software they write and the systems they design.

I also believe that it’s a valuable skill to be able to design a system and document it sufficiently so that it can be handed off to someone else to maintain.

Starting this week, we’re trying something new. We’re shifting at least part of the burden to management: I am now managing a pair of contractors who will be responsible for buildduty for the rest of 2015.

Alin and Vlad are our new contractors, and are both based in Romania. Their offset from Mozilla Standard Time (aka PST) will allow them to tackle the asynchronous activities of buildduty, namely slave loans, non-urgent developer requests, and maintaining the health of the machine pools.

It will take them a few weeks to find their feet since they are unfamiliar with any of the systems. You can find them on IRC in the usual places (#releng and #buildduty). Their IRC nicks are aselagea and vladC. Hopefully they will both be comfortable enough to append |buildduty to those nicks soon. :)

While Alin and Vlad get up to speed, buildduty continues as usual in #releng. If you have an issue that needs buildduty assistance, please ask in #releng, and someone from releng will assist you as quickly as possible. For less urgent requests, please file a bug.

Daniel Stenberg: The HTTP Workshop started

So we started today. I won’t get into any live details or quotes from the day since it has all been informal and we’ve all agreed to not expose snippets from here without checking properly first. There will be a detailed report put together from this event afterwards.

The most critical piece of information, however, is that we must not walk on the red parts of the sidewalks here in Münster, as that’s the bicycle lane and the cyclists can be ruthless.

We’ve had a bunch of presentations today with associated Q&A and follow-up discussions. Roy Fielding (HTTP spec pioneer) started out the series with a look at HTTP full of historic details and views from the past and where we are and what we’ve gone through over the years. Patrick McManus (of Firefox HTTP networking) took us through some of the quirks of what a modern day browser has to do to speak HTTP and topped it off with a quiz regarding Firefox metrics. Did you know that 31% of all Firefox HTTP requests get fulfilled by the cache, or that 73% of all Firefox HTTP/2 connections are used more than once but only 7% of the HTTP/1 ones?

Poul-Henning Kamp (author of Varnish) brought an intermediary’s point of view on HTTP/2, slightly pessimistic and not totally unlike what he’s published before. Stefan Eissing (from Green Bytes) entertained us by talking about his work on writing mod_h2 for Apache Httpd (and how it might be included in the coming 2.4.x release) and we got to discuss a bit around timing measurements and their difficulties.

We rounded off the afternoon with a priority and dependency tree discussion topped off with a walk-through of numbers and slides from Kazuho Oku (author of H2O) on how dependency-trees really help and from Moto Ishizawa (from Yahoo! Japan) explaining Firefox’s (Patrick’s really) implementation of dependencies for HTTP/2.

We spent the evening having a 5-course (!) meal at a nice Italian restaurant while trading war stories about HTTP, networking and the web. Now it is close to midnight and it is time to reload and get ready for another busy day tomorrow.

I’ll round off with a picture of where most of the important conversations were had today:

kafeestation

Mozilla IT & Operations: Product Delivery Migration: What is changing, when it’s changing and the impacts

As promised, the FTP Migration team is following up from the 7/20 Monday Project Meeting where Sean Rich talked about a project that is underway to make our Product Delivery System better.

As a part of this project, we are migrating content out of our data centers to AWS. In addition to storage locations changing, namespaces will change and the FTP protocol for this system will be deprecated. If, after reading this post, you have any further questions, please email the team.

Action: The ftp protocol on ftp.mozilla.org is being turned off.
Timing: Wednesday, 5th August 2015.
Impacts:

    After 8/5/15, ftp protocol support for ftp.mozilla.org will be completely disabled and downloads can only be accessed through http/https.
    Users will no longer be able to just enter “ftp.mozilla.org” into their browser, because this action defaults to the ftp protocol. Going forward, users should start using archive.mozilla.org. The old name will still work but needs to be entered in your browser as https://ftp.mozilla.org/
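
As a quick illustration (hostnames as given above), you can confirm that the HTTPS endpoints respond while plain ftp:// access goes away:

    curl -sI https://archive.mozilla.org/ | head -n 1   # new preferred name
    curl -sI https://ftp.mozilla.org/ | head -n 1       # old name, HTTPS only going forward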

Action: The contents of ftp.mozilla.org are being migrated from the NetApp in SCL3 to AWS/S3 managed by Cloud Services.
Timing: Migrating ftp.mozilla.org contents will start in late August and conclude by end of October. Impacted teams will be notified of their migration date.
Impacts:

    Those teams that currently manually upload to these locations have been contacted and will be provided with S3 API keys. They will be notified prior to their migration date and given a chance to validate their upload functionality post-migration.
    All existing download links will continue to work as they do now with no impact.

Mark Surman: Mozilla Learning Strategy Slides

Developing a long term Mozilla Learning strategy has been my big focus over the last three months. Working closely with people across our community, we’ve come up with a clear, simple goal for our work: universal web literacy. We’ve also defined ‘leadership’ and ‘advocacy’ as our two top level strategies for pursuing this goal. The use of ‘partnerships and networks’ will also be key to our efforts. These are the core elements that will make up the Mozilla Learning strategy.

Over the last month, I’ve summarized our thinking on Mozilla Learning for the Mozilla Board and a number of other internal audiences. This video is based on these presentations:

As you’ll see in the slides, our goal for Mozilla Learning is an ambitious one: make sure everyone knows how to read, write and participate on the web. In this case, everyone = the five billion people who will be online by 2025.

Our top level thinking on how to do this includes:

1. Develop leaders who teach and advocate for web literacy.

Concretely, we will integrate our Clubs, Hive and Fellows initiatives into a single, world class learning and leadership program.

2. Shift thinking: everyone understands the web / internet.

Concretely, this means we will invest more in advocacy, thought leadership and user education. We may also design ways to encourage web literacy more aggressively in our products.

3. Build a global web literacy network.

Mozilla can’t create universal web literacy on its own. All of our leadership and advocacy work will involve ‘open source’ partners with whom we’ll create a global network committed to universal web literacy.

Process-wise: we arrived at this high level strategy by looking at our existing programs and assets. We’ve been working on web literacy, leadership development and open internet advocacy for about five years now. So, we already have a lot in play. What’s needed right now is a way to focus all of our efforts in a way that will increase their impact — and that will build a real snowball of people, organizations and governments working on the web literacy agenda.

The next phase of Mozilla Learning strategy development will dig deeper on ‘how’ we will do this. I’ll provide a quick intro post on that next step in the coming days.


Filed under: mozilla

Armen Zambrano: Enabling automated back-filling on mozilla-inbound for a few hours

tl;dr: we're going to enable automated back-filling on mozilla-inbound tomorrow (Tuesday) for a few hours.

We were aiming for Monday but pushed it to Tuesday to help publicize this more.

If there is no fallout on Wednesday, we will leave it running on m-i for a week before enabling it in other places.

Posted on various mailing lists including mozilla.dev.tree-management.

> Hello all,
>
> We are planning to turn on a service that automatically backfills
> failed test jobs on m-i. If there are no concerns, we would like to
> turn this on experimentally for a couple of hours on [Tuesday]. We
> hope this will make it easier to identify which revision broke a
> test. Suggestions are welcome.
>
> The backfilling works like this: - It triggers the job that failed
> one extra time - Then it looks for a successful run of the job on the
> previous 5 revisions. If a good run is found, it triggers the job
> only on revisions up to this good run. If not, it triggers the job on
> every one of the previous 5 revisions. Previous jobs will be
> triggered one time.
>
> The tracking bug is:
> https://bugzilla.mozilla.org/show_bug.cgi?id=1180732
>
> Best, Alice

--
Zambrano Gasparnian, Armen
Automation & Tools Engineer
http://armenzg.blogspot.ca



Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Air Mozilla: Mozilla Weekly Project Meeting

Mozilla Weekly Project Meeting: the Monday Project Meeting.

Mozilla Reps Community: Thank you Emma, Arturo and Raj

Being part of the Reps Council is a great experience: it is at the same time an honour and a challenge, and it comes with a lot of responsibility and hard work, but also lessons and growth.

We want to thank and recognize our most recent former Reps Council members for serving their one-year term, bringing all their passion, knowledge and dedication to the program and making it more powerful.

Emma Irwin

Emma was a great inspiration not only for Reps, but especially for mentors and the Council. Her passion for education and empowering volunteers allowed her to push the program to be much more centered on education and growth, marking a new era for Reps.

She was not only advocating for it, but also rolling up her sleeves: running different trainings, creating curriculum and working towards improving the mentorship culture. She was also extremely helpful in navigating some conflicts and getting Reps to grow and put aside their differences.

Arturo Martinez


Arturo’s unmatched energy and drive were great additions to the Council. Especially during his term as Council Chair, he set the standard for effectively driving initiatives, helping everyone achieve their goals and pushing us to be excellent. Thank you for the productivity boost!

Gauthamraj Elango


Raj’s passion for Web literacy and for empowering everyone to take part in the open Web helped him lead the Reps program to work more closely with the Webmaker team, showcasing how Reps can bring much more value to initiatives. He drove efforts to innovate our initiatives, both in terms of local organization and funding.

These are just a few examples of their exemplary work as Council members, work that not only helped Reps all around the world step up and have more impact, but also showed new Mozillians and Reps everywhere how to lead change for the open Web.

Once again, thank you so much for your time, effort and passion; you have left an outstanding mark on the program.

You can share your gratitude with them in this topic on Discourse.

QMO: Firefox 40 Beta 7 Testday Results

Hello Mozillians!

As you may already know, last Friday – July 24th – we held a new Testday event, for Firefox 40 Beta 7.

We’d like to take this opportunity to thank everyone for getting involved in the proposed testing activities and in general, for helping us make Firefox better.

Many thanks go out to the Bangladesh QA Community, for testing Firefox Hello context, WebGl, Adobe flash plugin and also verifying lots of bug fixes: Hossain Al Ikram, Nazir Ahmed Sabbir, Rezaul Huque Nayeem, MD.Owes Quruny Shubho, Mohammad Maruf Islam, Md.Rahimul Islam, Kazi Nuzhat Tasnem, Md. Ehsanul Hassan, Saheda.Reza Antora, Fahmida Noor, Meraj Kazi, Md. Jahid Hasan Fahim, Israt, Towkir Ahmed and Eyakub.

Special thanks go out to participants of Campus-Party Mexico that attended Firefox 40 beta 7 testday and helped with the testing of Firefox Hello context, WebGl and Adobe flash plugin: Mauricio Navarro Miranda, LASR21 Sánchez,  diegoehg, nataly Gurrola, Jorge Luis Flores Barrales, EZ274, Armando Gomez, Karla Danitza Duran Memijes and Eduardo Arturo Enciso Hernández.

Also a big thank you goes to all our moderators.

Keep an eye on QMO for upcoming events! :)

Daniel Stenberg: HTTPS and HTTP/2 plans for my sites

I produce a fair amount of open source code. I make that code available online. curl is probably the most popular package.

People ask me how they can trust that they are actually downloading what I put up there. People ask me when my source code can be retrieved over HTTPS. Signatures and hashes don’t add much protection against attacks when they too are fetched over HTTP…

HTTPS

I really and truly want to offer HTTPS (only) for all my sites. My friends and I run a whole busload of sites on the same physical machine and IP address (www.haxx.se, daniel.haxx.se, curl.haxx.se, c-ares.haxx.se, cool.haxx.se, libssh2.org and many more), so I would like a solution that works for all of them.

I can do this by buying certs, either a lot of individual ones or a few wildcard ones and then all servers would be covered. But the cost and the inconvenience of needing a lot of different things to make everything work has put me off. Especially since I’ve learned that there is a better solution in the works!

Let’s Encrypt will not only solve the problem for us from a cost perspective, but they also promise to solve some of the quirks on the technical side as well. They say they will ship certificates by September 2015 and that has made me wait for that option rather than rolling up my sleeves to solve the problem with my own sweat and money. Of course there’s a risk that they are delayed, but I’m not running against a hard deadline myself here.

HTTP/2

Related, I’ve been much involved in the HTTP/2 development and I host my “http2 explained” document on my still non-HTTPS site. I get a lot of questions (and some mocking) about why my HTTP/2 documentation isn’t itself available over HTTP/2. I would really like to offer it over HTTP/2.

Since all the browsers only do HTTP/2 over HTTPS, a prerequisite here is that I get HTTPS up and running first. See above.

Once HTTPS is in place, I want to get HTTP/2 going as well. I still run good old Apache here so it might be done using mod_h2 or perhaps with a fronting nghttp2 proxy. We’ll see.

Daniel Stenberg: HTTP Workshop 2015, day -1

I’ve traveled to a rainy and gray Münster, Germany, today and checked in to my hotel for the coming week and the HTTP Workshop. Tomorrow is the first day and I’m looking forward to it, probably a little too much.

There is a whole bunch of attendees coming. Simply put, most of the world’s best brains and the most eager implementers of the HTTP stacks that are in use today and will be in use tomorrow (with a bunch of notable absentees of course but you know you’ll be missed). I’m happy and thrilled to be able to take part during this coming week.

Julien Vehent: Using Mozilla Investigator (MIG) to detect unknown hosts

MIG is a distributed forensics framework we built at Mozilla to keep an eye on our infrastructure. MIG can run investigations on thousands of servers very quickly, and focuses on providing low-level access to remote systems, without giving the investigator access to raw data.

As I was recently presenting MIG at the DFIR Summit in Austin, someone in the audience asked if it could be used to detect unknown or rogue systems inside a network. The best way to perform that kind of detection is to watch the network, particularly for outbound connections rogue hosts or malware would establish to a C&C server. But MIG can also help with detection by inspecting the ARP tables of remote systems and cross-referencing the results with the local MAC addresses of known systems. Any MAC address not configured on a known system is potentially a rogue host.

First, we want to retrieve all the MAC addresses from the ARP tables of known systems. The netstat module can perform this task by looking for neighbor MACs that match regex "^[0-9a-f]", which will match anything hexadecimal.

$ mig netstat -nm "^[0-9a-f]" > /tmp/seenmacs

We store the results in /tmp/seenmacs and pull a list of unique MACs using some bash.

$ awk '{print tolower($5)}' /tmp/seenmacs | sort | uniq
00:08:00:85:0b:c2
00:0a:9c:50:b4:36
00:0a:9c:50:bc:61
00:0c:29:41:90:fb
00:0c:29:a7:41:f7
00:10:db:ff:10:00
00:10:db:ff:30:00
00:10:db:ff:f0:00
00:21:53:12:42:c1

We now want to check that every single one of the seen MAC addresses is configured on a known agent. Again, the netstat module can be used for this task, this time by querying local mac addresses with the -lm flag.
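
A single-address query would look like this (flag as described above; the MAC is just one taken from the earlier list):

$ mig netstat -lm 00:0c:29:41:90:fb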

Now the list of MACs may be quite long, so instead of running one MIG query per MAC, we group them 50 by 50 using the following script:

#! /usr/bin/env bash
i=50
input=$1
output=$2
while true
do
    echo -n "mig netstat " >> $output
    for mac in $(awk '{print tolower($5)}' $1|sort|uniq|head -$i|tail -50)
    do
        echo -n "-lm $mac " >> $output
    done
    echo >> $output
    if [ $i -ge $(awk '{print tolower($5)}' $1|sort|uniq|wc -l) ]
    then
        exit 0
    fi
    i=$((i+50))
done

The script will build MIG netstat commands with at most 50 MAC arguments each. Invoke it with /tmp/seenmacs as argument 1 and an output file as argument 2.

$ bash /tmp/makemigmac.sh /tmp/seenmacs /tmp/migsearchmacs

/tmp/migsearchmacs now contains a number of MIG netstat commands that will search seen MAC addresses across the configured interfaces of known hosts. Run the commands and pipe the output to a results file.

$ while read -r migcmd; do $migcmd >> /tmp/migfoundmacs; done < /tmp/migsearchmacs

We now have a file with seen MAC addresses, and another one with MAC addresses configured on known systems. Doing the delta of the two is fairly easy in bash:

$ for seenmac in $(awk '{print tolower($5)}' /tmp/seenmacs|sort|uniq); do
hasseen=""; hasseen=$(grep $seenmac /tmp/migfoundmacs)
if [ "$hasseen" == "" ]; then
echo "$seenmac is not accounted for"
fi
done
00:21:59:96:75:7f is not accounted for
00:21:59:98:d5:bf is not accounted for
00:21:59:9c:c0:bf is not accounted for
00:21:59:9e:3c:3f is not accounted for
00:22:64:0e:72:71 is not accounted for
00:23:47:ca:f7:40 is not accounted for
00:25:61:d2:1b:c0 is not accounted for
00:25:b4:1c:c8:1d is not accounted for

Automating the detection

It's probably a good idea to run this procedure on a regular basis. The script below will automate the steps and produce a report you can easily email to your favorite security team.

#!/usr/bin/env bash
SEENMACS=$(mktemp)
SEARCHMACS=$(mktemp)
FOUNDMACS=$(mktemp)
echo "seen mac addresses are in $SEENMACS"
echo "search commands are in $SEARCHMACS"
echo "found mac addresses are in $FOUNDMACS"

echo "step 1: obtain all seen MAC addresses"
$(which mig) netstat -nm "^[0-9a-f]" 2>/dev/null | grep 'found neighbor mac' | awk '{print tolower($5)}' | sort | uniq > $SEENMACS

MACCOUNT=$(wc -l $SEENMACS | awk '{print $1}')
echo "$MACCOUNT MAC addresses found"

echo "step 2: build MIG commands to search for seen MAC addresses"
i=50
while true;
do
    echo -n "$i.."
    echo -n "$(which mig) netstat -e 50s " >> $SEARCHMACS
    for mac in $(cat $SEENMACS | head -$i | tail -50)
    do
        echo -n "-lm $mac " >> $SEARCHMACS
    done
    echo -n " >> $FOUNDMACS" >> $SEARCHMACS
    if [ $i -gt $MACCOUNT ]
    then
        break
    fi
    echo " 2>/dev/null &" >> $SEARCHMACS
    i=$((i+50))
done
echo
echo "step 3: search for MAC addresses configured on local interfaces"
bash $SEARCHMACS

sleep 60

echo "step 4: list unknown MAC addresses"
for seenmac in $(cat $SEENMACS)
do
    hasseen=$(grep "found local mac $seenmac" $FOUNDMACS)
    if [ "$hasseen" == "" ]; then
        echo "$seenmac is not accounted for"
    fi
done
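
To run this on a regular basis, as suggested above, a cron entry along these lines would do; the script path and recipient address are placeholders:

# hypothetical crontab entry: run the report every Monday at 06:00 and mail it
0 6 * * 1  /usr/local/bin/detect-unknown-macs.sh 2>&1 | mail -s "MIG unknown MAC report" security@example.com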

The list of unknown MACs can then be used to investigate the endpoints. They could be switches, routers or other network devices that don't run the MIG agent. Or they could be rogue endpoints that you should keep an eye on.

Happy hunting!

Panos Astithas: Lessons from Startup Weekend

I had an exhausting but fun weekend at the Athens Startup Weekend a few days ago. Along with Christos I joined Yannis, Panagiotis Christakos and Babis Makrinikolas on the Newspeek project. When Yannis pitched the idea on Friday night, the main concept was to create a mobile phone application that would provide a better way to view news on the go. I don't believe it was very clear in his mind then, what would constitute a "better" experience, but after some chatting about it we all defined a few key aspects, which we refined later with lots of useful feedback and help from George. Surprisingly, for me at least, in only two days we managed to design, build and present a working prototype in front of the judges and the other teams. And even though the demo wasn't exactly on par with our accomplishments, I'm still amazed at what can be created in such a short time frame.
Newspeek, our product, had a server-side component that periodically collected news items from various news feeds, stored them and provided them to clients through a simple REST API. It also had an iPhone client that fetched the news items and presented them to the user in a way that respected the UI requirements and established UX norms for that device.

So, in the interest of informing future participants about what works and what doesn't work in Startup Weekend, here are the lessons I learned:

  1. If you plan to win, work on the business aspect, not on the technology. Personally, I didn't go to ASW with plans to create a startup, so I didn't care that much about winning. I mostly considered the event as a hackathon, and tried my best to end up with a working prototype. Other teams focused more on the business side of things, which is understandable, given the prize. Investors fund teams that have a good chance to return a profit, not the ones with cool technology and (mostly) working demos. Still, the small number of actual working prototypes was a disappointment for me. Even though the developers were the majority in the event, you obviously can't have too many of them in a Startup Weekend.
  2. For quick server-side prototyping and hosting, Google App Engine is your friend. Since everyone in the team had Java experience, we could have gone with a JavaEE solution and set up a dedicated server to host the site. But, since I've always wanted to try App Engine for Java and the service architecture mapped nicely to it, we tried a short experiment to see if it could fly. We built a stub service in just a few minutes, so we decided it was well worth it. Building our RESTful service was really fast, scalability was never a concern and the deployment solution was a godsend, since the hosting service provided for free by the event sponsors was evidently overloaded. We're definitely going to use it again for other projects.
  3. jQTouch rocks! Since our main deliverable would be an iPhone application, and there were only two of us who had ever built an iPhone application (of the Hello World variety), we knew we had a problem. Fortunately, I had followed the jQTouch development from a reasonable distance and had witnessed the good things people had to say, so I pitched the idea of a web application to the team and it was well received. iPhone applications built with web technologies and jQTouch can be almost indistinguishable from native ones. We all had some experience in building web applications, so the prospect of having a working prototype in only two days seemed within the realm of possibility again. The future option of packaging the application with PhoneGap and selling it in the App Store was also a bonus point for our modest business plan.
  4. For ad-hoc collaboration, Mercurial wins. Without time to set up central repositories, a DVCS was the obvious choice, and Mercurial has both bundles and a standalone server that make collaborative coding a breeze. If we had zeroconf/bonjour set up in all of our laptops, we would have used the zeroconf extension for dead easy machine lookup, but even without it things worked flawlessly.
  5. You can write code with a netbook. Since I haven't owned a laptop for the last three years, my only portable computer is an Asus EEE PC 901 running Linux. Its original purpose was to allow me to browse the web from the comfort of my couch. Lately however, I'm finding myself using it to write software more than anything else. During the Startup Weekend it had constantly open Eclipse (for server-side code), Firefox (for JavaScript debugging), Chrome (for webkit rendering), gedit (for client-side code) and a terminal, without breaking a sweat.
  6. When demoing an iPhone application, whatever you do, don't sweat. Half-way through our presentation, tapping the buttons didn't work reliably all the time, so anxiety ensued. Since we couldn't make a proper presentation due to a missing cable, we opted for a live demo, wherein Yannis held the mic and made the presentation, and I posed as the bimbo that holds the product and clicks around. After a while we ended up both touching the screen, trying to make the bloody buttons click, which ensured the opposite effect. In retrospect, using a cloth occasionally would have made for a smoother demo, plus we could have slipped a joke in there, to keep the spirit up.
All in all it was an awesome experience, where we learned how far we can stretch ourselves, made new friends and caught up with old ones. Next year, I'll make sure I have a napkin, too.

Emma Irwin#MozLove for April

This month’s #mozlove post is for April Morone.

I wrote this post with inspiration from the first version of ‘Participation Personas’. Personas (V1) is a list of contributor profiles I use to design participation opportunities. For each persona I also suggest a series of ‘lenses’ which, I believe, can help us design with, and for, greater diversity and dimension.

A lens can be anything from gender identity and age, to what I called a ‘toxic rating’, which changes the flexibility and value of collaborating with someone. Another lens is what I have (so far) called ‘accessibility’, which encourages thinking about physical challenges of contribution. This could be anything from asking ourselves if resources are ‘screen reader friendly’, to building in a respect for periods of time people may ‘disappear’ to take care of their wellness.

In that light I would like to highlight the contributions, enthusiasm and dedication of April Morone. April describes herself as a ‘disabled contributor’ living with partial blindness, hearing loss and neuro-muscular problems. April is also an advocate for helping other people living with disabilities contribute to the Mozilla project. April was kind enough to take time to answer my questions, the first of which was “What got you started contributing?”

“What got me contributing was this insatiable need to help and insatiable need to learn more in the IT field, as well as to DO more in the IT field. I’ve always been helping others, from my cousins, helping teach them at the age of twelve on up, to teaching and helping others.”

You will find April embedded in the project helping others, especially new contributors setting up local environments for bug fixes. When I asked her what sustains her participation, she felt equally as motivated by people who ‘want to learn’ as by her own interest in teaching and helping.

When listing the challenges to contribution, April identified the continual challenges posed by health issues, which include the emotional effects of surviving domestic abuse. On the more predictable scale, April also listed technology failures and limited time as worthy opponents. What I think is very inspiring about both April and the community around her is how she describes her continued involvement and the people making a difference for her:

Abishek Gupta, Gautam Sharma, David Walsh, Luke Crouch, Janet Swisher, Hagen Halbach, and Daniel Desira have kept me going. They have been contributors and now also friends who have supported me through difficult times when I might have otherwise have given up contributing. I had thought of dropping out of contributing and even just giving up. But they stood by me, listened, and gave support, which help.  What also kept me going is my love of helping others, my love of Mozilla, and my love of IT and web development.

I think this is really, really special in that the community is as much a place to find ‘your people’, as it is a cause to contribute to.   I know April is among a small group of volunteers at Mozilla with ambitions of creating a more supportive network for contributors living with disability through directed documentation and on-boarding –  which I think is just amazing.  I am grateful to be a part of a community that includes April and many of the people she listed who help her be successful.

 


Next month I hope to write a couple of these posts – we’ll see.

“Felt Heart” Image credit: Lauren Jong

 

Matthew NoorenbergheFirefox Password Manager Update: 2015-Q1

Logins are a part of our daily lives on the web and one of the active Firefox projects this year is to improve Firefox's password manager which has the simple high-level goal of helping users log in. Here's a summary of the progress made in the first quarter of 2015 (in no particular order):
  • Telemetry – Probes (with the prefix PWMGR_) were added to gain a better understanding of how users were using the feature and to allow measuring the impact of improvements.
  • Ignoring @autocomplete=off in login forms – An autocomplete attribute with a value of "off" no longer disables auto-filling of login fields. This puts users back in control of their login experience and aligns with the direction of other browser vendors. Last year we started ignoring @autocomplete=off when deciding whether to ask the user to save/update their login so this is an evolution of that change. Note that @autocomplete=off is still effective outside login forms e.g. to implement custom search box autocompletion.
  • Capture doorhanger fields – The remember/update password doorhanger notification is now easier to visually scan with fields for the captured username and password. The username field is also editable which helps in cases where the detected username is incorrect or missing.
    Screenshot of the password capture doorhanger on Windows 7 in Firefox 39
  • Viewing and managing logins on Android – A password management interface (about:logins) was added to Firefox for Android and is accessible via the menu under Tools > Logins.
    Firefox for Android version 42 about:logins list view and context menu
  • Per-site recipes – A new mechanism was added to allow per-site overrides to the password manager capture and fill heuristics since it's not feasible/scalable to have general ones which work for every website. Initial recipe support allows overriding the username and password field detection via CSS selectors. There are only a handful of recipes currently in use as there hasn't been much focus/communication on gathering these yet but you're encouraged to file bugs about sites that don't work with the password manager (and if you're ambitious you can even submit patches to the JSON file).
  • Android Capture Doorhanger Polish – Capture doorhanger visuals were polished in preparation for later improvements.
  • General bug fixes – As usual, there were bug fixes for functionality that simply didn't work as expected. For example, Bug 1121040, where submitting a login form with the Enter key during username autocompletion could happen before the password had time to be filled into the password field.
Expect to see many more improvements in upcoming months as we continue to make major improvements to the password manager. If you'd like to contribute to this project, check out the password manager wiki page for mailing list, IRC, bug list and other information.

Tantek ÇelikDark Forest Run

Yesterday morning I ran through a forest in pitch darkness for the first time. I had a headlamp, a general sense of direction (uphill), and the knowledge that friends were just out of sight up ahead.

When I left my house it was dark as night in the city, which really means never darker than the dim glow from diffuse streetlamps and other light polluters. I ran nearly a kilometer before meeting my fellow #nopasoparungang members at the intersection of Frederick & Stanyan streets.

From there we ran half a mile up Stanyan’s steepest segments (240 feet elevation) to Belgrave Ave and the eastern edge of the Mt Sutro Open Space Preserve. Undaunted by poison oak warning signs, we leapt onto the narrow dirt forest trail. In mere seconds we disappeared into the dense woods, the city glow faded, and our headlamps barely lit the trail ahead. Anything beyond 10 meters was nothing but gray shapes blending into darkness.

Our gang of four split into lead and tail pairs, and we soon lost sight of the lead headlamps. We didn’t bother navigating by mobile, even the dimmest of backlighting would have been blinding. Whenever the trail split, we chose the uphill path.

Not only was the forest darkness pierced only by our headlamps, it was silent except for the sounds we made, breathing, pounding the trail, rustling leaves, snapping twigs.

The lead pair rejoined us from behind, having taken a wrong turn and doubled back. We emerged from the south side of the forest onto the street and found the few other @Nov_Project_SF early gang arrivals who took our photo.

Early rungang photo

Now seven strong, we hiked up Johnstone drive just a bit and ran uphill onto the East Ridge Trail, again leaving civilization behind in just moments. We ran all the way up to the Mt. Sutro summit, to a clearing formerly used for Nike Missile Control Site SF-89C.

Looking back through the dark forest I could see dawn’s light in the East.

Dark forest backlit by dawn.

From Frederick & Stanyan we had only run a mile, and yet the second half of it was through pitch black woods, with 400 more feet of incline for a total of 640 feet of elevation gain.

Tapering for this weekend's race, once I reached the summit I did reps of planking, tricep dips, pushups, all while swatting perhaps nearly 100 mosquitos. Everyone else ran up & down the ridge trail and others nearby. A few more runners found us during the 30 minute hills workout.

Afterwards we ran back down to the meeting point on the street, and hugged the 6:25am arrivals. Then we did it all again, this time in the sunrise lit trails below.

NPSF late gang group photo in Sutro Forest.

This is November Project San Francisco #hillsforbreakfast. We run through poison-ivy laden mosquito-infested forests from darkness through dawn and into the sunrise.

Why are we shushing with our fingers? We heard from a concerned hiker that "the sound travels really far" out of the forest (which is odd, because the sound from the city doesn’t seem to make it into the forest). For more, see: NPSF: Do you know what it feels like to be 90 years old?

Cameron KaiserUpdating you on 38 just-in-time

Did you see what I did there? For the past two weeks my free time apart from work and the Master's degree has been sitting in a debugger trying to fix JavaScript, which is just murder on my dating life. Here is the current showstopper bug-roll for 38.1.1b1:

  • The Faceblech bug with the new IonPower JavaScript JIT compiler is squashed, I think, after repairing some conformance test failures which in turn appear to have repaired Forceblah. In my defence, the two bugs in question were incredibly weird edge cases and these tests are not part of the usual JIT test suite, so I guess we'll have to run them as well in future. This also repairs an issue with Instagrump which is probably the same underlying issue since Faceboink owns them also.

    The silver lining after all that was that I was considering disabling inlining in the JIT prior to release, which worked around the "badness," but also cut the engine speed in about half. (Still faster than JaegerMonkey!) To make this a bit less of a hit, I tuned the thresholds for starting the twin JITs and got about 10% improvement without inlining. With inlining back on, it's still faster by about 4% and change -- the G5 now achieves a score of nearly 5800 on V8, up from 5560. I also tweaked our foreground finalization patch for generational GC so that we should be able to get the best of both worlds. Overall you should see even better performance out of this next beta.

  • I have a presumptive fix for the webfont "ATSUI puke" on the New York Times, but it's not implemented or well-tested yet. This is a crash on 10.5, so I consider it a showstopper and it will be fixed before the next beta. (It affects 31.8 also but I will not be making another 31 release unless there is a Mozilla ESR chemspill.)

  • The modified strip7 tool required for building 38.x has a serious bug in it that causes it to crash trying to strip certain symbols. I have fixed this bug and builders will need to install this new version (remember: do not replace your normal strip with this one; it is intentionally loose with the Mach-O specification). I will be uploading it sometime this week along with an updated gdb7 that has better debugger performance and repairs a bug with too eagerly disabling register display while single-stepping Ion code.

These bugs are not considered showstoppers, but I do acknowledge them and I plan to fix them either for the final release or the next version of 38:

  • I can confirm saved passwords do not appear in the preferences panel. They do work, though, and can be saved, so this is more of an issue with managing them; while it's possible to do so manually it requires some inconvenient screwing around with your profile, so I consider this the highest priority of the non-showstopper bugs.

  • Checkboxes on the dropdown menus from the Console tabs do not appear. This specific manifestation is purely cosmetic because they work normally otherwise, but this may be an indication there is a similar issue with dropdowns and context menus elsewhere, so I do want to fix this as well.

Other miscellaneous changes include some adjustments to HTML5 media streaming and I have decided to reduce the default window and tab undos back to 31's level (6 and 2 respectively) so that the browser still gives up tenured memory a bit more easily. Unfortunately, there is not enough time to get MP3 support fully functional for final release. I plan to get this completed in a future version of 38.x, but it will not be officially supported until then (you can still toggle tenfourfox.mp3.enabled to use the minimp3 driver for those sites it does work with as long as you remember that seeking within a track doesn't work yet).

The localizer elves have French, German, Spanish, Italian, Russian and Finnish installers available. Our Japanese localization appears to have dropped off the web, so if you can help us, o-negai shimasu! Swedish just needs a couple of strings to be finished. We do not yet have Polish or Asturian, which we used to, so if you can help on any of these languages, please visit issue 42 where Chris is coordinating these efforts. A big thank you to all of our localizers!

Once the localizations are all in, the Google Code project will be frozen to prepare for the wiki and issue tracker moving to Github ahead of Google Code going read-only on 24 August. Downloads will remain on SourceForge, but everything else will go to Github, including the source tree when we eventually drop source parity. I was hoping to have an Elcapitanspoof up in time for 38's final release, but we'll see if I have time to do the graphics.

Watch for the next beta to come out by next weekend with any luck, which gives us enough time if there needs to be a third emergency release prior to the final (weekend prior to 11 August).

Finally, I am pleased to note we are now no longer the only PowerPC JavaScript JIT out there, though we are the only one I know of for Mozilla SpiderMonkey. IBM has been working on a port of Google V8 to PowerPC for some time, both AIX and Linux, which recently became an official part of the Google V8 repository (i.e., the PPC port is now officially supported). If you've been looking at nabbing a POWER8 with that money burning a hole in your pocket, it even works with the new Power ISA little endian mode, of which we dare not speak. Since uppsala, Floodgap's main server, is a POWER6 running AIX and should be able to run this, I might give it a spin sometime when I have a few spare cycles. However, before some of the freaks amongst you get excited and think this means Google Chrome on OS X/ppc is just around the corner, there's still an awful lot more work required to get it operational than just the JavaScript engine, and it won't be me that works on it. It does mean, however, that things like node.js will now work on a Power-based server with substantially less fiddling around, and that might be very helpful for those of you who run Power boxes like me.

Marcia KnousLibriFox emerges - try it now on your Firefox OS Device!

An update to my June 15th post about the group of students working on their own Firefox OS Summer of Code - as a result of their hard work, there is now a new app in the Firefox OS Marketplace - LibriFox! LibriFox is a native Firefox OS app that brings LibriVox.org audiobooks to your device. Alex Hirschberg did a great job taking this from concept to app, and I encourage all of you to download some audiobooks and try it out! While you are at it, please take a moment to review the app -

Hannah KaneQuick update: engagement on the MLN Site

Pledge to Teach

In my last post, I mentioned that we had recently launched the Pledge to Teach the Web. Since we launched it three weeks ago, 240 people have taken the pledge.

Of those who’ve taken the pledge, about a quarter of them have also completed a survey that we sent as a follow-up. The survey is helping us gain a better understanding of our audience, their contexts for teaching, and their needs. We’ll share an analysis of the survey results next month.

Site Traffic

Since we launched teach.mozilla.org back in April, we haven’t been particularly focused on driving traffic to the site. That changed recently, as we began our Maker Party promotion efforts in earnest. We started promoting Maker Party on both beta.webmaker.org and on mozilla.org. Those two referrals, along with our email campaign, led to our most highly trafficked week on the site since launch, during the lead-up to Maker Party. Our highest day was July 13th, when we had over 11K sessions. Since the initial bump, traffic has dropped back down again to between 1200 and 2500 sessions per day.

Unsurprisingly, the Maker Party page is the most popular content, after the homepage. The Activities page is the next most popular.


What we’re doing next with regard to user engagement

  • Adding analytics tracking to several things so we can better measure conversion rates.
  • Experimenting on pledge flow to increase conversion rate. One possibility is to make the pledge the only CTA on the homepage.
  • Determining our post-Maker Party strategy for people who take the pledge. We’re discussing ideas here.
  • Experimenting with “Community” link to increase Discourse activity. This is a larger-than-the-site effort, though. We can be promoting Discourse across all of our work.

Stuart ColvilleBack to the future: ES6 + React

I've just recently finished shaving about a billion yaks * to convert a React app over to use ES6 modules and classes so we can start living in the future that is ES6 with a sprinkling of ES7.

* Might not be true

Transpiling back to the present

We're using babel via webpack to transpile our ES6+ code into ES5. Babel exposes the various stages of ECMAScript proposals so you can choose whatever stages are appropriate for your project. For example Stage 0 will give you the most features, but that will include things like strawman proposals that may never actually see the light of day.

For our project we've tried to stick to Stage 2, which is Babel's default (Stage 2 covers draft specs), and then add to that a couple of more modern features which should hopefully make it to prime time.
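
For reference, here's a minimal sketch of the kind of webpack + babel-loader configuration this implies. The file layout and the Babel 5-era option names (stage, optional) are assumptions for illustration only, so check them against the Babel and webpack versions you're actually using:

// webpack.config.js – a minimal sketch, not our actual config
module.exports = {
  entry: './src/app.js',
  output: {
    path: __dirname + '/build',
    filename: 'bundle.js',
  },
  module: {
    loaders: [
      {
        test: /\.jsx?$/,
        exclude: /node_modules/,
        loader: 'babel-loader',
        // Stage 2 is Babel's default; whitelist the extra es7 transforms we want.
        query: {
          stage: 2,
          optional: ['es7.classProperties'],
        },
      },
    ],
  },
};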

The two features we've specifically added are:

ES6 Modules

ES6 modules are very powerful, but they take a bit of getting used to if you're coming from CommonJS or AMD modules.

One of the really great things is that you can export more than one thing by using a default export and exporting something else.
In this example the instance will be the default export that you'll get when importing this module:

E.g.

export class Tracking {  
    // Whatever
}

export default new Tracking();  

So now I have two options:

// Get an instance
import tracking from 'tracking';
tracking.init();

// Get the class (handy for tests)
import { Tracking } from 'tracking';
var tracking = new Tracking();
tracking.init();

There's lots of great info on the full range of what's possible with ES6 modules here.

React components with ES6 classes

When converting from ES5 there are quite a number of things to be aware of.

Setting default state

You can easily do this in the constructor rather than using a separate getInitialState method.

import React, { Component } from 'react';

class MyComponent  extends Component {  
    constructor() {
        super();
        this.state = {
            // Default state here.
        }
    }

    //... Everything else  
}
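
One caveat worth adding (a detail of how React's ES6 classes behave, not something from the snippet above): if you need to read this.props inside the constructor, you have to accept props and pass it through to super, roughly like this:

import React, { Component } from 'react';

class MyComponent extends Component {
    constructor(props) {
        // Without passing props through to super, this.props is
        // undefined inside the constructor.
        super(props);
        this.state = {
            // Default state here, possibly derived from props.
        };
    }
}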

Setting defaultProps and propTypes

When using ES6 classes, defaultProps and propTypes are defined as static class properties.

If you have es7.classProperties you can do this like so:

import React, { Component, PropTypes } from 'react';

class MyComponent  extends Component {

    static propTypes = {
        foo: PropTypes.func.isRequired,
        bar: PropTypes.object,
    }

    static defaultProps = {
        foo: () => {
               console.log('hai');
        },
        bar: {},
    }

    //... Everything else
}

The alternative to this (when you don't have the es7 shiny enabled) is to just set the props directly on the class, e.g.:

class MyComponent  extends Component {  
    //... Everything else
}
MyComponent.propTypes = {foo: PropTypes.func.isRequired};  
MyComponent.defaultProps = {foo: function() { console.log('hai'); }};  

I found this very ugly though, so static class props seems to be a good way to go for now.

isMounted() isn't available

If you'd used isMounted() then note this isn't available on ES6 classes extended from Component. If you really need it you can set a flag in componentDidMount and unset it again in componentWillUnmount, as in the sketch below.
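
Here's a minimal sketch of that flag approach, assuming some asynchronous callback (handleAsyncResult is a made-up name for illustration):

import React, { Component } from 'react';

class MyComponent extends Component {
    componentDidMount() {
        // Track mounted state ourselves since isMounted() isn't available.
        this._isMounted = true;
    }

    componentWillUnmount() {
        this._isMounted = false;
    }

    handleAsyncResult(data) {
        // Only touch state if the component is still mounted.
        if (this._isMounted) {
            this.setState({ data: data });
        }
    }
}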

Method binding

When passing methods around as props you'll find that with ES6 classes this doesn't refer to the component like it used to with React.createClass.

To get back the same behaviour without using something ugly like:

<Foo onClick={this.handleClick.bind(this)} />  

You can use an arrow function instead:

class MyComponent  extends Component {  
    handleClick = (e) => {
        // this is now the component instance.
        console.log(this);
    }
}

This works because arrow functions capture the this value of the enclosing context. In other words, an arrow function gets its this value from where it was defined, rather than where it's used.

If you want to get really fancy you can use "function bind syntax".

Which you get by doing this:

<Foo onClick={::this.handleClick} />  

Which is equivalent to the previous bind() example. But to do that you'd need to enable es7.functionBind. For me this was a step too far and I'm happy to stick with the arrow functions.

Testing with ES6/7

Our previous tests had made some good use of rewire.js to modify requires and vars that weren't exported from the module (it does this with some cheeky injection techniques).

Unfortunately, due to some changes in Babel, rewire isn't compatible (at the moment) with ES6 modules. There's an alternative, babel-plugin-rewire, but that injects getters and setters into every module.

Dependency Injection in React Classes

Instead of using module mangling it was easiest to fall back to dependency injection for the places where we were changing Components to fake ones.

In React props look like a good fit for dep injection:

import DefaultOtherComponent from 'components/other';

class MyComponent extends Component {

    static defaultProps = {
        OtherComponent: DefaultOtherComponent,
    }

    //... Snip

    render() {
        var OtherComponent = this.props.OtherComponent;
        return (
          <OtherComponent {...this.props} />
        );
    }
}

Now in the test we can inject a fake component:

TestUtils.renderIntoDocument(  
  <MyComponent OtherComponent={FakeOtherComponent} />
);

getDOMNode() is deprecated

In ES6 React classes getDOMNode() is not present so you'll need to refactor all calls to MyComponent.getDOMNode() to React.findDOMNode(MyComponent) instead. The docs also mention it's deprecated so it might be removed completely in the future.
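
As a quick before/after sketch (the container ref name here is made up for illustration):

import React, { Component } from 'react';

class MyComponent extends Component {
    componentDidMount() {
        // Before, with React.createClass: this.refs.container.getDOMNode()
        // After, with ES6 classes:
        var node = React.findDOMNode(this.refs.container);
        console.log(node.offsetHeight);
    }

    render() {
        return <div ref="container" />;
    }
}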

Summary

There's quite a bit of work in making the conversion, depending on how big your code-base is and how much ES6 you're using already.

A lot of the new features in ES6 are making things much easier for developers so I'd definitely say it's a direction worth moving in assuming you're comfortable with all the aspects of transpilation. Webpack + babel makes this pretty straightforward. If you're already using babel for JSX (or some other similar loader for JSX) then you're already most of the way there.

FWIW: if you're not using babel, switching to it is now an official recommendation.

I'm very impressed with the current ecosystem around JS, e.g. tools like babel and webpack alongside forward-thinking libs like React + Redux. All of these things sit well with each other and allow projects like ours to be able to step into the future today.

Now we've just got to get used to all the new syntax and start using more of it!

Further reading

Credits

  • Image By Terabass (Own work) CC BY-SA 4.0, via Wikimedia Commons

Chris CooperRelEng & RelOps Weekly highlights - July 24, 2015

Welcome back. When we last left our heroes, they were battling the combined forces of technical debt and a lack of self-service options. We join the fight already in progress…

Kapow!

Modernize infrastructure: To pave the way for creating continuous integration (CI) automation for Windows 10, Q is auditing all of our Windows 8 GPOs to determine which will work as-is on Windows 10, which will no longer be needed, and which will require rewriting to work on the new platform (https://bugzil.la/1185844).

Dustin has completed the taskcluster scope handling audit and has reported his findings back to the team and filed bugs for remediation.

Rail has deployed a change that allows us to specify docker images by their sha256 in TaskCluster, reducing the risk of MITM attacks. This was one of the hard security blockers for Funsize (https://bugzil.la/1175561).

After much discussion, we’ve chosen to move forward with installing hg as an EXE on Windows for the time being. Mark is implementing this method so we can continue progress towards moving Windows 2008 builds into AWS (https://bugzil.la/1170588).

Morgan has a prototype of some github/TaskCluster integration: if your project lives in github/mozilla, you can drop a .taskclusterrc file in the base of your repository and the jobs will just start running after each pull request (http://linuxpoetry.com/blog/23/)

Dustin is migrating treestatus to relengapi, removing one of the blockers to exiting the PHX1 datacenter and centralizing another of our many web apps (https://bugzil.la/1181153).

Amy is working on replacing the servers we use to image mac builders and testers. This will allow us to perform backups of critical information and will prepare us for new OS X 10.10 hardware that’s in the purchasing pipeline now (https://bugzil.la/1186197).

Kim disabled Android 4.0 test jobs by default on Try as another step toward disabling Pandas as a test platform as we move Android 4.3 test jobs to emulators on AWS (https://bugzil.la/1184117)

Today is the last day of Anhad’s internship. :( His end-of-internship presentation is now available on Air Mozilla: http://mzl.la/1JlpT0P This week he met with Anthony to hand off his work on getting Windows builds working with the generic worker in TaskCluster. (https://bugzil.la/1180775)

Improve release pipeline: Ben worked with OpSec to generate a new GPG signing key (replacing our expired one) and deploy it to our Nightly and Release signing servers. We are also working to improve the monitoring around signing key expiry to avoid future fire drills.

Improve CI pipeline: Jordan has re-deployed the change that switches over all future CI and release jobs to using a Gecko-based copy of Mozharness (http://jordan-lund.ghost.io/mozharness-goes-live-in-the-tree/).

Ben has been working towards stopping automated builds of XULRunner for the Firefox 42.0 cycle, starting September 22nd, 2015 (https://groups.google.com/forum/#!topic/mozilla.dev.planning/mmNWxHOt_lw)

Release: Firefox 40 is currently in beta. We’re up to b7 now.

Operational: Amy and Coop have worked with DCops to re-balance the Linux/Windows test pools, removing 30 machines from the Linux talos pools and increasing capacity by 10 machines each in the Windows XP, Windows 7, and Windows 8 test pools (https://bugzil.la/1151591).

We giveth: This week we enabled the B2G 2.2r branch. (https://bugzil.la/1177598)

…and we taketh away: We also disabled many obsolete B2G builds/branches to improve throughput and reclaim capacity.

vcs-sync is now running in AWS! Hal made the official switch this week after running both setups in parallel for a while. This allows us to retire some ancient hardware in the datacenter.

Callek touched over 55 bugs this week as buildduty, many of them during triage and resolution of machine loans. (http://tinyurl.com/nhhpjyr)

Will our heroes emerge victorious? Tune in next week!

Support.Mozilla.OrgWhat’s up with SUMO – 24th July

A warm “hello” to all our readers across SUMO and Planet Mozilla! Here we go with another round of the good, the bad, and the reminders… ;-)

Say “hello” to the newcomers!

If you joined us recently, don’t hesitate – come over and say “hi” in the forums!

Contributors of the week

  • Big thanks to Safwan, Orvi, Anush and pollti for all their Kitsune fixes the last few months!
  • Kudos to pollti once again, for his mozilla.org SUMO fixes!
  • Our awesome Arabic localizers for hitting the Top 20 Firefox OS milestone!
  • Tobias for helping with the final push of getting German l10n to 100%!
 We salute you!

Last Monday SUMO Community meeting

Reminder: the next SUMO Community meeting…

  • …is going to take place on Monday, 27th of July. Join us!
  • If you want to add a discussion topic to the upcoming live meeting agenda:
    • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Monday (this will make it easier to have an efficient meeting).
    • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).

Help needed – thank you!

Let’s work together on the script for the tutorial videos for SUMO l10n

    Use the “comments” feature to leave your feedback and ideas in the document.

Developers

Community

Support Forum

Knowledge Base

Firefox

Firefox OS

Thunderbird

  • Reminder: as of July 1st, Thunderbird is 100% Community powered and owned! You can still +needinfo Roland in Bugzilla or email him if you need his help as a consultant. He will also hang around in #tb-support-crew.

Don’t forget that we are on Twitter! And don’t forget to cool off in the heat of the cruel summer (if you happen to have one, that is). See you on Monday!

Mozilla Reps CommunityReps Weekly Call – July 23rd 2015

Last Thursday we had our weekly call about the Reps program, where we talk about what’s going on in the program and what Reps have been doing during the last week.

fxos

Summary

  • Firefox OS Foxfooding.
  • Featured events.
  • Webmaker app launch in Indonesia.

AirMozilla video

Detailed notes

Shoutouts to Ahmed Nefzaoui and Yofie Setiawan (@yofiesetiawan) as the Reps of the Month!

Firefox OS Foxfooding

The Foxfooding program started as an employee initiative but it will be expanded to mozillians soon.

A good way to contribute now is to check the Foxfooding dashboard and help triage bugs. We also have an app (Triagr) to help with this: log in with your Bugzilla user and you can install or try it. Feel free to follow @foxfooding on Twitter.

Discourse topic about Foxfooding.

There is also the B2gdroid initiative to be able to experience Gaia directly on an Android device.

The future for Firefox OS is very interesting, with NGA (New Gaia Architecture) and new features like stream apps, share apps, hack apps, and share modifications (Hackerspace).

Featured events

  • Maker Party Events:
    • Kathmandu (Nepal), 25th. Rep: Surit Aryal
    • Stockholm (Sweden), 25th. Rep: Åke Nygren
    • Kanpur (India), 28th. Rep: Chandan Kumar Baba
    • Sertaozinho (Brazil), 29th. Rep: Sergio Oliveira
    • And many more events in: https://events.webmaker.org/events
  • HackDehli – New Delhi (India). 25th-26th.
  • Mozilla Hindi Community Meetup – New Delhi (India). 25th-26th
  • Firefox OS Launch week 6 in Mauritius – 25 July 2015 in the South of Mauritius at Mahebourg.

Webmaker app launch in Indonesia

The launch is going to happen in August with the training help of WilliamQ, Bobby, Paul, Kevin.

The preparation is still in progress, including localization and planning for event content.

Raw etherpad notes.

Don’t forget to comment about this call on Discourse and we hope to see you next week!

Alex VincentIntroducing a WebGL-DOM Visualization Tool

Repository: https://bitbucket.org/verbosio/webgl-dom

Home page: https://alexvincent.us/webgl-dom/

tl;dr:  I have a new way of visualizing DOM trees in WebGL, and I’m looking for volunteers to improve the basic tool, especially on the WebGL side with three.js.

I’ve run into some trouble with an experimental Document Object Model.  Specifically, I’m trying to visualize it, but I’m dealing with multiple dimensions:

  • The parent-child node relationships
  • Sibling nodes
  • Attributes of DOM Element nodes
  • Atomic change sets
  • and at least two models of “shadow content”.

About four to six weeks ago, I realized I needed a tool to not only debug the DOM I’m trying to build, but to simulate new ideas that I haven’t yet implemented.  So I started building a WebGL-based visualization tool.

The tool currently has two main tasks:

  1. Transforming an XML document into a specific JSON format
  2. Generating a tree diagram in WebGL from JSON documents in the specified format

Most importantly, hand-editing this JSON should allow me to show ideas that currently cannot be done in the standard DOM.  The “WebGL Inspector sample” link from the home page shows this:  it takes only JSON as input and renders the tree.

It is very primitive right now, a mere starting point.  I’m posting this now, hoping to find a volunteer who’s more familiar with WebGL / three.js than I am to improve on the rendering parts.  The image is static:  there’s no zoom, pan, or rotation support whatsoever.  I really would like some help there.

Also, it doesn’t work in Google Chrome, but that’s because I had to specify type=”application/javascript;version=1.8” to make it work in Mozilla Firefox 39+.  (I like ECMAScript 6th edition and strict mode, thank you very much.  I just wish it worked without versioning.  I understand that’s Coming Soon.)

There is some click support:  clicking on a sphere should give details about the corresponding DOM Node, including the nodeName and nodeType.

If anyone out there likes the 3-D visualization idea and wants to reimplement it in the Firefox Developer Tools, be my guest.  Though the Tilt add-on for Firefox is more practical right now.

Nick DesaulniersAdditional C/C++ Tooling

21st Century C by Ben Klemens was a great read. It had a section with an intro to autotools, git, and gdb. There are a few other useful tools that came to mind that I’ve used when working with C and C++ codebases. These tools are a great way to start contributing to Open Source C & C++ codebases; running these tools on the code or adding them to the codebases. A lot of these favor command line, open source utilities. See how many you are familiar with!

Build Tools

CMake

The first tool I’d like to take a look at is CMake. CMake is yet another build tool; I realize how contentious it is to even discuss one of the many. From my experience working with Emscripten, we recommend the use of CMake for people writing portable C/C++ programs. CMake is able to emit Makefiles for unixes, project files for Xcode on OSX, and project files for Visual Studio on Windows. There are also a few other “generators” that you can use.

I’ve been really impressed with CMake’s modules for finding dependencies and another for fetching and building external dependencies. I think C++ needs a package manager badly, and I think CMake would be a solid foundation for one.

The syntax isn’t the greatest, but when I wanted to try to build one of my C++ projects on Windows which I know nothing about developing on, I was able to install CMake and Visual Studio and get my project building. If you can build your code on one platform, it will usually build on the others.

If you’re not worried about writing cross platform C/C++, maybe CMake is not worth the effort, but I find it useful. I wrestle with the syntax sometimes, but documentation is not bad and it’s something you deal with early on in the development of a project and hopefully never have to touch again (how I wish that were true).

Code Formatters

ClangFormat

Another contentious point of concern amongst developers is code style. Big companies with lots of C++ code have documents explaining their stylistic choices. Don’t waste another hour of your life arguing about something that really doesn’t matter. ClangFormat will help you codify your style and format your code for you to match the style. Simply write the code however you want, and run the formatter on it before committing it.

It can also emit a .clang-format file that you can commit and clang-format will automatically look for that file and use the rules codified there.

Linters

Flint / Flint++

Flint is a C++ linter in use at Facebook. Since it moved from being implemented in C++ to D, I’ve had issues building it. I’ve had better luck with a fork that’s pure C++ without any of the third party dependencies Flint originally had, called Flint++. While not quite full-on static analyzers, both can be used for finding potential issues in your code ahead of time. Linters can look at individual files in isolation; you don’t have to wait for long recompiles like you would with a static analyzer.

Static Analyzers

Scan-build

Scan-build is a static analyzer for C and C++ code. You build your code “through” it, then use the sibling tool scan-view to see the results. Scan-view will emit and open an html file that shows a list of the errors it detected. It will insert hyperlinks into the resulting document that step you through how certain conditions could lead to a null pointer dereference, for example. You can also save and share those html files with others in the project. Static analyzers will help you catch bugs at compile time before you run the code.

Runtime Sanitizers

ASan and UBSan

Clang’s Address (ASan) and Undefined Behavior (UBSan) sanitizers are simply compiler flags that can be used to detect errors at runtime. ASan and UBSan are two of the more popular tools, but there are actually a ton and more being implemented. See the list here. These sanitizers will catch bugs at runtime, so you’ll have to run the code to notice any violations, at variable runtime performance costs per sanitizer. ASan and TSan (Thread Sanitizer) made it into gcc4.8 and UBSan is in gcc4.9.

Header Analysis

Include What You Use

Include What You Use (IWYU) helps you find unused or unnecessary #include preprocessor directives. It should be obvious how this can help improve compile times. IWYU can also help cut down on recompiles by recommending forward declarations under certain conditions. I look forward to the C++ module proposal being adopted, but until then this tool can help you spot cruft that can be removed.

Rapid Recompiles

ccache

ccache greatly improves recompile times by caching the results of parts of the compilation process. I use it when building Firefox, and it saves a great deal of time.

distcc

distcc is a distributed build system. Some folks at Mozilla speed up their Firefox builds with it.

Memory Leak Detectors

Valgrind

Valgrind has a suite of tools, my favorite being memcheck for finding memory leaks. Unfortunately, it doesn’t seem to work on OSX since 10.10. This page referring to ASan seems to indicate that it can do everything Valgrind’s Memcheck can, at less of a runtime performance cost, but I’m not sure how true this is exactly.

leaks

For a much more primitive tool for finding leaks from the command line, BSDs have leaks.

MallocStackLogging=1 ./a.out
leaks a.out
...

Profilers

Perf

Perf, and Brendan Gregg’s tools for emitting SVG flamegraphs from the output, are helpful for finding where time is spent in a program. In fact, there are numerous performance analysis tools that are Linux specific. My recommendation is to spend some time on Brendan Gregg’s blog.

DTrace

OSX doesn’t have the same tooling as Linux, but DTrace was ported to it. I’ve used it to find sampling profiles of my code before. Again, Brendan Gregg’s blog is a good resource; there are some fantastic DTrace one liners.

Debuggers

lldb

lldb is analogous to gdb. I can’t say I have enough experience with LLDB and GDB to note the difference between the two, but LLDB did show the relative statements forward and back from the current statement by default. I’m informed by my friends/mortal enemies using emacs that this is less of an issue when using emacs/gdb in combination.

Fuzzers

American Fuzzy Lop

American Fuzzy Lop (AFL) is a neat program that performs fuzzing on programs that take inputs from files: it repeatedly runs the program, modifying the input to try to get full code coverage and to find crashes. It’s been getting lots of attention lately, and while I haven’t used it myself yet, it seems like a very powerful tool. Mozilla employs the use of fuzzers on their JavaScript engine, for instance (not AFL, but one developed in house).

Disassemblers

gobjdump

If you really need to make sure the higher level code you’re writing is getting translated into the assembly you’re expecting, gobjdump -S will intermix the emitted binary’s disassembled assembly and the source code. This was used extensively while developing my Brainfuck JIT.

Conclusion

Hopefully you learned of some useful tools that you should know about when working with C or C++. What did I miss?

Mike HommeyFirefox nightlies for Linux are now using Gtk+3

As of last nightly (2015-07-23), Firefox for Linux is using Gtk+3 instead of Gtk+2.

Thanks to the recent efforts of Andrew Comminos, all remaining test failures are gone and mozilla-central now defaults to Gtk+3 builds. Some jobs on treeherder are still not converted, but this will come soon (bug 1186748).

If you’ve been using elm builds for dogfooding, you should be automatically switched to standard nightlies today or tomorrow. The elm branch will be recycled to do Gtk+2 builds so that they keep working. Those builds won’t be auto-updating, so don’t use them.

Joel Maherlost in data – episode 1, tackling a bunch of alerts

Today I recorded a session of me investigating talos alerts. It is ~35 minutes, which is sort of long, but doable in the background whilst grabbing breakfast or lunch!

I am looking forward to more sessions and answering questions that come up.


David BurnsMicrosoft ship a WebDriver implementation

Microsoft, the people some still claim to be evil (who are actually big proponents of the Open Web), have... (wait for it...) SHIPPED. AN. IMPLEMENTATION. OF. WEBDRIVER!

At GTAC in California in 2011, Simon Stewart and I discussed that the Selenium project was at a crossroads (I am pretty sure beer was involved). We could, and should, move this to the browser vendors. We had seen how in April the Chrome team had shipped their implementation and the Selenium project then deleted its implementation. I am pretty sure Daniel (Wagner-Hall) was glad to see it go.

Since then we have seen Mozilla get Marionette into Firefox and into release branches since Firefox 24 while slowly working on it (as well as Firefox OS support). We have seen Blackberry ship a version for the browser on their devices. We have seen mobile implementations with iOS-Driver, Selendroid and Appium.

The Spec is on track to be put forward for Recommendation by the end of the year. All the dreams that we (the Selenium Development team (my BFFs)) had are slowly coming true. This ship might be slow moving but it's mostly because some companies haven't always seen the value.

so...

P.s. There is an open bug on the WebKit tracker for Safari support (and it is getting some internal push so I am hopeful!)

About:CommunityTen years of evolution of MDN

Logo for MDN 10 year anniversary

This week marks the tenth anniversary of Mozilla Developer Network (MDN) as a wiki. This post offers a deep dive into where MDN came from, how it has evolved in various ways, and where it may be going.

(This post is based in large part on a round table discussion about MDN that was held during the “Hack on MDN” weekend in Berlin in April 2015, and on Florian Scholz’s history of MDN’s JavaScript documentation.)

What is MDN today?

For many web developers, MDN is the reference manual for the Web, the place they go to look up or learn about open web technologies. MDN offers much more than that. It is a resource for learning about the Web, and a place for developers to share their skills and knowledge. MDN’s strength lies in its openness, where anybody can help make the resources incrementally better, or substantially better. MDN can also encourage the growth of web technologies into new spaces from where they’ve been in the past.

MDN is a community of programmers, writers, and localizers. A few of these are paid staff of Mozilla, but they are a subset of the larger community of people who make small or large contributions.

One of the coolest things for those who contribute to MDN is that every time we talk to developers, they tell us how much they love MDN. It’s not, “Oh MDN is kind of cool,” or “It’s great.” The response is: “I love MDN. It’s the best resource out there.” It’s tremendously gratifying to feel that you are part of something that people really love.

Who is MDN for?

MDN serves a variety of audiences:

  • Web developers, first and foremost
  • People who want to teach themselves about web development
  • Teachers of web development skills and concepts
  • Developers in the ecosystem of Mozilla products, such as Firefox add-ons, and Firefox OS apps
  • Developers working on the Mozilla codebase

Beginnings of MDN

The site that became MDN, developer.mozilla.org or “Devmo,” started as a redirect to a developer-oriented page on the main mozilla.org site. Later that content was moved over to Devmo; it contained primarily information for developers contributing to the Mozilla codebase.

Just as the Mozilla project emerged from the remains of Netscape, so MDN as we know it began with documentation originally written at Netscape. The site known as “Netscape DevEdge” documented web technologies, such as JavaScript, and other things that were implemented in Netscape products. After Netscape was acquired by AOL, the DevEdge site eventually was shut down, and that information disappeared from the Web.

Mitchell Baker (Chair of Mozilla) and others from Mozilla worked out an arrangement with AOL to release the DevEdge content, which she announced in February 2005. At the same time, Deb Richardson was hired to migrate and curate the DevEdge content into Devmo.

Mitchell and Deb made the decision to put the content into a wiki, to enable open contributions to maintain and update the content. Previously, the DevEdge content was in a CVS source control system, and published as a static site. Using a wiki for documentation was a novel concept at the time, and it took a while for some core developers within the Mozilla project to warm to this way of working. However, others quickly embraced the idea, and many of the earliest contributors to the Devmo wiki were developers who were active elsewhere in the Mozilla project.

Deb and some volunteers spent a few months mining and migrating the still-useful content from DevEdge, working on a test server. This effort was still in progress when the content was migrated to the Devmo site, now titled “Mozilla Developer Center” or “MDC” in July 2005. We mark this as the starting point of what we now call Mozilla Developer Network.

Platform evolution

MDN has lived on three different wiki platforms in the course of its history: first MediaWiki, then MindTouch DekiWiki, and now Kuma, a Mozilla-developed platform. Looking at the technical infrastructure is interesting not just from a technical point of view, but also because technology influences social structures like community.

MediaWiki

The wiki platform used for the first iteration of MDC was MediaWiki, the open source software that underlies Wikipedia. It was the most robust and widely-used wiki at the time. The Devmo project began to discover that software designed for writing a general-purpose encyclopedia was not necessarily ideal for writing developer-oriented technical documentation. For example, it did not handle code examples well, reformatting them to be unreadable. Mozilla tried to fix such issues by creating its own fork of MediaWiki, which then ended up being quite difficult to maintain.

On the level of contributions, using MediaWiki was initially an advantage, because many technical people were already familiar with how to use it. However, the project eventually reached a plateau, where it became difficult to keep contributors coming back. This, combined with the technical quirks, led to a search for a more user-friendly platform.

Dekiwiki

After an evaluation process that looked at all the wiki products available on the market (not just open source ones), the choice was made of DekiWiki by MindTouch. One advantage of DekiWiki was that the source format for articles was HTML, rather than wiki markup. It seemed a logical choice for a site targeting web developers, to have the source format be a standard web language. This required migrating all of the content from MediaWiki markup format to HTML, which was a major migration project. The choice of DekiWiki was announced in November 2007, and the site switched to it in August 2008.

While DekiWiki was a quality product, one way that the selection process was flawed was that it did not include a major group of stakeholders: volunteers who contributed to the site. The rate of contribution nose-dived, because the platform was not embraced by the contributor community. In particular, localization communities, who translated the content into various languages other than English, were severely disrupted. They had built tools and processes around working with MediaWiki, and these tools didn’t work with DekiWiki. After a few months, many of these groups simply disbanded and decided not to contribute any more, with the result that translated documentation started to become stale, and became more and more out of date over time.

DekiWiki was also written in C#, and designed to run in a Microsoft .NET environment. This was a mismatch with Mozilla’s technical infrastructure, which is Linux-based. Trying to run DekiWiki on Mono led to a great deal of instability, with the site being down for days and weeks at times.

These issues, after a couple of years, led to looking for another solution. The best candidates on the market were still MediaWiki and DekiWiki. Now that the content was all written in HTML, migrating back to MediaWiki markup syntax was not feasible. No product seemed suited to the specific needs of an open developer documentation site, so Mozilla decided to create its own.

Kuma

The current platform for MDN, known as Kuma, is written in Python with Django. It started as a fork of Kitsune, the platform for the Mozilla support site, and was adapted to the needs of a wiki site for developers rather than end users. (Also, “kitsune” means “fox” in Japanese, and “kuma” means “bear”. Because users : foxes :: developers : bears, right?)

Like DekiWiki, Kuma uses HTML as the source format for the content. The migration effort in this case was converting the scripts and macros used on the site. DekiWiki used “DekiScript,” based on Lua, while Kuma introduced KumaScript, which is based on JavaScript, using Node.js. KumaScript is the brainchild of developer Les Orchard. As with creating content in HTML, KumaScript means that MDN is implemented using the same technologies that it documents, and that its contributors are familiar with. It was possible to migrate about 70% of the existing macros automatically, but the rest had to be manually converted.

The goal when launching the Kuma platform was to achieve feature parity with the DekiWiki implementation of MDN. The content was migrated to the new system, and changes from the production server were periodically applied to the Kuma staging server. Thus, the Kuma instance was kept in sync with the production DekiWiki server. While the months leading up to the launch of Kuma involved a great deal of migration work, the actual launch was very smooth. A routing switch was flipped, and traffic shifted to the new site seamlessly, without even disrupting login sessions.

Community evolution

From the beginning, the community for the DevMo site grew organically, starting with contributors who were already active in other parts of the Mozilla project. Like other areas of Mozilla, communication happens through a mailing list and an IRC chat channel. By mid-2007, contributions were typically around 250 per month. As mentioned before, the migration to DekiWiki led to a dramatic drop-off in localization contributions, and total contributions declined as well.

Doc contributors sitting around a table, in Paris, in October 2010

MDN doc sprint, Paris 2010


As part of an effort to engage the community more actively, I (Janet Swisher) was hired as a staff technical writer in mid-2010. I brought experience with open source developer documentation, and in particular, experience with the “book sprints” methodology used by the FLOSS Manuals project to produce manuals for free software in five days or less. The first MDN “doc sprint” took place in October 2010, in the Mozilla Paris office. Doc sprints bring together a number of MDN contributors, either physically or virtually, for focussed, collaborative work, typically over a weekend. These were held about once a quarter for about three years. More recently, they have evolved into less frequent but broader “Hack on MDN” events, whose scope includes hacking on the platform or tools, as well as on content, to make them more attractive to developers.
Idea pitches at HackOnMDN weekend, Berlin 2015


In addition, the MDN community holds a number of regular online meetings, both for general information, and for tracking specific projects. These community activities, as well as the migration to Kuma in 2012, have led to a significant increase in contributions, now around 1000 per month.

Branding evolution

In the beginning, the DevMo site was known as “Mozilla Developer Center.” At first, it simply sported that title, with a basic skin on MediaWiki. With the move to DekiWiki, the word “Mozilla” became the Mozilla wordmark, followed by “<developer center/>”, to convey slightly more webbiness.

Wordmark for Mozilla Developer Center

In September 2010, the name of the site was changed from “Mozilla Developer Center” to “Mozilla Developer Network” or MDN. This change was met with some skepticism from the developer audience at the time, though by now they simply accept MDN as MDN. The visual design of the site changed at the same time, to a darker theme, and MDN acquired a logo, the “robot dino,” which it had never had before.

MDN robot dino logo

Along with these visual changes, features were added to the site to broaden its scope beyond just documentation. One successful feature is known as “Demo Studio”, an area where developers can upload code demos, share them, and show them off.

When MDN migrated from DekiWiki to Kuma, the visual appearance was preserved, so there was very little visual difference between the pre- and post-migration sites. After six to eight months of bug-fixing on Kuma, a project was started to change not only the visual design, but also the content structure. These changes were rolled out using feature flags, to users who chose to be beta testers. Thus, most users continued to see the old design, while beta testers saw and tested the new visuals and structure. “Launch day” for the redesign consisted of simply flipping a switch in the database to make the new features visible to everybody.

The redesign brought not only a new logo, the dino-head-map that we see today, but also structural features like the navigation sidebar, which varies depending on which content area an article is in. In localized pages, items in the sidebar that are not yet translated link to their English versions, and show an invitation to translate them.

Content evolution

We mark the start of MDN “as we know it” from the acquisition and republication of the Netscape DevEdge content in 2005. But in the early days, the content was very slanted toward Mozilla products and technology. Not only was there documentation of XUL and internal Mozilla APIs (which are still there), but documentation of web technologies tended to be focused on Mozilla and Firefox, for example, with big banners like “works in Firefox 2.0” or explanations of Gecko’s support of a feature in the middle of an otherwise neutral article.

As Mozilla began engaging more actively with the MDN community in 2010, community members began to express a vision of MDN as a vendor-neutral resource for web developers, whatever browsers they are targeting. Adopting this as a strategy required a lot of clean-up effort to remove Firefox-specific content from articles about web standards, and to create the compatibility tables that exist now, with information about all major browsers. Not coincidentally, as the content on MDN became more browser-agnostic, MDN started seeing contributions from other organizations.

MDN today and tomorrow

Two current projects on MDN are having a major impact on the shape of MDN in the near to medium term: the Learning area, and the compatibility data project.

MDN’s information about web technologies has long been a resource for experienced web developers. But it has poorly supported beginners to web development. The aim of the Learning area is to change that by offering tutorials and other resources to people who want to teach themselves about web development. This effort is happening in response to surveys we’ve done of our audience, who reported basic learning material as a significant gap. The Learning area project has been underway for about a year, and in that time has created a large Glossary about web technology concepts, and a number of new tutorials, corresponding to the Web Literacy Map developed by the Mozilla Foundation. The Learning area is a great opportunity to get started in contributing to MDN, since learners and teachers are as needed as technical experts.

Currently, data on MDN about browsers’ compatibility with web technology features is maintained in tables on the relevant pages. The data is pretty good, thanks to many, many crowd-sourced contributions. But this approach is not very sustainable or maintainable; for example, every table must be replicated on all localized versions of the page. The compatibility data project aims to improve the quality of the data, make data contribution easier, make access to the data easier, and allow reuse of the data, through a centralized data store. This project is action-driven rather than time-bound; contributions and involvement are welcome.

MDN in 10 years?

MDN as it exists today is quite different from its beginnings ten years ago. The Web has evolved, Mozilla has evolved, and MDN has evolved. We can expect even greater changes in the next ten years. Perhaps the vision of a direct brain interface to virtual reality “cyberspace” will finally come to pass. We know for sure there will be many more web developers, many more types of devices, and many standards that are not yet written.

Some things won’t change: Mozilla’s mission will continue to be to work towards an Internet that is a global public resource, open and accessible for all. MDN will continue to be a means towards that mission, by providing resources to enable anyone to become a creator of the Web, and to develop on the Web as a primary platform. MDN’s content, no matter how it’s delivered, will continue to be contributed by a global community of people who are passionate about learning and sharing knowledge about the Web.

Andreas TolfsenOptimised Rust code in Gecko

Rust wave by Glen Scott (CC BY-NC 2.0)

Following bug 1177608, Rust code in Gecko is now compiled with optimisation by default.

Compilation of Rust components can be enabled by adding ac_add_options --enable-rust to your mozconfig file. For now there’s only limited usage of Rust in Gecko, but you can take a look at the bindings for the MP4 encoder if you’re interested. This is largely thanks to the work of Ralph Giles.

Since optimised compiles are the default, you will now also get optimised output from rustc. You can disable this by setting ac_add_options --disable-optimize, which turns off optimisation for the cc, c++, and rustc compilers alike.
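As a rough sketch, and assuming an otherwise standard mozconfig, the two options mentioned above sit together like this:

    # Build the in-tree Rust components (optimised by default):
    ac_add_options --enable-rust
    # Uncomment to turn optimisation off for cc, c++ and rustc alike:
    # ac_add_options --disable-optimize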

Next up is adding debug symbols and assertions, as well as some Rust autoconf macros.

Air MozillaWeb QA Weekly Meeting

Web QA Weekly Meeting This is our weekly gathering of Mozilla's Web QA team filled with discussion on our current and future projects, ideas, demos, and fun facts.

Air MozillaJuly Brantina on Prototyping with Tom Chi

July Brantina on Prototyping with Tom Chi At this month's July 23 Brantina, Tom Chi (one of the founders of GoogleX) will share some best practices as well as things to avoid...

The Mozilla BlogMDN celebrates 10 years of documenting YOUR Web

Today, Mozilla proudly celebrates the 10th anniversary of the Mozilla Developer Network, one of the richest and also one of the few multilingual resources on the Web for documentation. It started in February 2005, when a small team dedicated to the open Web took DevEdge (Netscape’s developer materials) and set out to create an open, free, community-built online resource for all Web developers. Just a couple of months later, on 23 July, 2005 the original MDN wiki site launched and has evolved steadily ever since for the convenience and the benefit of its users.

Today, ten years later, not only has the amount of documentation grown – 34,500 documents and climbing – but also MDN’s global volunteer community is bigger than ever. Currently, MDN has more than 4 million users and over 1000 volunteer editors per month creating and translating documentation, sample code, tutorials and other learning resources for all open Web technologies, including CSS, HTML, JavaScript and everything that makes the open Web as rich and versatile as it is.

For a wide range of Web developers, from learners to hobbyists to full-time professionals, MDN provides useful explanations for coding practice. It aims to inspire ideas, encourage collaboration and innovation and, ultimately, foster the growth of the open Web. Moreover, as the digital industry flourishes and the demand for coding skills at a young age rises, the importance of well-organized resources like MDN grows exponentially. That is why in 2014 MDN started to gather and expand all its learning pages into a “Learn the Web” area for beginning web developers, including a web terminology glossary, which MDN’s technical writers and volunteers will continue to develop over the coming years.

All these efforts, which would not be possible without the active MDN volunteer base, are greatly appreciated by developers from all over the world who would not be doing what they do without MDN – or at least not as well.

Let’s hear it for MDN!

For more information:

Web: https://developer.mozilla.org/
MDN at 10: https://developer.mozilla.org/en-US/docs/MDN_at_ten
Twitter: https://twitter.com/MozDevNet

All graphics are also available in French, German, Italian, Spanish and Polish.

Daniel PocockUnpaid work training Google's spam filters

This week, there has been increased discussion about the pain of spam filtering by large companies, especially Google.

It started with Google's announcement that they are offering a service for email senders to know if their messages are wrongly classified as spam. Two particular things caught my attention: the statement that less than 0.05% of genuine email goes to the spam folder by mistake and the statement that this new tool to understand misclassification is only available to "help qualified high-volume senders".

From there, discussion has proceeded with Linus Torvalds blogging about his own experience of Google misclassifying patches from Linux contributors as spam and that has been widely reported in places like Slashdot and The Register.

Personally, I've observed much the same thing from the other perspective. While Torvalds complains that he isn't receiving email, I've observed that my own emails are not always received when the recipient is a Gmail address.

It seems that Google expects their users to work a little bit every day going through every message in the spam folder and explicitly clicking the "Not Spam" button:

so that Google can improve their proprietary algorithms for classifying mail. If you just read or reply to a message in the folder without clicking the button, or if you don't do this for every message, including mailing list posts and other trivial notifications that are not actually spam, more important messages from the same senders will also continue to be misclassified.

If you are not willing to volunteer your time to do this, or if you are simply one of those people who has better things to do, Google's Gmail service is going to have a corrosive effect on your relationships.

A few months ago, we visited Australia and I sent emails to many people who I wanted to catch up with, including invitations to a family event. Some people received the emails in their inboxes yet other people didn't see them because the systems at Google (and other companies, notably Hotmail) put them in a spam folder. The rate at which this appeared to happen was definitely higher than the 0.05% quoted in the Google article above. Maybe the Google spam filters noticed that I hadn't sent email to some members of the extended family for a long time and this triggered the spam algorithm? Yet it was at that very moment, while we were visiting Australia, that email needed to work reliably with that type of contact, as we don't fly out there every year.

A little bit earlier in the year, I was corresponding with a few students who were applying for Google Summer of Code. Some of them observed the same thing: they sent me an email and didn't receive my response until they were looking in their spam folder a few days later. Last year, a GSoC mentor I know lost track of a student for over a week because of Google silently discarding chat messages, so it appears Google has not just shot themselves in the foot, they managed to shoot their foot twice.

What is remarkable is that in both cases, the email problems and the XMPP problems, Google doesn't send any error back to the sender so that they know their message didn't get through. Instead, it is silently discarded or left in a spam folder. This is the most corrosive form of communication problem as more time can pass before anybody realizes that something went wrong. After it happens a few times, people lose a lot of confidence in the technology itself and try other means of communication which may be more expensive, more synchronous and time intensive or less private.

When I discussed these issues with friends, some people replied by telling me I should send them things through Facebook or WhatsApp, but each of those services has a higher privacy cost and there are also many other people who don't use either of those services. This tends to fragment communications even more as people who use Facebook end up communicating with other people who use Facebook and excluding all the people who don't have time for Facebook. On top of that, it creates more tedious effort going to three or four different places to check for messages.

Despite all of this, the suggestion that Google's only response is to build a service to "help qualified high-volume senders" get their messages through leaves me feeling that things will get worse before they start to get better. There is no mention in the Google announcement about what they will offer to help the average person eliminate these problems, other than to stop using Gmail or spend unpaid time meticulously training the Google spam filter and hoping everybody else does the same thing.

Some more observations on the issue

Many spam filtering programs used in corporate networks, such as SpamAssassin, add headers to each email to suggest why it was classified as spam. Google's systems don't appear to give any such feedback to their users or message senders though, just a very basic set of recommendations for running a mail server.

Many chat protocols work with an explicit opt-in. Before you can exchange messages with somebody, you must add each other to your buddy lists. Once you do this, virtually all messages get through without filtering. Could this concept be adapted to email, maybe giving users a summary of messages from people they don't have in their contact list and asking them to explicitly accept or reject each contact?

If a message spends more than a week in the spam folder and Google detects that the user isn't ever looking in the spam folder, should Google send a bounce message back to the sender to indicate that Google refused to deliver it to the inbox?

I've personally heard that misclassification occurs with mailing list posts as well as private messages.

Daniel PocockRecording live events like a pro (part 1: audio)

Whether it is a technical talk at a conference, a political rally or a budget-conscious wedding, many people now have most of the technology they need to record it and post-process the recording themselves.

For most events, audio is an essential part of the recording. There are exceptions: if you take many short clips from a wedding and mix them together you could leave out the audio and just dub the couple's favourite song over it all. For a video of a conference presentation, though, the speaker's voice is essential.

These days, it is relatively easy to get extremely high quality audio using a lapel microphone attached to a smartphone. Let's have a closer look at the details.

Using a lavalier / lapel microphone

Full wireless microphone kits with microphone, transmitter and receiver are usually $US500 or more.

The lavalier / lapel microphone by itself, however, is relatively cheap, under $US100.

The lapel microphone is usually an omnidirectional microphone that will pick up the voices of everybody within a couple of meters of the person wearing it. It is useful for a speaker at an event, some types of interviews where the participants are at a table together and it may be suitable for a wedding, although you may want to remember to remove it from clothing during the photos.

There are two key features you need when using such a microphone with a smartphone:

  • TRRS connector (this is the type of socket most phones and many laptops have today)
  • Microphone impedance should be at least 1kΩ (that is, one kilo-ohm) or the phone may not detect that it is connected

Many leading microphone vendors have released lapel mics with these two features aimed specifically at smartphone users. I have personally been testing the Rode smartLav+.

Choice of phone

There are almost 10,000 varieties of smartphone just running Android, as well as iPhones, Blackberries and others. It is not practical for most people to test them all and compare audio recording quality.

It is probably best to test the phone you have and ask some friends if you can make test recordings with their phones too for comparison. You may not hear any difference but if one of the phones has a poor recording quality you will hopefully notice that and exclude it from further consideration.

A particularly important issue is being able to disable AGC (automatic gain control) in the phone. Android has a standard API for disabling AGC, but not all phones or Android variants respect this instruction.

I have personally had positive experiences recording audio with a Samsung Galaxy Note III.

Choice of recording app

Most Android distributions have at least one pre-installed sound recording app. Look more closely and you will find not all apps are the same. For example, some of the apps have aggressive compression settings that compromise recording quality. Others don't work when you turn off the screen of your phone and put it in your pocket. I've even tried a few that were crashing intermittently.

The app I found most successful so far has been Diktofon, which is available on both F-Droid and Google Play. Diktofon has been designed not just for recording, but it also has some specific features for transcribing audio (currently only supporting Estonian) and organizing and indexing the text. I haven't used those features myself but they don't appear to cause any inconvenience for people who simply want to use it as a stable recording app.

As the app is completely free software, you can modify the source code if necessary. I recently contributed patches enabling 48kHz recording and disabling AGC. At the moment, the version with these fixes has just been released and appears in F-Droid but not yet uploaded to Google Play. The fixes are in version 0.9.83 and you need to go into the settings to make sure AGC is disabled and set the 48kHz sample rate.

Whatever app you choose, the following settings are recommended:

  • 16 bit or greater sample size
  • 48kHz sample rate
  • Disable AGC
  • WAV file format

Whatever app you choose, test it thoroughly with your phone and microphone. Make sure it works even when you turn off the screen and put it in your pocket while wearing the lapel mic for an hour. Observe the battery usage.

Gotchas

Now let's say you are recording a wedding and the groom has that smartphone in his pocket and the mic on his collar somewhere. What is the probability that some telemarketer calls just as the couple are exchanging vows? What is the impact on the recording?

Maybe some apps will automatically put the phone in silent mode when recording. More likely, you need to remember this yourself. These are things that are well worth testing though.

Also keep in mind the need to have sufficient storage space and to check whether the app you use is writing to your SD card or internal memory. The battery is another consideration.

In a large event where smartphones are being used instead of wireless microphones, possibly for many talks in parallel, install a monitoring app like Ganglia on the phones to detect and alert if any phone has weak wifi signal, low battery or a lack of memory.

Live broadcasts and streaming

Some time ago I tested RTP multicasting from Lumicall on Android. This type of app would enable a complete wireless microphone setup with live streaming to the internet at a fraction of the cost of a traditional wireless microphone kit. This type of live broadcast could also be done with WebRTC on the Firefox app.

Conclusion

If you research the topic thoroughly and spend some time practicing and testing your equipment, you can make great audio recordings with a smartphone and an inexpensive lapel microphone.

In subsequent blogs, I'll look at tips for recording video and doing post-production with free software.

Nicholas Nethercote“Thank you” is a wonderful phrase

Last year I contributed a number of patches to pdf.js. Most of my patches were reviewed and merged to the codebase by Yury Delendik. Every single time he merged one of my patches, Yury wrote “Thank you for the patch”. It was never “thanks” or “thank you!” or “thank you :)”. Nor was it “awesome” or “great work” or “+1″ or “\o/” or “\m/”. Just “Thank you for the patch”.

Oddly enough, this unadorned use of one of the most simple and important phrases in the English language struck me as quaint, slightly formal, and perhaps a little old-fashioned. Not a bad thing by any means, but… notable. And then, as Yury merged more of my patches, I started getting used to it. Then I started enjoying it. Each time he wrote it — I’m pretty sure he wrote it every time — it made me smile. I felt a small warm glow inside. All because of a single, simple, specific phrase.

So I started using it myself. (“Thank you for the patch.”) I tried to use it more often, in situations I previously wouldn’t have. (“Thank you for the fast review”.) I mostly kept to this simple form and eschewed variations. (“Thank you for the additional information.”) I even started using it each time somebody answered one of my questions on IRC. (“glandium: thank you”)

I’m still doing this. I don’t always use this exact form, and I don’t always remember to thank people who have helped me. But I do think it has made my gratitude to those around me more obvious, more consistent, and more sincere. It feels good.

Morgan Phillipsgit push origin taskcluster

If you've been around Mozilla during the past two years, you've probably heard talk about TaskCluster - the fancy task execution engine that a few awesome folks built for B2G - which will be used to power our entire CI/Build infrastructure soonish.

Up until now, there have been a few ways for developers to schedule custom CI jobs in TaskCluster, but nothing completely general. However, today I'd like to give a sneak peek at a project I've been working on to make the process of running jobs on our infra extremely simple: TaskCluster GitHub.

Why Should You Care?

1.) The service watches an entire organization at once: if your project lives in github/mozilla, drop a .taskclusterrc file in the base of your repository and the jobs will just start running after each pull request - dead simple.

2.) TaskCluster will give you more control over the platform/environment: you can choose your own docker container by default, but thanks to the generic worker we'll also be able to run your jobs on Windows XP-10 and OSX.

3.) Expect integration with other Mozilla services: For a mozilla developer, using this service over travis or circle should make sense, since it will continue to evolve and integrate with our infrastructure over time.

It's Not Ready Yet: Why Did You Post This? :(

Because today the prototype is working, and I'm very excited! I also feel that there's no harm in spreading the word about what's coming.

When this goes into production I'll do something more significant than a blog post, to let developers know they can start using the system. In the meantime, here it is handling a replay of this pull request. \o/ Note: the finished version will do nice things, like automatically leaving a comment with a link to the job and its status.

Mic BermanGetting to know your new direct reports

A few of my coaching clients have recently had to take on new teams or hire new people and grow their teams rapidly.

Here are some great questions to get to know your new folks in your 1:1's. Remember to answer the questions yourselves as well - this relationship is about reciprocity - Enjoy :)!

  • What will make this relationship rewarding for you?
  • What approaches encourage or motivate you? and discourage or de-motivate you?
  • How will you know you are receiving value from this relationship?
  • If you trusted me enough to say how to manage you most effectively, what tips would you give?
  • What else would you like me to know about you?
  • What do you do when you're really against an obstacle or barrier?
  • What makes you angry, frustrated or upset?
  • What's easy to appreciate? what matters in the more difficult times?
  • What works for you when you are successful at making changes?
  • Where do you usually get stuck?
  • How do you deal with disappointment or failure?
  • What is the compliment or acknowledgement you hear most often about yourself?
  • What words describe you at your best?
  • What words describe you when you are at less than your best?

PS Don't use them all at once ;) take your time getting to know each other and working together

Mozilla Addons BlogAdd-ons Update – Week of 2015/07/22

I post these updates every 3 weeks to inform add-on developers about the status of the review queues, add-on compatibility, and other happenings in the add-ons world.

Add-ons Forum

As we announced before, there’s a new add-ons community forum for all topics related to AMO or add-ons in general. The Add-ons category is one of the most active in the community forum, so thank you all for your contributions! The old forum is still available in read-only mode.

The Review Queues

  • Most nominations for full review are taking less than 10 weeks to review.
  • 308 nominations in the queue awaiting review.
  • Most updates are being reviewed within 10 weeks.
  • 127 updates in the queue awaiting review.
  • Most preliminary reviews are being reviewed within 12 weeks.
  • 334 preliminary review submissions in the queue awaiting review.

The unlisted queues aren’t mentioned here, but they are empty for the most part. We’re in the process of getting more help to reduce queue length and waiting times for the listed queues.

If you’re an add-on developer and would like to see add-ons reviewed faster, please consider joining us. Add-on reviewers get invited to Mozilla events and earn cool gear with their work. Visit our wiki page for more information.

Firefox 40 Compatibility

The Firefox 40 compatibility blog post is up. The automatic compatibility validation will be run soon.

As always, we recommend that you test your add-ons on Beta and Firefox Developer Edition (formerly known as Aurora) to make sure that they continue to work correctly. End users can install the Add-on Compatibility Reporter to identify and report any add-ons that aren’t working anymore.

Extension Signing

We announced that we will require extensions to be signed in order for them to continue to work in release and beta versions of Firefox. The wiki page on Extension Signing has information about the timeline, as well as responses to some frequently asked questions.

A recent update is that Firefox for Android will implement signing at the same time as Firefox for Desktop. This mostly means that we will run the automatic signing process for add-ons that support Firefox for Android on AMO, so they are all ready before it hits release.

Electrolysis

Electrolysis, also known as e10s, is the next major compatibility change coming to Firefox. In a nutshell, Firefox will now run in multiple processes, with content code running in a different process than browser code. This should improve responsiveness and overall stability, but it also means many add-ons will need to be updated to support this.

We will be talking more about these changes in this blog in the future. For now we recommend you start looking at the available documentation.

Air MozillaQuality Team (QA) Public Meeting

Quality Team (QA) Public Meeting This is the meeting where all the Mozilla quality teams meet, swap ideas, exchange notes on what is upcoming, and strategize around community building and...

Air MozillaProduct Coordination Meeting

Product Coordination Meeting Duration: 10 minutes This is a weekly status meeting, every Wednesday, that helps coordinate the shipping of our products (across 4 release channels) in order...

Christian HeilmannI don’t want Q&A in conference videos

I present at conferences – a lot. I also moderate conferences and I brought the concept of interviews instead of Q&A to a few of them (originally this concept has to be attributed to Alan White for Highland Fling, just to set the record straight). Many conferences do this now, with high-class ones like SmashingConf and Fronteers being the torch-bearers. Other great conferences, like EdgeConf, are 100% Q&A, and that’s great, too.

confused Q&A speaker

I also watch a lot of conference talks – to learn things, to see who is a great presenter (and whom I can recommend to conference organisers who ask me for talent), and to see what others are doing to excite audiences. I do that live, but I’m also a great fan of talk recordings.

I want to thank all conference organisers who go the extra mile to offer recordings of the talks at their event – you already rock, thanks!

I put those on my iPod and watch them in the gym, whilst I am on the cross trainer. This is a great time to concentrate, and to get fit whilst learning things. It is a win-win.

Much like everyone else, I pick talks by topic, but also by length. Half an hour to 40 minutes is what I like best. I also tend to watch 2-3 15 minute talks in a row at times. I am quite sure I am not alone in this. Many people watch talks when they commute on trains or in similar “drive by educational” ways. That’s why I’d love conference organisers to consider this use case more.

I know I’m spoilt, and that it takes a lot of time, effort and money to record, edit and release conference videos that you make no money from. But before shooting me down and telling me I have no right to demand this if I don’t organise events myself, let me tell you that I am pretty sure you can stand out if you do just a bit of extra work on your recordings:

  • Make them available offline (for YouTube videos I use YouTube DL on the command line to do that anyway; see the sketch after this list). Vimeo has an option for that, and Channel 9 did that for years, too.
  • Edit out the Q&A – there is nothing more annoying than seeing a confused presenter on a small screen trying to understand a question from the audience for a minute and then saying “yes”. Most of the time Q&A is not 100% related to the topic of the talk, and wanders astray or becomes dependent on knowledge of the other talks at that conference. This is great for the live audience, but for the after-the-fact consumer it becomes very confusing and pretty much a waste of time.
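For the offline case in the first point, the command-line step is a one-liner (a sketch only; the URL is a placeholder for whichever talk you want to take with you):

    # Download a talk recording for offline viewing on a phone or media player
    youtube-dl "$TALK_URL"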

That way you end up with much shorter videos that are much more relevant. I am pretty sure your viewing/download numbers will go up the less cruft you have.

It also means better Q&A for your event:

  • presenters at your event know they can deliver a great, timed talk and go wild in the Q&A answering questions they may not want recorded.
  • people at the event can ask questions they may not want recorded (technically you’d have to ask them if it is OK)
  • the interviewer or people at the event can reference other things that happened at the event without confusing the video audience. This makes it a more lively Q&A and part of the whole conference experience
  • there is less of a rush to get the mic to the person asking and there is more time to ask for more details, should there be some misunderstanding
  • presenters are less worried about being misquoted months later when the video is still on the web but the context is missing

For presenters, there are a few things to consider when presenting for the audience and for the video recording, but that’s another post. So, please, consider a separation of talk and Q&A – I’d be happier and promote the hell out of your videos.

Jared WeinPolishing Firefox for Windows 10

Over the years, the Firefox front-end team has made numerous polishes and updates to the Firefox theme.

Firefox 4 was a huge update to the user interface of Firefox compared to Firefox 3.6. A couple of years down the line we released Australis in Firefox 29. Next month we’ll be releasing Firefox 40, and with it will come what I believe to be the most polished UI we’ve shipped to date.

The Australis release was great in so many ways. It may have taken a bit longer than many had expected to get released, but much of that was due to the complexities of the Firefox theme code. During the Australis release we removed some lesser used features which consistently were a pain in the rear when making theme changes (small icons and icons+text mode, I’m looking at both of you 👀).

Now with Firefox 40, we’re able to show the benefits of our refined theme code and our ability to ship updated and modern themes much faster than ever before.

With Firefox 40 on Windows 10, which you can download today using the Firefox Beta channel, we’ve matched the tabstrip and toolbar to the native Windows 10 theme. This includes refinements to our standard icon set, as well as much improved HiDPI (>1dppx) support. All of our first-tier icons now have 2× variants that are shipped with the browser, and the remaining icons buried in the depths of the browser should be fixed soon as well.

My favorite feature of Firefox on Windows 10 is the increased URL and search bar height (pictured above). This is actually something that we’ve wanted to do for a couple years. Firefox has continually had a pretty small font-size for the URL bar, one which makes it hard for users with poor vision to read. The text in our URL and search bars is now on-par with competing browsers. It may not seem like a big deal to most, but it is the small cuts like these that can either add up to cause death by a thousand wounds or be part of the stitches in a bullet-proof product.

Cheers 😊


Tagged: firefox, mozilla, planet-mozilla, ux

Mark SurmanBuilding a big tent (for web literacy)

Building a global network of partners will be key to the success of our Mozilla Learning initiative. A network like this will give us the energy, reach and diversity we need to truly scale our web literacy agenda. And, more important, it will demonstrate the kind of distributed leadership and creativity at the heart of Mozilla’s vision of the web.

networks

As I said in my last two posts, leadership development and advocacy will be the two core strategies we employ to promote universal web literacy. Presumably, Mozilla could do these things on its own. However, a distributed, networked approach to these strategies is more likely to scale and succeed.

Luckily, partners and networks are already central to many of our programs. What we need to do at this stage of the Mozilla Learning strategy process is determine how to leverage and refine the best aspects of these networks into something that can be bigger and higher impact over time. This post is meant to frame the discussion on this topic.

The basics

As a part of the Mozilla Learning strategy process, we’ve looked at how we’re currently working with partners and using networks. There are three key things we’ve noticed:

  1. Partners and networks are a part of almost all of our current programs. We’ve designed networks into our work from early on.
  2. Partners fuel our work: they produce learning content; they host fellows; they run campaigns with us. In a very real way, partners are huge contributors (a la open source) to our work.
  3. Many of our partners specialize in learning and advocacy ‘on the ground’. We shouldn’t compete with them in this space — we should support them.

With these things in mind, we’ve agreed we need to hold all of our program designs up to this principle:

Design principle = build partners and networks into everything.

We are committed to integrating partners and networks into all Mozilla Learning leadership and advocacy programs. By design, we will both draw from these networks and provide value back to our partners. This last point is especially important: partnerships need to provide value to everyone involved. As we go into the next phase of the strategy process, we’re going to engage in a set of deep conversations with our partners to ensure the programs we’re building provide real value and support to their work.

Minimum viable partnership

Over the past few years, a variety of network and partner models have developed through Mozilla’s learning and leadership work. Hives are closely knit city-wide networks of educators and orgs. Maker Party is a loose network of people and orgs around the globe working on a common campaign. Open News and Mozilla Science sit within communities of practice with a shared ethos. Mozilla Clubs are much more like a global network of local chapters. And so on.

As we develop our Mozilla Learning strategy, we need to find a way to both: a) build on the strengths of these networks; and b) develop a common architecture that makes it possible for the overall network to grow and scale.

Striking this balance starts with a simple set of categories for Mozilla Learning partners and networks. For example:

  • Member: any org participating in Mozilla Learning.
  • Partner: any org contributing to Mozilla Learning.
  • Club: a locally-run node in the Mozilla Learning network.
  • Affiliate network: group of orgs aligned with Moz Learning.
  • Core network: group of orgs coordinated by Mozilla staff.

This may not be the exact way to think about it, but it is certain that we will need some sort of common network architecture if we want to build partners and networks into everything. Working through this model will be an important part of the next phase of Mozilla Learning strategy work.

Partners = open source

In theory, one of the benefits of networks is that the people and organizations inside them can build things together in an open source-y way. For example, one set of partners could build a piece of software that they need for an immediate project. Another partner might hear about this software through the network, improve it for their own project and then give it back. The fact that the network has a common purpose means it’s more likely that this kind of open source creativity and value creation takes place.

This theory is already a reality in projects like Open News and Hive. In the news example, fellows and other members of the community post their code and documentation on the Source web page. This attracts the attention of other news developers who can leverage their work. Similarly, curriculum and practices developed by Hive members are shared on local Hive websites for others to pick up and run with. In both cases, the networks include a strong social component: you are likely to already know, or can quickly meet, the person who created a thing you’re interested in. This means it’s easy to get help or start a collaboration around a tool or idea that someone else has created.

One question that we have for Mozilla Learning overall is: can we better leverage this open source production aspect of networks in a more serious, instrumental and high impact way as we move forward? For example, could we: a) work on leadership development with partners in the internet advocacy space; b) have the fellows / leaders involved produce high quality curriculum or media; and c) use these outputs to fuel high impact global campaigns? Presumably, the answer can be ‘yes’. But we would first need to design a much more robust system of identifying priorities, providing feedback and deploying results across the network.

Questions

Whatever the specifics of our Mozilla Learning programs, it is clear that building in partnerships and networks will be a core design principle. At the very least, such networks provide us diversity, scale and a ground game. They may also be able to provide a genuine ‘open source’ style production engine for things like curriculum and campaign materials.

In order to design the partnership elements of Mozilla Learning, there are a number of questions we’ll need to dig into:

  • Who are current and desired partners? (make a map)
  • What value do they seek from us? What do they offer?
  • Specifically, do they see value in our leadership and advocacy programs?
  • What do partners want to contribute? What do they want in return?
  • What is the right network / partner architecture?

A key piece of work over the coming months will be to talk to partners about all of this. I will play a central role here, convening a set of high level discussions. People leading the different working groups will also: a) open up the overall Mozilla Learning process to partners and b) integrate partner input into their plans. And, hopefully, Laura de Reynal and others will be able to design a user research process that lets us get info from our partners in a detailed and meaningful way. More on all this in coming weeks as we develop next steps for the Mozilla Learning process.


Filed under: mozilla

Christian HeilmannNew chapter in the Developer Evangelism handbook: keeping time in presentations

Having analysed a lot of conference talks lately, I found a few things that don’t work when it comes to keeping to the time you have as a speaker. I then analysed what the issues were and what you can do to avoid them, and put together a new chapter for the Developer Evangelism Handbook called “Keeping time in presentations“.

Rabbit with watch
White Rabbit by Claire Stevenson

In this pretty extensive chapter, I cover a few topics:

All this information is applicable to conference talks. As this is a handbook, all of it is YMMV, too. But following these guidelines, I always managed to keep on time and feel OK watching some of my old videos without thinking I should have done a less rushed job.

Mozilla Reps CommunityCouncil at Whistler Work Week

From the 23rd to the 26th of June, the Reps Council attended the Mozilla Work Week in Whistler, British Columbia, Canada, to discuss future plans with the rest of the Participation team. Unfortunately, Bob Reyes couldn’t attend due to delays in the visa process.

Human centered design workshop

At the beginning of the week, Emma introduced us to “Human centered design”, where the overall goal is to find solutions to a problem while keeping the individual in focus. We split up into groups of two to do the exercises and tried to come up with individual solutions for the problem statement “How might we improve the gift giving experience?”.

At first we interviewed our partner to get to the root of the problem they currently have with gift giving. This might either be that they don’t have ideas on what to give to a person, or maybe the person doesn’t want to get a gift, or something entirely different. All in all, we came up with a lot of different root causes to analyze.

After talking intensively to the partner, we came up with individual solution proposals which we then discussed with the partner and improved based on their feedback.

We think this was a valuable workshop for the following sessions with the functional area teams.

Sessions with other functional areas

With this knowledge, Council members, as part of the Participation team, attended more than 27 meetings with other functional teams from all across Mozilla. The goal was to invite these teams to a session where we analyzed the issues they have with Participation.

During these meetings we provided feedback on the teams’ plans for bringing more participation and enabling more community members to get involved in their projects. We also received a lot of insight into what functional teams think about the community and how valuable the work of volunteers is to them. In every session the goal was to come up with a solid problem statement and then find possible solutions for it. Since the Council members are volunteers themselves, each of us could give valuable input and ideas.

Some problem statements we tackled during the week (of course this is just a selection):

  • How might we increase the Code contribution retention so people come back to us after doing a code contribution?
  • How might community approach organizations who already touch the “socially involved” audience to infiltrate their systems so Shape of the Web can touch people so that it can have a lasting impact?
  • How might we improve the SUMO retention ratio from 10% to 30% in the next 6 months?
  • How might we deliver operational capability to run suggested titles in German speaking countries by the end of the year?

Our plan (along with the Participation team) is to continue working with most of these teams and the community in order to accomplish our common goal of bringing more participation into these functional areas.

Council meetings & Leadership workshop

During the week we also held Council meetings where we prioritised tasks and worked on the most important ones: mentor selection criteria 2.0, new budget SOPs, Reps recognition, and Reps selection criteria for important events. After completing these important tasks we would like to focus on Reps as a leadership platform and set our goals for at least the next year.

In that direction, Rosana Ardila ran a highly interesting workshop on Friday around volunteer leadership and the current organisation model within Mozilla. In three hours we tried to think outside the box and come up with solutions to make volunteer leadership more effective. We haven’t started any plans to incorporate this with the community, but in the coming weeks we will look at it and figure out which parts might work for the community as well.

Radical Participation session

On Wednesday evening, a lot of volunteers and staff came together for the “Radical Participation” session. Mozilla invited several external experts on Participation to give lightning talks to inspire us.

After these lightning talks we evaluated what resonated with us the most, for our own jobs and for Mozilla in general. It was good to get an outside view and tips so we can move forward with our plans as well prepared as possible.

At the end, Mark and Mitchell gave us an update on what they think about Radical Participation. We think we’re on the right track planning for impact, but there is still a lot to do!

Participation Planning

This was one of the most interesting sessions we had, since we used a new and innovative post-it notes (more post-it notes!) method for framing the current status of Participation in Mozilla. Identifying in detail the current status and the structure of Participation was the most important step towards having more impact within the project.

By re-ordering and evaluating the notes we managed to make statements on what we could change and come up with a plan for the following months, looking at an 18-month timeline. George Roter is currently in charge of finishing the document that describes the 18-month plan in more detail. Stay tuned for this!

Unofficial meetings with other teams

During the social parts of the Work Week (arrival apero and dinners), we all had a chance to talk to other teams informally and discuss our pain points for projects we’re working on outside of Council. We had a lot of talks with different teams outside our Participation sessions, therefore we could move forward with several other, non-Reps related, projects as well. To give an example: Michael met Patrick Finch on Monday of the work week to discuss the Community Tile for the German-speaking community. Due to several talks during the week, the Community Tile went online around a week after the work week.

Conclusion

All in all, it was crucial to sit together and work on future plans. We got a good understanding of what the Participation team has been working on, shared what we have been working on as a Council, and set the next steps for future work. After a hard-working Work Week we are all ready to tackle the next steps for Participation. Let’s help Mozilla move forward with Participation! You can find other blog posts from the Participation Team below.

More blog posts about the Work Week from the Participation Team

Participation at Whistler
Storify – Recap Participation at Whistler

Armen ZambranoFew mozci releases + reduced memory usage

While I was away, adusca published a few releases of mozci.

From the latest release I want to highlight that we're replacing the standard json library with ijson, since it solves some memory leak issues we were facing with pulse_actions (bug 1186232).

This was important to fix since our Heroku instance for pulse_actions has an upper limit of 1GB of RAM.

Here are the release notes and the highlights of them:

  • 0.9.0 - Re-factored and cleaned-up part of the modules to help external consumers
  • 0.10.0:
    • --existing-only flag prevents triggering builds that are needed to trigger test jobs
    • Support for pulse_actions functionality
  • 0.10.1 - Fixed KeyError when querying for the request_id
  • 0.11.0 - Added support for using ijson to load information, which decreases our memory usage



Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Botond BalloIt’s official: the Concepts TS has been voted for publication!

I mentioned in an earlier post that the C++ Concepts Technical Specification will come up for its final publication vote during a committee-wide teleconference on July 20.

That teleconference has taken place, and the outcome was to unanimously vote the TS for publication!

With this vote having passed, the final draft of the TS will be sent to the ISO head offices, which will complete the publication process within a couple of months.

With the TS published, the committee will be on the lookout for feedback from implementers and users, to see how the proposed design weathers real-world codebases and compiler architectures. This will allow the committee to determine whether any design changes need to be made before merging the contents of the TS into the C++ International Standard, thus making the feature all official (and hard to change the design of).

GCC has a substantially complete implementation of the Concepts TS in a branch; if you’re interested in concepts, I encourage you to try it out!


Air MozillaBugzilla Development Meeting

Bugzilla Development Meeting Help define, plan, design, and implement Bugzilla's future!

The Servo BlogServo developer tools overview

Servo is a new web browser engine. It is one of the largest Rust-based projects, but the total Rust code is still dwarfed by the size of the code provided in native C and C++ libraries. This post is an overview of how we have structured our development environment in order to integrate the Cargo build system, with its “many small and distributed dependencies” model, with our need to provide many additional features not often found in smaller Rust-only projects.

Mach

Mach is a Python driver program that provides a frontend to Servo’s development environment, one that both reduces the number of steps required and integrates our various tools into a single harness. Similar to its purpose in the Firefox build, we use it to centralize and simplify the commands that a developer has to run.

mach bootstrap

The steps that mach will handle before issuing a normal cargo build command are:

  • Downloading the correct versions of the cargo and rustc tools. Servo uses many unstable features in Rust, most problematically those that change pretty frequently. We also test the edges of feature compatibility and so are the first ones to notice many changes that did not at first seem as if they would break anyone. Further, we build a custom version of the tools that additionally supports cross-compilation targeting Android (and ARM in the near future). A random local install of the Rust toolchain is pretty unlikely to work with Servo.

  • Updating git submodules. Some of Servo’s dependencies cannot be downloaded as Cargo dependencies because they need to be directly referenced in the build process, and Cargo adds a hash that makes it difficult to locate those files. For such code, we use git submodules instead. (A command-level sketch of these pre-build steps follows this list.)
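In shell terms, the pre-build phase amounts to something like the following (a sketch only, assuming the usual in-tree mach wrapper; the exact sub-command names may differ between Servo revisions):

    # Fetch the pinned rustc/cargo snapshots that Servo is known to build with
    ./mach bootstrap
    # Sync the directly-referenced native dependencies kept as git submodules
    git submodule update --init --recursive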

mach build & run

The build itself also verifies that the user has explicitly requested either a dev or release build — the Servo dev build is debuggable but quite slow, and it’s not clear which build should be the default.

Additionally, there’s the question of which cargo build to run. Servo has three different “toplevel” Cargo.toml files.

  • components/servo/Cargo.toml is used to build an executable binary named servo and is used on Linux and OSX. There are also horrible linker hacks in place that will cause an Android-targeted build to instead produce a file named servo that is actually an APK file that can be loaded onto Android devices.

  • ports/gonk/Cargo.toml produces a binary that can run on the Firefox OS Boot2Gecko mobile platform.

  • ports/cef/Cargo.toml produces a shared library that can be loaded within the Chromium Embedding Framework to provide a hostable web rendering engine.

The presence of these three different toplevel binaries and the curious directory structure means that mach also provides a run command that will execute the correct binary with any provided arguments.
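Putting the above together, a typical build-and-run cycle looks roughly like this (the --dev/--release spellings and the URL are illustrative assumptions):

    # Explicitly pick a profile; mach will not silently choose one for you
    ./mach build --dev        # debuggable, but quite slow
    ./mach build --release    # optimised
    # Run whichever toplevel binary matches the current target, passing it a page to load
    ./mach run http://example.com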

mach test

Servo has several testing tools that can be executed via mach.

  • mach tidy will verify that there are no trivial syntactic errors in source files. It checks for valid license headers in each file, no tab characters, no trailing whitespaces, etc.

  • mach test-ref will run the Servo-specific reference tests. These tests render a pair of web pages that implement the same final layout using different CSS features, and compare the resulting images. If the images are not pixel-identical, the test fails.

  • mach test-wpt runs the cross-browser W3C Web Platform Tests, which primarily test DOM features.

  • mach test-css runs the cross-browser CSS WG reference tests, which are a version of the reference tests that are intended to work across many browsers.

  • mach test-unit runs the Rust unit tests embedded in Servo crates. We do not have many of these, except for basic tests of per-crate functionality, as we rely on the WPT and CSS tests for most of our coverage. Philosophically, we prefer to write and upstream a cross-browser test where one does not exist instead of writing a Servo-specific test.
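
Typical invocations look like the following sketch; the path filter on the test-wpt line is hypothetical, only meant to show that a subset of tests can be selected:

./mach tidy
./mach test-ref
./mach test-wpt path/to/a/single/test.html
./mach test-css
./mach test-unit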

cargo

While the code that we have written for Servo is primarily in Rust, we estimate that at least 2/3 of the code that will run inside of Servo will be written in C/C++, even when we ship. From the SpiderMonkey JavaScript engine to the Skia and Azure/Moz2D graphics pipeline to WebRTC, media extensions, and proprietary video codecs, there is a huge portion of the browser that is integrated and wrapped into Servo, rather than rewritten. For each of these projects, we have a crate with a build.rs file that performs the custom build steps to produce a static library and then produces a Rust rlib to link into Servo.
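
As a sketch of that pattern (not Servo's actual code: the library name, make-based build, and paths here are all hypothetical), a build.rs for such a wrapper crate looks roughly like this:

// build.rs -- minimal sketch of a wrapper crate's build script
use std::env;
use std::process::Command;

fn main() {
    // Cargo sets CARGO_MANIFEST_DIR to the directory containing this crate's Cargo.toml.
    let crate_dir = env::var("CARGO_MANIFEST_DIR").unwrap();
    let native_dir = format!("{}/native/example", crate_dir);

    // Drive the native project's own build system to produce libexample.a.
    let status = Command::new("make")
        .arg("-C")
        .arg(&native_dir)
        .status()
        .expect("failed to run the native build");
    assert!(status.success(), "native build failed");

    // Tell Cargo where the static library lives and to link it into this crate.
    println!("cargo:rustc-link-search=native={}", native_dir);
    println!("cargo:rustc-link-lib=static=example");
}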

The rest of Servo is a significant amount of code (~150k lines of Rust; ~250k if you include autogenerated DOM bindings), but follows the standard conventions of Cargo and Rust as far as producing crates. For the many crates within the Servo repo, we simply have a Cargo.toml file next to a lib.rs that defines the module structure. When we break them out into a separate GitHub repository, though, we follow the convention of a toplevel Cargo.toml file with a src directory that holds all of the Rust code.
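
Schematically, with placeholder crate names, the two layouts look like this:

Inside the Servo repo:
    components/some_crate/Cargo.toml
    components/some_crate/lib.rs

Broken out into its own repository:
    some_crate/Cargo.toml
    some_crate/src/
        lib.rs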

Servo's dependency graph

Updating dependencies

Since there are three toplevel Cargo.toml files, there are correspondingly three Cargo.lock files. This configuration makes the already challenging job of updating dependencies even harder. We have added a command, mach update-cargo -p {package} --precise {version}, to handle updates across all three of the lockfiles (an example follows the list below). While running this command without any arguments does attempt to upgrade all dependencies to the highest SemVer-compatible versions, in practice that operation is unlikely to work, due to a mixture of:

  • git-only dependencies, which do not have a version number

  • Dependencies with different version constraints on a common dependency, resulting in two copies of a library and conflicting types

  • Hidden Rust compiler version dependencies
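
For example, bumping a single dependency across all three lockfiles at once looks like this; the crate name and version are placeholders:

./mach update-cargo -p some-crate --precise 1.2.3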

Things we’d like to fix in the future

It would be great if there were a single Cargo.toml file at the toplevel of the Servo repo. The current layout is confusing to people familiar with Rust projects, who go looking for a Cargo.toml file and can’t find one.

Cross-compilation to Android with linker hacks feels a bit awkward. We’d like to clean that up, remove the submodule that performs the linker hackery, and give our cross-targeted builds a cleaner, more consistent feel.

Managing the dependencies, particularly when there is a cross-repo update like a Rust upgrade, is a real pain and requires network access in order to clone the dependency that you would like to edit. The proposed cargo clone command would be a huge help here.

Nick FitzgeraldProposal For Encoding Source-Level Environment Information Within Source Maps

A month ago, I wrote about how source maps are an insufficient debugging format for the web. I tried to avoid too much gloom and doom, and focus on the positive aspect. We can extend the source map format with the environment information needed to provide a rich, source-level debugging experience for languages targeting JavaScript. We can do this while maintaining backwards compatibility.

Today, I'm happy to share that I have a draft proposal and a reference implementation for encoding source-level environment information within source maps. It's backwards compatible, compact, and future extensible. It enables JavaScript debuggers to rematerialize source-level scopes and bindings, and locate any given binding's value, even if that binding does not exist in the compiled JavaScript.

I look forward to the future of debugging languages that target JavaScript.

Interested in getting involved? Join the discussion.

Jordan LundMozharness now lives in Gecko

What's changed?

Continuous-integration and release jobs that use Mozharness will now get Mozharness from the Gecko repo that the job is running against.

How?

Whether the job is a build (requires a full gecko checkout) or a test (only requires a Firefox/Fennec/Thunderbird/B2G binary), automation will first grab a copy of Mozharness from the gecko tree, even before checking out the rest of the tree. This effectively minimizes changes to our current infra.

This is thanks to a new relengapi endpoint, Archiver, and hg.mozilla.org's sub-directory archiving abilities. Essentially, Archiver will get a tarball of Mozharness from a target gecko repo, rev, and sub-directory, and upload it to Amazon's S3.

What's nice about Archiver is that it is not restricted to just grabbing Mozharness. You could, for example, put https://hg.mozilla.org/build-tools in the Gecko tree or, improving on our tests.zip model, simply grab subdirectories from within the testing/* part of the tree and request them on a suite-by-suite basis.

What does this mean for you?

It depends. If you are...

1) developing on Mozharness

You will need to check out gecko, and patches will now land like any other gecko patch: 1) land on a development tree-branch (e.g. mozilla-inbound), then 2) ride the trains. This now means:

  • we can have tree specific configs and even scripts
  • the Mozharness pinning abstraction layer is no longer needed
  • we have more deterministic builds and tests as the jobs are tied to a specific Gecko repo + rev
  • there is more transparency on how automation runs continuous integration jobs
  • there should be considerably less strain on hg.mozilla.org, as we no longer rm + clone mozharness for every job
  • development tree-branches (inbound, fx-team, etc.) will behave like the Mozharness default branch, while mozilla-central will act as the production branch.

This also means:

  • Mozharness patches that require tree wide changes will need to be uplifted across trees
  • development on Mozharness will require a Gecko tree checkout
  • Github + Travis tests will not be run on each Mozharness change (mh tests will have to be run locally for now)
  • Mozharness changes will not be documented or visible (outside of diffing gecko revs)

2) just needing to deploy Mozharness or get a copy of it without gecko

As described in the Archiver usage docs linked above, you could hit the API directly, but I recommend using the client that buildbot uses. The client will wait until the API call is complete, download the archive from the response's location, and unpack it to a specified destination.

Let's take a look at that in action: say you want to download and unpack a copy of mozharness based on mozilla-beta at 93c0c5e4ec30 to some destination.

python archiver_client.py mozharness --repo releases/mozilla-beta --rev 93c0c5e4ec30 --destination /home/jlund/downloads/mozharness  

Note: if that was the first time Archiver was polled for that repo + rev, it might take a few seconds, as it has to download Mozharness from hgmo and then upload it to S3. Subsequent calls will happen near-instantly.

Note 2: if your --destination path already exists with a copy of Mozharness or something else in it, the client won't rm that path; it will merge into it (just as unpacking a tarball would).

3) a Release Engineering service that is still using hg.mozilla.org/build/mozharness

Not all Mozharness scripts are used for continuous integration / release jobs. There are a number of Releng services that are based on Mozharness: e.g. Bumper, vcs-sync, and merge_day. As these services transition to using Archiver, they will continue to use hgmo/build/mozharness as the Repository of Record (RoR).

If certain services cannot use gecko-based Mozharness, we can fork Mozharness and set up a separate repo. That will of course mean such services won't receive upstream changes from the gecko copy, so we should avoid this if possible.

If you are an owner or major contributor to any of these releng services, we should meet and talk about such a transition. Archiver and its client should make deployments pretty painless in most cases.

Have something that may benefit from Archiver?

If you want to move something into a larger repository or be able to pull something out of such a repository for lightweight deployments, feel free to chat to me about Archiver and Relengapi.

As always, please leave your questions, comments, and concerns below.

QMOHelp us Triage Firefox Bugs – Introducing the Bug Triage tool

Interested in helping with some Firefox Bug Triage?  We have a new experimental tool that makes it really easy to help on a daily basis.

Please visit our new triage tool and sign up to triage some bugs. If you can give us a few minutes of your day, you can help Mozilla move faster!

Air MozillaJuly Privacy Lab - Crypto Wars with guest speakers from CDT and EFF

July Privacy Lab - Crypto Wars with guest speakers from CDT and EFF July's Privacy Lab will include guest speakers from CDT and EFF to talk about backdoors and crypto wars.

Ben HearsumMozilla Software Release GPG Key Transition

Late last week we discovered that the GPG key we use to sign Firefox, Fennec, and Thunderbird nightly builds and releases had expired. We had been aware that this was coming up, but we unfortunately missed our deadline to renew it. This caused failures in many of our automated nightly builds, so it was quickly noticed and acted upon.

Our new GPG key is as follows, and available on keyservers such as gpg.mozilla.org and pgp.mit.edu:

pub   4096R/0x61B7B526D98F0353 2015-07-17
      Key fingerprint = 14F2 6682 D091 6CDD 81E3  7B6D 61B7 B526 D98F 0353
uid                            Mozilla Software Releases 
sub   4096R/0x1C69C4E55E9905DB 2015-07-17 [expires: 2017-07-16]

The new primary key is signed by many Mozillians, by the old master key, and by our OpSec team's GPG key. Nightlies and releases will now be signed with the subkey (0x1C69C4E55E9905DB), and a new subkey will be generated from the same primary key before this one expires. This means that you can validate Firefox releases with the primary public key in perpetuity.
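
For example, to fetch the new key from one of those keyservers and check a release archive against its detached signature (the file names below are placeholders for whatever you actually downloaded):

gpg --keyserver gpg.mozilla.org --recv-keys 0x61B7B526D98F0353
gpg --verify firefox-VERSION.tar.bz2.asc firefox-VERSION.tar.bz2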

We are investigating a few options to make sure key renewal happens without delay in the future.

Mozilla IT & OperationsTroubleshooting the Internet

Introduction

If you’re working from home, a coffee shop, or even an office and are experiencing “issues with the Internet”, this blogpost might be useful to you.
We’re going to go through some basic troubleshooting and tips to help solve your problem, or at worst gather data so your support team can investigate.

Something to keep in mind while reading is that everything is packets. When downloading a webpage, an image, or an email, or when video-conferencing, you have to imagine your computer splitting everything into tiny numbered packets and sending them to their destination through pipes, where they will be reassembled in order.
Network issues are those little packets not getting properly from A to Z, because of full pipes, bugs, overloaded computers, external interference, and a LOT more possible reasons.

Easy tests

We can start by running some easy online tests. Even if they are not 100% accurate, you can run them in a few clicks from your browser (or even your phone), and they can point you in the right direction.

Speedtest is a good example. Hit the big button and wait; don’t run any downloads or bandwidth-heavy applications during the test. Ideally you already know what a “normal” value looks like, so you can compare the two.

As some sneaky providers can prioritize connections to the famous Speedtest, you can try another less known website like ping.online.net. Start the download of a large file and check how fast it goes.

Note that the connection is shared between all the connected users. So one can easily monopolize the bandwidth and clog the pipe for everyone else. This is less likely to happen in our offices where we try to have pipes large enough for everyone, but can happen in public places. In this case there is nothing much you can do.

Next is a GREAT one: Netalyzr. You will need Java, but it’s worth it (first time I have ever said that). There is also an Android app, but it’s better to run the test from the device experiencing the issue (though it’s also interesting to see how good your mobile/data “Internet” really is). Netalyzr will run TONS of tests (DNS, blocked ports, latency, etc.) and give you a report that is quite easy to understand or share with a helpdesk. See the results for Mozilla’s Paris office.

Screenshot of Netalyzr result

Some ISPs are starting to roll out a new version of the Internet, called IPv6 (finally!). As it’s “new” for them, there can be some issues. To check that, run the test on Test-ipv6.

Basic connectivity

If the previous tests show that something is wrong, or you still have a doubt, these tools are made to test basic connectivity to the Internet as well as to check the path and basic health of each network segment between you and a distant server.

“ping” will send a probe each second to a target and report how much time it takes (round trip) to reach it. As a reference, a typical time between Europe and the US west coast is about ~160ms, between the US east and west coasts ~90ms, and between Taipei and Europe ~300ms. If your times are much higher than that, you might indeed be seeing a “laggish/slow Internet”.
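
For example, on Linux or OS X (the target host here is just an example):

ping -c 10 en.wikipedia.org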

Screenshot of pings to Wikipedia

“traceroute” is a bit different; it does the same thing but also shows you the intermediate hops between the source and the destination.
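
For example, to see the hops toward the same target (the host is again just an example; on Windows the command is tracert):

traceroute en.wikipedia.org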

The best tool in this domain is called “mtr”, and it is a great combination of the previous two: for each hop, it runs a continuous ping. Add --report so you can easily copy and paste the result if you need to share it. Having some higher latency or packet loss on a hop or two is usually not an issue: network devices are not made to reply to those probes, they only do so if they are not too busy with their main job of forwarding packets. A good indicator of a real issue is when many nodes (and especially your target) start showing packet loss or higher latency.
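
For example, to produce a report you can paste into a support ticket (the target host is just an example):

mtr --report www.mozilla.org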

Screenshot of an mtr to www.mozilla.org

If the numbers are high on the first hop, it means something is wrong between your computer and your home/office/bar’s Internet gateway.

If the issues appear “around the middle”, it’s a problem with your ISP or your ISP’s ISP, and the only thing you can do is call their support to complain, or change ISP when possible. That is what happened with Netflix in the US.

If the numbers are high near the very end, it’s probably an issue with the service you’re testing; try contacting them.

Wireless

An easy one, when possible: if you are on wireless, try switching to a wired network and re-run the tests mentioned above. It might be surprising, but wireless isn’t magic :)

Most home and small-shop wifi routers (labeled “BGN”, i.e. 2.4GHz) can only use 1 out of 3 usable frequencies/channels to talk to their clients, and clients have to wait for their turn to talk.
So imagine your neighbor’s wifi is on the same frequency. Even if your wifi distinguishes your packets from your neighbor’s, they will have to share the channel. If you have more than 2 neighbors (wireless networks visible in the wifi list), you will have a suboptimal wireless experience, and it degrades quickly as each new network appears.

To solve or mitigate it, you can use an app like “Wifi Analyzer” on Android. The X axis shows the frequencies (only channels 1, 6 and 11 are really usable, as they don’t overlap) and the Y axis shows how noisy each one is. Locate yours, then, if another channel is less busy, go into your wireless router’s settings and move to it.

Screenshot of wifi analyser

If they are all busy, the last option is to buy a router that supports a more recent standard, labeled “ABGN” (or now “AC”), i.e. 5GHz. Most high-end phones and laptops support it as well. That range has around 20 usable channels instead of 3.

Another common issue in public places is when some people can’t connect at all while others have no issue. This is usually due to a mechanism called DHCP that allocates an address (think of it as a slot) to your device. Default settings are made for small networks and remember slots for a long time, even if they are not used anymore. It’s usually possible to reduce this “remember” time (the lease time) and/or enlarge the pool of slots in the router’s settings.
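
As an illustration only: on routers whose DHCP server happens to be dnsmasq (many home routers use it under the hood, behind a web UI), those two knobs correspond to a setting roughly like the one below, where the address range and the 2-hour lease time are placeholders:

# bigger pool of slots, shorter "remember" (lease) time -- values are placeholders
dhcp-range=192.168.1.50,192.168.1.250,2h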

Wired

These are less complicated to troubleshoot. You can start by unplugging everything else connected to your router and seeing if things get better. If not, the issue might come from that box itself. Also be careful not to create physical loops in your network (like plugging 2 cables between your router and a switch).

Air MozillaMartes mozilleros

Martes mozilleros A biweekly meeting to talk about the state of Mozilla, the community, and its projects.