QMOFirefox 41 Aurora Testday, July 31st

Hi there, I want to let you know that this Friday, July 31st, we’ll be hosting the Firefox 41.0 Aurora Testday. The main focus of this event will be NPAPI Flash and Hello Chat. Detailed participation instructions are available in this etherpad.

No previous testing experience is required so feel free to join us on the #qa IRC channel and our moderators will make sure you’ve got everything you need to get started.

Hope to see you all on Friday! Let’s make Firefox better together! 😀

Mozilla Cloud Services BlogShutting down the legacy Sync service

In response to strong user uptake of Mozilla’s new Sync service powered by Firefox Accounts, earlier this year we announced a plan to transition users off of our legacy Sync infrastructure and onto the new product.  With this migration now well under way, it is time to settle the details of a graceful end-of-life for the old service.

We will shut down the legacy Sync service on September 30th 2015.

We encourage all users of the old service to upgrade to a Firefox Account, which offers a simplified setup process, improved availability and reliability, and the possibility of recovering your data even if you lose all of your devices.

Users on Firefox 37 or later are currently being offered a guided migration process to make the experience as seamless as possible.  Users on older versions of Firefox will see a warning notice and will be able to upgrade manually.  Users running their own Sync server, or using a Sync service hosted by someone other than Mozilla, will not be affected by this change.

We are committed to making this transition as smooth as possible for Firefox users.  If you have any questions, comments or concerns, don’t hesitate to reach out to us on sync-dev@mozilla.org or in #sync on Mozilla IRC.

 

FAQ

 

  • What will happen on September 30th 2015?

After September 30th, we will decommission the hardware hosting the legacy Sync service and discard all data stored therein.  The corresponding DNS names will be redirected to static error pages, to ensure that appropriate messaging is provided for users who have yet to upgrade to the new service.

  • What’s the hurry? Can’t you just leave it running in maintenance mode?

Unfortunately not.  While we want to ensure as little disruption as possible for our users, the legacy Sync service is hosted on aging hardware in a physical data-center and incurs significant operational costs.  Maintaining the service beyond September 30th would be prohibitively expensive for Mozilla.

  • What about Extended Support Release (ESR)?

Users on the ESR channel have support for Firefox Accounts and the new Sync service as of Firefox 38.  Previous ESR versions reach end-of-life in early August and we encourage all users to upgrade to the latest version.

  • Will my data be automatically migrated to the new servers?

No, the strong encryption used by both Sync systems means that we cannot automatically migrate your data on the server.  Once you complete your account upgrade, Firefox will re-upload your data to the new system (so if you have a lot of bookmarks, you may want to ensure you’re on a reliable network connection).

  • Are there security considerations when upgrading to the new system?

Both the new and old Sync systems provide industry-leading security for your data: client-side end-to-end encryption of all synced data, using a key known only to you.

In legacy Sync this was achieved by using a complicated pairing flow to transfer the encryption key between devices.  With Firefox Accounts we have replaced this with a key securely derived from your account password.  Pick a strong password and you can remain confident that your synced data will only be seen by you.
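
For illustration only, here is a rough Node.js sketch of the general idea of deriving an encryption key from a password. This is not the actual Firefox Accounts key-derivation protocol (which is more involved); the salt, iteration count and parameters below are made up:

var crypto = require('crypto');

// Derive a 32-byte key from a password plus a per-user salt.
// The general idea: the key is derived client-side, so the server never
// needs to see it and the synced data stays encrypted end-to-end.
var key = crypto.pbkdf2Sync('correct horse battery staple',  // the password
                            'per-user-salt',                 // hypothetical salt
                            100000,                          // iteration count
                            32,                              // key length in bytes
                            'sha256');
console.log(key.toString('hex'));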

  • Does Mozilla use my synced data to market to me, or sell this data to third parties?

No.  Our handling of your data is governed by Mozilla’s privacy policy which does not allow such use.  In addition, the strong encryption provided by Sync means that we cannot use your synced data for such purposes, even if we wanted to.

  • Is the new Sync system compatible with Firefox’s master password feature?

Yes.  There was a limitation in previous versions of Firefox that prevented Sync from working when a master password was enabled, but this has since been resolved.  Sync is fully compatible with the master password feature in the latest version of Firefox.

  • What if I am running a custom or self-hosted Sync server?

This transition affects only the default Mozilla-hosted servers.  If you are using a custom or self-hosted server setup, Sync should continue to work uninterrupted and you will not be prompted to upgrade.

However, the legacy Sync protocol code inside Firefox is no longer maintained, and we plan to begin its removal in 2016.  You should consider migrating your server infrastructure to use the new protocols; see below.

  • Can I self-host the new system?

Yes, either by hosting just the storage servers or by running a full Firefox Accounts stack.  We welcome feedback and contributions on making this process easier.
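
As a minimal sketch of what that looks like on the client side (assuming the documented identity.sync.tokenserver.uri preference and a hypothetical hostname), a Firefox profile can be pointed at a self-hosted storage server by changing one preference in about:config before signing in to Sync:

identity.sync.tokenserver.uri = https://sync.example.com/token/1.0/sync/1.5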

  • What if I’m using a different browser (e.g. SeaMonkey, Pale Moon, …)?

Your browser vendor may already provide alternate hosting.  If not, you should consider hosting your own server to ensure uninterrupted functionality.

Benjamin KerensaUnnecessary Finger Pointing

I just wanted to pen quickly that I found Chris Beard’s open letter to Satya Nadella (CEO of Microsoft) to be a bit hypocritical. In the letter he said:

“I am writing to you about a very disturbing aspect of Windows 10. Specifically, that the update experience appears to have been designed to throw away the choice your customers have made about the Internet experience they want, and replace it with the Internet experience Microsoft wants them to have.”

Right, but what about the experiences that Mozilla chooses to default for users, like switching to Yahoo and making that the default upon upgrade without respecting their previous settings? What about baking Pocket and Tiles into the experience? Did users want these features? All I have seen is opposition to them.

“When we first saw the Windows 10 upgrade experience that strips users of their choice by effectively overriding existing user preferences for the Web browser and other apps, we reached out to your team to discuss this issue. Unfortunately, it didn’t result in any meaningful progress, hence this letter.”

Again see above and think about the past year or two where Mozilla has overridden existing user preferences in Firefox. The big difference here is Mozilla calls it acting on behalf of the user as its agent, but when Microsoft does the same it is taking away choice?

Set Firefox as Windows 10 Default: clearly not that difficult

Anyways, I could go on, but the gist is that the letter is hypocritical and amounts to unnecessary finger pointing. Let’s focus on making great products for our users, and technical changes like this to Windows won’t be a barrier to users picking Firefox. Sorry that I cannot be a Mozillian who will blindly retweet you and support a misguided social media campaign to point fingers at Microsoft.

Read the entire letter here:

https://blog.mozilla.org/blog/2015/07/30/an-open-letter-to-microsofts-ceo-dont-roll-back-the-clock-on-choice-and-control/

John O'DuinnThe “we are all remoties” book!?!

I’ve been working in distributed teams, as well as talking, presenting, coaching and blogging about “remoties”, in one form or another, for 8? 9? years now. So, I’m excited to announce that I recently signed a contract with O’Reilly to write a book about how to successfully work in, and manage in, a geo-distributed world. Yes, I’m writing a “we are all remoties” book. If you’ve been in one of my ever-evolving “we are all remoties” sessions, you have an idea of what will be included.

If you’ve ever talked with me about the pros (and cons!) of working as a remote employee or of working in a distributed team, you already know how passionate I am about this topic. I care deeply about people being able to work well together, and having meaningful careers, while being physically or somehow otherwise remote from each other. Done incorrectly, this situation can be frustrating and risky to your career, as well as risky to employers. Done correctly, however, this could be a global change for good, raising the financial, technical and economic standards across all sorts of far flung places around the globe. Heady game-changing stuff indeed.

There are many “advocacy books” out there, explaining why working remote is a good / reasonable thing to do – typically written from the perspective of the solo person who is already remote. There are also many different tools becoming available to help people working in distributed teams – exciting to see. However, I found very few books, or blogposts, talking about the practical mechanics of *how* to use a combination of these tools and some human habits to allow humans to work together effectively in distributed teams, especially at any scale or over a sustained amount of time. Hence, my presentations, and now, this upcoming book.

Meanwhile,

  • if you are physically geo-distributed from the people you work with, I’d like to hear what does or doesn’t work for you. If you know someone who is in this situation, please share this post with them.
  • If you have experience working in distributed teams, is there something that you wish was already explained in a book? Something that you had to learn the hard way, but which you wish was clearly signposted to make it easier for others following to start working in distributed teams? Do you have any ideas that did / didn’t work for you?
  • If you have published something on the internet about remoties, please be tolerant of any questions I might ask. If you saw any of my “we are all remoties” presentations, is there anything that you would like to see covered in more/less detail? Anything that you wish was written up in a book to help make the “remote” path easier for those following behind?

Now, time to brew some coffee and get back to typing.

John.
=====

Daniel StenbergThe last HTTP Workshop day

This workshop has consisted of really intense days so far, and this last and fourth workshop day was no different. We started out the morning with the presentation Caching, Intermediation and the Modern Web by Martin Thomson (Mozilla), describing his idea of a “blind cache” and how it could help to offer caching in an HTTPS world. It of course brought a lot of discussion and further brainstorming on the ideas and how various people in the room thought the idea could be improved or changed.

Immediately following that, Martin continued with a second presentation describing a suggested new encryption format for HTTP based on the JWE format and how it could possibly be used.

The room then debated connection coalescing (with HTTP/2) for a while and some shared their experiences and thoughts on the topic. It is an area where over-sharing based on the wrong assumptions certainly can lead to tears and unhappiness, but the few in the room who have actually implemented this seem to have considered most of the problems people could foresee.

Support for trailers in HTTP was brought up and we discussed their virtues for a while versus the possible problems and caveats of supporting them. We also explored the idea of using HTTP/2 push instead of trailers to allow servers to send metadata that way, and that metadata then doesn’t necessarily have to follow after the transfer but can in fact be sent during the transfer!

Resumed uploads is a topic that comes back every now and then and that has some interest. (It is probably one of the most frequently requested protocol features I get asked about.) It was brought up as something we should probably discuss further, especially when discussing the next generation of HTTP.

At some point in the future we will start talking about HTTP/3. We had a long discussion with the whole team here on what HTTP/3 could entail and we also explored general future HTTP and HTTP/2 extensions and more. A massive list of possible future work was created. The list ended up with something like 70 different things to discuss or work on, but of course most of those things will never actually become reality.

With so much possible or potential work ahead, we need to involve more people who want to, and can, consider writing specs. To show how easy it apparently can be, Martin demoed how to write a first I-D using the fancy Internet Draft Template Repository. Go check it out!

Poul-Henning Kamp brought up the topic of “CO2 usage of the Internet” and argued that current and future protocol work needs to consider its environmental impact and how “green” it is. Ilya Grigorik (Google) showed off numbers from HTTP Archive’s data and demoed how easy it is to use the BigQuery feature to extract useful information and statistics out of the vast amount of data they’ve gathered there. Brad Fitzpatrick (Google) showed off his awesome tool h2i and how we can use it to poke at and test HTTP/2 server implementations in a really convenient, almost telnet-style, command-line way.

Finally, Mark Nottingham (Akamai) showed off his redbot.org service that runs checks against a site, examines its responses, reports in detail exactly what it responds and why, and provides a bunch of analysis and information based on that.

Such an eventful day really had to be rounded off with a bunch of beers and so we did. The HTTP Workshop of the summer of 2015 has ended. The event was great. The attendees were great. The facilities and the food were perfect. I couldn’t ask for more. Thanks for arranging such a great happening!

I’ll round off showing off my laptop lid after the two new stickers of the week were applied. (The HTTP Workshop one and an Apache one I got from Roy):

laptop-stickers

… I’ll get up early tomorrow morning and fly back home.

Air MozillaKyle Zentner: CSS Containment - Leave my divs alone!

Kyle Zentner: CSS Containment - Leave my divs alone! Mozilla Intern Kyle Zentner describes his project - CSS Containment: Leave my divs alone! How to make pages (and frameworks) less janky and more predictable.

About:CommunityMeet an MDN Contributor: Heather Bloomer

Headshot photo of Heather Bloomer

Heather Bloomer started contributing to Mozilla in November 2014, initially on SUMO. There, she saw a link to MDN, and realized she could contribute there as well. So, she is a “crossover” who contributes to helping both end-users and developers. She has been heavily involved in the Learning Area project, writing and editing Glossary entries and tutorials. She describes her contributions as “a continuing journey of enlightenment and an overall awesome experience.”

Here’s more from Heather:

I feel what I do on MDN has personally enhanced my writing skills and expanded my technical knowledge. I also feel I am making a positive impact in the MDN community and for developers who refer to MDN, from beginners to advanced. It is an amazing feeling to be part of something bigger than yourself and to grow and nurture not only oneself, but others as well.

My advice for new contributors is to just reach out and connect with the MDN community. Join the team and just dig in. If you need help on getting started, we are more than happy to point you in the right direction. We are friendly, supportive, encouraging and a team driven bunch of folks!

Thanks, Heather!

Air MozillaSpenser Bauman: Making Polymorphism Fast

Spenser Bauman: Making Polymorphism Fast Mozilla Intern Spenser Bauman describes his project SpiderMonkey: Making polymorphism fast. Tweaking the JIT for faster container operations.

Air MozillaPeter Elmers: DXR: The new_one

Peter Elmers: DXR: The new_one Mozilla intern Peter Elmers describes his project - DXR: the new_one: what's there, what's new, and what's next in the land of DXR.

Air MozillaNihanth Subramanya: Making ContentSearch Great

Nihanth Subramanya: Making ContentSearch Great Mozilla intern Nihanth Subramanya presents: Making ContentSearch Great. Bringing the new "Flare" design to in-content search, consistent with the main searchbox.

Air MozillaMiles Crabill: (Kinda Fear) The Reaper

Miles Crabill: (Kinda Fear) The Reaper Mozilla intern Miles Crabill presents (Kinda Fear) The Reaper. The Reaper is a Go application that queries AWS for resources, filters them, notifies their owners,...

Air MozillaJimmy Wang: One Process At A Time, e10s

Jimmy Wang: One Process At A Time, e10s Mozilla intern Jimmy Wang presents: One Process At A Time, e10s. From converting page info to e10s to removing unsafe CPOWs, making lightweight web themes...

Air MozillaIntern Presentations

Intern Presentations 6 interns will be presenting what they worked on over the summer: Spenser Bauman, SpiderMonkey: Making polymorphism fast. Tweaking the JIT for faster container operations....

Air MozillaFrancesco Polizzi: Marrying Growth, Data, and Privacy on the Web

Francesco Polizzi: Marrying Growth, Data, and Privacy on the Web Mozilla intern Francesco Polizzi describes his project: Marrying Growth, Data, and Privacy on the Web. Is the internet in danger of data driven disaster? Maybe....

Air MozillaUrsula Sarracini: Three Easy Steps to a Happy e10s

Ursula Sarracini: Three Easy Steps to a Happy e10s Ursula Sarracini - Three Easy Steps To a Happy e10s. I'll show you how to make a project multi-process friendly by showing you how I...

Air MozillaIntern Presentation - Ursula Sarracini

Intern Presentation - Ursula Sarracini Ursula Sarracini - Three Easy Steps To a Happy e10s. I'll show you how to make a project multi-process friendly by showing you how I...

Air MozillaIntern Presentations

Intern Presentations Ursula Sarracini - Three Easy Steps To a Happy e10s. I'll show you how to make a project multi-process friendly by showing you how I...

The Mozilla BlogSafeguarding Choice and Control Online

We are calling on Microsoft to “undo” its aggressive move to override user choice on Windows 10

Mozilla exists to bring choice, control and opportunity to everyone on the Web. We build Firefox and our other products for this reason. We build Mozilla as a non-profit organization for this reason. And we work to make the Internet experience beyond our products represent these values as much as we can.

Sometimes we see great progress, where consumer products respect individuals and their choices. However, with the launch of Windows 10 we are deeply disappointed to see Microsoft take such a dramatic step backwards. It is bewildering to see, after almost 15 years of progress bolstered by significant government intervention, that with Windows 10 user choice has now been all but removed. The upgrade process now appears to be purposefully designed to throw away the choices its customers have made about the Internet experience they want, and replace it with the Internet experience Microsoft wants them to have.

On the user choice benchmark, Microsoft’s Windows 10 falls woefully short, even when compared to its own past versions. While it is technically possible for people to preserve their previous settings and defaults, the design of the new Windows 10 upgrade experience and user interface does not make this obvious or easy. We are deeply passionate about our mission to ensure people are front, center and squarely in the driver’s seat of their online experience, so when we first encountered development builds of Windows 10 that appeared to override millions of individual decisions people have made about their experience, we were compelled to immediately reach out to Microsoft to address this. And so we did. Unfortunately this didn’t result in any meaningful change.

Today we are sending an open letter to Microsoft’s CEO to again insist that Windows 10 make it easy, obvious and intuitive for people to maintain the choices they have already made — and make it easier for people to assert new choices and preferences.

In the meantime, we’re rolling out support materials and a tutorial video to help guide everyone through the process of preserving their choices on Windows 10.

Blog Post: Firefox for Windows 10: How to Restore or Choose Firefox as Your Default Browser

An Open Letter to Microsoft’s CEO: Don’t Roll Back the Clock on Choice and Control

The Mozilla BlogAn Open Letter to Microsoft’s CEO: Don’t Roll Back the Clock on Choice and Control

Satya,

I am writing to you about a very disturbing aspect of Windows 10. Specifically, that the update experience appears to have been designed to throw away the choice your customers have made about the Internet experience they want, and replace it with the Internet experience Microsoft wants them to have.

When we first saw the Windows 10 upgrade experience that strips users of their choice by effectively overriding existing user preferences for the Web browser and other apps, we reached out to your team to discuss this issue. Unfortunately, it didn’t result in any meaningful progress, hence this letter.

We appreciate that it’s still technically possible to preserve people’s previous settings and defaults, but the design of the whole upgrade experience and the default settings APIs have been changed to make this less obvious and more difficult. It now takes more than twice the number of mouse clicks, scrolling through content and some technical sophistication for people to reassert the choices they had previously made in earlier versions of Windows. It’s confusing, hard to navigate and easy to get lost.

Mozilla exists to bring choice, control and opportunity to everyone. We build Firefox and our other products for this reason. We build Mozilla as a non-profit organization for this reason. And we work to make the Internet experience beyond our products represent these values as much as we can.

Sometimes we see great progress, where consumer products respect individuals and their choices. However, with the launch of Windows 10 we are deeply disappointed to see Microsoft take such a dramatic step backwards.

These changes aren’t unsettling to us because we’re the organization that makes Firefox. They are unsettling because there are millions of users who love Windows and who are having their choices ignored, and because of the increased complexity put into everyone’s way if and when they choose to make a choice different than what Microsoft prefers.

We strongly urge you to reconsider your business tactic here and again respect people’s right to choice and control of their online experience by making it easier, more obvious and intuitive for people to maintain the choices they have already made through the upgrade experience. It should be easier for people to assert new choices and preferences, not just for other Microsoft products, through the default settings APIs and user interfaces.

Please give your users the choice and control they deserve in Windows 10.

Sincerely,

Chris Beard
CEO, Mozilla

Blog Post: Firefox for Windows 10: How to Restore or Choose Firefox as Your Default Browser

Blog Post: Safeguarding Choice and Control Online

Air MozillaGerman speaking community bi-weekly meeting

German speaking community bi-weekly meeting https://wiki.mozilla.org/De/Meetings

Matjaž HorvatA single platform for localization

Let’s get straight to the biscuits. From now on, you only need one tool to localize Mozilla stuff. That’s it. Single user interface, single translation memory, single permission management, single user account. Would you like to give it a try? Keep on reading!

A little bit of background.
Mozilla software and websites are localized by hundreds of volunteers, who give away their free time to put exciting technology into the hands of people across the globe. Keep in mind that 2 out of 3 Firefox installations are non-English and we haven’t shipped a single Firefox OS phone in English yet.

Considering the amount of impact they have and the work they contribute, I have huge respect for our localizers and the feedback we get from them. One of the most common complaints I’ve been hearing is that we have too many localization tools. And I couldn’t agree more. At one of our recent l10n hackathons I was even introduced to a tool I had never heard about despite 13 years of involvement with Mozilla localization!

So I thought, “Let’s do something about it!”

9 in 1.
I started by looking at the tools we use in the Slovenian team and counted 9(!) different tools:

Eating my own dog food, I had already integrated all 3 terminology services into Pontoon, so that suggestions from these sources are presented to users while they translate. Furthermore, Pontoon syncs with repositories, sometimes even more often than the dashboards, practically eliminating the need to look at them.

So all I had to do was migrate projects from the rest of the editors into Pontoon. Not a single line of code needed to be written for the Verbatim migration. Pootle and the text editor were slightly more complicated. They were used to localize Firefox, Firefox for Android, Thunderbird and Lightning, which all use the huge mozilla-central repository as their source repository and share locale repositories.

Nevertheless, a few weeks after the team agreed to move to Pontoon, Slovenian now uses Pontoon as the only tool to localize all (31) of our active projects!

Who wants to join the party?
Slovenian isn’t the only team using Pontoon. In fact, there are two dozen locales with at least 5 projects enabled in Pontoon. Recently, Ukrainian (uk) and Brazilian Portuguese (pt-BR) have been especially active, not only in terms of localization but also in terms of feedback. A big shout out to Artem, Marco and Marko!

There are obvious benefits of using just one tool, namely keeping all translations, attributions, contributor stats, etc. in one place. To give Pontoon a try, simply select a project and request your locale to be enabled. Migrating projects from other tools will of course preserve all the translations. Starting today, that includes attributions and submission dates (who translated what, and when it was translated) if you’re moving projects from Verbatim.

And, as you already know, Pontoon is developed by Mozilla, so we invite you to report problems and request new features. We also accept patches. ;) We have many exciting things coming up by the end of the summer, so keep an eye out for updates!

Air MozillaWeb QA Weekly Meeting

Web QA Weekly Meeting This is our weekly gathering of Mozilla'a Web QA team filled with discussion on our current and future projects, ideas, demos, and fun facts.

Air MozillaReps weekly

Reps weekly Weekly Mozilla Reps call

Ehsan AkhgariTab audio indicators and muting in Firefox Nightly

Sometimes when you have several tabs open, and one of them starts to make some noise, you may wonder where the noise is coming from.  Other times, you may want to quickly mute a tab without figuring out if the web page provides its own UI for muting the audio.  On Wednesday, I landed the user facing bits of a feature to add an audio indicator to the tabs that are playing audio, and enable muting them.  You can see a screenshot of what this will look like in action below.

Tab audio indicators in action

As you can see in the screenshot, my SoundCloud tab is playing audio, and so is my YouTube tab, but the YouTube tab has been muted.  Muting and unmuting a tab is easy by clicking on the tab audio indicator icon.  You can test this out yourself on Firefox Nightly starting tomorrow!

This feature should work with all APIs that let you play audio, such as HTML5 <audio> and <video>, and Web Audio.  Also, it works with the latest Flash beta.  Note that you actually need to install the latest Flash beta, that is, version 19.0.0.124 which was released yesterday.  Earlier versions of Flash won’t work with this feature.
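
As a quick, illustrative way to try it out, here is a minimal Web Audio snippet; any page running something like this should show the audio indicator on its tab and can then be muted from there:

// Play a short tone with Web Audio; the tab should get an audio indicator.
var ctx = new (window.AudioContext || window.webkitAudioContext)();
var osc = ctx.createOscillator();
osc.type = 'sine';
osc.frequency.value = 440;        // A4 tone
osc.connect(ctx.destination);
osc.start();
// Stop after two seconds so the indicator disappears again.
setTimeout(function () { osc.stop(); }, 2000);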

We’re interested in your feedback about this feature, and especially about any bugs that you may encounter.  We hope to iron out the rough edges and then let this feature ride the trains.  If you are curious about its progress, please follow along on the tracking bug.

Last but not least, this is the result of the effort of many of my colleagues, most notably Andrea Marchesini, Benoit Girard, and Stephen Horlander.  Thanks to them and everyone else who helped with the code, reviews, and other things!

David BurnsAnother Marionette release! Now with Windows Support!

If you have been wanting to use Marionette but couldn't because you only work on Windows, now is your chance to do so! All the latest downloads are available from our development GitHub repository releases page.

There is also a new page on MDN that walks you through the process of setting up Marionette and using it. I have only updated the Python bindings so I can get a feel for how people are using it.

Since you are awesome early adopters, it would be great if you could raise bugs.

I am not expecting everything to work, but below is a quick list of things that I know don't work.

  • No support for self-signed certificates
  • No support for actions
  • No support for the logging endpoint
  • getPageSource not available. This will be added in at a later stage, it was a slightly contentious part in the specification.
  • I am sure there are other things we don't remember

Switching frames needs to be done with either a WebElement or an index. Windows can only be switched by window handles. This is currently how it has been discussed in the specification.

If in doubt, raise bugs!

Thanks for being an early adopter and thanks for raising bugs as you find them!

Karl DubostCSS Vendor Prefixes - Some Historical Context

A very good (must) read by Daniel Glazman about CSS vendor prefixes and their challenges. He reminds us of what I was brushing over yesterday about the issues with regard to Web Compatibility:

Flagged properties have another issue: they don't solve the problem of proprietary extensions to CSS that become mainstream. If a given vendor implements for its own usage a proprietary feature that is so important to them, internally, they have to "unflag" it, you can be sure some users will start using it if they can. The spread of such a feature remains a problem, because it changes the delicate balance of a World Wide Web that should be readable and usable from anywhere, with any platform, with any browser.

I think the solution is in the hands of browser vendors: they have to consider that experimental features are experimental whatever their spread in the wild. They don't have to care about the web sites they will break if they change, update or even ditch an experimental or proprietary feature. We have heard too many times the message « sorry, can't remove it, it spread too much ». It's a bad signal because it clearly tells CSS Authors experimental features are reliable because they will stay forever as they are. They also have to work faster and avoid leaving an experimental feature alive for more than two years.

Emphasis is mine on this last part. Yes it's a very bad signal. And check what was said yesterday.

@AlfonsoML And we will always support them (unlike some vendors that remove things at will). So what is the issue?

This is the issue in terms of Web Compatibility. It's precisely what I was saying: implementers do not understand the impact it has.

Hal WineDecoding Hashed known_hosts Files

tl;dr: You might find this gist handy if you enable HashKnownHosts

Modern ssh comes with the option to obfuscate the hosts it can connect to, by enabling the HashKnownHosts option. Modern server installs have that as a default. This is a good thing.

The obfuscation occurs by hashing the first field of the known_hosts file - this field contains the hostname, port and IP address used to connect to a host. Presumably, there is a private ssh key on the host used to make the connection, so this process makes it harder for an attacker to utilize those private keys if the server is ever compromised.

Super! Nifty! Now how do I audit those files? Some services have multiple IP addresses that serve a host, so some updates and changes are legitimate. But which ones? It’s a one-way hash, so you can’t decode it.

Well, if you had an unhashed copy of the file, you could match host keys and determine the host name & IP. [1] You might just have such a file on your laptop (at least I don’t hash keys locally). [2] (Or build a special file by connecting to the hosts you expect with the options “-o HashKnownHosts=no -o UserKnownHostsFile=/path/to/new_master”.)
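
For illustration, here is a rough sketch of the matching idea in Node.js (the actual script mentioned below is written in Python). Hashed entries have the form |1|<base64 salt>|<base64 HMAC-SHA1 of the hostname>, so with a candidate hostname in hand you can recompute the HMAC and compare:

var crypto = require('crypto');

// Returns true if a hashed known_hosts host field matches the candidate name.
// hashedField looks like: |1|<base64 salt>|<base64 hash>
function matchesHashedEntry(hashedField, candidateHost) {
  var parts = hashedField.split('|');          // ['', '1', salt, hash, ...]
  if (parts[1] !== '1') return false;          // not a hashed entry
  var salt = Buffer.from(parts[2], 'base64');
  var digest = crypto.createHmac('sha1', salt) // salt is the HMAC key
                     .update(candidateHost)
                     .digest('base64');
  return digest === parts[3];
}

// Example (hypothetical entry): matchesHashedEntry('|1|...|...', 'github.com')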

I threw together a quick Python script to do the matching, and it’s at this gist. I hope it’s useful - as I find bugs, I’ll keep it updated.

Bonus Tip: https://github.com/defunkt/gist is a very nice way to manage gists from the command line.

Footnotes

[1] A lie - you’ll only get the host names and IPs that you have connected to while building your reference known_hosts file.
[2] I use other measures to keep my local private keys unusable.

Daniel GlazmanCSS Vendor Prefixes

I have read everything and its contrary about CSS vendor prefixes in the last 48 hours. Twitter, blogs, Facebook are full of messages or articles about what are or are supposed to be CSS vendor prefixes. These opinions are often given by people who were not members of the CSS Working Group when we decided to launch vendor prefixes. These opinions are too often partly or even entirely wrong so let me give you my own perspective (and history) about them. This article is with my CSS Co-chairman's hat off, I'm only an old CSS WG member in the following lines...

  • CSS Vendor Prefixes as we know them were proposed by Mike Wexler from Adobe in September 1998 to allow browser vendors to ship proprietary extensions to CSS.

    In order to allow vendors to add private properties using the CSS syntax and avoid collisions with future CSS versions, we need to define a convention for private properties. Here is my proposal (slightly different than was talked about at the meeting). Any vendors that defines a property that is not specified in this spec must put a prefix on it. That prefix must start with a '-', followed by a vendor specific abbreviation, and another '-'. All property names that DO NOT start with a '-' are RESERVED for using by the CSS working group.

  • One of the largest shippers of prefixed properties at that time was Microsoft that introduced literally dozens of such properties in Microsoft Office.
  • The CSS Working Group slowly evolved from that to « vendor prefixes indicate proprietary features OR experimental features under discussion in the CSS Working Group ». In the latter case, the vendor prefixes were supposed to be removed when the spec stabilized enough to allow it, i.e. reaching an official Call for Implementation.
  • Unfortunately, some prefixed « experimental features » were so immensely useful to CSS authors that they spread at a fast pace on the Web, even if the CSS authors were instructed not to use them. CSS Gradients (a feature we originally rejected: « Gradients are an example. We don't want to have to do this in CSS. It's only a matter of time before someone wants three colors, or a radial gradient, etc. ») are the perfect example of that. At some point in the past, my own editor BlueGriffon had to output several different versions of CSS gradients to accommodate the various implementation states available in the wild (WebKit, I'm looking at you...).
  • Unfortunately, some of those prefixed properties took a lot, really a lot, of time to reach a stable state in a Standard and everyone started relying on prefixed properties in production web sites...
  • Unfortunately again, some vendors did not apply the rules they decided themselves: since the prefixed version of some properties was so widely used, they maintained them with their early implementation and syntax in parallel to a "more modern" implementation matching, or not, what was in the Working Draft at that time.
  • We ended up just a few years ago in a situation where prefixed properties were so widely used they started being harmful to the Web. The incredible growth of first WebKit and then Chrome triggered a massive adoption of prefixed properties by CSS authors, up to the point that other vendors seriously considered implementing the -webkit- prefix themselves, or at least simulating it.

Vendor prefixes were not a complete failure. They allowed the release to the masses of innovative products and the deep adoption of HTML and CSS in products that were not originally made for Web Standards (like Microsoft Office). They allowed vendors to ship experimental features and gather priceless feedback from our users, CSS Authors. But they failed for two main reasons:

  1. The CSS Working Group - and the Group is really made only of its Members, the vendors - took faaaar too much time to standardize critical features that saw immediate massive adoption.
  2. Some vendors did not update nor "retire" experimental features when they had to do it, ditching the rules they themselves originally agreed on.

From that perspective, putting experimental features behind a flag that is by default "off" in browsers is a much better option. It's not perfect though. I'm still under the impression the standardization process becomes considerably harder when such a flag is "turned on" in a major browser before the spec becomes a Proposed Recommendation. A Standardization process is not a straight line, and even at the latest stages of standardization of a given specification, issues can arise and trigger more work and then a delay or even important technical changes. Even at PR stage, a spec can be formally objected to or face an IPR issue delaying it. As CSS matures, we increasingly deal with more and more complex features and issues, and it's hard to predict when a feature will be ready for shipping. But we still need to gather feedback, we still need to "turn flags on" at some point to get real-life feedback from CSS Authors. Unfortunately, you can't easily remove things from the Web. Breaking millions of web sites to "retire" an experimental feature is still a difficult choice...

Flagged properties have another issue: they don't solve the problem of proprietary extensions to CSS that become mainstream. If a given vendor implements for its own usage a proprietary feature that is so important to them, internally, they have to "unflag" it, you can be sure some users will start using it if they can. The spread of such a feature remains a problem, because it changes the delicate balance of a World Wide Web that should be readable and usable from anywhere, with any platform, with any browser.

I think the solution is in the hands of browser vendors: they have to consider that experimental features are experimental whatever their spread in the wild. They don't have to care about the web sites they will break if they change, update or even ditch an experimental or proprietary feature. We have heard too many times the message « sorry, can't remove it, it spread too much ». It's a bad signal because it clearly tells CSS Authors experimental features are reliable because they will stay forever as they are. They also have to work faster and avoid leaving an experimental feature alive for more than two years. That requires taking the following hard decisions:

  • if a feature does not stabilize in two years' time, that's probably because it's not ready or too hard to implement, or not strategic at that moment, or because the production of a Test Suite is too large an effort, or whatever. It has then to be dropped or postponed.
  • Tests are painful and time-consuming. But testing is one of the mandatory steps of our Standardization process. We should "postpone" specs that can't get a Test Suite to move along the REC track in a reasonable time. That implies removing the experimental feature from browsers, or at least turning the flag they live behind off again. It's a hard and painful decision, but it's a reasonable one given all I said above and the danger of letting an experimental feature spread.

Benjamin KerensaNóirín Plunkett: Remembering Them

Nóirín Plunkett & Benjamin Kerensa: Nóirín and I

Today I learned of some of the worst kind of news: my friend and a valuable contributor to the great open source community, Nóirín Plunkett, passed away. They (this is their preferred pronoun per their Twitter profile) were well regarded in the open source community for their contributions.

I had known them for about four years now, having met them at OSCON and seen them regularly at other events. They were always great to have a discussion with and learn from and they always had a smile on their face.

It is very sad to lose them as they demonstrated an unmatchable passion and dedication to open source and community and surely many of us will spend many days, weeks and months reflecting on the sadness of this loss.

Other posts about them:

https://adainitiative.org/2015/07/remembering-noirin-plunkett/
http://www.apache.org/memorials/noirin.html
http://www.harihareswara.net/sumana/2015/07/29/0

Jonathan GriffinA-Team Update, July 29, 2015

Highlights

Treeherder: We’ve added to mozlog the ability to create error summaries which will be used as the basis for automatic starring.  The Treeherder team is working on implementing database changes which will make it easier to add support for that.  On the front end, there’s now a “What’s Deployed” link in the footer of the help page, to make it easier to see what commits have been applied to staging and production.  Job details are now shown in the Logviewer, and a mockup has been created of additional Logviewer enhancements; see bug 1183872.

MozReview and Autoland: Work continues to allow autoland to work on inbound; MozReview has been changed to carry forward r+ on revised commits.

Bugzilla: The ability to search attachments by content has been turned off; BMO documentation has been started at https://bmo.readthedocs.org.

Perfherder/Performance Testing: We’re working towards landing Talos in-tree.  A new Talos test measuring tab-switching performance has been created (TPS, or Talos Page Switch); e10s Talos has been enabled on all platforms for PGO builds on mozilla-central.  Some usability improvements have been made to Perfherder – https://treeherder.mozilla.org/perf.html#/graphs.

TaskCluster: Successful OSX cross-compilation has been achieved; working on the ability to trigger these on Try and sorting out details related to packaging and symbols.  Work on porting Linux tests to TaskCluster is blocked due to problems with the builds.

Marionette: The Marionette-WebDriver proxy now works on Windows.  Documentation on using this has been added at https://developer.mozilla.org/en-US/docs/Mozilla/QA/Marionette/WebDriver.

Developer Workflow: A kill_and_get_minidump method has been added to mozcrash, which allows us to get stack traces out of Windows mochitests in more situations, particularly plugin hangs.  Linux xpcshell debug tests have been split into two chunks in buildbot in order to reduce E2E times, and chunks of mochitest-browser-chrome and mochitest-devtools-chrome have been re-normalized by runtime across all platforms.  Now that mozharness lives in the tree, we’re planning on removing the “in-tree configs”, and consolidating them with the previously out-of-tree mozharness configs (bug 1181261).

Tools: We’re testing an auto-backfill tool which will automatically retrigger coalesced jobs in Treeherder that precede a failing job.  The goal is to reduce the turnaround time required for this currently manual process, which should in turn reduce tree closure times related to test failures.

The Details

bugzilla.mozilla.org

Treeherder/Automatic Starring

  • We’re generating error summaries now that will serve as the basis for automatic starring work.

Treeherder/Front End

  • New “What’s Deployed” feature in Help footer to view stage/prod deployment status
  • Logviewer now contains the full ‘Job Info’ aka. tinderbox printlines (bug 1092209)
  • Created a mock of logviewer UI changes (bug 1183872)

Perfherder/Performance Testing

  • Working towards moving Talos code in-tree (bug 787200)
  • New Talos test TPS (Talos Page Switch) (bug 1166132)
  • Fixed a few data ingestion/duplication cases.
  • Adjusting calculation of suite summaries to match graph server, not finished yet (tracking: bug 1184968)
  • e10s on all platforms, only runs on mozilla-central for pgo builds, broken tests, big regressions are tracked in bug 1144120
  • Perfherder is easier to use, with some polish on test selection and the compare view, and most importantly we have found a few odd bugs that had caused duplicate data to show up; check it out: https://treeherder.mozilla.org/perf.html#/graphs
  • Starting the work of moving Android Talos to Autophone (bug 1170685)

MozReview/Autoland

  • bug 1184079 – Fix for autopublishing when authenticating to MozReview via BMO cookies
  • bug 1178025 – Commits table looks nicer
  • bug 1175166 – r+ is now carried forward on commits from level 3 authors

TaskCluster Support

Mobile Automation

  • Continued work on porting Android Talos tests to Autophone; remaining work is to figure out posting results and ensuring it runs on a regular basis and reliably.
  • Support for the Android stock browser and Dolphin has been added to mozbench (bug 1103134)

Dev Workflow

  • Created patch that replaces mach’s logger with mozlog. Still several rough edges and perf issues to iron out

Media Automation

  • The new MSE rewrite is now enabled by default on Nightly and we’re replacing a few tests in response: bug 1186943 – detection of video stalls has to respond to new internal strings from the new MSE implementation by :jya.
  • firefox-media-tests mozharness log is now parsed into steps for Treeherder’s Log Viewer
  • Fixed a problem with automation scripts for WebRTC tests for Windows 64.

General Automation

  • Moved mozlog.structured to top-level mozlog, and released mozlog 3.0
  • Added a kill_and_get_minidump method to mozcrash (bug 890026). As a result we’re getting minidumps out of Windows mochitests under more circumstances (in particular, plugin hangs in certain intermittently failing tests).
  • The MozillaPulse consumer now supports listening to multiple exchanges simultaneously (bug 1180897).
  • Bug 1186420 – Autophone – update requirements and deploy thclient 1.6
  • Bughunter moved to SCL3 without interruption
  • Bug 1185498 – Sisyphus – Bughunter – consume urls directly from Socorro
  • linux debug xpcshell was split into two chunks to reduce E2E times (bug 1185499)
  • runtimes for mochitest-browser-chrome and mochitest-devtools have been renormalized across all platforms
  • Allow Firefox UI tests to determine where to get Firefox crash symbols for releases and improve reproducibility
  • Testing auto-backfill in production (bug 1180732)
  • Now that mozharness lives in the tree, we’re going to remove the “in-tree configs”, which will consolidate mozharness options and make maintenance simpler (bug 1181261)

ActiveData

  • ActiveData requires monitoring on all nodes before it can be left alone for more than a day without it failing:
    • Made a fork of Supervisor to run simple cron jobs – the biggest task was finding and installing (and compiling!) the C libraries used
    • Added Supervisor to spot instances to monitor ES; not just the process, but query response time.  Also monitoring the indexing jobs.
  • Replicated OrangeFactor to ActiveData so a masters student (and the public) can query it or extract from it.

Marionette

  • Landed Proxy support via capabilities
  • Updating cookie support to return httpOnly flag
  • Added a –version arg to Marionette (bug 1183157)
  • Landing support for W3C Compatible Drivers in Selenium Tree and released 2.46.1 so users can use it.
  • Wrote a small guide to use it https://developer.mozilla.org/en-US/docs/Mozilla/QA/Marionette/WebDriver
  • Marionette<->WebDriver Proxy now works on Windows, Linux and OSX as of 0.3.0

Joel MaherLost in data – episode 2 – bisecting and comparing

This week on Lost in Data, we tackle yet another pile of alerts.  This time we have a set of changes which landed together and we push to try for bisection.  In addition we have an e10s only failure which happened when we broke talos uploading to perfherder.  See how I get one step closer to figuring out the root cause of the regressions.


Daniel StenbergA third day of HTTP Workshopping

I’ve met a bunch of new faces and friends here at the HTTP Workshop in Münster. Several who I’ve only seen or chatted with online before and some that I never interacted with until now. Pretty awesome really.

Out of the almost forty HTTP fanatics present at this workshop, five persons are from Google, four from Mozilla (including myself) and Akamai has three employees here. Those are the top-3 companies. There are a few others with 2 representatives but most people here are the only guys from their company. Yes they are all guys. We are all guys. The male dominance at this event is really extreme and we’ve discussed this sad circumstance during breaks and it hasn’t gone unnoticed.

This particular day started out grand with Eric Rescorla (of Mozilla) talking about HTTP Security in his marvelous high-speed style. Lots of talk about how the HTTPS usage is right now on the web, HTTPS trends, TLS 1.3 details and when it is coming, and we got into a lot of talk about HTTP deprecation and what can and cannot be done etc.

Next up was a presentation on HTTP Privacy and Anonymity by Mike Perry (from the Tor project), covering lots of aspects of what the Tor guys consider regarding fingerprinting, correlation, network side-channels and similar things that can be used to attempt to track users or usage over the Tor network. We got into details about what recent protocols like HTTP/2 and QUIC “leak” or open up for fingerprinting and what (if anything) can or could be done to mitigate the effects.

Evolving HTTP Header Fields by Julian Reschke (of Green Bytes) then followed, discussing all the variations of header syntax that we have in HTTP and how it really is not possible to write a generic parser that can handle them, with a suggestion on how to unify this and introduce a common format for future new headers. Julian’s suggestion to use JSON for this ignited a discussion about header formats in general and what should or could be done for HTTP/3 and if keeping support for the old formats is necessary or not going forward. No real consensus was reached.

Willy Tarreau (from HAProxy) then took us into the world of HTTP infrastructure scaling and load balancing, and showed us on the microsecond level how fast a load balancer can be and how much extra work adding HTTPS can mean, ending with a couple of suggestions of what he thinks could’ve helped his scenario. That then turned into a general discussion and network architecture brainstorm on what can be done, how it could be improved and what TLS and other protocols could possibly do to aid. Cramming every possible gigabit out of load balancers certainly is a challenge.

Talking about cramming bits, Kazuho Oku got to show the final slides when he showed how he’s managed to get his picohttpparser to parse HTTP/1 headers at a speed that is only slightly slower than strlen() – including a raw dump of the x86 assembler the code is turned into by a compiler. What could possibly be a better way to end a day full of protocol geekery?

Google graciously sponsored the team dinner in the evening at a Peruvian place in the town! Yet another fully packed day has ended.

I’ll top off today’s summary with a picture of the gift Mark Nottingham (who’s herding us through these days) was handing out today to make us stay keen and alert (Mark pointed out to me that this was a gift from one of our Japanese friends here):

kitkat

Air MozillaProduct Coordination Meeting

Product Coordination Meeting Duration: 10 minutes This is a weekly status meeting, every Wednesday, that helps coordinate the shipping of our products (across 4 release channels) in order...

Mozilla WebDev CommunityBeer and Tell – July 2015

Once a month, web developers from across the Mozilla Project get together to develop an encryption scheme that is resistant to bad actors yet able to be broken by legitimate government entities. While we toil away, we find time to talk about our side projects and drink, an occurrence we like to call “Beer and Tell”.

There’s a wiki page available with a list of the presenters, as well as links to their presentation materials. There’s also a recording available courtesy of Air Mozilla.

Osmose: Moseamp

Osmose (that’s me!) was up first, and shared Moseamp, an audio player. It’s built using HTML, CSS, and JavaScript, but acts as a native app thanks to the Electron framework. Moseamp can play standard audio formats, and also can load plugins to add support for extra file formats, such as Moseamp-Audio-Overload for playing PSF files and Moseamp-GME for playing NSF and SPC files. The plugins rely on libraries written in C that are compiled via Emscripten.

Peterbe: Activity

Next was Peterbe with Activity, a small webapp that displays the events relevant to a project, such as pull requests, PR comments, bug comments, and more, and displays the events in a nice timeline along with the person related to the action. It currently pulls data from Bugzilla and Github.

The project was born from the need to help track a single individual’s activities related to a project, even if they have different usernames on different services. Activity can help a project maintainer see what contributors are doing and determine if there’s anything they can do to help the contributor.

New One: MXR to DXR

New One was up next with a Firefox add-on called MXR to DXR. The add-on rewrites all links to MXR viewed in Firefox to point to the equivalent page on DXR, the successor to MXR. The add-on also provides a hotkey for switching between MXR and DXR while browsing the sites.

bwalker: Liturgiclock

Last was bwalker who shared liturgiclock, which is a webpage showing a year-long view of the religious texts that Lutherans are supposed to read throughout the year based on the date. The site uses a Node.js library that provides the data on which text belongs to which date, and the visualization itself is powered by SVG and D3.js.


We don’t actually know how to go about designing an encryption scheme, but we’re hoping to run a Kickstarter to pay for the Udacity cryptography course. We’re confident that after being certified as cryptologists we can make real progress towards our dream of swimming in pools filled with government cash.

If you’re interested in attending the next Beer and Tell, sign up for the dev-webdev@lists.mozilla.org mailing list. An email is sent out a week beforehand with connection details. You could even add yourself to the wiki and show off your side-project!

See you next month!

Air MozillaThe Joy of Coding (mconley livehacks on Firefox) - Episode 23

The Joy of Coding (mconley livehacks on Firefox) - Episode 23 Watch mconley livehack on Firefox Desktop bugs!

Sean McArthurWhat’s the password?

Exploring the wilds of the internets, I stumble upon a brand new site that allows me to turn cat images into ASCII art.

No way. Cats? In text form?! Text messages full of kitties!

This is amaze. How do I get started?

Says here, “Just create an account.” Ok.

“What’s your username?” seanmonstar.

“Pick a p͏a̵ss–”arrgghHHHa̵ąz͏z͝ef́w͟qa̛a̸s̕s̡;. WHAT! NO! What did you just call me?1 I will not!. I don’t care how amaze textual kitties might be.


Sorry, I’m fine now. It’s just… you know. It is downright irresponsible at this point to require a user to enter a password to log in to your site. It’s pretty easy to properly hash some passwords, but DON’T DO IT! Instead, you should let a secure identity provider provide the user’s credentials.

Good idea! Er, which do you pick? There’s several, and each has its own peculiarities regarding its API. Sigh.

Persona?

I had hoped Persona would move us away from these dark ages, but it struggled to gain support. The user experience was disruptive.

Most often, users had to create a new Persona account. Oh hey, another password!

Additionally, they would need to go through email verification, which while it helps to make it secure, is another step that may cause a user to bail out. Even if they went through the whole process, web developers needed to properly use navigator.id.watch() to keep state. It was very easy to mess that up.

The popup would confuse users. We’ve been teaching users forever to distrust popups, yet here we were saying this one was okay.

++

At this point, most browsers have user account information already. Chrome has a Google account, Firefox has a Firefox account, Safari has iCloud, and Edge has a Microsoft account. How about we just move websites to asking the User Agent for credentials, instead of the User directly?

This is how I originally assumed BrowserID would work when I first heard about it. A user can sign into a browser, using whichever way that browser supports. The website (and thus web developer) isn’t required to care what account system the user wants to use. They just want to know “who are you” and “how can I be sure?”2 The problem this solves now is passing and storing passwords.

navigator.auth.get()

A website could ask for credentials from the navigator, and the browser can show its own trusted UI asking the user whether, and which, ID to share with the website. The API could return a Promise<JWT>, with the JWT being signed by the browser. There are already standards in place for verifying a signed JWT, so the web developer can be confident that the user owns the data included in the token. An example usage:

navigator.auth.get().then((token) => fetch('/verify', {
    method: 'POST',
    body: JSON.stringify(token)
}))

Using a Promise and using the already-logged-in accounts of the browser, the website won’t need to reload. It therefore doesn’t need to store state, and doesn’t need the wonkiness of Persona’s watch(). This is simply an authentication requesting mechanism, so there’s no confusion about who manages the session: the website does.

Additionally, an HTML element could be used for sites that wish to support NoScript:

<auth method="POST" action="/verify" label="Login" />

This could render like a <div>, and a website could style it to their spleen’s content. Another pain point of Persona solved.

We can do this!

The Identity team at Mozilla is interested in exploring this, and since the scope is small, working towards consensus and a standard is the goal, as opposed to Persona’s hope of adoption before standardization. To a less-passwords web!


  1. “Password” should be a curse word. Hey! Did you see that little password? Too busy texting to its passwording buddies, and almost hit me! 

  2. Federation of an account system should certainly be possible, but out of scope for this article. The point is that any browser maker can explore how to log into the browser, and pass a JWT to a website that includes a way to verify it. Firefox and others would then be free to explore de-centralized accounts and profiles, while web developers can happily log users in without evil passwords. 

Rob WoodRaptor on Gaia CI: Update

Raptor on Gaia-CI Refactored

A lot has happened in the world of Raptor in the last couple of months. The Raptor tool itself has been migrated out of Gaia, and is now available as its own Raptor CLI Tool installed as a global NPM package. The new Raptor tool has several improvements including the addition of Marionette, allowing Raptor tests to use Marionette to drive the device UI, and in turn spawn performance markers.

The automation running Raptor on Gaia-ci has now been updated to use the new Raptor CLI tool. Not only has the tool itself been upgraded, but how the Raptor suite is running on Gaia-ci has also been completely refactored, for the better.

In order to reduce test variance, the coldlaunch test is now run on the base Gaia version and then on the patch Gaia version, all on the same taskcluster worker/machine instance. This has the added benefit of just having a single Raptor app launch test task per Gaia application that the test is supported on.

New Raptor Tasks on Treeherder

The Raptor tasks are now visible by default on Treeherder for Gaia and Gaia-master, and appear in a new Raptor group as seen here:

New Raptor Tasks on Gaia Treeherder

New Raptor Tasks on Gaia Treeherder

All of the applications that the coldlaunch performance test is performed on are listed (currently 12 apps). The Raptor suite is run once by default now (on both base and patch Gaia revisions) and the results are calculated and compared all in the same task, per app. If an app startup time regression is detected, the test is automatically retried up to 3 more times. The app task is marked as an orange failure if all 4 runs show a 15% or greater regression in app startup time.

To view the coldlaunch test results for any app, click on the corresponding app task symbol on Treeherder. Then on the panel that appears towards the bottom, click on “Job details”, and then “Inspect Task”. The Task Inspector opens in a new tab. Scroll down and you will see the console output which includes the summary table for the base and patch Gaia revs, and the overall result (if a regression has been detected or not). If a regression has been detected, the regression percentage is displayed; this represents the increase in application launch time.

Retriggering the Launch Test

If you wish to repeat the app launch test again for an individual app, perhaps to further confirm a launch regression, it can be easily retriggered. To retrigger the coldlaunch test for a single app, simply click the task representing the app and retrigger the task how you normally would on Treeherder (login and click on the retrigger job button). This will start the Raptor coldlaunch test again just for the selected app, on both the gaia base and patch revisions as before. The versions of Gaia and the emulator (gecko) will remain the same as they were in the original suite run, when the test is retriggered on an individually-selected app.

If you wish to repeat the coldlaunch test on ALL of the apps instead of just one, then you can retrigger the entire Raptor suite. Select the Raptor decision task (the green “R“, as seen below) and click the retrigger button:

Raptor Decision Task

Raptor Decision Task

Note: Retriggering the entire Raptor suite will cause the emulator (gecko) version to be updated to the very latest before running the test, so if a new emulator build has been completed since the Raptor suite was originally run, then that new emulator (gecko) build will be used. The Gaia base and patch revisions used in the tests will be the same revisions used on the original suite run.

For more information drop by the #raptor channel on Mozilla IRC, or send me a message via the contact form on the About page of this blog. Happy coding!

Laura de ReynalSnapchat

“If Snapchat was a phone I would definitely buy it. It’s like an iPhone, it’s really fast, you can see everyone’s life and see what is going on in your world.”

16-year-old female teenager, Chicago.


Filed under: Mozilla, Quotes, Research

ArkyAutonomous Mozilla Stumbler with Android

Mozilla Location Service (MLS) is an open source service to determine location based on network infrastructure like WiFi access points and cell towers. The project has released client applications to crowdsource a large dataset of GSM, cell tower, and WiFi data. In this blog post I'll explore an idea to re-purpose an old Android mobile phone as an autonomous MozStumbling device that could be easily deployed in public transport, in taxis, or with a friend who is driving across the country.

Bill of Materials

  1. Android Mobile phone.
  2. Mozstumbler Android App.
  3. Taskbomb Android App.
  4. Mini-USB cable.
  5. GSM SIM (With mobile data).
  6. Car lighter socket power adapter.
  7. Powerbank (optional).

Putting it together

In this setup, I am using a rugged Android phone running Android Gingerbread. It can take a lot of punishment. Leaving a phone in an overheated car is a recipe for disaster.

From the Android settings I enabled installation of non-market applications ('Unknown Sources'). I connected the phone to my computer using the Mini-USB cable, transferred the previously downloaded application (.apk) packages to the phone, and installed the Mozstumbler and Taskbomb applications.

Android homescreen showing Mozstumbler and Taskbomb icons

I configured the Mozstumbler application to start on boot with the Taskbomb app, and also configured Mozstumbler to upload using the mobile data connection. The phone has a GSM SIM card with a data plan. I made sure WiFi, GPS and cellular data were all enabled.

To prevent the phone from downloading software updates and using up all the data, I disabled all software updates. I also disabled all notifications, both audio and LED. Finally I locked the phone by setting a secret code. Now the device is ready for deployment. The phone is plugged into the car's lighter charging socket to keep it powered up. You can also use a power bank in cases where charging options are not available.

I'm planning to use this autonomous Mozstumbler hack soon. Perhaps I should ask Thejesh to use it on his epic trip across India.

Hannes VerschoreKeep on growing

I haven’t had the time to create a blogpost about the last year with numbers and describe the changes that have happened over the months/year. Hopefully soon, but a small teaser:

mysql dump size (in gb)

Dave HuntCustom Firefox Cufflinks

If you’re interested in a pair of custom Firefox logo cufflinks for $25 plus postage then please get in touch. I’ve been in contact with CuffLinks.com, and if I can place an initial order of at least 25 pairs then the mold and tooling fee will be waived. Check out their website for examples of their work. The Firefox cufflinks would be contoured to the outline of the logo, and the logo itself would be screen printed in full colour. If these go well, I’d also love a pair of dino head cufflinks to complement them!

Unless I can organise with CuffLinks.com to ship to multiple recipients and take payments separately, I will accept payments via Paypal and take care of sending the cufflinks out. As an example of postage costs, sending to USA would cost me approximately $12 per parcel. Let me know if you’re interested via a comment on this post, e-mail (find me in the phonebook), or IRC (:davehunt).

Karl DubostVendor Prefixes And Market Reality

Through the Web Compat twitter account, I happened to read a thread about Apple introducing a new vendor prefix. 🎳. The message by Alfonso Martínez L. starts a bit rough:

The mess caused by vendor prefixes on the wild is not enough, so we have new -apple https://www.webkit.org/blog/3709/using-the-system-font-in-web-content/ … @jonathandavis

Going to Apple blog post before reading the rest of the thread gives a bit more background.

Web content is sometimes designed to fit in with the overall aesthetic of the underlying platform which it is being rendered on. One of the ways to achieve this is by using the platform’s system font, which is possible on iOS and OS X by using the “-apple-system” CSS value for the “font-family” CSS property. On iOS 9 and OS X 10.11, doing this allows you to use Apple’s new system font, San Francisco. Using “-apple-system” also correctly interacts with the font-weight CSS property to choose the correct font on Apple’s latest operating systems.

Here I understand the desire to use the system font, but I don't understand the new -apple-system, specifically when the next paragraph says:

On platforms which do not support “-apple-system” the browser will simply fall back to the next item in the font-family fallback list. This provides a great way to make sure all your users get a great experience, regardless of which platform they are using.

I wonder what the font-family cascade is not already doing that makes a new prefix necessary. They explain later on by providing this information:

Going beyond the system font, iOS has dynamic type behavior, which can provide an additional level of fit and finish to your content.

font: -apple-system-body
font: -apple-system-headline
font: -apple-system-subheadline
font: -apple-system-caption1
font: -apple-system-caption2
font: -apple-system-footnote
font: -apple-system-short-body
font: -apple-system-short-headline
font: -apple-system-short-subheadline
font: -apple-system-short-caption1
font: -apple-system-short-footnote
font: -apple-system-tall-body

What I smell here is pushing the semantics of a text into the font face; I believe it will not end well. But that's not what I want to talk about here.

Vendor Prefixes Principle

Vendor prefixes were created to provide a safe place for vendors to experiment with new features. It's a good idea on paper. It can work well, specifically when the technology is not yet really mature and details need to be ironed out. This would be perfectly acceptable if the feature was only available in beta and alpha versions of rendering engines. That would stop de facto the proliferation of these properties on common Web sites. And that would give space for experimenting.

Here the feature is not proposed as an experiment but as a way for Web developers and designers to use a new feature on Apple platforms. It's proposed as a competitive advantage and a marketing tool for enticing developers to the cool new thing. And before I'm targeted for blaming Apple alone, all vendors do that in some fashion.

Let's assume that Apple is of good will. The real issue is not easy to understand, except if you are working daily on Web Compatibility across the world.

Enter the market reality field.

Flexbox And Gradients In China And Japan

tori in Kamakura

With the Web Compat team, we have been working a lot lately on Chinese and Japanese mobile Web site compatibility issues. The current mobile market in China and Japan is a smartphone ecosystem largely dominated by iOS and Android. It means that if you use -webkit- and WebKit vendor prefixes on your site, you are basically on the safe side for most users, but not all users.

What is happening here is interesting. Gradients and flexbox went through syntax changes and the standard syntax is really different from the original -webkit- syntax. These are two features of the Web platform which are very useful and very powerful, specifically flexbox. In a monopolistic market such as China and Japan, the end result was Web developers jumping on the initial version of the feature for creating their Web sites (shiny new and useful features).

Fast forward a couple of years and the economic reality of the Web starts playing its cards. Other vendors have caught up with the features, the standard process took place, and the new world of interoperability is all rosy with common implementations in all rendering engines, except for a couple of minor details.

Web developers should all jump on adjusting their Web sites to add the standard properties at least. This is not happening. Why? Because the benefits are not perceived by Web developers, project managers and site owners. Indeed adjusting the Web site will have a cost in editing and testing. Who bears this cost and for which reasons?

When mentioning it will allow other users with different browsers to use the Web site, the answer is straightforward: "This browser X is not in our targeted list of browsers." or "This browser Y doesn't appear in our stats." We all know that the browser Y can't appear in the stats because it's not usable on the site (A good example of that is MBGA).

mbga rendering on Gecko mobile

Dropping Vendor Prefixes

Adding prefixless versions of properties to the implementations of rendering engines helps, but does not magically fix everything for improving the Web Compatibility story. That's the mistake that Timothy Hatcher (WebKit Developer Experience Manager at Apple) is making in:

@AlfonsoML We also unprefixed 4 dozen properties this year. https://developer.apple.com/library/mac/releasenotes/General/WhatsNewInSafari/Articles/Safari_9.html#//apple_ref/doc/uid/TP40014305-CH9-SW28

This is cool and I applaud Apple for this. I wish it happened a bit earlier. Why doesn't it solve the Web Compatibility issue? Because the prefixed versions of properties still exist and are supported. Altogether, we then sing the tune "Yeah, Apple (and Google), let's drop the prefixed version of these properties!" Ooooh, hear me, I so wish it would be possible. But Apple and Google can't do that for the exact same reason that other non-WebKit browsers can't exist in the Chinese and Japanese markets. They would instantly break a big number of high-profile Web sites.

We have reached the point where browser vendors have to start implementing or aliasing these WebKit prefixes just to allow their users to browse the Web, see Mozilla in Gecko and Microsoft in Edge. The same thing is happening all over again. In the past, browser vendors had to implement the quirks of IE to be compatible with the Web. As much as I hate it, we will have to specify the current -webkit- prefixes to implement them uniformly.

Web Compatibility Responsibility

Microsoft is involved in the Web Compatibility project. I would like Apple and Google to be fully involved and committed in this project. The mess we are all involved in is due to WebKit prefixes, and the leading position these vendors have in the mobile market can really help. This mess killed Opera Presto on mobile, which had to switch to Blink.

Let's all create a better story for the Web and understand fully the consequences of our decisions. It's not only about technology, but also economic dynamics and market realities.

Otsukare!

Jesse RudermanReleasing jsfunfuzz and DOMFuzz

Today I'm releasing two fuzzers: jsfunfuzz, which tests JavaScript engines, and DOMFuzz, which tests layout and DOM APIs.

Over the last 11 years, these fuzzers have found 6450 Firefox bugs, including 790 bugs that were rated as security-critical.

I had to keep these fuzzers private for a long time because of the frequency with which they found security holes in Firefox. But three things have changed that have tipped the balance toward openness.

First, each area of Firefox has been through many fuzz-fix cycles. So now I'm mostly finding regressions in the Nightly channel, and the severe ones are fixed well before they reach most Firefox users. Second, modern Firefox is much less fragile, thanks to architectural changes to areas that once oozed with fuzz bugs. Third, other security researchers have noticed my success and demonstrated that they can write similarly powerful fuzzers.

My fuzzers are no longer unique in their ability to find security bugs, but they are unusual in their ability to churn out reliable, reduced testcases. Each fuzzer alternates between randomly building a JS string and then evaling it. This construction makes it possible to make a reproduction file from the same generated strings. Furthermore, most DOMFuzz modules are designed so their functions will have the same effect even if other parts of the testcase are removed. As a result, a simple testcase reduction tool can reduce most testcases from 3000 lines to 3-10 lines, and I can usually finish reducing testcases in less than 15 minutes.

The ease of getting reduced testcases lets me afford to report less severe bugs. Occasionally, one of these turns out to be a security bug in disguise. But most importantly, these bug reports help me establish positive relationships with Firefox developers, by frequently saving them time.

A JavaScript engine developer can easily spend a day trying to figure out why a web site doesn't work in Firefox. If instead I can give them a simple testcase that shows an incorrect result with a new JS optimization enabled, they can quickly find the source of the bug and fix it. Similarly, they much prefer reliable assertion testcases over bug reports saying "sometimes, Google Maps crashes after a while".

As a result, instead of being hostile to fuzzing, Firefox developers actively help me fuzz their code. They've added numerous assertions to their code, allowing fuzzers to notice as soon as the smallest thing goes wrong. They've fixed most of the bugs that impede fuzzing progress. And several have suggested new ways to test their code, even (especially) ways that scare them.

Developers working on the JavaScript engine have been especially helpful. First, they ensured I could test their code directly, apart from the rest of the browser. They already had a JavaScript shell for running regression tests, and they added a --fuzzing-safe option to disable the more dangerous testing functions.

The JS team also created a large set of testing functions to let me control things that would normally be based on heuristics. Fuzzers can now choose when garbage collection happens and even how much. They can make expensive JITs kick in after 2 loop iterations rather than 100. Fuzzers can even simulate out-of-memory conditions. All of these things make it possible to create small, reliable testcases for nasty classes of bugs.

Finally, the JS team has supported differential testing, a form of fuzzing where output is checked for correctness against some oracle. In this case, the oracle is the same JavaScript engine with most of its optimizations disabled. By fixing inconsistencies quickly and supporting --enable-more-deterministic, they've ensured that differential testing doesn't get stuck finding the same problems repeatedly.

Andreas Gal, a developer working on Firefox's JavaScript engine, once commented on Bugzilla: 'From this day forward, I shall never write a JIT again without Jesse.'

Please join us on IRC, or just dive in and contribute! Your suggestions and patches can have a large impact: fuzzer modules often act together to find complex interactions within the browser. For example, bug 893333 was found by my designMode module interacting with a <table> module contributed by a Firefox developer, Mats Palmgren. Likewise, bug 1158427 was found by Christoph Diehl's WebAudio module combined with my reflection-based API-discovery modules.

If your contributions result in me finding a security bug, and I think I wouldn't have found it otherwise, I'll make sure you get a bug bounty as if you had reported it yourself.

To the next 6450 browser bug fixes!

Christian HeilmannGot something to say? Write a post!

Tweet button

Here’s the thing: Twitter sucks for arguments:

  • It is almost impossible to follow conversation threads
  • People favouriting quite aggressive tweets leaves you puzzled as to their reasons
  • People retweeting parts of the conversation out of context leads to wrong messages and questionable quotes
  • 140 characters are great to throw out truisms but not to make a point.
  • People consistently copying you in on their arguments floods your notifications tab even when you no longer want to weigh in

This morning was a great example: Peter Paul Koch wrote yet another incendiary post asking for a one year hiatus of browser innovation. I tweeted about the post saying it has some good points. Paul Kinlan of the Chrome team disagreed strongly with the post. I opted to agree with some of it, as a lot of features we created and thought we discarded tend to linger longer on the web than we want to.

A few of those back-and-forth conversations later and Alex Russell dropped the mic:

@Paul_Kinlan: good news is that @ppk has articulated clearly how attractive failure can be. @codepo8 seems to agree. Now we can call it out.

Now, I am annoyed about that. It is accusing, calling me reactive, and calls criticism of innovation a failure. It also very aggressively hints that Alex will now always quote that to show that PPK was wrong and keeps us from evolving. Maybe. Probably. Who knows, as it is only 140 characters. But I am keeping my mouth shut, as there is no point to this aggressive back and forth. It results in a lot of rushed arguments that can and will be quoted out of context. It results in assumed sub-context that can break good relationships. It – in essence – is not helpful.

If you truly disagree with something – make your point. Write a post, based on research and analysis. Don’t throw out a blanket approval or disapproval of the work of other people to spark a “conversation” that isn’t one.

Well-written thoughts lead to better quotes and deeper understanding. It takes more effort to read a whole post than to quote a tweet and add your sass.

In many cases, whilst writing the post you realise that you really don’t agree or disagree as much as you thought you did with the author. This leads to much less drama and more information.

And boy do we need more of that and less drama. We are blessed with jobs where people allow us to talk publicly, research and innovate and to question the current state. We should celebrate that and not use it for pithy bickering and trench fights.

Photo Credit: acidpix

Daniel StenbergHTTP Workshop, second day

All 37 of us gathered again on the 3rd floor in the Factory hotel here in Münster. Day two of the HTTP Workshop.

Jana Iyengar (from Google) kicked off this morning with his presentations on HTTP and the Transport Layer and QUIC. Very interesting area if you ask me – if you’re interested in this, you really should check out the video recording from the bar BoF they did on this topic in the recent Prague IETF. It is clear that a team with dedication, a clear use-case, a fearless approach to not necessarily maintaining “layers” and a handy control of widely used servers and clients can do funky experiments with new transport protocols.

I think there was general agreement with Jana’s statement that “Engagement with the transport community is critical” for us to really be able to bring better web protocols now and in the future. Jana’s excellent presentations were interrupted a countless number of times with questions, elaborations, concerns and sub-topics from attendees.

Gaetano Carlucci followed up with a presentation of their QUIC evaluations, showing how it performs under various situations like packet loss etc in comparison to HTTP/2. Lots of transport related discussions followed.

We rounded off the afternoon with a walk through the city (the rain stopped just minutes before we took off) to the town center where we tried some of the local beers while arguing their individual qualities. We then took off in separate directions and had dinner in smaller groups across the city.

snackstation

Honza BambasString parsing made simple with mozilla::Tokenizer

 

PL_strstr

 

I can see FindChar, Substring, ToInteger and even atoi, strchr, strstr and sscanf craziness all over the Mozilla code base. There are, though, much better and, more importantly, safer ways to parse even a very simple input.

I wrote a parser class with an API derived from lexical analyzers that helps with parsing simple inputs in a very easy way. Just include mozilla/Tokenizer.h and use the class mozilla::Tokenizer. It implements a subset of the features of a lexical analyzer. It also nicely hides boundary checks of the input buffer from the consumer.

To describe the principle briefly: Tokenizer recognizes tokens like whole words, integers, white spaces and special characters. The consumer never works directly with the string or its characters, but only with pre-parsed parts (identified tokens) returned by this class.

 

There are two main methods of Tokenizer:

  • bool Next(Token& result);

If there is anything to read from the input at the current internal read position, including the EOF, returns true and result is filled with a token type and an appropriate value easily accessible via a simple variant-like API.  The internal read cursor is shifted to the start of the next token in the input before this method returns.

  • bool Check(const Token& tokenToTest);

If a token at the current internal read position is equal (by the type and the value) to what has been passed in the tokenToTest argument, true is returned and the internal read cursor is shifted to the next token.  Otherwise (token is different than expected) false is returned and the read cursor is left unaffected.

A few usage examples:

Construction

  #include "mozilla/Tokenizer.h"

  mozilla::Tokenizer p(NS_LITERAL_CSTRING("Sample string 2015."));

Reading a single token, examining it

  mozilla::Tokenizer::Token t;
  bool read = p.Next(t);
  // read == true, we have read something and t has been filled
  // Following our example string...
  if (t.Type() == mozilla::Tokenizer::TOKEN_WORD) {
    t.AsString(); // returns "Sample"
  }

Checking on a token value and automatically skipping on a positive test

  if (!p.CheckChar('\x20')) {
    throw "I expect a space here!";
  }

  read = p.Next(t);
  // read == true
  t.Type() == mozilla::Tokenizer::TOKEN_WORD;
  t.AsString() == "string";

  if (!p.CheckWhite()) {
    throw "A white space is expected here!";
  }

Reading numbers

  read = p.Next(t);
  // read == true
  t.Type() == mozilla::Tokenizer::TOKEN_INTEGER;
  t.AsInteger() == 2015;

Reaching the end of the input

  read = p.Next(t);
  // read == true
  t.Type() == mozilla::Tokenizer::TOKEN_CHAR;
  t.AsChar() == '.';

  read = p.Next(t);
  // read == true
  t.Type() == mozilla::Tokenizer::TOKEN_EOF;

  read = p.Next(t);
  // read == false, we are behind the EOF
  // t is here undefined!

More features

To learn about the more enhanced features of the Tokenizer – there are not that many, don’t be scared ;) – look at the well documented Tokenizer.h file under xpcom/ds.

As a teaser you can go through this more enhanced example or check on a gtest for Tokenizer:

#include "mozilla/Tokenizer.h"

using namespace mozilla;

{
  // A simple list of key:value pairs delimited by commas
  nsCString input("message:parse me,result:100");

  // Initialize the parser with an input string
  Tokenizer p(input);
  // A helper var keeping type and value of the token just read
  Tokenizer::Token t;

  // Loop over all tokens in the input
  while (p.Next(t)) {
    if (t.Type() == Tokenizer::TOKEN_WORD) {
      // A 'key' name found
      if (!p.CheckChar(':')) {
        // Must be followed by a colon
        return; // unexpected character
      }

      // Note that here the input read position is just after the colon
      // Now switch by the key string
      if (t.AsString() == "message") {
        // Start grabbing the value
        p.Record();
        // Loop until EOF or comma
        while (p.Next(t) && !t.Equals(Tokenizer::Token::Char(',')))
          ;
        // Claim the result
        nsAutoCString value;
        p.Claim(value);
        MOZ_ASSERT(value == "parse me");

        // We must revert the comma so that the code below recognizes the flow correctly
        p.Rollback();
      } else if (t.AsString() == "result") {
        if (!p.Next(t) || t.Type() != Tokenizer::TOKEN_INTEGER) {
          return; // expected a value and that value must be a number
        }

        // Get the value, here you know it's a valid number
        uint32_t number = t.AsInteger();
        MOZ_ASSERT(number == 100);
      } else {
        // Here t.AsString() is any key but 'message' or 'result', ready to be handled
      }

      // On comma we loop again
      if (p.CheckChar(',')) {
        // Note that now the read position is after the comma
        continue;
      }
      // No comma?  Then only EOF is allowed
      if (p.CheckEOF()) {
        // Cleanly parsed the string
        break;
      }
    }

    return; // The input is not properly formatted
  }
}

 

It currently works only with ASCII inputs but can easily be enhanced to also support UTF-8/16 encodings or even specific code pages if needed.

The post String parsing made simple with mozilla::Tokenizer appeared first on mayhemer's blog.

Aaron KlotzInteresting Win32 APIs

Yesterday I decided to diff the export tables of some core Win32 DLLs to see what’s changed between Windows 8.1 and the Windows 10 technical preview. There weren’t many changes, but the ones that were there are quite exciting IMHO. While researching these new APIs, I also stumbled across some others that were added during the Windows 8 timeframe that we should be considering as well.

Volatile Ranges

While my diff showed these APIs as new exports for Windows 10, the MSDN docs claim that these APIs are actually new for the Windows 8.1 Update. Using the OfferVirtualMemory and ReclaimVirtualMemory functions, we can now specify ranges of virtual memory that are safe to discard under memory pressure. Later on, should we request that access be restored to that memory, the kernel will either return that virtual memory to us unmodified, or advise us that the associated pages have been discarded.
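
Here is a minimal sketch of how the offer/reclaim dance could look. This is not code from our tree; the buffer and its size are made up for illustration:

  #include <windows.h>

  void OfferReclaimSketch() {
    const SIZE_T size = 1 << 20;  // 1 MiB of committed, page-aligned memory
    void* buf = VirtualAlloc(nullptr, size, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
    if (!buf) {
      return;
    }

    // ... fill buf with data that we could regenerate if we had to ...

    // Tell the kernel this range may be discarded under memory pressure.
    // VmOfferPriorityVeryLow marks it as the first candidate for discarding.
    OfferVirtualMemory(buf, size, VmOfferPriorityVeryLow);

    // Later, when we want the contents back:
    DWORD rv = ReclaimVirtualMemory(buf, size);
    if (rv == ERROR_SUCCESS) {
      // The pages survived; keep using buf as-is.
    } else if (rv == ERROR_BUSY) {
      // The pages were discarded; buf is usable again but must be regenerated.
    }

    VirtualFree(buf, 0, MEM_RELEASE);
  }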

A couple of years ago we had an intern on the Perf Team who was working on bringing this capability to Linux. I am pleasantly surprised that this is now offered on Windows.

madvise(MADV_WILLNEED) for Win32

For the longest time we have been hacking around the absence of a madvise-like API on Win32. On Linux we will do a madvise(MADV_WILLNEED) on memory-mapped files when we want the kernel to read ahead. On Win32, we were opening the backing file and then doing a series of sequential reads through the file to force the kernel to cache the file data. As of Windows 8, we can now call PrefetchVirtualMemory for a similar effect.
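
Here is a rough sketch of the new approach; the mapped base address and size are assumed to come from an existing MapViewOfFile call:

  #include <windows.h>

  void PrefetchMappedFile(void* mappedBase, SIZE_T mappedSize) {
    WIN32_MEMORY_RANGE_ENTRY range = { mappedBase, mappedSize };

    // Asynchronously queues the pages for read-ahead; the flags argument must be 0.
    if (!PrefetchVirtualMemory(GetCurrentProcess(), 1, &range, 0)) {
      // Not available before Windows 8; fall back to the sequential-read trick.
    }
  }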

Operation Recorder: An API for SuperFetch

The OperationStart and OperationEnd APIs are intended to record access patterns during a file I/O operation. SuperFetch will then create prefetch files for the operation, enabling prefetch capabilities above and beyond the use case of initial process startup.

Memory Pressure Notifications

This API is not actually new, but I couldn’t find any invocations of it in the Mozilla codebase. CreateMemoryResourceNotification allocates a kernel handle that becomes signalled when physical memory is running low. Gecko already has facilities for handling memory pressure events on other platforms, so we should probably add this to the Win32 port.
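
For illustration, a rough sketch of how the Win32 port could poll the notification object; the actual memory-pressure dispatch is left as a comment:

  #include <windows.h>

  void PollMemoryPressure() {
    // The handle becomes signalled while available physical memory is low.
    HANDLE lowMem = CreateMemoryResourceNotification(LowMemoryResourceNotification);
    if (!lowMem) {
      return;
    }

    BOOL isLow = FALSE;
    if (QueryMemoryResourceNotification(lowMem, &isLow) && isLow) {
      // Fire the same memory-pressure event Gecko uses on other platforms.
    }

    // A background thread could instead block on WaitForSingleObject(lowMem, INFINITE).
    CloseHandle(lowMem);
  }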

Mozilla IT & OperationsCVS & BZR services decommissioned on mozilla.org

tl;dr: CVS & BZR (aka Bazaar) version control systems have been decommissioned at Mozilla. See https://ftp.mozilla.org/pub/mozilla.org/vcs-archive/README for final archives.

As part of our ongoing efforts to ensure that the services operated by Mozilla continue to meet the current needs of Mozilla, the following VCS systems have been decommissioned: CVS and BZR (Bazaar).

This work took coordinated effort between IT, Developer Services, and the remaining active users of those systems. Thanks to all of them for their contributions!

Final archives of the public repositories are currently available at https://ftp.mozilla.org/pub/mozilla.org/vcs-archive/. The README file has instructions for retrieval of non-public repositories.

NOTE: These URLs are subject to change, please refer back to this blog post for the up to date link.

For any questions or concerns, please contact the Developer Services team.

Mozilla Open Policy & Advocacy BlogExperts develop cybersecurity recommendations

Today, we’re excited to publish the output of our “Cybersecurity Delphi 1.0” research process, tapping into a panel of 32 cybersecurity experts from diverse and mutually reinforcing backgrounds.

Mozilla Cybersecurity Delphi 1.0

Securing our communications and our data is hard. Every month seems to bring new stories of mistakes and attacks resulting in our personal information being made available – bit by bit harming trust online, and making ordinary Internet users feel fear. Yet, cybersecurity public policy often seems stuck in yesterday’s solution space, focused exclusively on well known terrain, around issues such as information sharing, encryption, and critical infrastructure protection. These “elephants” of cybersecurity policy are significant issues – but too much focus on them eclipses other solutions that would allow us to secure the Internet for the future.

So, working with Camille François & DHM Research we’ve spent the past year engaging the panel of cybersecurity experts through a tailored research process to try to extract public policy ideas and see what consensus can be found around them. We weren’t aiming for full consensus (an impossible task within the security community!). Our goal was to foment ideation and exchange, to develop a user-focused and holistic cybersecurity policy agenda.

Mozilla Cybersecurity Delphi Process

Our experts collectively generated 36 distinct policy suggestions for government action in cybersecurity. We then asked them to identify and rank their top choices of policy options by both feasibility and desirability. The result validated the importance of the “cyberelephants.” Privacy-respecting information sharing policies, effective critical infrastructure protection, and widespread availability and understanding of secure encryption programs are all important goals to pursue: they ranked high on desirability, but were generally viewed as hard to achieve.

More important are the ideas that emerged that aren’t on the radar screens of policymakers today. First and foremost was a proposal that stood out above the others as both highly desirable and highly feasible: increased funding to maintain the security of free and open source software. Although not high on many security policy agendas, the issue deserves attention. After all, 2014’s major security incidents around Poodle, Heartbleed, and Shellshock all centered on vulnerabilities in open source software. Moreover, open source software libraries are built into countless noncommercial and commercial products.

Many other good proposals and priorities surfaced through the process, including: developing and deploying alternative authentication mechanisms other than passwords; improving the integrity of public key infrastructure; and making secure communications tools easier to use. Another unexpected policy priority area highlighted by all segments of our expert panel as highly feasible and desirable was norm development, including norms concerning governments’ and corporations’ behavior in cyberspace, guided by human rights and communicated with maximum clarity in national and international contexts.

This report is not meant to be a comprehensive analysis of all cybersecurity public policy issues. Rather, it’s meant as a first, significant step towards a broader, collaborative policy conversation around the real security problems facing Internet users today.

At Mozilla, we will build on the ideas that emerged from this process, and hope to work with policymakers and others to develop a holistic, effective, user-centric cybersecurity public policy agenda going forward.

This research was made possible by a generous grant from the John D. and Catherine T. MacArthur Foundation.

Mozilla Cybersecurity Delphi 1.0

Chris Riley
Jochai Ben-Avie
Camille François

Emily DunhamGood times


People sometimes say “morning” or “evening” on IRC for a time zone unlike my own. Here’s a bash one-liner that emits the correct time-of-day generalization based on the datetime settings of the machine you run it on.

case $(($(date +%H)/6)) in 0|1)m="morning";;2)m="afternoon";;3)m="night";;esac; echo good $m


Byron Joneshappy bmo push day!

the following changes have been pushed to bugzilla.mozilla.org:

  • [1185823] add additional [audit] syslog entries
  • [1187184] Minor updates to gear form
  • [1184828] api searches should honour the same fields in its “order” parameter as the web UI
  • [1186803] remove %product_sec_groups from bmo/lib/data.pm
  • [1186776] allow users to set keywords on bug creation (via API/internally only)
  • [1181453] Amend https://bugzilla.mozilla.org/form.fxos.feature form
  • [1186788] disabling an account should always disable bugmail
  • [1171806] add the ability for a user to disable/”remove” their own account
  • [1187498] Disable SiteMapIndex extension if running on under development site instead of production

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla

Nicholas NethercoteA work-around for Tree Style Tab breakage on Firefox Nightly caused by mozRequestAnimationFrame removal

This post is aimed at Firefox Nightly users who also use the Tree Style Tab extension. Bug 909154 landed last week. It removed support for the prefixed mozRequestAnimationFrame function, and broke Tree Style Tab. The GitHub repository that hosts Tree Style Tab’s code has been updated, but that has not yet made it into the latest Tree Style Tab build, which has version number 0.15.2015061300a003855.

Fortunately, it’s fairly easy to modify your installed version of Tree Style Tab to fix this problem. (“Fairly easy”, at least, for the technically-minded users who run Firefox Nightly.)

  • Find the Tree Style Tabs .xpi file. On my Linux machine, it’s at ~/.mozilla/firefox/ndbcibpq.default-1416274259667/extensions/treestyletab@piro.sakura.ne.jp.xpi. Your profile name will not be exactly the same. (In general, you can find your profile with these instructions.)
  • That file is a zip file. Edit the modules/lib/animationManager.js file within that file, and change the two occurrences of mozRequestAnimationFrame to requestAnimationFrame. Save the change.

I did the editing in vim, which was easy because vim has the ability to edit zip files in place. If your editor does not support that, it might work if you unzip the code, edit the file directly, and then rezip, but I haven’t tried that myself. Good luck.

Chris CooperThe changing face of buildduty, Summer 2015 edition

Previously

Buildduty is the Mozilla release engineering (releng) equivalent of front-line support. It’s made up of a multitude of small tasks, none of which on their own are particularly complex or demanding, but which taken in aggregate can amount to a lot of work.

It’s also non-deterministic. One of the most important buildduty tasks is acting as information brokers during tree closures and outages, making sure sheriffs, developers, and IT staff have the information they need. When outages happen, they supersede all other work. You may have planned to get through the backlog of buildduty tasks today, but congratulations, now you’re dealing with a network outage instead.

Releng has struggled to find a sustainable model for staffing buildduty. The struggle has been two-fold: finding engineers to do the work, and finding a duration for a buildduty rotation that doesn’t keep the engineer out of their regular workflow for too long.

I’m a firm believer that engineers *need* to be exposed to the consequences of the software they write and the systems they design.

I also believe that it’s a valuable skill to be able to design a system and document it sufficiently so that it can be handed off to someone else to maintain.

Starting this week, we’re trying something new. We’re shifting at least part of the burden to management: I am now managing a pair of contractors who will be responsible for buildduty for the rest of 2015.

Alin and Vlad are our new contractors, and are both based in Romania. Their offset from Mozilla Standard Time (aka PST) will allow them to tackle the asynchronous activities of buildduty, namely slave loans, non-urgent developer requests, and maintaining the health of the machine pools.

It will take them a few weeks to find their feet since they are unfamiliar with any of the systems. You can find them on IRC in the usual places (#releng and #buildduty). Their IRC nicks are aselagea and vladC. Hopefully they will both be comfortable enough to append |buildduty to those nicks soon. :)

While Alin and Vlad get up to speed, buildduty continues as usual in #releng. If you have an issue that needs buildduty assistance, please ask in #releng, and someone from releng will assist you as quickly as possible. For less urgent requests, please file a bug.

Daniel StenbergThe HTTP Workshop started

So we started today. I won’t get into any live details or quotes from the day since it has all been informal and we’ve all agreed to not expose snippets from here without checking properly first. There will be a detailed report put together from this event afterwards.

The most critical piece of information is however how we must not walk on the red parts of the sidewalks here in Münster, as that’s the bicycle lane and they can be ruthless there.

We’ve had a bunch of presentations today with associated Q&A and follow-up discussions. Roy Fielding (HTTP spec pioneer) started out the series with a look at HTTP full of historic details and views from the past and where we are and what we’ve gone through over the years. Patrick McManus (of Firefox HTTP networking) took us through some of the quirks of what a modern day browser has to do to speak HTTP and topped it off with a quiz regarding Firefox metrics. Did you know 31% of all Firefox HTTP requests get fulfilled by the cache or that 73% of all Firefox HTTP/2 connections are used more than once but only 7% of the HTTP/1 ones?

Poul-Henning Kamp (author of Varnish) brought his view on HTTP/2 from an intermediary’s point of view with a slightly pessimistic view, not totally unlike what he’s published before. Stefan Eissing (from Green Bytes) entertained us by talking about his work on writing mod_h2 for Apache Httpd (and how it might be included in the coming 2.4.x release) and we got to discuss a bit around timing measurements and its difficulties.

We rounded off the afternoon with a priority and dependency tree discussion topped off with a walk-through of numbers and slides from Kazuho Oku (author of H2O) on how dependency-trees really help and from Moto Ishizawa (from Yahoo! Japan) explaining Firefox’s (Patrick’s really) implementation of dependencies for HTTP/2.

We spent the evening having a 5-course (!) meal at a nice Italian restaurant while trading war stories about HTTP, networking and the web. Now it is close to midnight and it is time to reload and get ready for another busy day tomorrow.

I’ll round off with a picture of where most of the important conversations were had today:

kafeestation

Mozilla IT & OperationsProduct Delivery Migration: What is changing, when it’s changing and the impacts

As promised, the FTP Migration team is following up from the 7/20 Monday Project Meeting where Sean Rich talked about a project that is underway to make our Product Delivery System better.

As a part of this project, we are migrating content out of our data centers to AWS. In addition to storage locations changing, namespaces will change and the FTP protocol for this system will be deprecated. If, after reading this post, you have any further questions, please email the team.

Action: The ftp protocol on ftp.mozilla.org is being turned off.
Timing: Wednesday, 5th August 2015.
Impacts:

    After 8/5/15, ftp protocol support for ftp.mozilla.org will be completely disabled and downloads can only be accessed through http/https.
    Users will no longer be able to just enter “ftp.mozilla.org” into their browser, because this action defaults to the ftp protocol. Going forward, users should start using archive.mozilla.org. The old name will still work but needs to be entered in your browser as https://ftp.mozilla.org/

Action: The contents of ftp.mozilla.org are being migrated from the NetApp in SCL3 to AWS/S3 managed by Cloud Services.
Timing: Migrating ftp.mozilla.org contents will start in late August and conclude by end of October. Impacted teams will be notified of their migration date.
Impacts:

    Those teams that currently manually upload to these locations have been contacted and will be provided with S3 API keys. They will be notified prior to their migration date and given a chance to validate their upload functionality post-migration.
    All existing download links will continue to work as they do now with no impact.

Mark SurmanMozilla Learning Strategy Slides

Developing a long term Mozilla Learning strategy has been my big focus over the last three months. Working closely with people across our community, we’ve come up with a clear, simple goal for our work: universal web literacy. We’ve also defined ‘leadership’ and ‘advocacy’ as our two top level strategies for pursuing this goal. The use of ‘partnerships and networks’ will also be key to our efforts. These are the core elements that will make up the Mozilla Learning strategy.

Over the last month, I’ve summarized our thinking on Mozilla Learning for the Mozilla Board and a number of other internal audiences. This video is based on these presentations:

As you’ll see in the slides, our goal for Mozilla Learning is an ambitious one: make sure everyone knows how to read, write and participate on the web. In this case, everyone = the five billion people who will be online by 2025.

Our top level thinking on how to do this includes:

1. Develop leaders who teach and advocate for web literacy.

Concretely, we will integrate our Clubs, Hive and Fellows initiatives into a single, world class learning and leadership program.

2. Shift thinking: everyone understands the web / internet.

Concretely, this means we will invest more in advocacy, thought leadership and user education. We may also design ways to encourage web literacy more aggressively in our products.

3. Build a global web literacy network.

Mozilla can’t create universal web literacy on its own. All of our leadership and advocacy work will involve ‘open source’ partners with whom we’ll create a global network committed to universal web literacy.

Process-wise: we arrived at this high level strategy by looking at our existing programs and assets. We’ve been working on web literacy, leadership development and open internet advocacy for about five years now. So, we already have a lot in play. What’s needed right now is a way to focus all of our efforts in a way that will increase their impact — and that will build a real snowball of people, organizations and governments working on the web literacy agenda.

The next phase of Mozilla Learning strategy development will dig deeper on ‘how’ we will do this. I’ll provide a quick intro post on that next step in the coming days.


Filed under: mozilla

Armen ZambranoEnabling automated back-filling on mozilla-inbound for few hours

tl;dr: we're going to enable automated back-filling tomorrow (Tuesday) for
a few hours on mozilla-inbound.

We were aiming for Monday but pushed it to Tuesday to help publicize this more.

If on Wednesday there is no fallout, we will leave it running on m-i for a week before enabling it in other places.

Posted on various mailing lists including mozilla.dev.tree-management.

> Hello all,
>
> We are planning to turn on a service that automatically backfills
> failed test jobs on m-i. If there are no concerns, we would like to
> turn this on experimentally for a couple of hours on [Tuesday]. We
> hope this will make it easier to identify which revision broke a
> test. Suggestions are welcome.
>
> The backfilling works like this: - It triggers the job that failed
> one extra time - Then it looks for a successful run of the job on the
> previous 5 revisions. If a good run is found, it triggers the job
> only on revisions up to this good run. If not, it triggers the job on
> every one of the previous 5 revisions. Previous jobs will be
> triggered one time.
>
> The tracking bug is:
> https://bugzilla.mozilla.org/show_bug.cgi?id=1180732
>
> Best, Alice

--
Zambrano Gasparnian, Armen
Automation & Tools Engineer
http://armenzg.blogspot.ca



Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Air MozillaMozilla Weekly Project Meeting

Mozilla Weekly Project Meeting The Monday Project Meeting

Mozilla Reps CommunityThank you Emma, Arturo and Raj

Being part of the Reps Council is a great experience, it is at the same time an honour and a challenge and it comes with a lot of responsibility, hard work, but also lessons and growth.

We want to thank and recognize our most recent former Reps Council members for serving their one year term, bringing all their passion, knowledge and dedication to the program and making it more powerful.

Emma Irwin

emma

Emma was a great inspiration not only for Reps, but especially for mentors and the Council. Her passion for education and empowering volunteers allowed her to push the program to be much more centered around education and growth, marking a new era for Reps.

She was not only advocating for it, but also rolling up her sleeves and running different trainings, creating curriculum and working towards improving the mentorship culture. She was also extremely helpful in helping us navigate some conflicts and getting Reps to grow and put aside differences.

Arturo Martinez

arturo

Arturo’s unchallenged energy and drive were great additions to the Council. Especially during his term as Council Chair he set the standard for effectively driving initiatives, helping everyone to achieve their goals and pushing us to be excellent. Thank you for the productivity boost!

Gauthamraj Elango

raj

Raj’s passion for Web literacy and for empowering everyone to take part in the open web helped him lead the Reps program to work more closely with the Webmaker team, showcasing an example of how Reps can bring much more value to initiatives. He drove efforts to innovate our initiatives both in terms of local organization and funding.

These are just a few examples of their exemplary work as Council members, which not only helped Reps all around the world to step up and have more impact, but also inspired new Mozillians and Reps everywhere on how to lead change for the open Web.

Once again, thank you so much for your time, effort and your passion, you left an outstanding mark in the program.

You can share your gratitude with them in this topic on Discourse.

QMOFirefox 40 Beta 7 Testday Results

Hello Mozillians!

As you may already know, last Friday – July 24th – we held a new Testday event, for Firefox 40 Beta 7.

We’d like to take this opportunity to thank everyone for getting involved in the proposed testing activities and in general, for helping us make Firefox better.

Many thanks go out to the Bangladesh QA Community, for testing Firefox Hello context, WebGL, the Adobe Flash plugin and also verifying lots of bug fixes: Hossain Al Ikram, Nazir Ahmed Sabbir, Rezaul Huque Nayeem, MD.Owes Quruny Shubho, Mohammad Maruf Islam, Md.Rahimul Islam, Kazi Nuzhat Tasnem, Md. Ehsanul Hassan, Saheda.Reza Antora, Fahmida Noor, Meraj Kazi, Md. Jahid Hasan Fahim, Israt, Towkir Ahmed and Eyakub.

Special thanks go out to participants of Campus Party Mexico who attended the Firefox 40 Beta 7 Testday and helped with the testing of Firefox Hello context, WebGL and the Adobe Flash plugin: Mauricio Navarro Miranda, LASR21 Sánchez, diegoehg, nataly Gurrola, Jorge Luis Flores Barrales, EZ274, Armando Gomez, Karla Danitza Duran Memijes and Eduardo Arturo Enciso Hernández.

Also a big thank you goes to all our moderators.

Keep an eye on QMO for upcoming events! :)

This Week In RustThis Week in Rust 89

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us an email! Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

This week's edition was edited by: Vikrant Chaudhary, Brian Anderson.

From the Blogosphere

New Releases & Project Updates

  • Cupid. Native Rust access to the x86 and x86_64 CPUID instruction.
  • nue. I/O and binary data encoding for Rust.
  • oxcable. A signal processing framework for making music with Rust.
  • rsmpi. Message Passing Interface (MPI) bindings for Rust.
  • rust_box2d. Rust bindings for Box2D physics engine.
  • avr-emulator. Atmel 8-bit AVR Emulator in React and Rust.
  • Piston 0.5 released.
  • font-atlas. A set of crates for creating and using 'font atlases'.
  • Hound 1.0.0. A crate for reading and writing wav audio.
  • Rusty_Dodge. A simple polar dodging game using glium.

Friend of the Tree

The Rust Team likes to occasionally recognize people who have made outstanding contributions to The Rust Project, its ecosystem, and its community. These people are 'friends of the tree'.

This week's friend of the tree was @tshepang.

Over the last year Tshepang has landed over 100 improvements to our documentation. Tshepang saw where documentation was not, and said "No. This will not do."

We should all endeavor to care about docs as much as Tshepang.

Subteam reports

Every week The Rust Teams release a report on what is going on in their corner of the project. Here are the highlights from this week's report.

  • The compiler is being refactored to work on an HIR and an MIR.
  • Work is proceeding on stabilizing the core library.
  • Basic allocators will soon be available.
  • MSVC integration is proceeding rapidly.

What's cooking on nightly?

134 pull requests were merged in the last week.

New Contributors

  • Andy Caldwell
  • Antti Keränen
  • eternaleye
  • Jason Schein
  • Jonathan Hansford
  • Kornel Lesiński
  • Leif Arne Storset
  • midinastasurazz
  • mitaa
  • Ticki

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Internals discussions

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

fn work(on: RustProject) -> Money

There are some jobs writing Rust! This week's listings:

  • Assistant Researcher in Karlsruhe, Germany for embedded development on ARM stm32. Contact Oliver Schneider

Quote of the Week

"Opening a vortex to Hell is actually safe, but de-referencing anything you pull from the vortex isn't safe." – Steve Klabnik

Thanks to Gankro for the tip. Submit your quotes for next week!

Chris ManchesterIntroducing mach try

This is a short introduction to mach try, a mach command that simplifies the process of selecting tests and pushing them to the try server.

To follow along at home you’ll either need to be using git cinnabar or have a modern mercurial and the hg “push-to-try” extension (available from |mach mercurial-setup|). Append --no-push to commands to keep them from pushing to the try server, and -n to see verbose messages associated with the results of commands.

# mach try is a command that takes try syntax and automates the steps
# to push the current tree to try with that syntax.
# For instance:

$ ./mach try -p win32 -u mochitest-bc

# ... will result in pushing "try: -b do -p win32 -u mochitest-bc -t none"
# to try. This saves dealing with mq or other ways of generating the try
# message commit. (An in-memory commit is generated with the appropriate
# message -- mq is not invoked at any stage).

# The more novel feature exposed by mach try is the ability to select
# specific test directories containing xpcshell, mochitests or reftests
# to run on the try server.

# For instance, if I've just made a change to one of the python libraries
# used by our test harnesses, and I'd like to quickly check that this
# change works on Windows, I can run:

$ ./mach try -p win64 testing/xpcshell testing/mochitest/tests

# This will result in the small number of xpcshell and mochitest tests
# that live next to their harnesses being run (in a single chunk) on
# try, so I can get my results without waiting for the entire suite,
# and I don't need to sift through logs to figure out which chunk a
# test lives in when I only care about running certain tests.

For more details run ./mach help try. As the command will inform you, this feature is under development; bugs should be filed blocking bug 1149670.

Daniel StenbergHTTPS and HTTP/2 plans for my sites

I produce a fair amount of open source code. I make that code available online. curl is probably the most popular package.

People ask me how they can trust that they are actually downloading what I put up there. People ask me when my source code can be retrieved over HTTPS. Signatures and hashes don’t add a lot against attacks when they all also are fetched over HTTP…

HTTPS

I really and truly want to offer HTTPS (only) for all my sites. My friends and I run a whole busload of sites on the same physical machine and IP address (www.haxx.se, daniel.haxx.se, curl.haxx.se, c-ares.haxx.se, cool.haxx.se, libssh2.org and many more), so I would like a solution that works for all of them.

I can do this by buying certs, either a lot of individual ones or a few wildcard ones and then all servers would be covered. But the cost and the inconvenience of needing a lot of different things to make everything work has put me off. Especially since I’ve learned that there is a better solution in the works!

Let’s Encrypt will not only solve the problem for us from a cost perspective, but they also promise to solve some of the quirks on the technical side as well. They say they will ship certificates by September 2015 and that has made me wait for that option rather than rolling up my sleeves to solve the problem with my own sweat and money. Of course there’s a risk that they are delayed, but I’m not running against a hard deadline myself here.

HTTP/2

Related, I’ve been deeply involved in HTTP/2 development and I host my “http2 explained” document on my still non-HTTPS site. I get a lot of questions (and some mocking) about why my HTTP/2 documentation isn’t itself available over HTTP/2. I would really like to offer it over HTTP/2.

Since all the browsers only do HTTP/2 over HTTPS, a prerequisite here is that I get HTTPS up and running first. See above.

Once HTTPS is in place, I want to get HTTP/2 going as well. I still run good old Apache here so it might be done using mod_h2 or perhaps with a fronting nghttp2 proxy. We’ll see.

Daniel StenbergHTTP Workshop 2015, day -1

I’ve traveled to a rainy and gray Münster, Germany, today and checked in to my hotel for the coming week and the HTTP Workshop. Tomorrow is the first day and I’m looking forward to it probably a little too much.

There is a whole bunch of attendees coming. Simply put, most of the world’s best brains and the most eager implementers of the HTTP stacks that are in use today and will be in use tomorrow (with a bunch of notable absentees of course but you know you’ll be missed). I’m happy and thrilled to be able to take part during this coming week.

Julien VehentUsing Mozilla Investigator (MIG) to detect unknown hosts

MIG is a distributed forensics framework we built at Mozilla to keep an eye on our infrastructure. MIG can run investigations on thousands of servers very quickly, and focuses on providing low-level access to remote systems, without giving the investigator access to raw data.

As I was recently presenting MIG at the DFIR Summit in Austin, someone in the audience asked if it could be used to detect unknown or rogue systems inside a network. The best way to perform that kind of detection is to watch the network, particularly for outbound connections that rogue hosts or malware would establish to a C&C server. But MIG can also help with detection by inspecting the ARP tables of remote systems and cross-referencing the results with the local MAC addresses of known systems. Any MAC address not configured on a known system is potentially a rogue host.

First, we want to retrieve all the MAC addresses from the ARP tables of known systems. The netstat module can perform this task by looking for neighbor MACs that match regex "^[0-9a-f]", which will match anything hexadecimal.

$ mig netstat -nm "^[0-9a-f]" > /tmp/seenmacs

We store the results in /tmp/seenmacs and pull a list of unique MACs using some bash.

$ awk '{print tolower($5)}' /tmp/seenmacs | sort | uniq
00:08:00:85:0b:c2
00:0a:9c:50:b4:36
00:0a:9c:50:bc:61
00:0c:29:41:90:fb
00:0c:29:a7:41:f7
00:10:db:ff:10:00
00:10:db:ff:30:00
00:10:db:ff:f0:00
00:21:53:12:42:c1

We now want to check that every single one of the seen MAC addresses is configured on a known agent. Again, the netstat module can be used for this task, this time by querying local MAC addresses with the -lm flag.
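
For example, to check a single address from the list above against known systems (just an illustration of the -lm flag; the full run below batches many addresses into each command):

$ mig netstat -lm 00:0c:29:41:90:fb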

Now the list of MACs may be quite long, so instead of running one MIG query per MAC, we group them 50 by 50 using the following script:

#!/usr/bin/env bash
# Usage: makemigmac.sh <seenmacs file> <output file>
input=$1
output=$2
total=$(awk '{print tolower($5)}' "$input" | sort | uniq | wc -l)
i=50
while true
do
    # Build one "mig netstat" command with up to 50 -lm arguments.
    echo -n "mig netstat " >> "$output"
    for mac in $(awk '{print tolower($5)}' "$input" | sort | uniq | head -$i | tail -50)
    do
        echo -n "-lm $mac " >> "$output"
    done
    echo >> "$output"
    # Stop once every MAC address has been covered.
    if [ $i -ge $total ]
    then
        exit 0
    fi
    i=$((i+50))
done

The script builds MIG netstat commands with at most 50 MAC addresses each. Invoke it with /tmp/seenmacs as the first argument and an output file as the second.

$ bash /tmp/makemigmac.sh /tmp/seenmacs /tmp/migsearchmacs

/tmp/migsearchmacs now contains a number of MIG netstat commands that will search for the seen MAC addresses across the configured interfaces of known hosts. Run the commands and pipe the output to a results file.

$ while read -r migcmd; do $migcmd >> /tmp/migfoundmacs; done < /tmp/migsearchmacs

We now have a file with seen MAC addresses, and another one with MAC addresses configured on known systems. Doing the delta of the two is fairly easy in bash:

$ for seenmac in $(awk '{print tolower($5)}' /tmp/seenmacs|sort|uniq); do
hasseen=""; hasseen=$(grep $seenmac /tmp/migfoundmacs)
if [ "$hasseen" == "" ]; then
echo "$seenmac is not accounted for"
fi
done
00:21:59:96:75:7f is not accounted for
00:21:59:98:d5:bf is not accounted for
00:21:59:9c:c0:bf is not accounted for
00:21:59:9e:3c:3f is not accounted for
00:22:64:0e:72:71 is not accounted for
00:23:47:ca:f7:40 is not accounted for
00:25:61:d2:1b:c0 is not accounted for
00:25:b4:1c:c8:1d is not accounted for

Automating the detection

It's probably a good idea to run this procedure on a regular basis. The script below will automate the steps and produce a report you can easily email to your favorite security team.

#!/usr/bin/env bash
SEENMACS=$(mktemp)
SEARCHMACS=$(mktemp)
FOUNDMACS=$(mktemp)
echo "seen mac addresses are in $SEENMACS"
echo "search commands are in $SEARCHMACS"
echo "found mac addresses are in $FOUNDMACS"

echo "step 1: obtain all seen MAC addresses"
$(which mig) netstat -nm "^[0-9a-f]" 2>/dev/null | grep 'found neighbor mac' | awk '{print tolower($5)}' | sort | uniq > $SEENMACS

MACCOUNT=$(wc -l $SEENMACS | awk '{print $1}')
echo "$MACCOUNT MAC addresses found"

echo "step 2: build MIG commands to search for seen MAC addresses"
i=50
while true;
do
    echo -n "$i.."
    echo -n "$(which mig) netstat -e 50s " >> $SEARCHMACS
    for mac in $(cat $SEENMACS | head -$i | tail -50)
    do
        echo -n "-lm $mac " >> $SEARCHMACS
    done
    echo -n " >> $FOUNDMACS" >> $SEARCHMACS
    if [ $i -gt $MACCOUNT ]
    then
        break
    fi
    echo " 2>/dev/null &" >> $SEARCHMACS
    i=$((i+50))
done
echo
echo "step 3: search for MAC addresses configured on local interfaces"
bash $SEARCHMACS

sleep 60

echo "step 4: list unknown MAC addresses"
for seenmac in $(cat $SEENMACS)
do
    hasseen=$(grep "found local mac $seenmac" $FOUNDMACS)
    if [ "$hasseen" == "" ]; then
        echo "$seenmac is not accounted for"
    fi
done

The list of unknown MACs can then be used to investigate the endpoints. They could be switches, routers or other network devices that don't run the MIG agent. Or they could be rogue endpoints that you should keep an eye on.

Happy hunting!

Panos AstithasLessons from Startup Weekend

I had an exhausting but fun weekend at the Athens Startup Weekend a few days ago. Along with Christos I joined Yannis, Panagiotis Christakos and Babis Makrinikolas on the Newspeek project. When Yannis pitched the idea on Friday night, the main concept was to create a mobile phone application that would provide a better way to view news on the go. I don't believe it was very clear in his mind then, what would constitute a "better" experience, but after some chatting about it we all defined a few key aspects, which we refined later with lots of useful feedback and help from George. Surprisingly, for me at least, in only two days we managed to design, build and present a working prototype in front of the judges and the other teams. And even though the demo wasn't exactly on par with our accomplishments, I'm still amazed at what can be created in such a short time frame.
Newspeek, our product, had a server-side component that periodically collected news items from various news feeds, stored them and provided them to clients through a simple REST API. It also had an iPhone client that fetched the news items and presented them to the user in a way that respected the UI requirements and established UX norms for that device.

So, in the interest of informing future participants about what works and what doesn't work in Startup Weekend, here are the lessons I learned:

  1. If you plan to win, work on the business aspect, not on the technology. Personally, I didn't go to ASW with plans to create a startup, so I didn't care that much about winning. I mostly considered the event as a hackathon, and tried my best to end up with a working prototype. Other teams focused more on the business side of things, which is understandable, given the prize. Investors fund teams that have a good chance to return a profit, not the ones with cool technology and (mostly) working demos. Still, the small number of actual working prototypes was a disappointment for me. Even though the developers were the majority in the event, you obviously can't have too many of them in a Startup Weekend.
  2. For quick server-side prototyping and hosting, Google App Engine is your friend. Since everyone in the team had Java experience, we could have gone with a JavaEE solution and set up a dedicated server to host the site. But, since I've always wanted to try App Engine for Java and the service architecture mapped nicely to it, we tried a short experiment to see if it could fly. We built a stub service in just a few minutes, so we decided it was well worth it. Building our RESTful service was really fast, scalability was never a concern and the deployment solution was a godsend, since the hosting service provided for free by the event sponsors was evidently overloaded. We're definitely going to use it again for other projects.
  3. jQTouch rocks! Since our main deliverable would be an iPhone application, and there were only two of us who had ever built an iPhone application (of the Hello World variety), we knew we had a problem. Fortunately, I had followed the jQTouch development from a reasonable distance and had witnessed the good things people had to say, so I pitched the idea of a web application to the team and it was well received. iPhone applications built with web technologies and jQTouch can be almost indistinguishable from native ones. We all had some experience in building web applications, so the prospect of having a working prototype in only two days seemed within the realm of possibility again. The future option of packaging the application with PhoneGap and selling it in the App Store was also a bonus point for our modest business plan.
  4. For ad-hoc collaboration, Mercurial wins. Without time to set up central repositories, a DVCS was the obvious choice, and Mercurial has both bundles and a standalone server that make collaborative coding a breeze. If we had zeroconf/bonjour set up in all of our laptops, we would have used the zeroconf extension for dead easy machine lookup, but even without it things worked flawlessly.
  5. You can write code with a netbook. Since I haven't owned a laptop for the last three years, my only portable computer is an Asus EEE PC 901 running Linux. Its original purpose was to allow me to browse the web from the comfort of my couch. Lately however, I'm finding myself using it to write software more than anything else. During the Startup Weekend it had constantly open Eclipse (for server-side code), Firefox (for JavaScript debugging), Chrome (for webkit rendering), gedit (for client-side code) and a terminal, without breaking a sweat.
  6. When demoing an iPhone application, whatever you do, don't sweat. Half-way through our presentation, tapping the buttons didn't work reliably all the time, so anxiety ensued. Since we couldn't make a proper presentation due to a missing cable, we opted for a live demo, wherein Yannis held the mic and made the presentation, and I posed as the bimbo that holds the product and clicks around. After a while we ended up both touching the screen, trying to make the bloody buttons click, which ensured the opposite effect. In retrospect, using a cloth occasionally would have made for a smoother demo, plus we could have slipped a joke in there, to keep the spirit up.
All in all it was an awesome experience, where we learned how far we can stretch ourselves, made new friends and caught up with old ones. Next year, I'll make sure I have a napkin, too.

Emma Irwin#MozLove for April

This month’s #mozlove post is for April Morone.

I wrote this post with inspiration from the first version of ‘Participation Personas’. Personas (V1) is a list of contributor profiles I use to design participation opportunities. For each persona I also suggest a series of ‘lenses’ which, I believe, can help us design with, and for, greater diversity and dimension.

A lens can be anything from gender identity and age, to what I called a ‘toxic rating’, which changes the flexibility and value of collaborating with someone.   Another lens is what I have (so far) called ‘accessibility’, which encourages thinking about physical challenges of contribution.  This could be anything from asking ourselves if resources are ‘screen reader friendly’, to building in a respect for periods of time people may ‘disappear’ to take care of their wellness.  

In that light I would like to highlight the contributions, enthusiasm and dedication of April Morone. April describes herself as a ‘disabled contributor’ living with partial blindness, hearing loss and neuro-muscular problems. April is also an advocate for helping other people living with disabilities contribute to the Mozilla project. April was kind enough to take time to answer my questions, the first of which was “What got you started contributing?”

“What got me contributing was this insatiable need to help and insatiable need to learn more in the IT field, as well as to DO more in the IT field. I’ve always been helping others, from my cousins, helping teach them at the age of twelve on up, to teaching and helping others.”

You will find April embedded in the project helping others, especially focused on new contributors setting up local environments for bug fixes. When I asked her what sustains her participation, she felt equally motivated by people who ‘want to learn’ as by her own interest in teaching and helping.

When listing the challenges to contribution, April identified the continual challenges posed by health issues, which include the emotional effects of surviving domestic abuse. On the more predictable scale, April also listed issues with technology failures and limited time as worthy opponents. What I think is very inspiring about both April and the community around her is how she describes her continued involvement and the people making a difference for her:

Abishek Gupta, Gautam Sharma, David Walsh, Luke Crouch, Janet Swisher, Hagen Halbach, and Daniel Desira have kept me going. They have been contributors and now also friends who have supported me through difficult times when I might have otherwise have given up contributing. I had thought of dropping out of contributing and even just giving up. But they stood by me, listened, and gave support, which help.  What also kept me going is my love of helping others, my love of Mozilla, and my love of IT and web development.

I think this is really, really special in that the community is as much a place to find ‘your people’, as it is a cause to contribute to.   I know April is among a small group of volunteers at Mozilla with ambitions of creating a more supportive network for contributors living with disability through directed documentation and on-boarding –  which I think is just amazing.  I am grateful to be a part of a community that includes April and many of the people she listed who help her be successful.

 


Next month I hope to write a couple of these posts – we’ll see.

“Felt Heart” Image credit: Lauren Jong

 

Matthew NoorenbergheFirefox Password Manager Update: 2015-Q1

Logins are a part of our daily lives on the web, and one of the active Firefox projects this year is improving Firefox's password manager, which has the simple high-level goal of helping users log in. Here's a summary of the progress made in the first quarter of 2015 (in no particular order):
  • Telemetry – Probes (with the prefix PWMGR_) were added to gain a better understanding of how users were using the feature and to allow measuring the impact of improvements.
  • Ignoring @autocomplete=off in login forms – An autocomplete attribute with a value of "off" no longer disables auto-filling of login fields. This puts users back in control of their login experience and aligns with the direction of other browser vendors. Last year we started ignoring @autocomplete=off when deciding whether to ask the user to save/update their login so this is an evolution of that change. Note that @autocomplete=off is still effective outside login forms e.g. to implement custom search box autocompletion.
  • Capture doorhanger fields – The remember/update password doorhanger notification is now easier to visually scan with fields for the captured username and password. The username field is also editable which helps in cases where the detected username is incorrect or missing.
    Screenshot of the password capture doorhanger on Windows 7 in Firefox 39
  • Viewing and managing logins on Android – A password management interface (about:logins) was added to Firefox for Android and is accessible via the menu under Tools > Logins.
    Firefox for Android version 42 about:logins list view; Firefox for Android version 42 about:logins context menu
  • Per-site recipes – A new mechanism was added to allow per-site overrides to the password manager capture and fill heuristics since it's not feasible/scalable to have general ones which work for every website. Initial recipe support allows overriding the username and password field detection via CSS selectors. There are only a handful of recipes currently in use as there hasn't been much focus/communication on gathering these yet but you're encouraged to file bugs about sites that don't work with the password manager (and if you're ambitious you can even submit patches to the JSON file).
  • Android Capture Doorhanger Polish – Capture doorhanger visuals were polished in preparation for later improvements.
  • General bug fixes – As usual, there were bug fixes for functionality that simply didn't work as expected. For example, bug 1121040 addressed forms being submitted via the Enter key during username autocompletion, before the password had time to be filled in.
Expect to see many more changes in upcoming months as we continue to make major improvements to the password manager. If you'd like to contribute to this project, check out the password manager wiki page for mailing list, IRC, bug list and other information.

Tantek ÇelikDark Forest Run

Yesterday morning I ran through a forest in pitch darkness for the first time. I had a headlamp, a general sense of direction (uphill), and the knowledge that friends were just out of sight up ahead.

When I left my house it was dark as night in the city, which really means never darker than the dim glow from diffuse streetlamps and other light polluters. I ran nearly a kilometer before meeting my fellow #nopasoparungang members at the intersection of Frederick & Stanyan streets.

From there we ran half a mile up Stanyan’s steepest segments (240 feet elevation) to Belgrave Ave and the eastern edge of the Mt Sutro Open Space Preserve. Undaunted by poison oak warning signs, we leapt onto the narrow dirt forest trail. In mere seconds we disappeared into the dense woods, the city glow faded, and our headlamps barely lit the trail ahead. Anything beyond 10 meters was nothing but gray shapes blending into darkness.

Our gang of four split into lead and tail pairs, and we soon lost sight of the lead headlamps. We didn’t bother navigating by mobile, even the dimmest of backlighting would have been blinding. Whenever the trail split, we chose the uphill path.

Not only was the forest darkness pierced only by our headlamps, it was silent except for the sounds we made, breathing, pounding the trail, rustling leaves, snapping twigs.

The lead pair rejoined us from behind, having taken a wrong turn and doubled back. We emerged from the south side of the forest onto the street and found the few other @Nov_Project_SF early gang arrivals who took our photo.

Early rungang photo

Now seven strong, we hiked up Johnstone drive just a bit and ran uphill onto the East Ridge Trail, again leaving civilization behind in just moments. We ran all the way up to the Mt. Sutro summit, to a clearing formerly used for Nike Missile Control Site SF-89C.

Looking back through the dark forest I could see dawn’s light in the East.

Dark forest backlit by dawn.

From Frederick & Stanyan we had only run a mile, and yet the second half of it was through pitch black woods, with 400 more feet of incline for a total of 640 feet of elevation gain.

Tapering for this weekend's race, once I reached the summit I did reps of planking, tricep dips, pushups, all while swatting perhaps nearly 100 mosquitos. Everyone else ran up & down the ridge trail and others nearby. A few more runners found us during the 30 minute hills workout.

Afterwards we ran back down to the meeting point on the street, and hugged the 6:25am arrivals. Then we did it all again, this time in the sunrise lit trails below.

NPSF late gang group photo in Sutro Forest.

This is November Project San Francisco #hillsforbreakfast. We run through poison-ivy laden mosquito-infested forests from darkness through dawn and into the sunrise.

Why are we shushing with our fingers? We heard from a concerned hiker that "the sound travels really far" out of the forest (which is odd, because the sound from the city doesn’t seem to make it into the forest). For more, see: NPSF: Do you know what it feels like to be 90 years old?

Cameron KaiserUpdating you on 38 just-in-time

Did you see what I did there? For the past two weeks my free time apart from work and the Master's degree has been spent sitting in a debugger trying to fix JavaScript, which is just murder on my dating life. Here is the current showstopper bug-roll for 38.1.1b1:

  • The Faceblech bug with the new IonPower JavaScript JIT compiler is squashed, I think, after repairing some conformance test failures which in turn appear to have repaired Forceblah. In my defence, the two bugs in question were incredibly weird edge cases and these tests are not part of the usual JIT test suite, so I guess we'll have to run them as well in future. This also repairs an issue with Instagrump which is probably the same underlying issue since Faceboink owns them also.

    The silver lining after all that was that I was considering disabling inlining in the JIT prior to release, which worked around the "badness," but also cut the engine speed in about half. (Still faster than JaegerMonkey!) To make this a bit less of a hit, I tuned the thresholds for starting the twin JITs and got about 10% improvement without inlining. With inlining back on, it's still faster by about 4% and change -- the G5 now achieves a score of nearly 5800 on V8, up from 5560. I also tweaked our foreground finalization patch for generational GC so that we should be able to get the best of both worlds. Overall you should see even better performance out of this next beta.

  • I have a presumptive fix for the webfont "ATSUI puke" on the New York Times, but it's not implemented or well-tested yet. This is a crash on 10.5, so I consider it a showstopper and it will be fixed before the next beta. (It affects 31.8 also but I will not be making another 31 release unless there is a Mozilla ESR chemspill.)

  • The modified strip7 tool required for building 38.x has a serious bug in it that causes it to crash trying to strip certain symbols. I have fixed this bug and builders will need to install this new version (remember: do not replace your normal strip with this one; it is intentionally loose with the Mach-O specification). I will be uploading it sometime this week along with an updated gdb7 that has better debugger performance and repairs a bug with too eagerly disabling register display while single-stepping Ion code.

These bugs are not considered showstoppers, but I do acknowledge them and I plan to fix them either for the final release or the next version of 38:

  • I can confirm saved passwords do not appear in the preferences panel. They do work, though, and can be saved, so this is more of an issue with managing them; while it's possible to do so manually it requires some inconvenient screwing around with your profile, so I consider this the highest priority of the non-showstopper bugs.

  • Checkboxes on the dropdown menus from the Console tabs do not appear. This specific manifestation is purely cosmetic because they work normally otherwise, but this may be an indication there is a similar issue with dropdowns and context menus elsewhere, so I do want to fix this as well.

Other miscellaneous changes include some adjustments to HTML5 media streaming and I have decided to reduce the default window and tab undos back to 31's level (6 and 2 respectively) so that the browser still gives up tenured memory a bit more easily. Unfortunately, there is not enough time to get MP3 support fully functional for final release. I plan to get this completed in a future version of 38.x, but it will not be officially supported until then (you can still toggle tenfourfox.mp3.enabled to use the minimp3 driver for those sites it does work with as long as you remember that seeking within a track doesn't work yet).

The localizer elves have French, German, Spanish, Italian, Russian and Finnish installers available. Our Japanese localization appears to have dropped off the web, so if you can help us, o-negai shimasu! Swedish just needs a couple of strings to be finished. We do not yet have Polish or Asturian, which we used to, so if you can help on any of these languages, please visit issue 42 where Chris is coordinating these efforts. A big thank you to all of our localizers!

Once the localizations are all in, the Google Code project will be frozen to prepare for the wiki and issue tracker moving to Github ahead of Google Code going read-only on 24 August. Downloads will remain on SourceForge, but everything else will go to Github, including the source tree when we eventually drop source parity. I was hoping to have an Elcapitanspoof up in time for 38's final release, but we'll see if I have time to do the graphics.

Watch for the next beta to come out by next weekend with any luck, which gives us enough time if there needs to be a third emergency release prior to the final (weekend prior to 11 August).

Finally, I am pleased to note we are now no longer the only PowerPC JavaScript JIT out there, though we are the only one I know of for Mozilla SpiderMonkey. IBM has been working on a port of Google V8 to PowerPC for some time, both AIX and Linux, which recently became an official part of the Google V8 repository (i.e., the PPC port is now officially supported). If you've been looking at nabbing a POWER8 with that money burning a hole in your pocket, it even works with the new Power ISA little endian mode, of which we dare not speak. Since uppsala, Floodgap's main server, is a POWER6 running AIX and should be able to run this, I might give it a spin sometime when I have a few spare cycles. However, before some of the freaks amongst you get excited and think this means Google Chrome on OS X/ppc is just around the corner, there's still an awful lot more work required to get it operational than just the JavaScript engine, and it won't be me that works on it. It does mean, however, that things like node.js will now work on a Power-based server with substantially less fiddling around, and that might be very helpful for those of you who run Power boxes like me.

Marcia KnousLibriFox emerges - try it now on your Firefox OS Device!

An update to my June 15th post about the group of students working on their own Firefox OS Summer of Code: as a result of their hard work, there is now a new app in the Firefox OS Marketplace - LibriFox! LibriFox is a native Firefox OS app that brings LibriVox.org audiobooks to your device. Alex Hirschberg did a great job taking this from concept to app, and I encourage all of you to download some audiobooks and try it out! While you are at it, please take a moment to review the app.

Hannah KaneQuick update: engagement on the MLN Site

Pledge to Teach

In my last post, I mentioned that we had recently launched the Pledge to Teach the Web. Since we launched it three weeks ago, 240 people have taken the pledge.

Of those who’ve taken the pledge, about a quarter have also completed a survey that we sent as a follow-up. The survey is helping us gain a better understanding of our audience, their contexts for teaching, and their needs. We’ll share an analysis of the survey results next month.

Site Traffic

Since we launched teach.mozilla.org back in April, we haven’t been particularly focused on driving traffic to the site. That changed recently, as we began our Maker Party promotion efforts in earnest. We started promoting Maker Party on both beta.webmaker.org and on mozilla.org. Those two referrals, along with our email campaign, led to our most highly trafficked week on the site since launch, during the lead-up to Maker Party. Our highest day was July 13th, when we had over 11K sessions. Since the initial bump, traffic has dropped back down again to between 1200 and 2500 sessions per day.

Unsurprisingly, the Maker Party page is the most popular content, after the homepage. The Activities page is the next most popular.


What we’re doing next with regard to user engagement

  • Adding analytics tracking to several things so we can better measure conversion rates.
  • Experimenting on pledge flow to increase conversion rate. One possibility is to make the pledge the only CTA on the homepage.
  • Determining our post-Maker Party strategy for people who take the pledge. We’re discussing ideas here.
  • Experimenting with the “Community” link to increase Discourse activity. This is a larger-than-the-site effort, though; we can be promoting Discourse across all of our work.

Stuart ColvilleBack to the future: ES6 + React

I've just recently finished shaving about a billion yaks * to convert a React app over to use ES6 modules and classes so we can start living in the future that is ES6 with a sprinkling of ES7.

* Might not be true

Transpiling back to the present

We're using babel via webpack to transpile our ES6+ code into ES5. Babel exposes the various stages of ECMAScript proposals so you can choose whatever stages are appropriate for your project. For example Stage 0 will give you the most features, but that will include things like strawman proposals that may never actually see the light of day.

For our project we've tried to stick to Stage 2, which is Babel's default (Stage 2 covers draft specs), and then add to that a couple of more modern features which should hopefully make it to prime time; a rough sketch of the webpack side of this is shown below.
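
As a minimal sketch of what the relevant part of a webpack configuration might look like, assuming babel-loader with Babel 5-style options (the exact option names depend on your Babel and loader versions, so treat this as illustrative rather than copy-paste ready):

// webpack.config.js (illustrative snippet, not our exact config)
module.exports = {
  module: {
    loaders: [
      {
        test: /\.jsx?$/,          // run .js and .jsx files through babel
        exclude: /node_modules/,
        loader: 'babel-loader',
        query: {
          stage: 2,                            // Babel's default stage
          optional: ['es7.classProperties'],   // extra feature enabled on top of Stage 2
        },
      },
    ],
  },
};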

The two features we've specifically added are:

ES6 Modules

ES6 modules are very powerful, though they take a bit of getting used to if you're coming from CommonJS or AMD modules.

One of the really great things is that you can export more than one thing by using a default export and exporting something else.
In this example the instance will be the default export that you'll get when importing this module:

E.g.

export class Tracking {  
    // Whatever
}

export default new Tracking();  

So now I have two options:

// Get an instance
import tracking from 'tracking';
tracking.init();

// Get the class (handy for tests)
import { Tracking } from 'tracking';
var tracking = new Tracking();
tracking.init();

There's lots of great info on the full range of what's possible with ES6 modules here.

React components with ES6 classes

When converting from ES5 there are quite a number of things to be aware of.

Setting default state

You can easily do this in the constructor rather than using a separate getInitialState method.

import React, { Component } from 'react';

class MyComponent  extends Component {  
    constructor() {
        super();
        this.state = {
            // Default state here.
        }
    }

    //... Everything else  
}

Setting defaultProps and propTypes

When using ES6 classes, defaultProps and propTypes are defined as static class properties.

If you have es7.classProperties enabled you can do this like so:

import React, { Component, PropTypes } from 'react';

class MyComponent  extends Component {

    static propTypes = {
        foo: PropTypes.func.isRequired,
        bar: PropTypes.object,
    }

    static defaultProps = {
        foo: () => {
               console.log('hai');
        },
        bar: {},
    }

    //... Everything else
}

The alternative to this (when you don't have es7 shiny enabled) is to just set the properties directly on the class, e.g.:

class MyComponent  extends Component {  
    //... Everything else
}
MyComponent.propTypes = {foo: PropTypes.func.isRequired};  
MyComponent.defaultProps = {foo: function() { console.log('hai'); }};  

I found this very ugly though, so static class props seems to be a good way to go for now.

isMounted() isn't available

If you'd used isMounted(), it isn't available on ES6 classes extended from Component. If you really need it you can set a flag in componentDidMount and unset it again in componentWillUnmount, as sketched below.
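
Here's a minimal sketch of that workaround (the flag and handler names are just illustrative):

import React, { Component } from 'react';

class MyComponent extends Component {
    componentDidMount() {
        // Track mounted state ourselves, since isMounted() is gone.
        this._isMounted = true;
    }

    componentWillUnmount() {
        this._isMounted = false;
    }

    handleAsyncResult(data) {
        // Only touch state if the component is still mounted.
        if (this._isMounted) {
            this.setState({ data: data });
        }
    }
}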

Method binding

When passing methods around as props you'll find with ES6 classes this doesn't refer to the component like it used to with React.createClass.

To get back the same behaviour without using something ugly like:

<Foo onClick={this.handleClick.bind(this)} />  

You can use an arrow function instead:

class MyComponent  extends Component {  
    handleClick = (e) => {
        // this is now the component instance.
        console.log(this);
    }
}

This works because arrow functions capture the this value of the enclosing context. In other words, an arrow function gets the this value from where it was defined, rather than from where it's used.
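
For a concrete, framework-free illustration of that capture behaviour, consider this hypothetical timer object:

var timer = {
    seconds: 0,

    startBroken: function() {
        setInterval(function() {
            // `this` here is not `timer` (it's the global object, or
            // undefined in strict mode), so the counter never updates.
            this.seconds += 1;
        }, 1000);
    },

    start: function() {
        setInterval(() => {
            // The arrow function captures `this` from start(), i.e. `timer`.
            this.seconds += 1;
        }, 1000);
    },
};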

If you want to get really fancy you can use "function bind syntax".

Which you get by doing this:

<Foo onClick={::this.handleClick} />  

Which is equivalent to the previous bind() example. But to do that you'd need to enable es7.functionBind. For me this was a step too far and I'm happy to stick with the arrow functions.

Testing with ES6/7

Our previous tests had made some good use of rewire.js to modify requires and vars that weren't exported from the module (It does this with some cheeky injection techniques).

Unfortunately, due to some changes in Babel, rewire isn't compatible (at the moment) with ES6 modules. There's an alternative, babel-plugin-rewire, but that injects getters and setters into every module.

Dependency Injection in React Classes

Instead of using module mangling it was easiest to fall back to dependency injection for the places where we were swapping Components for fake ones.

In React, props look like a good fit for dependency injection:

import React, { Component } from 'react';
import DefaultOtherComponent from 'components/other';

class MyComponent extends Component {

    static defaultProps = {
        OtherComponent: DefaultOtherComponent,
    }

    //... Snip

    render() {
        var OtherComponent = this.props.OtherComponent;
        return (
          <OtherComponent {...this.props} />
        );
    }
}

Now in the test we can inject a fake component:

TestUtils.renderIntoDocument(  
  <MyComponent OtherComponent={FakeOtherComponent} />
);

getDOMNode() is deprecated

In ES6 React classes getDOMNode() is not present so you'll need to refactor all calls to MyComponent.getDOMNode() to React.findDOMNode(MyComponent) instead. The docs also mention it's deprecated so it might be removed completely in the future.
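
A small before/after sketch of that refactor (the component and logging are purely illustrative):

import React, { Component } from 'react';

class MyComponent extends Component {
    componentDidMount() {
        // Before: var node = this.getDOMNode();
        // After:
        var node = React.findDOMNode(this);
        console.log(node.offsetHeight);
    }

    render() {
        return <div>Hello</div>;
    }
}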

Summary

There's quite a bit of work in making the conversion, depending on how big your code-base is and how much ES6 you're already using.

A lot of the new features in ES6 are making things much easier for developers so I'd definitely say it's a direction worth moving in assuming you're comfortable with all the aspects of transpilation. Webpack + babel makes this pretty straightforward. If you're already using babel for JSX (or some other similar loader for JSX) then you're already most of the way there.

FWIW: if you're not using babel, switching to it is now an official recommendation.

I'm very impressed with the current ecosystem around JS, e.g. tools like babel and webpack alongside forward-thinking libs like React + Redux. All of these things sit well with each other and allow projects like ours to be able to step into the future today.

Now we've just got to get used to all the new syntax and start using more of it!

Further reading

Credits

  • Image By Terabass (Own work) CC BY-SA 4.0, via Wikimedia Commons

Chris CooperRelEng & RelOps Weekly highlights - July 24, 2015

Welcome back. When we last left our heroes, they were battling the combined forces of technical debt and a lack of self-service options. We join the fight already in progress…

Kapow!

Modernize infrastructure: To pave the way for creating continuous integration (CI) automation for Windows 10, Q is auditing all of our Windows 8 GPOs to determine which will work as-is on Windows 10, which will no longer be needed, and which will require rewriting to work on the new platform (https://bugzil.la/1185844).

Dustin has completed the taskcluster scope handling audit, reported his findings back to the team, and filed bugs for remediation.

Rail has deployed a change that allows us to specify docker images by their sha256 in TaskCluster, reducing the risk of MITM attacks. This was one of the hard security blockers for Funsize (https://bugzil.la/1175561).

After much discussion, we’ve chosen to move forward with installing hg as an EXE on Windows for the time being. Mark is implementing this method so we can continue progress towards moving Windows 2008 builds into AWS (https://bugzil.la/1170588).

Morgan has a prototype of some github/TaskCluster integration: if your project lives in github/mozilla, you can drop a .taskclusterrc file in the base of your repository and the jobs will just start running after each pull request (http://linuxpoetry.com/blog/23/)

Dustin is migrating treestatus to relengapi, removing one of the blockers to exiting the PHX1 datacenter and centralizing another of our many web apps (https://bugzil.la/1181153).

Amy is working on replacing the servers we use to image mac builders and testers. This will allow us to perform backups of critical information and will prepare us for new OS X 10.10 hardware that’s in the purchasing pipeline now (https://bugzil.la/1186197).

Kim disabled Android 4.0 test jobs by default on Try as another step toward disabling Pandas as a test platform as we move Android 4.3 test jobs to emulators on AWS (https://bugzil.la/1184117).

Today is the last day of Anhad’s internship. :( His end-of-internship presentation is now available on Air Mozilla: http://mzl.la/1JlpT0P This week he met with Anthony to hand off his work on getting Windows builds working with the generic worker in TaskCluster. (https://bugzil.la/1180775)

Improve release pipeline: Ben worked with OpSec to generate a new GPG signing key (replacing our expired one) and deploy it to our Nightly and Release signing servers. We are also working to improve the monitoring around signing key expiry to avoid future fire drills.

Improve CI pipeline: Jordan has re-deployed the change that switches over all future CI and release jobs to using a Gecko-based copy of Mozharness (http://jordan-lund.ghost.io/mozharness-goes-live-in-the-tree/).

Ben has been working towards stopping automated builds of XULRunner for the Firefox 42.0 cycle, starting September 22nd, 2015 (https://groups.google.com/forum/#!topic/mozilla.dev.planning/mmNWxHOt_lw)

Release: Firefox 40 is currently in beta. We’re up to b7 now.

Operational: Amy and Coop have worked with DCops to re-balance the Linux/Windows test pools, removing 30 machines from the Linux talos pools and increasing capacity by 10 machines each in the Windows XP, Windows 7, and Windows 8 test pools (https://bugzil.la/1151591).

We giveth: This week we enabled the B2G 2.2r branch. (https://bugzil.la/1177598)

…and we taketh away: We also disabled many obsolete B2G builds/branches to improve throughput and reclaim capacity.

vcs-sync is now running in AWS! Hal made the official switch this week after running both setups in parallel for a while. This allows us to retire some ancient hardware in the datacenter.

Callek touched over 55 bugs this week as buildduty, many of them during triage and resolution of machine loans. (http://tinyurl.com/nhhpjyr)

Will our heroes emerge victorious? Tune in next week!