Andy McKay: Writing a WebExtension

Yesterday I spent 10 minutes throwing together an example WebExtension to show how a few APIs worked. It's not a complicated or sophisticated one by any means, it just shows a pop-up and links to trigger some tab APIs.

But when writing an add-on like that, you might notice a few things:

  • Thanks to about:debugging you can load the add-on straight from the directory and run it. There is no xpi, zip or compile step. It just works.

  • The menu is an HTML page with some JavaScript on it. That's familiar territory for any web developer.

  • The HTML and JavaScript reload each time I make a change. That's refreshingly simple and easy to develop with.

Those factors alone make developing that add-on so much easier than before. We are getting there.

Cameron Kaiser: 38.6.1 released

38.6.1 is released, fixing an urgent security issue in Firefox and updating the font blacklist. And ... that's all I'm going to say about that. It's already live on the main page.

Eric Rahm: Are they slim yet?

In my previous post I focused on how Firefox compares against itself with multiple content processes. In this post I’d like to take a look at how Firefox compares to other browsers.

For this task I automated as much as I could, the code is available as the atsy project on github. My goal here is to allow others to repeat my work, point out flaws, push fixes, etc. I’d love for this to be a standardized test for comparing browsers on a fixed set of pages.

As with my previous measurements, I’m going with:

total_memory = RSS(parent) + sum(USS(children))

An aside on the state of WebDriver and my hacky workarounds

I had a dream of automating the tests across browsers using the WebDriver framework; alas, trying to do anything with tabs and WebDriver across browsers and platforms is a fruitless endeavor. Chrome’s actually the only one I could get somewhat working with WebDriver. When various WebDriver implementations get fixed we can make a cleaner test available.

Luckily Chrome and Firefox are completely automated. I had to do some trickery to get Chrome working and filed a bug, but it doesn’t sound like they’re interested in fixing it. I also had to do some trickery to get Firefox to work (I ended up using our marionette framework directly instead); there are some bugs, but not much traction there either.

IE and Safari are semi-automated, in that I launch a browser for you, you click a button, and then hit enter when it’s done. Safari’s WebDriver extension is completely broken, and nobody seems to care. IE’s WebDriver completely failed at tabs (among other things); I’m not sure where to file a bug for that.

Edge is mostly manual, its WebDriver implementation doesn’t support what I need (yet), but it’s new so I’ll give it a pass. Also you can’t just launch the browser with a file path, so there’s that. Also note I was stuck running it in a VM from modern.ie which was pretty old (they don’t have a newer one). I’d prefer not to do that, but I couldn’t upgrade my Windows 7 machine to 10 because Microsoft, Linux, bootloaders and sadness.

I didn’t test Opera, sorry. It uses blink so hopefully the Chrome coverage is good enough.

The big picture

Browser memory compared

The numbers

OS Browser Version RSS + USS
OSX 10.10.5 Chrome Canary 50.0.2627.0 1,354 MiB
OSX 10.10.5 Firefox Nightly (e10s) 46.0a1 20160122030244 1,065 MiB
OSX 10.10.5 Safari 9.0.3 (10601.4.4) 451 MiB
Ubuntu 14.04 Google Chrome Unstable 49.0.2618.8 dev (64-bit) 944 MiB
Ubuntu 14.04 Firefox Nightly (e10s) 46.0a1 20160122030244 (64-bit) 525 MiB
Windows 7 Chrome Canary 50.0.2631.0 canary (64-bit) 1,132 MiB
Windows 7 Firefox Nightly (e10s) 47.0a1 20160126030244 (64-bit) 512 MiB
Windows 7 IE 11.0.9600.18163 523 MiB
Windows 10 Edge 20.10240.16384.0 795 MiB

So yeah, Chrome’s using about 2X the memory of Firefox on Windows and Linux. Let’s just read that again. That gives us a bit of breathing room.

It needs to be noted that Chrome is essentially doing 1 process per page in this test. In theory it’s configurable and I would have tried limiting its process count, but as far as I can tell they’ve let that feature decay and it no longer works. I should also note that Chrome has its own version of memshrink, Project TRIM, so memory usage is an area they’re actively working on.

Safari does creepily well. We could attribute this to close OS integration, but I would guess I’ve missed some processes. If you take it at face value, Safari is using 1/3 the memory of Chrome, 1/2 the memory of Firefox. Even if I’m miscounting, I’d guess they still outperform both browsers.

IE was actually on par with Firefox which I found impressive. Edge is using about 50% more memory than IE, but I wouldn’t read too much into that as I’m comparing running IE on Windows 7 to Edge on an outdated Windows 10 VM.

Nicolas Mandil: The Internet, a human need or a human right?

In his post The Internet is a Global Public Resource, Mark Surman asks whether we think the Internet ranks with other primary human needs.

Yes, it’s time to make the Internet a mainstream concern. It’s important that Mozilla, a mainstream FLOSS software maker, supports it, because we will need end-to-end involvement across the consumption chain to succeed.

The Internet is important in our lives, and yes, it is becoming critical with automation, IoT, AI… because we will delegate more and more levels of decision to it: from decision resources to decision making. However, I don’t conceive of it as an absolute human need. It’s more a consumer need: a (really nice and powerful) convenience.
The question to me is more about the social contract, the context it is used in: the (current state of the) Internet is critical with regard to human rights more than human needs. Before being a life requirement, it’s a social condition. It’s more about democracy than life. More about people’s interactions than people’s lives.

Hence, if it’s about the social contract, which in our societies is expressed in our Constitutions, we should express how the current state of the Internet works for or against our nations’ fundamental laws, and the values and hierarchy of norms they carry (like what matters more between security and liberty, and in which context, …).
If, as Mark proposed, we consider the parallels with the environmental movement, the question there moved from preserving one living species, to the diversity of living species, to the balance of a system we are part of. With the Internet it’s the same thing: we need to spread the idea that it’s not only about one particular thing standing apart, it’s about its interactions within our system and its impact on the current balance, which can drastically change the system’s rules.
To succeed, we should do it with the current questioning in society (fears and hopes) and the cultural crisis in mind, to show how alignment with our mission builds an alternative with answers. That’s a critical condition if we want to be cohesive enough to have a massive impact. The social contract is a common narrative that defines a common identity, on which we base the judgment of our actions.
For me, in my country, France, I would like to see a campaign that examines liberty, equality and fraternity against the Internet in my daily life and choices.

Majken Connor: Building Mozilla’s Core Strength

I was lucky enough to be invited to “Mozlando” and was really pleased with the 3 pillars Chris Beard revealed for Mozilla in 2016. Especially the concept of building core strength. As a ballet dancer I’ve actually used this analogy myself when making suggestions for projects I’ve been involved in. Of course the issue always is turning concepts into practice, getting people to actually apply them practically when they go back to work.

Community should be one of Mozilla’s core strengths. It’s been taken for granted, not very well understood, and no one seems to really be sure how to measure its strength let alone build it. There are some really great people trying to approach the problem from different angles, but I think we’re overlooking the basics and not applying knowledge and process that we already have.

Minimum Viable Product

I assume most of you reading this are familiar enough with Agile best practices to recognize this term. George Roter brought it with him to the Participation Team (or at least used the term more often than his predecessors). We need to identify the MVP for community strength at Mozilla. What is the least thing that needs to happen to be able to say we have a healthy community? We need to identify it, and then realign everything necessary until we’re capable of shipping it.

Mobilize the Base

I think anyone that works with a movement understands that if you can’t mobilize your base, then you’re lost. It’s the definition of a movement. Mozilla has hundreds and thousands of volunteers, and volunteers-in-waiting. It’s a massive untapped resource that should be the single litmus test of Mozilla’s success:

Can Mozilla mobilize its base?

The answer is pretty clear, whether or not it can, it doesn’t. I think this should be the single most important goal for 2016, and it supports all the initiatives already identified. Technical teams are focusing on making what’s already in the browser better. That’s great, because you need to give your base a product it can believe in if you want them to be moved to action.  Done on its own, it’s much harder to come up with a direction, or a definition of success. But if you’re trying to get your base excited about your product again, all of a sudden you have something you can measure, and someone to whom you can listen to guide your progress.

Obviously where this weakness exposes itself the most is in non-technical teams and especially in marketing. In this age of viral advertising and the sharing economy, Mozilla came with a built in network of savvy internet citizens. Getting our message out there should be our core strength. We should have a network of professionals who know how to leverage that network. We should be intentionally investing in attracting and retaining those members of the community that can be mobilized, and cutting out practices that don’t further this goal. No other problem should be prioritized above this.

This must be our core strength, our minimum viable product. We make products but ultimately we’re an organization built on a mission, on values, on a movement. We must be able to mobilize our base before we can accomplish anything else worth accomplishing.

No one should be able to beat us at this.
This isn’t a nice-to-have.

This is how we survive.

 

Chris Cooper: RelEng & RelOps Weekly Highlights - February 12, 2016


This past week, the release engineering managers – Amy, catlee, and coop (hey, that’s me!) – were in Washington, D.C. meeting up with the other managers who make up the platform operations management team. Our goal was to try to improve the ways we plan, prioritize, and cooperate. I thought it was quite productive. I’ll have more to say about it next week once I gather my thoughts a little more.

Everyone else was *very* busy while we were away. Details are below.

Modernize infrastructure:

Dustin deployed a change to the TaskCluster authorization service to support “named temporary credentials”. With this change, credentials can come with a unique name, allowing better identification, logging, and auditing. This is a part of Dustin’s work to implement “TaskCluster Login v3” which should provide a smoother and more flexible way to connect to TaskCluster and create credentials for all of the other tasks you need to perform.

Windows 10 in the cloud is being tested. All the ground work is done to make golden AMIs, mirroring the first stages of work done for Windows 7 in the cloud. Being able to perform some subset of Windows 10 testing in the cloud should allow us to purchase less hardware than we had originally anticipated for this quarter.

Improve CI pipeline:

One of the subjects discussed at Mozlando was improving the overall integration of localization (l10n) builds with our continuous integration (CI) system. Mike fixed an l10n packaging bug this week that I first remember looking at over 4 years ago. This fix allows us to properly test l10n packaging of Mac builds in a release configuration on check-in, thereby avoiding possible headaches later in the release cycle. (https://bugzil.la/700997)

Armen, Joel, Dustin, and Greg worked together to green up even more Linux test jobs in TaskCluster. Among other things, this involved upgrading to the latest Docker (1.10.0) and diagnosing some test runner scripts which use 1.3GB of RAM – not counting the Firefox binaries they run! This project has already been a long slog, but we are constantly making progress and will soon have all jobs in-tree at Tier 2.

Release:

Ben and Nick started designing a new Balrog feature that will make it possible to change update rules in response to certain events. Ben is planning to blog about this in more detail next week.

It was a busy week for releases. Many were shipped or are still in-flight:

  • Firefox/Fennec 45.0b3
  • Firefox/Fennec 45.0b4
  • Firefox 44.0.1, quickly followed by Firefox/Fennec 44.0.2 due to a security issue
  • Firefox 38.6.1esr
  • Thunderbird 45.0b1 was shipped to Windows and Mac populations. We are planning a follow-up 45.0b2 in the near future to pick up the swap back to gtk2 for Linux users.
  • Firefox 45.0b5 (in progress)
  • Thunderbird 38.6.0 (in progress)

As always, you can find more specific release details in our post-mortem minutes: https://wiki.mozilla.org/Releases:Release_Post_Mortem:2016-02-10 and https://wiki.mozilla.org/Releases:Release_Post_Mortem:2016-02-17

Operational:

Kim landed a patch to enable Mac OS X 10.10.5 testing on try by default and disable 10.6 testing. This allowed us to disable some old r5 machines and install around 30 new 10.10.5 machines and enable them in production. Hooray for increased capacity! (https://bugzil.la/1239731)

See you next week!

Andrew Halberstadt: The Zen of Mach

Mach is the Mozilla developer's swiss army knife. It gathers all the important commands you'll ever need to run, and puts them in one convenient place. Instead of hunting down documentation, or asking for help on irc, often a simple |mach help| is all that's needed to get you started. Mach is great. But lately, mach is becoming more like the Mozilla developer's toolbox. It still has everything you need but it weighs a ton, and it takes a good deal of rummaging around to find anything.

Frankly, a good deal of the mach commands that exist now are either poorly written, confusing to use, or even have no business being mach commands in the first place. Why is this important? What's wrong with having a toolbox?

Here's a quote from an excellent article on engineering effectiveness from the Developer Productivity lead at Twitter:

Finally there’s a psychological aspect to providing good tools to engineers that I have to believe has a really (sic) impact on people’s overall effectiveness. On one hand, good tools are just a pleasure to work with. On that basis alone, we should provide good tools for the same reason so many companies provide awesome food to their employees: it just makes coming to work every day that much more of a pleasure. But good tools play another important role: because the tools we use are themselves software, and we all spend all day writing software, having to do so with bad tools has this corrosive psychological effect of suggesting that maybe we don’t actually know how to write good software. Intellectually we may know that there are different groups working on internal tools than the main features of the product but if the tools you use get in your way or are obviously poorly engineered, it’s hard not to doubt your company’s overall competence.

Working with good tools is a pleasure. Rather than breaking mental focus, they keep you in the zone. They do not deny you your zen. Mach is the frontline, it is the main interface to Mozilla for most developers. For this reason, it's especially important that mach and all of its commands are an absolute joy to use.

There is already good documentation for building a mach command, so I'm not going to go over that. Instead, here are some practical tips to help keep your mach command simple, intuitive and enjoyable to use.

Keep Logic out of It

As awesome as mach is, it doesn't sprinkle magic fairy dust on your messy jumble of code to make it smell like a bunch of roses. So unless your mach command is trivial, don't stuff all your logic into a single mach_commands.py. Instead, create a dedicated python package that contains all your functionality, and turn your mach_commands.py into a dumb dispatcher. This python package will henceforth be called the 'underlying library'.

Doing this makes your command more maintainable, more extensible and more re-useable. It's a no-brainer!
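For illustration, here is a minimal sketch of what that dispatcher can look like. The mytool package and its run function are hypothetical names, not an existing module:

```python
# mach_commands.py -- a "dumb dispatcher"; all the real logic lives in the
# hypothetical `mytool` package that sits alongside it.
from mach.decorators import CommandProvider, Command

@CommandProvider
class MachCommands(object):
    @Command('mytool', category='misc',
             description='run the mytool task')
    def mytool(self, **kwargs):
        # Import lazily so `mach help` stays fast (see the next section).
        from mytool.runner import run
        return run(**kwargs)
```

Everything interesting lives in the package, where it can be tested and reused without going through mach at all.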

No Global Imports

Other than things that live in the stdlib, mozbuild or mach itself, don't import anything in a mach_commands.py's global scope. A global import causes the imported module to be evaluated every time the mach binary is invoked. No one wants your module to load itself when running an unrelated command or |mach help|.

It's easy to see how this can quickly add up to be a huge performance cost.

Re-use the Argument Parser

If your underlying library has a CLI itself, don't redefine all the arguments with @CommandArgument decorators. Your redefined arguments will get out of date, and your users will become frustrated. It also encourages a pattern of adding 'mach-only' features, which seems like a good idea at first but, as I explain in the next section, leads down a bad path.

Instead, import the underlying library's ArgumentParser directly. You can do this by using the parser argument to the @Command decorator. It'll even conveniently accept a callable so you can avoid global imports. Here's an example:

```python
def setup_argument_parser():
    from mymodule import MyModuleParser
    return MyModuleParser()

@CommandProvider
class MachCommands(object):
    @Command('mycommand', category='misc',
             description='does something',
             parser=setup_argument_parser)
    def mycommand(self, **kwargs):
        # arguments from MyModuleParser are in kwargs
        pass
```

If the underlying ArgumentParser has arguments you'd like to avoid exposing to your mach command, you can use argparse.SUPPRESS to hide them from the help.
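For example, a minimal standard-library sketch of that trick:

```python
import argparse

parser = argparse.ArgumentParser()
# The argument still parses normally, but argparse.SUPPRESS keeps it
# out of the --help output.
parser.add_argument('--internal-tweak', help=argparse.SUPPRESS)
```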

Don't Treat the Underlying Library Like a Black Box

Sometimes the underlying library is a huge mess. It can be very tempting to treat it like a black box and use your mach command as a convenient little fantasy-land wrapper where you can put all the nice things without having to worry about the darkness below.

This situation is temporary. You'll quickly make the situation way worse than before, as not only will your mach command devolve into a similar state of darkness, but now changes to the underlying library can potentially break your mach command. Just suck it up and pay a little technical debt now, to avoid many times that debt in the future. Implement all new features and UX improvements directly in the underlying library.

Keep the CLI Simple

The command line is a user interface, so put some thought into making your command useable and intuitive. It should be easy to figure out how to use your command simply by looking at its help. If you find your command's list of arguments growing to a size of epic proportions, consider breaking your command up into subcommands with an @SubCommand decorator.
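As a rough sketch (the command and subcommand names here are made up), splitting a command up with @SubCommand looks something like this:

```python
from mach.decorators import CommandProvider, Command, SubCommand

@CommandProvider
class MachCommands(object):
    @Command('mytool', category='misc', description='does several things')
    def mytool(self):
        # Runs when no subcommand is given.
        pass

    @SubCommand('mytool', 'build', description='just the build step')
    def mytool_build(self):
        pass

    @SubCommand('mytool', 'lint', description='just the lint step')
    def mytool_lint(self):
        pass
```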

Rather than putting the onus on your user to choose every minor detail, make the experience more magical than a Disney band.

Be Annoyingly Helpful When Something Goes Wrong

You want your mach command to be like one of those super helpful customer service reps. The ones with the big fake smiles and reassuring voices. When something goes wrong, your command should calm your users and tell them everything is ok, no matter what crazy environment they have.

Instead of printing an error message, print an error paragraph. Use natural language. Include all relevant paths and details. Format it nicely. Create separate paragraphs for each possible failure. But most importantly, only be annoying after something went wrong.

Use Conditions Liberally

A mach command will only be enabled if all of its condition functions return True. This keeps the global |mach help| free of clutter, and makes it painfully obvious when your command is or isn't supposed to work. A command that only works on Android shouldn't show up for a Firefox desktop developer; that only leads to confusion.

Here's an example:

```python
from mozbuild.base import (
    MachCommandBase,
    MachCommandConditions as conditions,
)

@CommandProvider
class MachCommands(MachCommandBase):
    @Command('mycommand', category='post-build',
             description='does stuff',
             conditions=[conditions.is_android])
    def mycommand(self):
        pass
```

If the user does not have an active fennec objdir, the above command will not show up by default in |mach help|, and trying to run it will display an appropriate error message.

Design Breadth First

Put another way, keep the big picture in mind. It's ok to implement a mach command with super specific functionality, but try to think about how it will be extended in the future and build with that in mind. We don't want a situation where we clone a command to do something only slightly differently (e.g |mach mochitest| and |mach mochitest-b2g-desktop| from back in the day) because the original wasn't extensible enough.

It's good to improve a very specific use case that impacts a small number of people, but it's better to create a base upon which other slightly different use cases can be improved as well.

Take a Breath

Congratulations, now you are a mach guru. Take a breath, smell the flowers and revel in the satisfaction of designing a great user experience. But most importantly, enjoy coming into work and getting to use kick-ass tools.

Christian Heilmann: Answering some questions about developer evangelism

I just had a journalist ask me to answer a few questions about developer evangelism and I did so on the train ride. Here are the un-edited answers for your perusal.

In your context, what’s a developer evangelist?

As defined quite some time ago in my handbook (http://developer-evangelism.com/):

“A developer evangelist is a spokesperson, mediator and translator between a company and both its technical staff and outside developers.”

This means first and foremost that you are a technical person who is focused on making your products understandable and maintainable.

This includes writing easy-to-understand code examples, documenting, and helping the engineering staff in your company find its voice and get out of the mindset of building things mostly for themselves.
It also means communicating technical needs and requirements to the non-technical staff and, in many cases, preventing marketing from over-promising or being too focused on your own products.
As a developer evangelist your job is to have the finger on the pulse of the market. This means you need to know about the competition and general trends as much as what your company can offer. Meshing the two is where you shine.

How did you get to become one?

I ran into the classic wall we have in IT: I’ve been a developer for a long time and advanced in my career to lead developer, department lead and architect. In order to advance further, the only path would have been management and discarding development. This is a big issue we have in our market: we seemingly value technical talent above all but we have no career goals to advance to beyond a certain level. Sooner or later you’d have to become something else. In my case, I used to be a radio journalist before being a developer, so I put the skillsets together and proposed the role of developer evangelist to my company. And that’s how it happened.

What are some of your typical day-to-day duties?

  • Helping product teams write and document good code examples
  • Find, filter, collate and re-distribute relevant news
  • Answer pull requests, triage issues and find new code to re-use and analyse
  • Help phrasing technical responses to problems with our products
  • Keep in contact with influencers and ensure that their requests get answered
  • Coach and mentor colleagues to become better communicators
  • Prepare articles, presentations and demos
  • Conference and travel planning

How often do you code?

As often as I can. Developer Evangelism is a mixture of development and communication. If you don’t build the things you talk about it is very obvious to your audience. You need to be trusted by your technical colleagues to be a good communicator on their behalf, and you can’t be that when all you do is powerpoints and attend meetings. At the same time, you also need to know when not to code and let others shine, giving them your communication skills to get people who don’t understand the technical value of their work to appreciate them more.

What’s the primary benefit enterprises hope to gain by employing developer evangelists?

The main benefit is developer retention and acquisition. Especially in the enterprise it is hard to attract new talent in today’s competitive environment. By showing that you care about your products and that you are committed to giving your technical staff a voice, you give prospective hires a future goal that not many companies have for them. Traditional marketing tends to not work well with technical audiences. We have been promised too much too often. People trust the voice of people they can relate to. And in the case of a technical audience that is a developer evangelist or advocate (as other companies tend to call it). A secondary benefit is that people start talking about your product on your behalf if they heard about it from someone they trust.

What significant challenges have you met in the course of your developer evangelism?

There is still quite some misunderstanding of the role. Developers keep asking you how much you code, assuming you betrayed the cause and are in danger of becoming yet another marketing shill. Non-technical colleagues and management have a hard time measuring your value and expect things to happen very fast. Marketing departments have been very good over the years at showing impressive numbers. For a developer evangelist this is tougher, as developers hate being measured and don’t want to fill out surveys. The impact of your work is sometimes only obvious weeks or months later. That is an investment that is hard to explain at times. The other big challenge is that companies tend to think of developer evangelism as a new way of marketing, and that people who used to do that can easily transition into the role by opening a GitHub account. They can’t. It is a technical role, and your “street cred” in the developer world is something you need to have earned before you can transition. In the same way, you keep getting compared to developers and measured by how much code you’ve written. A large part of your job after a while is collecting feedback and measuring the success of your evangelism in terms of technical outcome. You need to show numbers, and it is tough to get them as there are only 24 hours in a day.
Another massive issue is that companies expect you to be a massive fan of whatever they do when you are an evangelist there. This is one part, but it is also very important that you are the biggest constructive critic. Your job isn’t to promote a product right or wrong, your job is to challenge your company to build things people want and you can get people excited about without dazzling them.

What significant rewards have you achieved in the course of your developer evangelism?

The biggest win for me is the connections you form and to see people around you grow because you promote them and help them communicate better. One very tangible reward is that you meet exciting people you want to work with and then get a chance to get them hired (which also means a hiring bonus for you).
One main difference I found when transitioning was that when you get the outside excited your own company tends to listen to your input more. As developers we think our code speaks for itself, but seeing that we always get asked to build things we don’t want to should show us that by becoming better communicators we could lead happier lives with more interesting things to create.

What personality traits do you see as being important to being a successful developer evangelist?

You need to be a good communicator. You need to not be arrogant and sure that you and only you can build great things, but instead know how to inspire people to work with you and let them take the credit. You need to have a lot of patience and a thick skin. You will get a lot of attacks and you will have to deal with misunderstandings and prejudices a lot of the time. And you need to be flexible. Things will not always go the way you want them to, and you simply cannot be publicly grumpy about this. Above all, it is important to be honest and kind. You can’t get away with lies, and whilst bad-mouthing the competition will get you immediate results, it will tarnish your reputation quickly and burn bridges.

What advice would you give to people who would like to become a developer evangelist?

Start by documenting your work and writing about it. Then get up to speed on your presenting skills. You do that by repetition and by not being afraid of failure. We all hate public speaking, and it is important to get past that fear. Mingle, go to meetups and events and analyse talks and articles of others and see what works for you and is easy for you to repeat and reflect upon. Excitement is the most important part of the job. If you’re not interested, you can’t inspire others.

How do you see the position evolving in the future?

Sooner or later we’ll have to make this an official job title across the market and define the skillset and deliverables better than we do now. Right now there is a boom and far too many people jump on the train and call themselves Developer “Somethings” without being technically savvy in that part of the market at all. There will be a lot of evangelism departments closing down in the near future as the honeymoon boom of mobile and apps is over right now. From this we can emerge more focused and cleaner.
A natural way to find evangelists in your company is to support your technical staff to transition into the role. Far too many companies right now try to hire from the outside and get frustrated when the new person is not a runaway success. They can’t be. It is all about trust, not about numbers and advertising.

Karl Dubost: [worklog] A week with pretty funny bugs

From the window, the shadow of the matsu (松) on the curtain this Monday morning, and in the back of the neighbor's garden, the orange color of the mikan (ミカン). Tune for this week: Kodo - "O-Daiko"

Webcompat Life

  • I created an index page for the previous worklogs, accessible from the menu. So it becomes easier to follow. This is the 8th edition of this worklog and so far, I'm trying to keep it going. We will see where it goes. It's an experiment.
  • I really need to better understand httparchive so I can create queries for it. I did one 2 weeks ago for HTTP Refresh, but it was really a dumb one.
  • Filed a feature request for a dependency graph in developer tools showing the HTTP requests. It is useful to know what has requested what and when in the process of debugging. The current cascade view is only a timeline view.
  • Did a re-evaluation about the bugs status for Add-ons and E10s. The ball is either in the camp of add-on team or in the camp of add-on developer. Nothing we can do.
  • We had a WebCompat meeting.

Firefox Bugs

Webcompat issues

(a selection of some of the bugs worked on this week).

  • SMH is an Australian newspaper, which was not detecting Firefox for Android because of their detection script. They upgraded the script they are using for device detection. They changed from "^Mozilla/5.0.*Android [1-3].*" to '^Mozilla/5.0.*Android.*Mobile.*'. This fixes the issue (see the regex sketch after this list); I would have preferred they moved to my suggestion in the comment.
  • Nezu museum got rid of Sencha and now the site is compatible with more browsers.
  • deceuu fixed because of a Firefox bug.
  • Japanese website Belle Maison issue "fixed" by the webkit aliasing
  • On ponpare Web site, we have a persisting issue even with layout.css.prefixes.webkit; true. I tried to dig a bit more into it. It led me to create a codepen with different configuration of CSS. The rendering is different depending on Safari (WebKit) and Firefox (Gecko). Specifically when there is a background image, Safari seems to drop the button appearance. Added a comment to the issue on the "compat spec" spec issues list and I ended up opening an issue on Gecko.
  • OK Cupid is being partially fixed by layout.css.prefixes.webkit; true. There are remaining issues related to touch events and sliding of the central image. I pushed back in needsdiagnosis. Mike found the cause and it's ugly. It fails because of H.caroInner.style["-webkit-transform"] = "translate3d(" + n + "px,0,0)". Mike filed a bug on Gecko.
  • On Shiseido Japan website, there is an interesting bug about icons not being displayed. They use a piece of JavaScript to query a RSS feed and extract the name of the image used in the HTML. The var imagePath=thisObj.baseImagePath+$(this).find("icon").text(); The icon here is in fact <shiseido:icon… in the feed. As hallvord mentioned, the code should be rewritten to not rely on jQuery's find but call getElementsByTagNameNS(). The issue? It is working on Blink and Safari but not in Firefox. A bug had been opened on WhatWG spec but it has been closed. Outreach seems the only way forward.
  • In Japan, there are at least two sites, Tokyo Metro and CircleSunkus, which depend on the zoom property. This property is an IE-ism which was retrofitted into WebKit for compatibility reasons, then made its way to Blink; because Firefox doesn't implement it, we have Web compatibility issues. Microsoft has a rough draft about the property. It was discussed in May 2015 by the CSSWG. I sent an email to the CSS WG with the data.
  • Another funny issue on the ESA site (European Space Agency) and for once not a Web Compatibility issue, just a strange choice. They send users to the mobile site if their window is 1016px or less. if (d == 1 || (d == -1 && window.innerWidth < 1017)) window.location = 'http://m.esa.int/ESA'; It's a very strange strategy.
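To make the SMH user agent sniffing change above concrete, here is a small sketch of both patterns run against an illustrative Firefox for Android UA string (the UA shown is an example, not taken from the site):

```python
import re

# Illustrative Firefox for Android UA string.
ua = "Mozilla/5.0 (Android 4.4; Mobile; rv:44.0) Gecko/44.0 Firefox/44.0"

old_pattern = r"^Mozilla/5.0.*Android [1-3].*"    # only matches Android 1.x-3.x
new_pattern = r"^Mozilla/5.0.*Android.*Mobile.*"  # any Android, plus Mobile

print(bool(re.match(old_pattern, ua)))  # False -> site served the desktop layout
print(bool(re.match(new_pattern, ua)))  # True  -> site serves the mobile layout
```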

Webcompat.com development

  • When doing bug triage and work, I noticed a couple of things Monday morning. I filed 3 issues but I hope it might be a caching issue on my side. Let's see: 913, 914, 915. Guillaume has already fixed 914.
  • Starting to update a couple of modules for the project and issued a pull request for it.
  • Spent quite a bit of time wrapping my head around the tests fixtures for completing the tests of our webcompat API. Not totally there yet. I guess a good opportunity for documenting.

Reading List

Follow Your Nose

TODO

  • Document how to write tests on webcompat.com using test fixtures.

Otsukare!

Robert O'Callahan: Introducing rr Chaos Mode

Most of my rr talks start with a slide showing some nondeterministic failures in test automation while I explain how hard it is to debug such failures and how record-and-replay can help. But the truth is that until now we haven't really used rr for that, because it has often been difficult to get nondeterministic failures in test automation to show up under rr. So rr's value has mainly been speeding up debugging of failures that were relatively easy to reproduce. I guessed that enhancing rr recording to better reproduce intermittent bugs is one area where a small investment could quickly pay off for Mozilla, so I spent some time working on that over the last couple of months.

Based on my experience fixing nondeterministic Gecko test failures, I hypothesized that our nondeterministic test failures are mainly caused by changes in scheduling. I studied a particular intermittent test failure that I introduced and fixed, where I completely understood the bug but the test had only failed a few times on Android and nowhere else, and thousands of runs under rr could not reproduce the bug. Knowing what the bug was, I was able to show that sleeping for a second at a certain point in the code when called on the right thread (the ImageBridge thread) at the right moment would reproduce the bug reliably on desktop Linux. The tricky part was to come up with a randomized scheduling policy for rr that would produce similar results without prior knowledge of the bug.

I first tried the obvious: allow the lengths of timeslices to vary randomly; give threads random priorities and observe them strictly; reset the random priorities periodically; schedule threads with the same priority in random order. This didn't work, for an interesting reason. To trigger my bug, we have to avoid scheduling the ImageBridge thread while the main thread waits for a 500ms timeout to expire. During that time the ImageBridge thread is the only runnable thread, so any approach that can only influence which runnable thread to run next (e.g. CHESS) will not be able to reproduce this bug.

To cut a long story short, here's an approach that works. Use just two thread priorities, "high" and "low". Make most threads high-priority; I give each thread a 0.1 probability of being low priority. Periodically re-randomize thread priorities. Randomize timeslice lengths. Here's the good part: periodically choose a short random interval, up to a few seconds long, and during that interval do not allow low-priority threads to run at all, even if they're the only runnable threads. Since these intervals can prevent all forward progress (no control of priority inversion), limit their length to no more than 20% of total run time. The intuition is that many of our intermittent test failures depend on CPU starvation (e.g. a machine temporarily hanging), so we're emulating intense starvation of a few "victim" threads, and allowing high-priority threads to wait for timeouts or input from the environment without interruption.
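rr itself is written in C++ and its real scheduler is more involved, but as an illustrative sketch only, the policy described above boils down to something like this:

```python
import random

LOW_PRIORITY_PROBABILITY = 0.1  # each thread has a 0.1 chance of being "low"
MAX_STARVATION_FRACTION = 0.2   # starvation intervals cover at most 20% of run time

def assign_priorities(threads):
    # Only two priorities; call this again periodically to re-randomize.
    return {t: 'low' if random.random() < LOW_PRIORITY_PROBABILITY else 'high'
            for t in threads}

def pick_next(runnable, priorities, in_starvation_interval):
    """Pick the next thread to run and a randomized timeslice length."""
    if in_starvation_interval:
        # During the interval, low-priority threads may not run at all,
        # even if they are the only runnable threads (emulated starvation).
        runnable = [t for t in runnable if priorities[t] == 'high']
    if not runnable:
        return None, 0.0  # nothing eligible to run: forced idle
    return random.choice(runnable), random.uniform(0.001, 0.1)
```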

With this approach, rr can reproduce my bug in several runs out of a thousand. I've also been able to reproduce a top intermittent (now being fixed), an intermittent test failure that was assigned to me, and an intermittent shutdown hang in IndexedDB we've been chasing for a while. A couple of other people have found this enabled reproducing their bugs. I'm sure there are still bugs this approach can't reproduce, but it's good progress.

I just landed all this work on rr master. The normal scheduler doesn't do this randomization, because it reduces throughput, i.e. slows down recording for easy-to-reproduce bugs. Run rr record -h to enable chaos mode for hard-to-reproduce bugs.

I'm very interested in studying more cases where we figure out a bug that rr chaos mode was not able to reproduce, so I can extend chaos mode to find such bugs.

Matthew Noorenberghe: Firefox screenshots can now be easily captured and compared in automation

Back in 2013, during Australis[1] development, I created a tool called mozscreenshots. The purpose of this tool was to help detect UI regressions and make it easier to review how the UI looks in various configurations on each of our supported platforms. The main hindrance to its regular usage was that it required developers and designers to install it on each machine, tying it up while the images were captured. The great news is that mozscreenshots is now running in automation on Nightlies and on-demand for try pushes, making it much easier to capture images. Comparing the captured images to a reference/base version can now be done via a web interface which means the images don’t need to be downloaded for review. mozscreenshots has already detected two issues during its first week in automation, and I hope that this tool will improve the quality of desktop Firefox by surfacing UI issues sooner. You can read about it in the documentation on MDN, but here’s a quick start guide:

Change detection via Nightly captures

In order to detect UI regressions, screenshots of some sets run on every Nightly. These are compared to the previous day’s Nightly, using the compare_screenshots tool (which uses ImageMagick’s `compare`). For now I’m manually kicking this off each day but soon I hope to automate this and have it send an email to interested parties for investigation. Currently, only one run of "TabsInTitlebar,Tabs,WindowSize,Toolbars,LightweightThemes" occurs on Nightlies. I plan to add runs with the DevEdition theme, DevTools, and Preferences shortly, as the code for those are already written.

Capturing on Try

You can request screenshots be captured on a Try push for UI review or comparison to a known-good base by requesting the "mochitest-browser-screenshots" test job. You can specify what you would like captured by setting the MOZSCREENSHOTS_SETS environment variable with a comma-separated list of configurations like so:

try: -b o -p linux,linux64,macosx64,win32,win64 -t none -u mochitest-browser-screenshots[Ubuntu,10.6,10.10,Windows XP,Windows 7,Windows 8] --setenv MOZSCREENSHOTS_SETS=TabsInTitlebar,WindowSize,LightweightThemes

Note that the job is currently Tier 3 and "excluded" on TreeHerder so you will need to toggle both of those filters to see the jobs there with the symbol: "M[Tier-3] (ss)". Unlike Nightlies, Try pushes won’t capture any images by default if MOZSCREENSHOTS_SETS isn’t specified. This avoids capturing images when developers request all mochitest runs, but don’t really want the screenshots. The capturing is implemented as a mochitest-bc test in the "screenshots" subsuite, meaning configurations can use things like BrowserTestUtils and such. See fetch_screenshots if you would like to download the captured images.

Comparing screenshots

The simplest way to compare images is via the web UI at http://screenshots.mattn.ca/compare/ (Example: http://bit.ly/1Qv4uWD ). Simply provide the project and revision of a base push with images, like the Nightly rev. from about:buildconfig, and your new revision, like a try push with some patches to review. In the background, the images are fetched from automation via fetch_screenshots, and then compared using compare_screenshots with the output displayed on the page. The first comparison for a pair of revisions can take several minutes, as around one thousand (5 platforms x 2 revisions x 100 screenshots) images need to be downloaded and compared for the default set of screenshots. The results are cached, so subsequent comparisons for the same revision are much faster.
Example image generated when a change is detected
There are a lot of opportunities for this tool (e.g. pulse integration, notifications, simplified bug filing based on differences, etc.), and I hope to continue to improve the workflow. Please file bugs on the capturing infrastructure blocking the "mozscreenshots" meta bug (1169179) and bugs related to the web UI, comparison and fetching at https://github.com/mnoorenberghe/mozscreenshots/issues. See also the list of issues and ideas at https://public.etherpad-mozilla.org/p/mozscreenshots which haven’t been filed yet.

Thank you to everyone who has helped with getting mozscreenshots to this point, specifically Felipe Gomes, Armen Zambrano Gasparnian, Joel Maher, Brian Grinstead, Kit Cambridge, and Justin Dolske.

[1] Australis was the code name of the Firefox UI redesign that launched April 29, 2014.

Aaron Klotz: New Mozdbgext Command: !iat

As of today I have added a new command to mozdbgext: !iat.

The syntax is pretty simple:

!iat <hexadecimal address>

This address shouldn’t be just any pointer; it should be the address of an entry in the current module’s import address table (IAT). These addresses are typically very identifiable by the _imp_ prefix in their symbol names.

The purpose of this extension is to look up the name of the DLL from whom the function is being imported. Furthermore, the extension checks the expected target address of the import with the actual target address of the import. This allows us to detect API hooking via IAT patching.
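mozdbgext does this from inside the debugger, but the "reverse lookup" half of the idea can be sketched in Python with the pefile library (the DLL path below is an example):

```python
import pefile

pe = pefile.PE(r"C:\path\to\xul.dll")  # example path

# Map each IAT slot's virtual address to the DLL!function it should point at.
iat_map = {}
for entry in pe.DIRECTORY_ENTRY_IMPORT:
    dll = entry.dll.decode()
    for imp in entry.imports:
        name = imp.name.decode() if imp.name else "ordinal#%d" % imp.ordinal
        # imp.address is the preferred virtual address of the IAT entry itself.
        iat_map[imp.address] = "%s!%s" % (dll, name)

print(iat_map.get(0x197DC59C, "not a known IAT entry"))
```

This only covers the "expected target" side; comparing against the actual in-memory target (to spot IAT patching) still requires reading the live process, which is what the debugger extension does, and ASLR means addresses from the on-disk image need rebasing to the module's actual load address.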

An Example Session

I fired up a local copy of Nightly, attached a debugger to it, and dumped the call stack of its main thread:


 # ChildEBP RetAddr
00 0018ecd0 765aa32a ntdll!NtWaitForMultipleObjects+0xc
01 0018ee64 761ec47b KERNELBASE!WaitForMultipleObjectsEx+0x10a
02 0018eecc 1406905a USER32!MsgWaitForMultipleObjectsEx+0x17b
03 0018ef18 1408e2c8 xul!mozilla::widget::WinUtils::WaitForMessage+0x5a
04 0018ef84 13fdae56 xul!nsAppShell::ProcessNextNativeEvent+0x188
05 0018ef9c 13fe3778 xul!nsBaseAppShell::DoProcessNextNativeEvent+0x36
06 0018efbc 10329001 xul!nsBaseAppShell::OnProcessNextEvent+0x158
07 0018f0e0 1038e612 xul!nsThread::ProcessNextEvent+0x401
08 0018f0fc 1095de03 xul!NS_ProcessNextEvent+0x62
09 0018f130 108e493d xul!mozilla::ipc::MessagePump::Run+0x273
0a 0018f154 108e48b2 xul!MessageLoop::RunInternal+0x4d
0b 0018f18c 108e448d xul!MessageLoop::RunHandler+0x82
0c 0018f1ac 13fe78f0 xul!MessageLoop::Run+0x1d
0d 0018f1b8 14090f07 xul!nsBaseAppShell::Run+0x50
0e 0018f1c8 1509823f xul!nsAppShell::Run+0x17
0f 0018f1e4 1514975a xul!nsAppStartup::Run+0x6f
10 0018f5e8 15146527 xul!XREMain::XRE_mainRun+0x146a
11 0018f650 1514c04a xul!XREMain::XRE_main+0x327
12 0018f768 00215c1e xul!XRE_main+0x3a
13 0018f940 00214dbd firefox!do_main+0x5ae
14 0018f9e4 0021662e firefox!NS_internal_main+0x18d
15 0018fa18 0021a269 firefox!wmain+0x12e
16 0018fa60 76e338f4 firefox!__tmainCRTStartup+0xfe
17 0018fa74 77d656c3 KERNEL32!BaseThreadInitThunk+0x24
18 0018fabc 77d6568e ntdll!__RtlUserThreadStart+0x2f
19 0018facc 00000000 ntdll!_RtlUserThreadStart+0x1b

Let us examine the code at frame 3:


14069042 6a04            push    4
14069044 68ff1d0000      push    1DFFh
14069049 8b5508          mov     edx,dword ptr [ebp+8]
1406904c 2b55f8          sub     edx,dword ptr [ebp-8]
1406904f 52              push    edx
14069050 6a00            push    0
14069052 6a00            push    0
14069054 ff159cc57d19    call    dword ptr [xul!_imp__MsgWaitForMultipleObjectsEx (197dc59c)]
1406905a 8945f4          mov     dword ptr [ebp-0Ch],eax

Notice the function call to MsgWaitForMultipleObjectsEx occurs indirectly; the call instruction is referencing a pointer within the xul.dll binary itself. This is the IAT entry that corresponds to that function.

Now, if I load mozdbgext, I can take the address of that IAT entry and execute the following command:


0:000> !iat 0x197dc59c
Expected target: USER32.dll!MsgWaitForMultipleObjectsEx
Actual target: USER32!MsgWaitForMultipleObjectsEx+0x0

!iat has done two things for us:

  1. It did a reverse lookup to determine the module and function name for the import that corresponds to that particular IAT entry; and
  2. It followed the IAT pointer and determined the symbol at the target address.

Normally we want both the expected and actual targets to match. If they don’t, we should investigate further, as this mismatch may indicate that the IAT has been patched by a third party.

Note that !iat command is breakpad aware (provided that you’ve already loaded the symbols using !bploadsyms) but can fall back to the Microsoft symbol engine as necessary.

Further note that the !iat command does not yet accept the _imp_ symbolic names for the IAT entries; you need to enter the hexadecimal representation of the pointer.

Eric Rahm: Memory Usage of Firefox with e10s Enabled

Quick background

With the e10s project full steam ahead, likely to be enabled for many users in mid-2016, it seemed like a good time to measure the memory overhead of switching Firefox from a single-process architecture to a multi-process architecture. The concern here is simple: the more processes we have, the more memory we use. Starting Q4-2015 I began setting up a test to measure the memory usage of Firefox with a variable amount of content processes.

Methodology

For the test I used a slightly modified version of the AWSY framework that I maintain for areweslimyet.com. This test runs through a sample pageset, the same one used in Talos perf testing, in an attempt to simulate a long-lived session.

The steps:

  1. Open Firefox configured to use N content processes.
  2. Measure memory usage.
  3. Open 100 urls in 30 tabs, cycling through tabs once 30 are opened. Wait 10 seconds per tab.
  4. Measure memory usage.
  5. Close all tabs.
  6. Measure memory usage.

For this test I performed two iterations of this, reporting the startup memory usage from the first and the end of test memory usage (TabsOpen, TabsClosed) for the second.

Note: Just summing the total memory usage of each Firefox process is not a useful metric as it will include memory shared between the main process and the content processes. For a more realistic baseline I chose to use a combination of RSS and USS (aka unique set size, private working bytes):

total_memory = RSS(parent_process) + sum(USS(content_processes))

For example if we had:

Process RSS USS
parent 100 50
content_1 90 30
content_2 95 40

total_memory = 100 + 30 + 40
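A minimal sketch of that calculation using psutil (how the parent process is located is up to the harness; USS is exposed via memory_full_info on the platforms tested here):

```python
import psutil

def total_memory(parent_pid):
    """RSS of the parent plus USS of each content (child) process."""
    parent = psutil.Process(parent_pid)
    total = parent.memory_info().rss
    for child in parent.children(recursive=True):
        # USS counts only pages unique to the process, so memory shared
        # with the parent isn't counted once per content process.
        total += child.memory_full_info().uss
    return total
```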

Results

Note on memory checkpoints:

  • Settled: 30 seconds have passed since previous checkpoint.
  • ForceGC: We manually invoked garbage collection.
  • We list the memory usage for each checkpoint using 0, 1, 2, 4, 8 content processes.

Linux, 64-bit

Checkpoint 0 1 2 4 8
Start 190 MiB 232 MiB 223 MiB 223 MiB 229 MiB
StartSettled 173 MiB 219 MiB 216 MiB 219 MiB 213 MiB
TabsOpen 457 MiB 544 MiB 586 MiB 714 MiB 871 MiB
TabsOpenSettled 448 MiB 542 MiB 582 MiB 696 MiB 872 MiB
TabsOpenForceGC 415 MiB 510 MiB 560 MiB 670 MiB 820 MiB
TabsClosed 386 MiB 507 MiB 401 MiB 381 MiB 381 MiB
TabsClosedSettled 264 MiB 359 MiB 325 MiB 308 MiB 303 MiB
TabsClosedForceGC 242 MiB 322 MiB 304 MiB 285 MiB 281 MiB

Windows 7, 64-bit

32-bit Firefox
Checkpoint 0 1 2 4 8
Start 172 MiB 212 MiB 207 MiB 204 MiB 213 MiB
StartSettled 194 MiB 236 MiB 234 MiB 232 MiB 234 MiB
TabsOpen 461 MiB 537 MiB 631 MiB 800 MiB 1,099 MiB
TabsOpenSettled 463 MiB 535 MiB 635 MiB 808 MiB 1,108 MiB
TabsOpenForceGC 447 MiB 514 MiB 593 MiB 737 MiB 990 MiB
TabsClosed 429 MiB 512 MiB 435 MiB 333 MiB 347 MiB
TabsClosedSettled 356 MiB 427 MiB 379 MiB 302 MiB 306 MiB
TabsClosedForceGC 342 MiB 392 MiB 360 MiB 297 MiB 295 MiB
64-bit Firefox
Checkpoint 0 1 2 4 8
Start 245 MiB 276 MiB 275 MiB 279 MiB 295 MiB
StartSettled 236 MiB 290 MiB 287 MiB 288 MiB 289 MiB
TabsOpen 618 MiB 699 MiB 805 MiB 1061 MiB 1334 MiB
TabsOpenSettled 625 MiB 690 MiB 795 MiB 1058 MiB 1338 MiB
TabsOpenForceGC 600 MiB 661 MiB 740 MiB 936 MiB 1184 MiB
TabsClosed 568 MiB 663 MiB 543 MiB 481 MiB 435 MiB
TabsClosedSettled 451 MiB 517 MiB 454 MiB 426 MiB 377 MiB
TabsClosedForceGC 432 MiB 480 MiB 429 MiB 412 MiB 374 MiB

OSX, 64-bit

Checkpoint 0 1 2 4 8
Start 319 MiB 350 MiB 342 MiB 336 MiB 336 MiB
StartSettled 311 MiB 393 MiB 383 MiB 384 MiB 382 MiB
TabsOpen 889 MiB 1,038 MiB 1,243 MiB 1,397 MiB 1,694 MiB
TabsOpenSettled 876 MiB 977 MiB 1,105 MiB 1,252 MiB 1,632 MiB
TabsOpenForceGC 795 MiB 966 MiB 1,096 MiB 1,235 MiB 1,540 MiB
TabsClosed 794 MiB 996 MiB 977 MiB 889 MiB 883 MiB
TabsClosedSettled 738 MiB 925 MiB 876 MiB 823 MiB 832 MiB
TabsClosedForceGC 621 MiB 800 MiB 799 MiB 755 MiB 747 MiB

Conclusions

Simply put: the more content processes we use, the more memory we use. On the plus side it’s not a 1:1 factor; with 8 content processes we see roughly a doubling of memory usage on the TabsOpenSettled measurement. It’s a bit worse on Windows, a bit better on OSX, but it’s not 8 times worse.
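For concreteness, here is that ratio computed from the TabsOpenSettled rows above (0 vs. 8 content processes):

```python
# TabsOpenSettled values in MiB, copied from the tables above.
baseline = {'Linux 64-bit': 448, 'Windows 7 32-bit': 463,
            'Windows 7 64-bit': 625, 'OSX': 876}
with_8_procs = {'Linux 64-bit': 872, 'Windows 7 32-bit': 1108,
                'Windows 7 64-bit': 1338, 'OSX': 1632}

for platform in baseline:
    print(platform, round(with_8_procs[platform] / baseline[platform], 2))
# Linux 64-bit 1.95, Windows 7 32-bit 2.39, Windows 7 64-bit 2.14, OSX 1.86
```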

Overall we see a 10-20% increase in memory usage for the 1 content process case (which is what we plan on shipping initially). This seems like a fair tradeoff for potential security and performance benefits, but as we try to grow the number of content processes we’ll need to take another look at where that memory is being used.

For the next steps I’d like to take a look at how our memory usage compares to other browsers. Expect a follow up post on that shortly.

Support.Mozilla.Org: What’s up with SUMO – 11th February

Hello, SUMO Nation!

How have you been? Apparently it’s the season to be a bit sick here and there… Hopefully you’re all fine and happy! We wish you a moderately easy-going weekend and dive straight into the most recent updates from the world of SUMO. Let’s go!

Welcome, new contributors!

If you just joined us, don’t hesitate – come over and say “hi” in the forums!

Contributors of the week

  • Safwan – for his continuous coding awesomeness and working on making our localizers’ lives easier with his magnificent additions to Kitsune – tōmākē anēka dhan’yabāda!

We salute you!

Don’t forget that if you are new to SUMO and someone helped you get started in a nice way you can nominate them for the Buddy of the Month!

Most recent SUMO Community meeting

The next SUMO Community meeting…

  • …is happening on Monday the 15th of February – join us!
  • Reminder: if you want to add a discussion topic to the upcoming meeting agenda:
    • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Monday (this will make it easier to have an efficient meeting).
    • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).
    • If you can, please attend the meeting in person (or via IRC), so we can follow up on your discussion topic during the meeting with your feedback.

Developers

Community

Social

Support Forum

  • No updates from the forums… But soon… ;-)

Knowledge Base

Localization

  • The “Save as Draft” feature has been submitted and released, thanks to Safwan, our resident coding superhero :-) Now, when you localize a long article and need to take a break, you can save a “draft” version to finish it later – and it won’t get in the way of other localizers! Documentation about this coming soon.
  • We are deprioritizing Firefox OS content localization for now (more context here). If you have questions about it, let me know.
  • Goals for February can be found on your dashboards – go forth and localize!
  • Spanish, Portuguese, French, Italian, Polish, and German localizers looking for screenshots for the new Firefox for iOS 2.0 content – look no more! They’re here, courtesy of Joni.
  • Do you speak a fringe language or know someone who does? Take a look at this summerlab session in San Sebastian!

Firefox

  • for iOS
    • We’re almost there for 2.0… are you ready, iOS users?

There you go, I hope you enjoyed this brief summary of what’s going on with SUMO. Let me know in the comments! Don’t forget to talk to your friends and family about mzl.la/support, mzl.la/help, and maybe even mzl.la/sumodev :-) See you around!

 

Air Mozilla: Web QA Weekly Meeting, 11 Feb 2016

Web QA Weekly Meeting This is our weekly gathering of Mozilla's Web QA team filled with discussion on our current and future projects, ideas, demos, and fun facts.

Mark Surman: MoFo 2016 Goals + KPIs

Earlier this month, we started rolling out Mozilla Foundation’s new strategy. The core goal is to make the health of the open internet a mainstream issue globally. We’re going to do three things to make this happen: shape the agenda; connect leaders; and rally citizens. I provided an overview of this strategy in another post back in December.


As we start rolling out this strategy, one of our first priorities is figuring out how to measure both the strength and the impact of our new programs. A team across the Foundation has spent the past month developing an initial plan for this kind of measurement. We’ve posted a summary of the plan in slides (here) and the full plan (here).

Preparing this plan not only helped us get clear on the program qualities and impact we want to have, it also helped us come up with a crisper way to describe our strategy. Here is a high level summary of what we came up with:

1. Shape the agenda

Impact goal: our top priority issues are mainstream issues globally (e.g. privacy).
Measures: citations of Mozilla / MLN members, public opinion

2. Rally citizens

Strength goal: rally 10s of millions of people to take action and change how they — and their friends — use the web.
Measures: # of active advocates, list size

Impact goal: people make better, more conscious choices. Companies and governments react with better products and laws.
Measures: per campaign evaluation, e.g. educational impact or did we defeat bad law?

3. Connect leaders

Strength goal: build a cohesive, world class network of people who care about the open internet.
Measures: network strength; includes alignment, connectivity, reach and size

Impact goal: network members shape + spread the open internet agenda.
Measures: participation in agenda-setting, citations, influence evaluation

Last week, we walked through this plan with the Mozilla Foundation board. What we found: it turns out that looking at metrics is a great way to get people talking about the intersection of high level goals and practical tactics. E.g. we need to be thinking about tools other than email as we grow our advocacy work outside of Europe and North America.

If you’re involved in our community or just following along with our plans, I encourage you to open up the slides and talk them through with some other people. My bet is they will get you thinking in new and creative ways about the work we have ahead of us. If they do, I’d love to hear thoughts and suggestions. Comments, as always, welcome on this post and by email.

The post MoFo 2016 Goals + KPIs appeared first on Mark Surman.

Air MozillaReps weekly, 11 Feb 2016

Reps weekly This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Christian HeilmannMaking ES6 available to all with ChakraCore – A talk at JFokus2016

Today I gave two talks at JFokus in Stockholm, Sweden. This is the one about JavaScript and ChakraCore.


Presentation: Making ES6 available to all with ChakraCore
Christian Heilmann, Microsoft

2015 was a year of massive JavaScript innovation and changes. Lots of great features were added to the language, but using them was harder than before, as not all features are backwards compatible with older browsers. Now browsers have caught up, and with the open sourcing of ChakraCore you have a JavaScript runtime to embed in your products and reliably get ECMAScript support. Chris Heilmann of Microsoft tells the story of the language and the evolution of the engine, and leaves you with a lot of tips and tricks on how to benefit from the new language features in a simple way.

I wrote the talk the night before, and thought I'd structure it the following way:

  • Old issues
  • The learning process
  • The library/framework issue
  • The ES6 buffet
  • Standards and interop
  • Breaking monopolies

Slides

The Slide Deck is available on Slideshare.

Screencast

A screencast of the talk is on YouTube.

Resources:

Soledad PenadesScore another one for the web!

Last week I made a quick trip to Spain. It was a pretty early flight and I was quite sleepy and so… I totally forgot my laptop! I indeed thought that my bag felt “a bit weird”, as the laptop makes the back flat (when it’s in the bag), but I was quite zombified, and so I just kept heading to the station.

I realised my laptop wasn’t there by the time I had to take my wallet out to buy a train ticket. You see, TFL have been making a really big noise about the fact that you can now use your Oyster to travel to Gatwick. But they have been very quiet about requiring people to have enough credit in their cards to pay the full amount of the ticket. And since I use “auto top up”, sometimes my card might have £18. Sometimes it won’t, as in this case.

Anyway, I didn’t want to go back for the laptop, as I was going on a short holidays trip, and a break from computers would be good. Except… I did have stuff to do, namely researching for my next trip!

I could use my phone, but I quite dislike using phones for researching trips: the screen is just too small, the keyboard is insufferable, and I want to open many tabs, look at maps, go back and forth, which isn’t easy on a phone, etc. I could also borrow some relative’s laptop… or I could try to resuscitate an old tablet that I hadn’t used since 2013!

It had become faulty at the beginning of 2013, but I thought I had fixed it. But months later, it decided to enter its mad loop of “restart, restart, restart and repeat” during a transatlantic flight. I had to hide it in my bag and let it expire its battery. And then I was very bored during both the rest of the flight, and the flight back, as all my carefully compiled entertainment was on it. Bah! And so I stopped using it and at some point I brought it to Spain, “just in case”.

Who would have guessed I’d end up using it again!?

I first spent about 30 minutes looking for a suitable plug for the charger. This tablet requires 2A and all the USB chargers I could find were 0.35A or 0.5A. The charger only had USA style pins, but that part could be removed, and revealed a “Mickey mouse” connector, or C7/C8 coupler if you want to be absolutely specific. A few years ago you could find plenty of appliances using this connector, but nowadays? I eventually found the charger for an old camera, with one of these cables! So I made a Frankenchargenstein out of old parts. Perfect.

The tablet took a long time to even show the charging screen. After a while I could finally turn it on, and oh wow, Android has changed a lot for the better since 3.1. But even if this tablet could be updated easily, I had no laptop and no will to install developer tools on somebody else’s laptop. So I was stuck in 3.1.

The Play Store behaved weirdly, with random glitches here and there. Many apps would not show any update, as developers have moved on to use newer versions of the SDK in order to use new hardware features and what not, and I don’t blame them, because programming apps that can work with different SDKs and operating system versions in Android is a terribly painful experience. So the easiest way to deal with old hardware or software versions is just not supporting them at all. But this leaves out people using outdated devices.

One of these “discriminatory apps” I wanted to install for my research was a travel app which lets you save stuff you would like to visit, and displays it on a map, which is very convenient for playing it by ear when you’re out and about. Sadly, it did not offer a version compatible with my device.

But I thought: Firefox still works in Android 3.1!

I got it updated to the latest version and opened the website for this app/service, and guess what? I could access the same functionalities I was looking for, via the web.

And being really honest, it was even better than using the app. I could have a tab with the search results, and open the interesting ones in a different tab, then close them when I was done perusing, without losing the scrolling point in the list. You know… like we do with normal websites. And in fact we’re not even doing anything superspecial with the app either. It’s not like it’s a high end game or like it works offline (which it doesn’t). Heck, it doesn’t even work properly when the network is a bit flaky… like most of the apps out there 😛

So sending a huge thanks to all the Firefox for Android team for extending the life of my ancient device, and a sincere message to app makers: make websites, not apps 😉

flattr this!

Air MozillaQuality Team (QA) Public Meeting, 10 Feb 2016

Quality Team (QA) Public Meeting The bi-monthly status review of the Quality team at Mozilla. We showcase new and exciting work on the Mozilla project, highlight ways you can get...

Air MozillaThe Joy of Coding - Episode 44

The Joy of Coding - Episode 44 mconley livehacks on real Firefox bugs while thinking aloud.

Chris H-CSSE2 Support in Firefox Users

Let me tell you a story.

Intel invented the x86 assembly language back in the Dark Ages of the Late 1970s. It worked, and many CPUs implemented it, consolidating a fragmented landscape into a more cohesive and compatible whole. Unfortunately, x86 had limitations, so in time it would have to go.

Lo, the time came in the Middle Ages of the Mid 1980s when x86 had to be replaced with something that could handle 32-bit widths for numbers and addresses. And more registers. And yet more addressing modes.

But x86 was popular, so Intel didn’t replace it. Instead they extended it with something called IA-32. And it was popular as well, not least because it was backwards-compatible with basic x86: all of the previous x86 programs would work on x86 + IA-32.

By now, personal and business computing was well in the mainstream. This means Intel finally had some data on what, at the lowest level, programmers were wanting to run on their chips.

It turns out that most of the heaviest computations people wanted to do on computers were really simple to express: multiply this list of numbers by a number, add these two lists of numbers together… spreadsheet sorts of things. Finance sorts of things.

But also video games sorts of things. Windows 95 released with DirectX and unleashed a flood of computer gaming. To the list we can now add: move every point and pixel of this 3D model forward by one step, transform all of this geometry and these textures from this camera POV to that one, recolour this sprite’s pixels to be darker to account for shade.

The structure all of these (and a lot of other) tasks had in common was that they all wanted to do one thing (multiply, add, move, transform, recolour) over multiple pieces of data (one list of numbers, multiple lists of numbers, points and pixels, geometry and textures, sprite colours).

SIMD stands for Single Instruction Multiple Data and is how computer engineers describe these sorts of “do one action over and over again to every individual element in this list of data” operations.
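To make that concrete, here is a tiny sketch in Rust using the std::arch SSE2 intrinsics (it assumes an x86-64 machine, and it is an illustration I made up, not any browser's actual code) of one instruction doing four additions at once:

    // One SSE2 instruction, _mm_add_epi32, adds four 32-bit integers to four
    // others in a single operation.
    #[cfg(target_arch = "x86_64")]
    fn main() {
        use std::arch::x86_64::*;
        let a: [i32; 4] = [1, 2, 3, 4];
        let b: [i32; 4] = [10, 20, 30, 40];
        let mut out = [0i32; 4];
        // SSE2 is part of the x86-64 baseline, so these intrinsics can be
        // called here; they are still marked unsafe in std::arch.
        unsafe {
            let va = _mm_loadu_si128(a.as_ptr() as *const __m128i);
            let vb = _mm_loadu_si128(b.as_ptr() as *const __m128i);
            let sum = _mm_add_epi32(va, vb); // one instruction, four additions
            _mm_storeu_si128(out.as_mut_ptr() as *mut __m128i, sum);
        }
        println!("{:?}", out); // prints [11, 22, 33, 44]
    }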

So, for Intel’s new flagship “Pentium” processor they were releasing in 1997 they introduced a new extension: MMX (which doesn’t stand for anything. They apparently chose the letters because they looked cool). MMX lets you do some of those SIMD things directly at the lowest level of the computer with the limitation that you can’t also be performing high-precision math at the same time.

AMD was competing with Intel. Not happy with the limitations of the MMX extension, they developed their own x86 extension “3DNow!” which performed the same operations, but without the limitations and with higher precision.

Intel retaliated with SSE: Streaming SIMD Extensions. They shipped it on their Pentium III processors starting in ’99. It wasn’t a full replacement for MMX, though, so they had to quickly follow it up in the Pentium 4.

Which finally brings us to SSE2. First released in 2001 in the Pentium 4 line of processors (also implemented by AMD two years later in their Opteron line), it reimplemented MMX’s capabilities without its shortcomings (and added some other capabilities at the same time).

So why am I talking ancient history? 2001 was fifteen years ago. What use do we have for this lesson on SSE2 when even SSE4 has been around since ’07, and AVX-512 will ship on real silicon within months?

Well, it turns out that Firefox doesn’t assume you have SSE2 on your computer. It can run on fifteen-year-old hardware, if you have it.

There are some code paths that benefit strongly from the ability to run the SIMD instructions present in SSE2. If Mozilla can’t assume that everyone running Firefox has a computer capable of running SSE2, Firefox has to detect, at runtime, whether the user’s computer is capable of using that fast path.
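Firefox's real implementation is C++, but a rough sketch of that kind of runtime check (here in Rust, using the standard library's CPU feature detection; the function names and structure are invented for illustration) looks something like this:

    // Detect SSE2 at runtime, then dispatch to a fast path or a portable one.
    fn sum_pairwise(a: &[i32], b: &[i32]) -> Vec<i32> {
        #[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
        {
            if is_x86_feature_detected!("sse2") {
                // Fast path: compiled with SSE2 enabled, so the compiler may
                // use SIMD instructions for this loop.
                return unsafe { sum_pairwise_sse2(a, b) };
            }
        }
        // Portable fallback for CPUs without SSE2.
        a.iter().zip(b).map(|(x, y)| x + y).collect()
    }

    #[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
    #[target_feature(enable = "sse2")]
    unsafe fn sum_pairwise_sse2(a: &[i32], b: &[i32]) -> Vec<i32> {
        a.iter().zip(b).map(|(x, y)| x + y).collect()
    }

    fn main() {
        println!("{:?}", sum_pairwise(&[1, 2, 3], &[10, 20, 30]));
    }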

This makes Firefox bigger, slower, and harder to test and maintain.

A question came up on the dev-platform mailing list about how many Firefox users are actually running computers that lack SSE2. I live in a very rich country and have a very privileged job. Any assumption I make about who does and does not have the ability to run computers that are newer than fifteen years old is going to be clouded by biases I cannot completely account for.

So we turn to the data. Which means Telemetry. Which means I get to practice writing custom analyses. (Yay!)

It turns out that, if a Firefox user has Telemetry enabled, we ask that user’s computer about a lot of environmental information. What is your operating system? What version? How much RAM do you have installed? What graphics card do you have? What version is its driver?

And, yes: What extensions does your CPU support?

We collect this information to determine from real users’ machines whether a particular environmental variable makes Firefox behave poorly. In the not-too-distant past there was a version of Firefox that would just appear black. No reason, no recourse, no explanation. By examining environmental data we were able to track down what combination of graphics cards and driver versions were susceptible to this and develop a fix within days.

(If you want to see an application of this data yourself, here is a dashboard showing the population breakdown of Firefox users. You can use it to see how much of the Firefox user base is like you. For me, less than 1% of the user base was running a computer like mine with a Firefox like mine, reinforcing that what I might think makes sense may not exactly be representative of reality for Firefox users.)

So I asked of the data: of all the users reporting Telemetry on Thursday January 21, 2016, how many have SSE2 capability on their CPUs?

And the answer was: about 99.5%

This would suggest that at most 0.5% of the Firefox release population are running CPUs that do not have SSE2. This is not strictly correct (there are a variety of data science reasons why we cannot prove anything about the population that doesn’t report SSE2 capability), but it’s a decent approximation so let’s go with it.

From there, as with most Telemetry queries, there were more questions. The first was: “Are the users not reporting SSE2 support keeping themselves on older versions of Firefox?” This is a good question because, if the users are keeping themselves on old versions, we can enable SSE2 support in new versions and not worry about the users being unable to upgrade because they already chose not to.

Turns out, no. They’re not.

With such a small population we’re subdividing (0.5%) it’s hard to say anything for certain, but it appears as though they are mostly running up-to-date versions of Firefox and, thus, would be impacted by any code changes we release. Ah, well.

The next questions were: “We know SSE2 is required to run Windows 8. Are these users stuck on Windows XP? Are there many Linux users without SSE2?”

Turns out: yes and no. Yes, they almost all are on Windows XP. No, basically none of them are running Linux.

Support and security updates for Windows XP stopped on April 8, 2014. It probably falls under Mozilla’s mission to try and convince users still running XP to upgrade themselves if possible (as they did on Data Privacy Day), to improve the security of the Internet and to improve those users’ access to the Web.

If you are running Windows XP, or administer a family member’s computer who is, you should probably consider upgrading your operating system as soon as you are able.

If you are running an older computer and want to know if you might not have SSE2, you can open a Firefox tab to about:telemetry and check the Environment section. Under system should be a field “cpu.extensions” that will contain the token “hasSSE2” if Firefox has detected that you have SSE2.

(If about:telemetry is mostly blank, try clicking on the ‘Change’ links at the top labelled “FHR data upload is disabled” and “Extended Telemetry recording is disabled” and then restarting your Firefox)

SSE2 will probably be coming soon as a system requirement for Firefox. I hope all of our users are ready for when that happens.

:chutten


Luis VillaReinventing FOSS user experiences: a bibliography

There is a small genre of posts around re-inventing the interfaces of popular open source software; I thought I’d collect some of them for future reference:

Recent:

Older:

The first two (Drupal, WordPress) are particularly strong examples of the genre because they directly grapple with the difficulty of change for open source projects. I’m sure that early Firefox and VE discussions also did that, but I can’t find them easily – pointers welcome.

Other suggestions welcome in comments.

Robert O'Callahanrr Talk At linux.conf.au

For the last few days I've been attending linux.conf.au, and yesterday I gave a talk about rr. The talk is now online. It was a lot of fun and I got some good questions!

David LawrenceHappy BMO Push Day!

the following changes have been pushed to bugzilla.mozilla.org:

  • [1245003] increase the apache sizelimit used by the taskcluster image
  • [1246864] Unable to comment tickets with “WONTFIX” status without change the status on the experimental UI

discuss these changes on mozilla.tools.bmo.


Michael KaplyMac Admin and Developer Conference UK Presentation

I did a presentation today at the Mac Admin and Developer Conference UK on configuring Firefox.

I’m making the slides available now, and will link to the video when it is available.

Also, here is a link to the demo AutoConfig file.

The Mozilla BlogThe Internet is a Global Public Resource

One of the things that first drew me to Mozilla was this sentence from our manifesto:

“The Internet is a global public resource that must remain open and accessible to all.”

These words made me stop and think. As they sunk in, they made me commit.

I committed myself to the idea that the Internet is a global public resource that we all share and rely on, like water. I committed myself to stewarding and protecting this important resource. I committed myself to making the importance of the open Internet widely known.

When we say, “Protect the Internet,” we are not talking about boosting Wi-fi so people can play “Candy Crush” on the subway. That’s just bottled water, and it will very likely exist with or without us. At Mozilla, we are talking about “the Internet” as a vast and healthy ocean.

We believe the health of the Internet is an important issue that has a huge impact on our society. An open Internet—one with no blocking, throttling, or paid prioritization—allows individuals to build and develop whatever they can dream up, without a huge amount of money or asking permission. It’s a safe place where people can learn, play and unlock new opportunities. These things are possible because the Internet is an open public resource that belongs to all of us.

Making the Internet a Mainstream Issue

Not everyone agrees that the health of the Internet is a major priority. People think about the Internet mostly as a “thing” other things connect to. They don’t see the throttling or the censorship or the surveillance that are starting to become pervasive. Nor do they see how unequal the benefits of the Internet have become as it spreads across the globe. Mozilla aims to make the health of the Internet a mainstream issue, like the environment.

Consider the parallels with the environmental movement for a moment. In the 1950s, only a few outdoor enthusiasts and scientists were talking about the fragility of the environment. Most people took clean air and clean water for granted. Today, most of us know we should recycle and turn out the lights. Our governments monitor and regulate polluters. And companies provide us with a myriad of green product offerings—from organic food to electric cars.

But this change didn’t happen on its own. It took decades of hard work by environmental activists before governments, companies and the general public took the health of the environment seriously as an issue. This hard work paid off. It made the environment a mainstream issue and got us all looking for ways to keep it healthy.

When it comes to the health of the Internet, it’s like we’re back in the 1950s. A number of us have been talking about the Internet’s fragile state for decades—Mozilla, the EFF, Snowden, Access, the ACLU, and many more. All of us can tell a clear story of why the open Internet matters and what the threats are. Yet we are a long way from making the Internet’s health a mainstream concern.

We think we need to change this, so much so that it’s now one of Mozilla’s explicit goals.

Read Mark Surman’s “Mozilla Foundation 2020 Strategy” blog post.

Starting the Debate: Digital Dividends

The World Bank’s recently released “2016 World Development Report” shows that we’re taking steps in the right direction. Past editions have focused on major issues like “jobs.” This year the report focuses directly on “digital dividends” and the open Internet.

According to the report, the benefits of the Internet, like inclusion, efficiency, and innovation, are unequally spread. They could remain so if we don’t make the Internet “accessible, affordable, and open and safe.” Making the Internet accessible and affordable is urgent. However,

“More difficult is keeping the internet open and safe. Content filtering and censorship impose economic costs and, as with concerns over online privacy and cybercrime, reduce the socially beneficial use of technologies. Must users trade privacy for greater convenience online? When are content restrictions justified, and what should be considered free speech online? How can personal information be kept private, while also mobilizing aggregate data for the common good? And which governance model for the global internet best ensures open and safe access for all? There are no  simple answers, but the questions deserve a vigorous global debate.”

—”World Development Report 2016: Main Messages,” p.3

We need this vigorous debate. A debate like this can help make the open Internet an issue that is taken seriously. It can shape the issue. It can put it on the radar of governments, corporate leaders and the media. A debate like this is essential. Mozilla plans to participate and fuel this debate.

Creating A Public Conversation

Of course, we believe the conversation needs to be much broader than just those who read the “World Development Report.” If we want the open Internet to become a mainstream issue, we need to involve everyone who uses it.

We have a number of plans in the works to do exactly this. They include collaboration with the likes of the World Bank, as well as our allies in the open Internet movement. They also include a number of experiments in a.) simplifying the “Internet as a public resource” message and b.) seeing how it impacts the debate.

Our first experiment is an advertising campaign that places the Internet in a category with other human needs people already recognize: Food. Water. Shelter. Internet. Most people don’t think about the Internet this way. We want to see what happens when we invite them to do so.

The outdoor campaign launches this week in San Francisco, Washington and New York. We’re also running variations of the message through our social platforms. We’ll monitor reactions to see what it sparks. And we will invite conversation in our Mozilla social channels (Facebook & Twitter).

Billboard_Food-Shelter-Water_Red

Billboard_Food-Shelter-Water_Blue

Fueling the Movement

Of course, billboards don’t make a movement. That’s not our thinking at all. But we do think experiments and debates matter. Our messages may hit the mark with people and resonate, or they may tick them off. But our goal is to start a conversation about the health of the Internet and the idea that it’s a global resource that needs protecting.

Importantly, this is one experiment among many.

We’re working to bolster the open Internet movement and take it mainstream. We’re building easy encryption technology with the EFF (Let’s Encrypt). We’re trying to make online conversation more inclusive and open with The New York Times and The Washington Post (Coral Project). And we’re placing fellows and working on open Internet campaigns with organizations like the ACLU, Amnesty International, and Freedom of the Press Foundation (Open Web Fellows Program). The idea is to push the debate on many fronts.

About the billboards, we want to know what you think:

  • Has the time come for the Internet to become a mainstream concern?
  • Is it important to you?
  • Does it rank with other primary human needs?

I’m hoping it does, but I’m also ready to learn from whatever the results may tell us. Like any important issue, keeping the Internet healthy and open won’t happen by itself. And waiting for it to happen by itself is not an option.

We need a movement to make it happen. We need you.

The Mozilla BlogMartin Thomson Appointed to the Internet Architecture Board

Standards are a key part of keeping the Open Web open. The Web runs on standards developed mainly by two standards bodies: the World Wide Web Consortium (W3C), which standardizes HTML and Web APIs, and the Internet Engineering Task Force (IETF), which standardizes networking protocols, such as HTTP and TLS, the core transport protocols for the Web. I’m pleased to announce that Martin Thomson, from the CTO group, was recently appointed to the Internet Architecture Board (IAB), the committee responsible for the architectural oversight of the IETF standards process.

Martin’s appointment recognizes a long history of major contributions to the Internet standards process, including serving as editor for HTTP/2, the newest and much improved version of HTTP, helping to design, implement, and document WebPush, which we just launched in Firefox, and playing major roles in WebRTC, TLS and Geolocation. In addition to his standards work, Martin has committed code all over Gecko, in areas ranging from the WebRTC stack to NSS. Serving on the IAB will give Martin a platform to do even greater things for the Internet and the Open Web as a whole.

Please join me in congratulating Martin.

Jorge VillalobosWebExtensions presentation at FOSDEM 2016

Last week, a big group of Mozillians converged in Brussels, Belgium for FOSDEM 2016. FOSDEM is a huge free and open source event, with thousands of attendees. Mozilla had a stand and a “dev room” for a day, which is a room dedicated to Mozilla presentations.

This year I attended for the first time, and I gave a presentation titled Building Firefox Add-ons with WebExtensions. The presentation covers some of the motivations behind the new API. I also spent a little time going over one of the WebExtensions examples on MDN. I only had 30 minutes for the whole talk, so it was all covered fairly quickly.

The presentation went well, and there were lots of people showing interest and asking questions. I felt the same about all of the Mozilla presentations I attended, which makes me want to kick myself for not trying to go to FOSDEM before. It’s a great venue to discuss our ideas, and I want us to come back and do more. We have lots of European contributors and have been looking for a good venue for a meetup. This looks ideal, so maybe next year ;).

Mark SurmanThe Internet is a Global Public Resource

[This blog post originally appeared on blog.mozilla.org on February 8, 2016]

The post The Internet is a Global Public Resource appeared first on Mark Surman.

The Servo BlogThis Week In Servo 50

In the last week, we landed 113 PRs in the Servo organization’s repositories.

Alan Jeffrey has been made a reviewer! We look forward to his help with the huge PR backlog :-)

Notable Additions

  • larsberg moved our Linux builds onto reserved EC2 instances. Same cost, way more availability!
  • nox removed the in-tree version of HeapSizeOf
  • kichjang disabled some of our worst intermittents while we investigate the cause
  • manish made WebSockets work in a worker scope
  • aneeshusa fixed our Mac builder provisioning code around the multiple versions of autoconf required by our dependencies
  • bholley continued his work to refactor the Servo style system code for easier uplifting into Gecko
  • ajeffrey released v0.1.0 of the parsell parser combinator library

New Contributors

Screenshot

No screenshot this week.

Meetings

We had a meeting on final updates to our changed meeting times, the status of the stylo work, and the incoming WebRender PR to master.

Florian QuèzeProject ideas wanted for Summer of Code 2016

Google is running Summer of Code again in 2016. Mozilla has had the pleasure of participating many years so far, and even though we weren't selected last year, we are hoping to participate again this year. In the next few weeks, we need to prepare a list of suitable projects to support our application.

Can you think of a 3-month coding project you would love to guide a student through? This is your chance to get a student focusing on it for 3 months! Summer of Code is a great opportunity to introduce new people to your team and have them work on projects you care about but that aren't on the critical path to shipping your next release.

Here are the conditions for the projects:

  • completing the project should take roughly 3 months of effort for a student;
  • any part of the Mozilla project (Firefox, Firefox OS, Thunderbird, Instantbird, SeaMonkey, Bugzilla, L10n, NSS, IT, and many more) can submit ideas, as long as they require coding work;
  • there is a clearly identified mentor who can guide the student through the project.

If you have an idea, please put it on the Brainstorming page, which is our idea development scratchpad. Please follow the instructions at the top.

The deadline to submit project ideas and help us be selected by Google is February 19th.

Note for students: the student application period starts on March 14th, but the sooner you start discussing project ideas with potential mentors, the better.

Nikki BeeOkay, but What Does Your Work Actually Mean, Nikki? Part 3: Translating A Standard Into Code

Over my previous two posts, I described my introduction to work on Servo, and my experience with learning and modifying the Fetch Standard. Now I’m going to combine these topics today, as I’ll be talking about what it’s like putting the Fetch Standard into practice in Servo. The process is roughly: I pick a part of the Fetch Standard to implement or improve; I write it on a step-by-step basis, often asking many questions; then when I feel it’s mostly complete, I submit my code for review and feedback.

I will talk about the review aspect in my next post, along with other things, as this entry ended up being pretty long!

Where To Start?

Whenever I realize I’m not sure what to be doing for the day, I go over my list of tasks, often talking with my project mentor about what I can do next. There’s a lot more work than I could manage in any internship - especially a 3-month-long one - so it’s good to keep in mind which aspects are the most important. Plus, I’m not equally skilled or knowledgeable about every aspect of Fetch or programming in Rust, and learning a new area more than halfway through my internship could be a significant waste of time. So, the main considerations are: “How important is this for Servo?”, “Will it take too long to implement?”, and “Do I know how to do this?”.

“How important is this for Servo?”

Often, my Servo mentor or co-workers are the only people who can answer “How important is this?”, since they’ve all been with Servo for much longer than me, and take a broader level view- personally, I only think of Servo in terms of the Fetch implementation, which isn’t too far off from reality: the Fetch implementation will be used by a number of parts of Servo which can easily use any of it, with clear boundaries for what should be handled by Fetch itself.

I’m not too concerned with what’s most important for the Fetch implementation, since I can’t answer it by myself. There’s always multiple things I could be doing, and I have a better idea for answering the other two aspects.

“Will it take too long to implement?”

“Will it take too long to implement?” is probably the hardest question, but one that gets just a bit easier all the time. Simply put, the more code I write, the better I can predict how long any specific task will take me to accomplish. There are always sort of random chances though: sometimes I run into surprising blocks for a single line of code; or I spend just half a day writing an entire Fetch step with no problems. Maybe with years of experience I will see those easy successes or hard failures coming as well, but for now, I’ll have to be content with rough estimates and a lenient time schedule.

“Do I know how to do this?”

The last question, “Do I know how to do this?”, depends greatly on what “this” is for me to be able to answer. Learning new aspects of Rust is always a closed book to me, in a way- I don’t know how difficult or simple any aspect of it will be to learn. Not to mention, just reading something has a minimal effect on my understanding. I need to put it into practice, multiple times, for me to really understand what I’m doing.

Unfortunately, programming Servo (or any project, really) doesn’t necessarily line up the concepts I need to learn and use in a nice order, so I often need to juggle multiple concepts of varying complexity at once. For generalized programming ideas that aren’t specific to Rust, though, I can better gauge my ability. Writing tests? Sure, I’ve done that plenty- it shouldn’t be difficult to apply that to Rust. Writing code that handles multiple concurrent threads? I’ve only learned enough to know that those buzzwords mean something- I’d probably need a month to be taught it well!

An Example Of Deciding A Task

Right now, and for the past while, my work on Servo has been focused on writing tests to make sure my implementation of Fetch, and the functions written previously by my co-workers, conform to what’s expected by the Fetch Standard. What factors in to deciding this is a good task though?

Importance

Most of the steps for Fetch are mostly complete by now. The steps that aren’t coded either cannot be done yet in Servo, or are not necessary for a minimally working Fetch protocol. Sure, I can make the Rust compiler happy- but just because I can run the code at all doesn’t mean it’s working right! Thus, before deciding that the basics of Fetch have been perfected and can be built on, extensive test coverage of the code is significantly important. Testing the code means I can intentionally create many situations, both to make sure the result matches the standard, and that errors come up at only the defined moments.

Time

Writing the tests is often straightforward. I just need to add a lot of conditionals, such as: the Fetch Standard says a basic filtered response has the type “basic”. That’s simple- I need to have the Fetch protocol return a basic filtered response, then verify that the type is “basic”! And so on for every declaration the Fetch Standard makes. The trickier side of this though is that I can’t be absolutely sure, until I run a test, whether or not the existing code supports this.
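As a rough, self-contained illustration of that kind of assertion (the types and the filtering function below are stand-ins I made up, not Servo's actual Fetch code):

    // Stand-in types; Servo's real Response carries much more state.
    #[derive(Debug, PartialEq)]
    enum ResponseType { Basic, Cors, Opaque }

    struct Response { response_type: ResponseType }

    // Pretend "basic filter" step; a real one would also strip certain headers.
    fn basic_filtered(_inner: Response) -> Response {
        Response { response_type: ResponseType::Basic }
    }

    #[test]
    fn basic_filtered_response_has_type_basic() {
        let inner = Response { response_type: ResponseType::Cors };
        let filtered = basic_filtered(inner);
        // The Fetch Standard says a basic filtered response has the type "basic".
        assert_eq!(filtered.response_type, ResponseType::Basic);
    }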

It’s simple on paper to return a basic filtered response- but when I tell my Fetch implementation to do so, maybe it’ll work right away, or maybe I’m missing several steps or made mistakes that prevent it from happening! The time is a bit open ended as a result, but that can be weighed with how good it would be to catch and repair a significant error.

Knowledge

I have experience with writing tests, as I like them conceptually very much. Testing a program by hand is often slow and difficult, and it’s hard to reproduce errors. When the testing itself is written in code, everything is accessible and easily repeatable. If I don’t know what’s going on, I can update the test code to be more informative and run it again, or dig into it with a debugger. So I have the knowledge of tests themselves- but what about testing in Rust, or even Servo?

I can tell you the answer: I hadn’t foreseen much difficulty (reading over how to write tests in Rust was easy), but I ended up lacking a lot of experience with testing Servo. Since Servo is a browser engine, and the Fetch protocol deals with fetching any kind of resource a browser accesses, I need to handle having a resource on a running server for Fetch to retrieve. While this is greatly simplified thanks to some of the libraries Servo works with, it still took a lot of time for me to build a good mental model of what I was doing and thus be able to write effective tests.

Actually Writing The Code

So I’ve said a lot about what goes into picking a task, but what about actually writing the code? That requires knowing how to translate a step from the programming language-abstracted Fetch Standard into Rust code. Sometimes this is almost exactly like the original writing, such as step 1 of the Main Fetch function, “Let response be null”, which in Rust looks like this: let response = None;.

In Rust, the let keyword makes a variable binding- it’s just a coincidence that the Main Fetch step uses the same word. And Rust’s equivalent of null is called None (which declares a variable that is currently holding nothing, and cannot be used for anything while still None, but it sets up the response variable now so it can be filled in later).

A More Complex Step

Of course, not every step is so literal to translate to Rust. Take step 10 of Main Fetch for instance: “If main fetch is invoked recursively, return response”. The first part of this is knowing what it’s saying, which is “if main fetch is invoked by itself or another function it invokes (ie., invoked recursively), return the response variable”. Translating that to Rust code gives us if main fetch is invoked recursively { return response }. This isn’t very good- main fetch is invoked recursively isn’t valid code, it’s a fragment of an English sentence.

The step doesn’t answer this for me, so I need to get answers elsewhere. There are two things I can do: keep reading more of the Fetch Standard (and check personal notes I’ve made on it), or ask for help. I’ll do both, in that order. Right now, I have two questions I need answers to: “When is Main Fetch invoked recursively?”, and “How can I tell when Main Fetch is invoked recursively?”.

Doing My Own Research

I often like to try to get answers myself before asking other people for help (although sometimes I do both at the same time, which has led to me answering my own questions immediately after asking a few times). I think it’s a good ideal to spend a few minutes trying to learn something on my own, but to also not let myself be stuck on any one problem for more than about 15 minutes, or 30 minutes at the absolute worst. I want to use my own time well- asking a question I can answer on my own shortly isn’t necessary, but not asking a question that is beyond my capability can end up wasting a lot more time.

So I’ll start trying to figure out this step by answering for myself, “When is Main Fetch invoked recursively?”. Most functions in the Fetch Standard invoke at least one other function, which is how it all works- each part is separated so they can be understood in smaller chunks, and can easily repeat specific steps, such as invoking Main Fetch again.

What I would normally need to do here is read through the Fetch Standard to find at least one point where Main Fetch is invoked from itself or another function it invokes. Thankfully, I don’t have to go read through each of those two functions, and everything they call, and so on, until I happen to get my answer, because of part of my notes I took earlier, when I was reading through as much of the Fetch Standard as I could.

I had decided to make short notes declaring when each Fetch function calls another one, and in what step it happens. At the time I did so to help give me an understanding of the relations between all the Fetch functions- now, it’s going to help me pinpoint when Main Fetch is invoked recursively! First I look for what calls Main Fetch, other than the initial Fetch function which primarily serves to set up some basic values, then pass it on to Main Fetch. The only function that does so is HTTP Redirect Fetch, in step 15. I can also see that HTTP Fetch calls HTTP Redirect Fetch in step 5, and that Main Fetch calls HTTP Fetch in step 9.

That was easy! I’ve now answered the question: “Main Fetch is invoked recursively by HTTP Redirect Fetch.”

Questions Are Welcome

However, I’m still at a loss for answering “How can I tell when Main Fetch is invoked recursively?”. Step 15 of HTTP Redirect Fetch doesn’t say to set any variable that would say “oh yeah, this is a recursive call now”. In fact, no such variable is defined by the Fetch Standard to be used!* So, I’ll ask my Servo mentor on IRC.

This example I’m covering actually happened a month or so ago, so I’m not going to pretend I can remember exactly what the conversation was. But the result of it was that the solution is actually very simple: I add a new boolean (a variable that is just true, or false) parameter (a variable that must be sent to a function when invoking it) to Main Fetch that’ll say whether or not it’s being invoked recursively. When Main Fetch is invoked from the Fetch function, I set it to false; when Main Fetch is invoked from HTTP Redirect Fetch, I set it to true.
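In rough outline (with stand-in types and invented names; Servo's real signatures differ), that change looks something like this:

    // Stand-in types; Servo's actual Request and Response are much richer.
    struct Request;
    struct Response;

    // Main Fetch now takes a flag saying whether it was invoked recursively.
    fn main_fetch(request: &mut Request, recursive_flag: bool) -> Response {
        let _ = request;
        // ... earlier steps elided ...
        // Step 10: "If main fetch is invoked recursively, return response."
        if recursive_flag {
            return Response;
        }
        // ... remaining steps ...
        Response
    }

    // The Fetch entry point invokes Main Fetch non-recursively.
    fn fetch(request: &mut Request) -> Response {
        main_fetch(request, false)
    }

    // HTTP Redirect Fetch re-enters Main Fetch, so it passes true.
    fn http_redirect_fetch(request: &mut Request) -> Response {
        main_fetch(request, true)
    }

    fn main() {
        let mut request = Request;
        let _ = fetch(&mut request);
        let _ = http_redirect_fetch(&mut request);
    }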

Repeat

This is the typical process for every single function and step in the Fetch Standard. For every function I might implement, improve, or test, I repeat a similar decision process. For every step, I repeat a similar question-answer procedure, although unfortunately not all English to code translations are so shortly resolved as the examples in this post.

Sometimes picking a new part of the Fetch Standard to implement means it ends up relying on another piece, and needs to be put on hold for that, or occasionally my difficulty with a step might result in a request for change to the Fetch Standard to improve understanding or logic, as I’ve described in previous posts.

This, in sum with the other two parts of this series, effectively describes the majority of my work on Servo! Hopefully, you now have a better idea of what it all means.

Fourth post ending.

  • After sharing a first draft of my post, it was pointed out that adding this to the Fetch spec would be simple to do, so this case will likely soon become irrelevant for understanding Fetch. But it’s still a neat, concise example, if easily outdated.

Air MozillaSuMo weekly community call

SuMo weekly community call The SuMo (Support Mozilla) community meets every Monday in the SuMo Vidyo channel; meetings are about 30 minutes and start at 17:00 UTC.

Brian KingConnected Devices at the Singapore Leadership Summit

Preface

Around 150 Mozillians gathered in Singapore for education, training, skills building, and planning to bring them and their communities into the fold on the latest Participation projects. With the overarching theme being leadership training, we had two main tracks. The first was Campus Campaign, a privacy focused effort to engage students as we work more with that demographic this year. Second, and the focus of this post, is Connected Devices.

Uncharted Ground

Having built out a solid Firefox OS Participation program last year, the organisation is moving more into Connected Devices. The challenge we have is to evolve the program to fit the new direction. However, the strategy and timeline have not been finalised, so in Singapore we needed to get people excited in a broader sense about what we are doing in this next phase of computing, and see what could be done now to start hacking on things.

Sessions

We had three main sessions during the weekend. Here is a brief summary of each.

Strategy And Update

We haven’t been standing still since we announced the changes in December. During this session John Bernard walked us through why Mozilla is moving in this direction, how the Connected Devices team has been coming together, and how initial project proposals have been going through the ‘gating process’. The team will be structured in three parts based on the three core pillars of Core, Consumer, and Collaboration. The latter is in essence an embedded participation team, led by John. We talked about some of the early project ideas floating around, and we discussed possible uses of the foxfooding program, cunningly labelled ‘Outside the Fox’.

John Bernard presenting

Dietrich Ayala then jumped in and talked about some of the platform APIs that we can use today to hook together the Web of Things. There are many ways to experiment today using existing Firefox OS devices and even Firefox desktop and mobile. The set of APIs in Firefox OS phones allow access to a wide range of sensors, enabling experimentation with physical presence detection, speech synthesis and recognition, and many types of device connectivity.

Check out the main sessions slides and Dietrich’s slides.

Research Co-Creation Workshop

Led by Rina Jensen and Jared Cole, the Participation and Connected Devices teams have been working on a project to explore the open source community and understand what makes people contribute and be part of communities, from open hardware projects to open data projects. During the session, a few key insights from that work were shared and provided to the group as input for a co-creation exercise. The participants then spent the next hour generating ideas focused on the ideal contributor experience. Rina and Jared are going to continue working closely with the Participation and Connected Devices teams to come up with a clear set of actionable recommendations.

Flore leading the way during co-creation exercise

You can find more information and links to session materials on the session wiki page.

Designing and Planning for Participation

True participation is working on all aspects of a project, from ideation, through implementation, to launch and beyond. The purpose of this session was two-fold:

  1. How can we design together an effective participation program for Connected Devices
  2. What can we start working on now? We wanted attendees to share their project ideas and start setting up the infrastructure NOW.

On the first topic, we didn’t get very far on the day due to time constraints, but it is something we work on all the time of course at Mozilla. We have a good foundation built with the Firefox OS Participation project. Connected Devices is a field where we can innovate and excel, and we see a lot of excitement for all Mozillians to lead the way here. This discussion will continue.

For the second topic, we wanted to come out of the weekend with something tangible, to send a strong message that volunteer leadership is looking to the future and are ready to build things now. We heard some great ideas, and then broke out into teams to start working on them. The result is thirteen projects to get some energy behind, and I’m sure many more will arise.

Read more about the projects.

In order to accelerate the next stage of tinkering and ideation, we’ve set up a small Mozilla Reps innovation fund to hopefully set in motion a more dynamic environment in which Mozillians can feel at home.

Working on a project idea

To Conclude

Connected Devices, Internet of Things, Web of Things. You have heard many labels for what is essentially the next era of computing. At Mozilla we want to ensure that, technology-wise, the Web is at the forefront of this revolution, and that the values we hold dear, such as privacy, are central. Now more than ever, open is important. Our community leaders are ready. Are you?

QMOFirefox 45 Beta 3 Testday Results

Hello Mozillians!

As you may already know, last Friday – February 5th – we held a new Testday, for Firefox 45 Beta 3 and it was another successful event!

We’d like to take this opportunity to thank Iryna Thompson, Chandrakant Dhutadmal, Mohammed Adam, Vuyisile Ndlovu, Spandana Vadlamudi, Ilse Macías, Bolaram Paul, gaby2300, Ángel Antonio, Preethi Dhinesh  and the people from our Bangladesh Community: Rezaul Huque Nayeem, Hossain Al Ikram, Raihan Ali, Moniruzzaman, Khalid Syfullah Zaman, Amlan BIswas, Abdullah Umar Nasib, Najmul Amin, Pranjal Chakraborty, Azmina Akter Papeya, Shaily Roy, Kazi Nuzhat Tasnem, Md Asaduzzaman John, Md.Tarikul Islam Oashi, Fahmida Noor, Fazle Rabbi, Md. Almas Hossain, Mahfuza Humayra Mohona, Syed Nayeem Roman, Saddam Hossain, Shahadat  Hossain, Abdullah Al Mamun, Maruf Rahman, Muhtasim kabir, Ratul Ahmed, Mita Halder, Md Faysal Rabib, Tanvir Rahman, Tareq Saifullah, Dhiman roy, Parisa Tabassum, SamadTalukdar, Zubair Ahmed, Toufiqul haque Mamun, Md. Nurnobi, Sauradeep Dutta, Noban Hasan, Israt  jahan, Md. Nazmus Shakib (Robin), Zayed News, Ashickur Rahman, Hasna Hena, Md. Rahimul islam, Mohammad Maruf Islam, Mohammed Jawad Ibne Ishaque, Kazi Nuzhat Tasnem and Wahiduzzaman Hridoy for getting involved in this event and making Firefox as best as it could be.

Results:

Also a big thank you goes to all our active moderators.

Keep an eye on QMO for upcoming events!

Gregory SzorcMozReview Git Support and Improved Commit Mapping

MozReview - Mozilla's Review Board based code review tool - now supports ingestion from Git. Previously, it only supported Mercurial.

Instructions for configuring Git with MozReview are available. Because blog posts are not an appropriate medium for documenting systems and processes, I will not say anything more here on how to use Git with MozReview.

Somewhat related to the introduction of Git support is an improved mechanism for mapping commits to existing review requests.

When you submit commits to MozReview, MozReview has to decide how to map those commits to review requests in Review Board. It has to choose whether to recycle an existing review request or create a new one. When recycling, it has to pick an appropriate one. If it chooses incorrectly, wonky things can happen. For example, a review request could switch to tracking a new and completely unrelated commit. That's bad.

Up until today, our commit mapping algorithm was extremely simple. Yet it seemed to work 90% of the time. However, a number of people found the cracks and complained. With Git support coming online, I had a feeling that Git users would find these cracks with higher frequency than Mercurial users due to what I perceive to be variations in the commit workflows of Git versus Mercurial. So, I decided to proactively improve the commit mapping before the Git users had time to complain.

Both the Git and Mercurial MozReview client-side extensions now insert a MozReview-Commit-ID metadata line in commit messages. This line effectively defines a (likely) unique ID that identifies the commit across rewrites. When MozReview maps commits to review requests, it uses this identifier to find matches. What this means is that history rewriting (such as reordering commits) should be handled well by MozReview and should not confuse the commit mapping mechanism.
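
For illustration, a rewritten commit might carry a trailer along these lines (the subject line and ID value below are made up; the real IDs are generated automatically by the client-side extensions):

Bug 123456 - Frobnicate the widget correctly. r?reviewer

MozReview-Commit-ID: ApVjWqNxKlE

Because that ID survives rebases and history rewrites, MozReview can match the amended commit back to its existing review request.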

I'm not claiming the commit mapping mechanism is perfect. In fact, I know of areas where it can still fall apart. But it is much better than it was before. If you think you found a bug in the commit mapping, don't hesitate to file a bug. Please have it block bug 1243483.

A side-effect of introducing this improved commit mapping is that commit messages will have a MozReview-Commit-ID line in them. This may startle some. Some may complain about the spam. Unfortunately, there's no better alternative. Both Mercurial and Git do support a hidden key-value dictionary for each commit object. In fact, the MozReview Mercurial extension has been storing the very commit IDs that now appear in the commit message in this dictionary for months! Unfortunately, actually using this hidden dictionary for metadata storage is riddled with problems. For example, some Mercurial commands don't preserve all the metadata. And accessing or setting this data from Git is painful. While I wish this metadata (which provides little value to humans) were not located in the commit message where humans could be bothered by it, it's really the only practical place to put it. If people find it super annoying, we could modify Autoland to strip it before landing. Although, I think I like having it preserved because it will enable some useful scenarios down the road, such as better workflows for uplift requests. It's also worth noting that there is precedent for storing unique IDs in commit messages for purposes of commit mapping in the code review tool: Gerrit uses Change-ID lines.

I hope you enjoy the Git support and the more robust commit to review request mapping mechanism!

Daniel GlazmanInventory and Strategy

“There’s class warfare, all right, but it’s my class, the native class, that’s making war, and we’re winning.” -- Android and iOS, blatantly stolen from Warren Buffet

Firefox OS tried to bring Web apps to the mobile world and it failed. It has been brain dead - for phones - for three days and the tubes preserving its life will be turned off in May 2016. I myself don't believe at all in the IoT space being a savior for Mozilla. There are better and older competitors in that space, companies or projects that bring smaller, faster, cleaner software architectures to IoT, where footprint and performance are an even more important issue than in the mobile space. Yes, this is a very fragmented market; no, I'm not sure FirefoxOS can address it and reach critical mass. In short, I don't believe in it at all.

Maybe it's time to discuss a little bit a curse word here: strategy. What would be a strategy for the near- and long-term future for Mozilla? Of course, what's below remains entirely my own view and I'm sure some readers will find it pure delirium. I don't really mind.

To do that, let's look a little bit at what Mozilla has in hand, and let's confront that with the conclusion drawn from the previous lines: native apps have won, at least for the time being.

  • Brains! So many hyper-talented brains at Mozilla!
  • Both desktop and mobile knowledge
  • An excellent, but officially unmaintained, runtime
  • Extremely high expertise on Web Standards and implementation of Web Standards
  • Extremely high expertise on JS
  • asm.js
  • Gaia, which implements a partial GUI stack from HTML, but is limited to mobile

We also need to take a look at Mozilla's past. This is neither an easy nor a pleasant inventory to make, but I think it must be done here, and to do it, we need to go back as far in time as the Netscape era.

Technology Year(s) Result
Anya 2003 AOL (Netscape's parent company) did not want Anya, a remote browser moving most of the CPU constraints to the server, and it died despite being open-sourced by its author. At the same time, Opera successfully launched Opera Mini and eventually acquired its competitor SkyFire. Opera Mini has been a very successful product on legacy phones and even smartphones in areas with poor mobile connectivity.
XUL 2003- Netscape - and later Mozilla - did not see any interest in bringing XUL to Standards committees. When competitors eventually moved to XML-based languages for UI, they adopted solutions (XAML, Flex, ...) that were not interoperable with it.
Operating System 2003- A linux+Gecko Operating System is not a new idea. It was already discussed back in 2003 - yes, 2003 - at Netscape and was too often met with laughter. It was mentioned again multiple times between 2003 and 2011, without any apparent success.
Embedding 2004- Embedding has always been the poor relation in Gecko's family. Officially dropped loooong ago, it drove embedders to WebKit and then Blink. At the time when embedding should have been improved, the focus was solely on Firefox for desktop. While I completely understand the rationale behind a focus on Firefox for desktop at that time, the consequences of abandoning embedding have been seriously underestimated.
Editing 2005- Back in 2004/2005, it was clear Gecko had the best in-browser core editor on the market. Former Netscape editor peers working on Dreamweaver compared mozilla/editor with what Macromedia/Adobe had in hand. The comparison was vastly in favor of Mozilla. It was also easy to predict the aging Dreamweaver would soon need a replacement for its editor core. But editing was considered non-essential at that time, more a burden than an asset, and no workforce was permanently assigned to it.
Developer tools 2005 In 2005, Mozilla was so completely mistaken on Developer Tools, a powerful attractor for early adopters and Web Agencies, that it wanted to get rid of the error console. At the same moment, the community was calling for more developer tools.
Runtime 2003- XULRunner has been quite successful for such a complex technology. Some rather big companies believed enough in it to implement apps that, even if you don't know their name, are still everywhere. As an example, there's at least one very large automotive group in Europe, a brand known world-wide, that uses XULRunner in all its test environments for car engines. That means all garages dealing with that brand use a XULRunner-fueled box...
But unfortunately, XULRunner was never considered essential, to the point that its name is still a codename. For some time, the focus was instead given to GRE, a shared runtime that was doomed to fail from the very first minute.
Update: XULRunner just died...
Asian market 2005 While the Asian market was exploding, Gecko was missing a major feature: vertical writing. It prevented Asian embedders from considering Gecko as the potential rendering engine to embed in Ebook reading systems. It also closed access to the Asian market for many other usages. But vertical writing did not become an issue to fix for Mozilla until 2015.
Thunderbird 2007 Despite growing adoption of Thunderbird in governmental organizations and some large companies, Mozilla decided to spin Thunderbird off into a Mail Corporation because it was unable to get a revenue stream from it. MailCo was eventually merged back with Mozilla and Thunderbird is again, in 2015/2016, in limbo at Mozilla.
Client Customization Kit 2003- Let's be clear, the CCK has never been seen as a useful or interesting project. Maintained only by the incredible will and talent of a single external contributor, many corporations rely on it to release Firefox to their users. Mozilla had no interest in corporate users. Don't we spend only 60% of our daily time at work?
E4X 2005-2012 Everyone had high expectations about E4X and many were ready to switch to E4X to replace painful DOM manipulations. Unfortunately, it never allowed manipulating DOM elements (BMO bug 270553), making it totally useless. E4X support was deprecated in 2012 and removed after Firefox 17.
Prism (WebRunner) 2007-2009 Prism was a webrunner, i.e. a desktop platform to run standalone self-contained web-based apps. Call them widgets if you wish. Prism was abandoned in 2009 and replaced by Mozilla Chromeless, which is itself inactive too.
Marketplace 2009 Several people called for an improved marketplace where authors could sell add-ons and standalone apps. That required a licensing mechanism and the possibility of black-boxing scripts. It was never implemented that way.
Browser Ballot 2010 The BrowserChoice.eu thing was a useless battle. While it brought some users to Firefox on the desktop, the real issue was clearly the lack of browser choice on iOS, world-wide. That issue still stands as of today.
Panorama (aka Tab Groups) 2010 When Panorama saw the light of day, some in the mozillian community (including yours truly) said it was bloated, not extensible, not localizable, based on painful code, hard to maintain in the long run and heterogeneous with the rest of Firefox, and that it was trying to change the center of gravity of the browser. Mozilla's answer came rather sharply and Panorama was retained. In late 2015, it was announced that Panorama would be retired because it's painful to maintain, is heterogeneous with the rest of Firefox and nobody uses it...
Jetpack 2010 Jetpack was a good step on the path towards HTML-based UI but a jQuery-like framework was not seen by the community as what authors needed and it missed a lot of critical things. It never really gained traction despite being the "official" add-on way. In 2015, Mozilla announced it would implement the WebExtensions global object promoted by Google Chrome, and WebExtensions is just a more modern and better integrated Jetpack on steroids. It also means playing Google's assistant again in order to meet the two-implementations constraint of standardization...
Firefox OS 2011 The idea of a linux+Gecko Operating System finally became reality. Four years later, the project is dead for mobile.
Versioning System 2011 When Mozilla moved to faster releases for Firefox, large corporations with slower deployment processes reacted quite vocally. Mozilla replied it did not care about dinosaurs of the past. More complaints led to ESR releases.
Add-ons 2015 XUL-based add-ons have been one of the largest attractors to Firefox. AdBlock+ alone deserves kudos, but more globally, the power of XUL-based add-ons that could interact with the whole Gecko platform and all of Firefox's UI has been a huge market opener. In 2015/2016, Mozilla plans to ditch XUL-based add-ons without having a real replacement for them, feature-for-feature.
Evangelism 2015 While Google and Microsoft have built first-class tech-evangelism teams, Mozilla saw its entire team flee in less than 18 months. I don't know (I really don't) the reason behind that intense bleeding but I read it as a very strong warning signal.
Servo 2016 Servo is the new cool kid on the block. With parallel layout and a brand new architecture, it should open new frontiers in the mobile world, finally unleashing the power of multicores. But instead of officially increasing the focus on Servo and decreasing the focus on Gecko, Gecko is going to benefit from Servo's rust-based components to extend its life. This is the old sustaining/disruptive paradigm from Clayton Christensen.

(I hope I did not make too many mistakes in the table above. At least, that's my personal recollection of the events. If you think I made a mistake, please let me know and I'll update the article.)

Let's be clear then: Mozilla really succeeded only three times. First, with Firefox on the desktop. Second, enabling the Add-ons ecosystem for Firefox. Third, with its deals with large search engine providers. Most of the other projects and products were eventually ditched for lack of interest, misunderstanding, time-to-market and many other reasons. Mozilla is desperately looking for a fourth major opportunity, and that opportunity can only extend the success of the first one or be entirely different.

The market constraints I see are the following:

  • Native apps have won
  • Mozilla's reputation as an embedded solutions provider among manufacturers will probably suffer a bit from the death of Firefox OS for phones. BTW, it probably suffers a bit among some employees too...

Given the assets and the skills, I see then only two strategic axes for Moz:

  1. Apple must accept third-party rendering engines even if it's necessary to sue Apple.
  2. Even if native apps have won, Web technologies remain the most widely adopted technologies among developers of all kinds and, guess what, that's exactly Mozilla's core knowledge! Let's make native apps from Web technos then.

I won't discuss item 1. I'm not a US lawyer and I'm not even a lawyer. But for item 2, here's my idea:

  1. If asm.js "provides a model closer to C/C++" (quote from asmjs.org's FAQ), it's still not possible to compile asm.js-based JavaScript into native code. I suggest defining a subset of ES2015/2016 that can be compiled to native, for instance through C++, C#, Obj-C and Java, and building the corresponding multi-target compiler. Before telling me it's impossible, please look at Haxe.
  2. I suggest extending the html "dialect" Gaia implements to cross-platform native UI and submitting it immediately to Standards bodies. Think Qt's ubiquity. The idea is not to show native-like (or even native) UI inside a browser window... The idea is to directly generate browser-less native UI from a html-based UI language, CSS and JS that can deal with every platform's UI elements. System menus, dock, icons, windows, popups, notifications, drawers, trees, buttons, whatever. Even if compiled, the UI should be DOM-modifiable just like XUL is today.
  3. WebComponents are ugly, and Google-centric. So many people think that and so few dare say it... Implementing them in Gecko acknowledges the power of Gmail and other Google tools but WebComponents remain ugly and make Mozilla a follower. I understand why Firefox needs them. But for my purpose, a simpler and certainly cleaner way to componentize and compile (see item 1) the behaviours of these components to native would be better.
  4. Build a cross-platform cross-device html+CSS+JS-based compiler to native apps from the above. Should be dead simple to install and use. A newbie should be able to get a native "Hello World!" app in minutes from a trivial html document. When a browser's included in the UI, make Gecko (or Servo) the default choice.
  5. Have a build farm where such html+CSS+JS are built for all platforms. Sell that service. Mozilla already knows pretty well how to do build farms.

That plan addresses:

  • Runtime requests
  • Embedding would become almost trivial, and far easier than Chromium Embedded Framework anyway... That will be a huge market opener.
  • XUL-less future for Firefox on Desktop and possibly even Thunderbird
  • XUL-less future for add-ons
  • unique source for web-based app and native app, whatever the platform and the device
  • far greater performance on mobile
  • A more powerful basis for Gaia's future
  • JavaScript is currently always readable through a few tools, from the Console to the JS debugger, and app authors don't want that.
  • a very powerful basis for Gaming, from html and script
  • More market share for Gecko and/or Servo
  • New revenue stream.

There are no real competitors here. All the other players in that field use a runtime that does not completely compile script to native, are not based on Web Standards, or are not really ubiquitous.

I wish the next-generation native source editor, the next-gen native Skype app, the next-gen native text processor, the next-gen native online and offline twitter client, the next native Facebook app, the next native video or 3D scene editor, etc. could be written in html+CSS+ECMAScript and compiled to native, and if they embed a browser, let it be a Mozilla browser if that's allowed by the platform.

As I wrote at the top of this post, you may find the above unfeasible, dead stupid, crazy, arrogant, expensive, whatever. Fine by me. Yes, as a strategy document, that's rather light w/o figures, market studies, cost studies, and so on. Absolutely, totally agreed. Only allow me to think out loud, and please do the same. I do because I care.

Updates:

  • E4X added
  • update on Jetpack, based on feedback from Laurent Jouanneau
  • update on Versioning and ESR, based on feedback from Fabrice Desré (see comments below)
  • XULrunner has died...

Clarification: I'm not proposing to do semi-"compilation" of html à la Apache Cordova. I suggest turning a well-chosen subset of ES2015 into truly native apps, and that's entirely different.

This Week In RustThis Week in Rust 117

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us an email! Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

This week's edition was edited by: Vikrant and Andre.

Updates from Rust Community

News & Blog Posts

Notable New Crates & Project Updates

Updates from Rust Core

121 pull requests were merged in the last week.

Notable changes

New Contributors

  • Alexander Lopatin
  • Brandon W Maister
  • Nikita Baksalyar
  • Paul Smith
  • Prayag Verma
  • qpid
  • Reeze Xia
  • Ryan Thomas
  • Sandeep Datta
  • Sean Leffler

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

fn work(on: RustProject) -> Money

Tweet us at @ThisWeekInRust to get your job offers listed here!

Crate of the Week

The week's Crate of the Week is roaring, the Rust version of Prof. D. Lemire's compressed bitmap data structure. I can personally attest that both the Rust and Java versions compare very favorably in both speed and size to other bit sets and are easy to use.
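
As a quick illustrative sketch (this assumes the roaring crate is listed as a dependency in Cargo.toml; method names reflect the crate's published API and may differ slightly between versions):

use roaring::RoaringBitmap;

fn main() {
    // Compressed bitmap: sparse, widely spread u32 values stay compact.
    let mut visited = RoaringBitmap::new();
    visited.insert(3);
    visited.insert(4_000_000);
    assert!(visited.contains(3));
    assert!(!visited.contains(5));
    println!("{} values stored", visited.len());
}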

Thanks to polyfractal for the suggestion.

Submit your suggestions for next week!

Mike HommeySSH through jump hosts, revisited

Close to 7 years ago, I wrote about SSH through jump hosts. Twice. While the method used back then still works, OpenSSH has grown a new option in version 5.3 that allows it to be simplified a bit, by not using nc.

So here is an updated rule, version 2016:

Host *+*
ProxyCommand ssh -W $(echo %h | sed 's/^.*+//;s/^\([^:]*$\)/\1:22/') $(echo %h | sed 's/+[^+]*$//;s/\([^+%%]*\)%%\([^+]*\)$/\2 -l \1/;s/:\([^:+]*\)$/ -p \1/')

The syntax you can use to connect through jump hosts hasn’t changed compared to previous blog posts:

  • With one jump host:
    $ ssh login1%host1:port1+host2:port2 -l login2
  • With two jump hosts:
    $ ssh login1%host1:port1+login2%host2:port2+host3:port3 -l login3
  • With three jump hosts:
    $ ssh login1%host1:port1+login2%host2:port2+login3%host3:port3+host4:port4 -l login4
  • etc.

Logins and ports can be omitted.
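
To make the sed gymnastics above more concrete, here is roughly what the rule expands to for a single jump host (the hostnames, port and login are placeholders):

$ ssh login1%host1:2222+host2

The Host *+* rule then effectively runs the following as the ProxyCommand:

$ ssh -W host2:22 host1 -p 2222 -l login1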

Update: Add missing port to -W flag when one is not given.

Jennifer BorissOnto New Challenges

After a wild and wonderful year and a half at Reddit, I’ve decided to move on. And when I say wild, I mean it. The past year has been the […]

Michael KohlerMozilla Switzerland Goals H1 2016

Back in November we had a Community Meetup. The goal was to get a current status on the Community and define plans and goals for 2016. To do that, we started with a SWOT analysis. You can find it here.

With these remarks in mind, we started to define goals for 2016. Since there are a lot of changes within one year, the goals will currently only focus on the first part of the year. Then we can evaluate them, shift metrics if needed, and define new goals. This allows us to be more flexible.

The goals are highly influenced by the OKR (Objectives and Key Results) framework. To document open issues that support these goals, I have created a repository in our MozillaCH GitHub organization. The idea is to assign the “overall goal” label to each issue. GitHub issues are well covered in GitHub's own documentation. There is a template you can use for new issues.

  • Objective 1: The community is vibrant and active due to structured contribution areas
  • Objective 2: MozillaCH is a valuable partner for privacy in Switzerland
  • Objective 3: There is a vibrant community in the “Romandie” which is part of the overall community
  • Objective 4: The MozillaCH website is the place to link to for community topics
  • Objective 5: With talks and events we increase our reach and provide a valuable information source regarding the Open Web
  • Objective 6: Social Media is a crucial part of our activities providing valuable information about Mozilla and the Open Web


We know that not all of those goals are easily achievable, but this gives us a good way to be ambitious. To a successful first half of 2016, let’s bring our community further and keep rocking the Open Web!


Cameron KaiserImagine there's no Intel transition ...

... and with a 12-core POWER8 workstation, it's easy if you try. Reported to me by a user is the Raptor Engineering Talos Secure Workstation, the first POWER workstation I've seen since the last of the PowerPC IntelliStations years ago. You can sign up for preorders so they know you're interested. (I did, of course. Seriously. In fact, I'm actually considering buying two, one as a workstation and the second as a new home server.) Since it's an ATX board, you can just stick it in any case you like with whatever power supply and options you want and configure to taste.

Before you start hyperventilating over the $3100 estimated price (which includes the entry-level 8-core CPU), remember that the Quad G5, probably the last major RISC workstation, cost $3300 new and this monster would drive it into the ground. Plus, at "only" 130 watts TDP, it certainly won't run anywhere near as hot as the G5 did either. Likely it will run some sort of Linux, though I can't imagine with its open architecture that the *BSDs wouldn't be on it like a toupee on William Shatner. Let's hope they get enough interest to produce a few, because I'd love to have an excuse to buy one and I don't need much of an excuse.

Mozilla Addons BlogHi, I’m Your New AMO Editor

You may have wondered who this “Scott DeVaney” is who posted February’s featured add-ons. Well it’s me. I just recently joined AMO as your new Editorial & Campaign Manager. But I’m not new to Mozilla; I’ve spent the past couple years managing editorial for Firefox Marketplace.

This is an exciting deal, because my job will be to not only maintain the community-driven editorial processes we have in place today, but to grow the program and build new endeavors designed to introduce even more Firefox users to the wonders of add-ons.

In terms of background, I’ve been editorializing digital content since 1999 when I got my first internet job as a video game editor for the now-dead CheckOut.com. That led to other editorial gigs at DailyRadar, AtomFilms, Shockwave, Comedy Central, and iTunes (before all that I spent a couple years working as a TV production grunt where my claim to fame is breaking up a cast brawl on the set of Saved by the Bell—The New Class; but that’s a story for a different blog.)

I’m sdevaney on IRC, so don’t be a stranger.

Mozilla Addons BlogAdd-on Compatibility for Firefox 45

Firefox 45 will be released on March 8th. Here’s the list of changes that went into this version that can affect add-on compatibility. There is more information available in Firefox 45 for Developers, so you should also give it a look.

General

UI

XPCOM

Signing

  • Firefox is currently enforcing add-on signing, with a preference to override it. Firefox 46 will remove the preference entirely, which means your add-on will need to be signed in order to run in release versions of Firefox. You can read about your options here.

New

  • Support a simplified JSON add-on update protocol. Firefox now supports a JSON update file for add-ons that manage their own automatic updates, as an alternative to the existing XML format (a minimal sketch of such a file follows below). For new add-ons, we suggest using the JSON format. For older add-ons, you shouldn’t switch until most of your users are on 45 and later.
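
For reference, a JSON update manifest along these lines should work (the add-on ID, URL and version numbers below are made up; check the add-on update documentation for the exact schema):

{
  "addons": {
    "my-addon@example.com": {
      "updates": [
        {
          "version": "1.2",
          "update_link": "https://example.com/downloads/my-addon-1.2.xpi",
          "applications": {
            "gecko": { "strict_min_version": "45.0" }
          }
        }
      ]
    }
  }
}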

Let me know in the comments if there’s anything missing or incorrect on these lists. If your add-on breaks on Firefox 45, I’d like to know.

The automatic compatibility validation and upgrade for add-ons on AMO will happen in the coming weeks, so keep an eye on your email if you have an add-on listed on our site with its compatibility set to Firefox 44.

Daniel PocockGiving up democracy to get it back

Do services like Facebook and Twitter really help worthwhile participation in democracy, or are they the most sinister and efficient mechanism ever invented to control people while giving the illusion that they empower us?

Over the last few years, groups on the left and right of the political spectrum have spoken more and more loudly about the problems in the European Union. Some advocate breaking up the EU, while behind the scenes milking it for every handout they can get. Others seek to reform it from within.

Yanis Varoufakis on motorbike

Most recently, former Greek finance minister Yanis Varoufakis has announced plans to found a movement (not a political party) that claims to "democratise" the EU by 2025. Ironically, one of his first steps has been to create a web site directing supporters to Facebook and Twitter. A groundbreaking effort to put citizens back in charge? Or further entangling activism in the false hope of platforms that are run for profit by their Silicon Valley overlords? A Greek tragedy indeed, in the classical sense.

Varoufakis rails against authoritarian establishment figures who don't put the citizens' interests first. Ironically, big data and the cloud are a far bigger threat than Brussels. The privacy and independence of each citizen is fundamental to a healthy democracy. Companies like Facebook are obliged - by law and by contract - to service the needs of their shareholders and advertisers paying to study and influence the poor user. If "Facebook privacy" settings were actually credible, who would want to buy their shares any more?

Facebook is more akin to an activism placebo: people sitting in their armchair clicking to "Like" whales or trees are having hardly any impact at all. Maintaining democracy requires a sufficient number of people to be actively involved, whether it is raising funds for worthwhile causes, scrutinizing the work of our public institutions or even writing blogs like this. Keeping them busy on Facebook and Twitter renders them impotent in the real world (but please feel free to alert your friends with a tweet).

Big data is one of the areas that requires the greatest scrutiny. Many of the professionals working in the field are actually selling out their own friends and neighbours, their own families and even themselves. The general public and the policy makers who claim to represent us are oblivious or reckless about the consequences of this all-you-can-eat feeding frenzy on humanity.

Pretending to be democratic is all part of the illusion. Facebook's recent announcement to deviate from their real-name policy is about as effective as using sunscreen to treat HIV. By subjecting themselves to the laws of Facebook, activists have simply given Facebook more status and power.

Data means power. Those who are accumulating it from us, collecting billions of tiny details about our behavior, every hour of every day, are fortifying a position of great strength with which they can personalize messages to condition anybody, anywhere, to think the way they want us to. Does that sound like the route to democracy?

I would encourage Mr Varoufakis to get up to speed with Free Software and come down to Zurich next week to hear Richard Stallman explain it the day before launching his DiEM25 project in Berlin.

Will the DiEM25 movement invite participation from experts on big data and digital freedom and make these issues a core element of their promised manifesto? Is there any credible way they can achieve their goal of democracy by 2025 without addressing such issues head-on?

Or put that the other way around: what will be left of democracy in 2025 if big data continues to run rampant? Will it be as distant as the gods of Greek mythology?

Still not convinced? Read about Amazon secretly removing George Orwell's 1984 and Animal Farm from Kindles while people were reading them, Apple filtering the availability of apps with a pro-Life bias and Facebook using algorithms to identify homosexual users.

Chris CooperRelEng & RelOps Weekly Highlights - February 5, 2016

This week, we have two new people starting in Release Engineering: Aki Sasaki (:aki) and Rok Garbas (:garbas). Please stop by #releng and say hi!

Modernize infrastructure:

This week, Jake and Mark added check_ami.py support to runner for our Windows 2008 instances running in Amazon. This is an important step towards parity with our Linux instances in that it allows our Windows instances to check when a newer AMI is available and terminate themselves to be re-created with the new image. Until now, we’ve needed to manually refresh the whole pool to pick up changes, so this is a great step forward.

Also on the Windows virtualization front, Rob and Mark turned on puppetization of Windows 2008 golden AMIs this week. This particular change has taken a long time to make it to production, but it’s hard to overstate the importance of this development. Windows is definitely *not* designed to manage its configuration via puppet, but being able to use that same configuration system across both our POSIX and Windows systems will hopefully decrease the time required to update our reference platforms by substantially reducing the cognitive overhead required for configuration changes. Anyone who remembers our days using OPSI will hopefully agree.

Improve CI pipeline:

Ben landed a Balrog patch that implements JSONSchemas for Balrog Release objects. This will help ensure that data entering the system is more consistent and accurate, and allows humans and other systems that talk to Balrog to be more confident about the data they’ve constructed before they submit it.

Ben also enabled caching for the Balrog admin application. This dramatically reduces the database and network load it uses, which makes it faster, more efficient, and less prone to update races.

Release:

We’re currently on beta 3 for Firefox 45. After all the earlier work to unhork gtk3 (see last week’s update), it’s good to see the process humming along.

A small number of stability issues have precipitated a dot release for Firefox 44. A Firefox 44.0.1 release is currently in progress.

Operational:

Kim implemented changes to consume SETA information for Android API 15+ test jobs using data from the API 11+ jobs until we have sufficient data for API 15+. This reduced the high pending counts for the AWS instance types used by Android. (https://bugzil.la/1243877)

Coop (hey, that’s me!) did a long-overdue pass of platform support triage. Lots of bugs got closed out (30+), a handful actually got fixed, and a collection of Windows test failures got linked together under a root cause (thanks, philor!). Now all we need to do is find time to tackle the root cause!

See you next week!

Mozilla WebDev CommunityExtravaganza – February 2016

Once a month, web developers from across Mozilla get together to talk about the work that we’ve shipped, share the libraries we’re working on, meet new folks, and talk about whatever else is on our minds. It’s the Webdev Extravaganza! The meeting is open to the public; you should stop by!

You can check out the wiki page that we use to organize the meeting, or view a recording of the meeting in Air Mozilla. Or just read on for a summary!

Shipping Celebration

The shipping celebration is for anything we finished and deployed in the past month, whether it be a brand new site, an upgrade to an existing one, or even a release of a library.

Git Submodules are Gone from MDN

First up was jezdez with news about MDN moving away from using git submodules to pull in dependencies. Instead, MDN now uses pip to pull in dependencies during deployment. Hooray!

Careers now on AWS/Deis

Next was giorgos who let us know that careers.mozilla.org has moved over to the Engagement Engineering Deis cluster on AWS. For deployment, the site has Travis CI build a Docker image and run tests against it. If the tests pass, the image is deployed directly to Deis. Neat!

Privacy Day

jpetto helped ship the Privacy Day page. It includes a mailing list signup form as well as instructions for several platforms on how to update your software to stay secure.

Automated Functional Testing for Mozilla.org

agibson shared news about the migration of previously-external functional tests for mozilla.org to live within the Bedrock repository itself. This allows us to run the tests, which previously were run by the WebQA team against live environments, whenever the site is deployed to dev, stage, or production. Having the functional tests be a part of the build pipeline ensures that developers are aware when the tests are broken and can fix them before deploying broken features. A slide deck is available with more details.

Peep 3.x

ErikRose shared news about the 3.0 (and 3.1) release of Peep, which helps smooth the transition from Peep to Pip 8, which now supports hashed requirements natively. The new Peep includes a peep port command for porting Peep-compatible requirements files to the new Pip 8 format.
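
As a rough usage sketch (file names are placeholders, and this assumes peep port writes the converted requirements to standard output):

$ peep port requirements.txt > requirements-hashed.txt
$ pip install --require-hashes -r requirements-hashed.txt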

Open-source Citizenship

Here we talk about libraries we’re maintaining and what, if anything, we need help with for them.

Jazzband

jezdez shared news about JazzBand, a cooperative experiment to reduce the stress of maintaining Open Source software alone. The group operates as a GitHub organization that anyone can join and transfer projects to. Anyone in the JazzBand can access JazzBand projects, allowing projects that would otherwise die due to lack of activity to thrive thanks to the community of co-maintainers.

Notable projects already under the JazzBand include django-pipeline and django-configurations. The group is currently focused on Python projects and is still figuring out things like how to secure releases on PyPI.

django-configurations 1.0

Speaking of the JazzBand, members of the collective pushed out the 1.0 release of django-configurations, which is an opinionated library for writing class-based settings files for Django. The new release adds Django 1.8+ support as well as several new features.

Roundtable

The Roundtable is the home for discussions that don’t fit anywhere else.

Travis CI Sudo for Specific Environments

Next was ErikRose with an undocumented tip for Travis CI builds. As seen on the LetsEncrypt travis.yml, you can specify sudo: required for a specific entry in the build matrix to run only that build on Travis’ sudo-enabled (non-container) infrastructure, while the rest of the matrix stays on the container-based infrastructure; a small sketch follows below.
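
A minimal sketch of what that might look like (the language, versions and env vars here are illustrative, not taken from the LetsEncrypt file):

language: python
sudo: false              # default: container-based infrastructure
matrix:
  include:
    - python: "2.7"
      env: TOXENV=py27
    - python: "2.7"
      env: TOXENV=integration
      sudo: required     # only this entry runs on the sudo-enabled VM infrastructure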

Docker on OS X via xhyve

Erik also shared xhyve, which is a lightweight OS X hypervisor. It’s a port of bhyve, and can be used as the backend for running Docker containers on OS X instead of VirtualBox. Recent changes that have made this more feasible include the removal of a 3 gigabyte RAM limit and experimental NFS support that, according to Erik, is faster than VirtualBox’s shared folder functionality. Check it out!


If you’re interested in web development at Mozilla, or want to attend next month’s Extravaganza, subscribe to the dev-webdev@lists.mozilla.org mailing list to be notified of the next meeting, and maybe send a message introducing yourself. We’d love to meet you!

See you next month!

Mozilla Security BlogMozilla Winter of Security-2015 MozDef: Virtual Reality Interface

Mozilla runs Winter of Security (MWoS) every year to give folks an opportunity to contribute to ongoing security projects in flight. This year an ambitious group took on the task of creating a new visual interface in our SIEM overlay for Elasticsearch that we call MozDef: The Mozilla Defense Platform.

Security personnel are in high demand and analyst skill sets are difficult to maintain. Rather than only focusing on making people better at security, I’m a firm believer that we need to make security better at people. Interfaces that are easier to comprehend and use seem to be a worthwhile investment in that effort and I’m thrilled with the work this team has done.

They’ve wrapped up their project with a great demo of their work. If you are interested in security automation tools and alternative user interfaces, take a couple minutes and check out their work over at Air Mozilla.

 

Air MozillaMozilla Winter of Security-2015 MozDef: Virtual Reality Interface

Mozilla Winter of Security-2015 MozDef: Virtual Reality Interface MWOS Students give an awesome demo of their work adding a unique interface to MozDef: The Mozilla Defense Platform.

Chris CooperWelcome (back), Aki!

Aki in Slave Unit: this actually is Aki.

In addition to Rok who also joined our team this week, I’m ecstatic to welcome back Aki Sasaki to Mozilla release engineering.

If you’ve been a Mozillian for a while, Aki’s name should be familiar. In his former tenure in releng, he helped bootstrap the build & release process for both Fennec *and* FirefoxOS, and was also the creator of mozharness, the python-based script harness that has allowed us to push so much of our configuration back into the development tree. Essentially he was devops before it was cool.

Aki’s first task in this return engagement will be to figure out a generic way to interact with Balrog, the Mozilla update server, from TaskCluster. You can follow along in bug 1244181.

Welcome back, Aki!

Chris CooperWelcome, Rok!

The Rock: this is *not* our Rok.

I’m happy to announce a new addition to Mozilla release engineering. This week, we are lucky to welcome Rok Garbas to the team.

Rok is a huge proponent of Nix and NixOS. Whether we end up using those particular tools or not, we plan to leverage his experience with reproducible development/production environments to improve our service deployment story in releng. To that end, he’s already working with Dustin who has also been thinking about this for a while.

Rok’s first task is to figure out how the buildbot-era version of clobberer, a tool for clearing and resetting caches on build workers, can be rearchitected to work with TaskCluster. You can follow along in bug 1174263 if you’re interested.

Welcome, Rok!

Carsten BookSheriff Newsletter for January 2016

Hi,
To give a little insight into our work and make our work more visible to our community, we decided to create a monthly report of what’s going on in the Sheriffs Team.
If you have questions or feedback, just let us know!
In case you don’t know who the sheriffs are, or to check if there are current issues on the tree, see:
Topics of this month!
1. How-To article of the month
2. Get involved
3. Statistics for January
4. Orange Factor
5. Contact
1. How-To article of the month and notable things!
-> In the Sheriff Newsletter we mentioned the “Orange Factor”, but what is this? It is simply the ratio of oranges (test failures) to test runs. The ideal value is, of course, zero.

Practically, this is virtually impossible for a code base of any substantial size, so it is a matter of policy as to what is an acceptable orange factor.

It is worth noting that the overall orange factor indicates nothing about the severity of the oranges. [4]

The main site where you can check out the “Orange Factor” is at https://brasstacks.mozilla.com/orangefactor/ and some interesting info is at https://wiki.mozilla.org/Auto-tools/Projects/OrangeFactor
-> As you might be aware, Firefox OS has moved into Tier 3 Support [5] – this means that there is no Sheriff Support anymore for the b2g-inbound tree.

Also, with the move into tier 3, b2g tests have moved to tier 3 and these tests are by default “hidden” on Treeherder. To view test results, for example on Treeherder for mozilla-central, you need to click the “show/hide excluded jobs” checkbox in the tree view.

2. Get involved!
Are you interested in helping out by becoming a Community Sheriff? Let us know!
3. Statistics
Intermittent bugs filed in January [1]: 667
and of those, 107 are closed [2]
For Tree Closing times and reasons see:
4. Orange Factor
Current Orangefactor [3]: 12.92
5.  How to contact us
There are a lot of ways to contact us. The fastest one is to contact
the sheriff on duty (the one with the |sheriffduty tag on their nick
:) or by emailing sheriffs @ mozilla dot org.

Karl Dubost[worklog] Outreach is hard, Webkit aliasing big progress

Tunes of the week: Earth, Wind and Fire. Maurice White, the founder, died at 74.

WebCompat Bugs

WebKit aliasing

  • When looking for usage of -webkit-mask-*, I remembered that Google Image was a big offender. So I tested again this morning and… delight! They now use SVG. So now I need to test Google search extensively and check if they can just send us the version they send to Chrome.
  • Testing Google Calendar again on Gecko with a Chrome user agent to see how far we are from receiving a better user experience. We can't really ask Google yet to send us the same thing they send to Chrome. A couple of glitches here and there. But we are very close. The best would be for Google to fix their CSS, specifically to make their flexbox and gradient usage standards-compatible.
  • The code for the max-width issue (not a bug but implementation differences due to an undefined scenario in the CSS specification) is being worked on by David Baron and reviewed by Daniel Holbert. And this makes me happy; it should solve a lot of the webcompat bug reports. Look at the list of SeeAlso in that bug.

Webcompat Life and Working with Developer Tools

  • Changing preferences all the time through "about:config" takes multiple steps. I liked how in Opera Presto you could link to a specific preference, so I filed a bug for Firefox. RESOLVED. It exists: about:config?filter=webkit, and it's bookmark-able.
  • Bug 1245365 - searching attribute values through CSS selectors override the search terms
  • A discussion has been started on improving the Responsive Design Mode of Firefox Developer Tools. I suggested a (too big) list of features that would make my life easier.

Firefox OS Bugs to Firefox Android Bugs

  • Mozilla's Web Compatibility employees reduced their support for solving Firefox OS bugs to a minimum. The community is welcome to continue to work on them. But some of these bugs still have an impact on Firefox Android. One good example of this is Bug 959137. Let's come up with a process to deal with those.
  • Another todo from last week. I have been closing a lot of old bugs (around 600 in a couple of days) in Firefox OS and Firefox Android in the Tech Evangelism product. The reasons for closing them are mostly:
    • the site doesn't exist anymore. (This goes into my list of Web Compatibility axioms: "Wait long enough, every bug disappears.")
    • the site fixed the initial issue
    • setting layout.css.prefixes.webkit to true fixes it (see Bug 1213126)
    • the site has moved to a responsive design

Bug 812899 - absolutely positioned element should be vertically centered even if the height is bigger than that of the containing block

This bug was simple at the beginning, but when providing the fix, it broke other tests. It's normal. Boris explained which parts of the code were impacted. But I don't feel I'm good enough yet to touch this. Or it would require patience and step by step guidance. It could be interesting though. I have the feeling I have too much on my plate right now. So, a bug to take over!

Testing Google Search On Gecko With Different UA Strings

So last week, I gave myself a todo: "testing Google search properties and see if we can find a version which is working better on Firefox Android than the current default version sent by Google. Maybe testing with Chrome UA and iPhone UA." My preliminary tests look pretty good.

Reading List

Follow Your Nose

Otsukare!

Karl DubostSteps Before Considering a Bug "Ready for Outreach"

Sometimes another team at Mozilla will ask for help from the Webcompat team for contacting site owners to fix an issue on their Web site which hinders the user experience on Firefox. Let's go through some tips to maximize the chances of getting results when we do outreach.

Bug detection


A bug has been reported by a user or a colleague. They probably had the issue at the moment they tested. The source of the issue is still quite unknown. Network glitch, specific addon configuration, particular version of Firefox, broken build of Nightly. Assess if the bug is reproducible in the most neutral possible environment. And if it's not already done, write in the comments "Steps to reproduce" and describe all the steps required to reproduce the bug.

Analyzing the issue


You have been able to reproduce. It is time to understand it. Explain it in very clear terms. Think about the person on the other end who will need to fix the bug. This person might not be an English native speaker. This person might not be as knowledgeable as you about Web technologies. Provide links and samples to the code with the issue at stake. This will help the person to find the appropriate place in the code.

Providing a fix for the issue


When explaining the issue, you might have also found out how to fix it, or at least one way to fix it. It might not be the way the contacted person will fix it. We do not know their tools, but it will help them to create an equivalent fix that fits in their process. If your proposal is a better practice, explain why it is beneficial for performance, longevity, resilience, etc.

Partly a Firefox bug


The site is not working but it's not entirely their fault. Firefox changed behavior. The browser became more compliant. The feature had a bug which is in the process of being fixed. Think twice before asking for outreach. Sometimes it's just better to push a bit more on fixing the bug in Firefox. It is more likely to be useful for all the unknown sites using the feature. If the site is a big popular site, you might want to ask for outreach, but you need a very good incentive such as improving performance.

Provide a contact hint


If by chance, you already have contacts in this company, share the data, even try directly to contact that person. If you have information that even bookies don't know about the company, be sure to pass it on to maximize the chances of successful outreach. The hardest part is often to find the right person who can help you fix the issue.

Outreach might fail

And here is the dirty secret: the outreach might not work, or might not be effective right away. Be patient. Be perseverant.


Fixing a Web site costs a lot more than you can imagine. Time and frustration are part of the equation. Outreach is not a magic bullet. Sometimes it takes months to years to fix an issue. Some reasons why the outreach might fail:

  • Impossible to find the right contacts. Sometimes you can send bug reports through the official channels of communication of the company and have your bug ignored, misunderstood, or treated as unusual. For one site, I had reported for months through the bug reporting system until I finally decided to try a back door by emailing a specific developer whose contact information I happened to find online. The bug was fixed in a couple of days.
  • Developers have bosses. They first need to comply with what their bosses told them to do. They might not be in a very good position in the company, have conflicts with the management, etc. Or they just don't have the freedom to take the decision that will fix the issue, even a super simple one.
  • Another type of boss is the client. The client had been sold a Web site with a certain budget. Maintenance is always a contentious issue. The Web agencies are not working for free, even if the bug was theirs in the first place. The client might not have asked them to test in that specific browser. Channeling a bug up to the client means the Web agency has to bill the client. The client might not want to pay.
  • Sometimes, you will think that you got an easy win. The bug has been solved right away. What you do not know is that the developer in charge had just put a hack in his code with a beautiful TOFIX that will be crushed at the next change of tools or updates.
  • You just need to upgrade to version X of your JS library: Updating the library will break everything else or will require testing all the zillions of other features that use this lib in the rest of the site. In a cost/benefit scenario, you have to demonstrate to the dev that the fix is worth their time and the testing.
  • Wrong department. Sometimes you get the press service, sometimes the communications department, sometimes the IT department in charge of the office backend or commercial operations systems, but not the Web site.
  • The Twitter person is not techy. This happens very often. With the blossoming of social managers (do we still say that?), the people on the front line are usually helpless when it's really technical. Your only chance is to convince them to communicate with the tech team. But the tech team despises them because too often they bring reports which are just useless. If the site is an airline company, a bank, or a very consumer-oriented service, just forget trying to contact them through Twitter.
  • The Twitter person is a bot. Check the replies on this Twitter account; if there is no meaningful interaction with the public, just find another way.
  • You contacted them. Nothing happened. People on the other side forget. I'm pretty sure you are also late replying to this email or patching this annoying bug. People ;)
  • The site is just not maintained anymore. No budget. No team. No nobody for moving forward the issue.
  • You might have pissed someone off when making contact. You will never know why. Maybe it was not the right day, maybe it was something in your signature, maybe it was the way you addressed them. Be polite, have empathy.

In the end, my message is: look for the bare necessities of life.

Bugs images from American entomology : or description of the insects of North America, illustrated by coloured figures from original drawings executed from nature. Thanks to the New-York Public Library.

Otsukare!

Mike HommeyGoing beyond NS_ProcessNextEvent

If you’ve been debugging Gecko, you’ve probably hit the frustration of having the code you’re inspecting being called asynchronously, and your stack trace rooting through NS_ProcessNextEvent, which means you don’t know at first glance how your code ended up being called in the first place.

Events running from the Gecko event loop are all nsRunnable instances. So at some level close to NS_ProcessNextEvent, in your backtrace, you will see Class::Run. If you’re lucky, you can find where the nsRunnable was created. But that requires the stars to be perfectly aligned. In many cases, they’re not.

There comes your savior: rr. If you don’t know it, check it out. The downside is that you must first rr record a Firefox session doing what you’re debugging. Then, rr replay will give you a debugger with the capabilities of a time machine.

Note, I’m kind of jinxed, I don’t do much C++ debugging these days, so every time I use rr replay, I end up hitting a new error. Tip #1: try again with rr’s current master. Tip #2: roc is very helpful. But my takeaway is that it’s well worth the trouble. It is a game changer for debugging.

Anyways, once you’re in rr replay and have hit your crasher or whatever execution path you’re interested in, and you want to go beyond that NS_ProcessNextEvent, here is what you can do:

(rr) break nsEventQueue.cpp:60
(rr) reverse-continue

(Adjust the line number to match wherever the *aResult = mHead->mEvents[mOffsetHead++]; line is in your tree).

(rr) disable
(rr) watch -l mHead->mEvents[mOffsetHead]
(rr) reverse-continue
(rr) disable

And there you are, you just found where the exact event that triggered the executed code you were looking at was put on the event queue. (assuming there isn’t a nested event loop processed during the first reverse-continue)

Rinse and repeat.

Mozilla Addons BlogFebruary 2016 Featured Add-ons

Pick of the Month: Proxy Switcher

by rNeomy
Access all of Firefox’s proxy settings right from the toolbar panel.

“Exactly what I need to switch on the fly from Uni/Work to home.”

Featured: cyscon Security Shield

by patugo GmbH
Cybercrime protection against botnets, malvertising, data breaches, phishing, and malware.

“The plugin hasn’t slowed down my system in any way. Was especially impressed with the Breach notification feature—pretty sure that doesn’t exist anywhere else.”

Featured: Decentraleyes

by Thomas Rientjes
Evade ad tracking without breaking the websites you visit. Decentraleyes works great with other content blockers.

“I’m using it in combination with uBlock Origin as a perfect complement.”

Featured: VimFx

by akhodakivkiy, lydell
Reduce mouse usage with these Vim-style keyboard shortcuts for browsing and navigation.

“It’s simple and the keybindings are working very well. Nice work!!”

Featured: Saved Password Editor

by Daniel Dawson
Adds the ability to create and edit entries in the password manager.

“Makes it very easy to login to any sight, saves the time of manually typing everything in.”

Nominate your favorite add-ons

Featured add-ons are selected by a community board made up of add-on developers, users, and fans. Board members change every six months, so there’s always an opportunity to participate. Stay tuned to this blog for the next call for applications.

If you’d like to nominate an add-on for featuring, please send it to amo-featured@mozilla.org for the board’s consideration. We welcome you to submit your own add-on!

Support.Mozilla.OrgWhat’s up with SUMO – 4th February

Hello, SUMO Nation!

Last week went by like lightning, mainly due to FOSDEM 2016, but also due to the year speeding up – we’re already in February! What are the traditional festivals in your region this month? Let us know in the comments!

Welcome, new contributors!

If you just joined us, don’t hesitate – come over and say “hi” in the forums!

Contributors of the week

  • Philipp – for his continuous help with Firefox Desktop and many other aspects of Mozilla and SUMO – Vielen Dank!

We salute you!

Don’t forget that if you are new to SUMO and someone helped you get started in a nice way you can nominate them for the Buddy of the Month!

Most recent SUMO Community meeting

The next SUMO Community meeting…

  • is happening on Monday the 8th of February – join us!
  • Reminder: if you want to add a discussion topic to the upcoming meeting agenda:
    • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Monday (this will make it easier to have an efficient meeting).
    • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).
    • If you can, please attend the meeting in person (or via IRC), so we can follow up on your discussion topic during the meeting with your feedback.

Developers

Community

Social

Support Forum

Knowledge Base

Localization

  • Please check the for iOS section below for an important announcement!


And that’s it – short and sweet for your reading pleasure. We hope you have a great weekend and we are looking forward to seeing you on Monday! Take it easy and keep rocking the helpful web. Over & out!

Air MozillaWeb QA Weekly Meeting, 04 Feb 2016

Web QA Weekly Meeting This is our weekly gathering of Mozilla's Web QA team filled with discussion on our current and future projects, ideas, demos, and fun facts.

Air MozillaReps weekly, 04 Feb 2016

Reps weekly This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Gervase MarkhamMOSS Applications Still Open

I am currently running the MOSS (Mozilla Open Source Support) program, which is Mozilla’s program for assisting other projects in the open source ecosystem. We announced the first 7 awardees in December, giving away a total of US$533,000.

The application assessment process has been on hiatus while we focussed on getting the original 7 awardees paid, and while the committee were on holiday for Christmas and New Year. However, it has now restarted. So if you know of a software project that could do with some money and that Mozilla uses or relies on (note: that list is not exhaustive), now is the time to encourage them to apply. :-)

Gijs KruitboschWhy was Tab Groups (Panorama) removed?

Firefox 44 has been released, and it has started warning users of Tab Groups about its removal in Firefox 45. There were a number of reasons that led to the removal of Tab Groups. This post will aim to talk about each in a little bit more detail.

The removal happened in the context of “Great or Dead”, where we examine parts of Firefox, look at their cost/benefit balance, and sometimes decide to put resources into improving them, and sometimes decide to recognize that they don’t warrant that and remove that part of the browser.

For Tab Groups, here are some of the things we considered:

  • It had a lot of bugs: a number of serious issues relating to performance, a lot of intermittently failing tests, buggy group and window closing, as well as a huge pile of smaller issues. You couldn’t move tabs represented as large squares to groups represented as small squares; sometimes you could get stuck in it, or groups would randomly move; and the list goes on. The quality simply wasn’t what it should be, considering we ship it to millions of users, which was part of the reason why it was hidden away as much as it was.
  • The Firefox team does not believe that the current UI is the best way to manage large numbers of tabs. Some of the user experience and design folks on our team have ideas in this area, and we may revisit “managing large numbers of tabs” at some point in the future. We do know that we wouldn’t choose to re-implement the same UI again. It wouldn’t make sense to heavily invest in a feature that we should be replacing with something else.
  • It was interfering with other important projects, like electrolysis (multi-process Firefox). When using separate processes for separate tabs, we need to make certain behaviours that used to be synchronous deal with being asynchronous. The way that Tab Groups’ UI was interwoven with the tabbed browser code, and the way the UI effectively hid all the tabs and showed thumbnails for all of them instead, made this harder for things like tab switching and tab closing.
  • It had a number of serious code architecture problems. Some of the animation and library choices caused intermittent issues for users as linked to earlier. All of the groups were stored with absolute pixel positions, creating issues if you change your window size, use a different screen or resolution, etc. When we added a warning banner to the bottom of the UI telling users we were going to remove it, that interfered with displaying search results. The code is very fragile.
  • It was a large feature. By removing tab groups we removed more than 24,000 lines of code from Firefox.

With all these issues in mind, we had to decide if it was better to invest in making it a great feature in Firefox, or remove the code and focus on other improvements to Firefox. When we investigated usage data, we found that only an extremely small fraction of Firefox users were making use of Tab Groups. Around 0.01%. Such low usage couldn’t justify the massive amount of work it would take to improve Tab Groups to an acceptable quality level, and so we chose to remove it.

If you use Tab Groups, don’t worry: we will preserve your data for you, and there are add-ons available that can make the transition completely painless.

Mike TaylorA quiz about ES2015 block-scoped function declarations (in a with block statement)

Quiz time, nerds.

Given the following bit of JS, what's the value of window.f when lol gets called outside of the with statement?

with (NaN) {
  window.f = 1;
  function lol(){window.f = 2};
  function lol(){window.f = 3};
}
lol()

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

Trick question, your program crashed before it got to call lol()!

According to ES2015, it should be a SyntaxError, because you're redefining a function declaration in the same scope. Just like if you were re-declaring a let thingy more than once.
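
For comparison, here's a minimal made-up example of the let version of that error:

{
  let f = 1;
  let f = 2; // SyntaxError: redeclaration of let f
}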

However, the real trick is that Chrome and Firefox will just ignore the first declaration so your program doesn't explode (for now, anyways). So the answer is really just 3 (which you probably guessed).

Double-tricked!

Not surprisingly there are sites out there that depend on this funky declared-my-function-twice-in-the-same-scope pattern. webex.com was one (bug here), but they were super cool and fixed their code already. The Akamai Media Player on foxnews.com (bug here) is another (classic foxnews.com move).

It would be really cool if browsers didn't have to do this, so if you know anybody who works on the Akamai Advanced Media Player, tell them to delete the second declaration of RecommendationsPlugin()? And if you see an error in Firefox that says redeclaration of block-scoped function 'coolDudeFunction' is deprecated, go fix that too — it might stop working one day.

Now don't forget to like and subscribe to my Youtube ES2016 Web Compat Pranks channel.

(The with(NaN){} bit isn't important, but it was lunch time when I wrote this.)

Darrin HeneinPrototyping Firefox Mobile (or, Designers Aren’t Magicians)

I like to prototype things. To me—and surely not me alone—design is how something looks, feels, and works. These are very hard to gauge just by looking at a screen. Impossible, some would argue. Designs (and by extension, designers) solve problems. They address real needs. During the creative process, I need to see them in action, to hold them in my hand, on the street and on the train. In real life. So, as often as not, I need to build something.

Recently we’ve been exploring a few interesting ideas for Firefox Mobile. One advantage we have, as a mobile browser, is the idea of context—we can know where you are, what time it is, what your network connection is like—and more so than on other platforms utilize that context to provide a better experience. Not many people shop at the mall or wait in line at the bank with their laptop open, but in many of Firefox’s primary markets, people will have their phone with them. Some of this context could help us surface better content or shortcuts in different situations… we think.

The first step was to decide on scope. I sketched a few of these ideas out and decided which I would test as a proof of concept: location-aware shortcut links, a grouped history view, and some attempt at time-of-day recommendations. I wanted to test these ideas with real data (which, in my opinion, is the only legitimate way to test features of this nature), so I needed to find a way to make my history and other browser data available to my prototype. This data is available in our native apps, so whatever form my prototype took, it would need to have access to this data in some way. In many apps or products, the content is the primary focus of the experience, so making sure you shift from static/dummy content to real/dynamic content as quickly as possible is important. Edge cases like ultra-long titles or poor-quality images are real problems your design should address, and these will surface far sooner if you’re able to see your design with real content.

Next I decided (quickly) on some technology. The only criteria here was to use things that would get me to a testable product as quickly as possible. That meant using languages I know, frameworks to take shortcuts, and to ask for help when I was beyond my expertise. Don’t waste time writing highly abstracted, super-modular code or designing an entire library of icons for your prototypes… take shortcuts, use open-source artwork or frameworks, and just write code that works.

I am most comfortable with web technologies—I do work at Mozilla, after all—so I figured I’d make something with HTML and CSS, and likely some Javascript. However, our mobile clients (Firefox for Android and Firefox for iOS) are written in native code. I carry an iPhone most days, so I looked at our iOS app, which is written in Swift. I figured I could swap out one of the views with a web view to display my own page, but I still needed some way to get my browsing data (history, bookmarks, etc.) down into that view. Turns out, the first step in my plan was a bit of a roadblock.

Thankfully, I work with a team of incredible engineers, and my oft-co-conspirator Steph said he could put something together later that week. It took him an afternoon, I think. Onward. Even if I thought I could hack this together myself, I wasn’t sure, and didn’t want to waste time.

🔑 Whenever possible, use tools and frameworks you’ve used before. It sounds obvious, but I could tell you some horror stories of times where I wasted countless hours just trying to get something new to work. Save it for later.

In the meantime, I got my web stack all set up: using an off-the-shelf boilerplate for webpack and React (which I had used before), I got the skeleton of my idea together. Maybe overkill at this point, but having this in place would let me quickly swap new components in and out to test other ideas down the road, so I figured the investment was worth it. Because the location idea was not dependent on the user’s existing browser data, I could get started on that while Steph built the WebPanel for me.

Working for now in Firefox on the desktop, I used the Geolocation API to get the current coordinates of the user. Appending that to a Foursquare API url and performing a GET request, I now had a list of nearby locations. Using Lodash.js I filtered them to only include records with attached URLs, then sorted by proximity.


// $ is jQuery and _ is Lodash; comp is (presumably) the React component whose state holds the results.
var query = "FoursquareAPI+MyClientID"

navigator.geolocation.getCurrentPosition(function(position){
  var ll = position.coords.latitude + "," + position.coords.longitude
  $.get(query + ll, function(data) {
    // Keep only venues that actually have a URL attached...
    data = _.filter(data.response.venues, function(venue){
        return venue.url != null
    })
    // ...then sort them by proximity before storing them in component state.
    comp.setState({
      foursquareData: _.sortBy(data, function(venue){
        return venue.location.distance
      })
    })
  });
});

 

Step 1 of my prototype, done. Well, it worked in the desktop browser at least. I knew our mobile web view supported the same Geo API, so I was confident this would work there as well (and, it did).

At this point, Steph had built some stuff I could work with. By building a special branch of Firefox iOS, I now had a field in the settings app which let me define a URL which would load in one of my home panels instead of the default native views. One of the benefits of this approach is that I could update the web-app remotely and not have to rebuild/redeploy the native app with each change. And by using a tool like ngrok I could actually have that panel powered by a dev server running on my machine’s localhost.

(Screenshot: the prototype panel running in the iOS Simulator.)

Steph’s WebPanel.swift provided me with a simple API to query the native profile for data, seen here:


window.addEventListener("load", function () {
  webkit.messageHandlers.mozAPI.postMessage({
    method: "getSitesByLastVisit",
    params: {
      limit: 10000
    },
    callback: "receivedHistory"
  });
});

Here, I’m firing off a message to our mozAPI once the page has loaded, and passing it some parameters: the method I’d like to run and the limit on the number of records returned. Lastly, the name of a callback for the iOS app to pass the result of the query to.


window.receivedHistory = function(err, data) {
  store.dispatch(updateHistory(data));
}

This is the callback in my app, which just updates the flux store with the data passed from the native code.

At this point, I had a flux-powered app that could display native browser data through react views. This was enough to get going with, and let me start to build some of the UI.

Steph had stubbed out the API for me and was passing down a JSONified collection of history visits, including the URL and title for each visit. To build the UI I had in mind, however, I needed the timestamps and icons, too. Thankfully, I contributed a few hundred lines of Swift to Firefox 1.0, and could hack these in:


extension Site: DictionaryView {
    func toDictionary() -> [String: AnyObject] {
        let iconURL = icon?.url != nil ? icon?.url : ""
        return [
            "title": title,
            "url": url,
            "date": NSNumber(unsignedLongLong: (latestVisit?.date)!),
            "iconURL": iconURL!,
        ]
    }
}

Which gave me the following JSON passed to the web view:


[
  {
    title: "We're building a better internet — Mozilla",
    url: "http://mozilla.org",
    date: "1454514630131",
    iconURL: "/media/img/favicon.52506929be4c.ico"
  },
  …
]

Firstly, try not to judge my Swift skills. The purpose here was to get it working as quickly as possible, not to ship this code. Hacks are allowed, and encouraged, when prototyping. I added a date and iconURL field to the history record object and before long, I was off to the races.

With timestamps and icons in hand, I could build the rest of the UI. A simple history view that batched visits by domain (so 14 Gmail links would collapse to 3 and “11 more…”), and a quick attempt at a time-based recommendation engine.

This algorithm may be ugly, but it naively does one thing: depending on what time of day and day of the week it was, return some guesses at which sites I may be interested in (based on past browsing behaviour). It worked simply by following these steps (a rough sketch follows the list):

  1. Filter my entire history to only include visits from the same day type (weekday vs. weekend)
  2. Exclude domains that are useless, like t.co or bit.ly
  3. Further filter the set of visits to only include visits +/- some buffer around the current time: the initial prototype used a buffer of +/- one hour
  4. Group the visits by their TLD + one level of path (i.e. google.com/document), which gave me better groups to work with
  5. Sort these groups by length, to provide an array with the most popular domain at the beginning (and limit this output to the top n domains, 10 in my case)
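
To make those steps concrete, here's a rough sketch of how they might look with Lodash. The function name, record fields ({ url, date }), and helpers are my own illustration, not the prototype's actual code, and for simplicity it ignores the time buffer wrapping around midnight:

// Rough sketch of the steps above using Lodash; names and fields are illustrative.
var _ = require("lodash");

var USELESS_DOMAINS = ["t.co", "bit.ly"];

function hostAndFirstPath(url) {
  // "https://docs.google.com/document/d/..." -> "docs.google.com/document"
  var parts = url.replace(/^https?:\/\//, "").split("/");
  return parts[0] + "/" + (parts[1] || "");
}

function msIntoDay(d) {
  return ((d.getHours() * 60 + d.getMinutes()) * 60 + d.getSeconds()) * 1000;
}

function recommend(history, now, bufferMs, limit) {
  var weekend = now.getDay() === 0 || now.getDay() === 6;

  var candidates = _.filter(history, function (visit) {
    var when = new Date(Number(visit.date));
    var host = hostAndFirstPath(visit.url).split("/")[0];
    var sameDayType = (when.getDay() === 0 || when.getDay() === 6) === weekend; // step 1
    var useful = USELESS_DOMAINS.indexOf(host) === -1;                          // step 2
    var nearNow = Math.abs(msIntoDay(when) - msIntoDay(now)) <= bufferMs;       // step 3
    return sameDayType && useful && nearNow;
  });

  // Step 4: group by TLD plus one level of path, e.g. "google.com/document".
  var groups = _.groupBy(candidates, function (visit) {
    return hostAndFirstPath(visit.url);
  });

  // Step 5: sort the groups by size and keep only the top `limit`.
  return _.chain(groups)
    .map(function (visits, domain) {
      return { domain: domain, date: visits[0].date, count: visits.length };
    })
    .sortBy(function (group) { return -group.count; })
    .take(limit)
    .value();
}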

The output is similar to the following:


[
  {
    domain: "http://flickr.com/photos",
    date: "Wed Feb 03 2016 10:54:02 GMT-0500",
    count: 3
  },
  {
    domain: "www.dpreview.com/forums",
    date: "Wed Feb 03 2016 10:54:02 GMT-0500",
    count: 2
  },
  …
]

Awesome. Now I have a Foursquare-powered component at the top which lists nearby URLs. Below that, a component that shows me the 5 websites I visit most often around this time of day. And next, a component that shows my history in a slightly improved format, with domain-grouping and truncation of long lists of related visits. All with my actual data, ready for me to use this week and evaluate these ideas.

One problem surfaces, though. Any visits that come from another device (through Firefox Sync) have no icon attached to them (right now, we don’t sync favicons across devices), which leaves us with long runs of visits with no icon. One of the hypotheses we want to confirm is that the favicon (among other visual cues) helps the user parse and understand the lists of URLs we present them with.

🔑 Occasionally I’ll be faced with a problem like this: one where I know the desired outcome, but haven’t tackled anything similar before and so have low confidence in my ability to fix it quickly. I know I need some way to get icons for a set of URLs, but not exactly how that will work. At this point it’s crucial to remember one of the goals of a prototype: get to a testable artifact as quickly as possible. Often in this situation I’ll time-box myself: if I can get something working in a few hours, great. If not, move on or just fake it (maybe having a preset group of icons I could assign at random would help address the question).

Again, I turn to my trusty toolbox, where I know the tools and how to use them. In this case that was Node and Express, and after a few hours I had an app running on Heroku with a simple API. I could POST an array of URLs to my endpoint /icons and my Node app would spin up a series of parallel tasks (using async.parallel). Each task would load the URL via Node’s request module, and would hand the HTML in the response over to cheerio, a server-side analog for jQuery. Using cheerio I could grab all the <link> and <meta> tags, check for a number of known values (‘icon’, ‘apple-touch-icon’, etc.) and grab the URL associated with each. While I was there, I figured I might as well capture a few other tags, such as Facebook’s OpenGraph og:image tag.
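
Here's a minimal sketch of what that endpoint might look like, assuming Express, request, cheerio, and async as described above; the route and response fields mirror what's shown below, while the helper names and timeouts are illustrative:

// Minimal sketch of the icon-scraping service; route and response shape mirror the post,
// everything else (helper names, timeouts) is illustrative.
var express = require("express");
var bodyParser = require("body-parser");
var request = require("request");
var cheerio = require("cheerio");
var async = require("async");

var app = express();
app.use(bodyParser.json());

app.post("/icons", function (req, res) {
  var urls = req.body.urls || [];

  // One task per URL; async.parallel runs them concurrently and collects the results in order.
  var tasks = urls.map(function (url) {
    return function (done) {
      request({ url: "http://" + url, timeout: 5000 }, function (err, response, body) {
        if (err || !body) return done(null, { icons: [], images: [] });

        var $ = cheerio.load(body);
        var icons = [];
        var images = [];

        // Favicon-ish link tags: shortcut icon, icon, apple-touch-icon.
        $("link[rel='shortcut icon'], link[rel='icon'], link[rel='apple-touch-icon']").each(function () {
          icons.push({ type: $(this).attr("rel"), url: $(this).attr("href") });
        });

        // OpenGraph lead image, while we're in there.
        $("meta[property='og:image']").each(function () {
          images.push({ type: "og-image", url: $(this).attr("content") });
        });

        done(null, { icons: icons, images: images });
      });
    };
  });

  async.parallel(tasks, function (err, results) {
    // Key the combined result by the requested URL, as in the sample response below.
    var combined = {};
    urls.forEach(function (url, i) { combined[url] = results[i]; });
    res.json(combined);
  });
});

app.listen(process.env.PORT || 3000);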

Once each of the parallel requests either completed or timed out, I combined all the extracted data into one JSON object and sent it back down the wire. A sample request may look like this:


POST to '/icons'

{ urls: ["facebook.com"] }

And the response would look like this (keyed by URL so the app that requested it could associate icons/images with the right URL… the above array could contain any number of URLs, and the below response would just have more top-level keys):


{
  "facebook.com": {
    "icons": [
      {
        "type": "shortcut-icon",
        "url": "https://static.xx.fbcdn.net/rsrc.php/yV/r/hzMapiNYYpW.ico"
      }, …
    ],
    "images": [
      {
        "type": "og-image",
        "url": "http://images.apple.com/ca/home/images/og.jpg?201601060653"
      }, …
    ]
  }
}

Again, maybe not the best API design, but it works and only took a few hours. I added a simple in-memory cache so that subsequent requests for icons or images for URLs we’ve already fetched are returned instantly. The entire Express app was 164 lines of JavaScript, including all requires, comments, and error handling. It’s also generic enough that I can now use it for other prototypes where metadata such as favicons or lead images is needed for any number of URLs.
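
The cache itself can be as simple as a plain object keyed by URL. A hypothetical sketch, where scrape() stands in for the request-plus-cheerio task described above:

// Hypothetical in-memory cache for the /icons endpoint, keyed by URL.
// scrape(url, callback) stands in for the request + cheerio task described above.
var cache = {};

function getMetadata(url, done) {
  if (cache[url]) {
    return done(null, cache[url]); // already fetched: answer immediately
  }
  scrape(url, function (err, metadata) {
    if (!err) {
      cache[url] = metadata; // remember it for the next request
    }
    done(err, metadata);
  });
}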


So why do all this work? Easy: because we have to. Things that just look pretty have become a commodity, and beyond being nice to look at they don’t serve much purpose. As designers, product managers, engineers—anyone who makes things—we are responsible for delivering real value to our users. Features and apps they will actually use, and when they use them, they will work well. They will work as expected, and even go out of their way to provide a moment of delight from time to time. It should be clear that the people who designed this “thing” actually used it. That it went through a number of iterations to get right. That it was no accident or coincidence that what you are holding in your hands ended up the way it is. That the designers didn’t just guess at how to solve the problem, but actually tried a few things to really understand it at a fundamental level.

Designers are problem solvers, not magicians. It is relatively cheap to pivot an idea or tweak an interface in the design phase, versus learning something in development (or worse, post-launch) and having to eat the cost of redesigning and rebuilding the feature. Simple ideas often become high-value features once their utility is seen with real use. Sometimes you get 95% of the way, and see how a minor revision can really push an idea across the finish line. And, realistically, sometimes great ideas on paper utterly flop in the field. Better to crash and burn in the hands of your trusted peers than out in the market, though.

Test your ideas, test them with real content or data, and test them with real people.

Air MozillaThe Joy of Coding - Episode 43

The Joy of Coding - Episode 43 mconley livehacks on real Firefox bugs while thinking aloud.

Laura de Reynal38 hours

From Bengaluru to Ahmedabad, immersed in the train ecosystem for 38 hours.

1608 km, 37:45 hours


Robert O'Callahanrr 4.1.0 Released

This release mainly improves replay performance dramatically, as I documented in November. It took a while to stabilize for release, partly because we ran into a kernel bug that caused rr tests (and sometimes real rr usage) to totally lock up machines. This release contains a workaround for that kernel bug. It also contains support for the gdb find command, and fixes for a number of other bugs.

Mozilla Addons BlogWebExtensions in Firefox 46

We last updated you on our progress with WebExtensions when Firefox 45 landed in Developer Edition (Aurora), and today we have an update for Firefox 46, which landed in Developer Edition last week.

While WebExtensions will remain in an alpha state in Firefox 46, we’ve made lots of progress, with 40 bugs closed since the last update. As of this update, we are still on track for a milestone release in Firefox 48 when it hits Developer Edition. We encourage you to get involved early with WebExtensions, since this is a great time to participate in its evolution.

A focus of this release was quality. All code in WebExtensions now passes eslint, and we’ve fixed a number of issues with intermittent test failures and timeouts. We’ve also introduced new APIs in this release, including the following (a short usage sketch follows the list):

  • chrome.notifications.getAll
  • chrome.runtime.sendMessage
  • chrome.webRequest.onBeforeRedirect
  • chrome.tabs.move
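
Here's a hypothetical background-script sketch showing two of these in use. It assumes the "tabs" and "notifications" permissions in manifest.json, and that chrome.tabs.query is available for looking up the active tab; none of this is taken from a real add-on:

// Hypothetical background-script sketch; permissions and behaviour are illustrative.

// Move the active tab to the front of the tab strip.
chrome.tabs.query({ active: true, currentWindow: true }, function (tabs) {
  chrome.tabs.move(tabs[0].id, { index: 0 });
});

// Log the IDs of any notifications this extension has shown.
chrome.notifications.getAll(function (notifications) {
  console.log(Object.keys(notifications));
});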

Create customizable views

In addition to the new APIs, support was added for second-level popup views in bug 1217129, giving WebExtension add-ons the ability to create customizable views.

Check out this example from the Whimsy add-on:
(Screenshot: a second-level popup view in the Whimsy add-on.)

Create an iFrame within a page

The ability to create an iFrame that is connected to the content script was added in bug 1214658. This allows you to create an iFrame within a rendered page, which gives WebExtension add-ons the ability to add additional information to a page, such as an in-page toolbar:

(Animated demo: an in-page toolbar rendered in an iFrame.)
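
A rough sketch of what a content script might do with this capability; the toolbar.html file, its web_accessible_resources entry, and the styling are my own illustration, not the demo's actual code:

// Hypothetical content-script sketch: inject an extension page as an in-page toolbar.
// Assumes toolbar.html ships with the add-on and is listed in web_accessible_resources.
var frame = document.createElement("iframe");
frame.src = chrome.extension.getURL("toolbar.html");
frame.style.cssText = "position: fixed; top: 0; left: 0; width: 100%; height: 40px; " +
                      "border: none; z-index: 2147483647;";
document.body.appendChild(frame);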

For additional information on how to use these additions to WebExtensions, (and WebExtensions in general), please check out the examples on MDN or GitHub.

Upload and sign on addons.mozilla.org (AMO)

WebExtension add-ons can now be uploaded to and signed on addons.mozilla.org (AMO). This means you can sign WebExtension add-ons for release. Listed WebExtension add-ons can be uploaded to AMO, reviewed, published and distributed to Firefox users just like any other add-on. The use of these add-ons on AMO is still in beta and there are areas we need to improve, so your feedback is appreciated in the forum or as bugs.

Get involved

Over the coming months we will work our way towards a beta in Firefox 47 and the first stable release in Firefox 48. If you’d like to jump in to help, or get your APIs added, please join us on our mailing list or at one of our public meetings, or check out this wiki page.

Air MozillaWebdev Extravaganza: February 2016

Webdev Extravaganza: February 2016 Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on.

Henrik SkupinFirefox Desktop automation goals Q1 2016

As promised in my last blog posts, I don’t want to blog only about the goals from past quarters, but also about planned work and what’s currently in progress. So this post will be the first one to shed some light on my active work.

First, let’s get started with my goals for this quarter.

Execute firefox-ui-tests in TaskCluster

Now that our tests are located in mozilla-central, mozilla-aurora, and mozilla-beta, we want to see them run on a per-check-in basis, including try. Usually you would set up Buildbot jobs to get the tasks you want running. But given that the build system will be moved to Taskcluster in the next couple of months, we decided to start directly with the new CI infrastructure.

So what will this look like, and how will mozmill-ci cope with it? For the latter I can say that we don’t want to run more tests than we do right now. This is mostly due to our limited infrastructure, which I have to maintain myself. Needing to run firefox-ui-tests for each check-in on all platforms, and even for try pushes, would far exceed our machine capacity. Therefore we will continue to use mozmill-ci for now to test nightly and release builds for en-US, as well as a couple of other locales. This might change later this year, when mozmill-ci can be replaced by running all the tasks in Taskcluster.

Anyway, for now my job is to get the firefox-ui-tests running in Taskcluster once a build task has finished. Although this can only be done for Linux right now, it shouldn’t matter that much, given that nothing in our firefox-puppeteer package is platform-dependent so far. Expanding testing to other platforms should be trivial later on. For now the primary goal is to see the results of our tests in Treeherder and to let developers know what needs to be changed if, for example, UI changes cause a regression for us.

If you are interested in more details have a look at bug 1237550.

Documentation of firefox-ui-tests and mozmill-ci

We have been submitting our test results to Treeherder for a while now, and they are pretty stable. But the jobs are still listed as Tier-3 and are not taken care of by sheriffs. To reach the Tier-2 level we definitely need proper documentation for our firefox-ui-tests, and especially mozmill-ci. In case of test failures or build bustage, the sheriffs have to know what needs to be done.

Now that the dust caused by all the refactoring and by moving the firefox-ui-tests to hg.mozilla.org has settled a bit, we want to start working more with contributors again. To make contributing easy, I will create various pieces of project documentation showing how to get started and how to submit patches. Ultimately I want to see a Quarter of Contribution project for our firefox-ui-tests around the middle of this year. Let’s see how this goes…

More details about that can be found on bug 1237552.

Christian HeilmannAll the small things at Awwwards Amsterdam

Last week, I cut my holiday in the Bahamas short to go to the Awwwards conference in Amsterdam and deliver yet another fire and brimstone talk about performance and considering people outside of our sphere of influence.

Photo by Trine Falbe

The slides are on SlideShare:

The screencast of the talk is on YouTube:

I want to thank the organisers for allowing me to vent a bit and I was surprised to get a lot of good feedback from the audience. Whilst the conference, understandably, is very focused on design and being on the bleeding edge, some of the points I made hit home with a lot of people.

Especially the mention of Project Oxford and its possible implementations in CMS got a lot of interest, and I’m planning to write a larger article for Smashing Magazine on this soon.