Marco Zehe: WAI-ARIA menus, and why you should generally avoid using them

The WAI-ARIA standard defines a number of related menu roles. However, in 99% of all cases, these should not be used.

A bit of history

In the mid 2000s, people were arguing over whether HTML 4.01 or XHTML 1.x was the better standard. Both markup languages had one thing in common: they offered hardly any advanced widgets. Web developers around the world increasingly asked for such widgets, but because standards processes were very slow, and there were far fewer browser vendors than today, things moved too slowly.

So, web developers started wildly implementing these widgets with div and span elements, styling them to look like their desktop counterparts. This created richer user experiences for their web sites, which were slowly transitioning into web applications.

Back then, people were still using mostly desktop applications that had classic menu bars with dropdown menus, on operating systems like Windows XP. The Mac still knows these today, Windows, however, has moved away from them in modern applications.

WAI-ARIA to the rescue

To somehow cope with the influx of new web experiences, the WAI-ARIA standard was brought to life by a group of very talented people. Its goal was to map desktop concepts to the web and give web authors the means to communicate to desktop screen readers which desktop widget a given element was mimicking. Screen readers were thus given the information to deal with these widgets in a similar way as they would in desktop applications.

One set of widgets was menu bars, menus, menu items, and their siblings menuitemradio and menuitemcheckbox. These were to mimic a menu bar, the menus that pop up from it, and the three types of menu items that Windows, Linux/GNOME and Mac knew about. Screen readers reacted to these menus the same way as for desktop applications. Because of a long history of missing programming-interface support, especially on Windows, screen readers had to hack together special so-called menu modes for applications, because technically, focus was in two places at the same time, or so it seemed. And there were a bunch of other problems and quirks.

The important takeaway is that menuitem and friends provoke a very specific set of expectations within users and screen readers of a web application’s behavior. A menu can only contain menu items, for example, a menu bar only menus, and such. Also, the keyboard is expected to behave in a very specific way: A set of menu bar items is traversed from left to right (or right to left in RTL languages), menus are dropped down with Enter or DownArrow, the menu is closed with Escape etc.

The problem

Over the years, it has become very apparent that these concepts are less and less known to people, with menus receding especially on Windows. Many web developers lack the context for this complex set of widgets and the expected interaction. One of the most common ARIA errors we see in the wild comes from improper uses of the menu roles. Often, menu items are used without proper menu parents, resulting in totally improper behavior by assistive technologies. A screen reader could get stuck in its menu mode, for example, because the web author doesn’t signal when a menu has been closed.

Generally, it seems that web developers read about these menu roles and think a set of links within a list that may pop open another sub-set of links is enough to warrant calling it a menu. Well, let me put it bluntly: Nope. These then don’t support the expected behavior, and links bring you to other pages instead of performing an action, making them a totally different type of widget.

Likewise, a button that, when pressed, pops open a set of options or actions to choose from is often coded as a menu. While this seems correct at first glance, it, too, means that you have to use at least menu and menuitem, and the proper set of key strokes. But in general, because of all these problems that screen readers have to deal with when it comes to menus in desktop applications, even such an experience is often not as good for users. While NVDA and Firefox might cope fine in one instance, JAWS may react differently; and where both JAWS and NVDA cope well, VoiceOver might fall over, or any other combination of these. Trust me, I have seen them all over the years.

The solution

The solution is plain and simple: Generally, don’t use menu, menuitem, menubar, menuitemcheckbox, or menuitemradio. Only if you build something like Google Docs and deal with what you get when you press Alt+F (or Alt+Shift+F in Firefox), these are warranted. In most likely all other cases, they are not.

Even when you have a button with aria-haspopup="true" and aria-expanded="true" or "false", depending on whether the popup is shown or not, the user experience is usually better if the container of your popup items is a ul element, and the buttons or links that perform actions are just part of a list, normal li elements within that ul. You can still implement keyboard navigation like moving from button to button with arrow keys, and you must make Escape close that popup and return focus to the button that opened it, but leaving all the menu-related roles out of the game makes the user experience that much better. Remember that WAI-ARIA roles hide the normal semantics. A user actually needs to know if an item is a button, which performs an action, or a link, which is most likely going to take them to another page within your site.
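A minimal sketch of that pattern might look like the following. The element ids and labels are made up for illustration; the attributes are the standard ARIA ones mentioned above:

```html
<!-- Hypothetical sketch: a disclosure-style popup using a plain list
     of buttons and links instead of any menu* roles. -->
<button id="actions-button" aria-haspopup="true" aria-expanded="false">
  Actions
</button>
<ul id="actions-popup" hidden>
  <li><button type="button">Rename</button></li>
  <li><button type="button">Duplicate</button></li>
  <li><a href="/help">Help</a></li>
</ul>
<script>
  const btn = document.getElementById("actions-button");
  const popup = document.getElementById("actions-popup");

  // Toggle the popup and keep aria-expanded in sync.
  btn.addEventListener("click", () => {
    const open = btn.getAttribute("aria-expanded") === "true";
    btn.setAttribute("aria-expanded", String(!open));
    popup.hidden = open;
  });

  // Escape must close the popup and return focus to the button.
  popup.addEventListener("keydown", (e) => {
    if (e.key === "Escape") {
      btn.setAttribute("aria-expanded", "false");
      popup.hidden = true;
      btn.focus();
    }
  });
</script>
```

Arrow-key navigation between the items can be layered on top of this, but even without it, the buttons and links keep their native semantics and remain fully usable.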

Mobile doesn’t know about menus

And here’s another aspect that I intentionally left until last: Mobile doesn’t know about these old desktop concepts of menu bars and pull-down menus at all. So the user experience for such roles is even more unpredictable than on desktop. Buttons and links, on the other hand, are very well supported on mobile as well. Other pop-ups on mobile operating systems contain these normally, too, so this is a totally expected and consistent user experience.


In my opinion, these menu* roles should be deprecated in WAI-ARIA 2.0, but I realise that that might not be feasible. Moreover, since we do have applications like Google Docs which implement such a menu system, we still need them. But at least the language to discourage the use of these roles for 99.9% of all cases should be much stronger in the spec or authoring practices. This is one of those things that gets totally misunderstood by web developers who are not as experienced in the finer points of accessibility nuances, and their use of these roles is often misguided by thinking along the lines of “it’s a popup, has to be a menu”. The situation is quite similar to the misuse of the “application” role, which also isn’t justified by just the fact that a web application is being created.

So as a TL;DR, I’d say: Don’t use menu* roles, they’re most probably inappropriate and cause more churn than a good user experience. Exceptions apply, but you’ll probably be told when they do. 😉

Mozilla Localization (L10N): L10N Report: September Edition

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet.


New localizers

Are you a locale leader and want us to include new members in our upcoming reports? Contact us!

New community/locales added

Javanese and Sundanese locales have been added to Firefox Rocket and are now launching in Indonesia. Congrats to these new teams and to their new localizers! Come check out the Friends of the Lion section below for more information on who they are.

New content and projects

What’s new or coming up in Firefox desktop

There are currently few strings landing in Nightly, making it a good time to catch up (if your localization is behind), or test your work. Focus in particular on certificate errors and preferences, given the amount of changes happening around Content blocking, Privacy and Security. The deadline to ship any localization update in Beta (Firefox 63) is October 9.

It’s been a while since we talked about Fluent, but that doesn’t mean that the migration is not progressing. Expect a lot of files moving to Fluent in the coming weeks, thanks to the collaboration with students from the Michigan University.

What’s new or coming up in mobile

Firefox Focus for Android is launching a new version this week, and it’s shipping with 9 new locales: Aymara (ay), Galician (gl), Huastec (hus), Marathi (mr), Punjabi (pa-IN), Náhuat Pipil (ppl), K’iche’ (quc), Sundanese (su), Yucatec Maya (yua).

What’s new or coming up in web projects

Activate Mozilla

The Activate Mozilla campaign aims at the grassroots of volunteer contributions. The initiative wants to bring more clarity about the most important areas to contribute to at Mozilla right now, by providing guidance to mobilizers on how to recruit contributors and create communities around meaningful Mozilla projects.

The project has been added to Pontoon in a few required languages, with an opt-in option for other languages. Once completion reaches 95%, the locale will be enabled on production. There is no staging server, and any changes in Pontoon will be synced up and pushed to production directly. Additionally, a new tracking protection tour that highlights the content blocking feature is ready for localization; the tour will be available on production in early October.

Common Voice

The new home page was launched earlier this month. It is only available to 50% of users during the A/B testing phase; the new look will roll out to all users in the next sprint. The purpose of this redesign is to convert more of the site’s visitors into contributors. We hope the new look will help improve the conversion rate we currently have, which is between 10-15%. Though the change doesn’t add localization work at the string level, the team believes it will bring more people to donate their voices to established locales.

If your language is not available on the current site, the best way to make a request is through Pontoon by following these step-by-step instructions. To learn more about the Common Voice project and its discussions, check them out on Discourse.

What’s new or coming up in Foundation projects

A very quick update on the fundraising campaign: the September fundraiser is being sent out in English, and localization will start very soon.

On the Advocacy side, the copyright campaign will continue after the unfortunate vote from the EU Parliament, and the next steps are being discussed. The final vote is scheduled towards the end of the year.

Stay tuned!

What’s new or coming up in Support

What’s new or coming up in Pontoon

  • New homepage. The new homepage was designed and developed by Pramit Singhi as part of Google Summer of Code. Apart from the new design, it also brings several important content changes. It presents Pontoon as the place to localize Mozilla projects, explains the “whys” and “hows” of localization at Mozilla in general, brings a clear call to action, and moves in-context localization demo to a separate page.

  • Guided tour. Another product of Google Summer of Code is a guided tour of Pontoon, designed and developed by experienced Pontoon contributor Vishal Sharma. It’s linked from the homepage as a secondary call to action, and consists of two pieces: the actual tour, which explains the translation user interface, and the tutorial project, which demonstrates more details through carefully chosen strings.

  • System projects. Vishal also developed the ability to mark projects as “system projects”, which are hidden from dashboards. The aforementioned Tutorial project and Pontoon Intro are both treated as system projects.
  • Read-only locales. Last month we enabled locales previously not in Pontoon, in read-only mode. That means dashboards and the API now present full project status across all locales, all Mozilla translations are accessible in the Locales tab, and the Translation Memory of locales previously not available in Pontoon has improved. Check out the newsgroup for more details on which locales were enabled for which projects.
  • Unchanged filter improvement. Thanks to Raivis Dejus, the Unchanged filter now works as expected. Previously, it returned all strings for which the source string is the same as one of the suggestions. Now, it only compares source strings with active translations (shown in the string list).


Pontoon tips

Here’s a quick Pontoon tip that a lot of people already know, but can still help some.

Pontoon has a feature that automatically identifies specific elements in strings, highlights them and makes them clickable. Here’s an example:

This feature is meant to let you easily copy placeables into your translation by clicking them. This saves you time and reduces the risk of introducing typos from manually retyping them, or from partially selecting them while copy/pasting.

One common misconception is to think those elements should always be kept in English. While that’s certainly true in many cases (variables, HTML tags like in the screenshot above…), there are several cases where Pontoon highlights parts of a string that could or should be translated.

Here’s an example where all the highlighted elements should be translated:

Here Pontoon thinks those words are acronyms, and that you could potentially keep them in your translation. It turns out they are not acronyms; it’s just a sentence in full caps, so we can simply ignore the highlights and translate it like any other string.

Here’s a last example where Pontoon successfully detects an acronym; it could have been kept, but the localizer decided to translate it anyway (and that’s okay):

To summarize the feature, Pontoon does its best to guess what parts of a string you are likely to keep in your translation, but these are suggestions only.

Also remember, you’re not alone! If you have a doubt, you can always reach out to the l10n PM owning the project. They will clarify the context for you and help you better identify false positives.


  • Jakarta (Indonesia) Rocket Sprint held on August 11-12 added two new languages to the product. Sundanese contributor Akhlis summarized the weekend activity with his blog.
  • Pune (India) l10n community event just happened (Sept. 1-2). Come check out some pictures:
  • Want to showcase an event coming up that your community is participating in? Reach out to any l10n-driver and we’ll include that (see links to emails at the bottom of this report)

Friends of the Lion

  • Ali Demirtaş, who surpassed the goal of over 1,000 suggestions for Turkish.
  • Congratulations to the contributors who have helped launch Firefox Rocket in Javanese and Sundanese! Here they are:
    • Javanese Team:
      • Rizki Dwi Kelimutu
      • Dian Ina Mahendra
      • Armen Ringgo Sukiro
      • Nur Fahmia
      • Nuri Abidin
      • Akhlis Purnomo
    • Sundanese Team:
      • Fauzan Alfi Agirachman
      • Muhammad Fadhil
      • Mira Marsellia
      • Yusup Ramdani
      • Iskandar Alisyahbana Adnan
  • Ahmad Nourallah, who localized numerous Support articles as part of the “Top 20” month.

Know someone in your l10n community who’s been doing a great job and should appear here? Contact one of the l10n-drivers and we’ll make sure they get a shout-out (see list at the bottom)!

Useful Links

Questions? Want to get involved?

Did you enjoy reading this report? Let us know how we can improve by reaching out to any one of the l10n-drivers listed above.

Daniel Stenberg: More curl bug bounty

Together with Bountygraph, the curl project now offers money to security researchers for reporting security vulnerabilities to us.

The idea is that sponsors donate money to the bounty fund, and we use that fund to hand out rewards for reported issues. It is a way for the curl project to help compensate researchers for the time and effort they spend helping us improve our security.

Right now the bounty fund is very small as we just started this project, but hopefully we can get a few sponsors interested and soon offer "proper" rewards at decent levels in case serious flaws are detected and reported here.

If you're a company using curl or libcurl and value security, you know what you can do...

People who reported security problems could already ask for money from Hackerone's IBB program; this new program is in addition to that, even though you won't be able to receive money from both bounties for the same issue.

After I announced this program on Twitter yesterday, I did an interview with Arif Khan. Here's what I had to say:

A few questions

Q: You have launched a self-managed bug bounty program for the first time. Earlier, IBB used to pay out for most security issues in libcurl. How do you think the idea of self-management of a bug bounty program, which has some obvious problems such as active funding might eventually succeed?

First, this bounty program is run on Bountygraph, so I wouldn't call it "self-managed" since we're standing on a lot of infra set up and handled by others.

To me, this is an attempt to make a bounty program that is more visible as clearly a curl bounty program. I love Hackerone and the IBB program for what they offer, but it is A) very generic, so the fact that you can get money for curl flaws there is not easy to figure out and there's no obvious way for companies to sponsor curl security research and B) they are very picky about which flaws they pay money for ("only critical flaws") and I hope this program can be a little more accommodating - assuming we get sponsors of course.

Will it work and make any differences compared to IBB? I don't know. We will just have to see how it plays out.

Q: How do you think the crowdsourcing model is going to help this bug bounty program?

It's crucial. If nobody sponsors this program, there will be no money to do payouts with and without payouts there are no bounties. Then I'd call the curl bounty program a failure. But we're also not in a hurry. We can give this some time to see how it works out.

My hope is though that because curl is such a widely used component, we will get sponsors interested in helping out.

Q: What would be the maximum reward for most critical a.k.a. P0 security vulnerabilities for this program?

Right now we have a total of 500 USD to hand out. If you report a p0 bug now, I suppose you'll get that. If we just get sponsors, I'm hoping we should be able to raise that reward level significantly. I might be very naive, but I think we won't have to pay for very many critical flaws.

It goes back to the previous question: this model will only work if we get sponsors.

Q: Do you feel there’s a risk that bounty hunters could turn malicious?

I don't think this bounty program particularly increases or reduces that risk to any significant degree. Malicious hunters probably already exist and I would assume that blackhat researchers might be able to extract more money on the less righteous markets if they're so inclined. I don't think we can "outbid" such buyers with this program.

Q: How will this new program mutually benefit security researchers as well as the open source community around curl as a whole?

Again, assuming that this works out...

Researchers can get compensated for the time and efforts they spend helping the curl project to produce and provide a more secure product to the world.

curl is used by virtually every connected device in the world in one way or another, affecting every human in the connected world on a daily basis. By making sure curl is secure we keep users safe; users of countless devices, applications and networked infrastructure.

Update: just hours after this blog post, Dropbox chipped in 32,768 USD to the curl bounty fund...

Niko Matsakis: Office Hours #0: Debugging with GDB

This is a report on the first “office hours”, in which we discussed debugging Rust programs with gdb. I’m very grateful to Ramana Venkata for suggesting the topic, and to Tom Tromey, who joined in. (Tom has been doing a lot of the work of integrating rustc into gdb and lldb lately.)

This blog post is just going to be a quick summary of the basic workflow of using Rust with gdb on the command line. I’m assuming you are using Linux here, since I think otherwise you would prefer a different debugger. There are probably also nifty graphical tools you can use and maybe even IDE integrations, I’m not sure.

The setting

We specifically wanted to debug some test failures in a cargo project (esprit). When running cargo test, some of the tests would panic, and we wanted to track down why. This particular crate is also nightly only.

How to launch gdb

The first step is to find the executable that runs the tests. This can be done by running cargo test -v and looking in the output for the final Running line. In this particular project (esprit), we needed to use nightly, so the command was something like:

> cargo +nightly test -v
     Running `/home/espirit/target/debug/deps/prettier_rs-7c95ceaface142a9`

Then one can invoke gdb with that executable. Note also that you need to be running a version of gdb that is somewhat recent in order to get good Rust support (ideally in the 8.x series). You can test your version of gdb by running gdb -v:

> gdb -v
GNU gdb (GDB) Fedora 8.1-15.fc28

To run gdb, it is recommended that you use the rust-gdb wrapper, which adds some Rust-specific pretty printers and other configuration. This is installed by rustup, and hence it respects the +nightly flag. In this case, we want to invoke it with the test executable. We are also going to set the environment variable RUST_TEST_THREADS to 1; this prevents the test runner from using multiple threads, since that complicates the process of stepping through the binary:

> RUST_TEST_THREADS=1 rust-gdb target/debug/deps/prettier_rs-7c95ceaface142a9

Once you are in gdb

Once you are in gdb, you can run the program by typing run (or just r). But in this case it will just run, find the test failure, and then exit, which isn’t exactly what we wanted: we wanted execution to stop when the panic! occurs and let us inspect what’s going on. To do that, you will need to set a breakpoint. In this case, we want to set it on the special function rust_panic, which is defined in libstd for this exact purpose. We can do that with the break command, as shown below. After setting the break, then we can run:

> break rust_panic
Breakpoint 1 at 0x55555564e273: file libstd/, line 525.
> run

Now when the panic occurs, we will trigger the breakpoint, and gdb gives us back control. At this point, you can use the bt command to get a backtrace, and the up command to move up and inspect the callers’ state. You may also enjoy the “TUI mode”. Anyway, I’m not really going to try to teach GDB here, I’m sure there are much better tutorials available.

One thing I did not know: gdb even supports the ability to use a limited subset of Rust expressions from within the debugger, so you can do things like p foo.0 to access the first field of a tuple. You can even call functions and methods, but not through traits.
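As a quick sketch, an inspection session after the breakpoint fires might look like this (the variable and field names are made up for illustration):

```
(gdb) bt                 # full backtrace from the rust_panic breakpoint
(gdb) up                 # move up into the caller's frame
(gdb) print self.name    # field access on a struct
(gdb) print pair.0       # Rust tuple syntax works too
```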

Final note: use rr

Another option that is worth emphasizing is that you can use the rr tool to get reversible debugging. rr basically extends gdb but allows you to not only step and move forward through your program, but also backward. So – for example – after we break on rust_panic, we could execute backwards and see what happened that led us there. Using rr is pretty straightforward and is explained here. (There is also Huon’s old blog post, which still seems fairly accurate.) I could not, however, figure out how to use rust-gdb with rr replay, but even just plain old gdb works ok – I filed #54433 about using rust-gdb and rr replay, so maybe the answer is in there.

Ideas for the future

gdb support works pretty well. There were some rough edges we encountered:

  • Dumping hashmaps and btree-maps doesn’t give useful output. It just shows their internal representation, which you don’t care about.
  • It’d be nice to be able to do cargo test --gdb (or, even better, cargo test --rr) and have it handle all the details of getting you into the debugger.

The Rust Programming Language Blog: Security advisory for the standard library

The Rust team was recently notified of a security vulnerability affecting the standard library’s str::repeat function. When passed a large number this function has an integer overflow which can lead to an out of bounds write. If you are not using str::repeat, you are not affected.
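As a rough illustration of this class of bug (this is not the actual patched code): str::repeat must compute the output length as len * n, and with a large enough n that multiplication can wrap around the usize range, producing an undersized buffer that is then written past its end. Checked arithmetic makes the overflow visible instead:

```rust
fn main() {
    // Normal use of str::repeat.
    assert_eq!("ab".repeat(3), "ababab");

    // The output length is self.len() * n. With a huge n, that
    // multiplication exceeds usize::MAX; checked_mul reports the
    // overflow rather than silently wrapping.
    let len: usize = "ab".len(); // 2
    let huge: usize = usize::MAX / 2 + 1;
    assert!(len.checked_mul(huge).is_none()); // overflow detected
    println!("overflow detected before any allocation");
}
```

The fixed versions panic on such an overflow, which is safe behavior; only code passing attacker-controlled repeat counts was at risk.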

We’re in the process of applying for a CVE number for this vulnerability. Fixes for this issue have landed in the Rust repository for the stable/beta/master branches. Nightlies and betas with the fix will be produced tonight, and 1.29.1 will be released on 2018-09-25 with the fix for stable Rust.

You can find the full announcement on our rustlang-security-announcements mailing list here.

Daniel Pocock: Resigning as the FSFE Fellowship's representative

I've recently sent the following email to fellows, I'm posting it here for the benefit of the wider community and also for any fellows who don't receive the email.

Dear fellows,

Given the decline of the Fellowship and FSFE's migration of fellows into a supporter program, I no longer feel that there is any further benefit that a representative can offer to fellows.

With recent blogs, I've made a final effort to fulfill my obligations to keep you informed. I hope fellows have a better understanding of who we are and can engage directly with FSFE without a representative. Fellows who want to remain engaged with FSFE are encouraged to work through your local groups and coordinators as active participation is the best way to keep an organization on track.

This resignation is not a response to any other recent events. From a logical perspective, if the Fellowship is going to evolve out of a situation like this, it is in the hands of local leaders and fellowship groups, it is no longer a task for a single representative.

There are many positive experiences I've had working with people in the FSFE community and I am also very grateful to FSFE for those instances where I have been supported in activities for free software.

Going forward, leaving this role will also free up time and resources for other free software projects that I am engaged in.

I'd like to thank all those of you who trusted me to represent you and supported me in this role during such a challenging time for the Fellowship.


Daniel Pocock

Firefox UX: 8 tips for hosting your first participatory workshop

Some practical suggestions from our workshop hosting experience.

<figcaption>A scene of our participatory workshop</figcaption>

“Why not give it a try?” Ricky, our senior user researcher said.
“Design with people in my parents age without any design backgrounds? In-ter-est-ing……!” I couldn’t believe that he just threw such a crazy idea in our design planning meeting.

Before we go through the whole story, let me give you more context. The Mozilla Taipei UX team is currently working on a new product exploration to improve the online experience of people between the ages of 55 and 65 in Taiwan. From 2 months and 4 rounds of in-depth interviews with 34 participants, we came to understand our target users holistically, from their internet behaviors and unmet needs to their lifestyles. After hosting a condensed 2-day design sprint in the Taipei office to generate brilliant product concepts (more stories, stay tuned :)), we were about to reach the stage of validation.

How do we know if the concepts solve users’ pain points? What would be the best validation method? We discussed enthusiastically in the meeting. As designers 30+ years younger than our users, we weren’t fully confident in our concepts. Then we realized that a participatory workshop might be a good approach, especially for such a special segment of target users.

We have lots of experience hosting workshops, but most of them have been internal. This was our first time hosting a participatory workshop externally, and also our first time having a group of participants with lots of life experience but none in design. That’s why I felt so anxious when we decided to give it a try. But in the end, we nailed it. We were very satisfied with what we learned from the whole process, and with how efficient it was (compared to individual user interviews).

Here are the tips summarized from our experience.

Set Goals

Starting with clear goals is the key to the entire workshop

Since we couldn’t accomplish everything we had in mind in just one 2.5-hour workshop, we needed to set clear goals and scope at the outset to make sure we were focusing in the right direction. During our planning session, we went through all the hypotheses and product concepts, and discussed the questions we had for validation.

After prioritization, the 3 goals we wanted to achieve through the participatory workshop were: validating the hypothetical scenarios, getting users’ feedback on the product concepts, and getting more insights from their crazy ideas.

Invite Participants

Participants who are extreme users help us get more valuable insights

Phone interviews are very useful for screening participants. Before we decided to hand out an invitation, we needed to make sure that the candidate not only fit our participants’ criteria, but was also an expert on the use cases we set. Influencers who can express their digital experience precisely are the best, because we’ll ask them to share their thoughts on the design topics, not just answer interview questions (from previous street-intercept experience, we found that some people in this age bracket couldn’t do so).

<figcaption>The extreme user shared his cross-device online experiences with the team</figcaption>

3+1+1 is a magic combination of a team in the workshop

We set up 2 teams in the workshop. In each of the team, we have:

3 Participants
Consist of all genders, and from diverse industries.

<figcaption>A designer took notes from his observation</figcaption>

1 Designer
Considering our senior participants don’t have any design experience, we invited a designer to join each team’s design conversation to avoid having 3 strangers staring at each other all the time. Surprisingly, our participants all expressed their thoughts quite well and contributed ideas smoothly.

However, senior participants preferred to speak their thoughts out loud instead of writing things down, so we immediately switched the designer’s role to note taker. From what we learned, we suggest having the designers:

  • Listen and note down the differences between general users and participants.
  • Explore more insights and visualize the ideas.
<figcaption>A facilitator showed some examples to the participants</figcaption>

1 Facilitator
The facilitator is in charge of maintaining the momentum in each session. Instead of brainstorming with participants, he/she will focus on:

  • Creating a vibrant atmosphere and engaging every single participant.
  • Triggering ideas by providing some examples.
  • Keeping time and redirecting to the topic when needed.

Design the Workshop

The warm-up activity kick-started the creative and imaginative atmosphere

<figcaption>The atmosphere was delightful from the outset</figcaption>

Don’t forget to add this most interesting session to your workshop! If you arrange the warm-up activity successfully, the participants will not only practice creative thinking but also get more familiar with the other team members in a delightful way.

The activity we ran was a superpower game similar to the icebreaker in Gamestorming. In the first 10 minutes, we asked participants and designers to share their own superpower from daily life, such as making 6 dishes at a time or finding the best merchandise deals. This was a good chance for them to get a peek into other team members’ lives and personalities, and also for us to build a sense of the participants’ profiles.

Then we provided a sheet with a bunch of unrealistic terms, from which they selected the ones they wished to own as a future superpower. The terms were so odd that participants had to discuss them with each other to figure out what they meant, and through the activity they got a sense that “nothing is impossible”.

<figcaption>Participants started some conversations by discussing the terms on the sheet</figcaption>

Validate the scenarios by asking true/false questions

Answering true or false is easier than creating scenarios from scratch. We prepared 10 scenario cards related to the product concepts, with a true-or-false question for each card. When designing the scenario cards, we intentionally kept the statements vague, so that we could ask participants to explain what the real scenarios look like without leading them. Here’s an example:

😟 “I want to bookmark this important article in my browser.”
😐 “I want to keep this important content in my space.”
😀 “I want to keep this important content organized in my space.”

As you can see, we replaced “bookmark” with the broader term “keep”, which opened up more discussion about participants’ current methods for collecting things from the internet. It could be “download”, “save”, “copy & paste”, or anything else, and that’s exactly what we wanted to learn from the participants. The same reasoning applies to replacing “article” with “content” and “browser” with “space”.

Mixing two related actions into one card is another trick. By adding the word “organized” to the statement, we could have another round of discussion to validate a possible follow-up behavior in the same session. Participants were free to correct the scenarios we offered, so we ended up with several validated scenarios by the end of the session.

<figcaption>Participants explained what their scenarios look like when answering true/false questions</figcaption>

Prepared design components can help participants easily generate ideas

<figcaption>Using prepared design components to generate ideas is easier than starting with blank sheets</figcaption>

Participatory design isn’t about asking people to do the designers’ job. It’s an interactive approach, unlike passive user interviews, that lets us learn more about the underlying needs and motives behind each idea. The subjects we asked participants to design weren’t all feasible, but they allowed us to gain more insight into participants’ priorities and preferences. After the workshop, we refined our product concepts by injecting ideas that emerged from the co-design activities.

We ran 2 different types of brainstorming for each topic: free brainstorming and brainwriting. As we expected, free brainstorming was a bit difficult for participants to get started with, so we prepared some examples to inspire them. Brainwriting, meanwhile, didn’t work well for the senior participants because of their dislike for writing, mentioned above. Here are some tips we learned from running the brainstorming session:

  • Reveal design opportunities by pointing out keywords in the subject that can help participants get started.
  • Provide paper UI components on the table so participants can present their ideas physically by cutting and sticking.
  • Keep reminding participants to think wildly and forget feasibility concerns.
  • To gather insights, always step into the conversations and ask participants why they came up with their ideas.

While Hosting the Workshop

Time management is important

Remember to set up a timer when running a workshop. You can use free tools such as vClock and Timertimer to show a full-screen countdown on a big screen, which makes it easier for facilitators to pause conversations and move participants forward. Listing the major tasks to be accomplished at the beginning of the workshop can also prevent participants from spending too much energy on unnecessary topics. Time is limited, so make sure people can finish all the tasks before they walk away from the room.

Learn more about participants’ thoughts by observing their nonverbal language during discussions

We expected some debates to happen in the workshop, especially in the brainstorming session. Every participant’s opinion carries equal weight for us, so it’s very important for facilitators to ensure everyone can voice their thoughts. Reaching consensus within a group of seniors is more difficult than with younger groups, so our discussions ran longer than in other workshops we’ve hosted. However, having all team members agree on the same opinion wasn’t our main focus. We kept an eye on silent participants by observing their nonverbal language, and then approached them individually to ask for their opinions. When we did, they were very generous in sharing valuable feedback, which helped us understand the discussion better.

“If I had asked people what they wanted,
they would have said faster horses.”

This is the most famous quote attributed to Henry Ford. Let’s not debate whether Ford ever said it, but focus on the quote itself. We literally ran through a similar process in our participatory workshop: asking people what they wanted. However, rather than pivoting to build the “faster horses”, we joined their discussion to uncover the unmet needs behind their “faster horses” ideas and then refined our product concepts based on those insights.

A participatory workshop isn’t a silver bullet for every project, but it is a good methodology for experiencing user-centered design first-hand in an efficient way.

Special thanks to my colleagues: Ricky, Helen, Juwei and Mark for making this workshop happen, and our manager, Harly for the stunning photoshoots :)

Participatory Design: what it is, what it isn’t and how it actually works
Design Thinking: Getting Started with Empathy

8 tips for hosting your first participatory workshop was originally published in Firefox User Experience on Medium, where people are continuing the conversation by highlighting and responding to this story.

Mozilla Addons BlogThe future of themes is here!

Themes have always been an integral part of the add-ons ecosystem and AMO. The current generation of themes – also known as lightweight themes and previously known as Personas (long story) – was introduced to AMO in 2009. There are now over 400 thousand of them available on AMO. Today we’re announcing the AMO launch of the next major step in the evolution of Firefox themes.

If you follow this blog, this shouldn’t come as a surprise. We’ve talked about theme updates a few times before. We actually turned on the new theme submission flow for testing a couple of weeks ago, but didn’t remove the old one. We’ve now flipped the switch and AMO will only accept the new themes.

What’s new about themes

Lightweight themes allowed designers to set a background image for the main browser toolbox, as well as the text color and background color. With this update, themes let you do much more:

  • Change other aspects of the browser, like the color of the toolbar icons, the color of the text in the location bar, and the color of the active tab.
  • Set multiple background images, with different alignment and tiling. You no longer need a massive background image, or have to guess the width and height of the browser toolbox.
  • Use color transparency to make interesting color blends.

Here’s an example of one of the recently-submitted themes using some of these new properties:

Orange theme

A detailed list of the supported theme properties can be found in this MDN article. If you scroll down to the compatibility table, you’ll find many properties that only very recent versions of Firefox support. That’s because Firefox engineers are still adding new theme capabilities, making them more powerful with every release.
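For context, these new themes are ordinary WebExtensions whose manifest.json declares a theme key. A minimal manifest might look like the sketch below; the theme name and file name are made up, and the exact property names vary between Firefox versions, so check the MDN article for the authoritative list:

```json
{
  "manifest_version": 2,
  "name": "Orange Lava",
  "version": "1.0",
  "theme": {
    "images": {
      "headerURL": "background.png"
    },
    "colors": {
      "accentcolor": "#e66000",
      "textcolor": "#ffffff",
      "toolbar": "#4a4a4f",
      "toolbar_text": "#ffffff"
    }
  }
}
```

Zip this file together with the background image and you have a theme XPI.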

How to submit themes now

If you’re a theme designer, the submission flow for themes has changed a bit.

  • In the Developer Hub, the Submit a New Theme button will take you to the new submission flow, which is the same used for extensions.
  • You’ll be able to choose if you want to host your theme on AMO or distribute it yourself. This feature has been available for extensions for years, and it allows you to create files you can host on your website or keep for personal use. More on Distribution.
  • On the next step, you can choose to either upload an XPI file or Create a Theme. The outcome of either path is the same.
  • These instructions explain how to build a theme XPI. If you prefer using a wizard like the one we had for lightweight themes, click on the Create a Theme button.

Themes Creation Wizard

  • The new wizard supports the theme features of its predecessor, as well as some of the new ones. To take advantage of all the new properties, however, you’ll need to upload an XPI.
  • The browser preview image at the bottom of the screenshot becomes the main image on your theme page. It better reflects how Firefox will look after the theme is installed, instead of just showing the background image.

If you run into any problems with these new tools, please report them here.

What about Personas Plus?

The Personas Plus extension has been a handy companion for theme designers for years. It makes it easy to create themes, preview them, and use them locally. Its successor in the new world of themes is Firefox Color.

Firefox Color is exclusively a development tool for themes, so it doesn’t match all features in Personas Plus. However, it should cover what is needed for easy theme creation.

Migrating Lightweight Themes

What about the 400K+ themes already hosted on AMO? We’re keeping them, of course, and we will transform them to the new format later this year. So if you’re a theme designer and want your theme updated, don’t worry: we’ve got you covered. And please don’t submit duplicate themes!

After the migration is done, we’ll notify you about it. The main difference you’ll notice is the new preview image in the theme page. You’ll then be able to submit new versions of your theme that take advantage of the new theme properties.

You’ll also notice that all new and migrated themes have different editing tools to change their descriptions. They are very similar to the tools we use for extensions. They may take a bit of getting used to, but they provide great benefits over the lightweight theme tools. You’ll be able to set a Contributions URL, so your users can compensate you for your work. Also, you get a detailed stats dashboard so you can learn about your users.

uBlock Statistics Dashboard

This level of success not guaranteed

This may seem like a small step, but it’s actually been a large undertaking. It’s taken years and over a dozen people on the Firefox and AMO teams to finally get this out the door. I won’t even try to list everyone because I’m sure I’d forget someone (but thank you all anyway!). We’re very excited about these new themes, and hope they will lead to even more and better Firefox customization.

The post The future of themes is here! appeared first on Mozilla Add-ons Blog.

Hacks.Mozilla.OrgPerformance-Tuning a WebVR Game

For the past couple of weeks, I have been working on a VR version of one of my favorite puzzle games, the Nonogram, also known as Picross or Griddlers. These are puzzles where you must figure out which cells in a grid are colored in by using column and row counts. I thought this would be perfect for a nice, relaxing VR game. I call it Lava Flow.


Since Lava Flow is meant to be a casual game, I want it to load quickly. My goal is for the whole game to be as small as possible and to load in under 10 seconds on a 3G connection. I also want it to run at a consistent 60 frames per second (fps) or better. A consistent frame rate is the most important consideration when developing WebVR applications.

Measure first

Before I can improve the performance or size, I need to know where I’m starting. The best way to see how big my application really is turns out to be the Network tab of the Firefox Developer Tools. Here’s how to use it.

Open the Network tab of the Firefox developer tools, click Disable cache, then reload the page. At the bottom of the page, I can see the total page size. When I started this project, it was 1.8MB.

My application uses three.js, the popular open source 3D library. Not surprisingly, the biggest item is three.js itself. I wasn’t using the minified version, so it was over 1MB! By loading three.min.js instead, the size is now 536KB instead of 1110KB: less than half the size.

The game uses two 3D models of rocks. These are in GLB format, which is compressed and optimized for web usage. My two rocks are already weighing in at less than 3KB each, so there is not much to optimize there. The JavaScript code I’ve written is a bunch of small files. I could use a compressor later to reduce the fetch count, but it’s not worth worrying about yet.

Image compression

The next biggest resources are the two lava textures. They are PNGs and collectively add up to 561KB.

I re-compressed the two lava textures as JPEGs with medium quality. Also, since the bump map image doesn’t need to be as high resolution as the color texture, I resized it from 512×512 to 128×128. That dropped the sizes from 234KB to 143KB and 339KB to 13KB. Visually there isn’t much difference, but the page is now down to 920KB.

The next two big things are a JSON font and the GLTFLoader JavaScript library. Both of those can be gzip compressed, so I won’t worry about them yet.


Now let’s play the game and make sure everything still works. It looks good. Wait, what’s that? A new network request? Of course: the audio files! The sound isn’t triggered until the first user interaction, and then it downloads over 10MB of MP3s. Those aren’t accounted for by the DefaultLoader because I’m loading them through audio tags instead of JavaScript. Gah!

I don’t really want to wait for the background music to load, but it would be nice to have the sound effects preloaded. Plus, audio elements don’t offer the control I need for sound effects, nor are they 3D-capable. So I moved the effects to Audio objects loaded with the AudioLoader in three.js. Now they are fully loaded on app start and accounted for in the download time.

With all of the audio (except the background theme), everything is 2.03 MB. Getting better.


There is a weird glitch where the whole scene pauses when rebuilding the game board. I need to figure out what’s going on there. To help debug the problems, I need to see the frames per second inside of VR Immersive mode. The standard stats.js module that most three.js apps use actually works by overlaying a DOM element on top of the WebGL canvas. That’s fine most of the time but won’t work when we are in immersive mode.

To address this, I created a little class called JStats which draws stats to a small square anchored to the top of the VR view. This way you can see it all the time inside of immersive mode, no matter what direction you are looking.

I also created a simple timer class to let me measure how long a particular function takes to run. After a little testing, I confirmed that anywhere from 250 to 450 msec is required to run the setupGame function that happens every time the player gets to a new level.

I dug into the code and found two things. First, each cell was using its own copy of the rounded-rectangle geometry and material. Since these are the same for every cell, I can create them once and reuse them for each cell. Second, I was creating the text overlays every time the level changed. This isn’t necessary: we only need one copy of each text object, and they can be reused between levels. By moving this to a separate setupText function, I saved several hundred milliseconds; it turns out triangulating text is very expensive. Now the average board setup is about 100 msec, which shouldn’t be noticeable even on a mobile headset.
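The geometry-sharing fix can be sketched in plain JavaScript (the names here are illustrative, not the game's actual code): build the shared resource once, up front, and hand the same object to every cell instead of allocating one per cell.

```javascript
// Build a board whose cells all share one geometry object.
function makeBoard(size, makeGeometry) {
  const sharedGeometry = makeGeometry(); // created once, up front
  const cells = [];
  for (let i = 0; i < size * size; i++) {
    // Every cell references the same geometry instead of its own copy.
    cells.push({ geometry: sharedGeometry });
  }
  return cells;
}

const board = makeBoard(10, () => ({ vertices: 4 }));
console.log(board[0].geometry === board[99].geometry); // true: one shared object
```

With three.js, the same idea means constructing the rounded-rectangle geometry and material once and passing those objects to each `THREE.Mesh`.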

As a final test I used the network monitor to throttle the network down to 3G speeds, with the cache disabled. My goal is for the screen to first be painted within 1 second and the game ready to play within 10 seconds. The network screen says it takes 12.36 seconds. Almost there!

Two steps forward, one step back

As I worked on the game, I realized a few things were missing.

  • There should be a progress bar to indicate that the game is loading.
  • The tiles need sound effects when entering and exiting the mouse/pointer.
  • There should be music each time you complete a level.
  • The splash screen needs a cool font.

The progress bar is easy to build because the DefaultLoadingManager provides callbacks. I created a progress bar in the HTML overlay like this:

<progress id="progress" value="0" max="100"></progress> 

Then I update it whenever the loading manager reports progress:

THREE.DefaultLoadingManager.onProgress = (url, loaded, total) => {
  document.getElementById('progress').value = 100 * loaded / total
}

Combined with some CSS styling it looks like this:

Battling bloat

Next up is music and effects. Adding the extra music is another 133KB + 340KB, which bloats the app by nearly another half a megabyte. Uh oh.

I can get a little of that back with fonts. Currently I’m using one of the standard three.js fonts, which are in JSON format. This is not a terribly efficient format; files can be anywhere from 100KB to 600KB depending on the font. It turns out three.js can now load TrueType fonts directly, without first converting to JSON. The font I picked is called Hobby of Night by deFharo, and it’s only 80KB. However, loading a TTF file requires TTFLoader.js (4KB) and opentype.min.js, which is 124KB. So I’m still loading more than before, but at least opentype.min.js will be amortized across all of the fonts. It doesn’t help today, since I’m only using one font, but it will help in the future. So that’s another 100KB or so I’m using up.

The lesson I’ve learned today is that optimization is always two steps forward and one step back. I have to investigate everything and spend the time polishing both the game and loading experience.

The game is currently about 2.5MB. Using the Good 3G setting, it takes 13.22 seconds to load.

Audio revisited

When I added the new sound effects, I thought of something. All of my sounds come from a site that generally provides them in WAV format. I had used iTunes to convert them to MP3s, but iTunes may not use the most optimized settings. Looking at one of the files, I discovered it was encoded at 192kbps, the iTunes default. Using a command-line tool, I bet I could compress them further.

I installed ffmpeg and reconverted the 30-second song like this:

ffmpeg -i piano.wav -codec:a libmp3lame -qscale:a 5 piano.mp3

It went from 348KB to 185KB. That’s a 163KB savings! In total the sounds went from 10MB to 4.7MB, greatly reducing the size of my app. The total download size to start the game without the background music is now 2.01MB.

Sometimes you get a freebie

I loaded the game to my web server here and tested it on my VR headsets to make sure everything still works. Then I tried loading the public version in Firefox again with the network tab open. I noticed something weird. The total download size is smaller! In the status bar it says: 2.01 MB/1.44 MB transferred. On my local web server where I do development, it says: 2.01 MB/2.01 MB transferred. That’s a huge difference. What accounts for this?

I suspect it’s because my public web server does gzip compression and my local web server does not. For an MP3 file this makes no difference, but for highly compressible files like JavaScript it can be huge. For example, the three.min.js file is 536.08KB uncompressed but an astounding 135.06KB compressed. Compression makes a huge difference. And now the download is just 1.44MB, and download time over Good 3G is 8.3 seconds. Success!

I normally do all of my development on the local web server and only use the public one when the project is ready. This is a lesson to look at everything and always measure.

In the end

These are the lessons I learned while tuning my application. Making the first version of a game is easy. Getting it ready for release is hard. It’s a long slog, but the end results are so worth it. The difference between a prototype and a game is sweating the little details. I hope these tips help you make your own great WebVR experiences.

The post Performance-Tuning a WebVR Game appeared first on Mozilla Hacks - the Web developer blog.

Chris AtLeeSo long Buildbot, and thanks for all the fish

Last week, without a lot of fanfare, we shut off the last of the Buildbot infrastructure here at Mozilla.

Our primary release branches have been switched over to taskcluster for some time now. We needed to keep buildbot running to support the old ESR52 branch. With the release of Firefox 60.2.0esr earlier this month, ESR52 is now officially end-of-life, and therefore so is buildbot here at Mozilla.

Looking back, the first commit to our buildbot-configs repository was made over 10 years ago, on April 27, 2008, by Ben Hearsum: "Basic Mozilla2 configs". Buildbot usage at Mozilla actually predates that by at least two years; Ben was working on some patches in 2006.

Earlier in my career here at Mozilla, I was doing a lot of work with Buildbot, and blogged quite a bit about our experiences with it.

Buildbot served us well, especially in the early days. There really were no other CI systems at the time that could operate at Mozilla's scale.

Unfortunately, as we kept increasing the scale of our CI and release infrastructure, even buildbot started showing some problems. The main architectural limitations of buildbot we encountered were:

  1. Long-lived TCP sessions had to stay connected to specific server processes. If the network blipped, or you needed to restart a server, then any jobs running on workers were interrupted.

  2. Its monolithic design meant that small components of the project were hard to develop independently from each other.

  3. The database schema used to implement the job queue became a bottleneck once we started doing hundreds of thousands of jobs a day.

On top of that, our configuration for all the various branches and platforms had grown over the years to a complex set of inheritance rules, defaults, and overrides. Only a few brave souls outside of RelEng managed to effectively make changes to these configs.

Today, much, much more of the CI and release configuration lives in-tree. This has many benefits, including:

  1. Changes are local to the branches they land on. They ride the trains naturally. No need for ugly looooooops.

  2. Developers can self-service most of their own requests. Adding new types of tests, or even changing the compiler are possible without any involvement from RelEng!

Buildbot is dead! Long live taskcluster!

QMOFirefox 63 Beta 10 Testday, September 28th

Hello Mozillians,

We are happy to let you know that Friday, September 28th, we are organizing Firefox 63 Beta 10 Testday. We’ll be focusing our testing on: Firefox Customize, Font UI, Tracking protection.

Check out the detailed instructions via this etherpad.

No previous testing experience is required, so feel free to join us on #qa IRC channel where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better!

See you on Friday!

Hacks.Mozilla.OrgDweb: Creating Decentralized Organizations with Aragon

In the Dweb series, we are covering projects that explore what is possible when the web becomes decentralized or distributed. These projects aren’t affiliated with Mozilla, and some of them rewrite the rules of how we think about a web browser. What they have in common: These projects are open source and open for participation, and they share Mozilla’s mission to keep the web open and accessible for all.

While many of the projects we’ve covered build on the web as we know it or operate like the browsers we’re familiar with, the Aragon project has a broader vision: Give people the tools to build their own autonomous organizations with social mores codified in smart contracts. I hope you enjoy this introduction to Aragon from project co-founder Luis Cuende.

– Dietrich Ayala

Introducing Aragon

I’m Luis. I cofounded Aragon, which allows for the creation of decentralized organizations. The principles of Aragon are embodied in the Aragon Manifesto, and its format was inspired by the Mozilla Manifesto!

Here’s a quick summary.

  • We are in a key moment in history: Technology either oppresses or liberates us.
  • That outcome will depend on common goods being governed by the community, and not just nation states or corporate conglomerates.
  • For that to happen, we need technology that allows for decentralized governance.
  • Thanks to crypto, decentralized governance can provide new means of organization that don’t entail violence or surveillance, therefore providing more freedom to the individual and increasing fairness.

With Aragon, developers can create new apps, such as voting mechanisms, that use smart contracts to leverage decentralized governance and allow peers to control resources like funds, membership, and code repos.

Aragon is built on Ethereum, which is a blockchain for smart contracts. Smart contracts are software that is executed in a trust-less and transparent way, without having to rely on a third-party server or any single point of failure.

Aragon is at the intersection of social, app platform, and blockchain.


The Aragon app is one of the few truly decentralized apps. Its smart contracts and front end are upgradeable thanks to aragonOS and the Aragon Package Manager (APM). You can think of APM as a fully decentralized, community-governed NPM. The smart contracts live on the Ethereum blockchain, and APM stores a log of their versions. APM also keeps a record of arbitrary data blobs hosted on decentralized storage platforms like IPFS, which in our case we use to store the front end for the apps.

Aragon architecture diagram

The Aragon app allows users to install new apps into their organization, and those apps are embedded in sandboxed iframes. All the apps use Aragon UI, so users don’t even notice they are interacting with apps made by different developers. Aragon has a very rich permission system that lets users set what each app can do inside their organization. An example would be: up to $1 can be withdrawn from the funds if a vote passes with 51% support.

Aragon tech stack diagram

Hello World

To create an Aragon app, you can go to the Aragon Developer portal. Getting started is very easy.

First, install IPFS if you don’t have it already installed.

Second, run the following commands:

$ npm i -g @aragon/cli
$ aragon init foo.aragonpm.eth
$ cd foo
$ aragon run

Here we will show a basic counter app, which allows members of an organization to count up or down if a democratic vote happens, for example.

This would be the smart contract (in Solidity) that keeps track of the counter in Ethereum:

contract Counter is AragonApp {
    /**
     * @notice Increment the counter by 1
     */
    function increment() auth(INCREMENT_ROLE) external {
        // ...
    }

    /**
     * @notice Decrement the counter by 1
     */
    function decrement() auth(DECREMENT_ROLE) external {
        // ...
    }
}

This code runs in a web worker, keeping track of events in the smart contract and caching the state in the background:

// app/script.js
import Aragon from '@aragon/client'

// Initialize the app
const app = new Aragon()

// Listen for events and reduce them to a state
const state$ = app.store((state, event) => {
  // Initial state
  if (state === null) state = 0

  // Build state
  switch (event.event) {
    case 'Decrement':
      state--
      break
    case 'Increment':
      state++
      break
  }

  return state
})
Some basic HTML (not using Aragon UI, for simplicity):

<!-- app/index.html -->
<!doctype html>

<button id="decrement">-</button>
<div id="view">...</div>
<button id="increment">+</button>
<script src="app.js"></script>

And the JavaScript that updates the UI:

// app/app.js
import Aragon, { providers } from '@aragon/client'

const app = new Aragon(
  new providers.WindowMessage(window.parent)
)
const view = document.getElementById('view')

app.state().subscribe(
  function (state) {
    view.innerHTML = `The counter is ${state || 0}`
  },
  function (err) {
    view.innerHTML = 'An error occurred, check the console'
  }
)

aragon run takes care of updating your app on APM and uploading your local webapp to IPFS, so you don’t need to worry about it!

Learn More

You can go to Aragon’s website or the Developer Portal to learn more about Aragon. If you are interested in decentralized governance, you can also check out our research forum.

If you would like to contribute, you can look at our good first issues.

If you have any questions, please join the Aragon community chat!

The post Dweb: Creating Decentralized Organizations with Aragon appeared first on Mozilla Hacks - the Web developer blog.

Mozilla Open Policy & Advocacy BlogLessons from Carpenter – Mozilla panel discussion at ICDPPC

The US Supreme Court recently released a landmark ruling in Carpenter vs. United States, which held that law enforcement authorities must secure a warrant in order to access citizens’ cell-site location data. At the upcoming 40th Conference of Data Protection and Privacy Commissioners, we’re hosting a panel discussion to unpack what Carpenter means in a globalised world.

Event blurb:

The Court’s judgement in Carpenter rested on the understanding that communications metadata can reveal sensitive information about individuals, and that citizens have a reasonable expectation of privacy with respect to that metadata.

This panel discussion will seek to unpack what Carpenter says about users’ expectations of privacy in a fully-connected world. It will make this assessment through both a legal and an ethical lens, and compare the notion of expectation of privacy in Carpenter to other jurisdictions where data protection legislation is currently being debated. Finally, the panel will examine the types of metadata implicated by the Carpenter ruling: how sensitive is that data, and what legal standards should be applied given that sensitivity?


  • Pam Dixon, Founder and Executive Director, World Privacy Forum
  • Malavika Jayaram, Executive Director, Digital Asia Hub
  • Marshall Erwin, Director Trust & Security, Mozilla Corporation
  • European Commission, TBC
  • Moderator: Owen Bennett, Mozilla


Thursday 25 October, 14:30-15:50
The Stanhope Hotel,  Rue du Commerce 9, 1000 Brussels, Belgium


The post Lessons from Carpenter – Mozilla panel discussion at ICDPPC appeared first on Open Policy & Advocacy.

Mozilla Open Policy & Advocacy BlogThe future of online advertising – Mozilla panel discussion at ICDPPC

At the upcoming 40th International Conference of Data Protection and Privacy Commissioners, we’re convening a timely high-level panel discussion on the future of advertising in an open and sustainable internet ecosystem.

Event title: Online advertising is broken: Can ethics fix it?


There’s no doubt that advertising is the dominant business model online today, and it has allowed a plethora of platforms, services, and publishers to operate without direct payment from end users. However, there is clearly a crisis of trust among these end users, driving skepticism of advertising, annoyance, and a sharp increase in adoption of content blockers. Ad fraud, adtech centralization, and bad practices like cryptojacking and pervasive tracking have made the web a difficult, even hostile, environment for users and publishers alike.

While advertising is not the only contributing factor, it is clear that the status quo is crumbling. This workshop will bring together stakeholders from across the online ecosystem to examine the role that ethics, policy, and legislation (including the GDPR) play in increasing online trust, improving end user experience, and bolstering sustainable economic models for the web.


  • Katharina Borchert, Chief Innovation Officer, Mozilla
  • Catherine Armitage, Head of Digital Policy, World Federation of Advertisers
  • David Gehring, Co-founder and CEO, Distributed Media Lab
  • Matt Rogerson, Head of Public Policy, the Guardian
  • Moderator: Raegan MacDonald, Head of EU Public Policy, Mozilla


Tuesday 23 October 2018, 16:15-17:30
The Hotel, Boulevard de Waterloo 38, 1000 Bruxelles

Register here.

The post The future of online advertising – Mozilla panel discussion at ICDPPC appeared first on Open Policy & Advocacy.

Emily DunhamRunning a Python3 script in the right place every time

Running a Python3 script in the right place every time

I just wrote a thing in a private repo that I suspect I’ll want to use again later, so I’ll drop it here.

The situation is that there’s a repo, and I’m writing a script which shall live in the repo and assist users with copying a project skeleton into its own directory.

The script, newproject, lives in the bin directory within the repo.

The script needs to do things from the root of the repository for the paths of its file copying and renaming operations to be correct.

If it is invoked from somewhere other than the root of the repo, it must therefore change directory to the root of the repo before doing any other operations.

The snippet that I’ve tested to meet these constraints is:

import os

# chdir to the root of the repo if needed
if __file__.endswith("/bin/newproject"):
    # invoked by path: strip the /bin/newproject suffix to find the repo root
    os.chdir(__file__[:-len("/bin/newproject")] or "/")
elif __file__ == "newproject":
    # invoked from inside bin/ itself: the repo root is one directory up
    os.chdir("..")

In code review, it was pointed out that this simplifies to a one-liner:

os.chdir(os.path.join(os.path.dirname(__file__), '..'))

This will keep working right up until some malicious or misled individual moves the script to an entirely different location within the repository or filesystem and tries to run it from there.
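One way to harden the script against exactly that failure (my own sketch, not from the original post) is to ask git itself for the repository root and only fall back to the path-based guess; the `repo_root` helper below is hypothetical:

```python
import os
import subprocess

def repo_root(script_path):
    """Return the repo root for a script living anywhere inside the repo.

    Hypothetical helper: asks git for the toplevel directory, and falls back
    to 'one directory up from the script's directory' (the bin/ layout)
    when git is unavailable or the script is not in a git checkout.
    """
    try:
        out = subprocess.run(
            ["git", "-C", os.path.dirname(script_path) or ".",
             "rev-parse", "--show-toplevel"],
            capture_output=True, text=True, check=True)
        return out.stdout.strip()
    except (OSError, subprocess.CalledProcessError):
        return os.path.abspath(os.path.join(os.path.dirname(script_path), ".."))

# The script would then start with:
# os.chdir(repo_root(__file__))
```

This survives the script being moved within the repo, at the cost of depending on git being installed.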

Mike TaylorNotable moments in Firefox for Android UA string history

Back by popular demand*, here's a follow-up to my blog post on Firefox Desktop UA string changes.

(*actual demand for this blog post asymptotically approaches zero the further you read)

Early versions of Firefox for Android used a Linux desktop UA string with a Fennec/<fennecversion> product token appended to the end.

Since version 41, Firefox for Android has (generally) used the following UA string format:

Mozilla/5.0 (Android <androidversion>; <devicecompat>; rv:<geckoversion>) Gecko/<geckoversion> Firefox/<firefoxversion>

Gecko version   Sample Firefox for Android UA string
4    Mozilla/5.0 (Android; Linux armv7l; rv:2.1.1) Gecko/20110415 Firefox/4.0.2pre Fennec/4.0.1
11   Mozilla/5.0 (Android; Tablet; rv:11.0) Gecko/11.0 Firefox/11.0 Fennec/11.0  [1]
11   Mozilla/5.0 (Android; Tablet; rv:11.0) Gecko/11.0 Firefox/11.0  [2]
41   Mozilla/5.0 (Android 4.4.4; Mobile; rv:41.0) Gecko/41.0 Firefox/41.0  [3]
46   Mozilla/5.0 (Android 4.4.4; Mobile; CoolDevice; rv:46.0) Gecko/46.0 Firefox/46.0  [4]
46   Mozilla/5.0 (Android 4.4.4; Mobile; Custom CoolDevice/ABCDEFG; rv:46.0) Gecko/46.0 Firefox/46.0  [5]

Footnotes:

1. Version 11 added the notion of a <devicecompat> token to distinguish between Tablet and Mobile.

2. Version 11 also dropped the Fennec/<version> token for Native UI (non-XUL) builds.

3. For versions running on Android older than KitKat (v4), the Android version number is set to 4.4 [1] to avoid UA sniffing assumptions tied to the Android platform capabilities.

4. Version 46 also added the ability to add the Android device name, controlled by the pref general.useragent.use_device. This is probably not widely used, if at all.

5. Version 46 added the ability to add a custom device string, with optional device ID, controlled by the pref general.useragent.device_string. This is also probably not widely used, if at all.

Footnotes to the Footnotes:

1. If I had to write this patch again, I would choose an obviously non-real 4.4.99, so it would be sniffable as a spoofed value. Yes, I am familiar with the concept of irony.

Daniel PocockWhat is the relationship between FSF and FSFE?

Ever since I started blogging about my role in FSFE as Fellowship representative, I've been receiving communications and queries from various people, both in public and in private, about the relationship between FSF and FSFE. I've written this post to try to document my own experiences of the issue; maybe some people will find it helpful. These comments have also been shared on the LibrePlanet mailing list for discussion (subscribe here).

Being the elected Fellowship representative means I am both a member of FSFE e.V. and also possess a mandate to look out for the interests of the community of volunteers and donors (they are not members of FSFE e.V.). In both capacities, I feel uncomfortable about the current situation because of the confusion it creates in the community and the risk that volunteers or donors may be misled.

The FSF has a well known name associated with a distinctive philosophy. Whether people agree with that philosophy or not, they usually know what FSF believes in. That is the power of a brand.

When people see the name FSFE, they often believe it is a subsidiary or group working within the FSF. The way that brands work, people associate the philosophy with the name, just as somebody buying a Ferrari in Berlin expects it to do the same things that a Ferrari does in Boston.

To give an example, when I refer to "our president" in any conversation, people not knowledgeable about the politics believe I am referring to RMS. More specifically, if I say to somebody "would you like me to see if our president can speak at your event?", some people think it is a reference to RMS. In fact, FSFE was set up as a completely independent organization with distinct membership and management and therefore a different president. When I try to explain this to people, they sometimes lose interest and the conversation can go cold very quickly.

FSFE leadership have sometimes diverged from FSF philosophy, for example, it is not hard to find some quotes about "open source" and one fellow recently expressed concern that some people behave like "FSF Light". But given that FSF's crown jewels are the philosophy, how can an "FSF Light" mean anything? What would "Ferrari Light" look like, a red lawnmower? Would it be a fair use of the name Ferrari?

Some concerned fellows have recently gone as far as accusing the FSFE staff of effectively domain squatting or trolling the FSF (I can't link to that because of FSFE's censorship regime). When questions appear about the relationship in public, there is sometimes a violent response with no firm details. (I can't link to that either because of FSFE's censorship regime)

The FSFE constitution calls on FSFE to "join forces" with the FSF and sometimes this appears to happen but I feel this could be taken further.

FSF people have also produced vast amounts of code (the GNU Project) and some donors appear to be contributing funds to FSFE in gratitude for that or in the belief they are supporting that. However, it is not clear to me that funds given to FSFE support that work. As Fellowship representative, a big part of my role is to think about the best interests of those donors and so the possibility that they are being confused concerns me.

Given the vast amounts of money and goodwill contributed by the community to FSFE e.V., including a recent bequest of EUR 150,000 and the direct questions about this issue I feel it is becoming more important for both organizations to clarify the issue.

FSFE has a transparency page on the web site and this would be a good place to publish all documents about their relationship with FSF. For example, FSFE could publish the documents explaining their authorization to use a name derived from FSF and the extent to which they are committed to adhere to FSF's core philosophy and remain true to that in the long term. FSF could also publish some guidelines about the characteristics of a sister organization, especially when that organization is authorized to share the FSF's name.

In the specific case of sister organizations who benefit from the tremendous privilege of using the FSF's name, could it also remove ambiguity if FSF mandated the titles used by officers of sister organizations? For example, the "FSFE President" would be referred to as "FSFE European President", or maybe the word president could be avoided in all sister organizations.

People also raise the question of whether FSFE can speak for all Europeans given that it only has a large presence in Germany and other organizations are bigger in other European countries. Would it be fair for some of those other groups to aspire to sister organization status and name-sharing rights too? Could dozens of smaller FSF sister organizations dilute the impact of one or two who go off-script?

Even if FSFE was to distance itself from FSF or even start using a new name and philosophy, as a member, representative and also volunteer I would feel uncomfortable with that as there is a legacy of donations and volunteering that have brought FSFE to the position the organization is in today.

That said, I would like to emphasize that I regard RMS and the FSF, as the original FSF, as having the final authority over the use of the name and I fully respect FSF's right to act unilaterally, negotiate with sister organizations or simply leave things as they are.

If you have questions or concerns about this topic, I would invite you to raise them on the LibrePlanet-discuss mailing list or feel free to email me directly.

K Lars LohnThings Gateway - Rules Rule

A smart home is a lot more than just lights, switches and thermostats that you can control remotely from your phone.  To truly make a Smart Home, the devices must be reactive and work together.  This is generally done with a Rule System: a set of maxims that automate actions based on conditions.  It is automation that makes a home smart.

There are a couple of options for a rule system with the Things Gateway from Mozilla.  First, there is a rule system built into the Web GUI, accessed via the Rules option in the drop down menu.  Second, there is the Web Things API that allows programs external to the Things Gateway to automate the devices that make up a smart home.  Most people will gravitate to the former built-in system, as it is the most accessible to those without a predilection for writing software.   This blog post is going to focus on this rules system native to the Things Gateway.

The Rule System is an example of a graphical programming system.  Icons representing physical devices in the Smart Home are dragged onto a special display area and attached together to form a rule.  Rules are composed of two parts: a predicate and an action.

The predicates are logical conditions like "the bedroom light is on" or "somebody pushed the button marked 'do not press'".  These logical conditions can be chained together using operations like "AND" and "OR":  "somebody is in the room AND the television is on".

The actions consist of telling a set of devices to take on specific states.  These actions can be as simple as "turn the heater on" or "turn the light on and set the color to red".

Throughout the history of graphical user interfaces, there have been many attempts to create graphical, drag and drop, programming environments.  Unfortunately, most fail when the programming goal rises above a certain threshold of complexity.  From the perspective of a programmer, that threshold is depressingly low.  As such, the Things Gateway Rules System doesn't try to exceed that threshold and is suitable only for simple rules with a restricted compound predicate and a set of actions.  Other more complex programming constructs such as loops, variables, and functions are intentionally omitted.

If a desired predicate/action is more complex than the Rules System GUI can do, there is always the option of falling back to the Web Thing API and any programming language that can speak HTTP and/or Web Sockets.  Some of my previous blog posts use that Web Thing API: see the Tide Light or Bonding Lights Together for examples.

Let's start with a simple example: we've got four Philips HUE light bulbs. We'll create a rule that designates bulb #1 as the leader and bulbs #2, #3, and #4 as followers.

We start by navigating to the rules page (≡ ⇒ Rules) and making a new rule by pressing the "+" button on the rules page.  Then drag and drop the first bulb to the left side of the screen.  This is where the predicates will live.  Then select the property "ON".   Notice that at the top of the screen in the red area, a sentence is forming: "If Philips HUE 01 is on, ???".  This is an English translation of the selections that you've made to create your rule.  As you create your rule, use this sentence as a sanity check to make sure that your rule does what you want it to do.

Next, drag each of the three other lights on the right half of the screen and select their "ON" properties.

Notice how the sentence in the upper area changes to read out the rule in an understandable sentence.

Finally, give your rule a name.  I'm choosing "01 leads, others follow".  Make sure you hit "Enter" after typing the name. 

Now click the back arrow to return to the rules page.  Then return to the "Things" page (≡ ⇒ Things). Turn on "Philips HUE 01" by clicking on the bulb.  All four of the bulbs will light up.

Now click on the "Philips HUE 01" bulb again to turn the light off and watch what happens.

The other lights stayed on.  If you've used older versions of the Things Gateway rules, this will surprise you.  With the latest release (0.5.x), there are now two types of rules: "If" rules and "While" rules.  The "If" rules are single-shot: if the predicate is true, do the action once.  There is no automatic undo when the predicate is no longer true.

"While" rules, on the other hand, will automatically undo the action when the predicate is no longer true.  This can be best understood by reading the rule sentence out loud and imagine giving it as a command to a servant.  "If light 01 is on, turn on the other lights" implies doing a single thing with no follow up.  A "While" rule, though, implies sticking around to undo the action when the predicate is no longer true.   Say it out loud and the difference becomes clear immediately.  Paraphrasing: "While light 01 is on, turn on the other lights".  The word "While" implies passing time.

The Things Gateway rules system can do both kinds of rules.  Let's go back and make our rule into a "While" rule.   Return to the Rules page (≡ ⇒ Rules) and move your mouse over the rule then press the "Edit Rule" Button.

Take a close look at the sentence at the top of the screen.  The symbol under the word "If" is an indication of a word selection drop down menu.  Click on the word "If" and you'll see that you can change the word "If" to "While".  Do it.

Exit from the rule and go back to Things page.  Turn all the lights off.  Then turn on the leader light, in my case, "Philips HUE 01".  All the lights turn on.  Turn off the leader, and the action is undone: the rest of the lights go off.

Here's a video demonstrating the difference in behavior between the "If" and "While" forms of the rules.

Earlier, I stated that the Things Gateway Rule System doesn't try to exceed the complexity threshold where visual programming paradigms start to fail.  However, the system as it stands right now is not without some troubles.

Consider a rule that uses the clock in the predicate.  That could result in a rule that reads like this: "If the time of day is 10:00, turn Philips HUE 01 on".  The interpretation of this is straightforward.

However, what if you change the "If" to "While"?  "While the time of day is 10:00, turn Philips HUE 01 on."  Since the resolution of the clock is only to the minute, the clock considers it to be 10:00 for sixty seconds.  The light stays on for one minute.  It is not particularly useful to use the clock with the "While" form of a rule.   The Rule System needs some sort of timer object so a duration can be set on a rule.

How would you make a rule to turn a light on at dusk and then turn it off at 11PM?  Currently, the clock does not support the concepts of dawn and dusk, so that rule just can't be done within the Things Gateway.  However, with some programming, it would be possible to add a NightThing that could accomplish the task.

In many of these blog posts, I predict what I'm going to talk about in the next posting.  I've got a completely rotten record of actually following through with my predictions.  However, I hope in my next posting to write about how to implement the rules above using the Web Things API and the Python language.

Firefox Test PilotAutoFill your passwords with Firefox Lockbox in iOS

Today Firefox Lockbox 1.3 gives you the ability to automatically fill your username and password into apps and websites. This is available to anyone running the latest iOS 12 operating system.

How do I set it up?

If you just downloaded Firefox Lockbox, you’ll start with a screen which includes “Set Up Autofill”, which takes you directly to your device settings.

Here you can select Firefox Lockbox to autofill logins for you. You also want to make sure that “AutoFill Passwords” is green and toggled on.

If you’re already using Firefox Lockbox, you can set Lockbox to autofill your logins by navigating through the device: Settings > Passwords & Accounts > AutoFill Passwords

While you’re here, unselect iCloud Keychain as an AutoFill provider. If you leave this enabled, it may be confusing when signing into apps and web forms.

If you haven’t yet signed in to Lockbox, you will be prompted to do so in order to authenticate the app to automatically fill passwords.

Your setup is now complete. You can now start using your saved logins in Lockbox.

NOTE: You can only have one third-party AutoFill provider enabled, in addition to iCloud Keychain.

How does it work?

When you need to log into an app or an online account in a browser, tap in one of the entry fields. This will display the username and password you have saved in Lockbox.

From there, you can tap the information to enter it into the app or website’s login form.

If you can’t find the saved login you need, tap on the key icon. Then select Lockbox. There you can see all the accounts you have saved and can choose your desired entry to populate the login form.

How do I know this is secure?

Every time you invoke Lockbox to fill a form, you will need to confirm your identity with either Face ID or Touch ID to enter a password. This is to ensure that you are in fact asking Lockbox to fill in the username and password and unlocking the app to do so.

Where can I autofill passwords?

You can now easily use a Firefox saved login to get into a third-party app like Twitter or Instagram. Or you can use those Firefox saved logins to fill in website forms. You may recognize this experience: until today, it was only available to iCloud Keychain users!

AutoFill your passwords with Firefox Lockbox in iOS was originally published in Firefox Test Pilot on Medium, where people are continuing the conversation by highlighting and responding to this story.

Hacks.Mozilla.OrgStreaming RNNs in TensorFlow

The Machine Learning team at Mozilla Research continues to work on an automatic speech recognition engine as part of Project DeepSpeech, which aims to make speech technologies and trained models openly available to developers. We’re hard at work improving performance and ease-of-use for our open source speech-to-text engine. The upcoming 0.2 release will include a much-requested feature: the ability to do speech recognition live, as the audio is being recorded. This blog post describes how we changed the STT engine’s architecture to allow for this, achieving real-time transcription performance. Soon, you’ll be able to transcribe audio at least as fast as it’s coming in.

When applying neural networks to sequential data like audio or text, it’s important to capture patterns that emerge over time. Recurrent neural networks (RNNs) are neural networks that “remember” — they take as input not just the next element in the data, but also a state that evolves over time, and use this state to capture time-dependent patterns. Sometimes, you may want to capture patterns that depend on future data as well. One of the ways to solve this is by using two RNNs, one that goes forward in time and one that goes backward, starting from the last element in the data and going to the first element. You can learn more about RNNs (and about the specific type of RNN used in DeepSpeech) in this article by Chris Olah.

Using a bidirectional RNN

The current release of DeepSpeech (previously covered on Hacks) uses a bidirectional RNN implemented with TensorFlow, which means it needs to have the entire input available before it can begin to do any useful work. One way to improve this situation is by implementing a streaming model: Do the work in chunks, as the data is arriving, so when the end of the input is reached, the model is already working on it and can give you results more quickly. You could also try to look at partial results midway through the input.

This animation shows how the data flows through the network. Data flows from the audio input to feature computation, through three fully connected layers. Then it goes through a bidirectional RNN layer, and finally through a final fully connected layer, where a prediction is made for a single time step.

In order to do this, you need to have a model that lets you do the work in chunks. Here’s the diagram of the current model, showing how data flows through it.

As you can see, on the bidirectional RNN layer, the data for the very last step is required for the computation of the second-to-last step, which is required for the computation of the third-to-last step, and so on. These are the red arrows in the diagram that go from right to left.

We could implement partial streaming in this model by doing the computation up to layer three as the data is fed in. The problem with this approach is that it wouldn’t gain us much in terms of latency: Layers four and five are responsible for almost half of the computational cost of the model.

Using a unidirectional RNN for streaming

Instead, we can replace the bidirectional layer with a unidirectional layer, which does not have a dependency on future time steps. That lets us do the computation all the way to the final layer as soon as we have enough audio input.

With a unidirectional model, instead of feeding the entire input in at once and getting the entire output, you can feed the input piecewise. Meaning, you can input 100ms of audio at a time, get those outputs right away, and save the final state so you can use it as the initial state for the next 100ms of audio.
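Why this works can be shown with a toy recurrent cell in NumPy (illustrative only, not DeepSpeech code): running the whole sequence at once and running it chunk by chunk produce the same final state, as long as the state is carried across chunks:

```python
import numpy as np

rng = np.random.default_rng(0)
W_x = rng.standard_normal((4, 4))   # input weights (toy values)
W_h = rng.standard_normal((4, 4))   # recurrent weights (toy values)

def step(state, x):
    # one recurrent step: the new state depends on the input and the old state
    return np.tanh(x @ W_x + state @ W_h)

def run(xs, state):
    for x in xs:
        state = step(state, x)
    return state

xs = rng.standard_normal((10, 4))   # ten "frames" of features
state0 = np.zeros(4)

full = run(xs, state0)                       # the whole sequence at once
chunked = run(xs[5:], run(xs[:5], state0))   # two chunks, carrying the state
assert np.allclose(full, chunked)
```

The saved state plays exactly the role of `previous_state_c`/`previous_state_h` in the inference graph below.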

An alternative architecture that uses a unidirectional RNN in which each time step only depends on the input at that time and the state from the previous step.

Here’s code for creating an inference graph that can keep track of the state between each input window:

import tensorflow as tf

def create_inference_graph(batch_size=1, n_steps=16, n_features=26, width=64):
    input_ph = tf.placeholder(dtype=tf.float32,
                              shape=[batch_size, n_steps, n_features],
                              name='input')
    sequence_lengths = tf.placeholder(dtype=tf.int32,
                                      shape=[batch_size],
                                      name='input_lengths')
    previous_state_c = tf.get_variable(dtype=tf.float32,
                                       shape=[batch_size, width],
                                       name='previous_state_c',
                                       trainable=False)
    previous_state_h = tf.get_variable(dtype=tf.float32,
                                       shape=[batch_size, width],
                                       name='previous_state_h',
                                       trainable=False)
    previous_state = tf.contrib.rnn.LSTMStateTuple(previous_state_c, previous_state_h)

    # Transpose from batch major to time major
    input_ = tf.transpose(input_ph, [1, 0, 2])

    # Flatten time and batch dimensions for feed forward layers
    input_ = tf.reshape(input_, [batch_size*n_steps, n_features])

    # Three ReLU hidden layers
    layer1 = tf.contrib.layers.fully_connected(input_, width)
    layer2 = tf.contrib.layers.fully_connected(layer1, width)
    layer3 = tf.contrib.layers.fully_connected(layer2, width)

    # Restore the time dimension: the fused LSTM expects time-major input
    layer3 = tf.reshape(layer3, [n_steps, batch_size, width])

    # Unidirectional LSTM
    rnn_cell = tf.contrib.rnn.LSTMBlockFusedCell(width)
    rnn, new_state = rnn_cell(layer3,
                              initial_state=previous_state,
                              sequence_length=sequence_lengths)
    new_state_c, new_state_h = new_state

    # Final hidden layer
    layer5 = tf.contrib.layers.fully_connected(rnn, width)

    # Output layer (ALPHABET_SIZE is the number of characters the model can emit)
    output = tf.contrib.layers.fully_connected(layer5, ALPHABET_SIZE+1, activation_fn=None)

    # Automatically update previous state with new state
    state_update_ops = [
        tf.assign(previous_state_c, new_state_c),
        tf.assign(previous_state_h, new_state_h)
    ]
    with tf.control_dependencies(state_update_ops):
        logits = tf.identity(output, name='logits')

    # Create state initialization operations
    zero_state = tf.zeros([batch_size, width], tf.float32)
    initialize_c = tf.assign(previous_state_c, zero_state)
    initialize_h = tf.assign(previous_state_h, zero_state)
    initialize_state = tf.group(initialize_c, initialize_h, name='initialize_state')

    return {
        'inputs': {
            'input': input_ph,
            'input_lengths': sequence_lengths,
        },
        'outputs': {
            'output': logits,
            'initialize_state': initialize_state,
        },
    }

The graph created by the code above has two inputs and two outputs. The inputs are the sequences and their lengths. The outputs are the logits and a special “initialize_state” node that needs to be run at the beginning of a new sequence. When freezing the graph, make sure you don’t freeze the state variables previous_state_h and previous_state_c.

Here’s code for freezing the graph:

from tensorflow.python.tools import freeze_graph

# Sketch of the freezing step: session, saver, checkpoint_path and
# output_graph_path are assumed to come from your training setup. The
# blacklist keeps the state variables out of the frozen graph.
freeze_graph.freeze_graph_with_def_protos(
    input_graph_def=session.graph_def,
    input_saver_def=saver.as_saver_def(),
    input_checkpoint=checkpoint_path,
    output_node_names='logits,initialize_state',
    restore_op_name=None,
    filename_tensor_name=None,
    output_graph=output_graph_path,
    clear_devices=False,
    initializer_nodes='',
    variable_names_blacklist='previous_state_c,previous_state_h')

With these changes to the model, we can use the following approach on the client side:

  1. Run the “initialize_state” node.
  2. Accumulate audio samples until there’s enough data to feed to the model (16 time steps in our case, or 320ms).
  3. Feed through the model, accumulate outputs somewhere.
  4. Repeat 2 and 3 until data is over.

It wouldn’t make sense to drown readers with hundreds of lines of the client-side code here, but if you’re interested, it’s all MPL 2.0 licensed and available on GitHub. We actually have two different implementations, one in Python that we use for generating test reports, and one in C++ which is behind our official client API.

Performance improvements

What does this all mean for our STT engine? Well, here are some numbers, compared with our current stable release:

  • Model size down from 468MB to 180MB
  • Time to transcribe a 3s file on a laptop CPU: down from 9s to 1.5s
  • Peak heap usage down from 4GB to 20MB (model is now memory-mapped)
  • Total heap allocations down from 12GB to 264MB

Of particular importance to me is that we’re now faster than real time without using a GPU, which, together with streaming inference, opens up lots of new usage possibilities like live captioning of radio programs, Twitch streams, and keynote presentations; home automation; voice-based UIs; and so on. If you’re looking to integrate speech recognition in your next project, consider using our engine!

Here’s a small Python program that demonstrates how to use libSoX to record from the microphone and feed it into the engine as the audio is being recorded.

import argparse
import deepspeech as ds
import numpy as np
import shlex
import subprocess
import sys

parser = argparse.ArgumentParser(description='DeepSpeech speech-to-text from microphone')
parser.add_argument('--model', required=True,
                    help='Path to the model (protocol buffer binary file)')
parser.add_argument('--alphabet', required=True,
                    help='Path to the configuration file specifying the alphabet used by the network')
parser.add_argument('--lm', nargs='?',
                    help='Path to the language model binary file')
parser.add_argument('--trie', nargs='?',
                    help='Path to the language model trie file created with native_client/generate_trie')
args = parser.parse_args()

# Decoder settings; N_FEATURES and N_CONTEXT match the model, and the
# weight values besides LM_WEIGHT are illustrative defaults
N_FEATURES = 26
N_CONTEXT = 9
BEAM_WIDTH = 512
LM_WEIGHT = 1.50
WORD_COUNT_WEIGHT = 1.00
VALID_WORD_COUNT_WEIGHT = 1.00

print('Initializing model...')

model = ds.Model(args.model, N_FEATURES, N_CONTEXT, args.alphabet, BEAM_WIDTH)
if args.lm and args.trie:
    model.enableDecoderWithLM(args.alphabet, args.lm, args.trie,
                              LM_WEIGHT, WORD_COUNT_WEIGHT, VALID_WORD_COUNT_WEIGHT)
sctx = model.setupStream()

subproc = subprocess.Popen(shlex.split('rec -q -V0 -e signed -L -c 1 -b 16 -r 16k -t raw - gain -2'),
                           stdout=subprocess.PIPE,
                           bufsize=0)
print('You can start speaking now. Press Control-C to stop recording.')

try:
    while True:
        data =
        model.feedAudioContent(sctx, np.frombuffer(data, np.int16))
except KeyboardInterrupt:
    print('Transcription:', model.finishStream(sctx))
    subproc.terminate()
    subproc.wait()

Finally, if you’re looking to contribute to Project DeepSpeech itself, we have plenty of opportunities. The codebase is written in Python and C++, and we would love to add iOS and Windows support, for example. Reach out to us via our IRC channel or our Discourse forum.

The post Streaming RNNs in TensorFlow appeared first on Mozilla Hacks - the Web developer blog.

The Mozilla BlogExplore the immersive web with Firefox Reality. Now available for Viveport, Oculus, and Daydream

Earlier this year, we shared that we are building a completely new browser called Firefox Reality. The mixed reality team at Mozilla set out to build a web browser that has been designed from the ground up to work on stand-alone virtual and augmented reality (or mixed reality) headsets. Today, we are pleased to announce that the first release of Firefox Reality is available in the Viveport, Oculus, and Daydream app stores.

At a time when people are questioning the impact of technology on their lives and looking for leadership from independent organizations like Mozilla, Firefox Reality brings to the 3D web and to immersive content experiences the ease of use, choice, control, and privacy that people have come to expect from Firefox.

But for us, the ability to enjoy the 2D web is just table stakes for a VR browser. We built Firefox Reality to move seamlessly between the 2D web and the immersive web.

Designed from the virtual ground up

The Mixed Reality team here at Mozilla has invested a significant amount of time, effort, and research into figuring out how we can design a browser for virtual reality:

We had to rethink everything, including navigation, text-input, environments, search and more. This required years of research, and countless conversations with users, content creators, and hardware partners. The result is a browser that is built for the medium it serves. It makes a big difference, and we think you will love all of the features and details that we’ve created specifically for a MR browser.
– Andre Vrignaud, Head of Mixed Reality Platform Strategy at Mozilla


Among these features is the ability to search the web using your voice. Text input is still a chore for virtual reality, and this is a great first step towards solving that. With Firefox Reality you can choose to search using the microphone in your headset.

Content served fresh


We spent a lot of time talking to early VR headset owners. We asked questions like: “What is missing?” “Do you love your device?” And “If not, why?” The feedback we heard the most was that users were having a hard time finding new games and experiences. This is why we built a feed of amazing content into the home screen of Firefox Reality.
– Andre Vrignaud, Head of Mixed Reality Platform Strategy at Mozilla


From the moment you open the browser, you will be presented with immersive experiences that can be enjoyed on a VR headset directly from the Firefox Reality browser. We are working with creators around the world to bring an amazing collection of games, videos, environments, and experiences that can be accessed directly from the home screen.

A new dimension of Firefox

We know a thing or two about making an amazing web browser. Firefox Reality is using our new Quantum engine for mobile browsers. The result is smooth and fast performance that is crucial for a VR browser. We also take things like privacy and transparency very seriously. As a company, we are dedicated to fighting for your right to privacy on the web. Our values have guided us through this creation process, just as they do with every product we build.

We are just getting started

We are in this for the long haul. This is version 1.0 of Firefox Reality, and version 1.1 is right around the corner. We have an always-growing list of ideas and features that we are working to add to make this the best browser for mixed reality. We will also be listening and reacting quickly when we need to provide bug fixes and other minor updates.

If you notice a few things are missing (“Hey! Where are the bookmarks?”), just know that we will be adding features at a steady pace. In the coming months, we will be adding support for bookmarks, 360 videos, accounts, and more. We intend to quickly prove our commitment to this product and our users.

Built in the open

Here at Mozilla, we make it a habit to work in the open because we believe in the power of transparency, community, and collaboration. If you have an idea or a bug report, or even if you just want to geek out, we would love to hear from you. You can follow @mozillareality on Twitter, file an issue on GitHub, or visit our support site.

Calling all creators

Are you creating immersive content for the web? Have you built something using WebVR? We would love to connect with you about featuring those experiences in Firefox Reality. Are you building a mixed reality headset that needs a best-in-class browser? Let’s chat.

Firefox Reality is available right now.

Download for Oculus
(supports Oculus Go)

Download for Daydream
(supports all-in-one devices)

Download for Viveport (Search for “Firefox Reality” in Viveport store)
(supports all-in-one devices running Vive Wave)

The post Explore the immersive web with Firefox Reality. Now available for Viveport, Oculus, and Daydream appeared first on The Mozilla Blog.

This Week In RustThis Week in Rust 252

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is mtpng, a parallelized PNG encoder. Thanks to Willi Kappler for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

131 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs

New RFCs

Upcoming Events

North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Sometimes bad designs will fail faster in Rust

Catherine West @ Rustconf.

Thanks to kornel for the suggestion!

Please submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and Flavsditz.

Discuss on r/rust.

Mozilla Security BlogSeptember 2018 CA Communication

Mozilla has sent a CA Communication to inform Certification Authorities (CAs) who have root certificates included in Mozilla’s program about current events relevant to their membership in our program and to remind them of upcoming deadlines. This CA Communication has been emailed to the Primary Point of Contact (POC) and an email alias for each CA in Mozilla’s program, and they have been asked to respond to the following 7 action items:

  1. Mozilla recently published version 2.6.1 of our Root Store Policy. The first action confirms that CAs have read the new version of the policy.
  2. The second action asks CAs to ensure that their CP/CPS complies with the changes that were made to domain validation requirements in version 2.6.1 of Mozilla’s Root Store Policy.
  3. CAs must confirm that they will comply with the new requirement for intermediate certificates issued after 1-January 2019 to be constrained to prevent use of the same intermediate certificate to issue both SSL and S/MIME certificates.
  4. CAs are reminded in action 4 that Mozilla is now rejecting audit reports that do not comply with section 3.1.4 of Mozilla’s Root Store Policy.
  5. CAs must confirm that they have complied with the 1-August 2018 deadline to discontinue use of BR domain validation methods 1 “Validating the Applicant as a Domain Contact” and 5 “Domain Authorization Document”.
  6. CAs are reminded of their obligation to add new intermediate CA certificates to CCADB within one week of certificate creation, and before any such subordinate CA is allowed to issue certificates. Later this year, Mozilla plans to begin preloading the certificate database shipped with Firefox with intermediate certificates disclosed in the CCADB, as an alternative to “AIA chasing”. This is intended to reduce the incidence of “unknown issuer” errors caused by server operators neglecting to include intermediate certificates in their configurations.
  7. In action 7 we are gathering information about the Certificate Transparency (CT) logging practices of CAs. Later this year, Mozilla is planning to use CT logging data to begin testing a new certificate validation mechanism called CRLite which may reduce bandwidth requirements for CAs and increase performance of websites. Note that CRLite does not replace OneCRL which is a revocation list controlled by Mozilla.

The full action items can be read here. Responses to the survey will be automatically and immediately published by the CCADB.

With this CA Communication, we reiterate that participation in Mozilla’s CA Certificate Program is at our sole discretion, and we will take whatever steps are necessary to keep our users safe. Nevertheless, we believe that the best approach to safeguard that security is to work with CAs as partners, to foster open and frank communication, and to be diligent in looking for ways to improve.

The post September 2018 CA Communication appeared first on Mozilla Security Blog.

Mozilla VR BlogRemote Debugging Firefox Reality

Remote Debugging Firefox Reality

You can debug your web pages running in Firefox Reality remotely over USB from your computer using the latest release of Firefox for Windows, Mac, or Linux.

To set up debugging you will need to do a couple of things. Don’t worry, it’s easy.

  • Install the Android command line tools
  • Turn on debugging in Firefox Reality on your VR device
  • Turn on debugging in Firefox on your desktop computer

Install the Android command line tools

First you will need adb, the Android command line debugging tool. There are many ways to get it. The command line tools come with the full install of the Android Developer Suite, but if you only need the command line tools you can install just those. If you are on a Mac, follow these instructions to install it using Homebrew. However you get adb, you should be able to run it on the command line to see your headset, like this:

MozMac:~ josh$ adb devices
List of devices attached
1KWPH813E48106    device

Turn on Firefox Reality Debugging

After you have installed Firefox Reality on your VR device, launch it and click the gear icon to open the settings dialog.

Remote Debugging Firefox Reality

Then click the Developer Options button to open the developer settings, then turn on remote debugging with the toggle switch.

Remote Debugging Firefox Reality

Turn on Debugging in Firefox for Desktop

To set up debugging on your development computer, you must have the latest version of Firefox for Developers installed or the latest Nightly build.

Open the developer console then select the settings menu option.

Remote Debugging Firefox Reality

Remote Debugging Firefox Reality

Scroll down to advanced settings and turn on the browser chrome and remote debugging options.

Remote Debugging Firefox Reality

You should already have the ADB Helper addon installed. You can check using the about:addons page. If it’s not there, install it by searching for ADB Helper on the addons page.

Remote Debugging Firefox Reality

Now open the WebIDE. You should see the connected device in the upper right corner:

Remote Debugging Firefox Reality

Remote Debugging Firefox Reality

Now you can select the device and connect to the main Firefox Reality process. Underneath that are entries for each live tab where you can view the console, take screenshots, and get performance information.

Remote Debugging Firefox Reality

Remote Debugging Firefox Reality

That’s it. Debugging pages in Firefox Reality is easy. Once you are connected, the experience is much like debugging regular desktop pages. To learn more about how to make web content for Firefox Reality, check out the Developer’s Guide.

Emily DunhamCFP tricks 1

CFP tricks 1

Or, “how to make a selection committee do one of the hard parts of your job as a speaker for you”. For values of “hard parts” that include fine-tuning your talk for your audience.

I’m giving talk advice to a friend today, which means I’m thinking about talk advice and realizing it’s applicable to lots of speakers, which means I’m writing a blog post.

Why choosing an audience is hard

Deciding who you’re speaking to is one of the trickiest bits of writing an abstract, because a good abstract is tailored to bring in people who will be interested in and benefit from your talk. One of the reasons that it’s extra hard for a speaker to choose the right audience, especially at a conference that they haven’t attended before, is because they’re not sure who’ll be at the conference or track.

Knowing your audience lets you write an abstract full of relevant and interesting questions that your talk will answer. Not only do these questions show that you can teach your subject matter, but they’re an invaluable resource for assessing your own slides to make sure your talk delivers everything that you promised it would!

Tricks for choosing an audience

Some strategies I’ve recommended in the past for dealing with this include looking at the conference’s marketing materials to imagine who they would interest, and examining the abstracts of past years’ talks.

Make the committee choose by submitting multiple proposals

Once you narrow down the possible audiences, a good way to get the right talk in is to offload the final choice onto the selection committee! A classic example is to offer both a “Beginner” and an “Advanced” talk, on the same topic, so that the committee can pick whichever they think will be a better fit for the audience they’re targeting and the track they choose to schedule you for.

If the CFP allows notes to the committee, it can be helpful to add a note about how your talks are different, especially if their titles are similar: “This is an introduction to Foo, whereas my other proposal is a deep dive into Foo’s Bar and Baz”.

Use the organizers’ own words

I always encourage resume writers to use the same buzzwords as the job posting to which they’re applying when possible. This shows that you’re paying attention, and makes it easy for readers to tell that you meet their criteria.

In the same way, if your talk concept ties into a buzzword that the organizers have used to describe their conference, or directly answers a question that their marketing materials claim the conference will answer, don’t be afraid to repeat those words!

When choosing between several possible talk titles, keep in mind that any jargon you use can show off your ability, or lack thereof, to relate to the conference’s target audience. For instance, a talk with “Hacking” in the title may be at an advantage in an infosec conference but at a disadvantage in a highly professional corporate conf. Another example is that spinning my Rust Community Automation talk to “Life is Better with Rust’s Community Automation” worked great for a conference whose tagline and theme was “Life is Better with Linux”, but would not have been as successful elsewhere.

Good luck with your talks!

Daniel StenbergThe world’s biggest curl installations

curl is quite literally used everywhere. It is used by a huge number of applications and devices. But which applications, devices and users are the ones with the largest number of curl installations? I've tried to come up with a list...

I truly believe curl is one of the world's most widely used open source projects.

If you have comments, other suggestions or insights to help me polish this table or the numbers I present, please let me know!

Some that didn't make the top-10

10 million Nintendo Switch game consoles all use curl, more than 20 million Chromebooks have been sold and they have curl as part of their bundled OS and there's an estimated 40 million printers (primarily by Epson and HP) that aren't on the top-10. To reach this top-list, we're looking at 50 million instances minimum...

10. Internet servers: 50 million

There are many (mainly Linux) servers on the Internet. curl and libcurl come pre-installed on some Linux distributions, and where they don't, most users and sysadmins install them. My estimate says there are few such servers out there without curl on them.

This source says there were 75 million servers "hosting the Internet" back in 2013.

curl is a default HTTP provider for PHP, and a huge percentage of the world's web sites run at least partly on PHP.

9. Sony Playstation 4: 75 million

curl comes bundled with the operating system on this game console. Or rather libcurl, I would expect. Sony says 75 million units have been sold.

curl is given credit on the screen "Open Source software used in the Playstation 4".

8. Netflix devices: 90 million

I've been informed by "people with knowledge" that libcurl runs on all Netflix's devices that aren't browsers. Some stats listed on the Internet say 70% of the people watching Netflix do so on their TVs, which I've interpreted as possible non-browser use. 70% of the total 130 million Netflix customers makes 90 million.

libcurl is not used for the actual streaming of the movie, but for the UI and things.

7. Grand Theft Auto V: 100 million

The very long and rarely watched ending sequence to this game does indeed credit libcurl. It has also been recorded as having been sold in 100 million copies.

There's some uncertainty as to whether libcurl is used in this game on all platforms GTA V runs on; if it is not, this number could be somewhat lower.

6. macOS machines: 100 million

curl has shipped as a bundled component of macOS since August 2001. In April 2017, Apple's CEO Tim Cook said that there were 100 million active macOS installations.

Now, that statement was made a while ago, but I don't have any reason to suspect that the number has gone down notably, so I'm using it here. No Macs ship without curl!

5. cars: 100 million

I wrote about this in a separate blog post. Eight of the top-10 most popular car brands in the world use curl in their products. All in all I've found curl used in over twenty car brands.

Based on that, rough estimates say that there are over 100 million cars in the world with curl in them today. And more are coming.

4. Fortnite: 120 million

This game is made by Epic Games and credits curl in their Third Party Software screen.

In June 2018, they claimed 125 million players. Now, I suppose a bunch of these players might not actually have their own separate device, but I still believe that is the regular setup for most people. You play it on your own console, phone or computer.

3. Television sets: 380 million

We know curl is used in television sets made by Sony, Philips, Toshiba, LG, Bang & Olufsen, JVC, Panasonic, Samsung and Sharp - at least.

The world market was around 229 million television sets sold in 2017, and about 760 million TVs are connected to the Internet. Counting on curl running in 50% of the connected TVs (which I think is a fair estimate) makes 380 million devices.

2. Windows 10: 500 million

For a while now, Windows 10 has shipped curl bundled by default. I presume most Windows 10 installations actually stay fairly updated, so over time most of the install base will run a version that bundles curl.

In May 2017, one number said 500 million Windows 10 machines.

1. Smart phones: 3000 million

I posit that there are almost no smartphones or tablets in the world that don't run curl.

curl is bundled with the iOS operating system so all iPhones and iPads have it. That alone is about 1.3 billion active devices.

curl is bundled with the Android version that Samsung, Xiaomi and OPPO ship (and possibly a few other flavors too). According to some sources, Samsung has something like 30% market share, and Apple around 20% - for mobile phones. Another one billion devices seems like a fair estimate.

Further, curl is used by some of the most used apps on phones: YouTube, Instagram, Skype, Spotify, etc. The first three all boast more than one billion users each, and in YouTube's case it also claims more than one billion app downloads on Android. I think it's a safe bet that these together cover another 700 million devices. Possibly more.

Same users, many devices

Of course we can't just sum up all these numbers and reach a total number of "curl users". The fact is that a lot of these curl instances are used by the same users. With a phone, a game console, a TV, and more, an ordinary netizen runs numerous different curl instances in their daily lives.


Did I ever expect this level of success? No.


The Servo BlogThese Months In Servo 113

In the past 1.5 months, we merged 439 PRs in the Servo organization’s repositories.

Planning and Status

Our roadmap is available online, including the overall plans for 2018.

This week’s status updates are here.

Exciting Work in Progress

Notable Additions

  • derekdreery documented many parts of the html5ever crate.
  • gterzian implemented the API.
  • jdm fixed an ipc-channel bug on macOS that limited the maximum payload.
  • asajeffrey upgraded SpiderMonkey from version 50 to 60.
  • ferjm implemented part of the WebAudio API on top of GStreamer.
  • gterzian made it possible to cancel in-progress document loads.
  • nox prevented WebGL objects from different contexts from being used interchangeably.
  • paavininanda and nupurbaghel implemented responsive image support for environment changes.
  • jdm made several improvements to the specification conformance of WebGL framebuffers and renderbuffers.
  • pyfisch removed a significant amount of duplication between the Servo and WebRender display lists.
  • eijebong reenabled support for secure websockets by switching to the ws crate.
  • paulrouget fixed the app suspension behaviour on Android.
  • manish implemented the AudioListener and AudioParameter APIs for WebAudio.
  • nupurbaghel corrected the implementation of HTMLImageElement.currentSrc.
  • gterzian implemented targeted task throttling for certain kinds of low priority events.
  • JacksonCoder made file: URLs read the target file in chunks rather than all at once.
  • Manishearth added a bootstrap command to automatically prepare a Linux environment for building Servo.
  • nox corrected various texture conversion algorithms for WebGL.
  • gterzian replaced the deprecated channel selection API with the crossbeam-channel crate.

New Contributors

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!

Hacks.Mozilla.OrgMDN Changelog for August 2018

Here’s what happened in August to the code, data, and tools that support MDN Web Docs:

Here’s the plan for September:

Done in August

Migrated 95% of compatibility data

The MDN content team prioritized reviewing and merging Browser Compatibility Data pull requests (PRs) in August, and met their goal of getting the open PRs to less than 50. The team reviewed and merged 85 PRs that were open at the start of the month, including a schema change to catch duplicate identifiers (PR 1415) from Dominique Hazael-Massieux. The team also merged 123 PRs that were opened during the month, including Visual Studio Code configurations for BCD editing (PR 2498) from ExE Boss.

A lot of these were migration PRs, and the migration is now 95% complete, with 10,000 features over 6,300 pages. Some of the remaining migration work will be straightforward. Other data sources will require strategy and format discussions, such as Event support and summary pages. These discussions will be easier with the experience of migrating thousands of simpler features.

Existing data also got some love. Contributors fixed incorrect data, clarified if and when a browser supported a feature, and celebrated support in new browser releases. We expect a steady stream of maintenance PRs as the project transitions from migration to ongoing maintenance.

Florian Scholz has worked to make this a community project, organizing the effort with spreadsheets and transitioning to issues as the remaining work becomes manageable. This has been a successful effort, and GitHub insights shows that most contributions were not from MDN staff.

Bar chart of top contributors, mentioned by name and count below

Top BCD contributors for August 2018

Thanks to ExE Boss (24 PRs), Connor Shea (23 PRs), Claas Augner (18 PRs), David Ross (17 PRs), Lucian Condrea (13 PRs), Joe Medley (8 PRs), and all our contributors, and thanks to the staff and tool builders that keep the review queue moving!

Improved performance and experience

Tim Kadlec audited MDN in July, and created performance metrics and goals, as well as recommending changes. In August, we started implementing these changes. Schalk Neethling improved the load time for the homepage by optimizing the hero image (PR 4903) and removing a section with an image (PR 4912). Ryan Johnson automated recording deployments and re-calculating metrics with Speedcurve (PR 4902). We’ll continue working on performance in the coming months.

Previously, if you wanted to link to a section in a page, such as MDN’s advice on why you should use labels for <input> elements, you had to use the Developer Tools to get the section ID. Schalk added section-level anchor links (PR 4901), so that you can quickly grab the link and paste it into a code review:

A chain link icon next to section titles links to that section

The new section links on MDN

Maintained the platform

Anthony Maton is switching Kuma to Python 3. Our memcached library hasn’t been updated for Python 3, and instead of a library swap, Anthony simplified the caching configuration and switched to Redis (PR 4870). He continues to make incremental changes (PR 4899) to get to a shared Python 2 / Python 3 codebase, with a goal of switching to Python 3 by the end of the year.

I completed the ElasticSearch 5.6 update, which was harder than expected. The update from 1.7 to 2.4 only required updating the servers (PR 4192), and didn’t even merit a mention in the April 2017 report. ElasticSearch no longer provides libraries that span major versions, so the upgrade from 2.4 to 5.6 required updating the client libraries, the Kuma code that uses them (PR 4906), and the server (PR 4904), all at the same time. This update included some minor fixes, and search with 5.x appears faster, but site search still needs a lot of work. The next update, to ElasticSearch 6.x, will be in March 2019.

Ryan Johnson is continuing the work of migrating from MozMEAO to Mozilla IT support. Ed Lim provisioned the new Kubernetes cluster (PR 24) and backing services (PR 31), with support from Dave Parfitt and Josh Mize. Ryan configured the new Jenkins server to run parallel tests and deployments (PR 4931), and to publish Docker images to a new repository (PR 4933). We’re now deploying to both the MozMEAO staging environment and the MozIT staging environment.

We’ll continue with production and disaster-recovery environments in September, and prioritize the infrastructure issues. The goal is to switch traffic in October.

Shipped tweaks and fixes

There were 400 PRs merged in August:

This includes some important changes and fixes:

78 pull requests were from first-time contributors:

Planned for September

In September, we’ll continue working on new and improved interactive examples, converting compatibility data, migrating MDN services, and other long-term projects.

Hack on accessibility

We’re happy with the results of the Paris Hack on MDN event in March, and are doing it again in September. MDN staff will meet in London for a week of meetings and 2019 planning, and then have the fourth Hack on MDN event, focusing on accessibility. We plan to write docs, build tools, and explore ways to help web developers make the internet more accessible for all users.

Ship more performance improvements

We’ll continue working on the suggested performance improvements, to meet the performance goals for the year.

One area for improvement is optimizing MDN’s use of custom web fonts. These fonts often need to be downloaded, increasing page load time. Some plugins and clients, like Firefox Focus, improve the mobile experience by blocking these by default. Our goal is to improve the experience for desktop users by downloading optimized fonts after the initial page load, and avoiding required custom fonts like FontAwesome for icons.

Another focus is Interactive Examples, which are useful but have a large impact on page load time. James Hobin is working through the requirements to load the examples directly into the page, rather than via an <iframe>. Schalk is improving the asset builder for new features and for optimized asset building.

The post MDN Changelog for August 2018 appeared first on Mozilla Hacks - the Web developer blog.

Dave HuntEuroPython 2018

In July I took the train up to beautiful Edinburgh to attend the EuroPython 2018 conference. Despite using Python professionally for almost 8 years, this was my first experience of a Python conference. The schedule was packed, and it was challenging to decide which talks to attend, but I had a great time and enjoyed the strong community feeling of the event. We even went for a group run around Holyrood Park and Arthur’s Seat, which I hope is included in the schedule for future years.

Now that the videos of the talks have all been published, I wanted to share my personal highlights, and list the talks I saw during and since the conference. I still haven’t caught up on everything I wanted to see, so I’ve also included my watch list. First, here’s the full playlist of talks from the conference.

Here are my top picks from the talks I either attended or have watched since:

I also wanted to highlight the following lightning talks:

Here is a list of the other talks I either attended at the conference or have watched since:

Here’s my list of talks I have yet to watch:

Were you at EuroPython 2018? Let me know if you have any favourite talks that aren’t already on my list! I’m keen to attend again next year, if my travel schedule allows for it.

Mozilla VR BlogFirefox Reality Developers Guide

Firefox Reality Developers Guide

Firefox Reality, Mozilla's VR web browser, is getting closer to release; so let's talk about how to make your experiences work well in this new browser.

Use a Framework with WebVR 1.1 Support

Building WebVR applications from scratch requires using WebGL, which is very low level. Most developers use some sort of a library, framework, or game engine to do the heavy lifting for them. These are some commonly used libraries that support WebVR.


three.js

As of June 2018, three.js has new and improved WebVR support. It should just work. See these official examples of how to use it.


A-Frame

A-Frame is a framework built on top of three.js that lets you build VR scenes using an HTML-like syntax. It is the best way to get started with VR if you have never used it before.


Babylon.js

Babylon.js is an open source, high-performance 3D engine for the web. Since version 2.5 it has had full WebVR support. This doc explains how to use the WebVRFreeCamera class.

Amazon Sumerian

Amazon’s online Sumerian tool lets you easily build VR and AR experiences, and obviously supports WebVR out of the box.


PlayCanvas

PlayCanvas is a web-first game engine, and it supports WebVR out of the box.

Existing WebGL applications

If you have an existing WebGL application you can easily add WebVR support. This blog covers the details.

No matter what framework you use, make sure it supports the WebVR 1.1 API, not the newer WebXR API. WebXR will eventually be a full web standard but no browser currently ships with non-experimental support. Use WebVR for now, and in the future a polyfill will make sure existing applications continue to work once WebXR is ready.

Optimize Like it’s the Mobile Web, Because it is.

Developing for VR headsets is just like developing for mobile. Though some VR headsets run attached to a desktop with a beefy graphics card, most users have a mobile device like a smartphone or a dedicated headset like the Oculus Go or Lenovo Mirage. Regardless of the actual device, rendering to a headset costs at least twice as much as a non-immersive experience, because everything must be rendered twice, once for each eye.

To help your VR application stay fast, keep the draw-call count to a minimum. The draw-call count matters far more than the total polygon count in your scene, though polygons are important as well. Drawing 10 polygons 100 times is far slower than drawing 100 polygons 10 times.

Lighting also tends to be expensive on mobile, so use fewer lights or cheaper materials. If a lambert or phong material will work just as well as a full PBR material (physically based rendering), go for the cheaper option.

Compress your 3D models as GLB files instead of glTF. Decompression time is about the same, but the download will be much faster. There are a variety of command-line tools to do this, or you can use this web-based tool by SBtron. Just drag in your files and get back a GLB.

Always use powers-of-2 texture sizes and try to keep textures under 1024 × 1024. Mobile GPUs don’t have nearly as much texture memory as desktops. Plus big textures just take a long time to download. You can often use 512 or 128 for things like bump maps and light maps.
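As a rough illustration of these constraints, here is a small hypothetical helper (not part of any framework; the names are illustrative) that checks whether a texture's dimensions follow the advice above:

```javascript
// Hypothetical helper: returns true when a texture's dimensions are
// powers of two and within a mobile-friendly maximum size.
function isPowerOfTwo(n) {
  return n > 0 && (n & (n - 1)) === 0;
}

function textureSizeOk(width, height, max = 1024) {
  return isPowerOfTwo(width) && isPowerOfTwo(height) &&
         width <= max && height <= max;
}
```

For example, a 512 × 512 bump map passes, while a 1000 × 1024 texture (not a power of two) or a 2048 × 2048 texture (too large) does not.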

For your 2D content, don’t assume any particular screen size or resolution. Use good responsive design practices that work well on a screen of any shape or size.

Lazy load your assets so that the initial experience is good. Most VR frameworks have a way of loading resources on demand. three.js uses the DefaultLoadingManager. Your goal is to get the initial screen up and running as quickly as possible. Keep the initial download to under 1MB if at all possible. A fast loading experience is one that people will want to come back to over and over.
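The general pattern is framework-independent. Here is a minimal sketch of queueing non-critical assets until the initial scene is up; the names `createLazyLoader`, `defer`, and `start` are illustrative, not a real framework API:

```javascript
// Illustrative sketch: collect asset URLs and defer loading them
// until start() is called, e.g. after the first frame has rendered.
function createLazyLoader(loadFn) {
  const queue = [];
  let started = false;
  return {
    defer(url) {
      // Before start(), just remember the URL; afterwards, load eagerly.
      if (started) loadFn(url); else queue.push(url);
    },
    start() {
      started = true;
      queue.splice(0).forEach(loadFn);
    },
  };
}
```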

Prioritize framerate over everything else. In VR having a high framerate and smooth animation matters far more than the details of your models. In VR the human eye (and ear) are more sensitive to latency, low framerates, skipped frames, and janky animation than on a 2D screen. Your users will be very tolerant of low-polygon models and less-realistic graphics as long as the experience is fun and smooth.

Don’t do browser sniffing. If you hardcode in detection for certain devices, your code may work today, but will break as soon as a new device comes to market. The WebVR ecosystem is constantly growing and changing and a new device is always around the corner. Instead check for what the VR API actually returns, or rely on your framework to do this for you.
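For example, rather than matching user-agent strings, probe for the API entry points themselves. This sketch (the function name is made up) prefers the newer WebXR `navigator.xr` object and falls back to WebVR’s `navigator.getVRDisplays`:

```javascript
// Hypothetical feature detection: check what the browser actually
// exposes instead of sniffing for device or browser names.
function detectVRSupport(nav) {
  if (nav.xr && typeof nav.xr.isSessionSupported === 'function') {
    return 'webxr';  // newer WebXR Device API
  }
  if (typeof nav.getVRDisplays === 'function') {
    return 'webvr';  // original WebVR 1.1 API
  }
  return 'none';     // fall back to a non-immersive experience
}
```

In a page you would call `detectVRSupport(navigator)` once at startup and pick a rendering path from the result.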

Assume the controls could be anything. Some headsets have a full 6DoF controller (six degrees of freedom). Some are 3DoF. Some are on a desktop using a mouse or on a phone with a touchscreen. And some devices have no input at all, relying on gaze-based interaction. Make sure your application works in any of these cases. If you aren’t able to make it work in certain cases (say, gaze-based won’t work), then try to provide a degraded-view-only experience instead. Whatever you do, don’t block viewers if the right device isn’t available. Someone who looks at your app might still have a headset, but just isn’t using it right now. Provide some level of experience for everyone.
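One way to structure this is to branch on reported capabilities rather than device names. WebVR’s `GamepadPose` exposes `hasPosition` and `hasOrientation` flags; the classifier below is a hypothetical sketch built on them:

```javascript
// Hypothetical input classifier: pick the interaction model from
// capabilities the API reports, not from a hardcoded device list.
function classifyInput(gamepad) {
  if (!gamepad) return 'gaze';                    // no controller: gaze-based UI
  const pose = gamepad.pose;
  if (pose && pose.hasPosition) return '6dof';    // full position + orientation
  if (pose && pose.hasOrientation) return '3dof'; // orientation only
  return 'pointer';                               // mouse/touch style input
}
```

Each returned mode maps to an interaction scheme your app supports, with gaze as the lowest common denominator.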

Never enter VR directly on page load unless coming from a VR page. On some devices the page is not allowed to enter VR without a user interaction. On other devices audio may require user interaction as well. Always have some sort of a 2D splash page that explains where the user is and what the interaction will be, then have a big ‘Enter VR’ button to actually go into immersive mode. The exception is if the user is coming from another page and is already in VR. In this case you can jump right in. This A-Frame doc explains how it works.
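With the WebVR 1.1 API, the ‘Enter VR’ button boils down to calling `VRDisplay.requestPresent` from inside the click handler, so it counts as a user interaction. A minimal sketch (the wiring and names are illustrative, not from any framework):

```javascript
// Illustrative sketch: enter immersive mode only from a user gesture.
// `display` is a VRDisplay and `canvas` is the WebGL canvas to present.
function wireEnterVRButton(button, display, canvas) {
  button.addEventListener('click', () => {
    // requestPresent must be triggered by user interaction on many devices.
    display.requestPresent([{ source: canvas }]);
  });
}
```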

Use Remote Debugging on a Real Device

The real key to creating a responsive and fun WebVR experience is debugging on your desktop, a phone, and at least one real VR headset. Using a desktop browser is fine during development, but there is no substitute for an actual device strapped to your noggin. Things which seem fine on desktop will be annoying in a real headset. The FoV (Field of View) of headsets is radically different from a phone or desktop window, so different things may be visible in each. You must test across form factors.

The Oculus Go is fairly easy to acquire and very affordable. If you have a smartphone, also consider using a plastic or cardboard viewer; these are cheap and easy to find.

Firefox Reality supports remote debugging over USB so you can see the performance and console right in your desktop browser.

Firefox Reality Developers Guide

Get your Site Featured

If you have a cool creation you want to share, let us know. We can help you get great exposure for your VR website. Submit your creation to this page. You must put in an email address or we can’t contact you back. To qualify your website must at least run in WebVR in Firefox Reality on the Oculus Go. If you don’t have one to test with then please contact us to help you test it.

Your site must support some sort of VR experience without having to create an account or pay money. Asking for money/account for a deeper experience is fine, but it must provide some sort of functionality right off the bat (intro level, tutorial, example, etc.).

The Future is Now

We are so excited that the web is at the forefront of VR development. Our goal is to help you create fun and successful VR content. This post contains a few tips to help, but we are sure you will come up with your own tips as well. Happy coding.

Daniel Pocock: What is the difference between moderation and censorship?

FSFE fellows recently started discussing my blog posts about Who were the fellowship? and An FSFE Fellowship Representative's dilemma.

Fellows making posts in support of reform have reported their emails were rejected. Some fellows had CC'd me on their posts to the list and these posts never appeared publicly. These are some examples of responses received by a fellow trying to post on the list:

The list moderation team decided now to put your email address on moderation for one month. This is not censorship.

One fellow forwarded me a rejected message to look at. It isn't obscene, doesn't attack anybody and doesn't violate the code of conduct. The fellow writes:

+1 for somebody to answer the original questions with real answers
-1 for more character assassination

The censors, er, moderators responded to that fellow:

This message is in constructive and unsuited for a public discussion list.

Why would moderators block something like that? In the same thread, they allowed some very personal attack messages in favour of existing management.

Moderation + Bias = Censorship

Even links to the public list archives are giving errors and people are joking that they will only work again after the censors, er, PR team change all the previous emails to comply with the censorship, er, communications policy exposed in my last blog.

Fellows have started noticing that the blog of their representative is not being syndicated on Planet FSFE any more.

Some people complained that my last blog didn't provide evidence to justify my concerns about censorship. I'd like to thank FSFE management for helping me respond to that concern so conclusively with these heavy-handed actions against the community over the last 48 hours.

The collapse of the fellowship described in my earlier blog has been caused by FSFE management decisions. The solutions need to come from the grass roots. A totalitarian crackdown on all communications is a great way to make sure that never happens.

FSFE claims to be a representative of the free software community in Europe. Does this behaviour reflect how other communities operate? How successful would other communities be if they suffocated ideas in this manner?

This is what people see right now trying to follow links to the main FSFE Discussion list archive:

Josh Matthews: Bugs Ahoy: The Next Generation

There’s a new Bugs Ahoy in town, and it’s called Codetribute.

The past

I started the Bugs Ahoy project in October 2011 partly because I was procrastinating from studying for midterms, but mostly because I saw new contributors being overwhelmed by Bugzilla’s… Bugzilla-ness. I wanted to reduce the number of decisions that new contributors had to make by:

  • only showing bugs that match their skills
  • only showing bugs that have someone ready to mentor
  • presenting only the most useful information needed to make the required contribution

Bugs Ahoy was always something that was Good Enough, but I’ve never been able to focus on making it the best tool it could be. I’ve heard enough positive feedback over the past 7 years to convince me that it was better than nothing, at least!

The future

Bugs Ahoy’s time is over, and I would like to introduce the new Codetribute site. This is the result of Fienny Angelina’s hard work, with Dustin Mitchell, Hassan Ali, and Eli Perelman contributing as well. It is the spiritual successor to Bugs Ahoy, built to address limitations of the previous system by people who know what they’re doing. I was thrilled by the discussions I had with the team while Codetribute was being built, and I’m excited to watch as the project evolves to address future needs.

Bugs Ahoy will redirect automatically to the Codetribute homepage, but you should update any project or language-specific bookmarks or links so they remain useful. If your project isn’t listed and you would like it to be, please go ahead and add it! Similarly, if you have suggestions for ways that Codetribute could be more useful, please file an issue!

Hacks.Mozilla.Org: Firefox Focus with GeckoView

Firefox Focus is private browsing as an app: It automatically blocks ads and trackers, so you can surf the web in peace. When you’re done, a single tap completely erases your history, cookies, and other local data.

Protecting you from invasive tracking is part of Mozilla’s non-profit mission, and Focus’s built-in tracking protection helps keep you safe. It also makes websites load faster!

A screenshot of Firefox Focus, showing the main menu open with the heading "26 Trackers Blocked"

With Focus, you don’t have to worry about your browsing history coming back to haunt you in retargeted ads on other websites.

Bringing Gecko to Focus

In the coming weeks, we’ll release a new version of Focus for Android, and for the first time, Focus will come bundled with Gecko, the browser engine that powers Firefox Quantum. This is a major architectural change, so while every copy of Focus will include Gecko—hence the larger download size—we plan on enabling it gradually to ensure a smooth transition. You can help us test Gecko in Focus today by installing the Focus Beta.

Diagram of Firefox Focus 7, showing how the app now contains GeckoView, instead of just relying on the WebView component provided by Android

Note: At time of publishing, Focus Beta is conducting an A/B test between the old and new engines. Look for “Gecko/62.0” in your User-Agent String to determine if your copy is using Gecko or not.

Up until this point, Focus has been powered exclusively by Android’s built-in WebView. This made sense for initial development, since WebView was already on every Android device, but we quickly ran into limitations. Foremost, it isn’t designed for building browsers. Despite being based on Chromium, WebView only supports a subset of web standards, as Google expects app developers to use native Android APIs, and not the Web, for advanced functionality. Instead, we’d prefer if apps had access to the entirety of the open, standards-based web platform.

In Focus’s case, we can only build next-generation privacy features if we have deep access to the browser internals, and that means we need our own engine. We need Gecko. Fortunately, Firefox for Android already uses Gecko, just not in a way that’s easy to reuse in other applications. That’s where GeckoView comes in.

GeckoView: Making Gecko Reusable

GeckoView is Gecko packaged as a reusable Android library. We’ve worked to decouple the engine itself from its user interface, and made it easy to embed in other applications. Thanks to GeckoView’s clean architecture, our initial benchmarks of the new Focus show a median page load improvement of 20% compared to Firefox for Android, making GeckoView our fastest version of Gecko on Android yet.

Screenshot of the GeckoView AAR (Android Library) file. It is about 37 MB large.

We first put GeckoView into production last year, powering both Progressive Web Apps (PWAs) and Custom Tabs in Firefox for Android. These minimal, self-contained features were good initial projects, but with Focus we’re going much further. Focus will be our first time using GeckoView to completely power an existing, successful, and standalone product.

We’re also using GeckoView in entirely new products like Firefox Reality, a browser designed exclusively for virtual and augmented reality headsets. We’ll be sharing more about it later this year.

Building Browsers with Android Components

To build a web browser, you need more than just an engine. You also need common functionality like tabs, auto-complete, search suggestions, and so on. To avoid unnecessary duplication of effort, we’ve also created Android Components, a collection of independent, ready-to-use libraries for building browsers and browser-like applications on Android.

For Mozilla, GeckoView means we can leverage all of our Firefox expertise in building more compelling, safe, and robust online experiences, while Android Components ensures that we can continue experimenting with new projects (like Focus and Firefox Reality) without reinventing wheels. In many ways, these projects set the stage for the next generation of the Firefox family of browsers on Android.

For Android developers, GeckoView means control. It’s a production-grade engine with a stable and expansive API, usable either on its own or through Android Components. Because GeckoView is a self-contained library, you don’t have to compile it yourself. Furthermore, powering your app with GeckoView gives you a specific web engine version you can work with. Compare that to WebView, which tends to have quite a bit of variance between versions depending on the OS and Chrome version available on the device. With GeckoView, you always know what you’re getting — and you benefit from Gecko’s excellent, cross-platform support for web standards.

Get Involved

We’re really excited about what GeckoView means for the future of browsers on Android, and we’d love for you to get involved:

Let us know what you think of GeckoView and the new Focus in the comments below!

The post Firefox Focus with GeckoView appeared first on Mozilla Hacks - the Web developer blog.

Paul Bone: Avoiding large immediate values

We’re often told that we shouldn’t worry about the small details in optimisation, that either "premature optimisation is the root of all evil" or "the compiler is smarter than you". These things are true, in general. Which is why if you asked me about 10 years ago if I thought I would be using knowledge of machine code (not just assembly!) to improve a browser’s benchmark score by 2.5% I wouldn’t have believed you.

First off, I’m sorry (not sorry) for the gloating, and for what it’s worth the optimisation isn’t really that clever, and wasn’t even my idea. What I’m finding almost funny is that younger-me would not have believed that such low-level details mattered this much.

Bump-pointer allocation

SpiderMonkey (Firefox’s JavaScript engine) separates its garbage collector into two areas, the nursery and the tenured heap. New objects are typically allocated first in the nursery, when the nursery is collected the object will be moved into the tenured heap if it is still alive. Collecting the nursery is faster than the whole heap since less data needs to be scanned, and most objects die when they are young. This is a fairly standard way to manage a garbage collector and is called generational garbage collection.

Allocating something in either heap should be fast, but since nursery allocation is more common it needs to be VERY fast. When JITing JavaScript code, allocation code is JITed right into the execution paths in each place it is needed.

I was working on a change to this code: I wanted to count the number of tenured and nursery allocations, and above all, not add too much of a performance impact. That work is Bug 1473213 and isn’t actually the topic of this post, it’s just what drew my attention. (TL;DR: this work is Bug 1479360.)

The nursery fast-path looked like this. I’ve simplified it for easier reading, mostly by removing unnecessary things.

Register result(...), temp(...);
CompileZone* zone = GetJitContext()->realm->zone();
size_t totalSize = ...
void *ptrNurseryPosition = zone->addressOfNurseryPosition();
const void *ptrNurseryCurrentEnd = zone->addressOfNurseryCurrentEnd();

loadPtr(AbsoluteAddress(ptrNurseryPosition), result);
computeEffectiveAddress(Address(result, totalSize), temp);
branchPtr(Assembler::Below, AbsoluteAddress(ptrNurseryCurrentEnd), temp, fail);
storePtr(temp, AbsoluteAddress(ptrNurseryPosition));

That probably didn’t read right for most readers. What we’re looking at here is the code generator of the JIT compiler, this is not the allocation code itself, but the code that creates the machine code that does the allocation. I’ve broken it into two sections, the first five lines prepare some values and have absolutely zero runtime cost. The last five lines generate the code that does the bump pointer allocation. Function calls like loadPtr generate one or more machine code instructions:

loadPtr(AbsoluteAddress(ptrNurseryPosition), result)

Read a pointer-sized value from memory at ptrNurseryPosition and store it in the register result. ptrNurseryPosition points to a pointer that points to the next free cell in the heap. So this places the pointer of the next free cell into the result register.

computeEffectiveAddress(Address(result, totalSize), temp)

Use an lea or similar instruction to add totalSize (a displacement) to the contents of the result register, store the result of this addition into temp. After executing this temp will contain the pointer to the next free cell once we perform the current allocation.

branchPtr(..., AbsoluteAddress(ptrNurseryCurrentEnd), temp, fail)

Compare the temp register’s contents against the contents of the memory at ptrNurseryCurrentEnd and if temp is higher, branch to the fail label. This compares the next value for the allocation pointer to the end of the heap, if the allocation would go beyond the end of the nursery then fail.

storePtr(temp, AbsoluteAddress(ptrNurseryPosition))

Store the new value for the next free cell (temp) into the memory at ptrNurseryPosition.

Unfortunately this isn’t as efficient as it could be.

Immediates and displacements

I’ve recently written about addressing in x86, where I wrote that instructions refer to operands, and these operands may be registers, memory locations or immediate values. To recap, there are two main situations where some value can follow the instruction: either as an immediate value or as a displacement for a memory operand.


A displacement may be either 8 or 32 bits (on x86 running in 32 or 64 bit mode).


An immediate value depends on the size of the operation, and may be 8, 16, 32 or 64 bits.

The point here, is that displacements cannot store a 64 bit value, so:

branchPtr(Assembler::Below, AbsoluteAddress(ptrNurseryCurrentEnd), temp, fail);

Cannot directly use a 64-bit displacement (ptrNurseryCurrentEnd) for its memory operand, and requires an extra instruction to first load this value into a scratch register from an immediate (which can be 64 bits) before doing the comparison. This operation will now need three instructions rather than two (compare and jump are already separate instructions).

Intel provides a special exception to these rules about displacements for move instructions. There are four special opcodes for move that allow it to work with a 64-bit moffset. So:

loadPtr(AbsoluteAddress(ptrNurseryPosition), result);

Can almost be represented. But these opcodes hard-code result to the accumulator register (eax/rax), which is not the register needed here. Therefore using 64-bit addresses also makes these loadPtr and storePtr operations use two instructions rather than one.

Here’s the disassembled code that this generates.

movabs $0x7ffff5d1b618,%r11
mov    (%r11),%rbx
lea    0x60(%rbx),%rbp
movabs $0x7ffff5d1b630,%r11
cmp    %rbp,(%r11)
jb     0x1f2f3ed1a351
movabs $0x7ffff5d1b618,%r11
mov    %rbp,(%r11)

This sequence, rather than being five instructions long, is now eight instructions long (and 49 bytes) and makes more use of a scratch register (which may impact instruction-level parallelism).

The instruction cache

Instructions aren’t the only cost. This code sequence contains four 64-bit addresses, that’s a total of 32 bytes in the instruction stream (including the target for the jump on failed allocations). That takes up room in the CPU’s caches and other resources in the processor front-end.

The front-end of a processor’s pipeline must fetch and decode instructions before they’re queued, scheduled, executed and retired. Processor front-ends have changed a lot, and there are multiple levels of caching and buffering. Let’s use the Intel Core microarchitecture as an example: it’s new enough to be in common use, and things got more complex in the next microarchitecture due to having two different front-end pathways. The resource for this information is Intel’s optimisation reference manual.

Instructions are fetched 16 bytes at a time, and immediately following the fetch a pre-decode pass occurs: a fast calculation of instruction lengths. Once the processor knows the lengths (and boundaries) of the instructions within the 16 bytes, they’re written into a buffer (the instruction queue) six at a time; if there are more than six instructions in the 16-byte block, then more cycles are used to pre-decode the remaining instructions. If fewer than six instructions were in the 16 bytes, or a read of fewer than 16 bytes occurred due to alignment or branching, then the full bandwidth of the pre-decode is not being utilised. If this happens often, the instruction queue may starve.

The instruction queue is 18 instructions deep (but I think it’s shared by hyper-threading). Instructions are decoded from this queue four or five at a time by the four decoders. One of the decoders is special and can handle some pairs of instructions, turning them into a single operation.

Our instruction sequence above contains eight instructions, in 49 bytes. Assuming alignment is in our favour this will take four fetch and pre-decode steps, averaging two instructions per pre-decode cycle; less than the CPU is capable of. (I don’t know how this behaves when an instruction crosses the 16-byte boundary, but back-of-the-envelope reasoning tells me it’s not a problem.)

This low instruction density might not be a problem in many situations, such as when the instruction cache already contains plenty of instructions and this bubble does not affect overall throughput. However in a loop or when other things already affect the processor’s pipeline, it could definitely be an issue.

The change

My colleague sfink had left a comment in the nursery string allocation path where he attempted to experiment with this in the past. His solution was eventually removed because it was a little bit fiddly, but it was the inspiration for my eventual change.

The code (tidied up) now looks like:

CheckedInt<int32_t> endOffset = (CheckedInt<uintptr_t>(uintptr_t(curEndAddr)) -
                                 CheckedInt<uintptr_t>(uintptr_t(posAddr))).toChecked<int32_t>();
MOZ_ASSERT(endOffset.isValid(), "Position and end pointers must be nearby");

movePtr(ImmPtr(posAddr), temp);
loadPtr(Address(temp, 0), result);
addPtr(Imm32(totalSize), result);
branchPtr(Assembler::Below, Address(temp, endOffset.value()), result, fail);
storePtr(result, Address(temp, 0));
subPtr(Imm32(totalSize), result);

This loads a 64-bit address once and uses a relative address to describe the end of the nursery (the Address argument to the branchPtr call), and can then re-use the original address when updating the current pointer (storePtr). We have to add the object size to result and subtract it later because we can’t easily get guaranteed access to another register with the way the code generator is written. So there are six operations in this sequence; let’s see the machine code:

movabs $0x7ffff5d1b618,%rbp
mov    0x0(%rbp),%rbx
add    $0x60,%rbx
cmp    %rbx,0x18(%rbp)
jb     0x164f300ea154
mov    %rbx,0x0(%rbp)
sub    $0x60,%rbx

Seven instructions long rather than eight, and 36 bytes rather than 49. This can be retrieved in three 16-byte transfers, rather than four. The instructions per fetch is now 2 1/3 rather than 2.
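The fetch arithmetic above can be checked with a tiny model. This is plain back-of-the-envelope arithmetic, not real tooling, and it assumes the sequence starts on a 16-byte fetch boundary, as the text does:

```javascript
// Rough model of the Core microarchitecture's 16-byte instruction fetch,
// assuming the sequence is aligned to a fetch boundary.
const fetches = bytes => Math.ceil(bytes / 16);
const instructionsPerFetch = (instructions, bytes) =>
  instructions / fetches(bytes);

// Old sequence: 8 instructions in 49 bytes -> 4 fetches, 2 per fetch.
// New sequence: 7 instructions in 36 bytes -> 3 fetches, ~2.33 per fetch.
```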


It doesn’t look like a huge improvement, seven instructions compared with eight?! But now it uses one less 16-byte fetch which means one less cycle to fill the pipeline for these instructions, in the right loop that could make a huge difference. It did make Firefox perform about 2.5% faster on the Speedometer benchmark when tested on my laptop (Intel Core i7-6600U, Skylake). Sadly we didn’t see any noticeable difference in our performance testing infrastructure (arewefastyet or perfherder). This could be because our CI systems have different CPUs that behave differently with regard to instruction lengths/density.

My examples above were for the simpler Core microarchitecture, whereas my testing was on a Skylake CPU and will be quite different. Starting with Sandy Bridge there are two paths for code to take through the CPU front end, and which one is used depends on multiple conditions. To simplify it, on tight enough loops the CPU is able to cache decoded instructions and execute them out of a μop cache.


Another difference is that with an absolute address used in the cmp instruction it could behave differently with regard to macro-fusion (being fused with the following jump to execute as a single operation). I’m not sure if large displacements affect macro-fusion.

Update 2018-09-18

I received some feedback from Robert O’Callahan, he wrote with three suggestions.

  • Allocate all JIT code and globals within a single 2GB region and use RIP-relative addressing (x86-64), so that addresses will not be larger than 32 bits. This is a good idea and I considered this for the jump instruction in that sequence, which still uses a 64-bit address (because the jump is created before the label, the address is written in afterwards, so it must leave 64 bits of space for now).

  • Using known bit patterns in the nursery address range we could test for overflow by checking the value of the bits, avoiding an extra memory read. This is a great idea but will require some other work first.

  • The final subtraction might be skippable if the caller can handle an address to the end of the structure and use negative offsets, eg by filling in slots in the object using negative offsets. I’m skeptical if this will provide much benefit compared to the effort required to avoid the subtraction, or probably at best delay it.

Mozilla GFX: WebRender newsletter #22

The closer we get to shipping WebRender, the harder it is for me to take the time to go through commit logs and write the newsletter. But this time is special.

Yesterday we enabled WebRender by default on Firefox Nightly (🎉🎉🎉) for a subset of the users: Desktop Nvidia GPUs on Windows 10. This represents 17% of the nightly population. We chose to first target this very specific configuration in order to avoid getting flooded with driver bugs, and we’ll gradually add more as things stabilize.

Needless to say, this is a pretty exciting moment for the graphics team and everyone who contributed to WebRender, since we have been working on this project for quite a while. There are still a number of blocker bugs to fix before we can hit the beta population, and then some more to meet release quality. Nonetheless, shipping in nightly is a huge step. Monitoring bug reports for the next few days will be interesting.

Notable WebRender changes

  • Glenn avoided allocating clip masks for clip rects in the same coordinate system.
  • Glenn avoided allocating clip masks for scale and offset transforms.
  • Gankro fixed backface-visibility in the presence of preserve-3d.
  • Nical added a mechanism to notify the embedder (Gecko) at different stages of the rendering pipeline.
  • Nical added some infrastructure for tracking the validity of the current frame and hit tester to avoid redundant work.
  • Emilio made dashed border look more like Gecko’s.
  • Glenn refactored the display list flattener for mix blend mode optimizations.
  • Kvark improved the recording infrastructure to generate test cases.
  • Nical moved the scene data structure to the scene builder thread in preparation for low priority command execution.
  • Nical implemented a system for processing low priority work without blocking high priority commands. This improves the scheduling of work from background tabs as well as better integration with Gecko’s tab-switching mechanism.
  • Glenn implemented rasterization with arbitrary coordinate roots. This fixed a bunch of correctness issues and will make it possible to do more caching of rendered content.
  • Emilio removed some useless allocations.
  • Emilio improved the rendering of very small dots.
  • Kvark refactored the border shader to avoid driver warnings.
  • Glenn fixed a bug with relative transforms.
  • Kvark added better checks for dual source blending support.
  • Dan added a simple border shader for solid borders, improving shader compile times for the common cases.
  • Emilio made the color modulation for groove and ridge borders more consistent with what is done in Gecko.
  • Kvark avoided signed integer divisions in the shaders.
  • Lee allowed blob images to query font instance data.
  • Glenn fixed the order of compositing operations within a stacking context.
  • Glenn improved the accuracy of the clipper code.
  • Glenn fixed backface visibility in nested stacking contexts.
  • Glenn improved the 3d transform support.
  • Kvark used the hardware for perspective division.
  • Kvark fixed perspective interpolation in the brush blend shader.
  • Glenn fixed a bug causing redundant clips to be drawn.
  • Dan added support for shadows for border and image brushes.
  • Glenn added some fixes and optimizations for clips in different coordinate systems.
  • Lee moved font addition/deletion off the render backend thread.
  • Glenn refactored clip chains to allow rasterizing pictures in local space.
  • Glenn simplified how local rects are accumulated in 3d contexts.
  • Glenn simplified determining if batches can be merged.
  • Glenn fixed a bug happening when a brush’s mask kind changes during scrolling.
  • Lee added color masking support for images.
  • Glenn optimized local rect calculation during culling.
  • Patrick added a debugging feature to visualize overdraw.
  • Nical fixed a crash caused by huge border widths.
  • Kvark fixed near plane splitting.
  • Kvark reduced the amount of data transferred to the GPU in some cases.
  • Nical reduced the memory consumption of outdated images in the texture cache.
  • Nical fixed a crash caused by huge border radii.
  • Kvark avoided unnecessary transform copies.
  • Kvark removed some unnecessary allocations.
  • Kvark avoided rendering primitive runs of invisible clip nodes.

Notable Gecko changes

  • Jeff reduced the size of the blob command list by skipping out of bound items.
  • Jeff sped up filters in blob images by only filtering the portion we need instead of the entire surface.
  • Emilio added an environment variable to force pre-caching shaders.
  • Patrick added a preference to show overdraw.
  • Emilio avoided some useless allocations when building the display list.
  • Lee avoided spending time populating cairo scaled fonts when we don’t target cairo.
  • Andrew fixed a bug with disappearing background images.
  • Jeff fixed a blob image invalidation bug inside masks and filters.
  • Andrew fixed a crash.
  • Jeff also fixed a crash.
  • Lee improved and fixed the way we deal with missing glyphs.
  • Sotaro fixed some issues with the clear color.
  • Nical cleared memory resources when the driver reports a memory purge.
  • Sotaro fixed a crash.
  • Sotaro fixed another crash.
  • Jeff fixed yet another crash.
  • Lee also fixed a crash.
  • Oh and another one.
  • Jeff further reduced the size of blob image command lists.
  • Nical added the dirty rect optimization for async blobs.
  • Sotaro forwarded logs to the right place on android.
  • Jeff improved the handling of foreign objects in blob invalidation (24% performance improvement on the tscrollx test).
  • Jeff hooked up invalidation testing.
  • Markus made CSS filters in SVG elements not use the fallback.
  • Kats fixed duplicated window controls on Mac.
  • Andrew fixed the tracking of the dirty rects of an image used by several tabs.
  • Matt fixed some of the recording infrastructure used by our performance benchmarks with webrender.
  • Sotaro fixed other pieces of performance recording infrastructure.
  • Andrew fixed an invalidation bug with animated images.
  • Kats fixed a leak.
  • Sotaro reduced the latency of video frames.
  • Lee reused scaled fonts across blob image recordings.
  • Jeff made blob images more robust against corrupted data.
  • Henrik added support for the -moz-crisp-edges property for canvas.
  • Henrik added support for the -moz-crisp-edges property for images.
  • Henrik added support for the -moz-crisp-edges property for video.
  • Lee fixed a text clipping bug.
  • Jeff fixed an issue with image sampling.

Enabling WebRender in Firefox Nightly

  • In about:config set “gfx.webrender.all” to true,
  • restart Firefox.

Reporting bugs

The best place to report bugs related to WebRender in Gecko is the Graphics :: WebRender component in bugzilla.
Note that it is possible to log in with a github account.

Mozilla Future Releases BlogDNS over HTTPS (DoH) – Testing on Beta

DNS is a critical part of the Internet, but unfortunately has bad security and privacy properties, as described in this excellent explainer by Lin Clark. In June, Mozilla started experimenting with DNS over HTTPS, a new protocol which uses encryption to protect DNS requests and responses. As we reported at the end of August, our experiments in the Nightly channel look very good: the slowest users show a huge improvement, anywhere up to hundreds of milliseconds, and most users see only a small performance slowdown of around 6 milliseconds, which is acceptable given the improved security.

This is a very promising result and the next step is to validate the technique over a broader set of users on our Beta channel. We will once again work with users who are already participating in Firefox experiments, and continue to provide in-browser notifications about the experiment and details about the DoH service provider so that everyone is fully informed and has a chance to decline participation in this particular experiment. A soft rollout to selected Beta users in the United States will begin the week of September 10th.

As before, this experiment will use Cloudflare’s DNS over HTTPS service. Cloudflare has been a great partner in developing this feature and has committed to very strong privacy guarantees for our users. Moving forward, we are working to build a larger ecosystem of trusted DoH providers that live up to this high standard of data handling, and we hope to be able to experiment with other providers soon.

References to DoH

The post DNS over HTTPS (DoH) – Testing on Beta appeared first on Future Releases.

The Rust Programming Language BlogAnnouncing Rust 1.29

The Rust team is happy to announce a new version of Rust, 1.29.0. Rust is a systems programming language focused on safety, speed, and concurrency.

If you have a previous version of Rust installed via rustup, getting Rust 1.29.0 is as easy as:

$ rustup update stable

If you don’t have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.29.0 on GitHub.

What’s in 1.29.0 stable

The 1.29 release is fairly small; Rust 1.30 and 1.31 are going to have a lot in them, and so much of the 1.29 cycle was spent preparing for those releases. The two most significant things in this release aren’t even language features: they’re new abilities that Cargo has grown, and they’re both about lints.

  • cargo fix can automatically fix your code that has warnings
  • cargo clippy is a bunch of lints to catch common mistakes and improve your Rust code

cargo fix

With the release of Rust 1.29, Cargo has a new subcommand: cargo fix. If you’ve written code in Rust before, you’ve probably seen a compiler warning before. For example, consider this code:

fn do_something() {}

fn main() {
    for i in 0..100 {
        do_something();
    }
}
Here, we’re calling do_something a hundred times. But we never use the variable i. And so Rust warns:

> cargo build
   Compiling myprogram v0.1.0 (file:///path/to/myprogram)
warning: unused variable: `i`
 --> src\
4 |     for i in 0..100 {
  |         ^ help: consider using `_i` instead
  = note: #[warn(unused_variables)] on by default

    Finished dev [unoptimized + debuginfo] target(s) in 0.50s

See how it suggests that we use _i as a name instead? We can automatically apply that suggestion with cargo fix:

> cargo fix
    Checking myprogram v0.1.0 (file:///C:/Users/steve/tmp/fix)
      Fixing src\ (1 fix)
    Finished dev [unoptimized + debuginfo] target(s) in 0.59s

If we look at src\ again, we’ll see that the code has changed:

fn do_something() {}

fn main() {
    for _i in 0..100 {
        do_something();
    }
}
We’re now using _i, and the warning will no longer appear.

This initial release of cargo fix only fixes up a small number of warnings. The compiler has an API for this, and it only suggests fixes for lints that we’re confident recommend correct code. Over time, as our suggestions improve, we’ll expand this to automatically fix more warnings.

If you find a compiler suggestion and want to help make it fixable, please leave a comment on this issue.

cargo clippy

Speaking of warnings, you can now check out a preview of cargo clippy through Rustup. Clippy is a large number of additional warnings that you can run against your Rust code.

For example:

let mut lock_guard = mutex.lock();

std::mem::drop(&lock_guard);

operation_that_requires_mutex_to_be_unlocked();

This code is syntactically correct, but may have a deadlock! You see, we dropped a reference to lock_guard, not the guard itself. Dropping a reference is a no-op, and so this is almost certainly a bug.

We can get the preview of Clippy from Rustup:

$ rustup component add clippy-preview

and then run it:

$ cargo clippy
error: calls to `std::mem::drop` with a reference instead of an owned value. Dropping a reference does nothing.
 --> src\
5 |     std::mem::drop(&lock_guard);
  |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  = note: #[deny(drop_ref)] on by default
note: argument has type &std::result::Result<std::sync::MutexGuard<'_, i32>, std::sync::PoisonError<std::sync::MutexGuard<'_, i32>>>
 --> src\
5 |     std::mem::drop(&lock_guard);
  |                    ^^^^^^^^^^^
  = help: for further information visit

As you can see from that help message, you can view all of the lints that clippy offers on the web.

Please note that this is a preview; clippy has not yet reached 1.0. As such, its lints may change. We’ll release a clippy component once it has stabilized; please give the preview a try and let us know how it goes.

Oh, and one more thing: you can’t use clippy with cargo-fix yet, really. It’s in the works!

See the detailed release notes for more.

Library stabilizations

Three APIs were stabilized this release:

Additionally, you can now compare &str and OsString.

See the detailed release notes for more.

Cargo features

We covered the two new subcommands to Cargo above, but additionally, Cargo will now try to fix up lockfiles that have been corrupted by a git merge. You can pass --locked to disable this behavior.

cargo doc has also grown a new flag: --document-private-items. By default, cargo doc only documents public things, as the docs it produces are intended for end-users. But if you’re working on your own crate, and you have internal documentation for yourself to refer to, --document-private-items will generate docs for all items, not just public ones.

See the detailed release notes for more.

Contributors to 1.29.0

Many people came together to create Rust 1.29. We couldn’t have done it without all of you. Thanks!

Joel MaherLooking at Firefox performance 57 vs 63

Last November we released Firefox v.57, otherwise known as Firefox Quantum.  Quantum was in many ways a whole new browser with the focus on speed as compared to previous versions of Firefox.

As I write about many topics on my blog which are typically related to my current work at Mozilla, I haven’t written about measuring or monitoring Performance in a while.  Now that we are almost a year out I thought it would be nice to look at a few of the key performance tests that were important for tracking in the Quantum release and what they look like today.

First I will look at the benchmark Speedometer which was used to track browser performance primarily of the JS engine and DOM.  For this test, we measure the final score produced, so the higher the number the better:


You can see a large jump in April; that is when we upgraded the hardware we run the tests on. Otherwise we have only improved since last year!

Next I want to look at our startup time test (ts_paint), which measures the time to launch the browser from a command line in ms; in this case lower is better:


Here again, you can see the hardware upgrade in April; overall we have made this slightly better over the last year!

What is more interesting is a page load test.  This is always an interesting test and there are many opinions about the right way to do this.  How we do pageload is to record a page and replay it with mitmproxy.  Lucky for us (thanks to neglect) we have not upgraded our pageset so we can really compare the same page load from last year to today.

For the pages we initially set up, we have 4 recorded pages that we have continued to test; all of these are measured in ms, so lower is better. (measuring time to first non blank paint):


We see our hardware upgrade in April, otherwise small improvements over the last year!

Facebook (logged in with a test user account, measuring time to first non blank paint):


Again, we have the hardware upgrade in April, and overall we have seen a few other improvements 🙂

Google (custom hero element on search results):


Here you can see that a year ago we were better; with a few ups and downs, overall we are seeing neither gains nor losses (and yes, the hardware upgrade is visible in April).

Youtube (measuring first non blank paint):


As you can see here, there wasn’t a big change in April with the hardware upgrade, but in the last 2 months we see some noticeable improvements!

In summary, none of our tests have shown regressions.  Does this mean that Firefox v.63 (currently on Beta) is faster than Firefox Quantum release of last year?  I think the graphs here show that is true, but your mileage may vary.  It does help that we are testing the same tests (not changed) over time so we can really compare apples to apples.  There have been changes in the browser and updates to tools to support other features including some browser preferences that change.  We have found that we don’t necessarily measure real world experiences, but we get a good idea if we have made things significantly better or worse.

Some examples of how this might be different for you than what we measure in automation:

  • We test in an isolated environment (custom prefs, fresh profile, no network to use, no other apps)
  • Outdated pages that we load have most likely changed in the last year
  • What we measure as a startup time or a page loaded time might not reflect what a user perceives as accurate


Mozilla Open Policy & Advocacy BlogMozilla reacts to EU Parliament vote on copyright reform

Today marks a very sad day for the internet in Europe. Lawmakers in the European Parliament have just voted to turn their backs on key principles on which the internet was built; namely openness, decentralisation, and collaboration.

Parliamentarians have given a green light to new rules that will compel online services to implement blanket upload filters, a crude and ineffective measure that could well spell an end to the rich creative fabric of memes, mashups, and GIFs that make internet culture so great. The Parliament’s vote also endorses a ‘link tax’ that will undermine access to knowledge and the sharing of information in Europe.

We recognise the efforts of many MEPs who attempted to find workable solutions that would have rectified some of the grave shortcomings in this proposal. Sadly, the majority dismissed those constructive solutions, and the open internet that we’ve taken for granted the last 20 years is set to turn into something very different in Europe.

The fight is not over yet. Lawmakers still need to finalise the new rules, and we at Mozilla will do everything we can to achieve a modern reform that safeguards the health of the internet and promotes the rights of users. There’s simply too much at stake not to.

The post Mozilla reacts to EU Parliament vote on copyright reform appeared first on Open Policy & Advocacy.

Mozilla Open Policy & Advocacy BlogEU terrorism regulation threatens internet health in Europe

The European Commission has today proposed a troublesome new regulation regarding terrorist content online. As we have said, illegal content – of which terrorist content is a particularly striking example – undermines the overall health of the internet. We welcome effective and sustainable efforts to address illegal content online. But the Commission’s proposal is a poor step in that direction. It would undermine due process online; compel the use of ineffective content filters; strengthen the position of a few dominant platforms while hampering European competitors; and, ultimately, violate the EU’s commitment to protecting fundamental rights.

Under the Commission’s proposal, government-appointed authorities – not independent courts – would have the unilateral power to suppress speech on the internet. Longstanding norms around due process and the separation of powers would be swept aside, with little evidence to support such a drastic departure from established norms. These authorities would have vague, indeterminate authority to require additional proactive measures (including but not limited to content filters) from platforms where they deem them appropriate.

In keeping with a worrying global policy trend, this proposal falls victim to the flawed and dangerous assumption that technology is a panacea to complex problems. It would force private companies to play an even greater role in defining acceptable speech online. In practice it would force online services throughout the internet ecosystem to adapt the standards of speech moderation designed for the largest platforms, strengthening their role in the internet economy and putting European competitors at a disadvantage. At a time when lawmakers around the world are increasingly concerned with centralisation and competition in the digital marketplace, this would be a step backwards.

A regulation that poses broad threats to free expression outside of a rule-of-law framework is incompatible with the EU’s long-standing commitment to protecting fundamental rights. As a mission-driven technology company and not-for-profit foundation, both maker of the Firefox web browser and steward of a community of internet builders, we believe user rights and technical expertise must play an essential part in this legislative debate. We have previously presented the Commission with a framework to guide effective policy for illegal content in the European legal context. This proposal falls far short of what is needed for the health of the internet in Europe.

The post EU terrorism regulation threatens internet health in Europe appeared first on Open Policy & Advocacy.

Mike HommeyFirefox is now built with clang LTO on all* platforms

You might have read that Mozilla recently switched Windows builds to clang-cl. More recently, those Windows builds have seen both PGO (Profile-Guided Optimization) and LTO (Link-Time Optimization) enabled.

As of next nightly (as of writing, obviously), all tier-1 platforms are now built with clang with LTO enabled. Yes, this means Linux, Mac and Android arm, aarch64 and x86. Linux builds also have PGO enabled.

Mac and Android builds were already using clang, so the only difference is LTO being enabled, which brought some performance improvements.

The most impressive difference, though, was on Linux, where we’re getting more than 5% performance improvements on most Talos tests (up to 18% (!) on some tests) compared to GCC 6.4 with PGO. I must say I wasn’t expecting switching from GCC to clang would make such a difference. And that is with clang 6. A quick test with upcoming clang 7 suggests we’d additionally get between 2 and 5% performance improvement from an upgrade, but our static analysis plugin doesn’t like it.

This doesn’t mean GCC is being unsupported. As a matter of fact, we still have automated jobs using GCC for some static analysis, and we also have jobs ensuring everything still builds with a baseline of GCC 6.x.

You might wonder if we tried LTO with GCC, or tried upgrading to GCC 8.x. As a matter of fact, I did. Enabling LTO turned up linker errors, while upgrading to GCC 7.x broke binary compatibility with older systems and, if I remember correctly, caused some problems with our test suite. GCC 8.1 was barely out when I was looking into this, and we all know to stay away from any new major GCC version until one or two minor updates. Considering the expected future advantages from using clang (cross-language inlining with Rust, consistency between platforms), it seemed a better deal to switch to clang than to try to address those issues.

Update: As there’s been some interest on reddit and HN, and I failed to mention it originally, it’s worth noting that comparing GCC+PGO vs. clang+LTO or GCC+PGO vs. clang+PGO was a win for clang overall in both cases, although GCC was winning on a few benchmarks. If I remember correctly, clang without PGO/LTO was also winning against GCC without PGO.

Anyways, what led me on this quest was a casual conversation at our last All Hands, where we were discussing possibly turning on LTO on Mac, and how that should roughly just be about turning a switch.

Famous last words.

At least, that’s a somewhat reasonable assumption. But when you have a codebase the size of Firefox, you’re up for “interesting” discoveries.

This involved compiler bugs, linker bugs (with a special mention for a bug in ld64 that Apple has apparently fixed in Xcode 9 but hasn’t released the source of), build system problems, elfhack issues, crash report problems, clang plugin problems (would you have guessed that __attribute__((annotate("foo"))) can affect the generated machine code?), sccache issues, inline assembly bugs (getting inputs, outputs and clobbers correctly is hard), binutils bugs, and more.

I won’t bother you with all the details, but here we are, 3 months later with it all, finally, mostly done. Counting only the bugs assigned to me, there are 77 bugs on bugzilla (so, leaving out anything in other bug trackers, like LLVM’s). Some of them relied on work from other people (most notably, Nathan Froyd’s work to switch to clang and then non-NDK clang on Android). This spread over about 150 commits on mozilla-central, 20 of which were backouts. Not everything went according to plan, obviously, although some of those backouts were on purpose as a taskcluster trick.

Hopefully, this sticks, and Firefox 64 will ship built with clang with LTO on all tier-1 platforms as well as PGO on some. Downstreams are encouraged to do the same if they can. The build system will soon choose clang by default on all builds, but won’t enable PGO/LTO.

As a bonus, as of a few days ago, Linux builds are also finally using Position Independent Executables, which improves Address Space Layout Randomization for the few things that are in the executables instead of some library (most notably, mozglue and the allocator). This was actually necessary for LTO, because clang doesn’t build position independent code in executables that are not PIE (but GCC does), and that causes other problems.

Work is not entirely over, though, as more inline assembly bugs might remain, merely not causing visible problems by sheer luck, so I’m now working on a systematic analysis of inline assembly blocks with our clang plugin.

Niko MatsakisRust office hours

Hello, all! Beginning this Friday (in two days)1, I’m going to start an experiment that I call Rust office hours. The idea is simple: I’ve set aside a few slots per week to help people work through problems they are having learning or using Rust. My goal here is both to be of service but also to gain more insight into the kinds of things people have trouble with. No problem is too big or too small!2

To start, I’m running this through my office-hours GitHub repository. All you have to do to sign up for a slot is to open a pull request adding your name; I will try to resolve things on a first come, first serve basis.

I’m starting small: I’ve reserved two 30 minute slots per week for the rest of September. One of those slots is reserved for beginner folks, the other is for anybody. If this is a success, I’ll extend to October and beyond, and possibly add more slots.

So please, come check out the office-hours repository!


  1. Uh, I meant to post this blog post earlier. But I forgot.

  2. OK, some problems may be too big. I’m not that clever, and it’s only a 30 minute slot.

Mozilla Security BlogProtecting Mozilla’s GitHub Repositories from Malicious Modification

At Mozilla, we’ve been working to ensure our repositories hosted on GitHub are protected from malicious modification. As the recent Gentoo incident demonstrated, such attacks are possible.

Mozilla’s original usage of GitHub was an alternative way to provide access to our source code. Similar to Gentoo, the “source of truth” repositories were maintained on our own infrastructure. While we still do utilize our own infrastructure for much of the Firefox browser code, Mozilla has many projects which exist only on GitHub. While some of those projects are just experiments, others are used in production (e.g. Firefox Accounts). We need to protect such “sensitive repositories” against malicious modification, while also keeping the barrier to contribution as low as practical.

This post describes the mitigations we have put in place to prevent shipping (or deploying) from a compromised repository. We are sharing both our findings and some tooling to support auditing. These add the protections with minimal disruption to common GitHub workflows.

The risk we are addressing here is the compromise of a GitHub user’s account, via mechanisms unique to GitHub. As the Gentoo and other incidents show, when a user account is compromised, any resource the user has permissions to can be affected.


GitHub is a wonderful ecosystem with many extensions, or “apps”, to make certain workflows easier. Apps obtain permission from a user to perform actions on their behalf. An app can ask for permissions including modifying or adding additional user credentials. GitHub makes these permission requests transparent, and requires the user to approve via the web interface, but not all users may be conversant with the implications of granting those permissions to an app. They also may not make the connection that approving such permissions for their personal repositories could grant the same for access to any repository across GitHub where they can make changes.

Excessive permissions can expose repositories with sensitive information to risks, without the repository admins being aware of those risks. The best a repository admin can do is detect a fraudulent modification after it has been pushed back to GitHub. Neither GitHub nor git can be configured to prevent or highlight this sort of malicious modification; external monitoring is required.


The following are taken from our approach to addressing this concern, with Mozilla specifics removed. As much as possible, we borrow from the web’s best practices, used features of the GitHub platform, and tried to avoid adding friction to the daily developer workflows.

Organization recommendations:

  • 2FA must be required for all members and collaborators.
  • All users, or at least those with elevated permissions:
    • Should have contact methods (email, IM) given to the org owners or repo admins. (GitHub allows Users to hide their contact info for privacy.)
    • Should understand it is their responsibility to inform the org owners or repo admins if they ever suspect their account has been compromised. (E.g. laptop stolen)

Repository recommendations:

  • Sensitive repositories should only be hosted in an organization that follows the recommendations above.
  • Production branches should be identified and configured:
    • To not allow force pushes.
    • Only give commit privileges to a small set of users.
    • Enforce those restrictions on admins & owners as well.
    • Require all commits to be GPG signed, using keys known in advance.

Workflow recommendations:

  • Deployments, releases, and other audit-worthy events, should be marked with a signed tag from a GPG key known in advance.
  • Deployment and release criteria should include an audit of all signed commits and tags to ensure they are signed with the expected keys.

There are some costs to implementing these protections – especially those around the signing of commits. We have developed some internal tooling to help with auditing the configurations, and plan to add tools for auditing commits. Those tools are available in the mozilla-services/GitHub-Audit repository.


Here’s an example of using the audit tools. First we obtain a local copy of the data we’ll need for the “octo_org” organization, and then we report on each repository:

$ ./ octo_org
2018-07-06 13:52:40,584 INFO: Running as ms_octo_cat
2018-07-06 13:52:40,854 INFO: Gathering branch protection data. (calls remaining 4992).
2018-07-06 13:52:41,117 INFO: Starting on org octo_org. (calls remaining 4992).
2018-07-06 13:52:59,116 INFO: Finished gathering branch protection data (calls remaining 4947).

Now with the data cached locally, we can run as many reports as we’d like. For example, we have written one report showing which of the above recommendations are being followed:

$ ./ --header octo_org.db.json

We can see that only “octo_org/react-starter” has enabled protection against force pushes on its production branch. The final output is in CSV format, for easy pasting into spreadsheets.

How you can help

We are still rolling out these recommendations across our teams, and learning as we go. If you think our Repository Security recommendations are appropriate for your situation, please help us make implementation easier. Add your experience to the Tips ‘n Tricks page, or open issues on our GitHub-Audit repository.

The post Protecting Mozilla’s GitHub Repositories from Malicious Modification appeared first on Mozilla Security Blog.

Hacks.Mozilla.OrgConverting a WebGL application to WebVR

A couple months ago I ported the Pathfinder demo app to WebVR. It was an interesting experience, and I feel like I learned a bunch of things about porting WebGL applications to WebVR that would be generally useful to folks, especially folks coming to WebVR from non-web programming backgrounds.

Pathfinder is a GPU-based font rasterizer in Rust, and it comes with a demo app that runs the Rust code on the server side but does all the GPU work in WebGL in a TypeScript website.

We had a 3D demo showing a representation of the Mozilla Monument as a way to demo text rasterization in 3D. What I was hoping to do was to convert this to a WebVR application that would let you view the monument by moving your head instead of using arrow keys.

I started working on this problem with a decent understanding of OpenGL and WebGL, but almost zero background in VR or WebVR. I’d written an Android Cardboard app three years previously and that was about it.

I’m hoping this article may be useful for others from similar backgrounds.

The converted triangle demo running in WebVR

What is WebVR?

WebVR is a set of APIs for writing VR applications on the web. It lets us request jumping into VR mode, at which point we can render things directly to the eyes of a VR display, rather than rendering to a flat surface browser within the display. When the user is on a device like the Cardboard or Daydream where a regular phone substitutes for the VR display, this is the point where the user puts their phone within the headset.

WebVR APIs help with transitioning to/from VR mode, obtaining pose information, rendering in VR, and dealing with device input. Some of these things are being improved in the work in progress on the new WebXR Device API specification.

Do I need any devices to work with WebVR?

Ideally, a good VR device will make it easier to test your work in progress, but depending on how much resolution you need, a Daydream or Cardboard (where you use your phone in a headset casing) is enough. You can even test stuff without the headset casing, though stuff will look weird and distorted.

For local testing Chrome has a WebVR API emulation extension that’s pretty useful. You can use the devtools panel in it to tweak the pose, and you get a non-distorted display of what the eyes see.

Firefox supports WebVR, and Chrome Canary supports it if you enable some flags. There’s also a polyfill which should work for more browsers.

How does it work under the hood?

I think not understanding this part was the source of a lot of confusion and bugs for me when I was getting started. The core of the API is basically “render something to a canvas and then magic happens”, and I had trouble figuring out how that magic worked.

Essentially, there’s a bunch of work we’re supposed to do, and then there’s extra work the browser (or polyfill) does.

Once we enter VR mode, there’s a callback triggered whenever the device requests a frame. Within this callback we have access to pose information.

Using this pose information, we can figure out what each eye should see, and provide this to the WebVR API in some form.

What the WebVR API expects is that we render each eye’s view to a canvas, split horizontally (this canvas will have been passed to the API when we initialize it).

That’s it from our side, the browser (or polyfill) does the rest. It uses our rendered canvas as a texture, and for each eye, it distorts the rendered half to appropriately work with the lenses used in your device. For example, the distortion for Daydream and Cardboard follows this code in the polyfill.

It’s important to note that, as application developers, we don’t have to worry about this — the WebVR API is handling it for us! We need to render undistorted views from each eye to the canvas — the left view on the left half and the right view on the right half, and the browser handles the rest!

Porting WebGL applications

I’m going to try and keep this self-contained; however, I’ll mention off the bat that some really good resources for learning this stuff can be found at and MDN. has a bunch of neat samples if, like me, you learn better by looking at code and playing around with it.

Entering VR mode

First up, we need to be able to get access to a VR display and enter VR mode.

let vrDisplay;
navigator.getVRDisplays().then(displays => {
    if (displays.length === 0) {
        // no displays available, bail
        return;
    }
    vrDisplay = displays[displays.length - 1];

    // optional, but recommended
    vrDisplay.depthNear = /* near clip plane distance */;
    vrDisplay.depthFar = /* far clip plane distance */;
});

We need to add an event handler for when we enter/exit VR:

let canvas = document.getElementById(/* canvas id */);
let inVR = false;

window.addEventListener('vrdisplaypresentchange', () => {
  // no VR display, exit
  if (vrDisplay == null)
    return;

  // are we entering or exiting VR?
  if (vrDisplay.isPresenting) {
    // We should make our canvas the size expected
    // by WebVR
    const eye = vrDisplay.getEyeParameters("left");
    // multiply by two since we're rendering both eyes side
    // by side
    canvas.width = eye.renderWidth * 2;
    canvas.height = eye.renderHeight;

    const vrCallback = () => {
      if (vrDisplay == null || !inVR) {
        return;
      }
      // reregister callback if we're still in VR
      vrDisplay.requestAnimationFrame(vrCallback);

      // render scene
      render();
    };
    // register callback
    vrDisplay.requestAnimationFrame(vrCallback);

    inVR = true;
  } else {
    inVR = false;
    // resize canvas to regular non-VR size if necessary
  }
});

And, to enter VR itself:

if (vrDisplay != null) {
    inVR = true;
    // hand the canvas to the WebVR API
    vrDisplay.requestPresent([{ source: canvas }]);

    // requestPresent() will request permission to enter VR mode,
    // and once the user has done this our `vrdisplaypresentchange`
    // callback will be triggered
}

Rendering in VR

Well, we’ve entered VR, now what? In the above code snippets we had a render() call which was doing most of the hard work.

Since we’re starting with an existing WebGL application, we’ll have some function like this already.

let width = canvas.width;
let height = canvas.height;

function render() {
    let gl = canvas.getContext("webgl");
    gl.viewport(0, 0, width, height);
    gl.clearColor(/* .. */);
    gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);

    gl.bindBuffer(/* .. */);
    // ...
    let uProjection = gl.getUniformLocation(program, "uProjection");
    let uModelView = gl.getUniformLocation(program, "uModelview");
    gl.uniformMatrix4fv(uProjection, false, /* .. */);
    gl.uniformMatrix4fv(uModelView, false, /* .. */);
    // set more parameters
    // run gl.drawElements()
}

So first we’re going to have to split this up a bit further, to handle rendering the two eyes:

// entry point for WebVR, called by vrCallback()
function renderVR() {
    let gl = canvas.getContext("webgl");
    // set clearColor and call gl.clear()
    clear(gl);

    renderEye(true);
    renderEye(false);

    // Send the rendered frame over to the VR display
    vrDisplay.submitFrame();
}

// entry point for non-WebVR rendering
// called by whatever mechanism (likely keyboard/mouse events)
// you used before to trigger redraws
function render() {
    let gl = canvas.getContext("webgl");
    // set clearColor and call gl.clear()
    clear(gl);
    renderSceneOnce();
}

function renderEye(isLeft) {
    // choose which half of the canvas to draw on
    if (isLeft) {
        gl.viewport(0, 0, width / 2, height);
    } else {
        gl.viewport(width / 2, 0, width / 2, height);
    }
    renderSceneOnce();
}

function renderSceneOnce() {
    // the actual GL program and draw calls go here
}

This looks like a good step forward, but notice that we’re rendering the same thing to both eyes, and not handling movement of the head at all.

To implement this we need to use the perspective and view matrices provided by WebVR from the VRFrameData object.

The VRFrameData object contains a pose member with all of the head pose information (its position, orientation, and even velocity and acceleration for devices that support these). However, for the purpose of correctly positioning the camera whilst rendering, VRFrameData provides projection and view matrices which we can directly use.

We can do this like so:

let frameData = new VRFrameData();
vrDisplay.getFrameData(frameData); // fill it with the current frame's pose

// use frameData.leftViewMatrix / frameData.leftProjectionMatrix
// for the left eye, and
// frameData.rightViewMatrix / frameData.rightProjectionMatrix for the right

In graphics, we often find ourselves dealing with the model, view, and projection matrices. The model matrix defines the position of the object we wish to render in the coordinates of our space, the view matrix defines the transformation between the camera space and the world space, and the projection matrix handles the transformation between clip space and camera space (also potentially dealing with perspective). Sometimes we’ll deal with the combination of some of these, like the “model-view” matrix.
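To make the bookkeeping concrete (this sketch is mine, not part of the original app): in the column-major layout that WebGL and glmatrix use, a clip-space position is projection × view × model × position, and the model-view matrix is simply view × model. A dependency-free illustration:

```javascript
// Column-major 4x4 matrix multiply, the layout WebGL and glmatrix use:
// element (row, col) lives at index col * 4 + row.
// out = a * b, so mat4Mul(view, model) applies the model matrix first,
// then the view matrix, i.e. it builds the "model-view" matrix.
function mat4Mul(a, b) {
  const out = new Array(16).fill(0);
  for (let col = 0; col < 4; col++) {
    for (let row = 0; row < 4; row++) {
      for (let k = 0; k < 4; k++) {
        out[col * 4 + row] += a[k * 4 + row] * b[col * 4 + k];
      }
    }
  }
  return out;
}

// Apply a column-major 4x4 matrix to an [x, y, z, w] vector.
function transformVec4(m, v) {
  const out = [0, 0, 0, 0];
  for (let row = 0; row < 4; row++) {
    for (let col = 0; col < 4; col++) {
      out[row] += m[col * 4 + row] * v[col];
    }
  }
  return out;
}

// A model matrix translating by (1, 2, 3): in column-major order the
// translation sits in elements 12..14.
const model = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 2, 3, 1];
const identityView = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1];
const modelview = mat4Mul(identityView, model);
const origin = transformVec4(modelview, [0, 0, 0, 1]);
// origin is now [1, 2, 3, 1]: the point moved by the model translation
```

glmatrix.mat4.mul(modelview, view, model) performs the same composition; spelling it out by hand just makes the column-major indexing explicit.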

One can see these matrices in use in the cubesea code in the stereo rendering example from webvr.info.

There’s a good chance our application has some concept of a model/view/projection matrix already. If not, we can pre-multiply our positions with the view matrix in our vertex shaders.
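If we go the shader route, the multiplication would look something like this sketch (the uniform and attribute names are hypothetical, not from any particular app):

```javascript
// Hypothetical vertex shader source (uniform/attribute names are mine),
// stored as a JavaScript string the way WebGL shader sources usually are.
const vertexShaderSource = `
  uniform mat4 uProjection; // frameData.{left,right}ProjectionMatrix
  uniform mat4 uView;       // frameData.{left,right}ViewMatrix
  attribute vec4 aPosition; // vertex position, already in world space
  void main() {
    // pre-multiply by the view matrix, then project
    gl_Position = uProjection * uView * aPosition;
  }
`;
```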

So now our code will look something like this:

// entry point for non-WebVR rendering
// called by whatever mechanism (likely keyboard/mouse events)
// we used before to trigger redraws
function render() {
    let gl = canvas.getContext("webgl");
    // set clearColor and call gl.clear()
    let projection = /*
        calculate projection using something
        like glmatrix.mat4.perspective()
        (we should be doing this already in the normal WebGL app)
    */;
    let view = /*
        use our view matrix if we have one,
        or an identity matrix
    */;
    renderSceneOnce(projection, view);
}

function renderEye(isLeft) {
    // choose which half of the canvas to draw on
    let projection, view;
    let frameData = new VRFrameData();
    vrDisplay.getFrameData(frameData);
    if (isLeft) {
        gl.viewport(0, 0, width / 2, height);
        projection = frameData.leftProjectionMatrix;
        view = frameData.leftViewMatrix;
    } else {
        gl.viewport(width / 2, 0, width / 2, height);
        projection = frameData.rightProjectionMatrix;
        view = frameData.rightViewMatrix;
    }
    renderSceneOnce(projection, view);
}

function renderSceneOnce(projection, view) {
    let model = /* obtain model matrix if we have one */;
    let modelview = glmatrix.mat4.create();
    glmatrix.mat4.mul(modelview, view, model);

    gl.bindBuffer(/* .. */);
    // ...

    let uProjection = gl.getUniformLocation(program, "uProjection");
    let uModelView = gl.getUniformLocation(program, "uModelview");
    gl.uniformMatrix4fv(uProjection, false, projection);
    gl.uniformMatrix4fv(uModelView, false, modelview);
    // set more parameters
    // run gl.drawElements()
}

This should be it! Moving your head around should now trigger movement in the scene to match it! You can see the code at work in this demo app that takes a spinning triangle WebGL application and turns it into a WebVR-capable triangle-viewing application using the techniques from this blog post.

If we had further input we might need to use the Gamepad API to design a good VR interface that works with typical VR controllers, but that’s out of scope for this post.
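That said, the polling side of the Gamepad API is small enough to sketch (the helper name is mine): navigator.getGamepads() returns a snapshot array that may contain null slots, which we filter before checking buttons.

```javascript
// Return the indices of gamepads that currently have any button pressed.
// `pads` is the snapshot array from navigator.getGamepads(), which may
// contain null holes for disconnected controller slots.
function padsWithPressedButtons(pads) {
  const active = [];
  pads.forEach((pad, i) => {
    if (pad && pad.buttons.some(button => button.pressed)) {
      active.push(i);
    }
  });
  return active;
}

// In the browser, you would poll once per rendered frame:
//   const active = padsWithPressedButtons(navigator.getGamepads());
```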

The post Converting a WebGL application to WebVR appeared first on Mozilla Hacks - the Web developer blog.

This Week In RustThis Week in Rust 251

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is cargo-src, a Rust source browser with syntax highlighting, jump to def, smart search and much more. Thanks to mark-i-m for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

137 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs

New RFCs

Upcoming Events

North America
South America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Bare Metal Attracts Rust

Sven Gregori on Hackaday.

Thanks to llogiq for the suggestion!

Please submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and Flavsditz.

Discuss on r/rust.

Ryan KellySecurity Bugs in Practice: SSRF via Request Splitting

One of the most interesting (and sometimes scary!) parts of my job at Mozilla is dealing with security bugs. We don’t always ship perfect code – nobody does – but I’m privileged to work with a great team of engineers and security folks who know how to deal effectively with security issues when they arise. I’m also privileged to be able to work in the open, and I want to start taking more advantage of that to share some of my experiences.

One of the best ways to learn how to write more secure code is to get experience watching code fail in practice. With that in mind, I’m planning to write about some of the security-bug stories that I’ve been involved in during my time at Mozilla. Let’s start with a recent one: Bug 1447452, in which some mishandling of unicode characters by the Firefox Accounts API server could have allowed an attacker to make arbitrary requests to its backend data store.
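To give a flavor of the bug class before you read the full writeup (this is my generic illustration, not the actual Firefox Accounts code): if an HTTP client serializes the request path by keeping only the low byte of each character, code points such as U+010D and U+010A collapse to the control bytes CR and LF, so attacker-supplied path data can terminate the request line and smuggle an extra line into the request sent to a backend:

```javascript
// Hypothetical helper illustrating the flawed serialization: keep only
// the low byte of each character, as some HTTP clients once did.
function naiveLatin1Encode(str) {
  return Array.from(str, ch => ch.charCodeAt(0) & 0xff);
}

// An attacker-supplied path using U+010D (č) and U+010A (Ċ)
const evilPath = "/legit\u010D\u010ASECRET-COMMAND";
const bytes = naiveLatin1Encode(evilPath);

// After truncation the high code points collapse to CR (0x0D) and
// LF (0x0A), so the "path" now contains a raw HTTP line break:
const onTheWire = String.fromCharCode(...bytes);
// onTheWire === "/legit\r\nSECRET-COMMAND"
```

The defense is equally simple in principle: reject or percent-encode non-ASCII characters before serializing the request, rather than silently truncating them.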

Mozilla Open Innovation TeamWe’re intentionally designing open experiences, here’s why.

At Mozilla, our Open Innovation team is driven by the guiding principle of being Open by Design. We are intentionally designing how we work with external collaborators and contributors — both at the individual and organizational level — for the greatest impact and shared value. This includes foundational strategic questions from business objectives to licensing through to overall project governance. But importantly, it also applies to how we design experiences for our communities. Including how we think about creating interactions, from onboarding to contribution.

Human-centered design is shaping our approach to designing interactions, from onboarding to contribution.

In a series of articles we will share deeper insight as to why, and how, we’re applying experience design practices throughout our open innovation projects. It is our goal in sharing these learnings that further Open Source projects and experiments may benefit from their application and a holistic Service Design approach. As a relevant example, throughout the series, we’ll point often to the Common Voice project where we’ve enabled these practices from its inception.

Starting with a Question

What is now Common Voice, a multi-language voice collection experience, started merely as an identified need. Since early 2016, Mozilla’s Machine Learning Group has been working on an Open Source speech recognition engine and model, project “Deep Speech”. High-quality speech-to-text engines require thousands of hours of voice data to train them, but publicly available voice data is very limited and the cost of commercial datasets is exorbitant. This prompted the question: how might we collect large quantities of voice data for Open Source machine learning?

Mozilla community members participate in ideation exercises during the Taipei design sprint.

We hypothesized that creating an Open Source voice dataset could lead to more diverse and accurate machine learning capabilities. But how to do this? The best way to ideate and capture multiple potential solutions is to leverage some additional minds and organize a design sprint. In the case of Common Voice our team gathered in Taipei to lead a group of Mozilla community members through various design thinking exercises. Multiple ideas emerged around crowdsourcing voice data and ultimately resulted in testable paper prototypes.

Mobile app focused paper prototypes in their first iteration.

Engaging with Actual Humans

At this point we could have gone immediately to a build phase, and in the past we might have. However, we chose to pursue further human interaction by engaging people for in-person feedback. The purpose of this human-centered research was both to understand which ideas resonated with people and to narrow in on which design concepts we should move forward with. Our test audience consisted of the people we hoped to ultimately engage with our data collection efforts: everyday internet citizens. We tested concepts by taking to the streets of Taipei and using guerrilla research methods. These concepts were quite varied and included everything from a voice-only dating app to a simple sentence read-back mechanism.

Guerrilla research with people passing by on the streets of Taipei.

We went into this research phase fully expecting the more robust app concepts to win out. Our strongly held belief was that people wanted to be entertained or needed an ulterior motive in order to facilitate this level of voice data collection. What resulted was surprisingly intriguing (and heartening): it was the experience of voice donation itself that resonated most with people. Instead of using a shiny app that collects data as a side-effect to its main features, people were more interested in the voice data problem itself and wanted to help. People desired to understand more about why we were doing this type of voice collection at all. This research showed us that our initial assumptions about the need to build an app were wrong. Our team had to let go of their first ideas in order to make way for something more human-centered, resonant and effective.

This is why we built Common Voice. To tell the story of voice data and how it relates to the need for diversity and inclusivity in speech technology. To better enable this storytelling, we created a robot that users on our website would “teach” to understand human speech by speaking to it through reading sentences. This interaction model has proved effective and has already evolved significantly. The robot is still a mainstay, but the focus has shifted. True to experience design practices, we are consistently iterating, currently with a focus on building the largest multi-language voice dataset to date.

The initial iteration of the Common Voice website, putting learning into action.

Next up…

As we continue our series we’ll break down the subsequent phases of our Common Voice work. Highlighting where we put into action our experience design practice of prototyping with intention. We’ll take learnings from the human interaction research and walk through how the project has moved from an early MVP prototype to its current multi-language contribution model, all with the help of our brilliant communities.

If you’d like to learn more in the meantime, share thoughts or news about your projects, please reach out to the Mozilla Open Innovation team at

We’re intentionally designing open experiences, here’s why. was originally published in Mozilla Open Innovation on Medium, where people are continuing the conversation by highlighting and responding to this story.

Hacks.Mozilla.OrgNew API to Bring Augmented Reality to the Web

We’re entering a new phase of work on JavaScript APIs here at Mozilla, that will help everyone create and share virtual reality (VR) and augmented reality (AR) projects on the open web.

As you might know, we formally launched this work last year with the release of Firefox desktop support for the WebVR 1.1 API. Using that draft API, early adopters like WITHIN were able to distribute 3D experiences on the web and have them work well on a range of devices, from mobile phones and cardboard viewers to full-fledged, immersive VR headsets.

AR demo app on iOS

The Expansion of WebVR

WebVR has been instrumental in democratizing VR, so more people can experience 3D content without expensive headsets. It’s also been a huge time-saver for content creators, who need to test and verify that their work renders well on every viewing platform. Having a stable API to work with means 3D content can find a wider audience, and it cuts down on the rework creators have to do to deliver great web experiences to a range of devices.

Mozilla has been pushing the boundaries of VR in the browser, getting people together across the industry to support a standard way of rendering 3D content. That work has created a fast lane for artists and programmers to share web-based VR experiences with a growing user base. And with WebVR support in browsers like Firefox, we’ve started the work of liberating VR and AR content from silos and headset stores, and making them accessible on the open web.

The Promise of Mixed Reality

Mixed Reality is going to be a powerful platform, bringing highly engaging and emotionally evocative immersive content to the web. Like any new creative medium, we want it to be widely accessible, so curious viewers can experience the next generation of digital media without having to shell out hundreds of dollars for a high-end viewer.

Today, the industry is taking another step toward these goals. We have ambitions to broaden the number of platforms and devices that can display VR and AR content. For instance, the camera on most mobile phones can be used to overlay information on physical reality – if it has a set of instructions on how to do that.

Experimentation continues with a new JavaScript API called the WebXR Device API. We expect this specification will replace WebVR in time and offer a smooth path forward for folks using WebVR today.

What’s New in WebXR

The new WebXR Device API has two new goals that differentiate it from WebVR. They are:

  • To support a wider variety of user inputs, such as voice and gestures, giving users options for navigating and interacting in virtual spaces
  • To establish a technical foundation for development of AR experiences, letting creators integrate real-world media with contextual overlays that elevate the experience.

You can find details about WebXR Device API by visiting the Immersive Web Community Group. We expect that many of the same crew that worked on WebVR – talented engineers from Mozilla, Google, Samsung, Amazon and other companies – will continue to work on the WebXR Device API, along with new contributors like Magic Leap.

AR Comes to the Web

AR and VR both are at the cutting edge of creative expression. Some museums offer AR experiences to give depth and context to exhibits. Other projects include educational content, from geology lessons to what it’s like to walk the streets in war-torn Syria.

What can augmented reality do on the web? Already there are examples that demonstrate powerful use cases. For instance, want to know how that new sofa will fit in your living room, before you buy it? Or how an espresso machine would look in your kitchen? Augmented reality can make online shopping a more sensory experience, so you can test-drive new products in your home in a way that preserves size and scale. It’s a great complement to online shopping, especially as companies start offering online visualizations of physical products.

Mozilla has some key tenets for how we’d like this next-generation media to work on behalf of users.

  • We want to ensure user privacy. You shouldn’t have to give an art store website access to pictures of your home and everything in it in order to see how a poster would look on your wall.
  • We want to make AR and VR accessible to the widest possible audience. We’re committed to removing barriers for people.
  • We want to help creators make content that works on all devices, so users can access mixed reality experiences with the device they have, or want to use.
  • We want to enable the long tail of creators, not just big studios and well-known brands. Everyone who wants to should be able to augment the world, not just those who can get an app into a store.

The WebXR community is working on draft specifications that target some of the constraints of today’s wireless devices. For instance, creating a skybox setting you can use to change the background image of a web page. We’re also working on a way to expose the world-sensing capabilities of early AR platforms to the web, so developers can determine where surfaces are without needing to run complex computer vision code on a battery-powered device.

Support in Firefox

We’re proud that Firefox supports WebVR today, so people can use current technology while we’re working to implement the next-generation specification. We have begun work to add WebXR support to Firefox. An early implementation will be available in Firefox Nightly in the coming months, so developers and early adopters can turn it on and give it a test-drive.

Some parts of the WebXR specification are still in motion. Rather than waiting for a final version of the spec, we’re going to move forward with what we have now and adjust to any changes along the way. The roadmap for the upcoming Firefox Reality browser will be similar to the Firefox desktop version, with initial support for immersive browsing using WebVR, and WebXR support to follow.

In time, we plan to support WebXR everywhere that we support WebVR currently, including Windows, Linux, macOS, and Android/GeckoView platforms. We will continue supporting WebVR until most popular sites and engines have completed the transition to WebXR. Want more technical details? Check out this WebXR explainer.

Today’s AR Experiments

If you can’t wait to dive into augmented reality, here’s something you can try today: Mozilla’s WebXR Viewer for iOS. It’s a way you can get a glimpse of the future right on your iPhone (6s or newer) or iPad. To be clear: this app is an experiment based on a proposed interim API we created last year. We are currently converting it to use the WebXR Device API.

We created this app as a way to experiment with AR and to find out how easy it was to get it working on iOS using Apple’s ARKit.  If you want to have a look at the code for the iOS app, it’s posted on GitHub. For Android users, Google has a similar experiment going with early support for the immersive web.

Want to keep up with the progress of WebXR and the new WebXR Device API? Follow @mozillareality on twitter, or subscribe to the Mozilla Mixed Reality blog for our weekly roundup of XR news.

The post New API to Bring Augmented Reality to the Web appeared first on Mozilla Hacks - the Web developer blog.

The Mozilla BlogFast Company Innovation by Design Award for Common Voice

Today Common Voice — our crowdsourcing-initiative for an open and publicly available voice dataset that anyone can use to train speech-enabled applications — was honored as a Finalist in the Experimental category in Fast Company’s 2018 Innovation by Design Awards.

Fast Company states that Innovation by Design is the only competition to honor creative work at the intersection of design, business, and innovation. 

The awards, which can be found in the October 2018 issue of Fast Company, on stands September 18th, recognize people, teams, and companies solving problems through design. After spending a year researching and reviewing applicants, Fast Company is honoring an influential and diverse group of 398 leaders in fashion, architecture, graphic design and data visualization, social good, user experience, and more. To see the complete list go to:

“The future of design is about more than coddling users,” says Stephanie Mehta, editor-in-chief of Fast Company. “It’s about giving them power over their technology.” We at Mozilla couldn’t agree more. And not only in terms of how people use technology, but also how and for whom it is developed. The recognized Common Voice experience didn’t “just happen” by chance. From the very beginning, the team behind the Open Innovation project has been diligent about bringing in additional minds and perspectives from everyday users and experts alike, testing and revising prototypes, all while challenging initial, strongly held assumptions. Another visible result of this is the ongoing, collaborative iteration of the project website and contribution methods with the project’s diverse communities.

For those interested in how human-centered research and design have shaped the direction of Common Voice, the Open Innovation Team has kicked off a series of articles to share learnings about applying Service Design to Open Source projects, including how communities and new experiments may benefit from, and engage with, this perspective. To read more, visit the Open Innovation Medium blog.

The Innovation by Design Award is the second distinction for the project: tech publication InfoWorld, along with Open Source software developer Black Duck, earlier named Common Voice one of the seven Open Source Rookies of the Year for 2018. Back then, Common Voice made the cut from an initial list of roughly 11,000 GitHub/Open Hub projects.

The post Fast Company Innovation by Design Award for Common Voice appeared first on The Mozilla Blog.

Daniel PocockAn FSFE Fellowship Representative's dilemma

The FSFE Fellowship representative role may appear trivial, but it is surprisingly complicated. What's best for FSFE, what is best for the fellows and what is best for free software are not always the same thing.

As outlined in my blog Who are/were the FSFE Fellowship?, fellows have generously donated over EUR 1,000,000 to FSFE and one member of the community recently bequeathed EUR 150,000. Fellows want to know that this money is spent well, even beyond their death.

FSFE promised them an elected representative, which may have given them great reassurance about the checks and balances in the organization. In practice, I feel that FSFE hasn't been sincere about this role and it is therefore my duty to make fellows aware of what representation means in practice right now.

This blog has been held back for some time in the hope that things at FSFE would improve. Alas, that is not the case and with the annual general meeting in Berlin only four weeks away, now is the time for the community to take an interest. As fellowship representative, I would like to invite members of the wider free software community to attend as guests of the fellowship and try to help FSFE regain legitimacy.

Born with a conflict of interest

According to the FSFE e.V. constitution, as it was before elections were abolished, the Fellows elected according to §6 become members of FSFE e.V.

Yet all the other fellows who voted, the people being represented, are not considered members of FSFE e.V. Sometimes it is possible to view all fellows together as a unit, a separate organization, The Fellowship. Sometimes not all fellows want the same thing and a representative has to view them each as individuals.

Any representative of this organization, The Fellowship and the individual fellows, has a strong ethical obligation to do what is best for The Fellowship and each fellow.

Yet as the constitution recognizes the representative as a member of FSFE e.V., some people have also argued that he/she should do what is best for FSFE e.V.

What happens when what is best for The Fellowship is not in alignment with what is best for FSFE e.V.?

It is also possible to imagine situations where doing what is best for FSFE e.V. and doing what is best for free software in general is not the same thing. In such a case the representative and other members may want to resign.

Censorship of the Fellowship representatives by FSFE management

On several occasions management argued that communications to fellows need to be ~~censored~~ adapted to help make money. For example, when discussing an email to be sent to all fellows in February about the risk of abolishing elections, the president warned:

"people might even stop to support us financially"

if they found out about the constitutional changes. He subsequently subjected the email to ~~censorship~~ modification by other people.

This was not a new theme: in a similar discussion in August 2017 about communications from the representatives, another senior member of the executive team had commented:

"It would be beneficial if our PR team could support in this, who have the experience from shaping communication in ways which support retention of our donors."

A few weeks later, on 20 March, FSFE's management distributed a new ~~censorship~~ communications policy, requiring future emails to prioritize FSFE's interests and mandating that all emails go through the ~~censors~~ PR team. As already explained, a representative has an ethical obligation to prioritize the interests of the people represented, The Fellowship, not FSFE's interests. The ~~censorship~~ communications policy appears deliberately incompatible with that obligation.

As the elected representative of a 1500-strong fellowship, it seems obscene that communications to the people represented are subject to censorship by the very staff the representative scrutinizes. The situation is even more ludicrous when the organization concerned claims to be an advocate of freedom.

This gets to the core of our differences: FSFE appeared to be hoping a representative would be a stooge, puppet or cheerleader whose existence might "support retention of ... donors". Personally, I never imagined myself like that. Given the generosity of fellows and the large amounts of time and money contributed to FSFE, I feel obliged to act as a genuine representative, ensuring money already donated is spent effectively on the desired objectives and ensuring that communications are accurate. FSFE management appear to hope their clever policy document will mute those ambitions.

Days later, on 25 March, FSFE management announced the extraordinary general meeting to be held in the staff office in Berlin, to confirm the constitutional change and as a bonus, try to abruptly terminate the last representative, myself. Were these sudden changes happening by coincidence, or rather, a nasty reprisal for February's email about constitutional changes? I had simply been trying to fulfill my ethical obligations to fellows and suddenly I had become persona non grata.

When I first saw this termination proposal in March, it really made me feel quite horrible. They were basically holding a gun to my head and planning a vote on whether to pull the trigger. For all purposes, it looked like gangster behavior happening right under my nose in a prominent free software organization.

Both the absurdity and hostility of these tactics were further underlined by taking this vote on my role behind my back on 26 May, while I was on a 10-day trip to the Balkans pursuing real free software activities in Albania and Kosovo, starting with OSCAL.

In the end, while the motion to abolish elections was passed and fellows may never get to vote again, only four of the official members of the association backed the abusive motion to knife me and that motion failed. Nonetheless, it left me feeling I would be reluctant to trust FSFE again. An organization that relies so heavily on the contributions of volunteers shouldn't even contemplate treating them, or their representatives, with such contempt. The motion should never have been on the agenda in the first place.

Bullet or boomerang?

In May, I thought I missed the bullet but it appears to be making another pass.

Some senior members of FSFE e.V. remain frustrated that a representative's ethical obligations can't be hacked with policy documents and other juvenile antics. They complain that telling fellows the truth is an act of treason and speaking up for fellows in a discussion is a form of obstruction. Both of these crimes are apparently grounds for reprisals, threats, character assassination and potentially expulsion.

In the most outrageous act of scapegoating, the president has even tried to suggest that I am responsible for the massive exodus from the fellowship examined in my previous blog. The chart clearly shows the exodus coincides with the attempt to force-migrate fellows to the supporter program, long after the date when I took up this role.

Senior members have sent me threats to throw me out of office, most recently the president himself, simply for observing the basic ethical responsibilities of a representative.

Leave your conscience at the door

With the annual general meeting in Berlin only four weeks away, the president is apparently trying to assemble a list of people to throw the last remaining representative out of the association completely. It feels like something out of a gangster movie. After all, altering and suppressing the results of elections and controlling the behavior of the candidates are the modus operandi of dictators and gangsters everywhere.

Will other members of the association exercise their own conscience and respect the commitment of representation that was made to the community? Or will they leave their conscience at the door and be the president's puppets, voting in a bloc as in many previous general meetings?

The free software ecosystem depends on the goodwill of volunteers and donors, a community that can trust our leaders and each other. If every free software organization behaved like this, free software wouldn't exist.

A president who conspires to surround himself with people who agree with him, appointing all his staff to be voting members of the FSFE e.V. and expelling his critics appears unlikely to get far promoting the organization's mission when he first encounters adults in the real world.

The conflict of interest in this role is not of my own making, it is inherent in FSFE's structure. If they do finally kill off the last representative, I'll wear it like a badge of honor, for putting the community first. After all, isn't that a representative's role?

As the essayist John Gardner wrote:

“The citizen can bring our political and governmental institutions back to life, make them responsive and accountable, and keep them honest. No one else can.”

Daniel Stenberglibcurl gets a URL API

libcurl has done internet transfers specified as URLs for a long time, but the URLs you'd tell libcurl to use would always just get parsed and used internally.

Applications that pass in URLs to libcurl would of course still very often need to parse URLs, create URLs or otherwise handle them, but libcurl has not been helping with that.

At the same time, the under-specification of URLs has led to a situation where there's really no stable document anywhere describing how URLs are supposed to work and basically every implementer is left to handle the WHATWG URL spec, RFC 3986 and the world in between all by themselves. Understanding how their URL parsing libraries, libcurl, other tools and their favorite browsers differ is complicated.

By offering applications access to libcurl's own URL parser, we hope to tighten a problematic vulnerable area for applications where the URL parser library would believe one thing and libcurl another. This could, and sometimes has, led to security problems. (See for example Exploiting URL Parser in Trending Programming Languages! by Orange Tsai)

Additionally, since libcurl deals with URLs and virtually every application using libcurl already does some amount of URL fiddling, it makes sense to offer it in the "same package". In the curl user survey 2018, more than 40% of the users said they'd use a URL API in libcurl if it had one.

Handle based

Create a handle, operate on the handle and then cleanup the handle when you're done with it. A pattern that is familiar to existing users of libcurl.

So first you just make the handle.

/* create a handle */
CURLU *h = curl_url();

Parse a URL

Give the handle a full URL.

/* "set" a URL in the handle; the URL here is just an example */
curl_url_set(h, CURLUPART_URL,
    "https://example.com/path?q=1", 0);

If the parser finds a problem with the given URL it returns an error code detailing the error.  The flags argument (the zero in the function call above) allows the user to tweak some parsing behaviors. It is a bitmask and all the bits are explained in the curl_url_set() man page.

A parsed URL gets split into its components, parts, and each such part can be individually retrieved or updated.

Get a URL part

Get a separate part from the URL by asking for it. This example gets the host name:

/* extract host from the URL */
char *host;
curl_url_get(h, CURLUPART_HOST, &host, 0);

/* use it, then free it */

As the example here shows, extracted parts must be specifically freed with curl_free() once the application is done with them.

curl_url_get() can extract all the parts from the handle by specifying the correct id in the second argument: scheme, user, password, port number and more. One of the "parts" it can extract is a bit special: CURLUPART_URL. It returns the full URL back (normalized and using proper syntax).

curl_url_get() also has a flags option to allow the application to specify certain behavior.

Set a URL part

/* set a URL part */
curl_url_set(h, CURLUPART_PATH,
  "/index.html", 0);

curl_url_set() lets the user set or update any and all of the individual parts of the URL.

curl_url_set() can also update the full URL, which also accepts a relative URL in case an existing one was already set. It will then apply the relative URL onto the former one and "transition" to the new absolute URL. Like this:

/* first an absolute URL (an example one) */
curl_url_set(h, CURLUPART_URL,
  "https://example.com/old/file.html", 0);

/* .. then we set a relative URL "on top" */
curl_url_set(h, CURLUPART_URL,
   "../new/place", 0);

Duplicate a handle

It might be convenient to set up a handle once and then make copies of that...

CURLU *n = curl_url_dup(h);

Cleanup the handle

When you're done working with this URL handle, free it and all its related resources.

curl_url_cleanup(h);

This API is marked as experimental for now and ships for the first time in libcurl 7.62.0 (October 31, 2018). I will happily read your feedback and comments on how it works for you, what's missing and what we should fix to make it even more usable for you and your applications!

We call it experimental to reserve the right to modify it slightly  going forward if necessary, and as soon as we remove that label the API will then be fixed and stay like that for the foreseeable future.

See also

The URL API wiki page.

Andy McKayMy fourth Gran Fondo

Yesterday was my fourth Gran Fondo; the last was in 2017.

The Fondo for me has become the signature event of the year. It's at the end of the season, as the days get shorter, the evenings draw closer and the rain starts to arrive. The ride provides a nice gauge for how well the training has gone and your level of fitness over the year. Also, since it's the same ride every year, you can compare your performance historically.

Last year I dropped 17 minutes off my time and that made me really happy. I finished last year's post by saying "I'm going to get below 4hr 30min next year". So that was my goal.

Equipment-wise, I got a new bike, upgrading to a Cervélo S3. It's lighter, more aero, faster and has bigger gears. I love my S3. I also got a new computer, upgrading to a Wahoo to get cadence, heart rate and speed monitoring.

My riding patterns changed too, partly due to a change in jobs from Mozilla to GitHub. That meant I had no office to go to and no daily commute.

          2017 (up to Fondo)   2018 (up to Fondo)
Time      243h                 232h 16m
Distance  5,050km              5,279.3km
Rides     198                  136

So in 2018, I spent less time on the bike, but went farther and did longer average rides. There was less time grinding out a commute to and from the office, which, whilst good exercise, might not have been good training.

I did Mount Seymour 7 times (only made it 3 times last year). I managed quite a few 100km plus rides: including the 160km Tour De Victoria race, the challenge route of the Ride to Conquer Cancer, 150km up the Sunshine Coast to Savary Island - and 173km back, 132km up to Whistler and then back.

The theory was that those long rides would help, if I can do a race lasting 160km, perhaps a 122km race won't seem so bad. Also this year I moved up a group at Steed and rode regularly with the group 3 riders, instead of the group 4. That meant longer rides with less stopping.

Over the winter, I took up running and did my first 10km run and that helped me keep my weight under control and hopefully keep some level of cardio ability over the winter so I didn't have to start all over again.

So yesterday came, and I had spent a few days beforehand excited and not sleeping too well. I realised that I was looking forward to and excited by this ride. Given that my weight loss hadn't been as much as I wanted, and given some of my recent times, I wasn't expecting to hit my goal of under 4h 30m. I fully expected to be similar to last year, around 4h 45m.

This year I worked my way into the 4.5 hour area instead of starting further back. Turns out this made a difference as I was able to get into a group of people going similar paces and that really helped. It set a good pace and let me do some good drafting.

Weather was mixed on the first half: some rain, including a big downpour at Porteau Cove and rain in Squamish. But I made good time on the first half. This year, instead of focusing on hitting times at checkpoints, I focused on two statistics: my heart rate and my average speed. Last year's 4h 45m meant a speed of 25km/hr. Knowing that the hills are in the second half of the course, I tried to get my average speed high but keep my heart rate low so I had something in the tank.

There's one of the few long flats around Squamish and I got in behind someone pulling along at 38km/hr. That got my average up to 30km/hr and kept my heart rate under control.

Weather improved after Squamish and it was dry from then on. My challenge was to take the hills while keeping my average up as it started dropping. But I was determined to keep it above 25 km/hr. Around Daisy Lake, I got a good draft for a while at a 35km/hr pace, but had to keep racing hard to hold that speed. You can see the effect this had on my heart rate as it starts to climb.

The result? I ended up crossing at 4h 19m. That's 25 minutes faster than a younger version of me. I was shocked, surprised and so happy and incoherent as I crossed the finish line.

Can't believe I did it.

I've got no idea what goal to set for next year; yes, I've signed up again.

Daniel PocockWho are/were the FSFE Fellowship? Starting Fellowship 2.0?

Since the FSFE Fellowship elected me as representative in April 2017, I've received a lot of questions from fellows and the wider community about what the Fellowship actually is. As representative, it is part of my role to help ensure that fellows are adequately informed and I hope to work towards that with this blog.

The FSFE Fellowship was started in 2005 and has grown over the years.

In 2009, around the time the Fellowship elections commenced, Georg Greve, FSFE's founder, commented:

The Fellowship is an activity of FSFE, and indeed one of the primary ways to get involved in the organisation. It is a place for community action, collaboration, communication, fun, and recruitment that also helps fund the other activities of FSFE, for example, the political work.

Later in 2009, articles appeared in places like Linux Pro Magazine promising

From November 2009, the Free Software Foundation Europe will be offering three free Fellowships each month to open source activists.

In May 2018, when Fellowship elections were abolished by a group of nine people, mainly staff, meeting in Berlin, a small news item was put out on a Saturday, largely unnoticed by the community, arguing that fellows have no right to vote because

the community would never accept similar representation for corporate donors [and] it is inappropriate to have such representation for any purely financial contributor.

How can long-standing FSFE members responsible for "community action, collaboration, communication, fun, and recruitment" be mistaken for a "purely financial contributor"? If open source activists were given free Fellowships, how can they be even remotely compared to a "corporate donor" at all? How can FSFE so easily forget all the effort fellows put in over the years?

The minutes show just one vote to keep democracy.

I considered resigning from the role but I sincerely hope that spending more time in the role might help some remaining Fellows.

Financial contributions

Between 2009 and 2016, fellows gave over EUR 1,000,000 to FSFE. Some are asking what they got in return: the financial reports use just six broad categories to show how EUR 473,595 was spent in 2016. One person asked whether, if FSFE only produced EUR 37,464 worth of t-shirts and stickers, the rest of the budget is just overhead costs. At the very least, better public reporting is required. The budget shows that salaries are by far the biggest expense, with salaries, payroll overheads and office facilities making up almost all of the budget.

In 2016 one single donor bequeathed EUR 150,000 to FSFE. While the donor's name may legitimately be suppressed for privacy reasons, management refuse to confirm whether this person was a fellow, or to give the Fellowship representatives any information to ensure that the organization remains consistent with the philosophy it practiced at whatever time the will was written. For an organization that can so easily abandon its Fellowship and metamorphose into a corporate lobby group, it is easy to imagine that a donor who wrote a will five or ten years ago may not recognize the organization today.

With overall revenues (2016) of EUR 650,000 and fellows contributing less than thirty percent of that, management may feel they don't need to bother with fellows or elections any more and they can rely on corporate funding in future. How easy it is to forget the contributions of individual donors and volunteers who helped FSFE reach the point they are in today.

Force-migration to the supporter program

Ultimately, as people have pointed out, the Fellowship has been a sinking ship. Membership was growing consistently for eight months after the community elected me but went into reverse from about December 2017 when fellows were force-migrated to the supporter program. Fellows have a choice of many free software organizations to contribute their time, skill and donations to and many fellows were prompted to re-evaluate after the Fellowship changes. Naturally, I have been contemplating the same possibilities.

Many fellows had included their status as an FSFE Fellow in their email signature and business card. When speaking at conferences, many fellows have chosen to be introduced as an FSFE Fellow. Fellows tell me that they don't want to change their business card to say FSFE Supporter, it feels like a downgrade. Has FSFE made this change in a bubble and misjudged the community?

A very German organization

FSFE's stronghold is Germany, with 665 fellows, roughly half the Fellowship. With membership evaporating, maybe FSFE should give up trying to stretch into the rest of Europe and regroup at home. For example, in France, FSFE has only 42 fellows, that is one percent of the 4,000 members of April, the premier free software organization of the French-speaking world. FSFE's standing in other large countries like the UK (83), Italy (62), Netherlands (59) and Spain (65) is also very rudimentary.

Given my very basic level of German (somewhere between A1 and A2), I feel very privileged that a predominantly German community has chosen to vote for me as their representative.

Find your country in the data set.

FSFE beyond the fellowship

As the elections have been canceled, any members of the community who want to continue voting as a member of the FSFE association or attend the annual meeting, whether you were a fellow or not, are invited to ask the president to confirm your status as an FSFE member.

Fellowship 2.0?

Some people have asked whether the Fellowship should continue independently of FSFE.

It is clear that the fellows in Germany, Austria and Switzerland have the critical mass to set up viable associations of their own, for example, a Free Software Fellowship e.V. If German fellows did this, they could elect their own board and run their own bank account with revenues over EUR 100,000 per year just from the existing membership base.

Personally, I volunteered to act as a representative of fellows but not as the leader or founder of a new organization. An independent Fellowship could run its own bank account to collect donations and then divide funds between different organizations instead of sending it all to the central FSFE account. An arrangement like this could give fellows more leverage to demand transparency and accounting about campaign costs, just as a large corporate donor would. If you really want your money to go as far as possible and get the best results for free software, this is a very sensible approach and it will reward those organizations who have merit.

If other fellows want to convene a meeting to continue the Fellowship, please promote it through the FSFE mailing lists and events.

Concluding remarks

Volunteers are a large and crucial part of the free software movement. To avoid losing a community like the Fellowship, it is important to treat volunteers equally and fully engage them in decision making through elections and other means. I hope that this blog will help fellows understand who we are so we can make our own decisions about our future instead of having FSFE staff tell us who to be.

Download data used in this blog.

Mozilla Open Policy & Advocacy BlogEU copyright reform: the facts

On Wednesday 12 September, Members of the European Parliament will hold a crucial vote on new copyright rules that could fundamentally damage the internet in Europe. If adopted, the new rules will force online services to universally monitor and filter the content that users post online. Ahead of the vote, we wish to set the facts straight and explain exactly what these new rules will mean for openness and decentralisation in Europe.

FACT: The proposed new copyright rules will harm Europe’s open source community.

Mandatory upload filters and copyright licensing provisions in article 13 of the proposed law are unworkable for open source software firms like Mozilla and for the open source ecosystem generally. The obligations cover all forms of copyright-protected content, including software. Indeed, the cost and legal risk associated with these new rules would push smaller open source software developers out of Europe and threaten the code-sharing platforms (e.g. GitHub) on which they depend to innovate. The fluid nature of technology and software development means that any carve-outs — say for software development platforms — would still leave a risk-laden environment.

FACT: The proposed new copyright rules will negatively affect everyday users’ internet experiences.

When internet users want to share a witty meme online, or a home movie in which background music is audible, or even a photo of themselves wearing a t-shirt with an album cover printed on it, they may well find that their favourite online service blocks the content upload. Internet services of all sizes will be forced to implement automatic filtering technology,  likely suppressing anything that looks like it might be infringing copyright, irrespective of whether the user has a right or permission to use the content. Given the crucial role the internet plays in citizens’ everyday lives, the impact on creativity, communication, and free expression from such blanket filtering would be palpable.

FACT: The proposed new copyright rules will lead to direct surveillance of users’ activities online.​

Article 13 demands that online services build or buy specific technology to monitor and categorise each and every user upload. At a time when the EU is showing global leadership on privacy and data protection, it is deeply regrettable that lawmakers are nonetheless seeking to codify a regime that would compel service providers to monitor European internet users’ activity with even more vigour.

FACT: The proposed new copyright rules will negatively impact independent creators.

Article 13 will be used to restrict the freedom of expression and creative potential of independent artists who depend upon online services to directly reach their audience and bypass the rigidities and limitations of the commercial content industry. Sadly, the fight over this legislation has been construed as giant rightsholders versus giant online platforms. But in reality, the true victims will be creators and fans themselves. There’s a bitter irony at play: the directors, actors, songwriters, and artists who benefit from the viral sharing of their creations are now pitted against their fans, who in fact do some of the most efficient online marketing artists can hope for.

FACT: Smaller online services – and not giant platforms – will be hit hardest by the new rules

In addition to its impact on user experience, this law will have another, more insidious impact: it will further entrench the power of the biggest online platforms. Only a handful of the largest tech companies have the technical and financial means to operate the sprawling filtering systems that this law demands. Ironically, the companies at which this law is aimed are already filtering content — and so will have a competitive advantage vis-a-vis their smaller rivals and startups, who will need to invest heavily to comply with the law. In addition, the biggest platforms also have the resources and clout to mount legal defences when larger corporate rightsholders seek to suppress legal content. This is not an option for smaller players who will face a high-stakes game of legal risk.


We encourage anyone who shares these concerns to reach out to members of the European Parliament – you can call them directly via

The post EU copyright reform: the facts appeared first on Open Policy & Advocacy.

Ehsan AkhgariOn leveling the playing field and online tracking

(Please note that this post does not reflect Mozilla’s position or policies.)

Like many parts of our computing systems, some of the core parts of the Web platform weren’t designed with security in mind and, as a result, users are suffering to this day.  The web platform has tried to provide a secure sandboxed environment where users can run applications from untrusted sources without the fear of their devices or data being compromised.  But the fact is that if we were to design a second iteration of this platform from scratch, we would probably make vastly different choices when it comes to issues such as execution of third-party code, or persistence of global data exposed to third-parties.

Over the years, browsers have spent significant effort restricting what the third-parties present on the Web today can do.  However, these basic foundational problems have remained unsolved in most browsers.  As a result, third-parties have engaged in activities like collecting the user’s browsing history, personal data, information about their device, and so on, which is a subversion of the built-in protections that browsers provide to prevent the “straightforward” ways of getting this data from the third-party’s own website (aka, their own users).  Safari is the notable exception in at least the area of exposure of global data to third-parties.  I think they got the right defaults from the beginning, which was hugely advantageous for both Safari and the browser community at large — for the latter since it showed that the “holy grail” of exposing no global data to third-parties is achievable, not some far-into-the-future dream which will never happen.

What’s worse, the presence and actions of these third-parties is often hidden from the user.  Even when their presence is obvious (e.g. through a visible iframe) their appearance may give the impression that they’re inert until interacted with, which is far from what’s actually going on behind the scenes.  As a result, when the user uses a browser, they often have very little knowledge of the implications of any of the actions they’re taking while browsing, in terms of the presence of these third-parties.  After all, the browser interface has traditionally been designed around the concept of a safe sandboxed environment where the user can navigate from page to page freely (and the browser would intervene if something would go wrong by putting up a prompt).  The whole online tracking ecosystem is fundamentally incompatible with the basic UI principles of browser design IMO.  Not that the problem is on the browser design side.  🙂

One thing that has been interesting is the response of the industry to the norms enforced by the browser.  Safari’s privacy protections have been under attack many times (such as by Google and Criteo).  This pattern of circumvention of browser provider privacy protections shows a will to exceed the limits of doing what’s allowed.  It also demonstrates that the third-party side of the picture here is willing to enter an arms race.

But what about users in this picture?  Right now, they have very little power, if any at all.  In social and political sciences, power is defined as the ability to control or shape other people’s behavior.  Users need to have some ability to change the behavior of these third-parties, if we have any hopes of the Web improving.  There are many potential solutions one could think of, and some have been tried, but I think users could use more technical leverage here.  One problem is that most browsers have traditionally been on the side of the third-parties, not clamping down on the problematic practices hard enough, so the playing field is highly skewed for the benefit of these actors.

I think there is also an equity aspect to this.  Those with technical know-how typically learn enough to protect themselves by installing tracking protection extensions and using more privacy-friendly browsers.  But based on the public data available, we know the reach of these add-ons is quite tiny compared to the population of users on the Web.  Furthermore, the situation is astonishingly bad in Chrome-majority Android markets, where users often stick to the OS-provided browser, contractually required by Google, which currently has no plans to support extensions on mobile, even though competitors such as Firefox for Android and Yandex Browser (based on Chromium) have shown them viable for years.  So many users there are stuck with a browser that doesn’t even allow them to protect themselves, unless they seek out a secondary browser and know which one to pick.  The technical know-how required for this often correlates with aspects of the individual such as their family background, where they came from, their wealth and social class, and so on.  Privacy should really be considered a human right, irrespective of any of these factors.  To address this, we need protections that work out of the box, require no configuration, don’t get in the way of the user, and don’t put any burden on the user by assuming they will understand or care about the technical details of how online tracking works.

Safari has led the way here in the past few years with ITP, and Mozilla recently announced that Firefox will be changing its approach going forward as well.  We need other browsers to join us in this battle as well, and we need to engage on many fronts and try to win back our users’ privacy bit by bit.  When thinking about the future, one can look at browsers realigning themselves with the user’s privacy expectations as leveling the playing field between the user, the website and the third-party.  We may never find the perfect balance, but we can surely do better than the Web that we have on our hands so far.

Firefox NightlyDeveloper Tools support for Web Components in Firefox 63

Shadow DOM and Web Components are enabled by default in Firefox 63 and the Developer Tools are ready for them! If you are using Web Components in your project, or want to experiment, download Nightly and check out how we integrated these new technologies into the Inspector and Debugger 🙂

<template> elements, which are useful to create the internal Shadow DOM structure of a custom element, can now be inspected as you would inspect other types of nodes.
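As a quick reference, a custom element built around a <template> might look like this (the element and id names here are illustrative, not taken from the article):

```html
<!-- A <template> holding the internal Shadow DOM structure. -->
<template id="fancy-button-template">
  <style>
    button { border-radius: 4px; }
  </style>
  <button><slot></slot></button>
</template>

<script>
  // Stamp the template's content into an open shadow root.
  customElements.define('fancy-button', class extends HTMLElement {
    constructor() {
      super();
      const tpl = document.getElementById('fancy-button-template');
      this.attachShadow({ mode: 'open' })
          .appendChild(tpl.content.cloneNode(true));
    }
  });
</script>

<!-- "Click me" is light-DOM content that fills the <slot>. -->
<fancy-button>Click me</fancy-button>
```

Selecting the fancy-button element in the Inspector would then surface the pieces discussed below: the #shadow-root node, its mode, and the slotted content.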

The Shadow DOM inside an element can also be inspected. Look for a #shadow-root node in the Inspector, and note that the mode (open or closed) it was created with is indicated as well.

If your Shadow DOM contains slots, you can inspect those as well!

And as a nice bonus, if you click on the arrow icon on a slotted node, you will jump to the location of the original node:

And speaking of jumping, if you would like to jump from a custom element in the Inspector to its definition in the Debugger, you can do it by clicking the custom… badge beside the element:

Lastly, you can see how the CSS cascade affects the Shadow DOM, modify styles, inspect the layout, etc. in the CSS pane on the right side of the Inspector.

We hope that helps you with your Web Components work. As always, we are trying to improve the Developer Tools, and you can peek at what is coming next here. And if you find a bug or have suggestions or feedback, you are more than welcome to share them in DevTools’ Slack community or IRC channel.

Happy coding!

QMOFirefox 63 Beta 6 Testday, September 14th

Greetings Mozillians!

We are happy to let you know that Friday, September 14th, we are organizing the Firefox 63 Beta 6 Testday. We’ll be focusing our testing on the DevTools Doorhanger menu, Web Compatibility features, and PDF actions. We will also have fixed-bug verification and unconfirmed-bug triage ongoing.

Check out the detailed instructions via this etherpad.

No previous testing experience is required, so feel free to join us on #qa IRC channel where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better!

See you on Friday!

Hacks.Mozilla.OrgFirefox 62 – Tools Cool for School!

Hello there! It’s been six-odd weeks, and the march of progress continues to, uh… march… progressingly. That means we have a brand new Firefox to share, with an abundance of bug fixes, performance improvements, and (in particular) sweet developer tool treats! So tuck in your napkin and enjoy this tasting menu of some of what’s new in Firefox 62.

Shape Up Your Floats

A CSS shape around some grapes

CSS Shapes lets a floated element sculpt the flow of content around it, going beyond the classic rectangular bounding box we’ve been constrained to. For instance, in the screenshot and linked demo above, the text wraps to the shape of the grapes rather than the image’s rectangular border. There are properties for basic shapes all the way up to complex polygons. There are of course great docs on all of this, but Firefox 62 also includes new tooling to both inspect and visually manipulate CSS Shapes values.

You can learn more in Josh Marinacci’s post on the new CSS Shapes tooling from yesterday.

Variable Fonts Are Here!

Screenshot of the new font tool in the Firefox DevTools

No punny title, I’m just excited! OpenType Font Variations allow a single font file to contain multiple instances of the same font, encoding the differences between instances. In addition to being in one file, font creators can expose any number of variation axes that give developers fine-grained control on how a font is rendered. These can be standard variations like font weight (font weight 536 looks right? no problem!) or things that were never previously available via CSS (x-height! serif-size!). In addition to the candy-store possibilities for typography nerds, being able to serve a single file with multiple variants is a major page weight savings. Dan Callahan goes much deeper on the grooviness to be found and how Firefox makes it easy to tweak these new custom values.
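As a sketch of what this looks like in CSS (the font name and the custom axis tag here are hypothetical; wght is the registered weight axis):

```css
h1 {
  /* "MyVariableFont" is a placeholder for an actual variable font file. */
  font-family: "MyVariableFont", sans-serif;
  font-weight: 536;                   /* any intermediate weight the axis allows */
  font-variation-settings: "XHGT" 80; /* a custom axis, if the font exposes one */
}
```

Registered axes like weight map onto existing properties such as font-weight, while font-variation-settings gives low-level access to whatever axes the font creator chose to expose.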

Devtools Commands

The Developer Toolbar was an alternate command-line (REPL) input in the Firefox Developer Tools, separate from the Web Console. I say “was” because, as of Firefox 62, it has been removed. It was always a bit hard to find and not as well advertised as it could have been, but it did encapsulate some powerful commands. Most of these commands have been progressively migrated elsewhere in the DevTools, and this work wrapped up in Firefox 62, so we’ve removed the toolbar altogether.

One of the last commands to be migrated is screenshot, which is a power-user version of the “take a screenshot” button available in the devtools UI. The screenshot command is now available as :screenshot in the Web Console! For example, have you ever needed a high-res screenshot of a page for print? You can specify a higher pixel density for a screenshot via the command:

:screenshot --dpr 4

There are a bunch of other options as well, such as specifying output filenames, capture delays, and selector-cropped screenshots. Eric Meyer wrote a great primer on the power of :screenshot on his blog, and it will change your page capture game!
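A few of the other options documented on MDN at the time of writing, for illustration:

```
:screenshot --fullpage           capture the entire page, not just the viewport
:screenshot --delay 5            wait five seconds before capturing
:screenshot --selector ".card"   crop the capture to the first matching element
:screenshot --clipboard          copy the capture to the clipboard instead of saving a file
```

Options can be combined, so a delayed full-page capture at high pixel density is a single command.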

🌠 Did You Know: In addition to :screenshot, there are a bunch of other helpful commands and magic variables available from within the Web Console? You can learn about them on MDN Web Docs.

Mo’ Pixels, Mo’ Panels

Do you have a 4k monitor? Do your browser windows bathe in a wash of ample screen real-estate? Let your devtools stretch their legs with a new 3-column mode in the Page Inspector. You can now pop the CSS Rules view into its own column, to let you view style information and the excellent Grid tooling or Animations panel side-by-side.

<figcaption>The Three-Column View toggle can be found in the top-left of the Inspector side panel.</figcaption>

Streamlining MediaStream

If you’ve worked with WebRTC’s getUserMedia API, you may be familiar with a bit of branching logic when attaching a MediaStream object to a <video> or <audio> tag:

navigator.mediaDevices.getUserMedia({ audio: true, video: true })
  .then(function(stream) {
    if ("srcObject" in videoEl) {
      videoEl.srcObject = stream;
    } else {
      videoEl.src = URL.createObjectURL(stream);
    }
  });

It’s true that earlier support for WebRTC required the use of the URL API, but this was non-standard and is no longer necessary. Firefox 62 removes support for passing a MediaStream to createObjectURL, so be sure you’re using a proper capability check as above.

Why stop here?

I’ve shown you a glimpse of what’s new and exciting in Firefox 62, but there’s more to learn and love! Be sure to check out the product release notes for user-facing features, as well as a more complete list of developer-facing changes on MDN.

Happy building!

The post Firefox 62 – Tools Cool for School! appeared first on Mozilla Hacks - the Web developer blog.

Wladimir PalantKeybase: "Our browser extension subverts our encryption, but why should we care?"

Two days ago I decided to take a look at Keybase. Keybase does crypto, is open source and offers security bug bounties for relevant findings — just the perfect investigation subject for me. It didn’t take long for me to realize that their browser extension is deeply flawed, so I reported the issue to them via their bug bounty program. The response was rather… remarkable. It can be summed up as: “Yes, we know. But why should we care?” Turns out, this is a common response, see update at the bottom.

What is Keybase?

The self-description of Keybase emphasizes its secure end-to-end encryption (emphasis in original):

Imagine a Slack for the whole world, except end-to-end encrypted across all your devices. Or a Team Dropbox where the server can’t leak your files or be hacked.

So the app allows you to exchange messages or files with other people, with the encryption happening on the sender’s computer in such a way that decryption is only possible by the designated recipient. This app is available for both desktop and mobile platforms. And for desktop you get a bonus: you can install the Keybase browser extension. It will add a “Keybase Chat” button to people’s profiles on Facebook, Twitter, GitHub, Reddit or Hacker News. This button allows you to connect to people easily.

Clicking the button will open a chat window and allow you to enter a message directly in the browser. Only after that initial message is sent will the conversation be transferred to the Keybase app.

So what’s the issue?

The issue here is a very common one, merely a week ago I listed it as #6 in this article. The extension injects its user interface (the button and the chat window) into third-party websites, yet it fails to isolate it from these websites. So the first consequence is: the Keybase message you enter on Facebook is by no means private. Facebook’s JavaScript code can read it out as you type it in, so much for end-to-end encryption. This is quite contrary to the promise Keybase still makes on their Mozilla Add-ons and Chrome Web Store installation pages.

Don’t believe that Facebook would intentionally spy on you? Maybe not, but by now it is pretty common to log all of a user’s actions for “site optimization” purposes, and that of course includes anything entered into text fields. But in my opinion, that’s not even the worst issue.

A website could do more than passively spy on you. It could just as well instrument the Keybase user interface to send messages in your name, while making that user interface invisible so that you don’t notice anything. Why would Facebook want to do something like that? Not necessarily Facebook itself, but rather anybody who discovered a Cross-Site Scripting (XSS) vulnerability in one of the websites that Keybase integrates with. So if hundreds of people complain about you sending them spam messages via Keybase, it might be somebody exploiting the Keybase extension on your computer via an XSS vulnerability in Reddit. Have fun explaining that you didn’t do it, even though the messages were safely encrypted on your computer.

What does Keybase think about this?

According to Keybase, “this is all clearly described on the install page and is known.” In fact, close to the bottom of that page you find the following:

What if my browser is compromised?

The Keybase extension uses a compose box inside your browser. If you fear your browser or the social network site’s JavaScript has been compromised — say by another extension or even the social network acting fishy — then just compose the message inside the Keybase app directly. Or send a quick hello note through the extension and save the juicier private details for inside the app.

To me, this is thoroughly confusing. First of all, “browser is compromised” sounds more like malware to me. Trouble is, malware affecting the browser will affect the Keybase app just as well, so the advice makes no sense. But let’s say that it really is “the social network acting fishy”: how are you supposed to know? And is Facebook spying on you “fishy,” or just its usual self?

It’s not that this issue is unavoidable. Avoiding it is fairly easy, by isolating all of the extension’s user interface in an <iframe> element. This would prevent both the website and other extensions from accessing it. Disaster averted, nothing to see here. But according to Keybase:

there were technical reasons why iframes didn’t work, though I forget the details

I translate this as: “Using iframes required a slightly more complicated approach, so we couldn’t figure it out.” Also:

It’s such a minor feature for us, it’s not worth a fix.

I translate this as: “We will keep pushing this extension because it gets users to promote our app for free. But we don’t care enough to make it secure.”

And now?

The only advice I can give you: uninstall the Keybase browser extension ASAP. As to the app itself, it might be secure. But as experience shows, the claim “end-to-end encryption” doesn’t automatically translate into a secure implementation. Initially, I planned to take a closer look at the crypto in Keybase, to see whether I can find weaknesses in their implementation. But that’s off the table now.

Update (2018-09-10): After I wrote this, EdOverflow pointed out that he had a similar experience with Keybase in the past. He could demonstrate that the domain ownership validation approach used by Keybase is flawed, yet Keybase wasn’t really interested in fixing this issue. Why they don’t require their keybase.txt file to always be located within the .well-known/ directory is beyond me; it would solve the security issue here without any obvious downsides.

And then I also found this older vulnerability report on HackerOne about the Keybase extension opening up XSS issues on websites. The reporter recommended staying clear of innerHTML and using safe DOM methods instead, something that I have also been preaching for years. The response he received sounded very familiar:

There was some reason our extension developer decided against that approach, though he agrees it’s better in theory.

In other words: “We don’t know how to do it, but we’ll claim that we have a good reason instead of asking for help.”

Daniel StenbergDoH in curl

DNS-over-HTTPS (DoH) is being designed (it is not an RFC quite yet but very soon!) to allow internet clients to get increased privacy and security for their name resolves. I've previously explained the DNS-over-HTTPS functionality within Firefox that ships in Firefox 62 and I did a presentation about DoH and its future in curl at curl up 2018.

We are now introducing DoH support in curl. I hope this will not only allow users to start getting better privacy and security for their curl based internet transfers, but ideally this will also provide an additional debugging tool for DoH in other clients and servers.

Let's take a look at how we plan to let applications enable this when using libcurl and how libcurl has to work with this internally to glue things together.

How do I make my libcurl transfer use DoH?

There's a primary new option added, which is the "DoH URL". An application sets the CURLOPT_DOH_URL for a transfer, and then libcurl will use that service for resolving host names. Easy peasy. There should be nothing else in the transfer that changes or appears differently. It'll just resolve the host names over DoH instead of using the default resolver!

What about bootstrap, how does libcurl find the DoH server's host name?

Since the DoH URL itself typically is given using a host name, that first host name will be resolved using the normal resolver - or if you so desire, you can provide the IP address for that host name with the CURLOPT_RESOLVE option just like you can for any host name.

If done using the resolver, the resolved address will then be kept in libcurl's DNS cache for a short while and the DoH connection will be kept in the regular connection pool with the other connections, making subsequent DoH resolves on the same handle much faster.

How do I use this from the command line?

Tell curl which DoH URL to use with the new --doh-url command line option:

$ curl --doh-url https://doh.example.com/dns-query https://example.com/

How do I make my libcurl code use this?

/* placeholder URLs; substitute your DoH server and target of choice */
curl = curl_easy_init();
curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
curl_easy_setopt(curl, CURLOPT_DOH_URL, "https://doh.example.com/dns-query");
res = curl_easy_perform(curl);


Internally, libcurl itself creates two new easy handles that it adds to the existing multi handle, and they then perform two HTTP requests while the original transfer sits in the "waiting for name resolve" state. Once the DoH requests are completed, the original transfer's state can progress and continue on.

libcurl handles parallel transfers perfectly well already and by leveraging the already existing support for this, it was easy to add this new functionality and still work non-blocking and even event-based correctly depending on what libcurl API that is being used.

We had to add a new little special thing that makes libcurl handle the end of a transfer in a new way since there are now easy handles that are created and added to the multi handle entirely without the user's knowledge, so the code also needs to remove and delete those handles when they're done serving their purposes.

Was this hard to add to a 20 year old code base?

Actually, no. It was surprisingly easy, but then I've also worked on a few different client-side DoH implementations already so I had gotten myself a clear view of how I wanted the functionality to work plus the fact that I'm very familiar with the libcurl internals.

Plus, everything inside libcurl is already using non-blocking code and the multi interface paradigms so the foundation for adding parallel transfers like this was already in place.

The entire DoH patch for curl, including documentation and test cases, was a mere 1500 lines.


This is merged into the master branch in git and is planned to ship as part of the next release: 7.62.0 at the end of October 2018.

Chris H-CThe End of Firefox Windows XP Support

Firefox 62 has been released. Go give it a try!

At the same time, on the Extended Support Release channel, we released Firefox ESR 60.2 and stopped supporting Firefox ESR 52: the final version of Firefox with Windows XP support.

Now, we don’t publish all-channel user proportions grouped by operating system, but as part of the Firefox Public Data Report we do have data from the release channel back before we switched our XP users to the ESR channel. At the end of February 2016, XP users made up 12% of release Firefox. By the end of February 2017, XP users made up 8% of release Firefox.

If this trend continued without much change after we switched XP users to ESR, XP Firefox users would presently amount to about 2% of release users.

That’s millions of users we kept safe on the Internet despite running a nearly-17-year-old operating system whose last patch was over 4 years ago. That’s a year and a half of extra support for users who probably don’t feel they have much ability to protect themselves online.

It required effort, and it required devoting resources to supporting XP well after Microsoft stopped doing so. It meant we couldn’t do other things, since we were busy with XP.

I think we did a good thing for these users. I think we did the right thing for these users. And now we’re wishing these users the very best of luck.

…and that they please oh please upgrade so we can go on protecting them into the future.



Mozilla Security BlogWhy we need better tracking protection

Mozilla has recently announced a change in our approach to protecting users against tracking. This announcement came as a result of extensive research, both internally and externally, that shows that users are not in control of how their data is used online. In this post, I describe why we’ve chosen to pursue an approach that blocks tracking by default.

People are uncomfortable with the data collection that happens on the web. The actions we take on the web are deeply personal, and yet we have few options to understand and control the data collection that happens on the web. In fact, research has repeatedly shown that the majority of people dislike the collection of personal data for targeted advertising. They report that they find the data collection invasive, creepy, and scary.

The data collected by trackers can create real harm, including enabling divisive political advertising or shaping health insurance companies’ decisions. These are harms we can’t reasonably expect people to anticipate and take steps to avoid. As such, the web lacks an incentive mechanism for companies to compete on privacy.

Opt-in privacy protections have fallen short. Firefox has always offered a baseline set of protections and allowed people to opt into additional privacy features. In parallel, Mozilla worked with industry groups to develop meaningful privacy standards, such as Do Not Track.

These efforts have not been successful. Do Not Track has seen limited adoption by sites, and many of those that initially respected that signal have stopped honoring it. Industry opt-outs don’t always limit data collection and instead only forbid specific uses of the data; past research has shown that people don’t understand this. In addition, research has shown that people rarely take steps to change their default settings — our own data agrees.

Advanced tracking techniques reduce the effectiveness of traditional privacy controls. Many people take steps to protect themselves online, for example, by clearing their browser cookies. In response, some trackers have developed advanced tracking techniques that are able to identify you without the use of cookies. These include browser fingerprinting and the abuse of browser identity and security features for individual identification.

The impact of these techniques isn’t limited to the website that uses them; the linking of tracking identifiers through “cookie syncing” means that a single tracker which uses an invasive technique can share the information it uncovers with other trackers as well.

The features we’ve announced will significantly improve the status quo, but there’s more work to be done. Keep an eye out for future blog posts from us as we continue to improve Firefox’s protections.

The post Why we need better tracking protection appeared first on Mozilla Security Blog.

Hacks.Mozilla.OrgMake your web layouts bust out of the rectangle with the Firefox Shape Path Editor

The web doesn’t have to be boxy. Historically, every element in a page is rendered as a rectangle of some kind, but it doesn’t have to be this way. With CSS Shapes you can create web layouts every bit as stylish as print magazines, but with all of the advantages of the web.

CSS Shapes let your web designs break out of the rectangular grid. All of those classic magazine design elements like non-rectangular text flow and shaped images can be yours, for the low low price of using a new CSS standard. Text can flow, images can be rounded, even just a few non-parallel lines can make your site stand out and make your brand distinctive. Standing out is the biggest challenge most sites face today. Shapes can help!

Save The Trees mockup with leaf-shaped icon, and flowed lorem ipsum text

Image by Sara Soueidan

The Standard

The shape of your elements can be controlled with just two CSS properties: shape-outside and clip-path.

The shape-outside property changes the way content flows outside of a floated DOM element. It affects layout, not drawing. The clip-path property changes the clipping boundary of how the DOM element is drawn. It affects drawing, not layout.

clipping the image of a kitten into a circular shape

The clip-path and shape-outside properties.

Because these two properties are separate, you can use one, or both, or none — to get just exactly the effect you are looking for. The good news is that both of these use the same basic-shape syntax.

Want to clip your image into a circle? Just use clip-path: circle(50%). Want to make text wrap around your image as if it were a circle? Just use shape-outside: circle(50%). The shape syntax supports rectangles, circles, ellipses, and full polygons. Of course, manually positioning polygons by typing numbers is slow and painful. Fortunately, there is a better way.
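For the circle case, putting both properties together might look like this (the selector and sizes are illustrative):

```css
img.portrait {
  float: left;
  width: 200px;
  height: 200px;
  clip-path: circle(50%);     /* draw the image as a circle */
  shape-outside: circle(50%); /* flow the text around the same circle */
  shape-margin: 1em;          /* a little breathing room for the wrapped text */
}
```

Because the two properties are independent, dropping either line gives you just the clipped drawing or just the shaped text flow.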

The Shape Path Editor

With the Shape Path Editor in Firefox 62, you can visually edit the shape directly from the CSS inspector. Open your page in Firefox, and use Firefox Developer Tools to select the element whose shape you want to modify. Once you select the element there will be a little icon next to the shape-outside and clip-path properties if you have used one of them. If not, add shape-outside and clip-path to that element first. Click on that little icon to start the visual editor. Then you can directly manipulate the shape with your mouse.

Using the shape editor in Firefox Dev Tools

Image courtesy of placekitten, text courtesy of catipsum.

Open the Inspector and select the element you want to modify:

using the inspector to modify a kitten photo

Click the icon next to clip-path or shape-outside. If the element doesn’t have one of these properties, add it, then select it.

modifying the image element with the shape editor

Edit the clip path:

editing the clip path

Edit the outside shape:

editing the outside shape

Check out this live demo on glitch.


To learn more about how to use the CSS Shape Editor read the full documentation.

Progressive Enhancement

CSS shapes are here and they work today in most browsers, and most importantly they degrade gracefully. Readers with current browsers will get a beautiful experience and readers with non-compliant browsers will never know they are missing anything.

Kitten image with shape support; without support, the layout degrades progressively to the classic rectangle.

Stunning Examples

Here are just a few examples of the amazing layouts you can do with CSS Shapes:

Page layout text effects with clip-path:

Codepen by Mandy Michael called "Create"

via Mandy Michael

Plants and background effect using clip-path:

Minion using shape-outside:

Break out of the Box

Shapes on the web are here today, thanks to shape-outside and clip-path. Using the Firefox Shape Path Editor makes them even easier to use.

How will you make your website break out of the box? Let us know how you’re using Shapes.

The post Make your web layouts bust out of the rectangle with the Firefox Shape Path Editor appeared first on Mozilla Hacks - the Web developer blog.

The Mozilla BlogLatest Firefox Releases Available Today

The latest versions of Firefox for desktop, Android and iOS launched today. Since our last release update, we’ve been working on a couple improvements and laying the foundation for upcoming future releases. To get the details on what’s new with today’s release, check out the release notes.

In the coming months, we’ll unveil and share new features that help people feel safe while on the web, and worry less about who’s collecting their personal data. You can read more about it in our blog post where we talked about our approach to Anti-tracking.

Latest Firefox for iOS Updates Add Greater Personalization

Recently, we unveiled the latest features in Firefox for iOS to personalize your web experience.

Change your Firefox from Dark to Light

Now, in Firefox for iOS you have the ability to change your theme from dark to light just as easily as you can switch up the wallpaper on your phone. For some people, it might depend on the sites they visit, and for others it’s just a matter of preference. Whatever your choice, you can easily switch between dark and light themes either manually or automatically.

There are two ways to accomplish this. You can tap “Settings” in the menu panel, then tap “Display” and choose either Light or Dark, and you’re all set. Alternatively, you can have the theme change automatically by using the Automatic switch.

Search, Switch and Easily Manage Tabs

We’re making it much simpler to get to the content you want with several improvements to tabs in Firefox for iOS. You can now manage tab settings in a single view allowing you to make changes easily and quickly. Additionally, you’ll be able to search your open tabs and seamlessly switch between normal and private browsing.

Manage tab settings in a single view

Check out and download the latest version of Firefox Quantum available here. For the latest version of Firefox for iOS, visit the App Store.


The post Latest Firefox Releases Available Today appeared first on The Mozilla Blog.

Daniel Stenbergcurl 7.61.1 comes with only bug-fixes

Already at the time when we shipped the previous release, 7.61.0, I had decided I wanted to do a patch release next. We had some pretty serious HTTP/2 bugs in the pipe to get fixed and there were a bunch of other unresolved issues also awaiting treatment. Then I took off on vacation, and the HTTP/2 fixes took longer than expected to get on top of, so I subsequently decided that this would become a bug-fix-only release cycle. No features and no changes would be merged into master. So this is what eight weeks of only bug-fixes can look like.


the 176th release
0 changes
56 days (total: 7,419)

102 bug fixes (total: 4,640)
151 commits (total: 23,439)
0 new curl_easy_setopt() options (total: 258)

0 new curl command line option (total: 218)
46 contributors, 21 new (total: 1,787)
27 authors, 14 new (total: 612)
  1 security fix (total: 81)

Notable bug-fixes this cycle

Among the many small fixes that went in, I feel the following ones deserve a little extra highlighting...

NTLM password overflow via integer overflow

This latest security fix (CVE-2018-14618) is almost identical to an earlier one we fixed back in 2017 called CVE-2017-8816, and is just as silly...

The internal function Curl_ntlm_core_mk_nt_hash() takes a password argument, the same password that is passed to libcurl from an application. It then gets the length of that password and allocates a memory area that is twice the length, since it needs to expand the password. Due to a lack of checks, this calculation will overflow and wrap on a 32 bit machine if a password that is longer than 2 gigabytes is passed to this function. It will then lead to a very small memory allocation, followed by an attempt to write a very long password to that small memory buffer. A heap memory overflow.

Some mitigating details: most architectures support 64 bit size_t these days. Most applications won't allow passing in passwords that are two gigabytes.

This bug has been around since libcurl 7.15.4, released back in 2006!

Oh, and on the curl web site we now use the CVE number in the actual URL for all the security vulnerabilities to make them easier to find and refer to.

HTTP/2 issues

This was actually a whole set of small problems that together made the new crawler example not work very well - until fixed. I think it is safe to say that HTTP/2 users of libcurl have previously used it in a pretty "tidy" fashion, because I believe I corrected four or five separate issues that made it misbehave. It was rather pure luck that made it work as well as it has for past users!

Another HTTP/2 bug we ran into recently involved us discovering a little quirk in the underlying nghttp2 library, which in some very special circumstances would refuse to blank out the stream id to struct pointer mapping which would lead to it delivering a pointer to a stale (already freed) struct at a later point. This is fixed in nghttp2 now, shipped in its recent 1.33.0 release.

Windows send-buffer tuning

Making uploads on Windows between two and seven times faster than before is certainly almost like a dream come true. This is what 7.61.1 offers!

Upload buffer size increased

In tests triggered by the fix above, it was noticed that curl did not meet our performance expectations when doing uploads on really high speed networks, notably on localhost or when using SFTP. We could easily double the speed by just increasing the upload buffer size. Starting now, curl allocates the upload buffer on demand (since many transfers don't need it), and now allocates a 64KB buffer instead of the previous 16KB. It has been using 16KB since 2001, and with the on-demand setup and the fact that computer memories have grown a bit over those 17 years, I think it is well motivated.

A future curl version will surely allow the application to set this upload buffer size. The receive buffer size can already be set.

Darwinssl goes ALPN

While perhaps in the grey area of what a bugfix can be, this fix  allows curl to negotiate ALPN using the darwinssl backend, which by extension means that curl built to use darwinssl can now - finally - do HTTP/2 over HTTPS! Darwinssl is also known under the name Secure Transport, the native TLS library on macOS.

Note however that macOS' own curl builds that Apple ships are no longer built to use Secure Transport; they use LibreSSL these days.

The Auth Bearer fix

When we added support for Auth Bearer tokens in 7.61.0, we accidentally caused a regression that now is history. This bug seems to in particular have hit git users for some reason.

-OJ regression

The introduction of bold headers in 7.61.0 caused a regression which made a command line like "curl -O -J" to fail, even if a Content-Disposition: header with a correct file name was passed on.

Cookie order

Old readers of this blog may remember my ramblings on cookie sort order from back in the days when we worked on what eventually became RFC 6265.

Anyway, we never did take all aspects of that spec into account when we sort cookies on the HTTP headers sent off to servers, and it has very rarely caused users any grief. Still, now Daniel Gustafsson did a glorious job and tweaked the code to also take creation order into account, exactly like the spec says we should! There's still some gotchas in this, but at least it should be much closer to what the spec says and what some sites might assume a cookie-using client should do...

Unbold properly

Yet another regression. Remember how curl 7.61.0 introduced the cool bold headers in the terminal? Turns out I of course had my escape sequences done wrong, so in a large number of terminal programs the end-of-bold sequence ("CSI 21 m") that curl sent didn't actually switch off the bold style. This would lead to the terminal either getting all bold all the time or on some terminals getting funny colors etc.

In 7.61.1, curl sends the "switch off all styles" code ("CSI 0 m") that hopefully should work better for people!
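The difference between the two escape sequences can be sketched in a few lines (a Rust illustration, not curl's actual C code; the sequence names follow the ECMA-48 SGR convention):

```rust
fn main() {
    // "CSI 1 m" turns bold on. The buggy 7.61.0 sent "CSI 21 m" to turn
    // it off, which many terminals ignore; "CSI 0 m" resets all styles.
    let bold_on = "\x1b[1m";
    let reset_all = "\x1b[0m"; // what 7.61.1 sends
    let bold_off_only = "\x1b[21m"; // what 7.61.0 sent (unreliable)

    // A bold header followed by a proper reset:
    println!("{}HTTP/1.1 200 OK{}", bold_on, reset_all);

    // The two "off" sequences are genuinely different byte strings.
    assert_ne!(reset_all, bold_off_only);
}
```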

Next release!

We've held up a whole bunch of pull requests to ship this patch-only release. Once this is out the door, we'll open the flood gates and accept the nearly 10 changes that are eagerly waiting merge. Expect my next release blog post to mention several new things in curl!

David LawrenceHappy BMO Push Day!

the following changes have been pushed to

  • [602313] Allow creation of attachments by pasting an image from clipboard, as well as by drag-and-dropping a file from desktop
  • [1482475] Add extensive testing framework
  • [1480878] Monitor the health of PhabBugz connector job processing
  • [1473958] Update Thunderbird logo, replace Data Platform and Tools icon on easy product selector
  • [1482145] PhabBot changes are showing up as from the wrong user, and also sending email incorrectly (based on the wrong current user)

discuss these changes on

Hacks.Mozilla.OrgVariable Fonts Arrive in Firefox 62

Firefox 62, which lands in general release this week, adds support for Variable Fonts, an exciting new technology that makes it possible to create beautiful typography with a single font file. Variable fonts are now supported in all major browsers.

What are Variable Fonts?

Font families can have dozens of variations: different weights, expanded or condensed widths, italics, etc. Traditionally, each variant required its own separate font file, which meant that Web designers had to balance typographic nuance with pragmatic concerns around page weight and network performance.

Compared to traditional fonts, variable fonts contain additional data, which make it possible to generate different styles of the font on demand. For one example, consider Jost*, an open-source, Futura-inspired typeface from indestructible type*. Jost* comes in nine weights, each with regular and italic styles, for a total of eighteen files.

Screenshot of 18 traditional TTF files next to a single, variable TTF file that can replace the 18 other files.

Jost* also comes as a single variable font file which is able to generate not only those same eighteen variations, but also any intermediate weight at any degree of italicization.

Design Axes

Jost* is an example of a “two-axis” variable font: it can vary in both weight and italics. Variable fonts can have any number of axes, and each axis can control any aspect of the design. Weight is the most common axis, but typographers are free to invent their own.

Illustration of the various weights and italics settings for the Jost* font

One typeface that invented its own axis is Slovic. Slovic is a Cyrillic variable font with a single axis, style, that effectively varies history. At one extreme, characters are drawn similarly to how they appear in 9th century manuscripts, while at the other, they adopt modern sans-serif forms. In between are several intermediate styles. Variable font technology allows the design to adapt and morph smoothly across the entire range of the axis.

Illustration of the Slovic font's letterforms morphing with different values of the "style" variable font axis

The sky’s the limit! To see other examples of variable fonts, check out Axis Praxis.

Better Tools for Better Typography on the Web

Great features deserve great tools, and that’s why we’re hard at work building an all new Font Editor into the Firefox DevTools. Here’s a sneak peek:

You can find the Font Editor as a panel inside the Page Inspector in the Dev Tools. If you have enough space on your screen, it’s helpful to enable 3-pane mode so you can see the DOM tree, CSS Rules, and Font Editor all side-by-side.

When you click on an element in the DOM tree, the Font Editor updates to show information about the selected element’s font, as well as tools for editing its properties. The Font Editor works on all fonts, but really shines with variable ones. For instance, the weight control subtly changes from a stepped slider to a continuous one in the presence of a variable font with a weight axis.

A comparison of the DevTools Font Editor when inspecting a variable font versus a traditional font, showing how the variable font axes appear as continuous, smooth sliders, while the traditional font has toggles or stepped sliders to adjust things like italic or weight

Similarly, each design axis in a variable font gets its own widget in the editor, allowing you to directly customize the font’s appearance and immediately see the results on your page.

The new Font Editor will arrive with Firefox 63 in October, but you can use it today by downloading Firefox Nightly. Let us know what you think! Your feedback is an essential guide as we continue to build and refine Firefox’s design tools.

Editor’s note: Attention MacOS users — variable fonts require MacOS 10.13+

The post Variable Fonts Arrive in Firefox 62 appeared first on Mozilla Hacks - the Web developer blog.

The Mozilla BlogWelcome Alan Davidson, Mozilla’s new VP of Global Policy, Trust and Security

I’m excited to announce that Alan Davidson is joining us today as our new Vice President of Global Policy, Trust and Security.

At a time when people are questioning the impact of technology on their lives and looking for leadership from organizations like Mozilla, Alan will add considerable capacity to our public policy, trust and security efforts, drawing from his extensive professional history working to advance a free and open digital economy.

Alan will work closely with me to help scale and reinforce our policy, trust and security capabilities and impact. He will be responsible for leading Mozilla’s public policy work promoting an open Internet and a healthy web around the world. He will also supervise a trust and security team focused on promoting innovative privacy and security features that put people in control of their online lives.

“For over 15 years, Mozilla has been a driving force for a free and open Internet, building open source products with industry-leading privacy and security features. I am thrilled to be joining an organization so committed to putting the user first, and to making technology a force for good in people’s lives,” says Alan Davidson, Mozilla’s new Vice President of Global Policy, Trust and Security.

Alan is not new to Mozilla. He was a Mozilla Fellow for a year in 2017-2018. During his tenure with us, Alan worked on advancing policies and practices to support the nascent field of public interest technologists — the next generation of leaders with expertise in technology and public policy who we need to guide our society through coming challenges such as encryption, autonomous vehicles, blockchain, cybersecurity, and more.

“Alan was a tremendous asset to the Commerce Department in our groundbreaking work to promote a strong and prosperous digital economy for all Americans,” said Penny Pritzker, former United States Secretary of Commerce and the Chairman of PSP Capital. “I am sure he will be a terrific addition to Mozilla and its role as a leading voice for a free and open Internet around the world.” Until early 2017, Alan served as the first Director of Digital Economy at the U.S. Department of Commerce and a Senior Advisor to the Secretary of Commerce.

Alan joins Mozilla from his most recent engagements as Senior Program Fellow with New America in Washington D.C. and as a private consultant. Prior to joining the U.S. Department of Commerce, he was the director of New America’s Open Technology Institute. Prior to that, Alan opened and grew Google’s Washington D.C. office, and led the company’s public policy and government relations efforts in North and South America for seven years.

Join me in welcoming Alan to Mozilla!


The post Welcome Alan Davidson, Mozilla’s new VP of Global Policy, Trust and Security appeared first on The Mozilla Blog.

This Week In RustThis Week in Rust 250

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is cgroups, a native Rust library for managing control groups under Linux. Thanks to yoshuawuyts for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

109 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs

New RFCs

Upcoming Events

North America
South America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Zeitgeist of Rust: developing load bearing software that will survive us.

Bryan Cantrill on Youtube: "The Summer of Rust (1:08:10)".

Thanks to Matthieu M for the suggestion!

Please submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and Flavsditz.

Discuss on r/rust.

Niko MatsakisRust pattern: Iterating over an Rc&lt;Vec&gt;

This post examines a particular, seemingly simple problem: given ownership of a Rc<Vec<u32>>, can we write a function that returns an impl Iterator<Item = u32>? It turns out that this is a bit harder than it might at first appear – and, as we’ll see, for good reason. I’ll dig into what’s going on, how you can fix it, and how we might extend the language in the future to try and get past this challenge.

The goal

To set the scene, let’s take a look at a rather artificial function signature. For whatever reason, this function has to take ownership of an Rc<Vec<u32>> and it wants to return an impl Iterator<Item = u32>1 that iterates over that vector.

fn iterate(data: Rc<Vec<u32>>) -> impl Iterator<Item = u32> {
    ... // what we want to write!
}

(This post was inspired by a problem we hit in the NLL working group. The details of that problem were different – for example, the vector in question was not given as an argument but instead cloned from another location – but this post uses a simplified example so as to focus on interesting questions and not get lost in other details.)

First draft

The first thing to notice is that our function takes ownership of a Rc<Vec<u32>> – that is, a reference counted2 vector of integers. Presumably, this vector is reference counted because it is shared amongst many places.

The fact that we have ownership of a Rc<Vec<u32>> is precisely what makes our problem challenging. If the function were taking a Vec<u32>, it would be rather trivial to write: we could invoke data.into_iter() and be done with it (try it on play).

Alternatively, if the function took a borrowed vector of type &Vec<u32>, there would still be an easy solution. In that case, we couldn’t use into_iter, because that requires ownership of the vector. But we could write data.iter().cloned() – data.iter() gives us back references (&u32) and the cloned() adapter then “clones” them to give us back a u32 (try it on play).

But we have a Rc<Vec<u32>>, so what can we do? We can’t invoke into_iter, since that requires complete ownership of the vector, and we only have partial ownership (we share this same vector with whoever else has an Rc handle). So let’s try using .iter().cloned(), like we did with the shared reference:

// First draft
fn iterate(data: Rc<Vec<u32>>) -> impl Iterator<Item = u32> {

If you try that on playground, you’ll find you get this error:

error[E0597]: `data` would be dropped while still borrowed
 --> src/
  |
4 |     data.iter().cloned()
  |     ^^^^ borrowed value does not live long enough
5 | }
  | - borrowed value only lives until here
  |
  = note: borrowed value must be valid for the static lifetime...

This error is one of those frustrating error messages – it says exactly what the problem is, but it’s pretty hard to understand. (I’ve filed #53882 to improve it, though I’m not yet sure what I think it should say.) So let’s dig in to what is going on.

iter() borrows the collection it is iterating over

Fundamentally, the problem here is that when we invoke iter, it borrows the variable data to create a reference (of type &[u32]). That reference is then part of the iterator that is getting returned. The problem is that the memory that this reference refers to is owned by the iterate function, and when iterate returns, that memory will be freed. Therefore, the iterator we give back to the caller will refer to invalid memory.

If we kind of ‘inlined’ the iter call a bit, what’s going on would look like this:

fn iterate(data: Rc<Vec<u32>>) -> impl Iterator<Item = u32> {
    let iterator = Iterator::new(&data); // <-- call to iter() returns this
    let cloned_iterator = ClonedIterator::new(iterator); // <-- call to cloned()
    cloned_iterator
}

Here you can more clearly see that data is being borrowed in the first line.

drops in Rust are deterministic

Another crucial ingredient is that the local variable data will be “dropped” when iterate returns. “Dropping” a local variable means two things:

  • We run the destructor, if any, on the value within.
  • We free the memory on the stack where the local variable is stored.

Dropping in Rust proceeds at fixed points. data is a local variable, so – unless it was moved before that point – it will be dropped when we exit its scope. (In the case of temporary values, we use a set of syntactic rules to decide their scope.) In this case, data is a parameter to the function iterate, so it is going to be dropped when iterate returns.

Another key thing to understand is that the borrow checker does not “control” when drops happen – that is controlled entirely by the syntactic structure of the code.3 The borrow checker then comes after and looks to see what could go wrong if that code were executed. In this case, it sees that we have a reference to data that will be returned, but – during the lifetime of that reference – data will be dropped. That is bad, so it gives an error.

What is the fundamental problem here?

This is actually a bit of a tricky problem to fix. The problem here is that Rc<Vec<u32>> only has shared ownership of the Vec<u32> within – therefore, it does not offer any API that will return you a Vec<u32> value. You can only get back &Vec<u32> values – that is, references to the vector inside.

Furthermore, the references you get back will never be able to outlive the Rc<Vec<u32>> value they came from! That is, they will never be able to outlive data. The reason for this is simple: once data gets dropped, those references might be invalid.

So what all of this says is that we will never be able to return an iterator over data unless we can somehow transfer ownership of data back to our caller.

It is interesting to compare this example with the alternative signatures we looked at early on:

  • If iterate took a Vec<u32>, then it would have full ownership of the vector. It can use into_iter to transfer that ownership into an iterator and return the iterator. Therefore, ownership was given back to the caller.
  • If iterate took a &Vec<u32>, it never owned the vector to begin with! It can use iter to create an iterator that references into that vector. We can return that iterator to the caller without incident because the data it refers to is owned by the caller, not us.

How can we fix it?

As we just saw, to write this function we need to find some way to give ownership of data back to the caller, while still yielding up an iterator. One way to do it is by using a move closure, like so (playground):

fn iterate(data: Rc<Vec<u32>>) -> impl Iterator<Item = u32> {
    let len = data.len();
    (0..len).map(move |i| data[i])
}

So why does this work? In the first line, we just read out the length of the data vector – note that, in Rust, any vector stored in a Rc is also immutable (only a full owner can mutate a vector), so we know that this length can never change. Now that we have the length len, we can create an iterator 0..len over the integers from 0 to len. Then we can map from each index i to the data using data[i] – since the data inside is just an integer, it gets copied out.

In terms of ownership, the key point is that here the closure is taking ownership of data. The closure is then placed into the iterator, and the iterator is returned. So indeed ownership of the vector is passing back to the caller as part of the iterator.
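The solution above can be packaged into a self-contained program with a caller, to show that the Rc really does travel back inside the iterator (the second handle is my addition, to illustrate the shared ownership):

```rust
use std::rc::Rc;

fn iterate(data: Rc<Vec<u32>>) -> impl Iterator<Item = u32> {
    let len = data.len();
    // The `move` closure takes ownership of `data`, so the Rc travels
    // inside the returned iterator, back to the caller.
    (0..len).map(move |i| data[i])
}

fn main() {
    let shared = Rc::new(vec![10, 20, 30]);
    let other_handle = Rc::clone(&shared); // some other owner elsewhere

    let collected: Vec<u32> = iterate(shared).collect();
    assert_eq!(collected, vec![10, 20, 30]);

    // The second handle is still valid; the vector was never freed.
    assert_eq!(other_handle.len(), 3);
}
```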

What about if I don’t have integers?

You could use the same trick to return an iterator of any type, but you must be able to clone it. For example, you could iterate over strings (playground):

fn iterate(data: Rc<Vec<String>>) -> impl Iterator<Item = String> {
    let len = data.len();
    (0..len).map(move |i| data[i].clone())
}

Why is it important that we clone it? Why can’t we return references? This falls out from how the Iterator trait is designed. If you look at the definition of iterator, it states that it gives ownership of each item that it iterates over:

trait Iterator {
    type Item;
    fn next<'s>(&'s mut self) -> Option<Self::Item>;
    //           ^^ This would normally be written
    //           `&mut self`, but I'm giving the lifetime
    //           a name so I can refer to it below.
}
In particular, the next function borrows self only for the duration of the call to next. Self::Item, the return type, does not mention the lifetime 's of the self reference, so it cannot borrow from self. This means that I can write generic code where we extract an item, drop the iterator, and then go on using the item:

fn dump_first<I>(mut some_iter: impl Iterator<Item = I>)
where
    I: Debug,
{
    // Get an item from the iterator.
    let item = some_iter.next();
    // Drop the iterator early.
    std::mem::drop(some_iter);
    // Keep using the item.
    println!("{:?}", item);
}

Now, imagine what would happen if we permitted the closure to return move |i| &data[i] and we then passed the resulting iterator to dump_first:

  1. We would first extract a reference into data and store it in item.
  2. We would then drop the iterator, which in turn would drop data, potentially freeing the vector (if this is the last Rc handle).
  3. Finally, we would then go on to use item, which has a reference into the (now possibly freed) vector.

So, the lesson is: if you want to return an iterator over borrowed data, per the design of the Iterator trait, you must be iterating over a borrowed reference to begin with (i.e., iterate would need to take a &Rc<Vec<u32>>, &Vec<u32>, or &[u32]).
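A sketch of that borrowed alternative, where the function takes a slice and can therefore hand back references tied to the caller's data:

```rust
fn iterate<'a>(data: &'a [u32]) -> impl Iterator<Item = &'a u32> {
    // Borrowing to begin with: the returned references are tied to the
    // caller's data (lifetime 'a), not to this function's stack frame.
    data.iter()
}

fn main() {
    let v = vec![1, 2, 3];
    let first = iterate(&v).next().unwrap();
    // `v` is still owned by the caller, so the reference stays valid.
    assert_eq!(*first, 1);
}
```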

How could we extend the language to help here?

Self references

This is an interesting question. If we focus just on the original problem – that is, how to return an impl Iterator<Item = u32> – then the most obvious thing is the idea of extending the lifetime system to permit “self-references” – for example, it would be nice if you could have a struct that owns some data (e.g., our Rc<Vec<u32>>) and also had a reference into that data (e.g., the result of invoking iter). This might allow us a nicer way of writing the solution to our original problem (returning an impl Iterator<Item = u32>). In particular, what we effectively did in our solution was to use an integer as a kind of “reference” into the vector – each step, we index again. Since indexing is very cheap, this is fine for iterating over a vector, but it wouldn’t work with (say) a Rc<HashMap<K, V>>.

My personal hope is that once we wrap up work on the MIR borrow-checker (NLL) – and we are starting to get close! – we can start to think about self-references and how to model them in Rust. I’d like to transition to a Polonius-based system first, though.

Auxiliary values

Another possible direction that has been kicked around is having some way for a function to return data that its caller must store, which can then be referenced by the “real” return value. The idea would be that iterate would somehow “store” the Rc<Vec<u32>> into its caller’s stack frame, and then return an iterator over that. Ultimately, this is very similar to the “self-reference” concept: the difference is that, with self-references, iterate has to return one value that stores both the Rc<Vec<u32>> and the iterator over it. With this “store data in caller” approach, iterate would return just the iterator, but would specify that the iterator borrows from this other value (the Rc<Vec<u32>>) which is returned in a separate channel.

Interestingly, this idea of returning “auxiliary” values might permit us to return an iterator that gives back references – even though I said that was impossible, per the design of the Iterator trait. How could that work? Well, the problem fundamentally is that we want a signature like this, where the iterator yields up &T references:

fn iterate<T>(data: Rc<Vec<T>>) -> impl Iterator<Item = &T>

Right now, we can’t have this signature, because we have no lifetime to assign to the &T type. In particular, the answer to the question “where are those references borrowing from?” is that they are borrowing from the function iterate itself, which won’t work (as we’ve seen).

But if we had some “auxiliary” slot of data that we could fill and then reference, we might be able to give it a lifetime – let’s call it 'aux. Then we could return impl Iterator<Item = &'aux T>.

Anyway, this is just wild, irresponsible speculation. I don’t have concrete ideas for how this would work4. But it’s an interesting thought.


I’ve opened a users thread to discuss this blog post (along with other Rust pattern blog posts).


  1. This just means it wants to return “some iterator that yields up u32 values”.

  2. Also worth noting: in Rust, reference counted data is typically immutable.

  3. In other words, lifetime inference doesn’t affect execution order. This is crucial – for example, it is the reason we can move to NLL without breaking backwards compatibility.

  4. In terms of the underlying semantics, though, I imagine it could be a kind of sugar atop either self-references or out pointers. But that’s sort of as far as I got. =)

Cameron KaiserTenFourFox FPR9 available, and introducing Talospace

TenFourFox Feature Parity Release 9 final is now available (downloads, hashes, release notes). There are no changes from beta 3 except for outstanding security patches. Assuming no changes, it will go live Tuesday evening Pacific due to the US Labor Day holiday.

Allow me to also take the wraps off of Talospace, a new spin-off blog primarily oriented at the POWER9 Raptor Talos family of systems, but which will also be where I'll post general Power ISA and PowerPC items, refocusing this blog back to Power Macs specifically. Talospace is a combination of news bits, conjecture and original content "first person" items. For a period of time until it accumulates its own audience, I'll crosspost links here to seed original content (for the news pieces, you'll just have to read it or subscribe to the RSS feed).

As the first long-form article, read this two-part series on running Mac OS X under KVM-PPC (first part, second part). Upcoming: getting the damn Command key working "as you expect it" in Linux.

Mozilla Addons BlogExtensions in Firefox 63

Firefox 63 is rolling into Beta and it’s absolutely loaded with new features for extensions. There are some important new APIs, some major enhancements to existing APIs, and a large collection of miscellaneous improvements and bug fixes. All told, this is the biggest upgrade to the WebExtensions API since the release of Firefox Quantum.

An upgrade this large would not have been possible in a single release without the hard work of our Mozilla community. Volunteer contributors landed over 25% of all the features and bug fixes for WebExtensions in Firefox 63, a truly remarkable effort. We are humbled and grateful for your support of Firefox and the open web. Thank you.

Note: due to the large volume of changes in this release, the MDN documentation is still catching up. I’ve tried to link to MDN where possible, and more information will appear in the weeks leading up to the public release of Firefox 63.

Less Kludgy Clipboard Access

A consistent source of irritation for developers since the WebExtensions API was introduced is that clipboard access is not optimal. Having to use execCommand() to cut, copy and paste always felt like a workaround rather than a valid way to interact with the clipboard.

That all changes in Firefox 63. Starting with this release, parts of the official W3C draft spec for the asynchronous clipboard API are now available to extensions. When using the clipboard, extensions can use the standard Web API to read and write to the clipboard using navigator.clipboard.readText() and navigator.clipboard.writeText(). A couple of things to note:

  • clipboard.writeText is available to secure contexts and extensions, without requiring any permissions, as long as it is used in a user-initiated event callback.  Extensions can request the clipboardWrite permission if they want to use clipboard.writeText outside of a user-initiated event callback. This preserves the same use conditions as document.execCommand(“copy”).
  • clipboard.readText is available to extensions only and requires the clipboardRead permission. There currently is no way to expose the clipboard.readText API to web content since no permission system exists for it outside of extensions. This preserves the same use conditions as document.execCommand(“paste”).

In addition, the text versions of the API are the only ones available in Firefox 63. Support for the more general clipboard.read() and clipboard.write() APIs is awaiting clarity around the W3C spec and will be added in a future release.

Selecting Multiple Tabs

One of the big changes coming in Firefox 63 is the ability to select multiple tabs simultaneously by either Shift- or CTRL-clicking on tabs beyond the currently active tab. This allows you to easily highlight a set of tabs and move, reload, mute or close them, or drag them into another window.  It is a very convenient feature that power users will appreciate.

In concert with this user-facing change, extensions are also gaining support for multi-select tabs in Firefox 63.  Specifically:

  • The tabs.onHighlighted event now handles multiple selected tabs in Firefox.
  • The tabs.highlight API accepts an array of tab indices that should be selected.
  • The tabs.Tab object properly sets the value of the highlighted property.
  • The tabs.query API now accepts “highlighted” as a parameter and will return an array of the currently selected tabs.
  • The tabs.update API can alter the status of selected tabs by setting the highlighted property.

A huge amount of gratitude goes to Oriol Brufau, the volunteer contributor who implemented every single change listed above.  Without his hard work, multi-select tabs would not be available in Firefox 63. Thank you, Oriol!

P.S. Oriol wasn’t satisfied doing all of the work for multi-select tabs; he also fixed several issues with extension icons.

What You’ve Been Searching For

Firefox 63 introduces a completely new API namespace that allows extensions to enumerate and access the search engines built into Firefox.  Short summary:

  • The new search.get() API returns an array of search engine objects representing all of the search engines currently installed in Firefox.
  • Each search engine object contains:
    • name (string)
    • isDefault (boolean)
    • alias (string)
    • favIconUrl (URL string)
  • The new search.search() API takes a query string and runs the search. It accepts an optional search engine name (the default search engine is used if omitted) and an optional tab ID where the results should be displayed (a new tab is opened if omitted).
  • Extensions must declare the search permission to use either API.
  • The API can only be called from inside a user-input handler, such as a button, context menu or keyboard shortcut.
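Put together, a minimal flow might look like the sketch below. The helper name is hypothetical; it assumes the search permission is declared and that the call originates from a user-input handler such as a browserAction click:

```javascript
// Hedged sketch of the new search namespace in Firefox 63.
// Must be triggered from a user-input handler (button, context menu,
// keyboard shortcut) and requires the "search" permission.
async function searchWithDefault(query) {
  // Enumerate the installed engines and note which one is the default.
  const engines = await browser.search.get();
  const defaultEngine = engines.find((e) => e.isDefault);

  // Omitting `engine` uses the default; omitting `tabId` opens a new tab.
  await browser.search.search({ query });
  return defaultEngine ? defaultEngine.name : null;
}
```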

More Things to Theme

Once again, the WebExtensions API for themes has received some significant enhancements.

  • The built-in Firefox sidebars can now be themed separately using:
    • sidebar
    • sidebar_text
    • sidebar_highlight
    • sidebar_highlight_text
  • Support for theming the new tab page was added via the properties ntp_background and ntp_color (both of which are compatible with Chrome).
  • The images in the additional_backgrounds property are aligned correctly to the toolbox, making all the settings in additional_backgrounds_alignment work properly.  Note that this also changes the default z-order of additional_backgrounds, making those images stack on top of any headerURL image.
  • By default, all images for additional_backgrounds are anchored to the top right of the browser window.  This was variable in the past, based on which properties were included in the theme.
  • The browser action theme_icons property now works with more themes.
  • Themes now enforce a maximum of 15 images for additional_backgrounds.
  • The theme properties accentcolor and textcolor are now optional.
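As a rough illustration, a theme manifest could combine the new sidebar and new-tab keys like this. The hex values are arbitrary, and the key names simply echo the properties listed above under theme.colors as that schema looked at the time:

```json
{
  "manifest_version": 2,
  "name": "Sidebar demo theme",
  "version": "1.0",
  "theme": {
    "colors": {
      "sidebar": "#38383d",
      "sidebar_text": "#f9f9fa",
      "sidebar_highlight": "#0a84ff",
      "sidebar_highlight_text": "#ffffff",
      "ntp_background": "#2a2a2e",
      "ntp_color": "#f9f9fa"
    }
  }
}
```

Note that accentcolor and textcolor are omitted here, which Firefox 63 now allows.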

Finally, there is a completely new feature for themes called theme_experiment that allows theme authors to declare their own theme properties based on any Firefox CSS element. You can declare additional properties in general, additional elements that can be assigned a color, or additional elements that can be images.  Any of the items declared in the theme_experiment section of the manifest can be used inside the theme declaration in the same manifest file, as if those items were a native part of the WebExtensions theme API.

theme_experiment is available only in the Nightly and Developer editions of Firefox and requires that the ‘extensions.legacy.enabled’ preference be set to true.  While it requires more detailed knowledge of Firefox internals, it essentially gives authors the power to theme nearly every aspect of the Firefox user interface. Keep an eye on MDN for detailed documentation on how to use it (here is the bugzilla ticket for those of you who can’t wait).
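A sketch of how the two manifest sections fit together; the property name popup and the CSS variable --arrowpanel-background are illustrative assumptions, not documented values:

```json
{
  "theme_experiment": {
    "colors": {
      "popup": "--arrowpanel-background"
    }
  },
  "theme": {
    "colors": {
      "popup": "#2a2a2e"
    }
  }
}
```

The theme_experiment block declares a new color property mapped to a Firefox CSS variable, and the theme block then assigns it a value as if it were a native part of the WebExtensions theme API.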

Similar to multi-select tabs, all of the theme features listed above were implemented by a single contributor, Tim Nguyen. Tim has been a long-time contributor to Mozilla and has really been a champion for themes from the beginning. Thank you, Tim!

Gaining More Context

We made a concerted effort to improve the context menu subsystem for extensions in Firefox 63, landing a series of patches to correct or enhance the behavior of this heavily used feature of the WebExtensions API.

  • A new API, menus.getTargetElement, was added to return the element for a context menu that was either shown or clicked.  The menus.onShown and menus.onClicked events were updated with a new info.targetElementId integer that is accepted by getTargetElement.  Available to all extension script contexts (content scripts, background pages, and other extension pages), menus.getTargetElement has the advantage of allowing extensions to detect the clicked element without having to insert a content script into every page.
  • The “visible” parameter for menus.create and menus.update is now supported, making it much easier for extensions to dynamically show and hide context menu items.
  • Context menus now accept any valid target URL pattern, not just those supported by valid match patterns.
  • Extensions can now set a keyboard access key for a context menu item by preceding it with the & symbol in the menu item label.
  • The activeTab permission is now granted for any tab on which a context menu is shown, allowing for a more intuitive user experience without extensions needing to request additional permissions.
  • The menus.create API was fixed so that the callback is also called when a failure occurs.
  • Fixed how menu icons and extension icons are displayed in context menus to match the MDN documentation.
  • The menus.onClick handler can now call other methods that require user input.
  • menus.onShown now correctly fires for the bookmark context.
  • Made a change that allows menus.refresh() to operate without an onShown listener.
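Several of these improvements can be seen together in one short sketch. The item id, title, and message shape are illustrative assumptions; it assumes the menus permission:

```javascript
// Hedged sketch of Firefox 63 context menu improvements.
// The "&R" marks R as the keyboard access key, and `visible` can later
// be toggled via menus.update() without recreating the item.
function registerReverseMenu() {
  browser.menus.create(
    {
      id: "reverse-text",            // hypothetical item id
      title: "&Reverse selection",
      contexts: ["selection"],
      visible: true,
    },
    () => {
      // The create() callback now also runs on failure; inspect lastError.
      if (browser.runtime.lastError) {
        console.error(browser.runtime.lastError);
      }
    }
  );

  browser.menus.onClicked.addListener((info, tab) => {
    // Forward the clicked element's id so a script in the page can call
    // menus.getTargetElement(info.targetElementId), avoiding a blanket
    // content script in every page.
    browser.tabs.sendMessage(tab.id, { targetElementId: info.targetElementId });
  });
}
```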

Context menus will continue to be a focus and you can expect to see even more improvements in the Firefox 64 timeframe.

A Motley Mashup of Miscellany

In addition to the major feature areas mentioned above, a number of other patches landed to improve different parts of the WebExtensions API.

Thank You

A total of 111 features and improvements landed as part of Firefox 63, easily the biggest upgrade to the WebExtensions API since Firefox Quantum was released in November of 2017.  Volunteer contributors were a huge part of this release and a tremendous thank you goes out to our community, including: Oriol Brufau, Tim Nguyen, ExE Boss, Ian Moody, Peter Simonyi, Tom Schuster, Arshad Kazmi, Tomislav Jovanovic and plaice.adam+persona. It is the combined efforts of Mozilla and our amazing community that make Firefox a truly unique product. If you are interested in contributing to the WebExtensions ecosystem, please take a look at our wiki.


The post Extensions in Firefox 63 appeared first on Mozilla Add-ons Blog.