Mozilla L10N – L10n Report: February Edition

Welcome!

Please note that some of the information provided in this report may be subject to change, as we sometimes share information about projects that are still in early stages and not yet final.

New localizers

  • Bora of Kabardian joined us through the Common Voice project
  • Kevin of Swahili
  • Habib and Shamim of Bengali joined us through the WebThings Gateway project

Are you a locale leader and want us to include new members in our upcoming reports? Contact us!

New community/locales added

New content and projects

What’s new or coming up in Firefox desktop

Firefox is now officially following a 4-week release cycle:

  • Firefox 74 is currently in Beta and will be released on March 10. The deadline to update your localization is February 25.
  • Firefox 75, currently in Nightly, will move to Beta when 74 is officially released. The deadline to update localizations for that version will be on March 24 (4 weeks after the current deadline).

In terms of localization priority and deadlines, note that the content of the What’s new panel, available at the bottom of the hamburger menu, doesn’t follow the release train. For example, content for 74 has been exposed on February 17, and it will be possible to update translations until the very end of the cycle (approximately March 9), beyond the normal deadline for that version.

What’s new or coming up in mobile

We have some exciting news to announce on the Android front!

Fenix, the new Android browser, is going to be released in April. The transition from Firefox for Android (Fennec) to Fenix has already begun! Now that we have an in-app locale switcher in place, we have the ability to add languages even when they are not supported by the Android system itself.

As a result, we’ve opened up the project on Pontoon to many new locales (89 total). Our goal is to reach Firefox for Android parity in terms of completion and number of locales.

This is a much smaller project than Firefox for Android, and a very innovative and fast piece of software. We hope this excites you as much as it excites us, and we truly hope to deliver to users across the world the same localization experience as with Firefox for Android.

Delphine will be away for the next few months. Jeff is standing in for her on the PM front, with support from Flod on the technical front. While Delphine is away, we won’t be enabling new locales on mobile products outside of Fenix. This is purely because our current resourcing allows us to give Fenix the priority, but at the expense of other products. Stay tuned for more open and individual outreach from Jeff about Fenix and other mobile projects.

What’s new or coming up in web projects

Mozilla.org

Changes are coming to mozilla.org. The team behind mozilla.org has been working all year to transition from the .lang format to Fluent. Communications on the details around this transition will be coming through the mailing list.

Additionally, the following pages have been added since the last report:

New:

  • firefox/browsers.lang
  • firefox/products.lang
  • firefox/whatsnew_73.lang
  • firefox/set-default-thanks.lang

Community Participation Guidelines (CPG)

The CPG has received a major update, including a new page and additional locales. Feel free to review and provide feedback by filing a bug.

Languages:  ar, de, es-ES, fr, hi-IN, id, it, ja, nl, pl, pt-BR, ru, zh-CN, and zh-TW.

What’s new or coming up in SUMO

Firefox 73 is out, but it did not require updated localization, as many articles were still valid from 72.

The most exciting event of January was the All Hands in Berlin. Giulia has written a blog post on the SUMO journey at All Hands; you can read it at this link.

Regarding localization, we discussed at length how to keep communication open with the community, and there is exciting news coming soon. Keep an eye on the forum!

Events

Want to showcase an event coming up that your community is participating in? Reach out to any l10n-driver, and we’ll include it (see links to emails at the bottom of this report).

Friends of the Lion

Do you know someone in your l10n community who’s been doing a great job and should appear here? Contact one of the l10n-drivers, and we’ll make sure they get a shout-out (see list at the bottom)!

Useful Links

Questions? Want to get involved?

Did you enjoy reading this report? Let us know how we can improve by reaching out to any one of the l10n-drivers listed above.

SUMO Blog – What’s happening on the SUMO Platform: Sprint updates

So what’s going on with the SUMO platform? We’re moving forward in 2020 with new plans, new challenges and a new roadmap.

We’re continuing this year to track all development work in 2-week sprints. You can see everything that is currently being worked on and our current sprint here (please note: this is only a project tracking board; do not use it to file bugs; bugs should continue to be filed via Bugzilla).

In order to be more transparent about what’s going on, we are starting a round of blog posts to summarize every sprint and plan for the next. We’ve just closed Sprint no. 3 of 2020 and we’re moving into Sprint no. 4.

What happened in the last two weeks?

During the last two weeks we have been working tirelessly together with our partner, Lincoln Loop, to get Responsive Redesign out the door. The good news is that we are almost done.

We have also been working on a few essential upgrades. Currently, support.mozilla.org is running on Python 2.7, which is no longer supported. We have been working on upgrading to Python 3.7 and the latest Django Long Term Support (LTS) version, 2.2. This is also almost done, and we are expecting to move into the QA and bug-fixing phase.

What’s happening in the next sprint?

During the next two weeks we’re going to start wrapping up the Responsive Redesign as well as the Python/Django upgrade, and focus on QA and bug fixing. We’re also planning to finalize a Celery 4 upgrade.

The next big thing is the integration of Firefox Accounts. Since May 2019 we have been working towards using Firefox Accounts as the authentication system on support.mozilla.org. Since the first phase of this project was completed, we have been using both login via Firefox Accounts and the old SUMO login. It is now time to fully switch to Firefox Accounts. The current plan is to do this in mid-March, but expect to see some communication about this later this week.

For more information, please check out our roadmap, and feel free to reach out if you have any questions.

Open Policy & Advocacy – The new EU digital strategy: A good start, but more to be done

In a strategy and two white papers published today, the Commission has laid out its vision for the next five years of EU tech policy: achieving trust by fostering technologies working for people, a fair and competitive digital economy, and a digital and sustainable society. This vision includes big ambitions for content regulation, digital competition, artificial intelligence, and cybersecurity. Here we give some recommendations on how the Commission should take it forward.

We welcome this vision the Commission sketches out and are eager to contribute, because the internet today is not what we want it to be. A rising tide of illegal and harmful content, the pervasiveness of the surveillance economy, and increased centralisation of market power have damaged the internet’s original vision of openness. We also believe that innovation and fundamental rights are complementary and should always go hand in hand – a vision we live out in the products we build and the projects we take on. If built on carefully, the strategy can provide a roadmap to address the many challenges we face, in a way that protects citizens’ rights and enhances internet openness.

However, it’s essential that the EU does not repeat the mistakes of the past, and avoids misguided, heavy handed and/or one-size-fits-all regulations. The Commission should look carefully at the problems we’re trying to solve, consider all actors impacted and think innovatively about smart interventions to open up markets and protect fundamental rights. This is particularly important in the content regulation space, where the last EU mandate saw broad regulatory interventions (e.g. on copyright or terrorist content) that were crafted with only the big online platforms in mind, undermining individuals’ rights and competition. Yet, and despite such interventions, big platforms are not doing enough to tackle the spread of illegal and harmful content. To avoid such problematic outcomes, we encourage the European Commission to come up with a comprehensive framework for ensuring that tech companies really do act responsibly, with a focus on the companies’ practices and processes.

Elsewhere we are encouraged to see that the Commission intends on evaluating and reviewing EU competition rules to ensure that they remain fit for purpose. The diminishing nature of competition online and the accelerating trend towards web centralisation in the hands of a few powerful companies goes against the open and diverse internet ecosystem we’ve always fought for. The nature of the networked platform ecosystem is giving rise to novel competition challenges, and it is clear that the regulatory toolbox for addressing them is not fit-for-purpose. We look forward to working with EU lawmakers on how EU competition policy can be modernised, to take into account bundling, vertical integration, the role of data silos, and the potential of novel remedies.

We’re also happy to see the EU take up the mantle of AI accountability and seek to be a standard-setter for better regulation in this space. This is an area that will be of crucial importance in the coming years, and we are intent on shaping a progressive, human-centric approach in Europe and beyond.

The opportunity for EU lawmakers to truly lead and to set the future of tech regulation on the right path is theirs for the taking. We are eager and willing to help contribute and look forward to continuing our own work to take back the web.


The Mozilla Blog – Thank You, Ronaldo Lemos


Ronaldo Lemos joined the Mozilla Foundation board almost six years ago. Today he is stepping down in order to turn his attention to the growing Agora! social movement in Brazil.

Over the past six years, Ronaldo has helped Mozilla and our allies advance the cause of a healthy internet in countless ways. Ronaldo played a particularly important role on policy issues including the approval of the Marco Civil in Brazil and shaping debates around net neutrality and data protection. More broadly, he brought his experience as an academic, lawyer and active commentator in the fields of intellectual property, technology and culture to Mozilla at a time when we needed to step up on these topics in an opinionated way.

As a board member, Ronaldo also played a critical role in the development of Mozilla Foundation’s movement building strategy. As the Foundation evolved its programs over the past few years, he brought to bear extensive experience with social movements in general — and with the open internet movement in particular. This was an invaluable contribution.

Ronaldo is the Director of the Institute for Technology & Society of Rio de Janeiro (ITSrio.org), Professor at the Rio de Janeiro State University’s Law School and Partner with the law firm Pereira Neto Macedo.

He recently co-founded a political and social movement in Brazil called Agora!. Agora! is a platform for leaders engaged in the discussion, formulation and implementation of public policies in Brazil. It is an independent, plural and non-profit movement that believes in a more humane, simple and sustainable Brazil — in an efficient and connected state, which reduces inequalities and guarantees the well-being of all citizens.

Ronaldo remains a close friend of Mozilla, and we’ll no doubt find ample opportunity to work together with him, ITS and Agora! in the future. Please join me in thanking Ronaldo for his tenure as a board member, and wishing him tremendous success in his new endeavors.

Mozilla is now seeking talented new board members to fill Ronaldo’s seat. More information can be found here: https://mzl.la/MoFoBoardJD


hacks.mozilla.org – WebThings Gateway Goes Global

Today, we’re releasing version 0.11 of the WebThings Gateway. If you are running a previous version of our Raspberry Pi build, you should have already received the update. You can check in your UI by navigating to Settings ➡ Add-ons.

Translations and Platforms

The biggest change in this release is our ability to reach new WebThings Gateway users who are not native English speakers. Since the release of 0.10, our incredible community has contributed 24 new language translations via Pontoon, Mozilla’s localization platform, with even more in the works! If your native (or favorite) language is still not available, we would love to have you contribute a translation.

[Image: WebThings Gateway UI in Japanese]

Users are also now able to install WebThings Gateway in even more ways. We have packages for several Debian, Ubuntu, and Fedora Linux versions available on our releases page. In addition, there is a package for Arch Linux available on the AUR. All of these packages complement our existing Raspberry Pi and Docker images.

Experiments

In this release, we’ve made some changes to our two active experiments.

First, the logs experiment has been promoted! It is now a first-class citizen, enabled for all users. Logging allows you to track changes in property values for your devices over a time period, using interactive graphs.

[Image: Logs UI]

In other news, we’ve decided to say goodbye to our experimental voice-based virtual assistant. While this was a fun experiment, it was never a practical feature. In our 0.12 release, the back-end commands API, which was used by the virtual assistant, will also be removed, so applications using that interface will need to be updated. Our preferred approach going forward is to have add-ons use the Web Thing API for everything, including voice interactions. Fear not, though. In addition to our Mycroft skill, people in the WebThings community have created multiple add-ons to allow you to interface with your gateway via voice, which are available for installation through Settings ➡ Add-ons.

Miscellaneous

In addition to the notable changes above, there are a host of other updates.

  • Users of our Raspberry Pi image can now disable automatic OTA (over the air) updates, if they so choose.
  • Users can now access the web interface on their local network via http://, so that they’re not faced with an ugly, scary security warning each time.
  • The Progressive Web App (PWA) should be much more stable and reliable now.
  • As always, there have been numerous bug fixes.

What Now?

We invite you to download the new WebThings Gateway 0.11 release and continue to build your own web things with the latest WebThings Framework libraries. If you already have WebThings Gateway installed on a Raspberry Pi, you can expect the Gateway to be automatically updated.

As always, we welcome your feedback on Discourse. Please submit issues and pull requests on GitHub. You can also now chat with us directly on Matrix, in #iot.


Mozilla Gfx Team – Challenge: Snitch on the glitch! Help the Graphics team track down an interesting WebRender bug…

For the past little while, we have been tracking some interesting WebRender bugs that people are reporting in release. Despite our best efforts, we have been unable to determine clear steps to reproduce these issues and have been unable to find a fix for them. Today we are announcing a special challenge to the community – help us track down steps to reproduce (a.k.a. STR) for this bug and you will win some special, limited edition Firefox Graphics team swag! Read on for more details if you are interested in participating.

What we know so far about the bug:

Late last year we started seeing reports of random UI glitching bugs that people were seeing in release. You can check out some of the reports on Bugzilla. Here is what we know so far about this bug:

  • At seemingly random intervals, one of two things seems to happen:
    • Glitches!
    • Black boxes!
  • The majority of the reports we have seen so far have come from people using NVIDIA graphics cards, although we have seen reports come in of this happening on Intel and AMD as well. That could be, though, because the majority of the people we have officially shipped WebRender to in release are on NVIDIA cards.
  • There doesn’t seem to be one clear driver version correlated to this bug, so we are not sure if it is a driver bug.
  • All reporters so far have been using Windows 10
  • No one who has reported the bug thus far has been able to determine clear and consistent STR, and no one on the Graphics team has found a way to reproduce it either. We all use WebRender daily and none of us have encountered the bug.

How can you help?

Without having a way to reliably reproduce this bug, we are at a loss on how to solve it. So we decided to hold a challenge to engage the community further to help us understand this bug better. If you are interested in helping us get to the root of this tricky bug, please do the following:

  • Download Firefox Nightly (if you don’t already use it)
  • Ideally you are using Windows 10 (but if you see this bug on other platforms, we are interested in hearing about it!)
  • Ensure WebRender is enabled
    • Go to about:config and set gfx.webrender.all to true, then restart your browser
  • If you encounter the bug, help us by filing a bug in Bugzilla with the following details:
    • What website are you on when the bug happens?
    • Does it seem to happen when specific actions are taken?
    • How frequently does the bug happen and can you ‘make’ it happen?
    • Attach the contents of your about:support as a text file
  • The main thing we really need is consistent steps that result in the bug showing up. We will send some limited edition Graphics swag to the first 3 bug reporters who can give us consistent STR!

Even if you can’t easily find STR, we are still interested in hearing about whether you see this bug!

Challenge guidelines

The winners of this challenge will be chosen based on the following criteria:

  • The bug report contains clear and repeatable steps to make the bug happen
    • This can include things like having a specific hardware configuration, using certain add-ons and browsing certain sites – literally anything as long as it can reliably and consistently cause the bug to appear
    • BONUS: A member of the Graphics team can follow your steps and can also make the bug appear
  • We will choose the first 3 reporters who meet these criteria (we say 3 because it is possible there is more than one bug and more than one way to reproduce it)
  • Winners will receive special limited edition Graphics Team swag! (t-shirt and stickers)

Update: we have created the channel #gfx-wr-glitch:mozilla.org on Matrix so you can ask questions/chat with us there. For more info about how to join Matrix, check out: https://wiki.mozilla.org/Matrix

Firefox UX – Making features discoverable: A Case Study of Firefox’s Contextual Feature Recommender for Pin Tab

You know when someone shows you a useful trick in a product you use everyday and you have an “aha” moment? Firefox is full of aha-worthy, handy features that make browsing easier. Those features aren’t always discoverable, though. We wanted to fix that.

Defining the Challenge

The “How Might We” or HMW exercise is an effective brainstorming method to help frame the problem at hand by suggesting there are multiple potential solutions. This exercise was used to help us formulate the right question:

How might we recommend browser features that enhance the browsing experience and increase engagement while balancing notification fatigue?

…and we used that framing to develop a solution to explore:

Create a more personalized experience on Firefox by contextually introducing users to features based on how they use the browser.

Background

We had already experimented with recommending extensions in Firefox through our “contextual feature recommender” (CFR). For example, if a user regularly visits Facebook, a “recommendation” button will appear in the address bar. If the user clicks on it, a panel drops down to recommend Facebook Container, an extension that prevents Facebook from tracking your browsing activity.

After the success of recommending extensions, we wanted to expand our recommendations to include browser features. We decided to experiment with the “Pin Tab” feature, which allows users to ‘pin’, or save, a website to their tab strip so that it’s easily accessible.

[Image: Recommendation button in the address bar]

Is this a good idea?

Mozilla takes user privacy seriously, and follows a set of Data Privacy Principles to respect and protect users’ personal information. In that same vein, we developed our CFR program to preserve user privacy.

We also wanted to make sure that if we were going to interrupt a user with a message, the recommendation was useful, timely, and dismissible. The CFR is an effective form of context-based communication, but it can easily become annoying and ineffective when overused. With this in mind, we set a recipe that would show the Pin Tab recommendation to users who frequently visited the same websites.

We felt this interruption was worthwhile and useful because the recommendation was shown to users who visit the same sites frequently and could save time with Pin Tab; it was timely because it appeared when the user was in the act of visiting the same site repeatedly; and we would ensure the design itself was dismissible.

Considerations

Working with content strategist Meridel Walkington and user researcher Kamyar Ardekani, we ran unmoderated user testing early on to get feedback on users’ understanding of the CFR for Pin Tab. That feedback informed the content and design work that followed.

Content

Meridel drafted several versions of the UX copy. She experimented with the clearest way to explain what Pin Tab is (do people even know what a ‘tab strip’ or ‘tab bar’ refers to?), why the feature is useful, and how to use it. Copy was refined over the course of user testing.

Animation

We also user tested an animated graphic versus a static one in the CFR panel. There was a noticeable difference in comprehension between the two options, with animation greatly improving understanding of the “Pin Tab” feature by providing more context of what happens to a tab after it’s pinned (tab condenses and moves to the left of the tab strip).

While we had anticipated that an animation would improve comprehension, the research results made the case for including it in the MVP (minimal viable product). This helped us communicate to engineering that animation was more than just a “nice to have.”

Accessibility

We reviewed the Web Content Accessibility Guidelines (WCAG) to ensure the design and animation met accessibility standards. One of the modifications we made was adding a play/pause button in the panel for users with motion sensitivity so they could pause the animation.

Localization

Another aspect that came up was localization, in particular the difficulty of supporting multiple languages in the animation. Since this was an animated graphic, we could not show text in the graphic portion of the panel unless we created an animation for each language.

User Testing

We were ready to put these design and content considerations to the test. Through testing we wanted to find out if users understood:

  • What a pinned tab is and how it is different from a regular tab
  • The benefit of pinning a tab versus other forms of saving a website such as bookmarking
  • How to pin a tab (both from the CFR panel and from the browser context menu)

Testing included two versions of the animation: one without text and one with text. We discovered that text did not impact comprehension.

[Image: Text version (left), non-text version (right) of the CFR for Pin Tab.]

We tested several content variations…

…and incorporated feedback like this…

“[would like] a little bit more explanation of what it actually does, because it looked like it just shrunk the tab rather than saving it anywhere.”

Having user testing early in the design process helped us iterate quickly and assured us that we were on the right track.

We iterated on the design and messaging until users were able to successfully answer the questions above and perform the task of pinning a tab.

[Image: Final design for Pin Tab in the CFR panel and confirmation pop-up]
[Image: CFR for Pin Tab panel in context]

The Results

We ran an A/B test in Firefox 67 to test the CFR for Pin Tab panel. When users frequented app-like sites (e.g., email, social media) and never used the Pin Tab feature, we recommended Pin Tab in the CFR panel. We compared the number of unique users who pinned tabs after being shown the CFR (experiment branch) to the number of users pinning tabs who weren’t shown the CFR (control branch).

The results found that the average number of users taking advantage of the Pin Tab feature was 50% higher for the experiment branch. This was a good sign that users were engaging with the CFR and trying our recommendation. It encouraged us to continue using the CFR as part of our user-facing communication strategy.

[Image: The average number of users per thousand taking advantage of the Pin Tab feature is 50% higher for the experiment branch]

Shortly after this project, I moved to the Firefox mobile team where I’ve since introduced a mobile version of the CFR in Firefox Preview and helped define scenarios of when a CFR should be used (and when it shouldn’t). Since we had insight from our desktop testing, this allowed us to make an informed decision to apply a similar component on mobile.

Conclusion

A longer term study is needed to determine the effects of the CFR panel in relation to user retention and engagement on desktop and mobile, but this work has informed us of a few things, both from a product and process perspective.

From a product perspective, we saw the potential impact of making contextual recommendations that are valuable and timely to a particular user group while respecting user control via an opt-out path.

In terms of process, we saw the benefits of user testing early on as a valuable exercise in making informed decisions on product features. We also found value in approaching the problem cross-functionally by involving design, content, research, and engineering throughout the project. We can take these learnings with us as we consider how to approach the next challenge with a similar strategy.



Mozilla Add-ons Blog – FAQ for extension support in new Firefox for Android

There are a lot of Firefox applications on the Google Play store. Which one is the new Firefox for Android?

The new Firefox for Android experience is currently available for early testing on the Firefox Preview Nightly and Firefox Preview production channels.

In February 2020, we will change which Firefox applications remain available in the Play store. Once we’ve completed this transition, Firefox Preview Nightly will no longer be available. New feature development will take place on what is currently Firefox Preview.

We encourage users who are eager to make use of extensions to stay on Firefox Preview. This will ensure you continue to receive updates while still being among the first to see new developments.

Which version supports add-ons?

Support for one extension, uBlock Origin, has been enabled for Firefox Preview Nightly. Every two weeks, the code for Firefox Preview Nightly gets migrated to the production release of Firefox Preview. Users of Firefox Preview should be able to install uBlock Origin by mid-February 2020.

We expect to start transferring the code from the production release of Firefox Preview to the Firefox for Android Beta channel during the week of February 17.

I’m using one of the supported channels but I haven’t been able to install an extension yet. Why?

We are rolling out the new Firefox for Android experience to our users in small increments to test for bugs and other unexpected surprises. Don’t worry — you should receive an update that will enable extension support soon!

Can I install extensions from addons.mozilla.org to Firefox for Android?

No, in the near term you will need to install extensions from the Add-ons Manager on the new Firefox for Android. For the time being, you will not be able to install extensions directly from addons.mozilla.org.

What add-ons are supported on the new Firefox for Android?

Currently, uBlock Origin is the only supported extension for the new Firefox for Android. We are working on building support for other extensions in our Recommended Extensions program.

Will more add-ons be supported in the future?

We want to ensure that the first add-ons supported in the new Firefox for Android provide an exceptional, secure mobile experience to our users. To this end, we are prioritizing Recommended Extensions that cover common mobile use cases and that are optimized for different screen sizes. For these reasons, it’s possible that not all the add-ons you have previously installed in Firefox for Android will be supported in the near future.

Will add-ons not part of the Recommended Extensions program ever be supported on the new Firefox for Android?

We would like to expand our support to other add-ons. At this time, we don’t have details on enabling support for extensions not part of the Recommended Extensions program in the new Firefox for Android. Please follow the Add-ons Blog for future updates.

What is GeckoView?

GeckoView is Mozilla’s mobile browser engine. It takes Gecko, the engine that powers the desktop version of Firefox, and packages it as a reusable Android library. Rebuilding our Firefox for Android browser with GeckoView means we can leverage our Firefox expertise in creating safe and robust online experiences for mobile.

What’s happening to add-ons during the migration?

Support for uBlock Origin will be migrated for users currently on Firefox Nightly, Firefox Beta, and Firefox Production. All other add-ons will be disabled for now.


hacks.mozilla.org – Firefox 73 is upon us

Another month, another new browser release! Today we’ve released Firefox 73, with useful additions that include CSS and JavaScript updates, and numerous DevTools improvements.

Read on for the highlights. To find the full list of additions, check out the following links:

Note: Until recently, this post mentioned the new form method requestSubmit() being enabled in Firefox 73. It has come to light that requestSubmit() is in fact currently behind a flag, and targeted for release in Firefox 75. Apologies for the error. (Updated Friday, 14 February.)

Web platform language features

Our latest Firefox offers a fair share of new web platform additions; let’s review the highlights now.

We’ve added two new CSS logical properties: overscroll-behavior-block and overscroll-behavior-inline.

These new properties provide a logical alternative to overscroll-behavior-x and overscroll-behavior-y, which allow you to control the browser’s behavior when the boundary of a scrolling area is reached.
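
As a minimal, hedged sketch (in JavaScript, to match the other examples in this post; the element id is an assumption), this is how the new logical properties could be applied to a scroll container:

// Assumes an element with id "scroller" that scrolls its content.
const scroller = document.getElementById('scroller');
// Prevent scroll chaining along the block axis (vertical in horizontal writing modes)...
scroller.style.setProperty('overscroll-behavior-block', 'contain');
// ...and disable overscroll effects entirely along the inline axis.
scroller.style.setProperty('overscroll-behavior-inline', 'none');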

The yearName and relatedYear fields are now available in the DateTimeFormat.prototype.formatToParts() method. This enables useful formatting options for CJK (Chinese, Japanese, Korean) calendars.
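
As a quick, hedged illustration (the locale and date here are arbitrary), formatting a date with the Chinese calendar now yields relatedYear and yearName parts:

// Format a date using the Chinese calendar; the exact strings depend on locale data.
const formatter = new Intl.DateTimeFormat('zh-u-ca-chinese', {
  year: 'numeric', month: 'long', day: 'numeric'
});
const parts = formatter.formatToParts(new Date(2020, 1, 18));
// Keep only the part types that are new for CJK calendars.
console.log(parts.filter(p => p.type === 'relatedYear' || p.type === 'yearName'));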

DevTools updates

There are several interesting DevTools updates in this release. Upcoming features can be previewed now in Firefox DevEdition.

We continually survey DevTools users for input, often from our @FirefoxDevTools Twitter account. Many useful updates come about as a result. For example, thanks to your feedback on one of those surveys, it is now possible to copy cleaner CSS snippets out of the Inspector’s Changes panel. The + and - signs in the output are no longer part of the copied text.

Solid & Fast

The DevTools engineering work for this release focused on pushing performance forward. We made the process of collecting fast-firing requests in the Network panel a lot more lightweight, which made the UI snappier. In the same vein, large source-mapped scripts now load much, much faster in the Debugger and cause less strain on the Console as well.

Loading the right sources in the Debugger is not straightforward when the DevTools are opened on a loaded page. In fact, modern browsers are too good at purging original files when they are parsed, rendered, or executed, and no longer needed. Firefox 73 makes script loading a lot more reliable and ensures you get the right file to debug.

Smarter Console

Console script authoring and logging gained some quality of life improvements. To date, CORS network errors have been shown as warnings, making them too easy to overlook when resources could not load. Now they are correctly reported as errors, not warnings, to give them the visibility they deserve.

Variables declared in the expression will now be included in the autocomplete. This change makes it easier to author longer snippets in the multi-line editor. Furthermore, the DevTools setting for auto-closing brackets is now working in the Console as well, bringing you closer to the experience of authoring in an IDE.

Did you know that console logs can be styled using backgrounds? For even more variety, you can add images, using data URIs. This feature is now working in Firefox, so don’t hesitate to get creative. For example, we tried this in one of our Fetch examples:

console.log('There has been a problem with your fetch operation: %c' + e.message,
  'color: red; padding: 2px 2px 2px 20px; background: yellow 3px no-repeat ' +
  'url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAwAAAAMCAYAAABWdVznAAAACXBIWXMAAA' +
  '7EAAAOxAGVKw4bAAAApUlEQVQoz5WSwQ3DIBAE50wEEkWkABdBT+bhNqwoldBHJF58kzryIp+zgwiK5JX2w+' +
  '2xdwugMMZ4IAIZeCszELX2hYhcgQIkEQnOOe+c8yISgAQU1Rw3F2BdlmWig56tQNmdIpA68Qbcu6akWrJat7' +
  'gp27EDkCdgttY+uoaX8oBq5gsDiMgToNY6Kv+OZIzxfZT7SP+W3oZLj2JtHUaxnnu4s1/jA4NbNZ3AI9YEA' +
  'AAAAElFTkSuQmCC);');

And got the following result:

[Image: styled console message with yellow highlighter effect]

We’d like to thank Firefox DevTools contributor Edward Billington for the data-uri support!

Logged functions now show their arguments by default. We believe this makes logging JavaScript functions a bit more intuitive.

And finally for this section, when you perform a text or regex search in the Console, you can negate a search item by prefixing it with ‘-’ (i.e. return results not including this term).

WebSocket Inspector improvements

The WebSocket inspector that shipped in Firefox 71 now nicely prints WAMP-formatted messages (in JSON, MsgPack, and CBOR flavors).

[Image: a screen capture showing WAMP MsgPack in the WebSocket Inspector]

You won’t needlessly wait for updates, as the Inspector now also indicates when a WebSocket connection is closed.

A big thanks to contributor Tobias Oberstein for implementing the WAMP support, and to saihemanth9019 for the WebSocket closed indicator!

New (power-)user features

We wanted to mention a couple of nice power user Preferences features dropping in Firefox 73.

First of all, the General tab in Preferences now has a Zoom tool. You can use this feature to set the magnification level applied to all pages you load. You can also specify whether all page contents should be enlarged, or only text. We know this is a hugely popular feature because of the number of extensions that offer this functionality. Selective zoom as a native feature is a huge boon to users.

The DNS over HTTPS control in the Network Settings tab includes a new provider option, NextDNS. Previously, Cloudflare was the only available option.


Open Policy & Advocacy – Mozilla Mornings on the EU Digital Services Act: Making responsibility a reality

On 3 March, Mozilla will host the next installment of Mozilla Mornings – our regular breakfast series that brings together policy experts, policymakers and practitioners for insight and discussion on the latest EU digital policy developments.

In 2020 Mozilla Mornings is adopting a thematic focus, starting with a three-part series on the upcoming Digital Services Act. This first event on 3 March will focus on how content regulation laws and norms are shifting from mere liability frameworks to more comprehensive responsibility ones, and our panelists will discuss how the DSA should fit within this trend.

Speakers
Prabhat Agarwal
Acting Head of Unit, E-Commerce and Platforms
European Commission, DG CNECT

Karen Melchior MEP
Renew Europe

Siada El-Ramly
Director-General, EDiMA

Owen Bennett
EU Internet Policy Manager, Mozilla

Moderated by Jennifer Baker
EU Tech Journalist

Logistical information
3 March, 2020
08:30-10:30
The Office cafe, Rue d’Arlon 80, Brussels 1040
Register your attendance here

Mozilla Add-ons Blog – Extensions in Firefox 73

As promised, the update on changes in Firefox 73 is short: There is a new sidebarAction.toggle API that will allow you to open and close the sidebar. It requires being called from a user action, such as a context menu or click handler. The sidebar toggle was brought to you by Mélanie Chauvel. Thanks for your contribution, Mélanie!
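
As a minimal sketch (the toolbar button setup is an assumption, not part of the new API itself), the sidebar can be toggled from a browser action click, which counts as a user action:

// manifest.json must declare both "sidebar_action" and "browser_action" for this to work.
browser.browserAction.onClicked.addListener(() => {
  // Opens the sidebar if it is closed, and closes it if it is open.
  browser.sidebarAction.toggle();
});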

On the backend, we fixed a bug that caused tabs.onCreated and tabs.onUpdated events to be fired out-of-order.

We have also added more documentation on changing preferences for managing settings values with experimental WebExtensions APIs. As a quick note, you will need to set the preference extensions.experiments.enabled to true to enable experimental WebExtensions APIs starting with Firefox 74.

That’s all there is to see for Firefox 73. We’ll be back in a few weeks to highlight changes in Firefox 74.


about:community – Firefox 73 new contributors

With the release of Firefox 73, we are pleased to welcome the 19 developers who contributed their first code change to Firefox in this release, 18 of whom were brand new volunteers! Please join us in thanking each of these diligent and enthusiastic individuals, and take a look at their contributions:

Mozilla VR Blog – Visual Development in Hello WebXR!


This is a post that tries to cover many aspects of the visual design of our recently released demo Hello WebXR! (more information in the introductory post), targeting those who can create basic 3D scenes but want to find more tricks and more ways to build things, or simply are curious about how the demo was made visually. Therefore this is not intended to be a detailed tutorial or a dogmatic guide, but just a write-up of our decisions. End of the disclaimer :)

Here comes a mash-up of many different topics, presented briefly:

  • Concept
  • Pipeline
  • Special Shaders and Effects
  • Performance
  • Sound Room
  • Vertigo Room
  • Conclusion

Concept


From the beginning, our idea was to make a simple, down-paced, easy-to-use experience that gathered many different interactions and mini-experiences to introduce VR newcomers to the medium, and that also showcased the recently released WebXR API. It would run on almost any VR device, but our main target device was the Oculus Quest, so we thought that some mini-experiences could share the same physical space, while other experiences would have to be moved to a different scene (room), either for performance reasons or due to their own nature.

We started by gathering references and making concept art, to figure out how the "main hall" would look:

[Image: Assorted images taken from the web and Sketchfab]

Then we used Blender to start sketching the hall and testing it in VR to see how it felt. It had to be welcoming and nice, and kind of neutral so it would be suitable for all audiences.

[Images: Look how many pedestals and doors for experiences we initially planned to add :_D]

Pipeline

3D models were exported to glTF format (Blender now comes with an exporter, and three.js provides a loader), and PNG was used for textures almost all the time, although at a late stage in the development of the demo all textures were manually optimized to drastically reduce the size of the assets. Some textures were preserved in PNG (which handles transparency), others were converted to JPG, and the bigger ones were converted to BASIS using the basisu command line program. Ada Rose Cannon’s article introducing the format and how to use it is a great read for those interested.

glTF files were exported without materials, since they were created manually by code and assigned to the specific objects at load time to make sure we had the exact material we wanted and that we could also tweak easily.

In general, the pipeline was pretty traditional and simple. Textures were painted or tweaked using Photoshop. Meshes and lightmaps were created using Blender and exported to glTF and PNG.

For creating the lightmap UVs, carefully picked edges were marked as seams before unwrapping, and then, in the majority of cases, the objects were unwrapped using the default unwrapper. Finally, UVs were optimized with UVPackMaster 2 PRO.

Draco compression was also used for the photogrammetry object, which reduced the size of the asset from 1.41MB to 683KB, less than half.
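
As a rough sketch of what loading such an asset looks like in three.js (the file and decoder paths here are illustrative, not the demo’s actual ones):

// GLTFLoader needs a DRACOLoader to decode Draco-compressed geometry.
const dracoLoader = new THREE.DRACOLoader();
dracoLoader.setDecoderPath('libs/draco/'); // assumed location of the decoder files
const gltfLoader = new THREE.GLTFLoader();
gltfLoader.setDRACOLoader(dracoLoader);
gltfLoader.load('assets/photogrammetry.glb', (gltf) => {
  scene.add(gltf.scene); // 'scene' is an existing THREE.Scene
});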

Special Shaders and Effects

Some custom shaders were created for achieving special effects:

Beam shader

This was achieved by offsetting the texture along one axis and rendering it in additive mode:

[image]

The texture is a simple gradient. Since it is rendered in additive mode, black turns transparent (does not add), and dark blue adds blue without saturating to white:

[image]

And the ray target is a curved mesh. The top cylinder and the bottom disk are seamlessly joined, but their faces and UVs go in opposite directions.

Visual Development in Hello WebXR!
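
A hedged approximation of the idea in plain three.js (variable names are illustrative; the demo uses its own shader): a gradient texture scrolled along one axis on an additively blended material.

// Additive blending: black texels add nothing, dark blue texels add a faint blue glow.
const beamMaterial = new THREE.MeshBasicMaterial({
  map: beamGradientTexture,          // assumed: the gradient texture described above
  blending: THREE.AdditiveBlending,
  transparent: true,
  depthWrite: false
});
beamGradientTexture.wrapS = beamGradientTexture.wrapT = THREE.RepeatWrapping;

function updateBeam(delta) {
  // Offsetting the texture along one axis each frame makes the beam appear to flow.
  beamGradientTexture.offset.y -= delta * 2.0;
}
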
Door shader

This is for the star field effect in the doors. The inward feeling is achieved by pushing the mesh from the center, and scaling it in Z when it is hovered by the controller’s ray:

[images]

This is the texture that is rendered in the shader using polar coordinates and added to a base blue color that changes in time:

[image]
Panorama ball shader

Used in the deformation (in shape and color) of the panorama balls.


The halo effect is just a special texture summed to the landscape thumbnail, which is previously modified by shifting red channel to the left and blue channel to the right:

[image]
Zoom shader

Used in the zoom effect for the paintings, showing only a portion of the texture and also a white circular halo. The geometry is a simple plane, and the shader gets the UV coordinates of the raycast intersection to calculate the amount of texture to show in the zoom.

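A small sketch of that last step (the uniform and variable names are assumptions, not the demo’s actual code):

// On each frame, feed the UV of the controller ray hit into the zoom material.
const hit = raycaster.intersectObject(paintingMesh)[0];
if (hit && hit.uv) {
  // 'zoomCenter' is a hypothetical vec2 uniform read by the zoom fragment shader.
  zoomMaterial.uniforms.zoomCenter.value.copy(hit.uv);
}
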
SDF Text shader

Text rendering was done using the Troika library, which turned out to be quite handy because it is able to render SDF text using only a URL pointing to a TTF file, without having to generate a texture.
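
A minimal sketch of that usage, based on the currently published troika-three-text API (the module and property names may have differed slightly in the version the demo used):

import { Text } from 'troika-three-text';

const label = new Text();
label.text = 'Hello WebXR!';
label.font = 'fonts/MyFont.ttf'; // just a TTF URL; no pre-baked glyph texture needed
label.fontSize = 0.2;
label.color = 0xffffff;
label.sync();                    // (re)generate the SDF glyph geometry
scene.add(label);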

Performance

Oculus Quest is a device with mobile performance, and that requires a special approach when dealing with polygon count and the complexity of materials and textures, different from what you could do for desktop or high-end devices. We wanted the demo to perform smoothly and be indistinguishable from native or desktop apps, and these are some of the techniques and decisions we took to achieve that:

  • We didn't want a low-poly style, but something neat and smooth. However, polygon count was reduced to the minimum within that style.
  • Meshes were merged whenever possible. All static objects that could share the same material were merged and exported as a single mesh:
[image]
  • Materials were simplified, reduced and reused. Almost all elements in the scene have a constant (unlit) material, and only two directional lights (sun and fill) are used in the scene for lighting the controllers. PBR materials were not used (a minimal sketch of this kind of material setup follows this list). Since constant materials cannot be lit, lightmaps must be precalculated to give the feeling of lighting. Lightmaps have two main advantages:

    - Lighting quality can be superior to real time lighting, since the render is done “offline”. This is done beforehand, without any time constraint. This allows us to do full global illumination with path tracing in Blender, simulating light close to real life.

    - Since no light calculations are done realtime, constant shading is the one that has the best performance: it just applies a texture to the model and nothing else.

    However, lightmaps also have two main disadvantages:

    - It is easy to get big, noticeable pixels or pixel noise in the texture when applied to the model (due to the insufficient resolution of the texture or to the lack of smoothness or detail in the render). This was solved by using 2048x2048 textures, rendered with an insane amount of samples (10,000 in our case since we didn’t have CUDA or Denoising available at that moment). 4096px textures were initially used and tested in Firefox Mixed Reality, but Oculus Browser did not seem to be able to handle them so we switched to 2048, reducing texture quality a bit but improving load time along the way.

    - You cannot change the lighting dynamically, it must be static. This was not really an issue for us, since we did not need any dynamic lighting.
[Image: Hall, Vertigo and Angel lightmaps, respectively.]
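
A minimal sketch of this kind of material setup in three.js (texture, geometry and UV data names are illustrative): an unlit material combined with a baked lightmap, which is sampled from the mesh’s second UV set.

// Unlit material: no real-time light calculations, just textures.
const hallMaterial = new THREE.MeshBasicMaterial({
  map: hallBaseTexture,        // base color texture
  lightMap: hallLightmap,      // lightmap baked offline in Blender
  lightMapIntensity: 1.0
});
// The lightmap is read from the 'uv2' attribute, so the lightmap UVs go there.
hallGeometry.setAttribute('uv2', new THREE.BufferAttribute(lightmapUVs, 2));
scene.add(new THREE.Mesh(hallGeometry, hallMaterial));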

Sound Room

[Image: Sketches for the visual hints in the sound room]

Each sound in the sound room is accompanied by a visual hint. These little animations are simple meshes animated using regular keyframes on position/rotation/scale transforms.

[Image: Blender setup for the sound room]
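
As a rough sketch, playing one of those baked keyframe clips from the exported glTF in three.js (the clip name and variables are assumptions):

// gltf.animations holds the keyframe clips authored in Blender.
const mixer = new THREE.AnimationMixer(soundRoomScene);
const clip = THREE.AnimationClip.findByName(gltf.animations, 'GuitarHint'); // hypothetical clip name
mixer.clipAction(clip).play();

// In the render loop:
function tick(deltaSeconds) {
  mixer.update(deltaSeconds);
}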

Vertigo Room

The first idea for the vertigo room was to build a low-poly but convincing city and put the user on top of a skyscraper. After some days of Blender work:

[image]

We tried this in VR, and to our surprise it did not produce vertigo! We tested different alternatives and modifications to the initial design without success. Apparently, you need more than just lifting the user to 500m to produce vertigo. Texture scale is crucial for this, and we made sure the textures were at a correct scale, but more is needed. Vertigo is about being in a situation of risk, and there were some factors in this scene that did not make you feel unsafe. Our bet is that the position and scale of the other buildings, compared to the player’s position, make them feel less present, less physical, less tangible. Also, unrealistic lighting and textures may have influenced the lack of vertigo.

So we started another scene for the vertigo, focusing on the position and scale of the buildings, simplifying the texture to a simple checkerboard, and adding the user in a really unsafe situation.


The scene is made up of only two meshes: the buildings and the teleport door. Since the range of movement of the user in this scene is very limited, we could remove all the sides of the buildings that face away from the center of the scene. It uses a constant material with a repeated checkerboard texture to give a sense of scale, and a lightmap texture that provides lighting and volume.


Conclusion

Things that did not go very well:

  • We didn’t use the right hardware to render lightmaps, so it took 11 hours to render, which did not help us iterate quickly.
  • We wasted a week refining the first version of the vertigo room without properly testing whether the vertigo effect worked or not. We were overconfident about it.
  • We had a tricky bug with Troika SDF text library on Oculus Browser for many days, which was finally solved thanks to its author.
  • There is something obscure in the mipmapping of BASIS textures on the Quest. The level of mipmap chosen is always lower than it should be, so textures look lower quality. This is noticeable when getting closer to the paintings from a distance, for example. We played with basisu parameters, but it was not of much help.
  • There are still many improvements we can make to the pipeline to speed up content creation.

Things we like how it turned out:

  • Visually it turned out quite clean and pleasing to the eye, without looking cheap despite using simple materials and reduced textures.
  • The effort we put into merging meshes and simplifying materials was worth it; performance-wise, the demo is very solid. Although we did not test on lower-end devices while developing, we loved seeing that it runs smoothly on 3DoF devices like the Oculus Go and phones, and on all browsers.
  • Despite some initial friction, new formats and technologies like BASIS or Draco work well and bring real improvements. If all textures were JPG or PNG, loading and starting times would be many times longer.

We uploaded the Blender files to the Hello WebXR repository.

If you want to know the specifics of something, do not hesitate to contact me at @feiss or the whole team at @mozillareality.

Thanks for reading!

SUMO Blog – Brrrlin 2020: a SUMO journal from All Hands

Hello, SUMO Nation!

Berlin 2020 has been my first All Hands and I am still experiencing the excitement the whole week gave me.

[Image: Contributors picture]

The intensity an event of this scale is able to build is slightly overwhelming (I suppose all the introverts reading this can easily get me), but the gratification and insights every one of us has taken home are priceless.

The week started last Monday, on January 27th, when everyone landed in Berlin from all over the world. An amazing group of contributors, plus every colleague I had always only seen on a small screen, was there, in front of me, flesh and bones. I was both excited and scared by the number of people that suddenly were inhabiting the corridors of our conference/dorm/workspace.

The schedule for the SUMO team and SUMO contributors was a little tight, but we managed to make it work: Kiki and I decided to share our meetings between the days and I am happy about how we balanced the work/life energy.

On Tuesday we opened the week by having a conversation about the past, the current state, and the future of SUMO. The community meeting was a really good way to break the ice; the whole SUMO team was there and gave updates from leadership, products, and the platform team. This meeting was also necessary to lay down the foundations for the priorities of the week and develop an open conversation.

On Wednesday, Kiki and I were fully in the game. We decided to have two parallel sessions: one regarding the Forum and Social support and one focusing on the KB localization. The smaller groups were both really vibrant and lively. We highlighted pain points, things that are working and issues that we as community managers could focus more on at this time. In the afternoon, we had a face to face meeting between the community and the Respond Tool team. It was a feedback-based discussion on features and bugs.

Thursday was ON FIRE. In the morning we had the pleasure to host Vesta Zare, the Product Manager of Fenix, and we had a session focusing on Firefox Preview and its next steps. Vesta was thrilled to meet the SUMO community, excited to share information, and happy to answer questions. After the session, we had a 2-hour-long brainstorming workshop organized by Kiki and me for the community to help us build a priority pipeline for the Community plan we have been working on in the last few months. The session was long but incredibly helpful and everyone who participated was active and rich in insights. The day was still running at a fast pace and the platform team had an Ask-Me-Anything session with the contributors. Madalina and Tasos were great and they both set real expectations while leaving the community open doors to get involved.

On Friday the community members were free to follow their own schedule, while the SUMO team had its last meetings to attend. The week closed with one of the most incredible parties I have ever experienced, and that was a great opportunity to finally collect the last feedback and catch the friendly connections we had missed along the way of this really busy week.

Here is a recap of the pain points we gathered from the meetings with contributors:

  • On-boarding new contributors: retention is low for many reasons (time, skillset, etc.)
  • Contributors’ tools, first and foremost, Kitsune, need attention.
  • The bus factor is still very much real.
  • The community needs Forum, Social, and Respond Tool analysis:
    • Which questions are being skipped and not answered?
    • Device coverage from contributors.
  • What about the non-EN locales at the community events?
  • Localization quality and integrity are at risk.
  • The language level of the KB is too technical and does not reach every audience.

We have also highlighted the many successes we had last year:

  • The add-on apocalypse
  • The 7 SUMO Sprints (Fx 65-71)
  • The 36 community meetings
  • More than 300 articles localized in every language
  • One cool add-on (SUMO Live Helper) (Thanks to Jhonatas, Wesley, and Danny!)
  • The Respond tool campaign

As you’ve probably heard before, we’re currently working with an external agency called Context Partners on the community strategy project. The result of that collaboration is a set of recommendations in 3 areas that we managed to discuss during the All Hands.

[Image: Recommendations]

Obviously, we wouldn’t be able to do all of them, so we need your help.

Which recommendation do you believe would provide the greatest benefit to the SUMO community? 

Is there a recommendation you would make that is missing from this list?

Your input would be very valuable for us, since the community is all about you. We will take all of your feedback with us to be discussed in our final meeting with the Context Partners team in Toronto in mid-February. We’ll appreciate any additional feedback that we can gather before the end of next week (02/14/2020).

Please read carefully and think about the questions above. Kiki and I have opened a Discourse post and a Contributor Forum thread to collect feedback on this. You can also reach out to us directly with your questions or feedback.

I feel lucky to be part of this amazing community and to work alongside passionate and lively people I can look up to every day. Remember that SUMO is made by you, and you should be proud to identify yourself as part of this incredible group of people who honestly enjoy helping others.

As a celebration of the All Hands and the SUMO community, I would like to share the poem that Seburo kindly shared with us:

It is now over six months since Mozilla convened last,
and All Hands is now coming up so fast.
From whatever country, nation or state they currently be in,
Many MoCo and MoFo staff, interns and contributors are converging on Berlin.
Twenty Nineteen was a busy year,
Much is going on with Firefox Voice, so I hear.
The new Fenix is closer to release,
the GeckoView team’s efforts will not cease.
MoFo is riding high after an amazing and emotional MozFest,
For advice on how to make the web better, they are the best.
I hope that the gift guide was well read,
Next up is putting concerns about AI to bed…?
Please don’t forget contributors who are supporting the mission from wide and far,
Writing code, building communities and looking to Mozilla’s north star.
The SUMO team worked very hard during the add-on apocalypse,
And will not stop helping users with useful advice and tips.
I guess I should end with an attempt at a witty one liner.
So here it is.
For one week in January 2020,
Mozillianer sind Berliner.

Thank you for being part of SUMO,

See you soon!

Giulia

Mozilla Add-ons BloguBlock Origin available soon in new Firefox for Android Nightly

Last fall, we announced our intention to support add-ons in Mozilla’s reinvented Firefox for Android browser. This new, high-performance browser for Android has been rebuilt from the ground up using GeckoView, Mozilla’s mobile browser engine, and has been available for early testing as Firefox Preview. A few weeks ago, Firefox Preview moved into the Firefox for Android Nightly pre-release channel, starting a new chapter of the Firefox experience on Android.

In the next few weeks, uBlock Origin will be the first add-on to become available in the new Firefox for Android. It is currently available on Firefox Preview Nightly and will soon be available on Firefox for Android Nightly. As one of the most popular extensions in our Recommended Extensions program, uBlock Origin helps millions of users gain control of their web experience by blocking intrusive ads and improving page load times.

As GeckoView builds more support for WebExtensions APIs, we will continue to enable other Recommended Extensions to work in the new Firefox for Android.

We want to ensure that any add-on supported in the new Firefox for Android provides an exceptional, secure mobile experience to our users. To this end, we are prioritizing Recommended Extensions that are optimized for different screen sizes and cover common mobile use cases. For these reasons, it’s possible that not all the add-ons you have previously installed in Firefox for Android will be supported in the near future. When an add-on you previously installed becomes supported, we will notify you.

When we have more information about how we plan to support add-ons in Firefox for Android beyond our near-term goals, we will post it on this blog. We hope you stay tuned!

The post uBlock Origin available soon in new Firefox for Android Nightly appeared first on Mozilla Add-ons Blog.

hacks.mozilla.orgIt’s the Boot for TLS 1.0 and TLS 1.1

Coming to a Firefox near you in March

The Transport Layer Security (TLS) protocol is the de facto means for establishing security on the Web. The protocol has a long and colourful history, starting with its inception as the Secure Sockets Layer (SSL) protocol in the early 1990s, right up until the recent release of the jazzier (read faster and safer) TLS 1.3. The need for a new version of the protocol was born out of a desire to improve efficiency and to remedy the flaws and weaknesses present in earlier versions, specifically in TLS 1.0 and TLS 1.1. See the BEAST, CRIME and POODLE attacks, for example.

With limited support for newer, more robust cryptographic primitives and cipher suites, it doesn’t look good for TLS 1.0 and TLS 1.1. With the safer TLS 1.2 and TLS 1.3 at our disposal to adequately protect web traffic, it’s time to move the TLS ecosystem into a new era, namely one which doesn’t support weak versions of TLS by default. This has been the abiding sentiment of browser vendors – Mozilla, Google, Apple and Microsoft have committed to disabling TLS 1.0 and TLS 1.1 as default options for secure connections. In other words, browser clients will aim to establish a connection using TLS 1.2 or higher. For more on the rationale behind this decision, see our earlier blog post on the subject.

What does this look like in Firefox?

We deployed this in Firefox Nightly, the experimental version of our browser, towards the end of 2019. It is now also available in Firefox Beta 73. In Firefox, this means that the minimum TLS version allowable by default is TLS 1.2. This has been executed in code by setting security.tls.version.min=3, a preference indicating the minimum TLS version supported. Previously, this value was set to 1. If you’re connecting to sites that support TLS 1.2 and up, you shouldn’t notice any connection errors caused by TLS version mismatches.
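
For readers who manage their own profiles, this is what the preference looks like in a user.js sketch; the value mapping in the comments is our understanding of the pref (1 = TLS 1.0 through 4 = TLS 1.3), and lowering the floor below the new default is not recommended:

    // user.js sketch: keep TLS 1.2 as the minimum version (the new Firefox default).
    // Assumed value mapping: 1 = TLS 1.0, 2 = TLS 1.1, 3 = TLS 1.2, 4 = TLS 1.3.
    user_pref("security.tls.version.min", 3);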

What if a site only supports lower versions of TLS?

In cases where only lower versions of TLS are supported, i.e., when the more secure TLS 1.2 and TLS 1.3 versions cannot be negotiated, we allow for a fallback to TLS 1.0 or TLS 1.1 via an override button. As a Firefox user, if you find yourself in this position, you’ll see this:

screenshot showing "Secure Connection Failed" message that allows user to override the TLS 1.0 and 1.1 deprecation

As a user, you will have to actively initiate this override. But the override button offers you a choice. You can, of course, choose not to connect to sites that don’t offer you the best possible security.

This isn’t ideal for website operators. We would like to encourage operators to upgrade their servers so as to offer users a secure experience on the Web. We announced our plans regarding TLS 1.0 and TLS 1.1 deprecation over a year ago, in October 2018, and now the time has come to make this change. Let’s work together to move the TLS ecosystem forward.

Deprecation timeline

We plan to monitor telemetry over two Firefox Beta cycles, and then we’re going to let this change ride to Firefox Release. So, expect Firefox 74 to offer TLS 1.2 as its minimum version for secure connections when it ships on 10 March 2020. We plan to keep the override button for now; the telemetry we’re collecting will tell us more about how often this button is used. These results will then inform our decision regarding when to remove the button entirely. It’s unlikely that the button will stick around for long. We’re committed to completely eradicating weak versions of TLS because at Mozilla we believe that user security should not be treated as optional.

Again, we would like to stress the importance of upgrading web servers over the coming months, as we bid farewell to TLS 1.0 and TLS 1.1. R.I.P, you’ve served us well.

The post It’s the Boot for TLS 1.0 and TLS 1.1 appeared first on Mozilla Hacks - the Web developer blog.

Web Application SecurityMulti-Account Containers Add-on Sync Feature

Image of the Multi-Account Containers Sync on-boarding screen
The Multi-Account Containers Add-on will now sync your container configuration and site assignments.

Firefox Multi-Account Containers allows users to separate their online identities into different tab types called Containers. Each Container has its own separate storage and cookies. This way, browsing activity in one Container is not accessible to websites in other Containers. This privacy feature allows users to assign sites to only open in a specific Container. For instance, it permits them to set their shopping websites to always open in a Shopping Container. This keeps advertising tracking data from those websites separate from the user’s Work Container. Users can also use Containers for separate areas of their life, like work and personal email. The user can separate email accounts from the same provider, so they don’t have to log in and out of each account. For more information about how to use the Containers add-on, visit the Mozilla support page.

The new sync feature will align Multi-Account Containers on different computers. The add-on carries over Container names, colors, icons, and site assignments to any other machine signed in to the same Firefox account.

If you have allowed automatic updates of the add-on, your extension should update on its own. The first time you click the Multi-Account Container icon after the update, an on-boarding panel will allow you to activate sync.

In order to use this feature, you will need to be signed in to a Firefox account in Firefox.

The post Multi-Account Containers Add-on Sync Feature appeared first on Mozilla Security Blog.

The Mozilla Thunderbird BlogThunderbird’s New Home

As of today, the Thunderbird project will be operating from a new wholly owned subsidiary of the Mozilla Foundation, MZLA Technologies Corporation. This move has been in the works for a while as Thunderbird has grown in donations, staff, and aspirations. This will not impact Thunderbird’s day-to-day activities or mission: Thunderbird will still remain free and open source, with the same release schedule and people driving the project.

There was a time when Thunderbird’s future was uncertain, and it was unclear what was going to happen to the project after it was decided Mozilla Corporation would no longer support it. But in recent years donations from Thunderbird users have allowed the project to grow and flourish organically within the Mozilla Foundation. Now, to ensure future operational success, following months of planning, we are forging a new path forward. Moving to MZLA Technologies Corporation will not only allow the Thunderbird project more flexibility and agility, but will also allow us to explore offering our users products and services that were not possible under the Mozilla Foundation. The move will allow the project to collect revenue through partnerships and non-charitable donations, which in turn can be used to cover the costs of new products and services.

Thunderbird’s focus isn’t going to change. We remain committed to creating amazing, open source technology focused on open standards, user privacy, and productive communication. The Thunderbird Council continues to  steward the project, and the team guiding Thunderbird’s development remains the same.

Ultimately, this move to MZLA Technologies Corporation allows the Thunderbird project to hire more easily, act more swiftly, and pursue ideas that were previously not possible. More information about the future direction of Thunderbird will be shared in the coming months.

Update: A few of you have asked how to make a contribution to Thunderbird under the new corporation, especially when using the monthly option. Please check out our updated site at give.thunderbird.net!

The Mozilla BlogMapping the power of Mozilla’s Rebel Alliance

At Mozilla, we often speak of our contributor communities with gratitude, pride and even awe. Our mission and products have been supported by a broad, ever-changing rebel alliance — full of individual volunteers and organizational contributors — since we shipped Firefox 1.0 in 2004. It is this alliance that comes up with new ideas, innovative approaches and alternatives to the ongoing trends towards centralisation and an internet that doesn’t always work in the interests of people.

But we’ve been unable to speak in specifics. And that’s a problem, because the threats to the internet we love have never been greater. Without knowing the strength of the various groups fighting for a healthier internet, it’s hard to predict or achieve success.

We know there are thousands around the globe who help build, localize, test, de-bug, deploy, and support our products and services. They help us advocate for better government regulation and ‘document the web’ through the Mozilla Developer Network. They speak about Mozilla’s mission and privacy-preserving products and technologies at conferences around the globe. They help us host events around the globe too, like this year’s 10th anniversary of MozFest, where participants hacked on how to create a multi-lingual, equitable internet and so much more.

With the publication of the Mozilla and the Rebel Alliance report, we can now speak in specifics. And what we have to say is inspiring. As we rise to the challenges of today’s internet, from the injustices of the surveillance economy to widespread misinformation and the rise of untrustworthy AI, we take heart in how powerful we are as a collective.

Making the connections

In 2018, well over 14,000 people supported Mozilla by contributing their expertise, work, creativity, and insights. Between 2017 and 2019, more than 12,000 people contributed to Firefox. These counts only consider those people whose contributions we can see, such as through Bugzilla, GitHub, or Kitsune, our support platform. They don’t include non-digital contributions. Firefox and Gecko added almost 3,500 new contributors in 2018. The Mozilla Developer Network added over 1,000 in 2018. 52% of all traceable contributions in 2018 came from individual volunteers and commercial contributors, not employees.

Firefox Community Health

The report’s network graphs demonstrate that there are numerous Mozilla communities, not one. Many community members participate across multiple projects: core contributors participate in an average of 4.3 of them. Our friends at Analyse & Tal helped create an interactive version of Mozilla’s contributor communities, highlighting common patterns of contribution and distinguishing between levels of contribution by project. Also, it’s important to note what isn’t captured in the report: the value of social connections, the learning and the mutual support people find in our communities.

We can make a reasonable estimate of the discrete value of some contributions from our rebel alliance. For example, community contributions comprise 58% of all filed Firefox regression bugs, which are particularly costly in their impact on the number of people who use and keep using the browser.

But the real value in our rebel alliance and their contributions is in how they inform and amplify our voice. The challenges around the state of the internet are daunting: disinformation, algorithmic bias and discrimination, the surveillance economy and greater centralisation. We believe this report shows that with the creative strength of our diverse contributor communities, we’re up for the fight.

If you’d like to contribute yourself, check out various opportunities here or dive right into one of our Activate Campaigns!

The post Mapping the power of Mozilla’s Rebel Alliance appeared first on The Mozilla Blog.

The Mozilla BlogFirefox Team Looks Within to Lead Into the Future

For Firefox products and services to meet the needs of people’s increasingly complex online lives, we need the right organizational structure. One that allows us to respond quickly as we continue to excel at delivering existing products and develop new ones into the future.

Today, I announced a series of changes to the Firefox Product Development organization that will allow us to do just that, including the promotion of long-time Mozillian Selena Deckelmann to Vice President, Firefox Desktop.

“Working on Firefox is a dream come true,” said Selena Deckelmann, Vice President, Firefox Desktop. “I collaborate with an inspiring and incredibly talented team, on a product whose mission drives me to do my best work. We are all here to make the internet work for the people it serves.”

Selena Deckelmann, VP Firefox Desktop

During her eight years with Mozilla, Selena has been instrumental in helping the Firefox team address over a decade of technical debt, beginning with transitioning all of our build infrastructure over from Buildbot. As Director of Security and then Senior Director, Firefox Runtime, Selena led her team to some of our biggest successes, ranging from big infrastructure projects like Quantum Flow and Project Fission to key features like Enhanced Tracking Protection and new services like Firefox Monitor. In her new role, Selena will be responsible for growth of the Firefox Desktop product and search business.

Rounding out the rest of the Firefox Product Development leadership team are:

Joe Hildebrand, who moves from Vice President, Firefox Engineering into the role of Vice President, Firefox Web Technology. He will lead the team charged with defining and shipping our vision for the web platform.

James Keller, who currently serves as Senior Director, Firefox User Experience, will help us better navigate the difficult trade-off between empowering teams and maintaining a consistent user journey. This work is critically important because, since the Firefox Quantum launch in November 2017, we have been focused on putting the user back at the center of our products and services. That begins with a coherent, engaging and easy to navigate experience in the product.

I’m extraordinarily proud to have such a strong team within the Firefox organization that we could look internally to identify this new leadership team.

These Mozillians and I will eventually be joined by two additional team members: one who will head up our Firefox Mobile team, and another who will lead the team that has been driving our paid subscription work. Searches for both roles will be posted.

Alongside Firefox Chief Technology Officer Eric Rescorla and Vice President, Product Marketing Lindsey Shepard, I look forward to working with this team to meet Mozilla’s mission and serve internet users as we build a better web.

You can download Firefox here.

The post Firefox Team Looks Within to Lead Into the Future appeared first on The Mozilla Blog.

Mozilla VR BlogHello WebXR

Hello WebXR

We are happy to share a brand new WebXR experience we have been working on called Hello WebXR!

Here is a preview video of how it looks:

We wanted to create a demo to celebrate the release of the WebXR v1.0 API!

The demo is designed as a playground where you can try different experiences and interactions in VR, and as a smooth, easy, and friendly introduction to the VR world and its special language for newcomers.

How to run it

You just need to open the Hello WebXR page in a WebXR-capable browser (or a WebVR-capable one, thanks to the WebXR polyfill), such as Firefox Reality or Oculus Browser on standalone devices like the Oculus Quest, or Chrome 79 and newer on desktop. For an updated list of supported browsers please visit the ImmersiveWeb.dev support table.

Features

The demo starts in the main hall, where you can find:
  • Floating spheres containing 360º mono and stereo panoramas
  • A pair of sticks that you can grab to play the xylophone
  • A painting exhibition where paintings can be zoomed and inspected at will
  • A wall where you can use a graffiti spray can to paint whatever you want
  • A Twitter feed panel where you can read tweets with the hashtag #hellowebxr
  • Three doors that will teleport you to other locations:
    • A dark room to experience positional audio (can you find where the sounds come from?)
    • A room displaying a classical sculpture captured using photogrammetry
    • The top of a building in a skyscrapers area (are you scared of heights?)

Goals

Our main goal for this demo was to build a nice-looking, well-performing experience where you could try different interactions and explore multiple use cases for WebXR. We used the Quest as our target device to demonstrate that WebXR is a perfectly viable platform not only for powerful desktops and headsets but also for more humble devices like the Quest or Go, where resources are scarce.

Also, by building real-world examples we learn how web technologies, tools, and processes can be optimized and improved, helping us to focus on implementing actual, useful solutions that can bring more developers and content to WebXR.

Tech

The demo was built using web technologies, with the three.js engine and our ECSY framework in some parts. We also used the latest standards, such as glTF with Draco compression for models and Basis for textures. The models were created using Blender, and baked lighting is used throughout the demo.
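
For readers unfamiliar with that pipeline, here is a small, hypothetical three.js sketch of loading a Draco-compressed glTF model; the file and decoder paths are placeholders, not the demo’s actual assets:

    import * as THREE from 'three';
    import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';
    import { DRACOLoader } from 'three/examples/jsm/loaders/DRACOLoader.js';

    const scene = new THREE.Scene();

    // Wire a Draco decoder into the glTF loader so compressed meshes can be parsed.
    const dracoLoader = new DRACOLoader();
    dracoLoader.setDecoderPath('/draco/'); // placeholder path to the decoder files

    const gltfLoader = new GLTFLoader();
    gltfLoader.setDRACOLoader(dracoLoader);

    // Load a hypothetical baked-lighting room model and add it to the scene.
    gltfLoader.load('hall.glb', (gltf) => {
      scene.add(gltf.scene);
    });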

We also used third-party content, like the photogrammetry sculpture (from this fantastic scan by Geoffrey Marchal on Sketchfab), public domain sounds from freesound.org, and classic paintings taken from the public online galleries of the museums where they are exhibited.

Conclusions

There are many things we are happy with:

  • The overall aesthetic and “gameplay” fits perfectly with the initial concepts.
  • The way we handle the different interactions in the same room, based on proximity or state, made everything easier to scale.
  • The demo was created initially using only Three.js, but we successfully integrated some functionality using ECSY.

And other things that we could improve:

  • We released fewer experiences than we initially planned.
  • Overall the tooling is still a bit rough and we need to keep on improving it:
    • When something goes wrong it is hard to debug remotely on the device. This is even worse if the problem comes from WebGL. ECSY tools will help here in the future.
    • State of the art technologies like Basis or glTF still lack good tools.
  • Many components could be designed to be more reusable.

What’s next?

  • One of our main goals for this project is also to have a sandbox that we could use to prototype new experiences and interactions, so you can expect this demo to grow over time.
  • At the same time, we would like to release a template project with an empty room and a set of default VR components, so you can build your own experiments using it as a boilerplate.
  • Improve the input support by using the great WebXR gamepads module and the WebXR Input profiles.
  • We plan to write a more technical postmortem article explaining the implementation details and content creation.
  • ECSY was released after the project started so we only used it on some parts of the demo. We would like to port other parts in order to make them reusable in other projects easily.
  • Above all, we will keep investing in new tools to improve the workflow for content creators and developers.

Of course, the source code is available for everyone. Please give Hello WebXR a try and share your feedback or issues with us on the GitHub repository.

The Mozilla BlogICANN Directors: Take a Close Look at the Dot Org Sale


As outlined in two previous posts, we believe that the sale of the nonprofit Public Interest Registry (PIR) to Ethos Capital demands close and careful scrutiny. ICANN — the body that granted the dot org license to PIR and which must approve the sale — needs to engage in this kind of scrutiny.

When ICANN’s board meets in Los Angeles over the next few days, we urge directors to pay particular attention to the question of how the new PIR would steward and be accountable to the dot org ecosystem. We also encourage them to seriously consider the analysis and arguments being made by those who are proposing alternatives to the sale, including the members of the Cooperative Corporation of .ORG Registrants.

As we’ve said before, there are high stakes behind this sale: Public interest groups around the world rely on the dot org registrar to ensure free expression protections and affordable digital real estate. Should this reliance fail under future ownership, a key part of the public interest internet infrastructure would be diminished — and so would the important offline work it fuels.

Late last year, we asked ISOC, PIR and Ethos to answer a series of questions about how the dot org ecosystem would be protected if the sale went through. They responded and we appreciate their engagement, but key questions remain unanswered.

In particular, the responses from Ethos and ISOC proposed a PIR stewardship council made up of representatives from the dot org community. However, no details about the structure, role or powers of this council have been shared publicly. Similarly, Ethos has promised to change PIR’s corporate structure to reinforce its public benefit orientation, but provided few details.

Ambiguous promises are not nearly enough given the stakes. A crystal-clear stewardship charter — and a chance to discuss and debate its contents — are needed before ICANN and the dot org community can even begin to consider whether the sale is a good idea.

One can imagine a charter that provides the council with broad scope, meaningful independence, and practical authority to ensure PIR continues to serve the public benefit. One that guarantees Ethos and PIR will keep their promises regarding price increases, and steer any additional revenue from higher prices back into the dot org ecosystem. One that enshrines quality service and strong rights safeguards for all dot orgs. And one that helps ensure these protections are durable, accounting for the possibility of a future resale.

At the ICANN board meeting tomorrow, directors should discuss and agree upon a set of criteria that would need to be satisfied before approving the sale. First and foremost, this list should include a stewardship charter of this nature, a B corp registration with a publicly posted charter, and a public process of feedback related to both. These things should be in place before ICANN considers approving the sale.

ICANN directors should also discuss whether alternatives to the current sale should be considered, including an open call for bidders. Internet stalwarts like Wikimedia, experts like Marietje Schaake and dozens of important non-profits have proposed other options, including the creation of a co-op of dot orgs. In a Washington Post op-ed, former ICANN chair Esther Dyson argues that such a co-op would “[keep] dot-org safe, secure and free of any motivation to profit off its users’ data or to upsell them pricy add-ons.”

Throughout this process, Mozilla will continue to ask tough questions, as we have on December 3 and December 19. And we’ll continue to push ICANN to hold the sale up against a high bar.

The post ICANN Directors: Take a Close Look at the Dot Org Sale appeared first on The Mozilla Blog.

Mozilla Add-ons BlogExtensions in Firefox 72

After the holiday break we are back with a slightly belated update on extensions in Firefox 72. Firefox releases are changing to a four week cycle, so you may notice these posts getting a bit shorter. Nevertheless, I am excited about the changes that have made it into Firefox 72.

Welcome to the (network) party

Firefox determines if a network request is considered third party and will now expose this information in the webRequest listeners, as well as the proxy onRequest listener. You will see a new thirdParty property. This information can be used by content blockers as an additional factor to determine if a request needs to be blocked.
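
Here is a minimal sketch of how an extension might consume the new property; it assumes the webRequest, webRequestBlocking, and matching host permissions in the manifest, and the tracker domain is purely illustrative:

    // Block third-party requests to a hypothetical tracker domain.
    browser.webRequest.onBeforeRequest.addListener(
      (details) => {
        // details.thirdParty is the new flag exposed in Firefox 72.
        if (details.thirdParty) {
          return { cancel: true };
        }
        return {};
      },
      { urls: ["*://tracker.example/*"] }, // illustrative match pattern
      ["blocking"]
    );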

Doubling down on security

On the road to Manifest v3, we also recently announced the possibility to test our new content security policy for content scripts. The linked blog post will fill you in on all the information you need to determine if this change will affect you.

More click metadata for browser- and pageActions

If your add-on has a browserAction or pageAction button, you can now provide additional ways for users to interact with them. We’ve added metadata information to the onClicked listener, specifically the keyboard modifier that was active and a way to differentiate between a left click or a middle click. When making use of these features in your add-on, keep in mind that not all users are accustomed to using keyboard modifiers or different mouse buttons when clicking on icons. You may need to guide your users through the new feature, or consider it a power-user feature.
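
A short sketch of reading that click metadata, assuming the field names of the documented OnClickData object (button and modifiers):

    // Distinguish plain, middle, and Shift clicks on the toolbar button.
    browser.browserAction.onClicked.addListener((tab, clickData) => {
      // clickData.button: 0 = left click, 1 = middle click
      // clickData.modifiers: e.g. ["Shift"], ["Ctrl"], ["Command"]
      if (clickData.button === 1) {
        console.log("Middle click on the button, tab", tab.id);
      } else if (clickData.modifiers.includes("Shift")) {
        console.log("Shift-click on the button, tab", tab.id);
      } else {
        console.log("Plain left click on the button, tab", tab.id);
      }
    });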

Changing storage.local using the developer tools

In Firefox 70 we reported that the storage inspector would be able to show keys from browser.storage.local. Initially the data was read-only, but since Firefox 72 we also have limited write support. We hope this will allow you to better debug your add-ons.

Miscellaneous

  • The captivePortal API now provides access to the canonicalURL property. This URL is requested to detect the captive portal state and defaults to http://detectportal.firefox.com/success.txt
  • The browserSettings API now supports the onChange listener, allowing you to react accordingly if browser features have changed (see the sketch after this list).
  • Extension files with the .mjs extension, commonly used with ES6 modules, will now correctly load. You may come across this when using script tags, for example.
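
For the browserSettings item above, a hedged sketch of listening for changes; it assumes the "browserSettings" permission, and the exact shape of the listener argument shown here is an assumption to verify against the documentation:

    // React when a browser setting changes out from under the extension.
    browser.browserSettings.overrideDocumentColors.onChange.addListener((details) => {
      console.log("overrideDocumentColors changed, new value:", details.value);
    });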

A shout out goes to contributors Mélanie Chauvel, Trishul Goel, Myeongjun Go, Graham McKnight and Tom Schuster for fixing bugs in this version of Firefox. Also we’ve received a patch from James Jahns from the MSU Capstone project. I would also like to thank the numerous staff members from different corners of Mozilla who have helped to make extensions in Firefox 72 a success. Kudos to all of you!

The post Extensions in Firefox 72 appeared first on Mozilla Add-ons Blog.

Open Policy & AdvocacyWhat could an “Open” ID system look like?: Recommendations and Guardrails for National Biometric ID Projects

Digital ID systems are increasingly the battlefield where the fight for privacy, security, competition, and social inclusion is playing out. In our ever more connected world, some form of identity is almost always mediating our interactions online and offline, from the corporate giants that dominate our online lives with services like Apple ID and Facebook and Google’s login systems, to government IDs which are increasingly required to vote, access welfare benefits and loans, pay taxes, use transportation, or access medical care.

Part of the push to adopt digital ID comes from the international development community, who argue that this is necessary in order to expand access to legal ID. The UN Sustainable Development Goals (SDGs) call for “providing legal identity for all, including birth registration” by 2030. Possessing legal identity is increasingly a precondition to accessing basic services and entitlements from both state and private services. For the most marginalised communities, using digital ID systems to access these essential services and entitlements is often one of their first interactions with digital technologies. Without these commonly recognized forms of official identification, individuals are at risk of exclusion and denial of services. However, the conflation of digital identity with (or as an extension of) “legal identity”, especially by the international development community, has led to an often uncritical embrace of digital ID projects.

In this white paper, we survey the landscape around government digital ID projects and biometric systems in particular. We recommend several policy prescriptions and guardrails for these systems, drawing heavily from our experiences in India and Kenya.

In designing, implementing, and operating digital ID systems, governments must make a series of technical and policy choices. It is these choices that largely determine if an ID system will be empowering or exploitative and exclusionary. While several organizations have published principles around digital identity, too often they don’t act as a meaningful constraint on the relentless push to expand digital identity around the world. In this paper, we propose that openness provides a useful framework to guide and critique these choices and to ensure that identity systems put people first. Specifically, we examine and make recommendations around five elements of openness: multiplicity of choices, decentralization, accountability, inclusion, and participation.

  • Openness as in multiplicity of choices: There should be a multiplicity of choices with which to identify aspects of one’s identity, rather than the imposition of a single and rigid ID system across purposes. The consequences of insisting on a single ID can be dire. As the experiences in India and Peru demonstrate, not having a particular ID or failure of authentication via that ID can lead to denial of essential services or welfare for the most vulnerable.
  • Openness as in decentralisation: Centralisation of sensitive biometric data presents a single point of failure for malicious attacks. Centralisation of authentication records can also amplify the surveillance capability of those entities that have visibility into the records. Digital IDs should, therefore, be designed to prevent their use as a tool to amplify government and private surveillance. When national IDs are mandatory for accessing a range of services, the resulting authentication record can be a powerful tool to profile and track individuals.
  • Openness as in accountability: Legal and technical accountability mechanisms must bind national ID projects. Data protection laws should be in force and with a strong regulator in place before the rollout of any national biometric ID project. National ID systems should also be technically auditable by independent actors to ensure trust and security.
  • Openness as in inclusion: Governments must place equal emphasis on ensuring individuals are not denied essential services simply because they lack that particular ID or because the system didn’t work, as well as ensuring individuals have the ability to opt-out of certain uses of their ID. This is particularly vital for those marginalised in society who might feel the most at risk of profiling and will value the ability to restrict the sharing of information across contexts.
  • Openness as in participation: Governments must conduct wide-ranging consultation on the technical, legal, and policy choices involved in the ID systems right from the design stage of the project. Consultation with external experts and affected communities will allow for critical debate over which models, if any, are appropriate. This should include transparency in vendor procurement, given the sensitivity of personal data involved.

Read the white paper here: Mozilla Digital ID White Paper

The post What could an “Open” ID system look like?: Recommendations and Guardrails for National Biometric ID Projects appeared first on Open Policy & Advocacy.

hacks.mozilla.orgThe Mozilla Developer Roadshow: Asia Tour Retrospective and 2020 Plans

Editor’s Note: This post is also available in 繁體中文 (Chinese) on the Mozilla Taiwan blog. (Updated February 4, 2020.)

It’s a wrap!

November 2019 was a busy month for the Mozilla Developer Roadshow, with stops in five Asian cities: Tokyo, Seoul, Taipei, Singapore, and Bangkok. Today, we’re releasing a playlist of the talks presented in Asia.

We are extremely pleased to include subtitles for all these talks in languages spoken in the countries on this tour: Japanese, Korean, Chinese, Thai, as well as English. One talk, Hui Jing Chen’s “Making CSS from Good to Great: The Power of Subgrid”, was delivered in Singlish (a Singaporean creole) at the event in Singapore!

In addition, because our audiences included non-native English speakers, presenters took care to include local language vocabulary in their talks, wherever applicable, and to speak slowly and clearly. We hope to continue to provide multilingual support for our video content in the future, to increase access for all developers worldwide.

Mozillians Ali Spivak, Hui Jing Chen, Kathy Giori, and Philip Lamb presented at all five stops on the tour.

Additional speakers Karl Dubost, Brian Birtles, and Daisuke Akatsuka joined us for the sessions in Tokyo and Seoul.

Dev Roadshow Asia talk videos (all stops):

Ali Spivak, “Introduction: What’s New at Mozilla”

Hui Jing Chen, “Making CSS from Good to Great: The Power of Subgrid”

Philip Lamb, “Developing for Mixed Reality on the Open Web Platform”
Kathy Giori, “Mozilla WebThings: Manage Your Own Private Smart Home Using FOSS and Web Standards”

Tokyo and Seoul:

Karl Dubost and Daisuke Akatsuka, “Best Viewed With… Let’s Talk about WebCompatibility”
Brian Birtles, “10 things I hate about Web Animation (and how to fix them)”

The Dev Roadshow took Center Stage (literally!) at Start Up Festival, one of the largest entrepreneurship events in Taiwan. Mozilla Taiwan leaders Stan Leong, VP for Emerging Markets, and Max Liu, Head of Engineering for Firefox Lite, joined us to share their perspectives on why Mozilla and Firefox matter for developers in Asia. Our video playlist includes an additional interview with Stan Leong at Mozilla’s Taipei office.

Taiwan videos:

Interview with Stan Leong
Stan Leong, “The state of developers in the Asia Pacific region, and Mozilla’s outreach and impact within this region”
Max Liu, “Learn more about Firefox Lite!”

Venetia Tay and Sri Subramanian joined us in Singapore, at the Grab offices high above Marina One Towers.

Singapore videos:

Venetia Tay, “Designing for User-centered Privacy”
Sriraghavan Subramanian, “Percentage Rollout for Single Page web applications”
Hui Jing Chen, “Making CSS from Good to Great: The Power of Subgrid” (Singlish version)

In Asia, we kept the model of past Dev Roadshows. Again, our goal was to meet with local developer communities and deliver free, high-quality technical talks on topics relevant to Firefox, Mozilla, and the web.

At every destination, developers shared unique perspectives on their needs. We learned a lot. In some communities, concern for security and privacy is not a top priority. In other locations, developers have extremely limited influence and autonomy in selecting tools or frameworks to use in their work. We realized that sometimes the “best” solutions are out of reach due to factors beyond our control.

Nevertheless, all the developers we spoke to, across all the locales we visited, expressed a strong desire to support diversity in their communities. Everyone we met championed the value of inclusion: attracting more people with diverse backgrounds and growing community, positively.

The Mozilla DevRel team is planning what’s ahead for our Developer Roadshow program in 2020. One of our goals is to engage even more deeply with local developer community leaders and speakers in the year ahead. We’d like to empower dev community leaders and speakers to organize and produce Roadshow-style events in new locations. We’re putting together a program and application process (opening February 2020 – watch here and on our @MozHacks Twitter account for updates), and will share more information soon!

The post The Mozilla Developer Roadshow: Asia Tour Retrospective and 2020 Plans appeared first on Mozilla Hacks - the Web developer blog.

Web Application SecurityCRLite: Speeding Up Secure Browsing

CRLite pushes bulk certificate revocation information to Firefox users, reducing the need to actively query such information one by one. Additionally this new technology eliminates the privacy leak that individual queries can bring, and does so for the whole Web, not just special parts of it. The first two posts in this series about the newly-added CRLite technology provide background: Introducing CRLite: All of the Web PKI’s revocations, compressed and The End-to-End Design of CRLite.

Since mid-December, our pre-release Firefox Nightly users have been evaluating our CRLite system while performing normal web browsing. Gathering information through Firefox Telemetry has allowed us to verify the effectiveness of CRLite.

The questions we particularly wanted to ask about Firefox when using CRLite are:

  1. What were the results of checking the CRLite filter?
    1. Did it find the certificate was too new for the installed CRLite filter;
    2. Was the certificate valid, revoked, or not included;
    3. Was the CRLite filter unavailable?
  2. How quickly did the CRLite filter check return compared to actively querying status using the Online Certificate Status Protocol (OCSP)?

How Well Does CRLite Work?

With Telemetry enabled in Firefox Nightly, each invocation of CRLite emits one of these results:

  • Certificate Valid, indicating that CRLite authoritatively returned that the certificate was valid.
  • Certificate Revoked, indicating that CRLite authoritatively returned that the certificate was revoked.
  • Issuer Not Enrolled, meaning the certificate being evaluated wasn’t included in the CRLite filter set, likely because the issuing Certificate Authority (CA) did not publish CRLs.
  • Certificate Too New, meaning the certificate being evaluated was newer than the CRLite filter.
  • Filter Not Available, meaning that the CRLite filter either had not yet been downloaded from Remote Settings, or had become so stale as to be out-of-service.
Show that >50% of TLS connections would have been using CRLite

Figure 1: One month of CRLite results in Firefox Nightly (5 December 2019 to 6 Jan 2020)

Immediately, one sees that over 54% of secure connections (500M) could have benefited from the improved privacy and performance of CRLite for Firefox Nightly users.

Of the other data:

  • We plan to publish updates up to 4 times per day, which will reduce the incidence of the Certificate Too New result.
  • The Filter Not Available bucket correlates well with independent telemetry indicating a higher-than-expected level of download issues retrieving CRLite filters from Remote Settings; work to improve that is underway.
  • Certificates Revoked but used actively on the Web PKI are, and should be, rare. This number is in-line with other Firefox Nightly telemetry for TLS connection results.

How Much Faster is CRLite?

In contrast to OCSP which requires a network round-trip to complete before a web page can load, CRLite needs only to perform a handful of hash calculations and memory or disk lookups. We expected that CRLite would generally outperform OCSP, but to confirm we added measurements and let OCSP and CRLite race each other in Firefox Nightly.

Show that CRLite is faster than existing technology 99% of the time

Figure 2: How often is CRLite faster than OCSP? (11 December 2019 to 6 January 2020)

Over the month of data, CRLite was faster to query than OCSP 98.844% of the time.

CRLite is Faster 99% of the Time

The speedup of CRLite versus OCSP was rather stark; 56% of the time, CRLite was over 100 milliseconds faster than OCSP, which is a substantial and perceptible improvement in browsing performance.

Distribution of speedups of CRLite

Figure 3: Distribution of occurrences where CRLite outperformed OCSP, which was 99% of CRLite operations. [source]

Almost 10% of the collected data reports showed an entire second of speedup, indicating that OCSP reached the default timeout. The delay in this figure is time spent while a Firefox user is waiting for the page to start loading, so this has a substantial impact on perceived quickness in the browser.

To verify that outlier at the timeout, our OCSP telemetry probe shows that over the same period, 9.9% of OCSP queries timed out:

10% of OCSP queries time out

Figure 4: Results of Live OCSP queries in Firefox Nightly [source]

Generally speaking, when loading a website where OCSP wasn’t already cached, 10% of the time Firefox users pause for a full second before the site loads, and they don’t even get revocation data in exchange for the wait.

The 1% When OCSP is Faster

In the 500k cases where OCSP was faster than CRLite, it was generally not much faster: on 50% of these occasions it was less than 40 milliseconds faster, and on only 20% of occasions was OCSP 100 milliseconds faster.

Distribution of slowdowns from CRLite

Figure 5: Distribution of occurrences where OCSP outperformed CRLite, which was 1% of CRLite operations. [source]

Interesting as this is, it represents only 1% of CRLite invocations for Firefox Nightly users in this time period. Almost 99% of CRLite operations were faster, much faster.

Much Faster, More Private, and Still Secure

Our study confirmed that CRLite will maintain the integrity of our live revocation checking mechanisms while also speeding up TLS connections.

At this point it’s clear that CRLite lets us keep checking certificate revocations in the Web PKI without compromising on speed, and the remaining areas for improvement are on shrinking our update files closer to the ideal described in the original CRLite paper.

In the upcoming Part 4 of this series, we’ll discuss the architecture of the CRLite back-end infrastructure, the iteration from the initial research prototype, and interesting challenges of working in a “big data” context for the Web PKI.

In Part 5 of this series, we will discuss what we’re doing to make CRLite as robust and as safe as possible.

The post CRLite: Speeding Up Secure Browsing appeared first on Mozilla Security Blog.

SeaMonkey2.53.1b1 is out!

Hi All,

Yes, 2.53.1b1 is finally out!

No pithy comments this time.

Thanks to all involved! [*You* know who you are… *wink*]

:ewong

SeaMonkey2.53.1.b1… soon

Dear All,

On behalf of the SeaMonkey project, I’d like to wish everyone a very Happy, safe, prosperous and healthy New Year!

First and foremost, 2.53.1b1 will soon be released, pending final checks.  Once that’s done, it’s as good as going gold.  That said, I did miss a few items (updating the checks… *sigh*), but it’s not going to be a biggy.  (The crashreporter files for all platforms except Win* were missing.)

Next, I just want to take this opportunity to mention that the delay in the release was not frg’s or Ian’s or anyone else’s fault but mine.  I take the full blame for the delay in the release, as I was trying to streamline the release scripts, which took the brunt of the time.  So please, if anyone has any complaints, direct your ire and frustration at me, not them.  They’ve done a HUGE amount of work doing the builds, something which was supposed to be my job.  So they deserve all the kudos and gratitude for getting things done.

I’m hoping that before the next release, I’ll at least get some semblance of the builds and updates working so that they don’t need to bother with the builds and can concentrate on getting the fixes in.

So I need to apologize to everyone for the delay.  Fixing the infrastructure has been a very long and arduous project that *still* hasn’t come to fruition yet.  I’m not making any excuses for my tardiness (despite having a job and a family to feed…); I take this responsibility seriously.

In any event, thank you everyone for your patience with the project and with me.

:ewong


The Mozilla BlogReadying for the Future at Mozilla

Mozilla must do two things in this era: Continue to excel at our current work, while we innovate in the areas most likely to impact the state of the internet and internet life. From security and privacy network architecture to the surveillance economy, artificial intelligence, identity systems, control over our data, decentralized web and content discovery and disinformation — Mozilla has a critical role to play in helping to create product solutions that address the challenges in these spaces.

Creating the new products we need to change the future requires us to do things differently, including allocating resources for this purpose. We’re making a significant investment to fund innovation. In order to do that responsibly, we’ve also had to make some difficult choices which led to the elimination of roles at Mozilla which we announced internally today.

Mozilla has a strong line of sight on future revenue generation from our core business. In some ways, this makes this action harder, and we are deeply distressed about the effect on our colleagues. However, to responsibly make additional investments in innovation to improve the internet, we can and must work within the limits of our core finances.

We make these hard choices because online life must be better than it is today. We must improve the impact of today’s technology. We must ensure that the tech of tomorrow is built in ways that respect people and their privacy, and give them real independence and meaningful control. Mozilla exists to meet these challenges.

The post Readying for the Future at Mozilla appeared first on The Mozilla Blog.

hacks.mozilla.orgHow we built Picture-in-Picture in Firefox Desktop with more control over video

Picture-in-Picture support for videos is a feature that we shipped to Firefox Desktop users in version 71 for Windows users, and 72 for macOS and Linux users. It allows the user to pull a <video> element out into an always-on-top window, so that they can switch tabs or applications, and keep the video within sight — ideal if, for example, you want to keep an eye on that sports game while also getting some work done.

As always, we designed and developed this feature with user agency in mind. Specifically, we wanted to make it extremely easy for our users to exercise greater control over how they watch video content in Firefox.

Firefox is shown playing a video, and a mouse cursor enters the frame. Upon clicking on the Picture-in-Picture toggle on the video, the video pops out into its own always-on-top player window.
Using Picture-in-Picture in Firefox is this easy!

In these next few sections, we’ll talk about how we designed the feature and then we’ll go deeper into details of the implementation.

The design process

Look behind and all around

To begin our design process, we looked back at the past. In 2018, we graduated Min-Vid, one of our Test Pilot experiments. We asked the question: “How might we maximize the learning from Min-Vid?“. Thanks to the amazing Firefox User Research team, we had enough prior research to understand the main pain points in the user experience. However, it was important to acknowledge that the competitive landscape had changed quite a bit since 2018. How were users and other browsers solving this problem already? What did users think about those solutions, and how could we improve upon them?

We had two essential guiding principles from the beginning:

  1. We wanted to turn this into a very user-centric feature, and make it available for any type of video content on the web. That meant that implementing the Picture-in-Picture spec wasn’t an option, as it requires developers to opt in first.
  2. Given that it would be available on any video content, the feature needed to be discoverable and straight-forward for as many people as possible.

Keeping these principles in mind helped us to evaluate all the different solutions, and was critical for the next phase.

Three sketches showing a possible drag and drop interaction for Picture-in-Picture
Exploring different interactions for Picture-in-Picture

Try, and try again

Once we had an understanding of how others were solving the problem, it was our turn to try. We wanted to ensure discoverability without making the feature intrusive or annoying. Ultimately, we wanted to augment — and not disrupt — the experience of viewing video. And we definitely didn’t want to cause issues with any of the popular video players or platforms.

A screenshot of a YouTube page with a small blue rectangle on the right edge of the video, center aligned
A screenshot of one of our early prototypes

This led us to build an interactive, motion-based prototype using Framer X. Our prototype provided a very effective way to get early feedback from real users. In tests, we didn’t focus solely on usability and discoverability. We also took the time to re-learn the problems users are facing. And we learned a lot!

The participants in our first study appreciated the feature, and while it did solve a problem for them, it was too hard to discover on their own.

So, we rolled our sleeves up and tried again. We knew what we were going after, and we now had a better understanding of users’ basic expectations. We explored, brainstormed solutions, and discussed technical limitations until we had a version that offered discoverability without being intrusive. After that, we spent months polishing and refining the final experience!

Stay tuned

From the beginning, our users have been part of the conversation. Early and ongoing user feedback is a critical aspect of product design. It was particularly exciting to keep Picture-in-Picture in our Beta channel as we engaged with users like you to get your input.

We listened, and you helped us uncover new blind spots we might have missed while designing and developing. At every phase of this design process, you’ve been there. And you still are. Thank you!

Implementation detail

The Firefox Picture-in-Picture toggle exists in the same privileged shadow DOM space within the <video> element as the built-in HTML <video> controls. Because this part of the DOM is inaccessible to page JavaScript and CSS stylesheets, it is much more difficult for sites to detect, disable, or hijack the feature.

Into the shadow DOM

Early on, however, we faced a challenge when making the toggle visible on hover. Sites commonly structure their DOM such that mouse events never reach a <video> that the user is watching.

Often, websites place transparent nodes directly over top of <video> elements. These can be used to show a preview image of the underlying video before it begins, or to serve an interstitial advertisement. Sometimes transparent nodes are used for things that only become visible when the user hovers the player — for example, custom player controls. In configurations like this, transparent nodes prevent the underlying <video> from matching the :hover pseudo-class.

Other times, sites make it explicit that they don’t want the underlying <video> to receive mouse events. To do this, they set the pointer-events CSS property to none on the <video> or one of its ancestors.

To work around these problems, we rely on the fact that the web page is being sent events from the browser engine. At Firefox, we control the browser engine! Before sending out a mouse event, we can check to see what sort of DOM nodes are directly underneath the cursor (re-using much of the same code that powers the elementsFromPoint function).

If any of those DOM nodes are a visible <video>, we tell that <video> that it is being hovered, which shows the toggle. Likewise, we use a similar technique to determine if the user is clicking on the toggle.
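
Firefox performs this hit test inside the browser engine, below the page, but a hypothetical page-script version of the same idea looks roughly like this:

    // Rough page-level approximation: find a visible <video> under the cursor,
    // even when transparent overlays sit on top of it.
    document.addEventListener("mousemove", (event) => {
      const nodes = document.elementsFromPoint(event.clientX, event.clientY);
      const video = nodes.find(
        (node) => node instanceof HTMLVideoElement && node.offsetWidth > 0
      );
      if (video) {
        // This is roughly the point where the engine would tell the video
        // to show the Picture-in-Picture toggle.
        console.log("Hovering a video:", video.currentSrc);
      }
    });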

We also use some simple heuristics based on the size, length, and type of video to determine if the toggle should be displayed at all. In this way, we avoid showing the toggle in cases where it would likely be more annoying than not.

A browser window within a browser

The Picture-in-Picture player window itself is a browser window with most of the surrounding window decoration collapsed. Flags tell the operating system to keep it on top. That browser window contains a special <video> element that runs in the same process as the originating tab. The element knows how to show the frames that were destined for the original <video>. As with much of the Firefox browser UI, the Picture-in-Picture player window is written in HTML and powered by JavaScript and CSS.

Other browser implementations

Firefox is not the first desktop browser to ship a Picture-in-Picture implementation. Safari 10 on macOS Sierra shipped with this feature in 2016, and Chrome followed in late 2018 with Chrome 71.

In fact, each browser maker’s implementation is slightly different. In the next few sections we’ll compare Safari and Chrome to Firefox.

Safari

Safari’s implementation involves a non-standard WebAPI on <video> elements. Sites that know the user is running Safari can call video.webkitSetPresentationMode("picture-in-picture"); to send a video into the native macOS Picture-in-Picture window.
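
For illustration, a page that wanted to call Safari’s non-standard API would typically feature-detect it first; a minimal sketch:

    // Safari-only: send a <video> into the native macOS Picture-in-Picture window.
    const video = document.querySelector("video");
    if (video &&
        typeof video.webkitSetPresentationMode === "function" &&
        video.webkitSupportsPresentationMode("picture-in-picture")) {
      video.webkitSetPresentationMode("picture-in-picture");
    }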

Safari includes a context menu item for <video> elements to open them in the Picture-in-Picture window. Unfortunately, this requires an awkward double right-click to access video on sites like YouTube that override the default context menu. This awkwardness is shared with all browsers that implement the context menu option, including Firefox.

Safari’s video context menu on YouTube.

Safari users can also right-click on the audio indicator in the address bar or the tab strip to trigger Picture-in-Picture:

The Safari web browser playing a video, with the context menu for the audio toggle in the address bar displayed. “Enter Picture in Picture” is one of the menu items.
Here’s another way to trigger Picture-in-Picture in Safari.

On newer MacBooks, Safari users might also notice the button immediately to the right of the volume-slider. You can use this button to open the currently playing video in the Picture-in-Picture window:

A close-up photograph of the MacBook Pro touchbar when a video is playing. There is an icon next to the playhead scrubber that opens the video in an always-on-top player window.
Safari users with more recent MacBooks can use the touchbar to enter Picture-in-Picture too.

Safari also uses the built-in macOS Picture-in-Picture API, which delivers a very smooth integration with the rest of the operating system.

Comparison to Firefox

Despite this, we think Firefox’s approach has some advantages:

  • When multiple videos are playing at the same time, the Safari implementation is somewhat ambiguous as to which video will be selected when using the audio indicator. It seems to be the most recently focused video, but this isn’t immediately obvious. Firefox’s Picture-in-Picture toggle makes it extremely obvious which video is being placed in the Picture-in-Picture window.
  • Safari appears to have an arbitrary limitation on how large a user can make their Picture-in-Picture player window. Firefox’s player window does not have this limitation.
  • There can only be one Picture-in-Picture window system-wide on macOS. If Safari is showing a video in Picture-in-Picture, and then another application calls into the macOS Picture-in-Picture API, the Safari video will close. Firefox’s window is Firefox-specific. It will stay open even if another application calls the macOS Picture-in-Picture API.

Chrome’s implementation

The PiP WebAPI and WebExtension

Chrome’s implementation of Picture-in-Picture mainly centers around a WebAPI specification being driven by Google. This API is currently going through the W3C standardization process. Superficially, this WebAPI is similar to the Fullscreen WebAPI. In response to user input (like clicking on a button), site authors can request that a <video> be put into a Picture-in-Picture window.
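Here is a minimal sketch of how a site might use the proposed API. requestPictureInPicture() returns a promise and must be called in response to user input; the pipButton element is hypothetical:

const video = document.querySelector("video");
const pipButton = document.querySelector("#pip-button"); // hypothetical button on the page

pipButton.addEventListener("click", async () => {
  try {
    await video.requestPictureInPicture();
  } catch (err) {
    console.error("Could not enter Picture-in-Picture:", err);
  }
});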

Like Safari, Chrome also includes a context menu option for <video> elements to open in a Picture-in-Picture window.

<figcaption>Chrome’s video context menu on YouTube.</figcaption>

This proposed WebAPI is also used by a PiP WebExtension from Google. The extension adds a toolbar button. The button finds the largest video on the page, and uses the WebAPI to open that video in a Picture-in-Picture window.

<figcaption>There’s also a WebExtension for Chrome that adds a toolbar button for opening Picture-in-Picture.</figcaption>

Google’s WebAPI lets sites indicate that a <video> should not be openable in a Picture-in-Picture player window. When Chrome sees this directive, it doesn’t show the context menu item for Picture-in-Picture on the <video>, and the WebExtension ignores it. The user is unable to bypass this restriction unless they modify the DOM to remove the directive.
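In the proposed API, that directive is the disablePictureInPicture attribute on the <video> element. A minimal sketch of a page opting a video out:

// A page opting one of its videos out of Chrome's Picture-in-Picture.
const video = document.querySelector("video");
video.disablePictureInPicture = true; // hides Chrome's Picture-in-Picture context menu item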

Comparison to Firefox

Firefox’s implementation has a number of distinct advantages over Chrome’s approach:

  • The Chrome WebExtension only targets the largest <video> on the page. In contrast, the Picture-in-Picture toggle in Firefox makes it easy to choose any <video> on a site to open in a Picture-in-Picture window.
  • Users have access to this capability on all sites right now. Web developers and site maintainers do not need to develop, test and deploy usage of the new WebAPI. This is particularly important for older sites that are not actively maintained.
  • Like Safari, Chrome seems to have an artificial limitation on how big the Picture-in-Picture player window can be made by the user. Firefox’s player window does not have this limitation.
  • Firefox users have access to this Picture-in-Picture capability on all sites. Websites are not able to directly disable it via a WebAPI. This creates a more consistent experience for <video> elements across the entire web, and ultimately more user control.

Recently, Mozilla indicated that we plan to defer implementation of the WebAPI that Google has proposed. We want to see if the built-in capability we just shipped will meet the needs of our users. In the meantime, we’ll monitor the evolution of the WebAPI spec and may revisit our implementation decision in the future.

Future plans

Now that we’ve shipped the first version of Picture-in-Picture in Firefox Desktop on all platforms, we’re paying close attention to user feedback and bug intake. Your inputs will help determine our next steps.

Beyond bug fixes, we’d like to share some of the things we’re considering for future feature work:

  • Repositioning the toggle when there are visible, clickable elements overlapping it.
  • Supporting video captions and subtitles in the player window.
  • Adding a playhead scrubber to the player window to control the current playing position of a <video>.
  • Adding a control for the volume level of the <video> to the player window.

How are you using Picture-in-Picture?

Are you using the new Picture-in-Picture feature in Firefox? Are you finding it useful? Please let us know in the comments section below, or send us a Tweet with a screenshot! We’d love to hear what you’re using it for. You can also file bugs for the feature here.

The post How we built Picture-in-Picture in Firefox Desktop with more control over video appeared first on Mozilla Hacks - the Web developer blog.

Mozilla Gfx Teammoz://gfx newsletter #50

Hi there! Another gfx newsletter incoming.

Glenn and Sotaro’s work on integrating WebRender with DirectComposition on Windows is close to being ready. We hope to let it ride the trains for Firefox 75. This will lead to lower GPU usage and energy consumption. Once this is done we plan to follow up with enabling WebRender by default for Windows users with (some subset of) Intel integrated GPUs, which is both challenging (these integrated GPUs are usually slower than discrete GPUs and we have run into a number of driver bugs with them on Windows) and rewarding as it represents a very large part of the user base.

Edit: Thanks to Robert in the comments section of this post for mentioning the Linux/Wayland progress! I copy-pasted it here:

Some additional highlights for the Linux folks: Martin Stránský is making good progress on the Wayland front, especially concerning DMABUF. It will allow better performance for WebGL and hardware decoding for video (eventually). Quoting from https://bugzilla.mozilla.org/show_bug.cgi?id=1586696#c2:

> there’s a WIP dmabuf backend patch for WebGL, I see 100% performance boost with it for simple WebGL samples at GL compositor (it’s even faster than chrome/chromium on my box).

And there is active work on partial damage to reduce power consumption: https://bugzilla.mozilla.org/show_bug.cgi?id=1484812

What’s new in gfx

  • Handyman fixed a crash in the async plugin infrastructure.
  • Botond fixed (2) various data races in the APZ code.
  • Sean Feng fixed another race condition in APZ code.
  • Andrew fixed a crash with OMTP and image decoding.
  • Sotaro fixed a crash with the GL compositor on Wayland.
  • Botond worked with Facebook developers to resolve a scrolling-related usability problem affecting Firefox users on messenger.com, primarily on macOS.
  • Botond fixed (2) divisions by zero in various parts of the APZ code.
  • Sean Feng added some telemetry for touch input latency.
  • Timothy made sure all uses of APZCTreeManager::mGeckoFixedLayerMargins are protected by the proper mutex.
  • Boris Chiou moved animations of transforms with preserve-3d off the main thread.
  • Jamie clamped some scale transforms at 32k to avoid excessively large rasterized areas.
  • Jonathan Kew reduced the emboldening strength used for synthetic-bold faces with FreeType.
  • Andrew implemented NEON accelerated methods for unpacking RGB to RGBA/BGRA.
  • Alex Henrie fixed a bug in Moz2D’s Skia backend.

What’s new in WebRender

WebRender is a GPU-based 2D rendering engine for the web written in Rust, currently powering Firefox’s rendering engine as well as Mozilla’s research web browser, Servo.

  • Miko avoided calculating snapped bounds twice for some display items.
  • Kris fixed snapping and rounding errors causing picture caching invalidation when zoomed in.
  • Glenn fixed a picture caching invalidation bug.
  • Kvark ensured shader programs are bound after changing the blend mode. While not necessary for OpenGL, this makes it easier to efficiently implement backends for Vulkan and other modern GPU APIs.
  • Glenn refactored the OS compositor abstraction.
  • Jamie implemented a texture upload path that plays better with Adreno OpenGL drivers.
  • Jonathan Kew reduced the emboldening strength used for synthetic-bold faces with FreeType.
  • Nical prevented invalid glyphs from generating expensive rasterization requests every frame.
  • Nical reduced the number of memory allocations associated with clip chain stacks.
  • Nical reduced the number of memory allocations in various parts of the picture caching code.
  • Glenn fixed a picture caching invalidation issue when scrollbars are disabled.
  • Glenn and Andrew adjusted tradeoffs between text rendering quality and performance.
  • Miko simplified some of the scene building code.
  • Jamie switched to local raster space when animating a double tap zoom to avoid uploading glyphs continuously on Android.
  • Glenn fixed an intermittent compositor surface creation bug.
  • Andrew fixed a shader compilation error on some Android devices.
  • Bert improved the way picture cache tile sizes are selected for scroll bars.
  • Gankra removed some unsafe code in wrench.
  • Glenn fixed an issue with picture cache tile merging heuristics.
  • Glenn fixed tile opacity getting out of sync with compositor surfaces.
  • Glenn added an API for tagging image descriptors as candidates for native compositor surfaces (typically video frames).
  • Sotaro followed up by tagging the appropriate image descriptors on the content side.
  • Andrew removed pixel snapping from most shaders, now that it is handled earlier in the pipeline.
  • Glenn improved the invalidation logic for images with picture caching.
  • Glenn improved the logic to detect identical frames and skip composition.
  • Glenn fixed the shader implementation of rounded rectangles with very small radii.
  • Kris fixed misplaced text selection popup with GeckoView.
  • Markus fixed a ton of issues with WebRender/CoreAnimation integration.
  • Markus shared the depth buffer between OS compositor tiles on macOS to save memory.
  • Sotaro fixed image bitmap canvases with WebRender.
  • Sotaro fixed a crash at the intersection between picture-in-picture and WebRender frame throttling.
  • Timothy implemented support for respecting fixed layer margins during hit-testing.
  • Timothy implemented GeckoView’s setVerticalClipping API for WebRender.
  • Jeff fixed an SVG rendering bug.
  • Jamie fixed an issue where the screen would remain black after resuming Firefox for Android.

To enable WebRender in Firefox, set the pref gfx.webrender.all to true in the about:config page and restart the browser.

WebRender is available under the MPLv2 license as a standalone crate on crates.io (documentation) for use in your own Rust projects.

What’s new in Wgpu

  • Kvark implemented buffer creation and mapping, with an ability to both provide data and read it back from the GPU.
  • Kvark set up the synchronization from Mozilla Central to the GitHub repository.
  • jdashg created a separate category for WebGPU mochitests.
  • Kvark heavily reworked lifetime and usage tracking of resources.
  • Many fixes and improvements were made by the contributors to wgpu (thank you!).

 

Web Application SecurityJanuary 2020 CA Communication

Mozilla has sent a CA Communication to inform Certificate Authorities (CAs) who have root certificates included in Mozilla’s program about current events relevant to their membership in our program and to remind them of upcoming deadlines. This CA Communication has been emailed to the Primary Point of Contact (POC) and an email alias for each CA in Mozilla’s program, and they have been asked to respond to the following 7 action items:

  1. Read and fully comply with version 2.7 of Mozilla’s Root Store Policy.
  2. Ensure that their CP and CPS comply with the updated policy section 3.3 requiring the proper use of “No Stipulation” and mapping of policy documents to CA certificates.
  3. Confirm their intent to comply with section 5.2 of Mozilla’s Root Store Policy requiring that new end-entity certificates include an EKU extension expressing their intended usage.
  4. Verify that their audit statements meet Mozilla’s formatting requirements that facilitate automated processing.
  5. Resolve issues with audits for intermediate CA certificates that have been identified by the automated audit report validation system.
  6. Confirm awareness of Mozilla’s Incident Reporting requirements and the intent to provide good incident reports.
  7. Confirm compliance with the current version of the CA/Browser Forum Baseline Requirements.

The full action items can be read here. Responses to the survey will be automatically and immediately published by the CCADB.

With this CA Communication, we reiterate that participation in Mozilla’s CA Certificate Program is at our sole discretion, and we will take whatever steps are necessary to keep our users safe. Nevertheless, we believe that the best approach to safeguard that security is to work with CAs as partners, to foster open and frank communication, and to be diligent in looking for ways to improve.

The post January 2020 CA Communication appeared first on Mozilla Security Blog.

Open Policy & AdvocacyCompetition and Innovation in Software Development Depend on a Supreme Court Reversal in Google v. Oracle

Today, Mozilla filed a friend of the court brief with the Supreme Court in Google v. Oracle, the decade-long case involving questions of copyright for functional elements of Oracle’s Java SE. This is the fourth amicus brief so far that Mozilla has filed in this case, and we are joined by Medium, Cloudera, Creative Commons, Shopify, Etsy, Reddit, Open Source Initiative, Mapbox, Patreon, Wikimedia Foundation, and Software Freedom Conservancy.

Arguing from the perspective of small, medium, and open source technology organizations, the brief urges the Supreme Court to reverse the Federal Circuit’s holdings first that the structure, sequence, and organization (“SSO”) of Oracle’s Java API package was copyrightable, and subsequently that Google’s use of that SSO was not a “fair use” under copyright law.

At bottom in the case is the issue of whether copyright law bars the commonplace practice of software reimplementation, “[t]he process of writing new software to perform certain functions of a legacy product.” (Google brief p.7) Here, Google had repurposed certain functional elements of Java SE (less than 0.5% of Java SE overall, according to Google’s brief, p. 8) in its Android operating system for the sake of interoperability—enabling Java apps to work with Android and Android apps to work with Java, and enabling Java developers to build apps for both platforms without needing to learn the new conventions and structure of an entirely new platform.

Mozilla believes that software reimplementation and the interoperability it facilitates are fundamental to the competition and innovation at the core of a flourishing software development ecosystem. However, the Federal Circuit’s rulings would upend this tradition of reimplementation not only by prohibiting it in the API context of this case but by calling into question enshrined tenets of the software industry that developers have long relied on to innovate without fear of liability. The consequence would be that small software developers are disadvantaged and innovations are fewer, that incumbents’ positions in the industry are reinforced while their incentive to improve their products declines, and that consumers lose out. We believe that a healthy internet depends on the Supreme Court reversing the Federal Circuit and reaffirming the current state of play for software development, in which copyright does not stand in the way of software developers reusing SSOs for API packages in socially, technologically, and economically beneficial ways.

The post Competition and Innovation in Software Development Depend on a Supreme Court Reversal in Google v. Oracle appeared first on Open Policy & Advocacy.

Web Application SecurityThe End-to-End Design of CRLite

CRLite is a technology to efficiently compress revocation information for the whole Web PKI into a format easily delivered to Web users. It addresses the performance and privacy pitfalls of the Online Certificate Status Protocol (OCSP) while avoiding a need for some administrative decisions on the relative value of one revocation versus another. For details on the background of CRLite, see our first post, Introducing CRLite: All of the Web PKI’s revocations, compressed.

To discuss CRLite’s design, let’s first discuss the input data, and from that we can discuss how the system is made reliable.

Designing CRLite

When Firefox securely connects to a website, the browser validates that the website’s certificate has a chain of trust back to a Certificate Authority (CA) in the Mozilla Root CA Program, including whether any of the CAs in the chain of trust are themselves revoked. At this time Firefox knows the issuing certificate’s identity and public key, as well as the website’s certificate’s identity and public key.

To determine whether the website’s certificate is trusted, Firefox verifies that the chain of trust is unbroken, and then determines whether the website’s certificate is revoked. Normally that’s done via OCSP, but with CRLite Firefox simply has to answer the following questions:

  1. Is this website’s certificate older than my local CRLite Filter, e.g., is my filter fresh enough?
  2. Is the CA that issued this website’s certificate included in my local CRLite Filter, e.g., is that CA participating?
  3. If “yes” to the above, and Firefox queries the local CRLite Filter, does it indicate the website’s certificate is revoked?

That’s a lot of moving parts, but let’s inspect them one by one.
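Before diving in, here is the decision flow in rough pseudocode. This is a sketch rather than Firefox’s actual implementation, and the three helpers are hypothetical stand-ins for the real checks:

// Rough sketch of the CRLite decision flow (not Firefox's actual code).
const isCertOlderThanFilter = (cert, filter) => cert.notBefore < filter.timestamp;
const isIssuerEnrolled = (cert, filter) => filter.enrolledIssuers.has(cert.issuerId);
const filterSaysRevoked = (cert, filter) => filter.query(cert.issuerId, cert.serialNumber);

function checkRevocation(cert, filter) {
  if (!isCertOlderThanFilter(cert, filter)) {
    return "fall back to OCSP"; // question 1: the filter is not fresh enough for this cert
  }
  if (!isIssuerEnrolled(cert, filter)) {
    return "fall back to OCSP"; // question 2: the issuing CA is not participating
  }
  // question 3: ask the local CRLite filter
  return filterSaysRevoked(cert, filter) ? "revoked" : "not revoked";
}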

Freshness of CRLite Filter Data

Mozilla’s infrastructure continually monitors all of the known Certificate Transparency logs for new certificates using our CRLite tooling; the details of how that works will be in a later blog post about the infrastructure. Since multiple browsers now require that all website certificates are disclosed to Certificate Transparency logs to be trusted, in effect the tooling has total knowledge of the certificates in the public Web PKI.


Figure 1: CRLite Information Flow. More details on the infrastructure will be in Part 4 of this blog post series.

Four times per day, all website certificates that haven’t reached their expiration date are processed, drawing out lists of their Certificate Authorities, their serial numbers, and the web URLs where they might be mentioned in a Certificate Revocation List (CRL).

All of the referenced CRLs are downloaded, verified, processed, and correlated against the lists of unexpired website certificates.


Figure 2: CRLite Filter Generation Process

At the end, we have a set of all known issuers that publish CRLs we could use, the identification numbers of every certificate they issued that is still unexpired, and the identification numbers of every certificate they issued that hasn’t expired but was revoked.

With this knowledge, we can build a CRLite Filter.

Structure of A CRLite Filter

CRLite data comes in the form of a series of cascading Bloom filters, with each filter layer adding data to the one before it. Individual Bloom filters have a certain chance of false-positives, but using Certificate Transparency as an oracle, the whole Web PKI’s certificate corpus is verified through the filter. When a false-positive is discovered, the algorithm adds it to another filter layer to resolve the false positive.


Figure 3: CRLite Filter Structure

The certificate’s identifier is defined as shown in Figure 4:


Figure 4: CRLite Certificate Identifier

For complete details of this construction see Section III.B of the CRLite paper.
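As a rough illustration of how a lookup walks the cascade, here is a simplified sketch; layerContains() stands in for a Bloom filter membership test and this is not the production code:

// Even-indexed layers encode "revoked" sets; odd-indexed layers encode the
// false-positive corrections for the preceding layer.
const layerContains = (layer, certId) => layer.has(certId); // hypothetical membership test

function isRevoked(certId, layers) {
  for (let i = 0; i < layers.length; i++) {
    if (!layerContains(layers[i], certId)) {
      // Absent from this layer: an odd index means revoked, an even index means not revoked.
      return i % 2 === 1;
    }
  }
  // Present in every layer: the final layer has no false positives for the known
  // corpus, so the answer follows that layer's parity.
  return layers.length % 2 === 1;
}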

After construction, the included Web PKI’s certificate corpus is again verified through the filter, ensuring accuracy at that point-in-time.

Ensuring Filter Accuracy

A CRLite filter is accurate at a given point-in-time, and should only be used for the certificates that were both known to the filter generator, and for which there is revocation information.

We can know whether a certificate could be included in the filter if that certificate was delivered with a Signed Certificate Timestamp from a participating Certificate Transparency log that is at least one Maximum Merge Delay older than our CRLite filter date.

If that is true, we also determine whether the certificate’s issuer is included in the CRLite filter, by referencing our preloaded Intermediate data for a boolean flag reporting whether CRLite includes its data. Specifically, the CA must be publishing accessible, fresh, verified CRL files at a URL included within their certificates’ Authority Information Access data. This flag is updated with the same cadence as CRLite itself, and generally remains constant.

Firefox’s Revocation Checking Algorithm Today

Today, Firefox Nightly is using CRLite in telemetry-only mode, meaning that Firefox will continue to rely on OCSP to determine whether a website’s certificate is valid. If an OCSP response is provided by the webserver itself — via OCSP Stapling — that is used. However, at the same time, CRLite is evaluated, and that result is reported via Firefox Telemetry but not used for revocation.

At a future date, we will prefer to use CRLite for revocation checks, and only if the website cannot be validated via CRLite would we use OCSP, either live or stapled.

Firefox Nightly has a preference, security.pki.crlite_mode, which controls CRLite. Set to 1, it gathers telemetry as stated above. Set to 2, CRLite will enforce revocations in the CRLite filter, but still use OCSP if the CRLite filter does not indicate a revocation. A future mode will permit CRLite-eligible certificates to bypass OCSP entirely, which is our ultimate goal.

Participating Certificate Authorities

Only public CAs within the Mozilla Root Program are eligible to be included, and CAs are automatically enrolled when they publish CRLs. If a CA stops publishing CRLs, or problems arise with their CRLs, they will be automatically excluded from CRLite filters until the situation is resolved.

As mentioned earlier, if a CA chooses not to log a certificate to a known Certificate Transparency log, then CRLite will not be used to perform revocation checking for that certificate.

Ultimately, we expect CAs to be very interested in participating in CRLite, as it could significantly reduce the cost of operating their OCSP infrastructure.

Listing Enrolled Certificate Authorities

The list of CAs currently enrolled is in our Intermediate Preloading data served via Firefox Remote Settings. In the FAQ for CRLite on GitHub, there’s information on how to download and process that data yourself to see which CAs’ revocations are included in the CRLite state.

Notably, Let’s Encrypt currently does not publish CRLs, and as such their revocations are not included in CRLite. The CRLite filters will increase in size as more CAs become enrolled, but the size increase is modeled to be modest.

Portion of the Web PKI Enrolled

Currently CRLite covers only a portion of the Web PKI as a whole, though a sizable one: as generated over a period covering roughly December 2019, CRLite covered approximately 100M certificates in the Web PKI, of which about 750k were revoked.


Figure 5: Number of Enrolled Revoked vs Enrolled But Not Revoked Certificates

The whole size of the WebPKI trusted by Mozilla with any CRL distribution point listed is 152M certificates, so CRLite today includes 66% of the potentially-compatible WebPKI [Censys.io]. The missing portion is mostly due to CRL downloading or processing errors which are being addressed. That said, approximately 300M additional trusted certificates do not include CRL revocation information, and are not currently eligible to be included in CRLite.

Data Sizes, Update Frequency, and the Future

CRLite promises substantial compression of the dataset; the binary form of all unexpired certificate serial numbers comprises about 16 GB of memory in Redis; the hexadecimal form of all enrolled and unexpired certificate serial numbers comprises about 6.7 GB on disk, while the resulting binary Bloom filter compresses to approximately 1.3 MB.


Figure 6: CRLite Filter Sizes over the month of December 2019 (in kilobytes)

To ensure freshness, our initial target was to produce new filters four times per day, with Firefox users generally downloading small delta difference files to catch up to the current filter. At present, we are not shipping delta files, as we’re still working toward an efficient delta-expression format.

Filter generation is a reasonably fast process even on modest hardware, with the majority of time being spent aggregating together all unexpired certificate serial numbers, all revoked serial numbers, and producing a final set of known-revoked and known-not-revoked certificate issuer-serial numbers (mean of 35 minutes). These aggregated lists are then fed into the CRLite bloom filter generator, which follows the process in Figure 2 (mean of 20 minutes).

 


Figure 7: Filter Generation Time [source]

For the most part, faster disks and more efficient (but not human-readable) file formats would speed this process up, but the current speeds are more than sufficient to meet our initial goals, particularly while we continue improving other aspects of the system.

Our next blog post in this series, Part 3, will discuss the telemetry results that our current users of Firefox Nightly are seeing, while Part 4 will discuss the design of the infrastructure.

The post The End-to-End Design of CRLite appeared first on Mozilla Security Blog.

Web Application SecurityIntroducing CRLite: All of the Web PKI’s revocations, compressed

CRLite is a technology proposed by a group of researchers at the IEEE Symposium on Security and Privacy 2017 that compresses revocation information so effectively that 300 megabytes of revocation data can become 1 megabyte. It accomplishes this by combining Certificate Transparency data and Internet scan results with cascading Bloom filters, building a data structure that is reliable, easy to verify, and easy to update.

Since December, Firefox Nightly has been shipping with CRLite, collecting telemetry on its effectiveness and speed. As can be imagined, replacing a network round-trip with local lookups makes for a substantial performance improvement. Mozilla currently updates the CRLite dataset four times per day, although not all updates are currently delivered to clients.

Revocations on the Web PKI: Past and Present

The design of the Web’s Public Key Infrastructure (PKI) included the idea that website certificates would be revocable to indicate that they are no longer safe to trust: perhaps because the server they were used on was being decommissioned, or there had been a security incident. In practice, this has been more of an aspiration, as the imagined mechanisms showed their shortcomings:

  • Certificate Revocation Lists (CRLs) quickly became large, and contained mostly irrelevant data, so web browsers didn’t download them;
  • The Online Certificate Status Protocol (OCSP) was unreliable, and so web browsers had to assume if it didn’t work that the website was still valid.

Since revocation is still crucial for protecting users, browsers built administratively-managed, centralized revocation lists: Firefox’s OneCRL, combined with Safe Browsing’s URL-specific warnings, provide the tools needed to handle major security incidents, but opinions differ on what to do about finer-grained revocation needs and the role of OCSP.

The Unreliability of Online Status Checks

Much has been written on the subject of OCSP reliability, and while reliability has definitely improved in recent years (per Firefox telemetry; failure rate), it still suffers under less-than-perfect network conditions: even among our Beta population, which historically has above-average connectivity, over 7% of OCSP checks time out today.

Because of this, it’s impractical to require OCSP to succeed for a connection to be secure, and in turn, an adversarial monster-in-the-middle (MITM) can simply block OCSP to achieve their ends. For more on this, a couple of classic articles are:

Mozilla has been making improvements in this realm for some time, implementing OCSP Must-Staple, which was designed as a solution to this problem, while continuing to use online status checks whenever there’s no stapled response.

We’ve also made Firefox skip revocation information for short-lived certificates; however, despite improvements in automation, such short-lived certificates still make up a very small portion of the Web PKI, because the majority of certificates are long-lived.

Does Decentralized Revocation Bring Dangers?

The ideal in question is whether a Certificate Authority’s (CA) revocation should be directly relied upon by end-users.

There are legitimate concerns that respecting CA revocations could be a path to enabling CAs to censor websites. This would be particularly troubling in the event of increased consolidation in the CA market. However, at present, if one CA were to engage in censorship, the website operator could go to a different CA.

If censorship concerns do bear out, then Mozilla has the option to use its root store policy to influence the situation in accordance with our manifesto.

Does Decentralized Revocation Bring Value?

Legitimate revocations are either done by the issuing CA because of a security incident or policy violation, or they are done on behalf of the certificate’s owner, for their own purposes. The intention becomes codified to render the certificate unusable, perhaps due to key compromise or service provider change, or as was done in the wake of Heartbleed.

Choosing specific revocations to honor and refusing others dismisses the intentions of all left-behind revocation attempts. For Mozilla, it violates principle 6 of our manifesto, limiting participation in the Web PKI’s security model.

There is a cost to supporting all revocations – checking OCSP:

  1. Slows down our first connection by ~130 milliseconds (CERT_VALIDATION_HTTP_REQUEST_SUCCEEDED_TIME, https://mzl.la/2ogT8TJ),
  2. Fails unsafe, if an adversary is in control of the web connection, and
  3. Periodically reveals to the CA the HTTPS web host that a user is visiting.

Luckily, CRLite gives us the ability to deliver all the revocation knowledge needed to replace OCSP, and do so quickly, compactly, and accurately.

Can CRLite Replace OCSP?

Firefox Nightly users are currently only using CRLite for telemetry, but by changing the preference security.pki.crlite_mode to 2, CRLite can enter “enforcing” mode and respect CRLite revocations for eligible websites. There’s not yet a mode to disable OCSP; there’ll be more on that in subsequent posts.

This blog post is the first in a series discussing the technology for CRLite, the observed effects, and the nature of a collaboration of this magnitude between industry and academia. The next post discusses the end-to-end design of the CRLite mechanism, and why it works. Additionally, some FAQs about CRLite are available on GitHub.

The post Introducing CRLite: All of the Web PKI’s revocations, compressed appeared first on Mozilla Security Blog.

The Mozilla BlogExpanding Mozilla’s Boards in 2020

Mozilla is a global community that is building an open and healthy internet. We do so by building products that improve internet life, giving people more privacy, security and control over the experiences they have online. We are also helping to grow the movement of people and organizations around the world committed to making the digital world healthier.

As we grow our ambitions for this work, we are seeking new members for the Mozilla Foundation Board of Directors. The Foundation’s programs focus on the movement building side of our work and complement the products and technology developed by Mozilla Corporation.

What is the role of a Mozilla board member?

I’ve written in the past about the role of the Board of Directors at Mozilla.

At Mozilla, our board members join more than just a board, they join the greater team and the whole movement for internet health. We invite our board members to build relationships with management, employees and volunteers. The conventional thinking is that these types of relationships make it hard for the Executive Director to do his or her job. I wrote in my previous post that “We feel differently”. This is still true today. We have open flows of information in multiple channels. Part of building the world we want is to have built transparency and shared understandings.

It’s worth noting that Mozilla is an unusual organization. We’re a technology powerhouse with broad internet openness and empowerment at its core. We feel like a product organization to those from the nonprofit world; we feel like a non-profit organization to those from the technology industry.

It’s important that our board members understand the full breadth of Mozilla’s mission. It’s important that Mozilla Foundation Board members understand why we build consumer products, why it happens in the subsidiary and why they cannot micro-manage this work. It is equally important that Mozilla Corporation Board members understand why we engage in the open internet activities of the Mozilla Foundation and why we seek to develop complementary programs and shared goals.

What are we looking for?

Last time we opened our call for board members, we created a visual role description. Below is an updated version reflecting the current needs for our Mozilla Foundation Board.

Here is the full job description: https://mzl.la/MoFoBoardJD

Here is a short explanation of how to read this visual:

  • In the vertical columns, we have the particular skills and expertise that we are looking for right now. We expect new board members to have at least one of these skills.
  • The horizontal lines speak to things that every board member should have. For instance, to be a board member, you should have some cultural sense of Mozilla. They are a set of things that are important for every candidate. In addition, there is a set of things that are important for the board as a whole. For instance, international experience. The board makeup overall should cover these areas.
  • The horizontal lines will not change too much over time, whereas the vertical lines will change, depending on who joins the Board and who leaves.

Finding the right people who match these criteria and who have the skills we need takes time. We hope to have extensive discussions with a wide range of people. Board candidates will meet the existing board members, members of the management team, individual contributors and volunteers. We see this as a good way to get to know how someone thinks and works within the framework of the Mozilla mission. It also helps us feel comfortable including someone at this senior level of stewardship.

We want your suggestions

We are hoping to add three new members to the Mozilla Foundation Board of Directors over the next 18 months. If you have candidates that you believe would be good board members, send them to msurman@mozillafoundation.org. We will use real discretion with the names you send us.

The post Expanding Mozilla’s Boards in 2020 appeared first on The Mozilla Blog.

Open Policy & AdvocacyOpen Letter to Indian IT Minister by Mozilla, GitHub, and Cloudflare: Release draft intermediary liability rules, assuage concerns voiced during public consultation

Given the Indian government’s impending commitment to the Supreme Court to notify the intermediary liability amendments by January 15 2020, global internet organizations Mozilla, GitHub, and Cloudflare have penned an open letter to the Union Minister of Electronics & Information Technology, Shri. Ravi Shankar Prasad. The letter highlights significant concerns with the rules and calls for improved transparency by allowing the public an opportunity to see a final version of these amendments prior to their enactment.

An excerpt from the letter is extracted below, and the full letter is available online:

“On behalf of a group of global internet organisations with millions of users in India, we are writing to urge you to ensure the planned amendments to India’s intermediary liability regime allow for the Internet to remain an open, competitive, and empowering space for Indians. We understand and respect the need to ensure the internet is a safe space where large platforms take appropriate responsibility. However, the last version of these amendments which were available in the public domain suggest that the rules will promote automated censorship, tilt the playing field in favour of large players, substantially increase surveillance, and prompt a fragmentation of the internet in India that would harm users while failing to empower Indians.

The current safe harbour liability protections have been fundamental to the growth of the internet in India. They have enabled hosting platforms to innovate and flourish without fear that they would be crushed by a failure to police every action of their users. Imposing the obligations proposed in these new rules would place a tremendous, and in many cases fatal, burden on many online intermediaries – especially new organizations and companies. A new community or a startup would be significantly challenged by the need to build expensive filtering infrastructure and hire an army of lawyers.

Given your government’s commitment to the Supreme Court of India to notify these rules by January 15, 2020, it is vital that the public has the opportunity to see a final version of these amendments to help ensure that they assuage the concerns which have been voiced by a wide variety of stakeholders during the public consultation. We appeal for this increased transparency and we remain committed to working with you to achieve the broader objective of these amendments while allowing Indians to benefit from a global internet.”

 

About Mozilla
Mozilla is the not-for-profit behind the popular web browser, Firefox. We believe the Internet is a global public resource, open and accessible to all. We work to ensure it stays open by building products, technologies and programs that put people in control of their online lives, and contribute to a healthier internet. Mozilla is also leading a public petition to Shri. Ravi Shankar Prasad, India’s IT Minister, to make the latest draft of the intermediary liability amendments public prior to their enactment.

About GitHub
GitHub is the developer company. We make it easier for developers to be developers: to work together, to solve challenging problems, to create the world’s most important technologies. We foster a collaborative community that can come together—as individuals and in teams—to create the future of software and make a difference in the world.

About Cloudflare
Cloudflare, Inc. (NYSE: NET / www.cloudflare.com / @cloudflare) is on a mission to help build a better Internet. Cloudflare’s platform protects and accelerates any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare have all web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks.

The post Open Letter to Indian IT Minister by Mozilla, GitHub, and Cloudflare: Release draft intermediary liability rules, assuage concerns voiced during public consultation appeared first on Open Policy & Advocacy.

hacks.mozilla.orgFirefox 72 — our first song of 2020

2020 is upon us, folks. We’d like to wish everyone reading this a happy new year, wherever you are. As you take your first steps of the new year, figuring out what your next move is, you may find it comforting to know that there’s a new Firefox release to try out!

Version 72 to be exact.

One of the highlights that we are most proud of is that user gestures are now required for a number of permission-reliant methods, such as Notification.requestPermission(). User research commonly brings up permission prompt spam as a top user annoyance, so we decided to do something about it. This change reduces permission spam and strengthens users’ agency over their online experience.

This release brings several other new features, including DevTool improvements such as Watchpoints, WebSockets inspector improvements, and resource download times; support for CSS features like shadow parts, motion path, and transform properties; and JS/API features such as event-based form participation and the nullish coalescing operator.

Read on for more highlights. To find the full list of additions, check out the following MDN articles:

Now that we’ve moved to a 4-week browser release cycle, you’ll see fewer new features in each individual release, but features will be added to Firefox more often. This gives you faster access to new functionality and bug fixes. You can read our full rationale for the change in Moving Firefox to a faster 4-week release cycle.

DevTools improvements

First, we’ll look at Firefox 72 DevTools improvements in more detail.

Pause on variable access or change

Watchpoints are a new type of breakpoint that can pause execution when an object property gets read or set. You can set watchpoints from the context menu of any object listed in the Scopes panel.

setting watchpoints in the debugger, using options in the context menu of objects in the scopes panel

This feature is described in more detail in the Use watchpoints article on MDN, and Debugging Variables With Watchpoints in Firefox 72 on Hacks.

Firefox DevEdition only: Asynchronous Stacks in Console

Console stacks capture the full async execution flow for console.trace() and console.error(). This lets you understand scheduling of timers, events, promises, generators, etc. over time, which would otherwise be invisible.
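For instance, a console.trace() inside a timer callback will now also show the frame that scheduled the timer. A minimal illustration:

function scheduleWork() {
  setTimeout(() => {
    // With async stacks enabled, this trace also includes scheduleWork()
    // as the frame that scheduled the timeout.
    console.trace("work ran");
  }, 1000);
}

scheduleWork();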

an asynchronous callstack being shown in the javascript console

They are only enabled in Firefox Developer Edition for now. We are working to make this feature available to all users after improving performance. Async stacks will also be rolled out to more types of logs, and of course the Debugger.

SignalR formatting & download/upload size for WebSockets

Before shipping the new WebSocket inspector in 71, we had it available in Firefox DevEdition and asked for your input. We didn’t just get a lot of fantastic ideas; some of you even stepped up to contribute code. Thanks a lot for that, and keep it coming!

Messages sent in ASP.NET’s Core SignalR format are now parsed to show nicely-formatted metadata. The bug was filed thanks to feedback from the ASP.NET community and then picked up by contributor Bryan Kok.

Similarly, the community asked to have the total transfer size for download and upload available. This is now a reality thanks to contributor Hayden Huang, who took up the bug as their first Firefox patch.

Start and end times for Network resources

The Timings tab of the Network Monitor now displays timings for each downloaded resource, making dependency analysis a lot easier:

  • Queued — When the resource was queued for download.
  • Started — When the resource started downloading.
  • Downloaded — When the resource finished downloading.

And as always, faster and more reliable

Here are just a few highlights from our continued performance and quality investments:

  • In the Inspector, editing CSS is no longer blocked by CSP rules.
  • The Inspector‘s badge for Custom Elements now correctly opens the original script for source maps.
  • The Inspector now correctly preserves the selected element for <iframe>s when reloading.
  • The Debugger now loads faster when many tabs are open, by prioritizing visible tabs first.

CSS additions

Now let’s move on to the most interesting new CSS features in Firefox 72.

Shadow Parts

One problem with styling elements contained inside a Shadow DOM is that you can’t just style them from CSS applied to the main document. To make this possible, we’ve implemented Shadow Parts, which allow shadow hosts to selectively expose chosen elements from their shadow tree to the outside page for styling purposes.

Shadow parts require two new features. The part attribute exposes an element inside a shadow tree to the outside page:

<custom-element>
  <p part="example">A paragraph</p>
</custom-element>

The ::part() pseudo-element is then used to select elements with a specific part attribute value:

custom-element::part(example) {
  border: solid 1px black;
  border-radius: 5px;
  padding: 5px;
}

CSS Motion Path

Motion Path is an interesting new spec for all you animators out there. The idea here is that you can define a path shape and then animate a DOM node along that path. The spec proposes an alternative to animating transform: translate(), position properties like top, right, and so on, or other properties that often aren’t ideal and can result in very complex sets of keyframes.

With motion path, you define the shape of the path using offset-path:

offset-path: path('M20,20 C20,100 200,0 200,100');

Define an animation to animate the element between different values of the offset-distance property, which defines how far along the defined path you want the element to appear:

@keyframes move {
  0% {
    offset-distance: 0%;
  }

  100% {
    offset-distance: 100%;
  }
}

Then, animate the element using those keyframes:

animation: move 3000ms infinite alternate ease-in-out;

This is a simple example. There are additional properties available, such as offset-rotate and offset-anchor. With offset-rotate, you can specify how much you want to rotate the element being animated. Use offset-anchor to specify which background-position of the animated element is anchored to the path.

Individual transform properties

In this release the following individual transform properties are enabled by default: scale, rotate, and translate. These can be used to set transforms on an element, like so:

scale: 2;
rotate: 90deg;
translate: 100px 200px;

These can be used in place of:

transform: scale(2);
transform: rotate(90deg);
transform: translate(100px 200px);

Or even:

transform: scale(2) rotate(90deg) translate(100px 200px);

These properties are easier to write than the equivalent individual transforms, map better to typical user interface usage, and save you having to remember the exact order of multiple transform functions specified in the transform property.

JavaScript and WebAPI updates

If JavaScript is more your thing, this is the section for you. 72 has the following updates.

User gestures required for a number of permission-reliant methods

Notification permission prompts always show up in research as a top web annoyance, so we decided to do something about it. To improve security and avoid unwanted and annoying permission prompts, a number of methods have been changed so that they can only be called in response to a user gesture, such as a click event. These are Notification.requestPermission(), PushManager.subscribe(), and MediaDevices.getDisplayMedia().

By requiring a user gesture before the permission prompts are shown, Firefox significantly reduces permission spam, thereby strengthening users’ agency over their online experience.

So, for example, prompting for notification permission on initial page load is no longer possible. You now need something like this:

btn.addEventListener('click', function() {
  Notification.requestPermission();
  // Handle other notification permission stuff in here
});

For more detail on associated coding best practices for Notification permissions, read Using the Notifications API.

Nullish coalescing operator

The nullish coalescing operator, ??, returns its right-hand side operand when its left-hand side operand is null or undefined. Otherwise, it returns its left-hand side operand.

This is a useful timesaver in a number of ways, and it is also useful when you only consider null and undefined to be unwanted values, and not other falsy values like 0 and ''.

For example, if you want to check whether a value has been set and return a default value if not, you might do something like this:

let value;

if(!value) {
  value = 'default';
}

That’s a bit long, so you might instead use this common pattern:

let value;
value = value || 'default';

This also works OK, but will return unexpected results if you want to accept values of 0 or ''.

With ??, you can do this instead, which is concise and solves the problem described above:

let value;
value = value ?? 'default';

Event-based form participation

Event-based form participation is now enabled by default. This involves using the new FormData event, which fires when the form is submitted, but can also be triggered by the invocation of a FormData() constructor. This allows a FormData object to be quickly obtained in response to a formdata event firing, rather than needing to create it yourself — useful when you want to submit a form via XHR, for example.

Here’s a look at this feature in action:

formElem.addEventListener('submit', (e) => {
  // on form submission, prevent default
  e.preventDefault();

  // construct a FormData object, which fires the formdata event
  new FormData(formElem);
});

formElem.addEventListener('formdata', (e) => {
  console.log('formdata fired');

  // Get the form data from the event object
  let data = e.formData;

  // submit the data via XHR
  let request = new XMLHttpRequest();
  request.open("POST", "/formHandler");
  request.send(data);
});

Picture-in-picture for video now available on macOS & Linux

In the previous release post, we announced that Picture-in-Picture had been enabled in Firefox 71, albeit for Windows only. However, today we have the goods: this very popular feature is now available on macOS and Linux too!

Picture-in-Picture on macOS: a video being played in a separate overlay from the page where it is actually embedded.

The post Firefox 72 — our first song of 2020 appeared first on Mozilla Hacks - the Web developer blog.

Web Application SecurityFirefox 72 blocks third-party fingerprinting resources

Privacy is a human right, and is core to Mozilla’s mission. However many companies on the web erode privacy when they collect a significant amount of personal information. Companies record our browsing history and the actions we take across websites. This practice is known as cross-site tracking, and its harms include unwanted targeted advertising and divisive political messaging.

Last year we launched Enhanced Tracking Protection (ETP) to protect our users from cross-site tracking. In Firefox 72, we are expanding that protection to include a particularly invasive form of cross-site tracking: browser fingerprinting. This is the practice of identifying a user by the unique characteristics of their browser and device. A fingerprinting script might collect the user’s screen size, browser and operating system type, the fonts the user has installed, and other device properties—all to build a unique “fingerprint” that differentiates one user’s browser from another.

Fingerprinting is bad for the web. It allows companies to track users for months, even after users clear their browser storage or use private browsing mode. Despite a near complete agreement between standards bodies and browser vendors that fingerprinting is harmful, its use on the web has steadily increased over the past decade.

We are committed to finding a way to protect users from fingerprinting without breaking the websites they visit. There are two primary ways to protect against fingerprinting: to block parties that participate in fingerprinting, or to change or remove APIs that can be used to fingerprint users.

Firefox 72 protects users against fingerprinting by blocking all third-party requests to companies that are known to participate in fingerprinting. This prevents those parties from being able to inspect properties of a user’s device using JavaScript. It also prevents them from receiving information that is revealed through network requests, such as the user’s IP address or the user agent header.

We’ve partnered with Disconnect to provide this protection. Disconnect maintains a list of companies that participate in cross-site tracking, as well as a list of those that fingerprint users. Firefox blocks all parties that meet both criteria [0]. We’ve adapted measurement techniques from past academic research to help Disconnect discover new fingerprinting domains. Disconnect performs a rigorous, public evaluation of each potential fingerprinting domain before adding it to the blocklist.

Firefox’s blocking of fingerprinting resources represents our first step in stemming the adoption of fingerprinting technologies. The path forward in the fight against fingerprinting will likely involve both script blocking and API-level protections. We will continue to monitor fingerprinting on the web, and will work with Disconnect to build out the set of domains blocked by Firefox. Expect to hear more updates from us as we continue to strengthen the protections provided by ETP.

 

[0] A tracker on Disconnect’s blocklist is any domain in the Advertising, Analytics, Social, Content, or Disconnect category. A fingerprinter is any domain in the Fingerprinting category. Firefox blocks domains in the intersection of these two classifications, i.e., a domain that is both in one of the tracking categories and in the fingerprinting category.
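Expressed as a rough sketch (the data shapes here are hypothetical, not Firefox’s actual list format):

// A domain is blocked by this protection only if it is both a tracker and a fingerprinter.
const TRACKING_CATEGORIES = ["Advertising", "Analytics", "Social", "Content", "Disconnect"];

function isBlockedAsFingerprinter(domainCategories) {
  const isTracker = domainCategories.some((c) => TRACKING_CATEGORIES.includes(c));
  const isFingerprinter = domainCategories.includes("Fingerprinting");
  return isTracker && isFingerprinter;
}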

The post Firefox 72 blocks third-party fingerprinting resources appeared first on Mozilla Security Blog.

Mozilla Digital Memory BankEpisode 05- Marcia Knous

This week we bring you some excerpts from an interview with Marcia Knous of the Mozilla Store. She has some interesting insights into the people who buy Mozilla products, as well as commentary on the Mozilla community and a little history from the Netscape days.

Listen to the full interview here.

As always, please take a moment to contribute to the archive.

Running Time: 9:01

Download the .mp3.

Mozilla Digital Memory BankEpisode 04-Mike Beltzner

This week we feature highlights from an interview from last June with Mike Beltzner. Canadian to the bone, Mike fills this interview with hockey and Tim Horton’s, in addition to some compelling ideas regarding the open source movement. The full interview, with transcript, can be found here.

Enjoy the interview and please contribute to the Memory Bank. Much thanks to Gervase Markham for contributing his blog, which will be appearing in the archive soon.

~Ken

Mozilla Digital Memory BankDo You

We recently went out and interviewed George Mason University students who were Firefox users, to get their impressions of the FF user experience. We hadn’t received any video submissions yet, so it seemed a good way to show off our capacity to host video. Check out a few examples here, here, and here.

Looking through the videos, we realized we had the makings of a good promotional video to try to help get the word out about the Memory Bank. A little bit of video editing later, and we’d put together the first Mozilla Digital Memory Bank Promo Video:

Check it out– send it to anyone you think might be interested in the Memory Bank! Help get the word out!

(And if you really enjoyed the video, a longer version– the “director’s cut,” if you will– can be found here.)

In a similar vein, trying to get the word out, we’ve recently created a Facebook page and a LiveJournal page, in addition to our Youtube page.

Mozilla Digital Memory BankEpisode 12 Mitchell Baker

Last June, Mitchell was kind enough to fit us into her busy schedule and offer some history and thoughts on her experiences at Mozilla. It was an interesting interview, and I highly recommend listening to the full interview which you can find here.

Please take a moment to contribute to the archive.

Running time: 7:44.

Download the .mp3.

Mozilla Digital Memory BankEpisode 11 Mike Pinkerton

Camino project lead Mike Pinkerton met us here at George Mason University last June, and, after a few technical snafus on our end, gave us what I think was an extremely thoughtful and provocative interview. Being a veteran of the Netscape era, as well as working on a project not named Firefox, Mike’s interview is somewhat unique among those currently in the archive.

Listen to the full interview here.

Please take a moment and contribute to the archive.

Running time: 13:31

Download the .mp3.

Mozilla Digital Memory BankEpisode 10 Blake Ross

Firefox co-founder Blake Ross agreed to meet with us at a California public library last June, for an interview. We didn’t realize, however, the intricacies of reserving space in a study room. Several contradictory answers and one library card later, we finally were able to sit down with Blake. His ideas about Firefox, Mozilla, and open source were truly unique, and I highly recommend visiting the archive to hear the entire interview.

Listen to the full interview here.

Please take a moment and contribute to the archive.

Running time: 8:41

Download the .mp3.

Mozilla Digital Memory BankEpisode 09 David Baron

Longtime Mozilla developer David Baron sat down with us last June, and revealed some fascinating insights into the inner workings of the Mozilla community. In particular, his perspectives on Mozilla’s changes over time and the volunteer process are very intriguing.

Listen to the full interview here.

Please take a moment and contribute to the archive.

Running time: 12:26

Download the .mp3.

Mozilla Digital Memory BankEpisode 08 Rafael Ebron

This week we are excited to bring you highlights from an interview with former Mozilla Product Manager, Rafael Ebron. There are some very interesting observations about Mozilla’s marketing strategy, as well as some entertaining anecdotes about the naming of Firefox.

Listen to the full interview here.

Please take a moment and contribute to the archive.

Running time: 13:52

Download the .mp3.

Mozilla Digital Memory BankEpisode 07 Tristan Nitot

From across the Atlantic we bring you Tristan Nitot, president of Mozilla Europe. Highlights include a very interesting discussion of Mozilla Europe’s history as well as the French unemployment system.

Listen to the full interview here.

As always, please take a moment to contribute to the archive.

Running Time: 10:07

Download the .mp3.

Mozilla Digital Memory BankEpisode 06 Neil Deakin

This week we bring you some highlights from an oral history Olivia Ryan took from XUL developer Neil Deakin last June. Neil has some interesting insights into what it is like to move from volunteer to full-time Mozilla employee.

Listen to the full interview here.

Earlier this month we were able to chat with Alex Vincent in a marathon instant message interview. There is some great stuff here. You can view the transcript here.

As always, please take a moment to contribute to the archive.

Running Time: 8:56

Download the .mp3.

Mozilla VR BlogMozilla Announces Deal to Bring Firefox Reality to Pico Devices

Mozilla Announces Deal to Bring Firefox Reality to Pico Devices

For more than a year, we at Mozilla have been working to build a browser that was made to showcase the best of what you love about browsing, but tailor made for Virtual Reality.

Now we are teaming up with Pico Interactive to bring Firefox Reality to its latest VR headset, the Neo 2 – an all-in-one (AIO) device with 6 degrees of freedom (DoF) head and controller tracking that delivers key VR solutions to businesses. Pico’s Neo 2 line includes two headsets: the Neo 2 Standard and the Neo 2 Eye featuring eye tracking and foveated rendering. Firefox Reality will also be released and shipped with previous Pico headset models.

This means anytime someone opens a Pico device, they’ll be greeted with the speed, privacy, and great features of Firefox Reality.

Firefox Reality includes the ability to sync with your Firefox Account, enabling you to send tabs and sync history and bookmarks, which makes great content easy to discover. There’s also a curated section of top VR content, so there’s always something fresh to enjoy.

“We are pleased to be partnered with Pico to bring Firefox Reality to their users, especially the opportunity to reach more people through their large Enterprise audience,” says Andre Vrignaud, Head of Mixed Reality Platform Strategy at Mozilla. “We look forward to integrating Hubs by Mozilla to bring fully immersive collaboration to business.”

As part of Firefox Reality, we are also bringing Hubs by Mozilla to all Pico devices. In Hubs, users can easily collaborate online around virtual objects, spaces, and tasks - all without leaving the headset.

The virtual spaces created in Hubs can be used similarly to a private video conference room to meet up with your coworkers and share documents and photos, but with added support for all of your key 3D assets. You can fully brand the environment and avatars for your business, and with web-based access the meetings are just a link away, supported on any modern web browser.

Firefox Reality will be available on Pico VR headsets later in Q1 2020. Stay tuned to our mixed reality blog and Twitter account for more details.

about:communityFirefox 72 new contributors

With the release of Firefox 72, we are pleased to welcome the 36 developers who contributed their first code change to Firefox in this release, 28 of whom were brand new volunteers! Please join us in thanking each of these diligent and enthusiastic individuals, and take a look at their contributions:

Open Policy & AdvocacyBringing California’s privacy law to all Firefox users in 2020

2019 saw a spike of activity to protect online privacy as governments around the globe grappled with new revelations of data breaches and privacy violations. While much of the privacy action came from outside the U.S., such as the passage of Kenya’s data protection law and Europe’s enforcement of its GDPR privacy regulation, California represented a bright spot for American privacy.

Amidst gridlock in Congress over federal privacy rules, California marched forward with its landmark privacy law, the California Consumer Privacy Act (CCPA), which goes into effect on January 1, 2020. Mozilla has long been a supporter of data privacy laws that empower people — including CCPA. In fact, we were one of the few companies to endorse CCPA back in 2018 when it was before the California legislature.

The California Consumer Privacy Act (CCPA) expands the rights of Californians over their data – and provides avenues for the Attorney General to investigate and enforce those rights, as well as allowing Californians to sue. Californians now have the right to know what personal information is being collected, to access it, to update and correct it, to delete it, to know who their data is being shared with, and to opt-out of the sale of their data.

Much of what the CCPA requires companies to do moving forward is in line with how Firefox already operates and handles data. We’ve long believed that your data is not our data, and that privacy online is fundamental. Nonetheless, we are taking steps to go above and beyond what’s expected in CCPA.

Here’s how we are bringing CCPA to life for Firefox users.

CCPA rights for everyone.

When Europe passed its GDPR privacy law we made sure that all users, whether located in the EU or not, were afforded the same rights under the law.  As a company that believes privacy is fundamental to the online experience, we felt that everyone should benefit from the rights laid out in GDPR. That is why our new settings and privacy notice applied to all of our users.

With the passage and implementation of CCPA, we will do the same. Changes we are making in the browser will apply to every Firefox user, not just those in California.

Deleting your data.

One of CCPA’s key new provisions is its expanded definition of “personal data.” This expanded definition allows users to request that companies delete their user-specific data.

As a rule, Firefox already collects very little of your data. In fact, most of what we receive is to help us improve the performance and security of Firefox. We call this telemetry data. This telemetry doesn’t tell us about the websites you visit or searches you do; we just know general information, like how many tabs a Firefox user had open and how long their session was. We don’t collect telemetry about private browsing mode and we’ve always given people easy options to disable telemetry in Firefox. And because we’ve long believed that data should not be stored forever, we have strict limits on how long we keep telemetry data.

We’ve decided to go the extra mile and expand user deletion rights to include deleting this telemetry data stored in our systems. To date, the industry has not typically considered telemetry data “personal data” because it isn’t identifiable to a specific person, but we feel strongly that taking this step is the right one for people and the ecosystem.

In line with the work we’ve done this year to make privacy easier and more accessible to our users, the deletion control will be built into Firefox and will begin rolling out in the next version of the browser on January 7. This setting will provide users a way to request deletion for desktop telemetry directly from Firefox – and a way for us, at Mozilla, to perform that deletion.

For Firefox, privacy is not optional. We don’t think people should have to choose between the technology they love and their privacy. We think you should have both. That’s why we are taking these steps to bring additional protection to all our users under CCPA. And why we will continue to press in 2020 – through the products we build and the policies we advocate – for an Internet that gives people the privacy and security they deserve.

The post Bringing California’s privacy law to all Firefox users in 2020 appeared first on Open Policy & Advocacy.

Mozilla VR BlogHappy New Year from Hubs!

Happy New Year from Hubs!

As we wrap up 2019, The Hubs team says thank you to the Mozilla Mixed Reality Community for an incredible year! We’ve been looking back and we’re excited about the key milestones that we’ve hit in our mission to make private social VR readily available to the general public. At the core of what we’re doing, our team is exploring the ways that spatial computing and shared environments can improve the ways that we connect and collaborate, and thanks to the feedback and participation of our users and community as a whole, we got to spend a lot of time this year working on new features and experiments.

Early in the year, we wanted to dive into our hypothesis that social 3D spaces could integrate into our existing platforms and tools that the team was regularly using. We launched the Hubs Discord Bot back in April, which bridged chat between the two platforms and added an optional authentication layer to restrict access to rooms created with the bot to users in a given server. Since launching the Discord bot, we’ve learned more about the behaviors and frameworks that enable healthy community development and management, and we released a series of new features that supported multiple moderators, configurable room permissions, closing rooms, and more.

One of our goals for this year was to empower users to more easily personalize their Hubs experiences by making it easy to create custom content. This work kicked off with making Spoke available as a hosted web application, so creators no longer had to download a separate application to build scenes for Hubs. We followed with new features that improved how avatars could be created, shared, remixed, and discovered, and we wrapped up the year by releasing several pre-configured asset kits for building unique environments, starting with the Spoke Architecture Kit release that also included a number of ease-of-use feature updates.

We’ve also just had a lot of fun connecting with users and growing our team and community, and we’ve learned a lot about what we’re working on and how to improve Hubs for different use cases. When we joined Twitter, we got to start interacting with a lot more of you on a regular basis and we’ve loved seeing how you’ve been using Hubs when you share your own content with us! The number of new scenes, avatars, and even public events that have been shared within our community gets us even more excited for what we think 2020 can bring.

As we look ahead into the next year, we’ll be sharing a big update in January and go in-depth with work we’ve been doing to make Hubs a more versatile platform. If you want to follow along with our roadmap, you can keep an eye on the work we have planned on GitHub and follow us on Twitter @ByHubs. Happy 2020!

Mozilla UXPeople who listen to a lot of podcasts really are different

Podcast Enthusiasts and Podcast Newbies

Podcasts are quickly becoming a cultural staple. Between 2013 and 2018, the percent of Americans over age 12 who had ever listened to a podcast jumped from 27% to 44%, according to the Pew Research Center. Yet just 17% of Americans have listened to a podcast in the past week. So we wanted to know: What distinguishes people who listen to podcasts weekly, or even daily, from people who only listen occasionally? Do frequent and infrequent podcast listeners have different values, needs and preferences? To put it another way, are there different kinds of podcast listeners?

To explore this question, Mozilla did a series of surveys and interviews to understand how people listen to podcasts — how often they listen, how many shows they listen to, what devices they use, how they discover content, and what features of the listening experience matter most to them. Here’s what we found.

There is a subset of dedicated, frequent podcast listeners…and they listen a lot

We released a short survey on podcast listening habits to a representative sample of Americans (recruited through Survey Monkey) and a targeted group of audio enthusiasts (distributed via subreddits such as r/podcast and r/audiodrama and Mozilla’s social media accounts). In this survey, we asked people how often they listen to podcasts:

How often do you listen to podcasts (across all devices)?

Bimodal distribution: people listen never or always.

We found that 38% of our survey respondents listen to podcasts daily. Note that we asked this question for each device (i.e., How often do you listen on your phone? On a smart speaker? etc.) The graph above shows the highest listening frequency for each person. For example, someone who listens on Alexa a few times a month and on a phone daily would be classified as a daily listener. This could result in an underestimate of each respondent’s overall listening frequency.
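To make that classification rule concrete, here is a minimal Python sketch of collapsing per-device answers into a single frequency bucket per respondent. The device names, frequency labels, and sample data are hypothetical illustrations, not the survey’s actual coding scheme.

# Collapse per-device listening frequencies into one bucket per respondent
# by taking the highest frequency reported on any device.
FREQ_ORDER = ["never", "a few times a month", "a few times a week", "daily"]

def overall_frequency(per_device):
    """Return the highest listening frequency reported across all devices."""
    return max(per_device.values(), key=FREQ_ORDER.index)

respondent = {
    "smartphone": "daily",
    "smart speaker": "a few times a month",
    "laptop/desktop": "never",
}
print(overall_frequency(respondent))  # prints "daily"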

A bimodal pattern is emerging: People tend to either listen very infrequently (a few times a month) or very frequently (every day). At first, we found it surprising that podcast listenership in our survey was much more common than in Pew’s results. However, when we separated out the results by the Survey Monkey panel (which is roughly comparable to the general U.S. population) and our Reddit and social media channels, here’s what we found:

How often do you listen to podcasts (across all devices)?

We saw our Reddit users were much heavier podcast listeners than the general population

In the Survey Monkey panel, 56% of people at least occasionally listen to podcasts, which is still higher than Pew’s findings, but more comparable. In contrast, 91% of the people who accessed the survey via Reddit and Mozilla’s social media channels listen to podcasts at least occasionally, and 62% say they listen daily.

The listening distributions of these two populations are inverted. People who follow podcasting-related social media tend to listen a lot. This may seem like an obvious connection, but it suggests that we may find some interesting results if we look at the daily listeners and other podcast listeners separately.

Frequent and infrequent podcast listeners use different technologies

Smartphones are by far the dominant devices for podcast listening. But when we split apart listeners by frequency, we see that smartphone listening is more dominant among daily listeners, whereas laptop and desktop listening is more dominant among monthly listeners: 38% of podcast listeners use smartphones to listen daily; conversely, 27% of podcast listeners use laptops or desktops to listen a few times a month. We also found that frequent podcast listeners are more likely to use multiple types of devices to listen to podcasts.

How often do you listen to podcasts on these different devices?

Smartphones are dominant devices

This chart shows how often people listen to podcasts on particular types of devices (smartphones, laptops or desktops, smart speakers) for survey respondents who listen to podcasts at least a few times a month (n = 575).

This distinction in technology use also plays out when we look at the apps/software people use to listen. Apple Podcasts/Apple iTunes is the most popular listening app across all listeners. However, daily listeners use a broader distribution of apps. This could indicate that frequent listeners are experimenting to find the listening experience that best fits their needs. Monthly listeners, on the other hand, are much more likely to listen in a web browser (and may not even have a podcasting app installed on their phone at all). YouTube is popular across all listeners, but proportionately more common with infrequent listeners.

Which podcasting apps do you use?

Apple Podcasts continues to have a dominant position in the market

This chart displays podcast listeners, segmented by listening frequency, and the apps that they use. (Note that we didn’t explicitly ask how often people use each app. But we do know that, for example, of the 310 survey respondents who listen to podcasts daily, 85 use Apple Podcasts/Apple iTunes). For all listeners, Apple Podcasts/iTunes is the most popular platform. For weekly and monthly users, YouTube and web browsers are the next most popular platforms.

Why might infrequent listeners be more likely to listen in web browsers and on platforms like YouTube? Perhaps newer and infrequent podcast listeners haven’t developed listening routines, or haven’t committed to a particular device or app for listening. If they are accessing audio content ad hoc, the web may be easier and more convenient than using an app.

In addition to this broad-scale survey data, we can learn more from in-depth interviews with podcast listeners. Podcasting newbies and podcast enthusiasts have different behaviors — but what about their values? To dig into this question, we interviewed seven people who self-identify as podcast enthusiasts, and drew on fieldwork conducted over the summer in Seattle and three European cities to understand listening behaviors. We learned a few key things from those studies, particularly around how people think about subscriptions, and how they learn about new podcasts.

“Subscriptions” don’t fully capture how people actually listen

While avid podcast listeners may subscribe to a long list of shows (up to 72 among the people we interviewed), they tend to be devoted to a smaller subset of shows, typically between 2 and 10, that they listen to on a regular basis. With these “regular rotation” shows, listeners catch new episodes soon after they are released and might even go back and re-listen to episodes multiple times. For listeners who have a core set of shows in their regular rotation, diving into a completely new podcast requires a significant amount of mental effort and time.

Several people we interviewed use subscriptions as a “save for later” feature, storing shows that they might want to listen to some day. But having a long list of aspirational podcasts can be overwhelming. One listener, for example, only wants shows “to be in front of me when I’m in the mood…So I’m trying be meticulous about subscribing and unsubscribing. They should have a different action that you can do, like your list of ‘when I’m ready for something new.’”

Relationships with podcasts come and go. As one listener described it, every day, “I’m going to eat breakfast. But I definitely have gone through phases in my life. Every morning I eat oatmeal….And then suddenly I hate that…I kind of feel like my podcast listening comes and goes and waves like that.”

One listener we interviewed is more of a grazer, roaming from show to show based on topics she is currently interested in: “I’ll just jump around, and I’ll try different things…I usually don’t subscribe.” For her, the concept of subscription doesn’t fit her listening patterns at all.

These themes indicate that perhaps the notion of “subscription” isn’t nuanced enough to capture the complex and dynamic ways people develop and break relationships with podcast content.

Word of mouth and podcast cross-promotion are powerful ways to discover content

Podcast enthusiasts use many strategies to figure out what to listen to, but one strategy dominates: When we asked podcast enthusiasts how they discover new content, every single person brought up word of mouth. The interviewees all also found cross-promotion — when podcast hosts mention another show they enjoy — to be effective because it’s a recommendation that comes from a trusted voice.

The podcast enthusiasts we spoke with described additional ways they discover content,  including browsing top charts, looking to trusted brands, finding recommendations on social media, reading “best of” lists, and following a content producer from another medium (like an author or a TV star) onto a podcast. However, none of these strategies were as common, or as salient, as word of mouth or cross-promotion. Methods of content discovery can reinforce each other, producing a snowball effect. One listener noted, “I might hear it from like the radio. Sort of an anonymous source first, and then I hear it from a friend, ‘Like oh I heard about that. You just told me about it. I should definitely go check it out now.’” If listeners hear about a show from multiple avenues, they are more likely to invest time in listening to it.

Word of mouth goes both ways and podcast listeners’ enthusiasm for talking about podcasts isn’t limited to other fanatics. They often recommend podcasts to non-listeners, both entire shows and specific episodes that are contextually relevant. For example, one interviewee noted that, “Whenever I have a conversation about something interesting with someone I’ll say, ‘Oh I heard a Planet Money about that’ and I will refer them to it.” For frequent podcast listeners, podcast content serves as a kind of conversational currency.

What does this all mean?

Podcast listeners are not a homogenous group. Product designers should consider people who listen a little and people who listen a lot; people who are new to podcasts and people who are immersed in podcast culture; people who are still figuring out how to listen and people who have built strong listening habits and routines. These distinct groups each bring their own values and preferences to the listening experience. By considering and addressing them, we can design listening products that better fit diverse listening needs.

We also asked about listening behaviors beyond just podcasts. To learn more about that, check out our companion post, Listening: It’s not just for audio.

A sketch of two podcast presenters arguing

Sketch by Jordan Wirfs-Brock

 

Mozilla UXListening: It’s not just for audio

Understanding how people listen

When we first set out to study listening behaviors, we focused on audio content. After all, audio is what people listen to, right? It quickly became apparent, however, that people also often listen to videos and multimedia content. Listening isn’t just for audio — it’s for any situation where we don’t (or can’t) use our eyes and thus our ears dominate.

Why do we care that people are listening to videos as a primary mode of accessing content? Because in the past, technologists and content creators have often treated video, audio and text as distinct content types — after all, they are different types of file formats. But the people consuming content care less about the media or file type and more about the experience of accessing content. With advances in web, mobile, and ubiquitous technology, we’re seeing a convergence in media experience. We anticipate this convergence will continue with the emergence of voice-based platforms.

How do we know people are “listening” to video?

In our survey on podcast listening behaviors (find out more in our companion blog post), we asked what apps people use to listen. YouTube was the second most popular app, with 24% of podcast listeners. Only Apple Podcasts had more listeners:

Which of these do you use to listen to podcasts?

Youtube is the second most popular channel for podcasts, after Apple Podcasts.

Our survey also showed that YouTube and web browsers are more popular with infrequent podcast listeners and are often used as a secondary app. (More here!)

We found the prevalence of YouTube as a listening platform surprising, so we conducted a follow-up survey to get more information on the range of things people listen to in addition to podcasts. In this survey, deployed via the Firefox web browser, we asked which listening-related activities people do at least once a month. Here’s what we found:

60% of people surveyed listen to podcasts at least once a month.

We found that 60% of survey respondents said they “listen” to streaming videos at least once a month (note that we explicitly used the word listen, not watch). Of the range of listening activities we asked about, “listening” to streaming videos was more popular than listening to podcasts or listening to radio. In fact, it was more popular than every activity except listening to streaming music.

How and why are people listening to video?

We were also curious about how often people listen to video content, what platforms they use to listen to video content, and why they listen to video content.

We asked people how often they do various listening activities (listening to streaming music, listening to podcasts, listening to content on a smart speaker, listening to streaming videos, etc.) and then sorted them based on frequency:

People listen to music a lot; audio books are pretty rare.

On the left are activities people tend to do rarely (50% of audiobook listeners say they do this a few times a month or less). On the right are activities that people tend to do daily (more than 60% of streaming video listeners say they do this daily). Note that “listening” to videos, either on the TV or on the web, falls in the middle. People are split pretty evenly between doing this a few times a week and doing it daily.

We also asked open-ended questions about the type of content people listen to and why they listen. People use streaming video as a listening platform for three main reasons: (1) access to content, (2) adaptability to environmental contexts, (3) integration of features that aren’t common in podcasting apps.

Content: Access to content you can’t get anywhere else, and it’s all in one place

Our survey respondents noted that lots of audio-focused content is available only on YouTube or on the web. People pointed to video and audio podcasts (“A lot of podcasts are only uploaded to YouTube nowadays”) as well as lectures, debates, old radio programs, movies and TV. People valued both the availability of this content and the convenience of being able to listen to multiple types of content (audio or otherwise) in one place. As one person commented, “I can seamlessly switch from audio content (podcasts) to video content.”

Context: In situations where you simply can’t watch, you listen to video

One survey respondent listens to news from YouTube videos while driving. Another person says a web browser “allows me to listen at work in another tab.” In both of these situations, the person is listening in order to multitask and because they can’t use their eyes to watch the video. We also got a lot of comments about transitioning between watching and listening, or between devices, as people move from contexts where they can use their eyes to contexts where they can’t. One person wrote, “My dream scenario: start watching a video on my computer then pick up my phone and continue listening to the audio part of this video, then come back to my computer and continue with video.”

Features: Platforms like YouTube have features that aren’t common in podcasting apps

Many survey respondents also noted features that they valued from YouTube that aren’t available in some popular podcasting apps, like recommendations of what to listen to next, being able to comment on episodes, being able to pick up where they left off, and being able to manage playlists. One YouTube listener highlighted, “The fact that I get to comment on the content, rather than something like Apple’s Podcast app which doesn’t allow for discussion or feedback either to other listeners or to the creators.” Another pointed out, “Ability to bookmark and share at specific times.” Many of these features exist in some form in podcasting apps, but aren’t standard or aren’t as integrated into the listening experience.

What are the implications of listening to video?

As product designers and content producers, we tend to think about content in terms of media types — is this video, audio or text? But people experience media in a much more fluid manner. There is a flexibility inherent in a multimedia or multi-modal experience that allows people to listen, or watch, or read, or do any combination of the three when it best suits them. For example, one person uses YouTube as a listening platform because of the “auto-captions which I can export for future reading and citation.” Another listener treats video elements as supplementary to audio, noting: “I also like the added visual stimulation when I want it.” Instead of deciding “I need to watch a video now” or “I need to listen to audio content now,” people make media decisions based on what information is in content and how they can fit it into their lives.

Listening to video sketch

Sketch by Jordan Wirfs-Brock

Mozilla VR BlogHow much is that new VR headset really sharing about you?

How much is that new VR headset really sharing about you?

VR was big this holiday season - the Oculus Go hit #1 on Amazon’s electronics device list on Black Friday, and the Oculus Quest continues to sell. But in the spirit of Mozilla's Privacy Not Included guidelines, you might be wondering: what personal information is Oculus collecting while you use your device?

Reading the Oculus privacy policy, they say that they process and collect information like

  • information about your environment, physical movements, and dimensions
  • location-related information
  • information about people, games, content, apps, features, and experiences you interact with
  • identifiers that may be unique to you
  • and much much more!

That’s…a lot of data. Most of this data, like information about your physical movements, is required for the basic functionality of most MR experiences. For example, to track whether you avoid an obstacle in Beat Saber, your device needs to know the position of your head in space.

There’s a difference between processing and collecting. Like we mentioned, you can’t do much without processing certain data. Processing can either happen on the device itself, or on remote servers. Collecting data implies that it is stored remotely for a time period beyond what’s necessary for simply processing it.

Mozilla’s brand promise to our users is focused on security and privacy. So, while testing the Oculus Quest for Mozilla Mixed Reality products, we needed to know what kind of data was being sent to and from the device during a browsing session. The device has a developer mode that allows you to access advanced features by connecting it to your computer and using Android Debug Bridge (adb). From there, we used the developer mode and `adb` to install a custom trusted root certificate. This allows us to inspect the connections in depth.
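For readers curious what that setup looks like in practice, here is a rough sketch that drives adb from Python. The proxy address, certificate filename, and the choice of intercepting proxy are placeholders for illustration only, not a description of the exact tooling used for this analysis.

# Point a developer-mode headset at an intercepting proxy using adb.
# The proxy address and certificate name below are placeholders.
import subprocess

PROXY = "192.168.0.10:8080"        # machine running the intercepting proxy
CERT = "proxy-ca-cert.cer"         # proxy's root certificate, to be trusted on-device

def adb(*args):
    """Run an adb command and fail loudly if it errors."""
    subprocess.run(["adb", *args], check=True)

adb("devices")                                                   # confirm the headset is connected
adb("push", CERT, "/sdcard/Download/")                           # copy the certificate for installation
adb("shell", "settings", "put", "global", "http_proxy", PROXY)   # route device traffic through the proxy
# ... browse on the headset and capture traffic with the proxy ...
adb("shell", "settings", "put", "global", "http_proxy", ":0")    # clear the proxy when finished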

So, what is Facebook transmitting from your device back to Facebook servers during a routine browsing session? From the data we saw, they’re reporting configuration and telemetry data, such as information about how long it took to fetch resources. For example, here’s a graph of the amount of data sent over time from the Oculus VR headset back to Facebook.

Bytes sent to Facebook IPs over time

The data is identified by both an id, which is consistent across browsing sessions, and a session_id. The id appears to be linked to the device hardware, because linking a Facebook account didn’t change the identifier (or any other information as far as we detected).

In addition to general timing information, Facebook also receives reports on more granular, URL level timing information that uses a unique URL ID.

"time_to_fetch": "1",
"url_uid": "d8657582",
"firstbyte_time": "0",

Like computers, mixed reality (MR) devices can collect data on the sites you visit and applications you interact with. They also have the ability to collect and transmit large amounts of other data, including biometrically-derived data (BDD). BDD includes any information that may be inferred from biometrics, like gaze, gait, and other nonverbal communication methods. 6DOF devices like the Oculus Quest track both head and body movement. Other devices, like the MagicLeap One and HoloLens 2, also track gaze. This type of data can reveal intrinsic characteristics about users, such as their height. Information about where they look can reveal details about a user’s sexual preferences and powerful insights into their psychology. Innocuous data like facial movements during a task have been used in research to predict high or low performers.

Fortunately, even though its privacy policy would allow it to, today Facebook does not appear to collect any of this MR-specific information from your Oculus VR headset. Instead, it focuses on collecting data about timing, application version, and other configuration and telemetry data. That doesn’t mean that they can’t do so in the future, according to their privacy policy.

In fact, Facebook just announced that Oculus VR data will now be used for ads if users are logged into Facebook. Horizon, Facebook's social VR experience, requires a linked Facebook account.

In addition to the difference between processing and collecting explained above, there’s a difference between committing to not collecting and simply not collecting. It’s not enough for Facebook to just not collect sensitive data now. They should commit not to collect it in the future. Otherwise, they could change the data they collect at any time without informing users of the change. Until BDD is protected and regulated, we need to be constantly vigilant.

Currently, BDD (and other data that MR devices can track) lacks protections beyond whatever is stipulated in the privacy policy (which is regulated by contract law), so companies often reserve the right to collect and disseminate all the information they might possibly want to, knowing that consumers rarely read (let alone comprehend) the legalese they agree to. It’s time for regulators and legislators to take action and protect sensitive health, biometric, and derived data from misuse by tech companies.

SeaMonkeyMoving sale!

Actually,  not really.   We’ve kinda moved away, though we still have a presence and there’s nothing to sell…

If it wasn’t official then, it’s official now.  Mozilla has decided to switch off irc.mozilla.org in March 2020.  In its place, they’re going to use Riot/Matrix.

I just want to take this opportunity to thank all those hardworking sysops/ircops at irc.mozilla.org for the plethora of years of protecting the irc servers from undesirables [I do have a much better set of language, but this is a public, family friendly blog…soooo… use your imagination. 😛 ].

That said, I just want to make sure everyone who still supports SeaMonkey moseys on over to Freenode’s #seamonkey channel.

Thanks irc.mozilla.org!  As they say,  irc.mozilla.org is dead… or dying..  Long Live irc.mozilla.org!   [kinda reminiscent of SCL3 is dead! Long live SCL3!]

:ewong

PS: And yes..  we’re still around.

Mozilla L10NL10n Report: December Edition

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet. 

New community/locales added

  • Kabardian

New content and projects

What’s new or coming up in Firefox desktop

Firefox 72 is currently in Beta. The deadline to ship localization changes in this version is approaching fast, and will be on December 24th. For the next version, the deadline will be on January 28th.

Most of the new strings are in the onboarding and Content Feature Recommendations (CFR). You can see them in the What’s New panel in the app menu.

What’s new or coming up in mobile

There is a lot going on with mobile these days, especially regarding the transition from the Firefox for Android browser (internal name Fennec) to a brand new browser (currently Firefox Preview, internal name Fenix).

Since the transition is expected to happen some time early 2020 (exact plans are still being discussed internally), we wanted to make a call to action to localizers to start now. We are still waiting for the in-app language switcher to be implemented, but since it is planned for very soon, we think it’s important that localizers get access to strings so they can complete and test their work in time for the actual release of Fenix (final name to be determined still).

The full details about all this can be found in this thread here. Please reach out directly to Delphine in order to activate Fenix in Pontoon for your locale (requests from managers only please), or if you have any questions.

Looking forward to the best localized Android browser yet!

What’s new or coming up in web projects

Mozilla.org

We added a few more pages recently. Though some pages are quite long, they do contain a lot of useful information on the advantages of using Firefox over other browsers. They come in handy when you want to promote Firefox products in your language.

New:

  • firefox/compare.lang
  • firefox/windows-64-bit.lang
  • firefox/welcome/page5.lang

Updates:

  • firefox/campaign-trailhead.lang
  • firefox/new/trailhead.lang
  • firefox/products/developer-quantum.lang
WebThings Gateway

This is a brand new product. Mozilla WebThings is an open platform for monitoring and controlling devices over the web. It is a software distribution for smart home gateways focused on privacy, security, and interoperability. Essentially, it is a smart home platform for bridging new and existing Internet of Things (IoT) devices to the web in a private and secure way.

More information can be found on the website. Speaking of the website, there is a plan to make the site localizable early next year. Stay tuned!

The initial localized content, translated by contributors, was imported from GitHub. Once imported, the localized content is by default in “translated” state. Locale managers and translators, please review these strings soon, as they go directly to production.

What’s new or coming up in SUMO

This past month has been really busy for the community and for our content manager. We got new and updated articles for Firefox 71 on desktop, as well as for the release of several mobile products: Firefox Preview and Firefox Lite.

Following is a selection of interesting new articles that have been translated:

Newly published localizer facing documentation

Style Guides:
Obsolete:

The Mozilla Localization Community page on Facebook has shut down. To find out how this decision was reached, please read it here.

Events

Three localization events were organized this quarter.

  • The Mozilla Nativo Workshop was held on the 28th – 29th of October in Antigua Guatemala. Localizers from nine localization communities attended the event.
  • The Bengali localization workshop took place in Dhaka, Bangladesh on the 9th – 10th of November. The details of the event were well documented by two l10n contributors in their blogs:  Tanha Islam and Monirul Alom.

    Bengali localization community

The weekend event was widely reported in the local press and social media in Bengali:

    • http://bit.ly/2r26ENr
    • http://bit.ly/2OpEZOy
    • https://www.be.bangla.report/post/45498-cfmmKTlib
    • http://bit.ly/2XrBJ9i
    • http://bit.ly/2CU1ciq
    • https://techshohor.com/161802/
  • The Arabic localization meetup was organized in Tunis, Tunisia on the 6th – 7th of December. The hosting community welcomed visiting localization contributors from Bahrain, Jordan, and the Palestinian territories. During the two-day workshop, the attendees discussed major challenges facing the geographically distributed community, identified better ways to collaborate, and defined steps and processes to onboard and retain new contributors.

Want to showcase an event coming up that your community is participating in? Reach out to any l10n-driver and we’ll include it (see links to emails at the bottom of this report).

Friends of the Lion

  • Kudos to Safa, one of the Arabic locale managers, who single-handedly reviewed more than 500 pending suggestions and reviewed and updated the usage of Mozilla brands in the Firefox desktop product. He is also leading the effort to improve communication between community members and the onboarding process for new contributors. Keep up the good work!

Know someone in your l10n community who’s been doing a great job and should appear here? Contact one of the l10n-drivers and we’ll make sure they get a shout-out (see list at the bottom)!

Useful Links

Questions? Want to get involved?

Did you enjoy reading this report? Let us know how we can improve by reaching out to any one of the l10n-drivers listed above.

Mozilla UXHow people really, really use smart speakers

More and more people are using smart speakers everyday. But how are they really using them? Tawfiq Ammari, a doctoral candidate at the University of Michigan, in conjunction with researchers at Mozilla and Yahoo, published a paper which sheds some light on the question. To do this, he gathered surveys and user logs from 170  Amazon Alexa and Google Home users, and interviewed another 19 users, to analyze their daily use of voice assistants.

Users of both Google Home and Alexa devices can access a log showing all the interactions they’ve had with their device. Our 170 users gave us a copy of this log after removing any personal information, which meant we could understand what people were really using their devices for, rather than just what they remembered using their devices for when asked later. Together, these logs contained around 259,164 commands.

We collected 193,665 commands on Amazon Alexa which were issued between May 2015 and August 2017, a period of 851 days. On average, the datasets for our 82 Amazon Alexa users span 210 days. On days when they used their VA, Alexa users issued, on average, 18.2 commands per day. We collected 65,499 commands on Google Home between September 2016 and July 2017, a period of 293 days. On average, the datasets for the 88 Google Home users span 110 days. On days when they used their VA, Google Home users issued, on average, 23.2 commands per day, with a median of 10.0 commands per day.

For both Amazon Alexa and Google Home, the top three command categories were listening to music, hands-free search, and controlling IoT devices. The most prevalent command for Amazon Alexa was listening to music, while Google Home was used most for hands-free search. We also found a lot of items in the logs reflecting that both devices didn’t often understand queries, or mis-heard other conversation as commands — that’s 17% in the case of Google Home and 11% in the case of Alexa, although those aren’t quite comparable because of the way that each device logs errors.

People used their smart speakers for all sorts of searches. For example, some of our respondents use VAs to convert measurement units while cooking. Others used their VAs to look up trivia with friends. Users also searched for an artist who sang a particular song, or looked for a music album under a specific genre (e.g., classical music).

The third largest category was controlling Internet of Things (IoT) devices, making up about 10% of the Google Home commands and about 17% of the Alexa commands. These were most frequently turning smart lights on and off, although also included controlling smart thermostats and changing light colors. Users told us in interviews that they were frustrated by some of the aspects of IoT control. For example, Brad told us that he was frustrated that when he asked the smart speaker in his kitchen to “turn the light off,” it wouldn’t work. He had to tell it to “turn the kitchen light off”.

We also found a long list of particular uses of smart speakers: asking for jokes, getting weather reports, and setting timers or alarms, for example. One thing we found interesting was that on both platforms there were nearly twice as many requests to turn the volume down as requests to turn the volume up, which suggests that default volume levels may be set too high for most homes.

Despite their use of voice assistants, our interviewees had some real concerns about their voice assistants. Both Amazon Alexa and Google Home provide user logs where users can view their voice commands. They both also provide a feature to “mute” their VAs.  While most of our survey respondents were aware of the user history logs (~70%), more than a quarter of our respondents did not know that they could delete entries in their logs and only a small minority (~11%) had viewed or deleted entries in their logs.

Users also worried about whether their voice assistant was “listening all the time.” This was particularly contentious when family members and friends became “secondary users” of the voice assistant just by being in the same physical space. For example, Harriet told us that her “in-laws were mortified that someone could hack in and see what I’m doing, but what are they going to learn?”

Other users were worried about how their data was being processed on cloud services and shared with third party apps. John noted that he was concerned about how VAs “reach out to…third party services” when for example asking about the weather. He was concerned that he knew very little about what information is sent to third party services and how these data are stored.

While Mozilla has no plans to make a smart speaker, we do think it’s important to share our research as part of our mission to ensure that the Internet is a public resource, open and accessible to all. As more people install voice assistants in their homes, designers, engineers, and policy makers need to grapple with issues of usability and privacy. We take an advocacy stance, arguing that as personal assistance become part of people’s daily experiences, we have the responsibility to study their use, and make design and policy recommendations that incorporate users’ needs and address their concerns.

Mozilla VR BlogBrowsing from the Edge

Browsing from the Edge

We are currently seeing two changes in computing: improvements in network bandwidth and latency brought on by the deployment of 5G networks, and a large number of low-power mobile devices and headsets. This provides an opportunity for rich web experiences, driven by off-device computing and rendering, delivered over a network to a lightweight user agent.

As we’ve improved our Firefox Reality browser for VR headsets and the content available on the web kept getting better, we have learned that the biggest things limiting more usage are the battery life and compute capabilities of head-worn devices. These are designed to be as lightweight, cool, and comfortable as possible - which is directly at odds with hours of heavy content consumption. Whether it’s for VR headsets or AR headsets, offloading the computation to a separate high-end machine that renders and encodes the content suitable for viewing on a mobile device or headset can enable potentially massive scenes to be rendered and streamed even to low-end devices.

Mozilla’s Mixed Reality team has been working on embedding Servo, a modern web engine which can take advantage of modern CPU and GPU architectures, into GStreamer, a streaming media platform capable of producing video in a variety of formats using hardware-accelerated encoding pipelines. We have a proof-of-concept implementation that uses Servo as a back end, rendering web content to a GStreamer pipeline, from which it can be encoded and streamed across a network. The plugin is designed to make use of GPUs for hardware-accelerated graphics and video encoding, and will avoid unnecessary readback from the GPU to the CPU which can otherwise lead to high power consumption, low frame rates, and additional latency. Together with Mozilla’s Webrender, this means web content will be rendered from CSS through to streaming video without ever leaving the GPU.

Today, the GStreamer Servo plugin is available from our GitHub repo, and can be used to stream 2D non-interactive video content across a network. This is still a work in progress! We are hoping to add immersive, interactive experiences, which will make it possible to view richer content on a wide set of mobile devices and headsets. Contact mr@mozilla.com if you’re looking for specific support for your hardware or platform!
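For a feel of the kind of pipeline involved, here is a minimal sketch using GStreamer’s Python bindings. Because the plugin’s Servo-backed source element is not named here, videotestsrc stands in for it, and the encoder settings, host, and port are placeholders; this is an illustration of an encode-and-stream pipeline under those assumptions, not the plugin’s actual code.

# Encode a live video source and stream it over RTP/UDP with GStreamer.
# Swap videotestsrc for the Servo-backed source element to stream web content.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

pipeline = Gst.parse_launch(
    "videotestsrc is-live=true ! videoconvert ! "
    "x264enc tune=zerolatency bitrate=2000 ! rtph264pay ! "
    "udpsink host=192.168.0.42 port=5000"
)
pipeline.set_state(Gst.State.PLAYING)

# Block until an error or end-of-stream message arrives, then clean up.
bus = pipeline.get_bus()
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                       Gst.MessageType.ERROR | Gst.MessageType.EOS)
pipeline.set_state(Gst.State.NULL)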

The Mozilla BlogMore Questions About .org


A couple of weeks ago, I posted a set of questions about the Internet Society’s plan to sell the non-profit Public Interest Registry (PIR) to Ethos capital here on the Mozilla blog.

As the EFF recently explained, the stakes of who runs PIR are high. PIR manages all of the dot org domain names in the world. It is the steward responsible for ensuring millions of public interest orgs have domain names with reliable uptime and freedom from censorship.

The importance of good dot org stewardship spurred not only Mozilla but also groups like  EFF, Packet Clearing House and ICANN itself to raise serious questions about the sale.

As I noted in our original post, a private entity managing the dot org registry isn’t an inherently bad thing — but the bar for it being a good thing is pretty high. Strong rights protections, price controls and accountability mechanisms would need to be in place for a privately run PIR to be trusted by the dot org community. Aimed at the Internet Society, Ethos and ICANN, our questions focused on these topics, as well as the bidding process around the sale.

On Monday, Ethos CEO Erik Brooks published a blog post replying to Mozilla’s questions. The public response is appreciated — an open conversation means more oversight and more public engagement.

However, there are still critical questions about accountability and the bidding process that have yet to be answered before we can say whether this sale is good or bad for public interest organizations. These questions include:

1. For the Internet Society: what criteria, in addition to price, were used to review the bids for the purchase of PIR? Were the ICANN criteria originally applied to dot org bidders in 2002 considered? We realize that ISOC may not be able to disclose the specific bidders, but it’s well within reason to disclose the criteria that guided those bidders.

2. For Ethos: will accountability mechanisms such as the Stewardship Council and the incorporation of PIR as a public benefit corporation be in place before the sale closes? And, will outside parties be able to provide feedback on the charters for the B-corp before they are finalized? Both are essential if the mechanisms are going to be credible.

3. Finally, and possibly most importantly, for ICANN: will you put a new PIR contract in place as a condition of approving the deal? If so, will it provide robust oversight and accountability measures related to service quality and censorship issues?

We need much more information — and action — about this deal before it goes ahead. It is essential that Ethos and the Internet Society not close the PIR deal — and that ICANN does not approve the deal — until there are clear, strong provisions in place that protect service quality, prevent censorship and satisfy the dot org community.

As I wrote in my previous blog, Mozilla understands that a balance between commercial profit and public benefit is critical to a healthy internet. Much of the internet is and should be commercial. But significant parts of the internet — like the dot org ecosystem — must remain dedicated to the public interest.

The post More Questions About .org appeared first on The Mozilla Blog.

hacks.mozilla.orgPresenting the MDN Web Developer Needs Assessment (Web DNA) Report

Meet the first edition

We are  very happy to announce the launch of the first edition of a global, annual study of designer and developer needs on the web: The MDN Web Developer Needs Assessment. With your participation, this report is intended to shape the future of the web platform.

The MDN Web DNA Report 2019.

On single-vendor platforms, a single entity is responsible for researching developer needs. A single organization gets to decide how to address needs and prioritize for the future. On the web, it’s not that straightforward. Multiple organizations must participate in feature decisions, from browser vendors to standards bodies and the industry overall. As a result, change can be slow to come. Therefore, pain points may take a long time to address.

In discussions with people involved in the standardization and implementation of web platform features, they told us: “We need to hear more from developers.”

And that is how the MDN Web Developer Needs Assessment came to be. We aspire to represent the voices of developers and designers working on the web. We’ve analyzed the data you provided, and identified 28 discrete needs. Then, we sorted them into 14 different themes. Four of the top 5 needs relate to browser compatibility, our #1 theme. Documentation, Debugging, Frameworks, Security and Privacy round out the top 10.

DNA survey fundamentals

Like the web community itself, this assessment is not owned by a single organization. The survey was not tailored to fit the priorities of participating browser vendors, nor to mirror other existing assessments. Our findings are published under the umbrella of the MDN Product Advisory Board (PAB). The survey used for data collection was designed with input from more than 30 stakeholders. They represented PAB member organizations, including browser vendors, the W3C, and industry colleagues.

This report would not exist without the input of more than 28,000 developers and designers from 173 countries. Thanks to the thousands of you who took the twenty minutes to complete the survey. Individual participants from around the world contributed more than 10,000 hours of insight. Your thoughtful responses are helping us understand the pain points, wants, and needs of people working to build the web.

Where do we go from here

The input provided by survey participants is already influencing how browser vendors prioritize feature development to address your needs, both on and off the web. By producing this report annually, we will have the means to track changing needs and pain points over time. In fact, we believe developers, designers, and all stakeholders should be able to see the impact of their efforts on the future of the web we share.

You can download the report in its entirety here:

MDN Web DNA Report

Want to learn more about MDN Web Docs? Join the MDN Web Docs community, subscribe to our weekly newsletter, or follow MozDevNet on Twitter, to stay in the know.

Got a specific question about the MDN DNA Survey? Please share your constructive feedback and questions here or tweet us under the #mdnWebDNA hashtag.

The post Presenting the MDN Web Developer Needs Assessment (Web DNA) Report appeared first on Mozilla Hacks - the Web developer blog.

The Mozilla BlogKeeping the Internet Open & Accessible To All As It Evolves: Mozilla Research Grants

We are very happy to announce the results of our Mozilla Research Grants for the second half of 2019. This was an extremely competitive process, and we selected proposals that address seven strategic priorities for the Internet and for Mozilla. The selected projects include studies of privacy in India and Indonesia, proposals to rethink how we might manage personal data, and explorations of the future of voice interfaces. The Mozilla Research Grants program is part of our commitment to being a world-class example of using inclusive innovation to impact culture, and it reflects Mozilla’s broader commitment to open innovation.

2019 recipients (lead researcher, institution, project title):

  • Nicola Dell, Information Science, Cornell University (New York, NY, USA): Analyzing Perceptions of Privacy and Security Among Small Business Owners in India and Indonesia
  • Sangeeta Mahapatra, Institute of Asian Studies, German Institute of Global and Area Studies (Hamburg, Germany): Digital surveillance and its chilling effect on journalists: Finding strategies and solutions to safely seek and share information online
  • Jennifer King, Center for Internet and Society, Stanford Law School (Stanford, CA, USA): Exploring User Perceptions of Personal Data Ownership and Management
  • Nick Nikiforakis, Department of Computer Science, Stony Brook University (Stony Brook, NY, USA): BreakHound: Automated and Scalable Detection of Website Breakage
  • Janne Lindqvist, Aalto University (Espoo, Finland): Understanding Streaming Video User Experiences
  • Jordan Wirfs-Brock, Department of Information Science, University of Colorado Boulder (Boulder, CO, USA): Creating an Open, Trustworthy News and Information Ecosystem for Voice
  • Alisa Frik, International Computer Science Institute / UC Berkeley (Berkeley, CA, USA): Exploring the Boundaries of Passive Listening in Voice Assistants

The post Keeping the Internet Open & Accessible To All As It Evolves: Mozilla Research Grants appeared first on The Mozilla Blog.

Firefox UXReflecting on “It’s Our Research” at Mozilla

Authored by Jennifer Davidson, with contributions from Julia Starkov, Jim Thomas, Marissa (Reese) Wood, and Michael Verdi

I thought we involved stakeholders in research pretty perfectly here at Mozilla. Stakeholders come to research studies, listen attentively to research report-outs, and generally value the work our team does. Then I read “It’s Our Research”. I still think the Firefox User Research team has great buy-in across the organization, but after reading Tomer Sharon’s book, I see so many more things we could improve on! And what fun is a job if there’s nothing to improve, right?

I’d like to call in some stakeholders to help me tell four short stories related to four pieces of advice Tomer Sharon provides in his book. (By the way, there are so many ideas in that book, it was hard to pick just four.)

Let’s start with the “failures”. Failure is a big word; really, these are two examples where I could’ve done better as a user researcher.

Never skip a field debriefing

An artistic bridge in Salem, Oregon, with a clear blue sky. The weather was just too nice to debrief.

Tomer Sharon recommends that we never skip a debrief. A debrief covers those precious moments immediately after interviewing a participant or visiting a participant’s home, capturing your first reactions after the experience. Sharon explains that debriefs are important because they help prepare for analysis and help stakeholders remember the sessions. On the Firefox UX team, we have a great debrief culture. We reserve time after interviews, even remote ones, to capture initial thoughts. This practice is more formal when we do field research — we not only have an individual debrief form that we fill out immediately after each visit, but we also do group debriefs after each interview to talk together about what we observed. I would say we always do debriefs after every interview or session with our users, but then I’d be lying. Let’s have Julia Starkov, a Mozilla IT UX designer, talk about her experience when we skipped a debrief in the field.

The time we skipped a debrief, as told by Julia:

We wrapped up our Salem user research on a Friday afternoon; we were staying in different cities and some of us had plans right after the interview. We were also parked outside of the participant’s house so we decided to skip the group debrief and get the weekend started. While this felt like a huge relief at the time, I regret not pushing for us to meet at a cafe and wrap up the research together. By the time the weekend came and went, and I flew back to California, it was hard to recall specific parts of that interview, but mainly, I felt we could have definitely benefited from starting synthesis right away as a group. Overall, I thought the research effort was a total success, but I feel like I would have retained more insights and memories from the experience with a bit more team-time at the end.

Always include an executive summary

An executive summary is a few sentences, maybe one slide (if the report is in slide format), that summarizes a research study. I’ll tell you — it is the hardest part of a research study to write. Let’s have Marissa (Reese) Wood, Firefox VP of Product, explain the importance of an executive summary.

The importance of an executive summary, as explained by Reese:

The purpose of an executive summary is to describe the main and important points of your document in such a way that it will engage the readers. The intent is to have them want to learn more and continue to read the document. This is important because the reader will pay greater attention to detail and read the doc all the way through if they are engaged. In short, without a good executive summary, the document will likely not get the attention it deserves.

Reese puts it nicely, but I’ll reiterate: without an executive summary, people may not pay any attention to the study results (a researcher’s worst nightmare!). So over the past year, the Firefox UR team has been iterating on our executive summary style to better fit the needs of our current executives. We try to be more succinct with our executive summaries than before, with clear takeaways and calls to action for each report.

Now let’s move on to talk about a couple of successes.

Make it easy to participate

A picnic table overlooking the Puget Sound, with a coffee, a packet, and post-it notes on it. We took our research packets everywhere that week. Also, it’s never too nice to debrief.

Be prepared — not only with great research practices like running a pilot (a practice research session before the rest of the study, to work out the kinks in a research protocol) — but also by making it easy for stakeholders to participate. This means that the researcher, or a supportive operations manager, needs to do some administrative tasks. When you have stakeholders come with you on home visits, get them the materials they need to take notes or photos. With every field research trip I do, I find little tweaks I can make to improve the stakeholder experience. Earlier this year, I took a little extra planning time and, with a great example from other team members (particularly Gemma!), created a packet in a waterproof document folder for each stakeholder (waterproof is important if you’re doing research in the rainy Pacific Northwest like we were). It felt like making personalized goodie bags for a birthday party. Each packet contained everything a stakeholder needed to participate: a pen, a sharpie, a note-taking guide, a photograph shot list, post-its, a legal pad, and a label with their name on it. Jim Thomas, Firefox Product Manager, will talk about his experience participating in field research.

The time I participated in field research, as told by Jim:

As a Product Manager, I know how important it is to spend time connecting with your users, learning how they think about your product, observing what they do and how they react. I’ve done this informally throughout my career, but formal field research seemed like it would involve a lot more rigor. In fact, it was even more rigorous, but it didn’t feel like it thanks to the prep work put in by the team. Every session had our roles and responsibilities planned ahead of time, with relevant materials in a convenient package. Debriefing after each session was simple even at restaurants or picnic tables because everything we needed was at our fingertips. All the process was handled for me so I was able to focus on my most important goal: learning about our users and their perspectives.

Analyze together

Sticky notes in various colors on a whiteboard, arranged in columns: how people learn to use new software and apps, as pulled together by the team. This was one of four walls that were covered in sticky notes by the end of the analysis.

Sharon recommends analyzing results together with stakeholders. He mentions that ideal outcomes from analyzing together are:

  • “Stakeholders are more likely to act upon study results because they participated in the study’s development”
  • “The design or product changes are more likely to be the right ones because they are based on multidisciplinary angles that try to solve the same problem”

Involving stakeholders in analysis can be tricky because analyzing data is difficult! It’s entirely too easy to use broad strokes to say what you think you saw, but to be rigorous with qualitative research analysis, we dive deep into the weeds with every action, every comment, to see if any themes or patterns emerge.

I mentioned earlier that Sharon recommends debriefing with stakeholders who are doing research with you. Notably, debriefs are not analysis. They may get the analysis juices flowing, but analysis is much more in-depth and complex.

We don’t always involve stakeholders in analysis, particularly if the timing is tight. However, in one case, I spent an extra few days in Germany, so we could do analysis immediately after field work. I think it went really well, and I was so happy to be about 80% done with analysis before even leaving Germany. Let’s hear what it was like for Michael Verdi, a Firefox UX designer and my partner on the project.

The time I participated in multiple days of analysis in Berlin, as told by Michael:

I’m really happy I got to participate in the analysis phase. I’d done it on a previous project and found it extremely valuable. In addition, I thought we were fortunate this time to be able to begin right away while everything was fresh. It’s so good to spend time digging into details and carefully considering everything we saw. It’s also difficult (but great practice) to hold off on trying to solve all the problems right away. I feel like by the time I did move on to design I’d really absorbed the things we learned.

This is not the end

While this blog post was a great exercise in reflecting on how I, as a user researcher, involve stakeholders in research, it is not the end of the story. The practice of user research is just that — a practice that takes practice, and continuous improvement. Next year, I hope to work on other concrete ideas that our Firefox UR team can implement to increase stakeholder involvement in user research even more.

Thank you to Gemma Petrie and Elisabeth Klann for reviewing this blog post.

Also published on the Firefox UX blog.


Reflecting on “It’s Our Research” at Mozilla was originally published in Firefox User Experience on Medium, where people are continuing the conversation by highlighting and responding to this story.

SUMO BlogIntroducing Joel Johnson / JR (Rina’s Maternity cover)

Hello everyone,

Please say hi to Joel Johnson, who’s going to cover for Rina Tambo Jensen while she’s on parental leave for the next six months. JR has an extensive background in starting and setting up support teams across different companies. We’re so excited to have him on our team.

Here is a short introduction from JR:

Hello everyone! My name is Joel Rodney Johnson and I go by JR. I am from Dallas, Texas, and have lived most of my life there. I spent several years in San Francisco, where I got my start in Support and began a career in Tech. My guilty pleasure is reading Sci-Fi/Fantasy novels, and if you were to take a look at my Audible account you might be surprised at the number of books I have in my library. I am so happy to be joining the Mozilla team as the Product Support Manager overseeing customer service for Mozilla products. I look forward to an exciting future here in Support.

Please join us to welcome him!

hacks.mozilla.orgMozilla Hacks’ 10 most-read posts of 2019

Like holiday music, lists are a seasonal cliché. They pique our interest year after year because we want a tl;dr for the 12 months gone by. Mozilla Hacks celebrated its 10th birthday this past June, and now, in December, we come to the end of a decade. Today, however, we’ll focus on the year that’s ending.

Topics and patterns

We covered plenty of interesting territory on Mozilla Hacks in 2019. Some of our most popular posts introduced experiments and special projects like Pyodide, which extends the web platform for the scientific community. Mozilla WebThings, which also featured in one of 2018’s most popular posts, continued to attract attention and adoption. People want a smart home solution that is private, secure, and interoperable.

Not surprisingly, interest in Firefox release posts is stronger than ever. Firefox continues to deliver new developer tools and new consumer experiences to increase user agency, privacy, security, and choice — and our readers want the details.

We’ve also made remarkable progress on WebAssembly as it extends beyond the browser and off the web, via WASI (the WebAssembly System Interface) and associated tooling. Mozilla is a founding member of the Bytecode Alliance. Announced last month, this open source initiative is dedicated to creating secure new software foundations, built on new standards such as WebAssembly and WASI. Plus, readers can’t get enough code cartoons, especially for visualizing complex concepts in programming.

The 2019 list

Some of the highest-traffic posts of 2019 were written in earlier years and continue to attract readers. These are not included here. Instead, we’ll focus on what was new this year. Here they are:

  1. Pyodide: Bringing the scientific Python stack to the browser, by Michael Droettboom. On the heels of Project Iodide, this post describes Mozilla’s experimental project — a full Python data science stack that runs entirely in the browser.
  2. Standardizing WASI: A system interface to run WebAssembly outside the web by Lin Clark. WebAssembly needed a system interface for a conceptual operating system, in order to be run across all different OSs. WASI was designed as a true companion to WebAssembly, upholding the key principles of portability and security while running outside the browser. Code cartoons included.
  3. Introducing Mozilla WebThings. In this April post, Ben Francis announced the next phase of Mozilla’s work in IoT. The Mozilla WebThings platform for monitoring and controlling devices over the web consists of the WebThings Gateway, a software distribution for smart home gateways, and the WebThings Framework, a collection of reusable software components.
  4. Firefox’s New WebSocket Inspector. Recently, Jan “Honza” Odvarko and Harald Kirschner introduced Firefox DevTools’ WebSocket Inspector, a much-requested feature for visualizing and debugging real-time data communication flows.
  5. Implications of Rewriting a Browser Component in Rust. In the closing post of her Fearless Security series, Diane Hosfelt uses the Quantum CSS project as a case study exploring the real world impact of rewriting code in Rust.
  6. Technical Details on the Recent Firefox Add-on Outage. Firefox CTO and Levchin Prize winner Eric Rescorla tells it like it was. After all, who doesn’t love an in-depth, blow-by-blow post-mortem?
  7. Firefox 66 to block automatically playing audible video and audio by Chris Pearce. Unsolicited sound can be an annoying source of distraction and frustration for users of the web. Accordingly, in Firefox 66 the browser began to block audio and video from playing aloud until the user initiates playback. Firefox uses the HTMLMediaElement API to make this work (see the sketch after this list).
  8. The Baseline Interpreter: a faster JS interpreter in Firefox 70 by Jan de Mooij. Meet the Baseline Interpreter in Firefox 70! Instead of writing a new interpreter from scratch, the JavaScript engine team added a new, generated JavaScript bytecode interpreter by sharing code with our existing Baseline JIT. Here’s how.
  9. Faster smarter JavaScript debugging in Firefox DevTools. Who wouldn’t want to run faster and smarter?! Especially where debugging is concerned. Firefox DevTools product guy Harald Kirschner describes in detail.
  10. WebAssembly Interface Types: Interoperate with All the Things! People are excited about running WebAssembly outside the browser, and from languages like Python, Ruby, and Rust. No doubt about it. We round out the top ten with Lin Clark’s illustrated look at WebAssembly Interface Types, and the proposed spec to make it possible for WASM to interoperate now and in the future.
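
Since item 7 touches on how autoplay blocking surfaces to web developers, here is a minimal TypeScript sketch of how a page can cooperate with it. It relies only on the standard HTMLMediaElement API: play() returns a Promise that rejects when audible autoplay is blocked, so the page can fall back to muted playback. The showUnmuteButton helper is a hypothetical bit of page UI for this sketch, not a Firefox or platform API.

```typescript
// Minimal sketch: cooperating with autoplay blocking (Firefox 66+ and
// other browsers). HTMLMediaElement.play() returns a Promise that rejects
// if the browser refuses to start audible playback without a user gesture.
async function tryAutoplay(media: HTMLMediaElement): Promise<void> {
  try {
    await media.play(); // resolves if audible autoplay is allowed
  } catch {
    // Audible autoplay was blocked: retry muted, then let the user opt in to sound.
    media.muted = true;
    await media.play();
    showUnmuteButton(media);
  }
}

// Hypothetical page-level helper (not a platform API): adds an "Unmute" button.
function showUnmuteButton(media: HTMLMediaElement): void {
  const button = document.createElement("button");
  button.textContent = "Unmute";
  button.addEventListener("click", () => {
    media.muted = false;
  });
  media.insertAdjacentElement("afterend", button);
}

const video = document.querySelector("video");
if (video) {
  tryAutoplay(video).catch(console.error);
}
```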

…And a happy new year!

Thank you for reading and sharing Mozilla Hacks in 2019. Here’s to the amazing decade that’s ending and the new one that’s almost here.

It’s always a good year to be learning. Want to keep up with Hacks? Follow @MozillaDev on Twitter, check out our new Mozilla Developer video channel, or subscribe to our always informative and unobtrusive weekly Mozilla Developer Newsletter.

The post Mozilla Hacks’ 10 most-read posts of 2019 appeared first on Mozilla Hacks - the Web developer blog.

The Mozilla BlogFirefox Announces New Partner in Delivering Private and Secure DNS Services to Users

NextDNS Joins Firefox’s Trusted Recursive Resolver Program Committing to Data Retention and Transparency Requirements that Respect User Privacy

Firefox announced a new partnership with NextDNS to provide Firefox users with private and secure encrypted Domain Name System (DNS) services through its Trusted Recursive Resolver Program. The company has committed to putting user privacy first in efforts to modernize DNS.

For more than 30 years, DNS has served as a key mechanism for accessing sites and services on the web. DNS is the Internet’s directory. It translates names we know like ​www.firefox.com​ to numeric Internet addresses that a computer understands. Almost every activity on the Internet begins with a DNS request.

The Domain Name System (DNS) is one of the oldest parts of internet architecture, and it remains largely untouched by efforts to make the web safer and more private. Malicious actors can spy on or tamper with users’ browsing activity, and DNS providers, including internet service providers (ISPs), can collect and monetize it.

Over the last two years, Firefox, in partnership with other industry stakeholders, has been working to develop, standardize, and deploy DNS over HTTPS (DoH). DoH aims to protect that same browsing activity from interception, manipulation, and collection in the middle of the network.
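
To make the mechanics concrete, here is a minimal TypeScript sketch of a DoH lookup. It uses the JSON query interface (application/dns-json) that Cloudflare’s public resolver exposes; Firefox itself speaks the binary wire format defined in RFC 8484, so treat this as an illustration of the idea rather than of Firefox’s implementation. The resolver URL and response shape are assumptions based on Cloudflare’s documented JSON API.

```typescript
// Minimal sketch: resolve a hostname over DNS-over-HTTPS using a resolver's
// JSON API. The whole exchange travels over HTTPS, so on-path observers see
// only encrypted traffic to the resolver, not which names are being looked up.
async function resolveOverDoH(name: string, type: string = "A"): Promise<string[]> {
  const url = `https://cloudflare-dns.com/dns-query?name=${encodeURIComponent(name)}&type=${type}`;
  const response = await fetch(url, {
    headers: { accept: "application/dns-json" },
  });
  if (!response.ok) {
    throw new Error(`DoH query failed: ${response.status}`);
  }
  const body = await response.json();
  // Each answer record carries the resolved data (an IP address for A/AAAA records).
  return (body.Answer ?? []).map((record: { data: string }) => record.data);
}

// Example: look up the addresses for www.firefox.com through the encrypted channel.
resolveOverDoH("www.firefox.com").then((addresses) => console.log(addresses));
```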

But encrypting DNS data with DoH is only the first step. A necessary second step is requiring that the companies handling this data have rules in place – like the ones outlined in the TRR program – to ensure that access to that data is not abused.

“For most users, it’s very hard to know where their DNS requests go and what the resolver is doing with them,” said Eric Rescorla, Firefox CTO. “Firefox’s Trusted Recursive Resolver program allows Mozilla to negotiate with providers on your behalf and require that they have strong privacy policies before handling your DNS data. We’re excited to have NextDNS partner with us in our work to put people back in control of their data and privacy online.”

Our Trusted Recursive Resolver program aims to standardize requirements in three areas: limiting data collection and retention by the resolver, ensuring transparency for any data retention that does occur, and limiting any potential use of the resolver to block access or modify content. By marrying the right technology – DoH – with strict operational requirements for those implementing it, we are improving user privacy by default: finding good partners, establishing legal agreements that put privacy first, and shipping a product we believe is best.

“We applaud Mozilla’s leading stance on privacy and we are proud to partner with them to offer the choice of a modern, fast and no-logs trusted DNS resolver to the Firefox community,” said Romain Cointepas, Co-founder, NextDNS.

NextDNS launched in March 2019, providing a fully customizable, modern, and secure DNS resolver. Since then, the company has continued to improve the service and has released DNS-over-HTTPS apps for all major platforms (iOS, Android, macOS, Windows, Linux) and routers.

NextDNS is the latest resolver to join the TRR program. Cloudflare joined the program in 2018.

“Cloudflare joined the program back in 2018 with its launch of 1.1.1.1, the public DNS resolver built around the principle of privacy-first. We believe that giving consumers the ability to choose the fastest, most privacy-respecting DNS is a win-win. It’s good for them and it’s good for the Internet,” said Matthew Prince, co-founder & CEO, Cloudflare. “We hope more ISPs and DNS providers will follow this lead so we can finally encrypt one of the Internet’s most important protocols.”

While the TRR program and its privacy-first policies are specific to Firefox’s implementation of DoH, we believe that all internet users are entitled to these protections. As the work to implement DoH continues, we look forward to bringing more partners into the TRR program who are committed to bringing the DNS system into the 21st century with the privacy and security protections users deserve, and we hope the rest of the industry follows suit.

The post Firefox Announces New Partner in Delivering Private and Secure DNS Services to Users appeared first on The Mozilla Blog.

Firefox UXDesigner, you can run a usability study: Usability Mentorship at Mozilla

Authors: Holly Collier, Jennifer Davidson

On the Firefox UX team, a human-centered design process and a “roll up your sleeves” attitude define our collaborative approach to shipping products and features that help further our mission. Over the past year, we’ve been piloting a Usability Mentorship program in an effort to train and empower designers to make regular research part of their design process, treating research as “a piece of the pie” rather than an extra slice on the side. What’s Mozilla’s Firefox UX team like? We have about twenty designers, a handful of user researchers, and a few content strategists.

This blog post is written by Holly (product designer and mentee) and Jennifer (user researcher and mentor).

A soda can in a coozy that says “User research is a team sport,” sitting on a table with people and laptops in the background. Photo: Holly Collier. The coozy was a gift from Gemma Petrie; credit for the phrase goes to Leslie Reichelt at GDS.

Why should I, a designer, learn user research skills?

Let’s start with Holly’s perspective.

I’m an interaction designer — I’ve been designing apps and websites (with and without the help of user research) for over a decade now, first in agencies and then in-house at an e-commerce giant. Part of what drew me to Mozilla and the Firefox UX team a year ago was the value that Mozillians place on user research. When I learned that we had an official Usability Mentorship program on the Firefox UX team, I was really excited — I had gotten a taste of helping to plan and run user research during my last gig, but I wanted to expand my skill set and to feel more confident conducting studies independently.

I think it’s really important to make user research an ongoing part of the product design process, and I’m always amazed by the insights it produces. By building up my own user research skill set, it means that I’m in a better position to identify user problems for us to solve and to improve the quality of the products I work on.

How does the mentorship program work?

And now onto Jennifer. She’ll talk about how this all worked.

A little bit about me before we dive in. I’m a user researcher — I’ve been in the industry for 6 years now, and at Mozilla on the Firefox User Research team for 3 years. I worked at a couple of big tech companies (HP and Intel) before coming to Mozilla. Prior to that, I worked hard at internships and got a PhD in Computer Science, focused on Human-Computer Interaction. I love working at Mozilla, especially with designers like Holly who are passionate about user research informing product design!

At Mozilla, our research team conducts three types of research (as written by Raja Jacob and Gemma Petrie):

  • Exploratory: Discovering and learning. Conducting research around a topic where little is known. This type of research allows us to explore and learn about a problem space, challenge our assumptions on a topic, and more deeply understand the people we are designing for.
  • Generative: Generative research can help us develop concepts through activities such as participatory design sessions or help us better understand user behavior and mental models related to a specific problem/solution space.
  • Evaluative: Evaluative research is conducted to test a proposed solution to see if it meets people’s needs, is easy to use and understand, and creates a positive user experience. Usability testing falls under this category.

Like most organizations, we routinely have more designs that need usability testing than we have researchers. Gemma Petrie, our most senior User Researcher (a Principal User Researcher), started the mentorship program as a way to address this problem in her previous role as interim Director of User Research. By spreading usability testing abilities more broadly across the Firefox UX team, we could ensure that more designs got tested and ensure that our dedicated researchers could continue to do exploratory and generative research.

Because all of our designers and content strategists had different levels of familiarity with usability testing, Gemma brought in an external consultant to kick off this effort and run a usability testing workshop with the entire UX team. The workshop was recorded so it can be referenced later and so that new team members can watch it as part of their onboarding.

At Mozilla, a mentorship project starts somewhat informally. Designers and content strategists “raise their hand” to show interest, and each researcher on the User Research team is a mentor. A designer gets paired with a mentor to figure out a (hopefully) low stress, low-stakes project to work on together. The designer takes the reins, and the researcher helps out along the way.

While we don’t have a strict curriculum, after a designer shows interest, each mentorship roughly follows these steps:

  • Pre-work: Watch the recorded usability training and fill out a simple intake form to describe their desired project.
  • First meeting — Set the bounds: To keep things simple, we restrict the method to a usability test on usertesting.com. This isn’t a survey, a foundational piece of work, or anything huge. The goal is to improve the design at hand.
  • After the first meeting — Homework: Look at past examples and come up with the research purpose and a draft of research questions.
  • Plan and protocol: Work hand-in-hand with the research mentor to create a research plan and protocol. Then collect feedback on the research questions from project stakeholders and write a protocol for the usability test tasks.
  • Analysis: One of our other researchers, Alice Rhee, created a great “analysis tips” document that we share with mentees: set up a spreadsheet, watch the pilot video, make any necessary adjustments to the test, and then go from there. Direct quotes are captured, along with the success or failure of each task. Quotes that are candidates to become “highlights” later are bolded.
  • Synthesis: Record answers to all research questions based on the summaries from the analysis. Is there anything missing? Anything you’re unsure about? Meet with research mentor to talk through this part.
  • Report: Use an existing report to get started. Start with a background and methods section, then clearly answer each research question.
  • Presentation: Work with Product Manager to schedule a time to share findings with the impacted team. Record it. Put it in the User Research repository.

But how does the mentorship program really work?

Let’s have Holly tell us about what she learned from her experience testing one of Firefox’s apps.

Stand on the shoulders of giants

We identified a product that needed usability testing: Firefox Lockwise for Android (then called “Firefox Lockbox”), a new password manager app that works in conjunction with logins that are saved in the Firefox browser. It’s in my team’s practice area, so I thought it would be a good chance to get to know a new product, but it was also a good fit in terms of my experience with the Android platform (all of my previous involvement in user research was on mobile apps).

There were a lot of materials available to help me get started — sample protocols, decks, analysis spreadsheets. The Firefox User Research team is great about documenting and saving research artifacts. The Lockwise team also had conducted in-person usability testing on their iPhone app the previous summer, so I had some usability questions to start with.

The Firefox Lockbox app loading screen, shown in the usertesting.com screen recorder.

Designing and piloting an effective test protocol feels a lot like… product design and prototyping!

The process of gathering requirements, designing, piloting and revising before releasing a usability test to participants felt similar to the process of problem definition, design, prototyping and iteration we use for product design:

Requirements Gathering (Problem Definition): This particular test had many constraints and requirements. Because this was a remote, unmoderated test, and because users had to have a Firefox account with Sync enabled in order to test the Lockbox app, the protocol for the test was pretty extensive.

Protocol Design (Product Design): Getting high-quality test results required thinking through the test experience from the test taker’s point of view while also achieving our research goals:

  • How do I ensure that people see everything we need them to see?
  • How do I construct questions to be clear but not leading?
  • How can I extend the protocol beyond usability to cover comprehension and desirability (make sure we’re designing the “right” thing) but also keep the test as short as possible?

Piloting (Prototyping & Iteration): Before launching the real test, we launched a prototype of the test called a “pilot” and watched videos of a few participants to make sure the test instructions were understood and the test was functioning as designed. Getting out of the pilot stage was challenging because of problems we discovered and had to troubleshoot along the way:

  • Our protocol had multiple sections and required a lot of steps before participants saw the actual thing that we were testing, so there were lots of potential failure points (and as a result, a lot of iterations around the wording for this part of the protocol before we got it right).
  • When a few participants weren’t seeing expected pieces of important functionality in the prototype, we figured out through talking with engineering that we needed to change the participant screening criteria to limit the test to specific Android operating systems.
  • After getting a few recordings of participants’ screens that were totally black except for the usertesting.com mobile video recorder interface, we figured out that the prototype for our app, a password manager, had code in it that wouldn’t allow the mobile video recorder to capture participants’ screens. Our engineering team made us a special build for the rest of the tests. The lesson here: talk with your engineering team, early and often!
A black screen with only the screen recorder UI visible: the prototype wouldn’t allow the screen recorder to capture participants’ screens.

Once we addressed the issues with the protocol and the prototype that we uncovered during our pilot, we launched the test, and I moved on to watching participant videos and taking lots of notes (direct quotes!) that I’d mine for insights later.

Analysis and report writing don’t have to be scary

Analyzing the test data and delivering the findings (and recommendations!) was the most intimidating part of the process for me. As a designer, I’ve always looked to the research findings deck as a ‘beacon of truth’ in the design process. Now that I was running the research, I felt a lot of responsibility to deliver that same truth and guidance.

I worked through those feelings of intimidation by triple-checking my sources, mapping my data (including quotes) to the research questions and getting the following awesome perspective from Jennifer:

  • Usability tests are mostly about observation — just tell the story of what you saw.
  • The findings don’t need to be the ‘tablets coming down from the mountain,’ they just need to be accurate and backed by the data.
  • Design recommendations are suggestions, not directions. By phrasing them as “How might we?” questions (rather than being prescriptive about solutions), I could frame problems for the Lockwise team and rely on them to use their deep knowledge of the product and space to solve them.
The distributed Firefox Lockwise team during the findings presentation, shown in the videochat interface.

Giving the findings presentation to the Lockwise team actually ended up being one of my favorite parts of the whole process. Telling the story was fun and inspired great conversation, and the Lockwise team really appreciated the “How might we?” format I used for the design recommendations.

How did the usability testing mentorship go for the mentor?

Jennifer, take it away!

Helping Holly with this project helped make tacit knowledge explicit. I’ve done many, many usability tests, and I am almost on auto-pilot when I conduct them. So it was a great exercise to actually explain the process to a co-worker and do some much-needed reflection on how I work, especially with Holly, who was eager to learn and asks great questions.

Here are some of those great questions that made me reflect on my process:

How do I know when the pilot phase is over?

The pilot is over when the protocol “works” as intended. That means there are no show-stopping bugs in the platform that prevent someone from doing the task (and if there are show-stoppers, that you’ve adjusted the protocol to account for them). It also means that your questions are phrased in a way that people understand. You can only determine this by observing a couple of pilot participants.

Can I use my pilot data in the report?

The academic in me says that if the protocol changes at all between the pilot and the rest of the tests, no, you can’t use the pilot data in the report. The industry researcher in me says that sure, you can include the results as long as you clearly mark them as pilot data.

How do I present ‘bad news’ to a team?

Most usability tests have at least some good news. Start with that! Be clear about when you’re going to deliver the bad news and come prepared with “How might we?” questions or recommendations on how to improve the experience.

How do I make sure I take my personal bias of how I understand this app out of the process?

Acknowledge your bias. Know what it is going in and voice it to your mentor. Do exactly what Holly mentioned earlier and double and triple-check your results against the videos, direct quotes, and research questions. Do you still feel like you might be stretching the interpretation of a result? Check with your mentor, or anyone who’s done a usability test before. Have a co-worker who isn’t very close to the project review your results and recommendations before you present them to the wider team.

And now, in parting.

Holly, the designer & mentee says:
Designers, you can do user research! Since completing the mentorship, I have conducted several other studies, including a usability/concept test and an information architecture research study. It’s become a regular part of my design practice, and I think the products I work on are better for it.

Jennifer, the user researcher & mentor says:
User researchers out there: you can run your own usability testing mentorship program!
In the spirit of open source, here are some examples:

Thank you to Gemma Petrie, Anthony Lam, and Elisabeth Klann for reviewing this blog post. And a special thanks to Gemma Petrie for setting up the Usability Mentorship Program at Mozilla.

Also published on the Firefox UX Blog.


Designer, you can run a usability study: Usability Mentorship at Mozilla was originally published in Firefox User Experience on Medium, where people are continuing the conversation by highlighting and responding to this story.

Mozilla UXDesigner, you can run a usability study: Usability Mentorship at Mozilla

On the Firefox UX team, a human-centered design process and a “roll up your sleeves” attitude define our collaborative approach to shipping products and features that help further our mission. Over the past year, we’ve been piloting a Usability Mentorship program in an effort to train and empower designers to make regular research part of their design process, treating research as “a piece of the pie” rather than an extra slice on the side. What’s Mozilla’s Firefox UX team like? We have about twenty designers, a handful of user researchers, and a few content strategists.

This blog post is written by Holly (product designer, and mentee), and Jennifer (user researcher, and mentor).

A soda can in a coozy that says "User research is a team sport," sitting on a table with people & laptops in the background.

photo: Holly Collier; A coozy gift from Gemma Petrie. Credit for the phrase goes to Leslie Reichelt at GDS.

<figcaption class="imageCaption"> </figcaption>

Why should I, a designer, learn user research skills?

Let’s start with Holly’s perspective.

I’m an interaction designer — I’ve been designing apps and websites (with and without the help of user research) for over a decade now, first in agencies and then in-house at an e-commerce giant. Part of what drew me to Mozilla and the Firefox UX team a year ago was the value that Mozillians place on user research. When I learned that we had an official Usability Mentorship program on the Firefox UX team, I was really excited — I had gotten a taste of helping to plan and run user research during my last gig, but I wanted to expand my skill set and to feel more confident conducting studies independently.

I think it’s really important to make user research an ongoing part of the product design process, and I’m always amazed by the insights it produces. By building up my own user research skill set, it means that I’m in a better position to identify user problems for us to solve and to improve the quality of the products I work on.

How does the mentorship program work?

And now onto Jennifer. She’ll talk about how this all worked.

A little bit about me before we dive in. I’m a user researcher — I’ve been in the industry for 6 years now, and at Mozilla on the Firefox User Research team for 3 years. I’ve worked at a couple big tech companies (HP & Intel) before coming to Mozilla. Prior to that, I worked hard at internships and got a PhD in Computer Science, focused on Human Computer Interaction. I love working at Mozilla, especially with designers like Holly, who are passionate about user research informing product design!

At Mozilla, our research team conducts three types of research (as written by Raja Jacob and Gemma Petrie):

  • Exploratory: Discovering and learning. Conducting research around a topic where a little is known. This type of research allows us to explore and learn about a problem space, challenge our assumptions on a topic, and more deeply understand the people we are designing for.
  • Generative: Generative research can help us develop concepts through activities such as participatory design sessions or help us better understand user behavior and mental models related to a specific problem/solution space.
  • Evaluative: Evaluative research is conducted to test a proposed solution to see if it meets people’s needs, is easy to use and understand, and creates a positive user experience. Usability testing falls under this category.

Like most organizations, we routinely have more designs that need usability testing than we have researchers. Gemma Petrie, our most senior User Researcher (a Principal User Researcher), started the mentorship program as a way to address this problem in her previous role as interim Director of User Research. By spreading usability testing abilities more broadly across the Firefox UX team, we could ensure that more designs got tested and ensure that our dedicated researchers could continue to do exploratory and generative research.

Because all of our designers and content strategists had different levels of familiarity with usability testing, Gemma brought in an external consultant to kick-off this effort and run a usability testing workshop with the entire UX team. This workshop was recorded so it can be cross-referenced later, and so that new team members can watch it as part of their onboarding.

At Mozilla, a mentorship project starts somewhat informally. Designers and content strategists “raise their hand” to show interest, and each researcher on the User Research team is a mentor. A designer gets paired with a mentor to figure out a (hopefully) low stress, low-stakes project to work on together. The designer takes the reins, and the researcher helps out along the way.

While we don’t have a strict curriculum, after a designer shows interest, each mentorship roughly follows these steps:

  • Pre-work: Watch the recorded usability training and fill out a simple intake form to describe their desired project.
  • First meeting — Set the bounds: To keep things simple, we restrict the method to a usability test on usertesting.com. This isn’t a survey, a foundational piece of work, or anything huge. The goal is to improve the design at hand.
  • After the first meeting — Homework: Look at past examples and come up with the research purpose and a draft of research questions.
  • Plan and protocol: Work hand-in-hand with the research mentor to create a research plan and protocol. Then collect feedback on the research questions from project stakeholders and write a protocol for the usability test tasks.
  • Analysis: One of our other researchers, Alice Rhee, created a great “analysis tips” document that we share with mentees: Set up a spreadsheet, watch the pilot video, make any necessary adjustments to the test, and then go from there. Direct quotes are captured, along with success or failure of tasks. Some quotes are bolded that are candidates to become “highlights” later.
  • Synthesis: Record answers to all research questions based on the summaries from the analysis. Is there anything missing? Anything you’re unsure about? Meet with research mentor to talk through this part.
  • Report: Use an existing report to get started. Start with a background and methods section, then clearly answer each research question.
  • Presentation: Work with Product Manager to schedule a time to share findings with the impacted team. Record it. Put it in the User Research repository.

But how does the mentorship program really work?

Let’s have Holly tell us about what she learned from her experience testing one of Firefox’s apps.

Stand on the shoulders of giants

We identified a product that needed usability testing: Firefox Lockwise for Android (then called “Firefox Lockbox”), a new password manager app that works in conjunction with logins that are saved in the Firefox browser. It’s in my team’s practice area, so I thought it would be a good chance to get to know a new product, but it was also a good fit in terms of my experience with the Android platform (all of my previous involvement in user research was on mobile apps).

There were a lot of materials available to help me get started — sample protocols, decks, analysis spreadsheets. The Firefox User Research team is great about documenting and saving research artifacts. The Lockwise team also had conducted in-person usability testing on their iPhone app the previous summer, so I had some usability questions to start with.

Firefox Lockbox app loader screen with screen recorder UI visible.

Firefox Lockbox prototype in the usertesting.com screen recorder.

Designing and piloting an effective test protocol feels a lot like… product design and prototyping!

The process of gathering requirements, designing, piloting and revising before releasing a usability test to participants felt similar to the process of problem definition, design, prototyping and iteration we use for product design:

Requirements Gathering (Problem Definition): This particular test had many constraints and requirements. Because this was a remote, unmoderated test, and because users had to have a Firefox account with Sync enabled in order to test the Lockbox app, the protocol for the test was pretty extensive.

Protocol Design (Product Design): Getting high-quality test results required thinking through the test experience from the test taker’s point of view while also achieving our research goals:

  • How do I ensure that people to see everything we need them to see?
  • How do I construct questions to be clear but not leading?
  • How can I extend the protocol beyond usability to cover comprehension and desirability (make sure we’re designing the “right” thing) but also keep the test as short as possible?

Piloting (Prototyping & Iteration): Before launching the real test, we launched a prototype of the test called a “pilot” and watched videos of a few participants to make sure the test instructions were understood and that the test was functioning as designed. Getting out of this pilot stage was challenging because of problems we discovered and had to troubleshoot along the way:

  • Our protocol had multiple sections and required a lot of steps before participants saw the actual thing that we were testing, so there were lots of potential failure points (and as a result, a lot of iterations around the wording for this part of the protocol before we got it right).
  • When a few participants weren’t seeing expected pieces of important functionality in the prototype, we figured out through talking with engineering that we needed to change the participant screening to limit it to specific versions of the Android operating system.
  • After getting a few recordings of participants’ screens that were totally black except for the usertesting.com mobile video recorder interface, we figured out that the prototype for our app, a password manager, had code in it that wouldn’t allow the mobile video recorder to capture participants’ screens. Our engineering team made us a special build for the rest of the tests. The lesson here: Talk with your engineering team, early & often!

The prototype wouldn’t allow the screen recorder to capture participants’ screens.

Once we addressed the issues with the protocol and the prototype that we uncovered during our pilot, we launched the test, and I moved on to watching participant videos and taking lots of notes (direct quotes!) that I’d mine for insights later.

Analysis and report writing don’t have to be scary

Analyzing the test data and delivering the findings (and recommendations!) was the most intimidating part of the process for me. As a designer, I’ve always looked to the research findings deck as a ‘beacon of truth’ in the design process. Now that I was running the research, I felt a lot of responsibility to deliver that same truth and guidance.

I worked through those feelings of intimidation by triple-checking my sources, mapping my data (including quotes) to the research questions and getting the following awesome perspective from Jennifer:

  • Usability tests are mostly about observation — just tell the story of what you saw.
  • The findings don’t need to be the ‘tablets coming down from the mountain,’ they just need to be accurate and backed by the data.
  • Design recommendations are suggestions, not directions. By phrasing them as “How might we?” questions (rather than being prescriptive about solutions), I could frame problems for the Lockwise team and rely on them to use their deep knowledge of the product and space to solve them.

The distributed Firefox Lockwise team during the findings presentation.

As it turned out, giving the findings presentation to the Lockwise team actually ended up being one of my favorite parts of the whole process. Telling the story was fun and inspired great conversation, and the Lockwise team really appreciated the “How might we?” format I used for the design recommendations.

How did the usability testing mentorship go for the mentor?

Jennifer, take it away!

Helping Holly out with this project helped make tacit knowledge explicit. I’ve done many, many usability tests, and I am almost on auto-pilot when I conduct them. So it was a great exercise to actually explain the process to a co-worker and genuinely reflect on how I do things. Especially with Holly, who was eager to learn and asked great questions.

Here are some of those great questions that made me reflect on my process:

How do I know when the pilot phase is over? The pilot is over when the protocol “works” as intended. That means there are no show-stopping bugs in the platform that prevent someone from completing the task (and if there are showstoppers, that you’ve adjusted the protocol to work around them). It also means that your questions have been phrased in a way that people understand. You can only determine this by observing a couple of pilot participants.

Can I use my pilot data in the report? The academic in me says that if the protocol changes at all from the pilot to running the rest of the tests, no, you can’t use the pilot data in the report. The industry researcher in me says that sure, you can include the results as long as you mark it clearly as pilot data.

How do I present ‘bad news’ to a team? Most usability tests have at least some good news. Start with that! Be clear about when you’re going to deliver the bad news and come prepared with “How might we?” questions or recommendations on how to improve the experience.

How do I make sure I take my personal bias of how I understand this app out of the process? Acknowledge your bias. Know what it is going in and voice it to your mentor. Do exactly what Holly mentioned earlier and double and triple-check your results against the videos, direct quotes, and research questions. Do you still feel like you might be stretching the interpretation of a result? Check with your mentor, or anyone who’s done a usability test before. Have a co-worker who isn’t very close to the project review your results and recommendations before you present them to the wider team.

And now, in parting.

Holly, the designer & mentee says:
Designers, you can do user research! Since completing the mentorship, I have conducted several other studies, including a usability/concept test and an information architecture research study. It’s become a regular part of my design practice, and I think the products I work on are better for it.

Jennifer, the user researcher & mentor says:
User researchers out there: you can run your own usability testing mentorship program!
In the spirit of open source, here are some examples:

Thank you to Gemma Petrie, Anthony Lam, and Elisabeth Klann for reviewing this blog post. And a special thanks to Gemma Petrie for setting up the Usability Mentorship Program at Mozilla.

Also published on medium.com.

Mozilla Add-ons BlogFriend of Add-ons: Jocelyn Li

Our newest Friend of Add-ons is Jocelyn Li! Jocelyn has been an active code contributor to addons.mozilla.org (AMO) since May 2018, when she found a frontend issue that involved broken CSS. She had known that Mozilla welcomed code contributions from community members, but hadn’t been sure if she was qualified to participate. As she looked at the CSS bug, she thought, “This doesn’t look that hard; maybe I can fix it,” and submitted her first patch a few hours later. She has been an avid contributor ever since.

Jocelyn says that contributing to a large public project like Mozilla has helped her grow professionally, thanks in part to positive interactions with staff members during code review. “They always give constructive comments and guide contributors,” she says. “When I learn either technical or non-technical skills, I can apply them to my own job.”

Mozilla and contributors alike benefit from the open source model, Jocelyn believes. “Mozilla receives contributions from the community. Contributors are like seeds all over the world and promote Mozilla’s projects or languages and improve their own companies at the same time.”

One of Jocelyn’s passions is learning new languages. Currently, she is learning Rust for a work project that uses Node.js in TypeScript with fp-ts, as well as Japanese to acclimate to Tokyo, where she moved earlier this year. “Every language provides different perspectives to us,” she notes. “One language may have terms or syntaxes that another language doesn’t have. It’s like acquiring a new skill.”

In her spare time, she enjoys reading, cooking, traveling, and learning how to play the cello. “I always feel like 24 hours in a day is not enough for me,” she says.

Thank you for your contributions, Jocelyn!

If you are interested in getting involved with the add-ons community, please take a look at our wiki for some opportunities to contribute to the project.

The post Friend of Add-ons: Jocelyn Li appeared first on Mozilla Add-ons Blog.

The Mozilla BlogPetitioning for rehearing in Mozilla v. FCC

Today, Mozilla continues the fight to preserve net neutrality protection as a fundamental digital right. Alongside other petitioners in our FCC challenge, Mozilla, Etsy, INCOMPAS, Vimeo and the Ad Hoc Telecom Users Committee filed a petition for rehearing and rehearing en banc in response to the D.C. Circuit decision upholding the FCC’s 2018 Order, which repealed safeguards for net neutrality.

Our petition asks the original panel of judges or alternatively the full complement of D.C. Circuit judges to reconsider the decision both because it conflicts with D.C. Circuit or Supreme Court precedent and because it involves questions of exceptional importance.

Mozilla’s petition focuses on the FCC’s reclassification of broadband as an information service and on the FCC’s failure to properly address competition and market harm. We explain why we believe the Court can in fact overturn the FCC’s new treatment of broadband service despite some of the deciding judges’ belief that Supreme Court precedent prevents rejection of what they consider a nonsensical outcome. In addition, we point out that the Court should have done more than simply criticize the FCC’s assertion that existing antitrust and consumer protection laws are sufficient to address concerns about market harm without engaging in further analysis. We also note inconsistencies in how the FCC handled evidence of market harm, and the Court’s upholding of the FCC’s approach nonetheless.

We are excited to continue to lead this effort as part of a broad community pressing for net neutrality protections, and Mozilla supports other petitioners’ filings at this stage that address additional important issues for reconsideration. See below for copies of the petitions filed.

Petition for rehearing and rehearing en banc filed by:

Mozilla, Etsy, INCOMPAS, Vimeo, and the Ad Hoc Telecom Users Committee

New America’s Open Technology Institute, Free Press, Public Knowledge, CDT, The Benton Institute for Broadband & Society, CCIA, and National Association of State Utility Consumer Advocates

National Hispanic Media Coalition

 

The post Petitioning for rehearing in Mozilla v. FCC appeared first on The Mozilla Blog.

QMOEnding QA community events, for now

Hello everyone,

We have an important announcement to make today, regarding the future of the Testday and Bugday community events we have been holding for our desktop product.

The state of things
QMO events have been around for several years now, with many loyal Mozilla contributors engaged in various types of manual testing activities: some centered around verifying bug fixes, others on trying out exciting new features or significant changes to the browser’s core functionality. The feedback we received through them, during the Nightly and Beta phases, helped us ship polished products with each iteration, and it’s something that we’re very grateful for.

We also feel that we could do more with the Testday and Bugday events. Their format has remained unchanged since we introduced them, and the lack of a fresh take on these events is now more noticeable than ever, as overall interest in them has been declining for the past couple of years.

We think it’s time to take a step back, review things and think about new ways to engage the community going forward.

Goodbye, for now
Starting in 2020, we are going to take some time to figure out what our next plans are. Testdays and Bugdays will be paused as a result, but we do plan to hold a final Testday this year, on December 20, and we hope to see all of you there!

As we move forward, the #qa IRC channel will remain the best way to connect with us, so don’t hesitate to drop by and say hi. You’ll still be able to contribute towards bug fix verification by looking at bugs with the [good first verify] keyword.

Thank you all for your passion, loyalty and dedication to Firefox! As always, it’s inspiring to work with such amazing people!

Mozilla Add-ons BlogTest the new Content Security Policy for Content Scripts

As part of our efforts to make add-ons safer for users, and to support evolving manifest v3 features, we are making changes to apply the Content Security Policy (CSP) to content scripts used in extensions. These changes will make it easier to enforce our long-standing policy of disallowing execution of remote code.

When this feature is completed and enabled, remotely hosted code will not run, and attempts to run it will result in a network error. We have taken our time implementing this change to decrease the likelihood of breaking extensions and to maintain compatibility. Programmatically limiting the execution of remotely hosted code is an important aspect of manifest v3, and we feel this is a good time to move forward with these changes.

We have landed a new content script CSP, the first part of these changes, behind preferences in Firefox 72. We’d love for developers to test it out to see how their extensions will be affected.

Testing instructions

Using a test profile in Firefox Beta or Nightly, please change the following preferences in about:config:

  • Set extensions.content_script_csp.enabled to true
  • Set extensions.content_script_csp.report_only to false to enable policy enforcement

This will apply the default CSP to the content scripts of all installed extensions in the profile.
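
If you are setting up a dedicated test profile anyway, the same two preferences can be preconfigured through a user.js file in that profile’s directory. This is only a convenience sketch; flipping the values in about:config as described above has exactly the same effect:

// user.js in the test profile directory (optional alternative to about:config)
user_pref("extensions.content_script_csp.enabled", true);      // turn on the new content script CSP
user_pref("extensions.content_script_csp.report_only", false); // enforce the policy instead of only reporting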

Then, update your extension’s manifest to change your content_security_policy. With the new content script CSP, the content_scripts key works the same way as extension_pages: the original CSP value moves under the extension_pages key, and the new content_scripts key controls the CSP applied to content scripts.

Your CSP will change from something that looks like:

"content_security_policy": "script-src 'self'; object-src 'none'"

To something that looks like:

"content_security_policy": {
  "extension_pages": "script-src 'self'; object-src 'none'",
  "content_scripts": "script-src 'self'; object-src 'none'"
}

Next, load your extension in about:debugging. The default CSP now applied to your content scripts will prevent the loading of remote resources, much like what happens when you try to load an image over http into an https page, possibly causing your extension to fail. Similar to the old content_security_policy (as documented on MDN), you may make changes using the content_scripts key.
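
To make this concrete, here is a hypothetical content script (the file name and CDN URL below are placeholders, not taken from this post) showing the kind of remotely hosted code the default content script CSP is meant to stop, along with the policy-compliant alternative of shipping the code inside the extension:

// content-script.js (illustrative sketch only)
// Classic "remote code" pattern: pull a script from a CDN and run it in the page.
// With the default content script CSP enforced, this remote load is expected to
// fail (this post notes that such attempts result in a network error).
const remoteScript = document.createElement("script");
remoteScript.src = "https://cdn.example.com/widget.js"; // placeholder remote URL
document.head.appendChild(remoteScript);

// Policy-compliant alternative: package widget.js inside the extension and
// declare it in manifest.json instead, for example:
//   "content_scripts": [{ "matches": ["*://example.com/*"], "js": ["widget.js"] }]
// so that everything that executes ships with the extension ('self').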

Please do not loosen the CSP to allow remote code, as we are working on upcoming changes to disallow remote scripts.

As a note, we don’t currently support any other keys in the content_security_policy object. We plan to be as compatible as possible with Chrome in this area and will support the same key name they use for content_scripts in the future.

Please tell us about your testing experience on our community forums. If you think you’ve found a bug, please let us know on Bugzilla.

Implementation timeline

More changes to the CSP for extensions are expected to land behind preferences in the upcoming weeks. We will publish testing instructions once those updates are ready. The full set of changes should be finished and enabled by default in 2020, meaning that you will be able to use the new format without toggling any preferences in Firefox.

Even after the new CSP is turned on by default, extensions using manifest v2 will be able to continue using the string form of the CSP. The object format will only be required for extensions that use manifest v3 (which is not yet supported in Firefox).

There will be a transition period when Firefox supports both manifest v2 and manifest v3 so that developers have time to update their extensions. Stay tuned for updates about timing!

The post Test the new Content Security Policy for Content Scripts appeared first on Mozilla Add-ons Blog.

Mozilla L10NWe’ve added the ability to translate from an alternative source locale in Pontoon

This article also exists in English.

Would you like to localize Firefox into your native language, but aren’t entirely comfortable with English as the source language? If you understand another language that Firefox is localized into, we have good news for you.

In its latest release, Pontoon added support for using localizations as source strings. Instead of the original (English) string, translations in your preferred source locale will be shown in the string list and in the source string panel.

If a translation isn’t available yet, we’ll show the original string, which also remains available at all times in the “Locales” panel when an alternative source locale is in use.

Example of using Spanish (es-ES) as the source locale for Guarani.

To select an alternative source locale, go to your profile settings and choose your preferred source locale. If you’d rather see the project’s original strings (that is, US English), choose “Default project locale”.

The alternative source locale preference can be found in your profile settings.

This feature was developed by April Bowler, who is joining us as an Outreachy intern from December 3, 2019 to March 3, 2020. Thanks to the contributions of 8 fantastic Outreachy applicants, an impressive 40 bugs have been resolved. We’re excited to see what other improvements April brings over the coming months.

We hope this latest improvement will help extend the localization of Firefox and other products beyond English-speaking localizers. As always, if you have suggestions for improvements or any questions, let us know.