Mozilla Add-ons Blog: Contribute to selecting new Recommended extensions

Recommended extensions—a curated list of extensions that meet Mozilla’s highest standards of security, functionality, and user experience—are in part selected with input from a rotating editorial board of community contributors. Each board runs for six consecutive months and evaluates a small batch of new Recommended candidates each month. The board’s evaluation plays a critical role in helping identify new potential Recommended additions.

We are now accepting applications for community board members through 18 November. If you would like to nominate yourself for consideration on the board, please email us at amo-featured [at] mozilla [dot] org and provide a brief explanation of why you feel you’d make a keen evaluator of Firefox extensions. We’d love to hear about how you use extensions and what you find so remarkable about browser customization. You don’t have to be an extension developer to effectively evaluate Recommended candidates (indeed, many past board members have been developers themselves), but you should have a strong familiarity with extensions and be comfortable assessing the strengths and flaws of their functionality and user experience.

Selected contributors will participate in a six-month project that runs from December through May.

Here’s the entire collection of Recommended extensions, if you’re curious to explore what’s currently curated.

Thank you and we look forward to hearing from interested contributors by the 18 November application deadline!


Mozilla L10N: Introducing source string comments in Pontoon

Published on behalf of April Bowler.

When we first shipped the ability to add comments within Pontoon we mentioned that there were some additional features that would be coming, and I’m happy to announce that those features are now active.

What are those features, you say? Let’s have a look:

Mentions

If you have a comment that you need to ensure is seen by a particular person on the project, you can now ‘mention’ that person by typing the “@” symbol and their name. Once the comment is submitted, that person will be notified that they have been mentioned in a comment.

Pinned Comments

Project Managers can now pin comments within the Comments panel in the third column. This not only adds a visible pin to the comment, but also places the comment within the source string Metadata section in the middle column and makes it visible globally across all locales.

Request Context or Report Issue

Also present in the top section of the middle column is a new button that allows localizers to request more context or report an issue with the source string. When used, this button opens the Comments panel and inserts a mention for the project’s contact person, ensuring that they receive a notification about the comment.


The Request Context or Report Issue button allows localizers to ping Project Managers, who can pin their responses to make them globally visible for everyone.

Final Thoughts

As you can probably tell by the descriptions, all of these features work hand-in-hand with each other to help improve the workflow and communication within Pontoon.

For example, if you run into an ambiguous string, you can request context and the contact person will clarify it for you. If the meaning of the source string is not clear for the general audience, they can also pin their response, which will make it visible to everyone and even notify users who already translated or reviewed the string.

It has truly been a pleasure to work on this feature, first as part of my Outreachy internship and then as a contributor, and I hope that it has a positive impact on your work within Pontoon. I look forward to making continued contributions.

hacks.mozilla.org: MDN Web Docs evolves! Lowdown on the upcoming new platform

The time has come for Kuma — the platform that powers MDN Web Docs — to evolve. For quite some time now, the MDN developer team has been planning a radical platform change, and we are ready to start sharing the details of it. The question on your lips might be “What does a Kuma evolve into? A KumaMaMa?”

What? Kuma is evolving!

For those of you not so into Pokémon, the question might instead be “How exactly is MDN changing, and how does it affect MDN users and contributors?”

For general users, the answer is easy — there will be very little change to how we serve the great content you use every day to learn and do your jobs.

For contributors, the answer is a bit more complex.

The changes in a nutshell

In short, we are updating the platform to move the content from a MySQL database to being hosted in a GitHub repository (codename: Project Yari).

Congratulations! Your Kuma evolved into Yari

The main advantages of this approach are:

  • Less developer maintenance burden: The existing (Kuma) platform is complex and hard to maintain. Adding new features is very difficult. The update will vastly simplify the platform code — we estimate that we can remove a significant chunk of the existing codebase, meaning easier maintenance and contributions.
  • Better contribution workflow: We will be using GitHub’s contribution tools and features, essentially moving MDN from a Wiki model to a pull request (PR) model. This is so much better for contribution, allowing for intelligent linting, mass edits, and inclusion of MDN docs in whatever workflows you want to add them to (you can edit MDN source files directly in your favorite code editor).
  • Better community building: At the moment, MDN content edits are published instantly, and then reverted if they are not suitable. This is really bad for community relations. With a PR model, we can review edits and provide feedback, actually having conversations with contributors, building relationships with them, and helping them learn.
  • Improved front-end architecture: The existing MDN platform has a number of front-end inconsistencies and accessibility issues, which we’ve wanted to tackle for some time. The move to a new, simplified platform gives us a perfect opportunity to fix such issues.

The exact form of the platform is yet to be finalized, and we want to involve you, the community, in helping to provide ideas and test the new contribution workflow! We will have a beta version of the new platform ready for testing on November 2, and the first release will happen on December 14.

Simplified back-end platform

We are replacing the current MDN Wiki platform with a JAMStack approach, which publishes the content managed in a GitHub repo. This has a number of advantages over the existing Wiki platform, and is something we’ve been considering for a number of years.

Before we discuss our new approach, let’s review the Wiki model so we can better understand the changes we’re making.

Current MDN Wiki platform

 

Figure: workflow diagram of the old Kuma platform

It’s important to note that both content contributors (writers) and content viewers (readers) are served via the same architecture. That architecture has to accommodate both use cases, even though more than 99% of our traffic comprises document page requests from readers. Currently, when a document page is requested, the latest version of the document is read from our MySQL database, rendered into its final HTML form, and returned to the user via the CDN.

That document page is stored and served from the CDN’s cache for the next 5 minutes, so subsequent requests — as long as they’re within that 5-minute window — will be served directly by the CDN. That caching period of 5 minutes is kept deliberately short, mainly due to the fact that we need to accommodate the needs of the writers. If we only had to accommodate the needs of the readers, we could significantly increase the caching period and serve our document pages more quickly, while at the same time reducing the workload on our backend servers.

You’ll also notice that because MDN is a Wiki platform, we’re responsible for managing all of the content, and tasks like storing document revisions, displaying the revision history of a document, displaying differences between revisions, and so on. Currently, the MDN development team maintains a large chunk of code devoted to just these kinds of tasks.

New MDN platform

 

Figure: workflow diagram of the new Yari platform

With the new JAMStack approach, the writers are served separately from the readers. The writers manage the document content via a GitHub repository and pull request model, while the readers are served document pages more quickly and efficiently via pre-rendered document pages served from S3 via a CDN (which will have a much longer caching period). The document content from our GitHub repository will be rendered and deployed to S3 on a daily basis.

You’ll notice, from the diagram above, that even with this new approach, we still have a Kubernetes cluster with Django-based services relying on a relational database. The important thing to remember is that this part of the system is no longer involved with the document content. Its scope has been dramatically reduced, and it now exists solely to provide APIs related to user accounts (e.g. login) and search.

This separation of concerns has multiple benefits, the three most important of which are as follows:

  • First, the document pages are served to readers in the simplest, quickest, and most efficient way possible. That’s really important, because 99% of MDN’s traffic is for readers, and worldwide performance is fundamental to the user experience.
  • Second, because we’re using GitHub to manage our document content, we can take advantage of the world-class functionality that GitHub has to offer as a content management system, and we no longer have to support the large body of code related to our current Wiki platform. It can simply be deleted.
  • Third, and maybe less obvious, is that this new approach brings more power to the platform. We can, for example, perform automated linting and testing on each content pull request, which allows us to better control quality and security.

New contribution workflow

Because MDN content is soon to be contained in a GitHub repo, the contribution workflow will change significantly. You will no longer be able to click Edit on a page, make and save a change, and have it show up nearly immediately on the page. You’ll also no longer be able to do your edits in a WYSIWYG editor.

Instead, you’ll need to use git/GitHub tooling to make changes, submit pull requests, then wait for changes to be merged, the new build to be deployed, etc. For very simple changes such as fixing typos or adding new paragraphs, this may seem like a step back — Kuma is certainly convenient for such edits, and for non-developer contributors.

However, making a simple change is arguably no more complex with Yari. You can use the GitHub UI’s edit feature to directly edit a source file and then submit a PR, meaning that you don’t have to be a git genius to contribute simple fixes.

For more complex changes, you’ll need to use the git CLI tool, or a GUI tool like GitHub Desktop, but then again git is such a ubiquitous tool in the web industry that it is safe to say that if you are interested in editing MDN, you will probably need to know git to some degree for your career or course. You could use this as a good opportunity to learn git if you don’t know it already! On top of that there is a file system structure to learn, and some new tools/commands to get used to, but nothing terribly complex.

Another possible challenge to mention is that you won’t have a WYSIWYG to instantly see what the page looks like as you add your content, and in addition you’ll be editing raw HTML, at least initially (we are talking about converting the content to markdown eventually, but that is a bit of a ways off). Again, this sounds like a step backwards, but we are providing a tool inside the repo so that you can locally build and preview the finished page to make sure it looks right before you submit your pull request.

Looking at the advantages now, consider that making MDN content available as a GitHub repo is a very powerful thing. We no longer have to deal with spam content going live on the site and then reverting the changes after the fact. You are also free to edit MDN content in whatever way suits you best — your favorite IDE or code editor — and you can add MDN documentation into your preferred toolchain (and write your own tools to enhance your MDN editing experience). A lot of engineers have told us in the past that they’d be much happier to contribute to MDN documentation if they were able to submit pull requests, and not have to use a WYSIWYG!

We are also looking into a powerful toolset that will allow us to enhance the reviewing process, for example as part of a CI process — automatically detecting and closing spam PRs, and as mentioned earlier on, linting pages once they’ve been edited, and delivering feedback to editors.

Having MDN in a GitHub repo also offers much easier mass edits; blanket content changes have previously been very difficult.

Finally, the “time to live” should be acceptable — we are aiming to have a quick turnaround on the reviews, and the deployment process will be repeated every 24 hours. We think that your changes should be live on the site in 48 hours as a worst case scenario.

Better community building

Currently MDN is not a very lively place in terms of its community. We have a fairly active learning forum where people ask beginner coding questions and seek help with assessments, but there is not really an active place where MDN staff and volunteers get together regularly to discuss documentation needs and contributions.

Part of this is down to our contribution model. When you edit an MDN page, either your contribution is accepted and you don’t hear anything, or your contribution is reverted and you … don’t hear anything. You’ll only know either way by looking to see if your edit sticks, is counter-edited, or is reverted.

This doesn’t strike us as very friendly, and I think you’ll probably agree. When we move to a git PR model, the MDN community will be able to provide hands-on assistance in helping people to get their contributions right — offering assistance as we review their PRs (and offering automated help too, as mentioned previously) — and also thanking people for their help.

It’ll also be much easier for contributors to show how many contributions they’ve made, and we’ll be adding in-page links to allow people to file an issue on a specific page or even go straight to the source on GitHub and fix it themselves, if a problem is encountered.

In terms of finding a good place to chat about MDN content, you can join the discussion on the MDN Web Docs chat room on Matrix.

Improved front-end architecture

The old Kuma architecture has a number of front-end issues. Historically we have lacked a well-defined system that clearly describes the constraints we need to work within, and what our site features look like, and this has led to us ending up with a bloated, difficult to maintain front-end code base. Working on our current HTML and CSS is like being on a roller coaster with no guard-rails.

To be clear, this is not the fault of any one person, or any specific period in the life of the MDN project. There are many little things that have been left to fester, multiply, and rot over time.

Among the most significant problems are:

  • Accessibility: There are a number of accessibility problems with the existing architecture that really should be sorted out, but were difficult to get a handle on because of Kuma’s complexity.
  • Component inconsistency: Kuma doesn’t use a proper design system — similar items are implemented in different ways across the site, so implementing features is more difficult than it needs to be.

When we started to move forward with the back-end platform rewrite, it felt like the perfect time to again propose the idea of a design system. After many conversations leading to an acceptable compromise being reached, our design system — MDN Fiori — was born.

Front-end developer Schalk Neethling and UX designer Mustafa Al-Qinneh took a whirlwind tour through the core of MDN’s reference docs to identify components and document all the inconsistencies we are dealing with. As part of this work, we also looked for areas where we can improve the user experience, and introduce consistency through making small changes to some core underlying aspects of the overall design.

This included a defined color palette, simple, clean typography based on a well-defined type scale, consistent spacing, improved support for mobile and tablet devices, and many other small tweaks. This was never meant to be a redesign of MDN, so we had to be careful not to change too much. Instead, we played to our existing strengths and made rogue styles and markup consistent with the overall project.

Besides the visual consistency and general user experience aspects, our underlying codebase needed some serious love and attention — we decided on a complete rethink. Early on in the process it became clear that we needed a base library that was small, nimble, and minimal. Something uniquely MDN, but that could be reused wherever the core aspects of the MDN brand were needed. For this purpose we created MDN-Minimalist, a small set of core atoms that power the base styling of MDN, in a progressively enhanced manner, taking advantage of the beautiful new layout systems we have access to on the web today.

Each component that is built into Yari is styled with MDN-Minimalist, and also has its own style sheet that lives right alongside to apply further styles only when needed. This is an evolving process as we constantly rethink how to provide a great user experience while staying as close to the web platform as possible. The reason for this is twofold:

  • First, it means less code. It means less reinventing of the wheel. It means a faster, leaner, less bandwidth-hungry MDN for our end users.
  • Second, it helps address some of the accessibility issues we have begrudgingly been living with for some time, which are simply not acceptable on a modern web site. One of Mozilla’s accessibility experts, Marco Zehe, has given us a lot of input to help overcome these. We won’t fix everything in our first iteration, but our pledge to all of our users is that we will keep improving and we welcome your feedback on areas where we can improve further.

A wise person once said that the best way to ensure something is done right is to make doing the right thing the easy thing to do. As such, along with all of the work already mentioned, we are documenting our front-end codebase, design system, and pattern library in Storybook (see Storybook files inside the yari repo) with companion design work in Figma (see typography example) to ensure there is an easy, public reference for anyone who wishes to contribute to MDN from a code or design perspective. This in itself is a large project that will evolve over time. More communication about its evolution will follow.

The future of MDN localization

One important part of MDN’s content that we have talked about a lot during the planning phase is the localized content. As you probably already know, MDN offers facilities for translating the original English content and making the localizations available alongside it.

This is good in principle, but the current system has many flaws. When an English page is moved, the localizations all have to be moved separately, so pages and their localizations quite often go out of sync and get in a mess. And a bigger problem is that there is no easy way of signalling that the English version has changed to all the localizers.

General management is probably the most significant problem. You often get a wave of enthusiasm for a locale, and lots of translations done. But then after a number of months interest wanes, and no-one is left to keep the translations up to date. The localized content becomes outdated, which is often harmful to learning, becomes a maintenance time-suck, and as a result, is often considered worse than having no localizations at all.

Note that we are not saying this is true of all locales on MDN, and we are not trying to downplay the amount of work volunteers have put into creating localized content. For that, we are eternally grateful. But the fact remains that we can’t carry on like this.

We did a bunch of research, and talked to a lot of non-native-English speaking web developers about what would be useful to them. We drew two interesting conclusions:

  1. We stand to experience a significant but manageable loss of users if we remove or reduce our localization support. 8 languages cover 90% of the accept-language headers received from MDN users (en, zh, es, ja, fr, ru, pt, de), while 14 languages cover 95% of those headers. We estimate that we would lose at most 19% of our traffic if we dropped L10n entirely.
  2. Machine translations are an acceptable solution in most cases, if not a perfect one. We looked at the quality of translations provided by automated solutions such as Google Translate and asked some community members to compare these translations to manual ones. The machine translations were imperfect, and sometimes hard to understand, but many people commented that an imperfect translation that is up-to-date is better than a perfect one that is out-of-date. We appreciate that some languages (such as CJK languages) fare less well than others with automated translations.

So what did we decide? With the initial release of the new platform, we are planning to include all translations of all of the current documents, but in a frozen state. Translations will exist in their own mdn/translated-content repository, to which we will not accept any pull requests. The translations will be shown with a special header that says “This is an archived translation. No more edits are being accepted.” This is a temporary stage until we figure out the next step.

Note: In addition, the text of the UI components and header menu will be in English only, going forward. They will not be translated, at least not initially.

After the initial release, we want to work with you, the community, to figure out the best course of action to move forward with for translations. We would ideally rather not lose localized content on MDN, but we need to fix the technical problems of the past, manage it better, and ensure that the content stays up-to-date.

We will be planning the next phase of MDN localization with the following guiding principles:

  • We should never have outdated localized content on MDN.
  • Manually localizing all MDN content in a huge range of locales seems infeasible, so we should drop that approach.
  • Losing ~20% of traffic is something we should avoid, if possible.

We are making no promises about deliverables or time frames yet, but we have started to think along these lines:

  • Cut down the number of locales we are handling to the top 14 locales that give us 95% of our recorded accept-language headers.
  • Initially include non-editable Machine Learning-based automated translations of the “tier-1” MDN content pages (i.e. a set of the most important MDN content that excludes the vast long tail of articles that get no, or nearly no views). Ideally we’d like to use the existing manual translations to train the Machine Learning system, hopefully getting better results. This is likely to be the first thing we’ll work on in 2021.
  • Regularly update the automated translations as the English content changes, keeping them up-to-date.
  • Start to offer a system whereby we allow community members to improve the automated translations with manual edits. This would require the community to ensure that articles are kept up-to-date with the English versions as they are updated.

Sign up for the beta test

As we mentioned earlier, we are going to launch a beta test of the new MDN platform on November 2nd. We would love your help in testing out the new system and contribution workflow, letting us know what aspects seem good and what aspects seem painful, and suggesting improvements.

If you want to be notified when the new system is ready for testing, please let us know using this form.

Acknowledgements

I’d like to thank my colleagues Schalk Neethling, Ryan Johnson, Peter Bengtsson, Rina Tambo Jensen, Hermina Condei, Melissa Thermidor, and anyone else I’ve forgotten who helped me polish this article with bits of content, feedback, reviews, edits, and more.


Mozilla Add-ons Blog: Extensions in Firefox 83

In addition to our brief update on extensions in Firefox 83, this post contains information about changes to the Firefox release calendar and a feature preview for Firefox 84.

Thanks to a contribution from Richa Sharma, the error message logged when a tabs.sendMessage call is passed an invalid tab ID is now much easier to understand. It had regressed to a generic message due to a previous refactoring.

End of Year Release Calendar

The end of 2020 is approaching (yay?), and as usual people will be taking time off and will be less available. To account for this, the Firefox Release Calendar has been updated to extend the Firefox 85 release cycle by 2 weeks. We will release Firefox 84 on 15 December and Firefox 85 on 26 January. The regular 4-week release cadence should resume after that.

Coming soon in Firefox 84: Manage Optional Permissions in Add-ons Manager

Starting with Firefox 84, currently available on the Nightly pre-release channel, users will be able to manage optional permissions of installed extensions from the Firefox Add-ons Manager (about:addons).

Optional permissions in about:addons

We recommend that extensions using optional permissions listen for the browser.permissions.onAdded and browser.permissions.onRemoved API events. This ensures the extension is aware of the user granting or revoking optional permissions.
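
As a rough sketch of what that might look like in practice (the “downloads” permission and the helper function here are illustrative assumptions, not something prescribed by this post):

// Toggle a hypothetical feature when the optional "downloads"
// permission is granted or revoked from about:addons (or elsewhere).
function updateDownloadsFeature(enabled) {
  console.log(enabled ? 'Downloads feature enabled' : 'Downloads feature disabled');
}

browser.permissions.onAdded.addListener((permissions) => {
  if ((permissions.permissions || []).includes('downloads')) {
    updateDownloadsFeature(true);
  }
});

browser.permissions.onRemoved.addListener((permissions) => {
  if ((permissions.permissions || []).includes('downloads')) {
    updateDownloadsFeature(false);
  }
});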


SeaMonkey: SeaMonkey 2.53.5 Beta 1

Hi All,

Just want to drop a simple note announcing to the world that SeaMonkey 2.53.5 beta 1 has been released.

We, at the project, hope that everyone is staying vigilant against this pandemic.

Please do keep safe and healthy.

:ewong

 

Mozilla L10N: L10n Report: October 2020 Edition

New content and projects

What’s new or coming up in Firefox desktop

Upcoming deadlines:

  • Firefox 83 is currently in beta and will be released on November 17. The deadline to update localization is on November 8 (see the previous l10n report to understand why it moved closer to the release date).
  • There might be changes to the release schedule in December. If that happens, we’ll make sure to cover them in the upcoming report.

The number of new strings remains pretty low, but there was a change that landed without new string IDs: English switched from a hyphen (-) to an em dash (–) as the separator for window titles. Since this is a choice that belongs to each locale, and the current translation might already be correct, we decided not to invalidate all existing translations, and to notify localizers instead.

You can see the details of the strings that changed in this changeset. The full list of IDs, in case you want to search for them in Pontoon:

  • browser-main-window
  • browser-main-window-mac
  • page-info-page
  • page-info-frame
  • webrtc-indicator-title
  • tabs.containers.tooltip
  • webrtcIndicator.windowtitle
  • timeline.cssanimation.nameLabel
  • timeline.csstransition.nameLabel
  • timeline.scriptanimation.nameLabel
  • toolbox.titleTemplate1
  • toolbox.titleTemplate2
  • TitleWithStatus
  • profileTooltip

Alternatively, you can search for “ – ” in Pontoon, but you’ll have to skim through a lot of results, since Pontoon also searches in comments.

What’s new or coming up in mobile

Firefox for iOS v29, as well as iOS v14, both recently shipped – bringing with them the ability to make Firefox your default browser for the first time ever!

v29 also introduced a Firefox homescreen widget, and more widgets will likely come soon. Congratulations to all for helping localize these awesome new features!

v30 strings have just recently been exposed on Pontoon, and the deadline for l10n strings completion is November 4th. Screenshots will be updated for testing very soon.

Firefox for Android (“Fenix”) is currently open for localizing v83 strings. The next couple of weeks should be used for completing your current pending strings, as well as testing your work for the release. Take a look at our updated docs here!

What’s new or coming up in web projects

Mozilla.org

Recently, the monthly WNP (What’s New Page) pages that went out with Firefox releases contained content promoting features that were not available in global markets, or campaign messages that targeted a select few markets. In those situations, for all other locales, the page was redirected to the evergreen WNP page. As a result, please make completing the page a high priority if your locale has not done so yet.

More pages were added and migrated to Fluent format. For migrated content, the web team has decided not to make any edits until there is a major content update or page layout redesign. Please take the time to resolve errors first, as the page may already have been activated. If a placeable error is not fixed, it will show up in production.

Common Voice & WebThings Gateway

Both projects now have a new point of contact. If you want to add a new language or have any questions about the strings, send an email directly to that person first. Follow the projects’ latest development through their channels on Discourse. The l10n-drivers will continue to provide support to both teams through the Pontoon platform.

What’s new or coming up in SuMo

Please help us localize the following articles for Firefox 82 (desktop and Android):

What’s new or coming up in Pontoon

Spring cleaning on the road to Django 3. Our new contributor Philipp started the process of upgrading Django to the latest release. In a period of 12 days, he landed 12(!) patches, ranging from library updates and replacing out-of-date libraries with native Python and Django capabilities to making our testing infrastructure more consistent and dropping unused code. Well done, Philipp! Thanks to Axel and Jotes for the reviews.

Newly published localizer facing documentation

Firefox for Android docs have been updated to reflect the recent changes introduced by our migration to the new “Fenix” browser. We invite you to take a look as there are many changes to the Firefox for Android localization workflow.

Events

Kudos to these longtime Mozillians who found creative ways to spread the word, promoting their languages and sharing their experience by taking to the public airwaves.

  • Wim of Frisian was interviewed by a local radio station, where he shared his passion for localizing Mozilla projects. He took the opportunity to promote Common Voice and call for volunteers speaking Frisian. Since the story aired, Frisian has seen an increase of 11 hours of spoken clips and an increase from 250 to 400 people donating their voices. The interview will air monthly.
  • Quentin of Occitan made an appearance in this local news story in the south of France, talking about using Pontoon to localize Mozilla products in Occitan.

Want to showcase an event coming up that your community is participating in? Reach out to any l10n-driver and we’ll include it (see links to emails at the bottom of this report).

Friends of the Lion

Image by Elio Qoshi

  • Iskandar of the Indonesian community, for his huge efforts completing Firefox for Android (Fenix) localization in recent months.
  • Dian Ina and Andika of the Indonesian community, for their huge efforts completing the Thunderbird localization in recent months!

Know someone in your l10n community who’s been doing a great job and should appear here? Contact one of the l10n-drivers and we’ll make sure they get a shout-out (see list at the bottom)!

Useful Links

Questions? Want to get involved?

Did you enjoy reading this report? Let us know how we can improve by reaching out to any one of the l10n-drivers listed above.

hacks.mozilla.org: MDN Web Docs: Editorial strategy and community participation

We’ve made a lot of progress on moving forward with MDN Web Docs in the last couple of months, and we wanted to share where we are headed in the short- to mid-term, starting with our editorial strategy and renewed efforts around community participation.

New editorial strategy

Our updated editorial strategy has two main parts: the creation of content pillars and an editorial calendar.

The MDN writers’ team has always been responsible for keeping the MDN web platform reference documentation up-to-date, including key areas such as HTML, CSS, JavaScript, and Web APIs. We are breaking these key areas up into “content pillars”, which we will work on in turn to make sure that the significant new web platform updates are documented each month.

Note: This also means that we can start publishing our Firefox developer release notes again, so you can keep abreast of what we’re supporting in each new version, as well as Mozilla Hacks posts to give you further insights into what we are up to in Firefox engineering.

We will also be creating and maintaining an editorial calendar in association with our partners on the Product Advisory Board — and the rest of the community that has input to provide — which will help us prioritize general improvements to MDN documentation going forward. For example, we’d love to create more complete documentation on important web platform-related topics such as accessibility, performance, and security.

MDN will work with domain experts to help us update these docs, as well as enlist help from you and the rest of our community — which is what we want to talk about for the rest of this post.

Community call for participation

There are many day-to-day tasks that need to be done on MDN, including moderating content, answering queries on the Discourse forums, and helping to fix user-submitted content bugs. We’d love you to help us out with these tasks.

To this end, we’ve rewritten our Contributing to MDN pages so that it is simpler to find instructions on how to perform specific atomic tasks that will help burn down MDN backlogs. The main tasks we need help with at the moment are:

We hope these changes will help revitalize the MDN community into an even more welcoming, inclusive place where anyone can feel comfortable coming and getting help with documentation, or with learning new technologies or tools.

If you want to talk to us, ask questions, and find out more, join the discussion on the MDN Web Docs chat room on Matrix. We are looking forward to talking to you.

Other interesting developments

There are some other interesting projects that the MDN team is working hard on right now; we’ll provide deeper dives into them in future blog posts. We’ll keep it brief here.

Platform evolution — MDN content moves to GitHub

For quite some time now, the MDN developer team has been planning a radical platform change, and we are ready to start sharing details of it. In short, we are updating the platform to move from a Wiki approach with the content in a MySQL database, to a JAMStack approach with the content being hosted in a Git repository (codename: Project Yari).

This will not affect end users at all, but the MDN developer team and our content contributors will see many benefits, including a better contribution workflow (via GitHub), better ways in which we can work with our community, and a simplified, easier-to-maintain platform architecture. We will talk more about this in the next blog post!

Web DNA 2020

The 2019 Web Developer Needs Assessment (Web DNA) is a ground-breaking piece of research that has already helped to shape the future of the web platform, with input from more than 28,000 web developers helping to identify the top pain points with developing for the web.

The Web DNA will be run again in 2020, in partnership with Google, Microsoft, and several other stakeholders providing input into the form of the questions for this year. We launched the survey on October 12, and this year’s report is due out before the end of the year.


Blog of Data: This Week in Glean: Cross-Platform Language Binding Generation with Rust and “uniffi”

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean. You can find an index of all TWiG posts online.)

As the Glean SDK continues to expand its features and functionality, it has also continued to expand the number and types of consumers within the Mozilla ecosystem that rely on it for collection and transport of important metrics.  On this particular adventure, I find myself once again working on one of these components that tie into the Glean ecosystem.  In this case, it has been my work on the Nimbus SDK that has inspired this story.

Nimbus is our new take on a rapid experimentation platform, or a way to try out new features in our applications for subsets of the population of users in a way in which we can measure the impact.  The idea is to find out what our users like and use so that we can focus our efforts on the features that matter to them.  Like Glean, Nimbus is a cross-platform client SDK intended to be used on Android, iOS, and all flavors of Desktop OS that we support.  Also like Glean, this presented us with all of the challenges that you would normally encounter when creating a cross-platform library.  Unlike Glean, Nimbus was able to take advantage of some tooling that wasn’t available when we started Glean, namely: uniffi.

So what is uniffi?  It’s a multi-language bindings generator for Rust.  What exactly does that mean?  Typically you would have to write something in Rust and create a hand-written Foreign Function Interface (FFI) layer also in Rust.  On top of that, you also end up creating a hand-written wrapper in each and every language that is supported.  Instead, uniffi does most of the work for us by generating the plumbing necessary to transport data across the FFI, including the specific language bindings, making it a little easier to write things once and a lot easier to maintain multiple supported languages.  With uniffi we can write the code once in Rust, and then generate the code we need to be able to reuse these components in whatever language (currently supporting Kotlin, Swift and Python with C++ and JS coming soon) and on whatever platform we need.

So how does uniffi work?  The magic of uniffi works through generating a cdylib crate from the Rust code.  The interface is defined in a separate file through an Interface Description Language (IDL), specifically, a variant of WebIDL.  Then, using the uniffi-bindgen tool, we can scaffold the Rust side of the FFI and build our Rust code as we normally would, producing a shared library.  Back to uniffi-bindgen again to then scaffold the language bindings side of things, either Kotlin, Swift, or Python at the moment, with JS and C++ coming soon.  This leaves us with platform specific libraries that we can include in applications that call through the FFI directly into the Rust code at the core of it all.

There are some limitations to what uniffi can accomplish, but for most purposes it handles the job quite well.  In the case of Nimbus, it worked amazingly well because Nimbus was written keeping uniffi language binding generation in mind (and uniffi was written with Nimbus in mind).  As part of playing around with uniffi, I also experimented with how we could leverage it in Glean. It looks promising for generating things like our metric types, but we still have some state in the language binding layer that probably needs to be moved into the Rust code before Glean could move to using uniffi.  Cutting down on all of the handwritten code is a huge advantage because the Glean metric types require a lot of boilerplate code that is basically duplicated across all of the different languages we support.  Being able to keep this down to just the Rust definitions and IDL, and then generating the language bindings would be a nice reduction in the burden of maintenance.  Right now if we make a change to a metric type in Glean, we have to touch every single language binding: Rust, Kotlin, Swift, Python, C#, etc.

Looking back at Nimbus, uniffi does save a lot on overhead since we can write almost everything in Rust.  We will have a little bit of functionality implemented at the language layer, namely a callback that is executed after receiving and processing the reply from the server, the threading implementation that ensures the networking is done in a background thread, and the integration with Glean (at least until the Glean Rust API is available).  All of these are ultimately things that could be done in Rust as uniffi’s capabilities grow, making the language bindings basically just there to expose the API.  Right now, Nimbus only has a Kotlin implementation in support of our first customer, Fenix, but when it comes time to start supporting iOS and desktop, it should be as simple as just generating the bindings for whatever language that we want (and that uniffi supports).

Having worked on cross-platform tools for the last two years now, I can really appreciate the awesome power of being able to leverage the same client SDK implementation across multiple platforms.  Not only does this come as close as possible to giving you the same code driving the way something works across all platforms, it makes it a lot easier to trust that things like Glean collect data the same way across different apps and platforms and that Nimbus is performing randomization calculations the same across platforms and apps.  I have worked with several cross-platform technologies in my career like Xamarin or Apache Cordova, but Rust really seems to work better for this without as much of the overhead.  This is especially true with tools like uniffi to facilitate unlocking the cross-platform potential.  So, in conclusion, if you are responsible for cross-platform applications or libraries or are interested in creating them, I strongly urge you to think about Rust (there’s no way I have time to go into all the cool things Rust does…) and tools like uniffi to make that easier for you.  (If uniffi doesn’t support your platform/language yet, then I’m also happy to report that it is accepting contributions!)

The Mozilla Blog: Mozilla Reaction to U.S. v. Google

Today the US Department of Justice (“DOJ”) filed an antitrust lawsuit against Google, alleging that it unlawfully maintains monopolies through anticompetitive and exclusionary practices in the search and search advertising markets. While we’re still examining the complaint, our initial impressions are outlined below.

Like millions of everyday internet users, we share concerns about how Big Tech’s growing power can deter innovation and reduce consumer choice. We believe that scrutiny of these issues is healthy, and critical if we’re going to build a better internet. We also know from firsthand experience there is no overnight solution to these complex issues. Mozilla’s origins are closely tied to the last major antitrust case against Microsoft in the nineties.

In this new lawsuit, the DOJ referenced Google’s search agreement with Mozilla as one example of Google’s monopolization of the search engine market in the United States. Small and independent companies such as Mozilla thrive by innovating, disrupting and providing users with industry leading features and services in areas like search. The ultimate outcomes of an antitrust lawsuit should not cause collateral damage to the very organizations – like Mozilla – best positioned to drive competition and protect the interests of consumers on the web.

For the past 20 years, Mozilla has been leading the fight for competition, innovation and consumer choice in the browser market and beyond. We have a long track record of creating innovative products and services that respect the privacy and security of consumers, and have successfully pushed the market to follow suit.

Unintended harm to smaller innovators from enforcement actions will be detrimental to the system as a whole, without any meaningful benefit to consumers — and is not how anyone will fix Big Tech. Instead, remedies must look at the ecosystem in its entirety, and allow the flourishing of competition and choice to benefit consumers.

We’ll be sharing updates as this matter proceeds.


hacks.mozilla.org: Coming through with Firefox 82

As October ushers in the tail-end of the year, we are pushing Firefox 82 out the door. This time around we finally enable support for the Media Session API, provide some new CSS pseudo-selector behaviours, close some security loopholes involving the Window.name property, and provide inspection for server-sent events in our developer tools.

This blog post provides merely a set of highlights; for all the details, check out the following:

Inspecting server-sent events

Server-sent events allow for an inversion of the traditional client-initiated web request model, with a server sending new data to a web page at any time by pushing messages. In this release we’ve added the ability to inspect server-sent events and their message contents using the Network Monitor.

You can go to the Network Monitor, select the file that is sending the server-sent events, and view the received messages in the Response tab on the right-hand panel.
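
If you haven’t worked with server-sent events before, here is a minimal client-side sketch; the /updates endpoint is a placeholder made up for this example. Messages received over a connection like this are what show up in the new Response tab view:

// Minimal client-side sketch of server-sent events.
// The '/updates' endpoint is a placeholder for illustration only.
const source = new EventSource('/updates');

// Fired for messages sent without an explicit event type.
source.onmessage = (event) => {
  console.log('New message:', event.data);
};

source.onerror = () => {
  console.error('The event stream connection was interrupted');
};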

For more information, check out our Inspecting server-sent events guide.

Web platform updates

Now let’s look at the web platform additions we’ve got in store in 82.

Media Session API

The Media Session API enables two main sets of functionality:

  1. First of all, it provides a way to customize media notifications. It does this by providing metadata for display by the operating system for the media your web app is playing.
  2. Second, it provides event handlers that the browser can use to access platform media keys such as hardware keys found on keyboards, headsets, remote controls, and software keys found in notification areas and on lock screens of mobile devices. So you can seamlessly control web-provided media via your device, even when not looking at the web page.

The code below provides an overview of both of these in action:

if ('mediaSession' in navigator) {
  navigator.mediaSession.metadata = new MediaMetadata({
    title: 'Unforgettable',
    artist: 'Nat King Cole',
    album: 'The Ultimate Collection (Remastered)',
    artwork: [
      { src: 'https://dummyimage.com/96x96',   sizes: '96x96',   type: 'image/png' },
      { src: 'https://dummyimage.com/128x128', sizes: '128x128', type: 'image/png' },
      { src: 'https://dummyimage.com/192x192', sizes: '192x192', type: 'image/png' },
      { src: 'https://dummyimage.com/256x256', sizes: '256x256', type: 'image/png' },
      { src: 'https://dummyimage.com/384x384', sizes: '384x384', type: 'image/png' },
      { src: 'https://dummyimage.com/512x512', sizes: '512x512', type: 'image/png' },
    ]
  });

  navigator.mediaSession.setActionHandler('play', function() { /* Code excerpted. */ });
  navigator.mediaSession.setActionHandler('pause', function() { /* Code excerpted. */ });
  navigator.mediaSession.setActionHandler('seekbackward', function() { /* Code excerpted. */ });
  navigator.mediaSession.setActionHandler('seekforward', function() { /* Code excerpted. */ });
  navigator.mediaSession.setActionHandler('previoustrack', function() { /* Code excerpted. */ });
  navigator.mediaSession.setActionHandler('nexttrack', function() { /* Code excerpted. */ });
}

Let’s consider what this could look like to a web user — say they are playing music through a web app like Spotify or YouTube. With the first block of code above we can provide metadata for the currently playing track that can be displayed on a system notification, on a lock screen, etc.

The second block of code illustrates that we can set special action handlers, which work the same way as event handlers but fire when the equivalent action is performed at the OS-level. This could include for example when a keyboard play button is pressed, or a skip button is pressed on a mobile lock screen.

The aim is to allow users to know what’s playing and to control it, without needing to open the specific web page that launched it.

What’s in a Window.name?

The Window.name property is used to get or set the name of the window’s current browsing context — this is used primarily for setting targets for hyperlinks and forms. Previously, one issue was that when a page from a different domain was loaded into the same tab, it could access any information stored in Window.name, which could create a security problem if that information was sensitive.

To close this hole, Firefox 82 and other modern browsers will reset Window.name to an empty string if a tab loads a page from a different domain, and restore the name if the original page is reloaded (e.g. by selecting the “back” button).

This could potentially surface some issues — historically Window.name has also been used in some frameworks for providing cross-domain messaging (e.g. SessionVars and Dojo’s dojox.io.windowName) as a more secure alternative to JSONP. This is not the intended purpose of Window.name, however, and there are safer/better ways of sharing information between windows, such as Window.postMessage().
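
For illustration, here is a minimal sketch of sharing data between windows with postMessage rather than Window.name; the origins and the ‘ready’ handshake are assumptions made up for the example, not part of any particular site:

// A page on https://example.org opens a widget hosted on https://example.com
// and sends it data via postMessage (both origins are placeholders).
// The widget is assumed to post a 'ready' message once it has loaded.
const widget = window.open('https://example.com/widget', 'widget');

window.addEventListener('message', (event) => {
  // Only trust messages coming from the widget's origin.
  if (event.origin !== 'https://example.com') return;

  if (event.data === 'ready') {
    // Name the target origin explicitly instead of using '*'.
    widget.postMessage({ user: 'demo' }, 'https://example.com');
  }
});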

CSS highlights

We’ve got a couple of interesting CSS additions in Firefox 82.

To start with, we’ve introduced the standard ::file-selector-button pseudo-element, which allows you to select and style the file selection button inside <input type="file"> elements.

So something like this is now possible:

input[type=file]::file-selector-button {
  border: 2px solid #6c5ce7;
  padding: .2em .4em;
  border-radius: .2em;
  background-color: #a29bfe;
  transition: 1s;
}
 
input[type=file]::file-selector-button:hover {
  background-color: #81ecec;
  border: 2px solid #00cec9;
}

Note that this was previously handled by the proprietary ::-webkit-file-upload-button pseudo, but all browsers should hopefully be following suit soon enough.

We also wanted to mention that the :is() and :where() pseudo-classes have been updated so that their error handling is more forgiving — a single invalid selector in the provided list of selectors will no longer make the whole rule invalid. It will just be ignored, and the rule will apply to all the valid selectors present.
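
As a quick illustrative sketch (the :unknown-thing pseudo-class is deliberately invalid):

/* The invalid :unknown-thing selector is simply ignored, and the rule
   still applies to <header> and <footer> elements. */
:is(header, footer, :unknown-thing) {
  border-top: 2px solid #6c5ce7;
}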

WebExtensions

Starting with Firefox 82, language packs will be updated in tandem with Firefox updates. Users with an active language pack will no longer have to deal with the hassle of defaulting back to English while the language pack update is pending delivery.

Take a look at the Add-ons Blog for more updates to the WebExtensions API in Firefox 82!


about:community: New Contributors, Firefox 82

With Firefox 82 hot off the byte presses, we are pleased to welcome the developers whose first code contributions shipped in this release, 18 of whom were new volunteers! Please join us in thanking each of them for their persistence and enthusiasm, and take a look at their contributions:

Open Policy & Advocacy: Mozilla Mornings on addressing online harms through advertising transparency

On 29 October, Mozilla will host the next installment of Mozilla Mornings – our regular breakfast series that brings together policy experts, policymakers and practitioners for insight and discussion on the latest EU digital policy developments.

A key focus of the upcoming Digital Services Act and European Democracy Action Plan initiatives is platform transparency – transparency about content curation, commercial practices, and data use to name a few. This installment of Mozilla Mornings will focus on transparency of online advertising, and in particular, how mechanisms for greater transparency of ad placement and ad targeting could mitigate the spread and impact of illegal and harmful content online.

As the European Commission prepares to unveil a series of transformative legislative proposals on these issues, the discussion promises to be timely and insightful.

Speakers
  • Daniel Braun, Deputy Head of Cabinet of Ms. Věra Jourová, European Commission Vice-President
  • Karolina Iwańska, Lawyer and Policy Analyst, Panoptykon Foundation
  • Sam Jeffers, Co-Founder and Executive Director, Who Targets Me

With opening remarks by Raegan MacDonald, Head of Public Policy, Mozilla Corporation

Moderated by Jennifer Baker, EU Tech Journalist


Logistical information

29 October, 2020
10:30-12:00 CET
Zoom Webinar (conferencing details to be provided on morning of event)

Register your attendance here


hacks.mozilla.org: A New Backend for Cranelift, Part 1: Instruction Selection

This post will describe my recent work on Cranelift as part of my day job at Mozilla. In this post, I will set some context and describe the instruction selection problem. In particular, I’ll talk about a revamp to the instruction selector and backend framework in general that we’ve been working on for the last nine months or so. This work has been co-developed with my brilliant colleagues Julian Seward and Benjamin Bouvier, with significant early input from Dan Gohman as well, and help from all of the wonderful Cranelift hackers.

Background: Cranelift

So what is Cranelift?

The project is a compiler framework written in Rust that is designed especially (but not exclusively) for just-in-time compilation. It’s a general-purpose compiler: its most popular use-case is to compile WebAssembly, though several other frontends exist, for example, cg_clif, which adapts the Rust compiler itself to use Cranelift. Folks at Mozilla and several other places have been developing the compiler for a few years now. It is the default compiler backend for wasmtime, a runtime for WebAssembly outside the browser, and is used in production in several other places as well.

We recently flipped the switch to turn on Cranelift-based WebAssembly support in nightly Firefox on ARM64 (AArch64) machines, including most smartphones, and if all goes well, it will eventually go out in a stable Firefox release. Cranelift is developed under the umbrella of the Bytecode Alliance.

In the past nine months, we have built a new framework in Cranelift for the “machine backends”, or the parts of the compiler that support particular CPU instruction sets. We also added a new backend for AArch64, mentioned above, and filled out features as needed until Cranelift was ready for production use in Firefox. This blog post sets some context and describes the design process that went into the backend-framework revamp.

Old Backend Design: Instruction Legalizations

To understand the work that we’ve done recently on Cranelift, we’ll need to zoom into the cranelift_codegen crate and talk about how it used to work. What is this “CLIF” input, and how does the compiler translate it to machine code that the CPU can execute?

Cranelift makes use of CLIF, or the Cranelift IR (Intermediate Representation) Format, to represent the code that it is compiling. Every compiler that performs program optimizations uses some form of an Intermediate Representation (IR): you can think of this like a virtual instruction set that can represent all the operations a program is allowed to do. The IR is typically simpler than real instruction sets, designed to use a small set of well-defined instructions so that the compiler can easily reason about what a program means.

The IR is also independent of the CPU architecture that the compiler eventually targets; this lets much of the compiler (such as the part that generates IR from the input programming language, and the parts that optimize the IR) be reused whenever the compiler is adapted to target a new CPU architecture. CLIF is in Static Single Assignment (SSA) form, and uses a conventional control-flow graph with basic blocks (though it previously allowed extended basic blocks, these have been phased out). Unlike many SSA IRs, it represents φ-nodes with block parameters rather than explicit φ-instructions.

Within cranelift_codegen, before we revamped the backend design, the program remained in CLIF throughout compilation and up until the compiler emitted the final machine code. This might seem to contradict what we just said: how can the IR be machine-independent, but also be the final form from which we emit machine code?

The answer is that the old backends were built around the concept of “legalization” and “encodings”. At a high level, the idea is that every Cranelift instruction either corresponds to one machine instruction, or can be replaced by a sequence of other Cranelift instructions. Given such a mapping, we can refine the CLIF in steps, starting from arbitrary machine-independent instructions from earlier compiler stages, performing edits until the CLIF corresponds 1-to-1 with machine code. Let’s visualize this process:

Figure: legalization by repeated instruction expansion

A very simple example of a CLIF instruction that has a direct “encoding” to a machine instruction is iadd, which just adds two integers. On essentially any modern architecture, this should map to a simple ALU instruction that adds two registers.

On the other hand, many CLIF instructions do not map cleanly. Some arithmetic instructions fall into this category: for example, there is a CLIF instruction to count the number of set bits in an integer’s binary representation (popcount); not every CPU has a single instruction for this, so it might be expanded into a longer series of bit manipulations.
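
To make that concrete, here is one classic bit-manipulation sequence for a 64-bit popcount, written as ordinary Rust rather than CLIF (a sketch only; the actual legalized sequence Cranelift uses may differ in its details):

/// Count set bits using only shifts, masks, adds, and one multiply; this is
/// the kind of instruction sequence a popcount legalization can expand into.
fn popcount64(mut x: u64) -> u32 {
    x -= (x >> 1) & 0x5555_5555_5555_5555;
    x = (x & 0x3333_3333_3333_3333) + ((x >> 2) & 0x3333_3333_3333_3333);
    x = (x + (x >> 4)) & 0x0f0f_0f0f_0f0f_0f0f;
    (x.wrapping_mul(0x0101_0101_0101_0101) >> 56) as u32
}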

There are operations that are defined at a higher semantic level, as well, that will necessarily be lowered with expansions: for example, accesses to Wasm memories are lowered into operations that fetch the linear memory base and its size, bounds-check the Wasm address against the limit, compute the real address for the Wasm address, and perform the access.

To compile a function, then, we iterate over the CLIF and find instructions with no direct machine encodings; for each, we simply expand into the legalized sequence, and then recursively consider the instructions in that sequence. We loop until all instructions have machine encodings. At that point, we can emit the bytes corresponding to each instruction’s encoding.
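
Sketched as self-contained Rust, with a toy instruction set and made-up helpers standing in for real CLIF instructions and encodings, the old flow looked roughly like this:

// A minimal, self-contained model of expansion-based legalization; the ops
// and the expansion table are invented for illustration and are far simpler
// than real CLIF instructions and encodings.
#[derive(Clone, Copy)]
enum Op { Iadd, Ishl, Band, Popcnt }

fn has_encoding(op: Op) -> bool {
    // Pretend the target encodes everything except popcount directly.
    !matches!(op, Op::Popcnt)
}

fn expand(op: Op) -> Vec<Op> {
    match op {
        // A popcount with no native encoding expands into simpler bit ops
        // (standing in for the real multi-instruction sequence).
        Op::Popcnt => vec![Op::Band, Op::Ishl, Op::Iadd],
        other => vec![other],
    }
}

/// Keep rewriting until every instruction has a machine encoding; at that
/// point the bytes for each encoding can be emitted directly.
fn legalize(mut code: Vec<Op>) -> Vec<Op> {
    while !code.iter().all(|&op| has_encoding(op)) {
        code = code.into_iter().flat_map(expand).collect();
    }
    code
}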

Growing Pains, and a New Backend Framework?

There are a number of advantages to the legacy Cranelift backend design, which performs expansion-based legalization with a single IR throughout. As one might expect, though, there are also a number of drawbacks. Let’s discuss a few of each.

Single IR and Legalization: Pros

  1. By operating on a single IR all the way to machine-code emission, the same optimizations can be applied at multiple stages.
  2. If most of the Cranelift instructions become one machine instruction, and few legalizations are necessary, then this scheme can be very fast: it becomes simply a single traversal to fill in “encodings”, which were represented by small indices into a table.

Single IR and Legalization: Cons

  1. Expansion-based legalization may not always result in optimal code. There are sometimes single machine instructions that implement the behavior of multiple CLIF instructions, i.e. a many-to-one mapping. In order to generate efficient code, we want to be able to make use of these instructions; expansion-based legalization cannot.
  2. Expansion-based legalization rules must obey a partial order among instructions: if A expands into a sequence including B, then B cannot later expand into A. This could become a difficulty for the machine-backend author to keep straight.
  3. There are efficiency concerns with expansion-based legalization. At an algorithmic level, we prefer to avoid fixpoint loops (in this case, “continue expanding until no more expansions exist”) whenever possible. The runtime is bounded, but the bound is somewhat difficult to reason about, because it depends on the maximum depth of chained expansions. The data structures that enable in-place editing are also much slower than we would like. Typically, compilers store IR instructions in linked lists to allow for in-place editing. While this is asymptotically as fast as an array-based solution (we never need to perform random access), it is much less cache-friendly or ILP-friendly on modern CPUs. We’d prefer instead to store arrays of instructions and perform single passes over them whenever possible.
  4. Our particular implementation of the legalization scheme grew to be somewhat unwieldy over time (see, e.g., the GitHub issue #1141: Kill Recipes With Fire). Adding a new instruction required learning about “recipes”, “encodings”, and “legalizations” as well as mere instructions and opcodes; a more conventional code-lowering approach would avoid much of this complexity.
  5. A single-level IR has a fundamental tension: for analyses and optimizations to work well, an IR should have only one way to represent any particular operation. On the other hand, a machine-level representation should represent all of the relevant details of the target ISA. A single IR simply cannot serve both ends of this spectrum properly, and difficulties arose as CLIF strayed from either end.

For all of these reasons, as part of our revamp of Cranelift and a prerequisite to our new AArch64 backend, we built a new framework for machine backends and instruction selection. The framework allows machine backends to define their own instructions, separately from CLIF; rather than legalizing with expansions and running until a fixpoint, we define a single lowering pass; and everything is built around more efficient data-structures, carefully optimizing passes over data and avoiding linked lists entirely. We now describe this new design!

A New IR: VCode

The main idea of the new Cranelift backend is to add a machine-specific IR, with several properties that are chosen specifically to represent machine-code well. We call this VCode, which comes from “virtual-register code”, and the VCode contains MachInsts, or machine instructions. The key design choices we made are:

  • VCode is a linear sequence of instructions. There is control-flow information that allows traversal over basic blocks, but the data structures are not designed to easily allow inserting or removing instructions or reordering code. Instead, we lower into VCode with a single pass, generating instructions in their final (or near-final) order.
  • VCode is not SSA-based; instead, its instructions operate on registers. While lowering, we allocate virtual registers. After the VCode is generated, the register allocator computes appropriate register assignments and edits the instructions in-place, replacing virtual registers with real registers. (Both are packed into a 32-bit representation space, using the high bit to distinguish virtual from real.) Eschewing SSA at this level allows us to avoid the overhead of maintaining its invariants, and maps more closely to the real machine.

VCode instructions are simply stored in an array, and the basic blocks are recorded separately as ranges of array (instruction) indices. We designed this data structure for fast iteration, but not for editing.
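
A minimal sketch of that container and register representation (field names and details here are invented for illustration and simplified relative to the real Cranelift types):

// Illustrative sketch only; the real VCode container carries more state.
struct VCode<I> {
    /// All instructions for the function, in (near-)final emission order.
    insts: Vec<I>,
    /// Each basic block is just a half-open range of indices into `insts`.
    block_ranges: Vec<(usize, usize)>,
    /// Successors per block, so control flow can still be traversed.
    block_succs: Vec<Vec<usize>>,
}

/// Registers packed into 32 bits; the high bit distinguishes virtual
/// registers (allocated during lowering) from real machine registers.
#[derive(Clone, Copy)]
struct Reg(u32);

const VIRTUAL_BIT: u32 = 1 << 31;

impl Reg {
    fn virt(index: u32) -> Reg { Reg(index | VIRTUAL_BIT) }
    fn real(hw_enc: u32) -> Reg { Reg(hw_enc) }
    fn is_virtual(self) -> bool { self.0 & VIRTUAL_BIT != 0 }
}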

Each instruction is mostly opaque from the point of view of the VCode container, with a few exceptions: every instruction exposes its (i) register references, and (ii) basic-block targets, if a branch. Register references are categorized into the usual “uses” and “defs” (reads and writes).

Note as well that the instructions can refer to either virtual registers (here denoted v0..vN) or real machine registers (here denoted r0..rN). This design choice allows the machine backend to make use of specific registers where required by particular instructions, or by the ABI (parameter-passing conventions).

Aside from registers and branch targets, an instruction contained in the VCode may contain whatever other information is necessary to emit machine code. Each machine backend defines its own type to store this information. For example, on AArch64, here are several of the instruction formats, simplified:

enum Inst {
    /// An ALU operation with two register sources and a register destination.
    AluRRR {
        alu_op: ALUOp,
        rd: Writable<Reg>,
        rn: Reg,
        rm: Reg,
    },

    /// An ALU operation with a register source and an immediate-12 source, and a register
    /// destination.
    AluRRImm12 {
        alu_op: ALUOp,
        rd: Writable<Reg>,
        rn: Reg,
        imm12: Imm12,
    },

    /// A two-way conditional branch. Contains two targets; at emission time, a conditional
    /// branch instruction followed by an unconditional branch instruction is emitted, but
    /// the emission buffer will usually collapse this to just one branch.
    CondBr {
        taken: BranchTarget,
        not_taken: BranchTarget,
        kind: CondBrKind,
    },

    // ...
}

These enum arms could be considered similar to “encodings” in the old backend, except that they are defined in a much more straightforward way, using type-safe and easy-to-use Rust data structures.

Lowering from CLIF to VCode

We’ve now come to the most interesting design question: how do we lower from CLIF instructions, which are machine-independent, into VCode with the appropriate type of CPU instructions? In other words, what have we replaced the expansion-based legalization and encoding scheme with?

We perform a single pass over the CLIF instructions, and at each instruction, we invoke a function provided by the machine backend to lower the CLIF instruction into VCode instruction(s). The backend is given a “lowering context” by which it can examine the instruction and the values that flow into it, performing “tree matching” as desired (see below). This naturally allows 1-to-1, 1-to-many, or many-to-1 translations. We incorporate a reference-counting scheme into this pass to ensure that instructions are only generated if their values are actually used; this is necessary to eliminate dead code when many-to-1 matches occur.
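
The interface between this shared lowering pass and a machine backend boils down to something like the following trait (a simplified sketch with stand-in types; the real Cranelift trait and lowering context carry considerably more detail):

// Stand-in types; real CLIF instructions and the real lowering context are
// much richer than this.
struct IrInst;
struct LoweringCtx<M> {
    emitted: Vec<M>,
}

impl<M> LoweringCtx<M> {
    fn emit(&mut self, inst: M) {
        self.emitted.push(inst);
    }
}

/// One implementation of this per machine backend. `lower` is called once
/// per CLIF instruction, at the root of its operand tree, and may emit as
/// many machine instructions as it needs (or none, if the value is unused).
trait LowerBackend {
    type MInst;
    fn lower(&self, ctx: &mut LoweringCtx<Self::MInst>, ir_inst: &IrInst);
}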

Tree Matching

Recall that the old design allowed for 1-to-1 and 1-to-many mappings from CLIF instructions to machine instructions, but not many-to-1. This is particularly problematic when it comes to pattern-matching for things like addressing modes, where we want to recognize particular combinations of operations and choose a specific instruction that covers all of those operations at once.

Let’s start by defining a “tree” that is rooted at a particular CLIF instruction. For each argument to the instruction, we can look “up” the program to find its producer (def). Because CLIF is in SSA form, either the instruction argument is an ordinary value, which must have exactly one definition, or it is a block parameter (φ-node in conventional SSA formulations) that represents multiple possible definitions. We will say that if we reach a block parameter (φ-node), we simply end at a tree leaf – it is perfectly alright to pattern-match on a tree that is a subset of the true dataflow (we might get suboptimal code, but it will still be correct). For example, given the CLIF code:

block0(v0: i64, v1: i64):
  brnz v0, block1(v0)
  jump block1(v1)

block1(v2: i64):
  v3 = iconst.i64 64
  v4 = iadd.i64 v2, v3
  v5 = iadd.i64 v4, v0
  v6 = load.i64 v5
  return v6

Let’s consider the load instruction: v6 = load.i64 v5. A simple code generator could map this 1-to-1 to the CPU’s ordinary load instruction, using the register holding v5 as an address. This would certainly be correct. However, we might be able to do better: for example, on AArch64, the available addressing modes include a two-register sum ldr x0, [x1, x2] or a register with a constant offset ldr x0, [x1, #64].

The “operand tree” might be drawn like this:

Figure: operand tree for load instruction

We stop at v2 and v0 because they are block parameters; we don’t know with certainty which instruction will produce these values. We can replace v3 with the constant 64. Given this view, the lowering process for the load instruction can fairly easily choose an addressing mode. (On AArch64, the code to make this choice is here; in this case it would choose the register + constant immediate form, generating a separate add instruction for v0 + v2.)

Note that we do not actually explicitly construct an operand tree during lowering. Instead, the machine backend can query each instruction input, and the lowering framework will provide a struct giving the producing instruction if known, the constant value if known, and the register that will hold the value if needed.

The backend may traverse up the tree (via the “producing instruction”) as many times as needed. If it cannot combine the operation of an instruction further up the tree into the root instruction, it can simply use the value in the register at that point instead; it is always safe (though possibly suboptimal) to generate machine instructions for only the root instruction.
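
Here is a toy model of that decision for the load example above: peek at the producer of the address value and, if it is an add of a small constant, fold it into a reg+offset addressing mode (all types and the immediate range are simplified for illustration):

// Toy model of operand-tree matching for address computation.
#[derive(Clone, Copy)]
enum ValueDef {
    /// The address was produced by `iadd base, #imm`; `result` is the
    /// register that would hold the add's result if we don't fold it.
    AddImm { base: u32, imm: i64, result: u32 },
    /// A block parameter or anything else we choose not to look through.
    Opaque { reg: u32 },
}

enum AddrMode {
    RegOffset { base: u32, offset: i64 }, // e.g. ldr x0, [x1, #64]
    Reg { base: u32 },                    // plain register address
}

fn choose_addr_mode(addr: ValueDef) -> AddrMode {
    match addr {
        // Fold the add into the load when the immediate fits the encoding
        // (shown loosely here as an unsigned 12-bit range).
        ValueDef::AddImm { base, imm, .. } if (0..4096).contains(&imm) => {
            AddrMode::RegOffset { base, offset: imm }
        }
        // Otherwise just use the register holding the computed address;
        // always correct, possibly suboptimal.
        ValueDef::AddImm { result, .. } => AddrMode::Reg { base: result },
        ValueDef::Opaque { reg } => AddrMode::Reg { base: reg },
    }
}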

Lowering an Instruction

Given this matching strategy, then, how do we actually do the translation? Basically, the backend provides a function that is called once per CLIF instruction, at the “root” of the operand tree, and can produce as many machine instructions as it likes. This function is essentially just a large match statement over the opcode of the root CLIF instruction, with the match-arms looking deeper as needed.

Here is a simplified version of the match-arm for an integer add operation lowered to AArch64 (the full version is here):

match op {
    // ...
    Opcode::Iadd => {
        let rd = get_output_reg(ctx, outputs[0]);
        let rn = put_input_in_reg(ctx, inputs[0]);
        let rm = put_input_in_rse_imm12(ctx, inputs[1]);
        let alu_op = choose_32_64(ty, ALUOp::Add32, ALUOp::Add64);
        ctx.emit(alu_inst_imm12(alu_op, rd, rn, rm));
    }
    // ...
}

There is some magic that happens in several helper functions here. put_input_in_reg() invokes the proper methods on the ctx to look up the register that holds an input value. put_input_in_rse_imm12() is more interesting: it returns a ResultRSEImm12, which is a “register, shifted register, extended register, or 12-bit immediate”. This set of choices captures all of the options we have for the second argument of an add instruction on AArch64.

The helper looks at the node in the operand tree and attempts to match either a shift or zero/sign-extend operator, which can be incorporated directly into the add. It also checks whether the operand is a constant and, if so, whether it could fit into a 12-bit immediate field. If not, it falls back to simply using the register input. alu_inst_imm12() then breaks down this enum and chooses the appropriate Inst arm (AluRRR, AluRRRShift, AluRRRExtend, or AluRRImm12 respectively).
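
In outline, that result type and the way alu_inst_imm12() consumes it look something like this (a simplified sketch built on the Inst enum shown earlier; the shift and extend cases are collapsed into comments, and the real definitions live in the AArch64 backend):

// Simplified sketch of the helper's result type; not the real definitions.
enum ResultRSEImm12 {
    Reg(Reg),        // plain register operand
    // RegShift(..)  // register with a shift operator folded in
    // RegExtend(..) // register with a zero/sign-extend folded in
    Imm12(Imm12),    // small constant that fits the 12-bit immediate field
}

fn alu_inst_imm12(alu_op: ALUOp, rd: Writable<Reg>, rn: Reg, rm: ResultRSEImm12) -> Inst {
    match rm {
        ResultRSEImm12::Reg(rm) => Inst::AluRRR { alu_op, rd, rn, rm },
        ResultRSEImm12::Imm12(imm12) => Inst::AluRRImm12 { alu_op, rd, rn, imm12 },
    }
}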

And that’s it! No need for legalization and repeated code editing to match several operations and produce a machine instruction. We have found this way of writing lowering logic to be quite straightforward and easy to understand.

Backward Pass with Use-Counts

Now that we can lower a single instruction, how do we lower a function body with many instructions? This is not quite as straightforward as looping over the instructions and invoking the match-over-opcode function described above (though that would actually work). In particular, we want to handle the many-to-1 case more efficiently. Consider what happens when the add-instruction logic above is able to incorporate, say, a left-shift operator into the add instruction.

The add machine instruction would then use the shift’s input register, and completely ignore the shift’s output. If the shift operator has no other uses, we should avoid doing the computation entirely; otherwise, there was no point in merging the operation into the add.

We implement a sort of reference counting to solve this problem. In particular, we track whether any given SSA value is actually used, and we only generate code for a CLIF instruction if any of its results are used (or if it has a side-effect that must occur). This is a form of dead-code elimination but integrated into the single lowering pass.

We must see uses before defs for this to work. Thus, we iterate over the function body “backward”. Specifically, we iterate in postorder; this way, all instructions are seen before instructions that dominate them, so given SSA form, we see uses before defs.
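
A self-contained toy version of this pass, with an invented SSA instruction type far simpler than CLIF, might look like this:

use std::collections::HashSet;

// Invented, minimal SSA instruction representation for illustration only.
struct SsaInst {
    results: Vec<u32>,     // SSA values this instruction defines
    args: Vec<u32>,        // SSA values it uses
    has_side_effect: bool, // e.g. stores, calls, traps
}

/// Walk a block's instructions in reverse so every use is seen before its
/// def; lower an instruction only if one of its results is used or it has a
/// side effect. Skipping the rest is the integrated dead-code elimination.
fn lower_block(insts: &[SsaInst], mut lower_one: impl FnMut(&SsaInst)) {
    let mut used: HashSet<u32> = HashSet::new();
    for inst in insts.iter().rev() {
        let live = inst.has_side_effect
            || inst.results.iter().any(|v| used.contains(v));
        if live {
            // For simplicity every input is marked as used here; the real
            // pass only marks inputs that still need a register after
            // pattern matching has folded producers into this instruction.
            for &arg in &inst.args {
                used.insert(arg);
            }
            lower_one(inst);
        }
    }
}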

Examples

Let’s see how this works in real life! Consider the following CLIF code:

function %f25(i32, i32) -> i32 {
block0(v0: i32, v1: i32):
  v2 = iconst.i32 21
  v3 = ishl.i32 v0, v2
  v4 = isub.i32 v1, v3
  return v4
}

We expect that the left-shift (ishl) operation should be merged into the subtract operation on AArch64, using the reg-reg-shift form of ALU instruction, and indeed this happens (here I am showing the debug-dump format one can see with RUST_LOG=debug when running clif-util compile -d --target aarch64):

VCode {
  Entry block: 0
Block 0:
  (original IR block: block0)
  (instruction range: 0 .. 6)
  Inst 0:   mov %v0J, x0
  Inst 1:   mov %v1J, x1
  Inst 2:   sub %v4Jw, %v1Jw, %v0Jw, LSL 21
  Inst 3:   mov %v5J, %v4J
  Inst 4:   mov x0, %v5J
  Inst 5:   ret
}

This then passes through the register allocator, has a prologue and epilogue attached (we cannot generate these until we know which registers are clobbered), has redundant moves elided, and becomes:

stp fp, lr, [sp, #-16]!
mov fp, sp
sub w0, w1, w0, LSL 21
mov sp, fp
ldp fp, lr, [sp], #16
ret

which is a perfectly valid function, correct and callable from C, on AArch64! (We could do better if we knew that this were a leaf function and avoided the stack-frame setup and teardown! Alas, many optimization opportunities remain.)

We’ve only scratched the surface of Cranelift’s design in this blog-post. Nevertheless, I hope this post has given a sense of the exciting work being done in compilers today!

This post is an abridged version; see the full version for more technical details.

Thanks to Julian Seward and Benjamin Bouvier for reviewing this post and suggesting several additions and corrections.

The post A New Backend for Cranelift, Part 1: Instruction Selection appeared first on Mozilla Hacks - the Web developer blog.

Open Policy & AdvocacyThe EU’s Current Approach to QWACs (Qualified Website Authentication Certificates) will Undermine Security on the Open Web

Since its founding in 1998, Mozilla has championed human-rights-compliant innovation as well as choice, control, and privacy for people on the Internet. We have worked hard to actualise this belief for the billions of users on the Web by actively leading and participating in the creation of Web standards that drive the Internet. We recently submitted our thoughts to the European Commission on its survey and public consultation regarding the eIDAS regulation, advocating for an interpretation of eIDAS that is better for user security and retains innovation and interoperability of the global Internet. 

Given our background in the creation of the Transport Layer Security (TLS) standard for website security, we believe that mandating an interpretation of eIDAS that requires Qualified Website Authentication Certificates (QWACs) to be bound with TLS certificates is deeply concerning. Along with weakening user security, it will cause serious harm to the single European digital market and its place within the global internet. 

Some high-level reasons for this position, as elucidated in our multiple recent submissions to the European Commission survey, are:

  1. It violates the eIDAS requirements: Cryptographically binding a QWAC to a connection or TLS certificate would violate several provisions of the eIDAS regulation, including Recital 67 (website authentication), Recital 27 (technological neutrality), and Recital 72 (interoperability), and would go against the legislative intent of the Council.
  2. It will undermine technical neutrality and interoperability:  Mandating TLS binding with QWACs will hinder technological neutrality and interoperability, as it will go against established best practices which have successfully helped keep the Web secure for the past two decades. Apart from being central to the goals of the eIDAS regulation itself, technological neutrality and interoperability are the pillars upon which innovation and competition take place on the web. Limiting them will severely hinder the ability of the EU digital single market to remain competitive within the global economy in a safe and secure manner.
  3. It will undermine privacy for end users: Validating QWACs, as currently envisaged by ETSI, poses serious privacy risks to end users. In particular, the proposal uses validation procedures or protocols that would reveal a user’s browsing activity to a third-party validation service. This third party service would be in a position to track and profile users based on this information. Even if this were to be limited by policy, this information is largely indistinguishable from a privacy-problematic tracking technique known as “link decoration”.
  4. It will create dangerous security risks for the Web: It has been repeatedly suggested that Trust Service Providers (TSPs) who issue QWACs under the eIDAS regulation automatically be included in the root certificate authority (CA) stores of all browsers. Such a move will amount to forced website certificate whitelisting by government dictate and will irremediably harm users’ safety and security. It goes against established best practices of website authentication that have been created by consensus from the varied experiences of the Internet’s explosive growth. The technical and policy requirements for a TSP to be included in the root CA store of Mozilla Firefox, for example, compare much more favourably with the framework created by eIDAS for TSPs. They are more transparent, have more stringent audit requirements, and provide for improved public oversight compared to what eIDAS requires of TSPs.

As stated in our Manifesto and our white paper on bringing openness to digital identity, we believe individuals’ security and privacy on the Internet are fundamental and must not be treated as optional. The eIDAS regulation (even if inadvertently) using TLS certificates, enabling tracking, and requiring a de-facto whitelisting of TLS certificate issuers on the direction of government agencies is fundamentally incompatible with this vision of a secure and open Internet. We look forward to working with the Commission to achieve the objectives of eIDAS without harming the Open Web.

The post The EU’s Current Approach to QWACs (Qualified Website Authentication Certificates) will Undermine Security on the Open Web appeared first on Open Policy & Advocacy.

Mozilla Add-ons BlogExtensions in Firefox 82

Before we get to the Firefox 82 updates, I want to let you know that Philipp has passed the baton for these blog posts over to me. I plan to stick to the same format you know and hopefully love, but leave a comment if there’s anything you’d like to see change in future installments.

Language Packs

Starting with Firefox 82, language packs will be updated in tandem with Firefox updates. Users with an active language pack will no longer have to deal with the hassle of defaulting back to English while the language pack update is pending delivery.

Misc updates in Firefox 82

The cookie permission is no longer required in order to obtain the cookieStoreId for a tab, making it possible to identify container tabs without additional permissions.

The error message logged when a webRequest event listener is passed invalid match patterns in the urls value is now much easier to understand.

Firefox site isolation by default starting in Nightly 83

As mentioned earlier, we’re working on a big change to Firefox that isolates sites from one another. In the next few weeks, we’ll be rolling out an experiment to enable isolation by default for most Nightly users, starting in Firefox 83, with plans for a similar experiment on Beta by the end of the year.

For extensions that deal with screenshots, we’ve extended the captureTab and captureVisibleTab methods to enable capturing an arbitrary area of the page, outside the currently visible viewport. This should cover functionality previously enabled by the (long deprecated) drawWindow method, and you can find more details about new rect and scale options on the ImageDetails MDN page.

While we haven’t seen many reports of extension incompatibilities till now, Fission is a big architectural change to Firefox, and the web platform has many corner cases. You can help us find anything we missed by testing your extensions with Fission enabled, and reporting any issues on Bugzilla.

Thanks

Thank you to Michael Goossens for his multiple contributions to this release.

The post Extensions in Firefox 82 appeared first on Mozilla Add-ons Blog.

Blog of DataThis Week in Glean: FOG Progress report

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean.)

All “This Week in Glean” blog posts are listed in the TWiG index (and on the Mozilla Data blog).


About a year ago chutten started the “This Week in Glean” series with an initial blog post about Glean on Desktop. Back then we were just getting started to bring Glean to Firefox Desktop. No code had been written for Firefox Desktop, no proposals had been written to discuss how we even do it.

Now, 12 months later, after four completed milestones, a dozen or so proposals, and quite a bit of code, the Firefox on Glean (FOG) project is finally getting into a stage where we can actually use and test it. It’s not ready for prime time yet, but FOG is enabled in Firefox Nightly already.

Over the past 4 weeks I’ve been on and off working on building out our support for a C++ and a JavaScript API, so Firefox engineers will soon be able to instrument their code using Glean from either language.

My work is done in bug 1646165. It’s still under review and needs some fine-tuning before that, but I hope to land it by next week. We will then gather some data from Nightly and validate that it works as expected (bug 1651110), before we let it ride the trains to land in stable (bug 1651111).

Our work won’t end there. With my initial API work landing, we can start supporting all metric types. We still need to figure out some details of how FOG will handle IPC, and then there’s the whole process of convincing other people to use the new system.

2020 will be the year of Glean on the Desktop.

Mozilla Add-ons BlogNew add-on badges

A few weeks ago, we announced the pilot of a new Promoted Add-ons program. This new program aims to expand the number of add-ons we can review and verify as compliant with our add-on policies in exchange for a fee from participating developers.

We have recently finished selecting the participants for the pilot, which will run until the end of November 2020. When these extensions successfully complete the review process, they will receive a new badge on their listing page on addons.mozilla.org (AMO) and in the Firefox Add-ons Manager (about:addons).

Verified badge as it appears on AMO

Verified badge as it appears in the Firefox Add-ons Manager

We also introduced the “By Firefox” badge to indicate add-ons that are built by Mozilla. These add-ons also undergo manual review, and we are currently in the process of rolling them out.

By Firefox badge as it appears on AMO

By Firefox badge as it appears in the Firefox Add-ons Manager

Recommended extensions will continue to use the existing Recommended badge in the same locations.

We hope these badges make it easy to identify which extensions are regularly reviewed by Mozilla’s staff. As a reminder, all extensions that are not regularly reviewed by Mozilla display the following caution label on their AMO listing page:

If you’re interested in installing a non-badged extension, we encourage you to first read these security assessment tips.

The post New add-on badges appeared first on Mozilla Add-ons Blog.

Open Policy & AdvocacyOpen Letter to South Korea’s ICT Minister, Mr. Ki-Young Choe: Ensure the TBA amendments don’t harm the open internet in South Korea

This week, as a global champion for net neutrality, Mozilla wrote to South Korea’s ICT Minister, Mr. Ki-Young Choe, to urge him to reconsider the Telecommunications Business Act (TBA) amendments. We believe they violate the principles of net neutrality and would make the South Korean internet a less open space.

In our view, forcing websites (content providers) to pay network usage fees will damage the Korean economy, harm local content providers, increase consumer costs, reduce quality of service, and make it harder for global services to reach South Koreans. While we recognise the need for stable network infrastructure, these amendments are the wrong way to achieve that goal and will lead to South Korea to be a less competitive economy on the global stage.

As we’ve detailed in our open letter, we believe the current TBA amendments will harm the South Korean economy and internet in the following ways:

  • Undermining competition: Such a move would unfairly benefit large players, preventing smaller players who can’t pay these fees or shoulder equivalent obligations from competing against them. Limiting its application only to those content providers meeting certain thresholds of daily traffic volume or viewer numbers will not solve the problem as it deprives small players of opportunities to compete against the large ones.  In the South Korean local content ecosystem, where players such as Naver, Kakao and other similar services are being forced to pay exorbitant fees to ISPs to merely reach users, the oppressive effects of the service stabilization costs can become further entry barriers.
  • Limiting access to global services: This move would oblige content providers from all over the world who want to reach South Korean users to either pay local ISPs or be penalised by the Korean government for providing unstable services.  As a result, many will likely choose to not offer services in South Korea rather than comply with this law and incur its costs and risks.
  • Technical infeasibility: Requiring content providers to be responsible for service stability is impractical given the technical design of the internet. Last-mile network conditions are largely the responsibility of ISPs, and last-mile delivery is the core commodity they sell to their customers. Network management practices can have an outsized impact on how a user experiences a service, including latency and download speeds. Under the amended law, a content provider, which has not been paid by anyone for the delivery of its content, could be held liable for the actions an ISP takes as part of routine network management, including underinvestment in capacity, which is both unfair and impractical.
  • Driving up consumer costs: Content providers will likely pass the costs associated with these additional fees and infrastructure along to consumers, potentially creating an impractical scenario in which users and content providers pay more for a worse service. Quality of service would suffer further because providers who refuse to install local caches due to the mandatory fees will be forced to use caches outside the territory of Korea, which are significantly slower (as we saw in 2017).

The COVID-19 pandemic has made clear that there has never been a greater need for an open and accessible internet that is a public resource available to all. This law, on the other hand, makes it harder for the Korean internet infrastructure to remain a model for the world to aspire towards in these difficult times. We urge the Ministry of Science and Information Communication Technologies to revisit the TBA amendment as a whole. We remain committed to engaging with the government and ensuring South Koreans enjoy access to the full diversity of the open internet.

The post Open Letter to South Korea’s ICT Minister, Mr. Ki-Young Choe: Ensure the TBA amendments don’t harm the open internet in South Korea appeared first on Open Policy & Advocacy.

Mozilla Add-ons BlogAdd-ons interns: developing software and careers

For the last several years, Mozilla has participated in the Google Summer of Code and Outreachy internship programs. Both programs offer paid three-month internship opportunities to students or other industry newcomers to work on a programming project with an open source organization. This year, we were joined by Lisa Chan and Atique Ahmed Ziad, from Outreachy and Google Summer of Code, respectively.

With mentorship from addons.mozilla.org (AMO) engineers Bob Silverberg and Andrew Williamson, Lisa built a Homepage Curation Tool to help our editorial staff easily make changes to the AMO homepage. Atique was mentored by Firefox engineers Luca Greco and Rob Wu, and senior add-on admin reviewer Andreas Wagner, and he developed a privileged extension for Firefox that monitors the activity of other installed extensions. This prototype is the starting point of a new feature that will help extension developers, add-on developers, and Firefox engineers investigate bugs in extensions or in the browser’s WebExtensions APIs.

We sat down with Lisa and Atique and asked them to share their internship experiences.

Question: What were the goals of your project? 

Lisa: I was able to achieve most of the goals of my project, which were to implement two major features and a minor feature on the AMO Homepage Curation Tool.

Two of the features (a major feature and the minor one) were associated with the admin user’s ability to select an image for the add-on in the primary hero area on the AMO homepage. Prior to implementing the major feature, the admin user could only choose an image from a gallery of pre-selected images. With the update, the admin user now has the option to upload an image as well. The admin tool also now utilizes the thumbnails of the images in the gallery following the minor feature update.

Screenshot of the AMO Homepage Admin tool

The second major feature involved providing the admin user with the ability to update and/or reorder the modules that appear under the hero area on the AMO homepage. These modules are sets of pre-defined add-on extensions and themes, such as Recommended Extensions, Popular Themes, etc, based on query data. The modules were previously built and rendered on the front-end separately, but they can now be created and managed on the server-side. When complete, the front-end will be able to retrieve and display the modules with a single call to a new API endpoint.

Screenshot of the AMO Homepage Curation Tool

Atique: My objective was to develop the minimum viable product of an Extension Activity Monitor. In the beginning, my mentors and I had formulated the user stories as a part of requirement analysis and agreed on the user stories to complete during the GSoC period.

The goals of Extension Activity Monitor were to be able to start and stop monitoring all installed extensions in Firefox, capture the real-time activity logs from monitored extensions and display them in a meaningful way, filter the activity logs, save the activity logs in a file, and load activity logs from a file.

Screenshot of Extension Activity Monitor prototype

Extension Activity Monitor in action

Question: What accomplishment from your internship are you most proud of? 

Lisa: Each pull request approval was a tiny victory for me, as I not only had to learn how to fix each issue, but learn Python and Django as well. I was only familiar with the front-end and had expected the internship project to be front-end, as all of my contributions on my Outreachy application were to the addons-frontend repository. While I had hoped my mentors would let me get a taste of the server-side by working on a couple of minor issues, I did not expect my entire project to be on the server-side. I am proud to have learned and utilized a new language and framework during my internship with the guidance of Bob and Andrew.

Atique: I am proud that I have successfully made a working prototype of Extension Activity Monitor and implemented most of the features that I had written in my proposal. But the accomplishment I am proudest of is that my skillset is now at a much higher level than it was before this internship.

Question: What was the most surprising thing you learned?

Lisa: I was surprised to learn how much I enjoyed working on the server-side. I had focused on learning front-end development because I thought I’d be more inclined to the design elements of creating the user interface and experience. However, I found the server-side to be just as intriguing, as I was drawn to creating the logic to properly manage the data.

Atique: My mentors helped me a lot in writing more readable, maintainable and good quality code, which I think is the best part of my learnings. I never thought about those things deeply in any project that I had worked on before this internship. My mentors said, “Code is for humans first and computer next,” which I am going to abide by forever.

Question: What do you plan to do after your internship?

Lisa: I plan to continue learning both front-end and back-end development while obtaining certifications. In addition, I’ll be seeking my next opportunity in the tech industry as well.

Atique: Participating in Google Summer of Code was really helpful to understand my strengths and shortcomings. I received very useful and constructive feedback from my mentors. I will be working on my shortcomings and make sure that I become a good software engineer in the near future.

Question: Is there anything else you would like to share? 

Lisa: I’m glad my internship project was on the server-side, as it not only expanded my skill set, but also opened up more career opportunities for me. I’m also grateful for the constant encouragement, knowledge, and support that Bob and Andrew provided me throughout this whole experience. I had an incredible summer with the Firefox Add-ons team and Mozilla community, and I am thankful to Outreachy for providing me with this amazing opportunity.

Atique: I think opportunities like Google Summer of Code and Outreachy provide a great learning experience for students. I am thankful to the organizers of these programs, and to Mozilla for participating in programs that give students like me the opportunity to work with awesome mentors.

Congratulations on your achievements, Lisa and Atique! We look forward to seeing your future accomplishments. 

The post Add-ons interns: developing software and careers appeared first on Mozilla Add-ons Blog.

hacks.mozilla.orgTo Eleventy and Beyond

In 2018, we launched Firefox Extension Workshop, a site for Firefox-specific extension development documentation. The site was originally built using the Ruby-based static site generator Jekyll. We had initially selected Jekyll for this project because we wanted to make it easy for editors to update the site using Markdown, a lightweight markup language.

Once the site had been created and more documentation was added, the build times started to grow. Every time we made a change to the site and wanted to test it locally, it would take ten minutes or longer for the site to build. The builds took so long that we needed to increase the default time limit for CircleCI, our continuous integration and continuous delivery service, because builds were failing when they ran past ten minutes with no output.

Investigating these slow builds using profiling showed that most of the time was being spent in the Liquid template rendering calls.

In addition to problems with build times, there were also issues with cache-busting in local development. This meant that things like a change of images wouldn’t show up in the site without a full rebuild after clearing the local cache.

As we were discussing how to best move forward, Jekyll 4 was released with features expected to improve build performance. However, an early test of porting to this version actually performed worse than the previous version. We then realized that we needed to find an alternative and port the site.

Update: 05-10-2020: A Jekyll developer reached out having investigated the slow builds. The findings were that Jekyll on its own isn’t slow; the high build times in our case were primarily caused by third-party plugins.

Looking at the Alternatives

The basic requirements for moving away from Jekyll were as follows:

  • Build performance needed to be better than the 10 minutes (600s+) it took to build locally.
  • Local changes should be visible as quickly as possible.
  • Ideally the solution would be JavaScript based, as the Add-ons Engineering team has a lot of collective JavaScript experience and a JavaScript based infrastructure would make it easier to extend.
  • It needed to be flexible enough to meet the demands of adding more documentation in the future.

Hugo (written in Go), Gatsby (JS), and Eleventy (11ty) (JS) were all considered as options.

In the end, Eleventy was chosen for one main reason: it provided enough similarities to Jekyll that we could migrate the site without a major rewrite. In contrast, both Hugo and Gatsby would have required significant refactoring. Porting the site also meant fewer changes up front, which allowed us to focus on maintaining parity with the site already in production.

As Eleventy provides an option to use liquid templates, via LiquidJS, it meant that templates only needed relatively minimal changes to work.

The Current Architecture

There are four main building blocks in the Jekyll site: Liquid templating, Markdown for documentation, Sass for CSS, and jQuery for behaviour.

To migrate to Eleventy we planned to minimize changes to how the site works, and focus on porting all of the existing documentation without changing the CSS or JavaScript.

Getting Started with the Port

The blog post Turning Jekyll up to Eleventy by Paul Lloyd was a great help in describing what would need to be done to get the site working under Eleventy.

The first step was to create an Eleventy configuration file based on the old Jekyll one.

Data files were moved from YAML to JSON. This was done via a YAML to JSON conversion.

Next, the templates were updated to fix the differences in variables and the includes. The jekyll-assets plugin syntax was removed so that assets were directly served from the assets directory.

Up and Running with Statics

As a minimal fix to replace the Jekyll plugins for CSS and JS, the CSS (Sass) and JS were built with Command Line Interface (CLI) scripts added to the package.json, using Uglify and Sass.

For Sass, this required loading paths via the CLI and then just passing the main stylesheet:

sass
  --style=compressed 
  --load-path=node_modules/foundation-sites/scss/  
  --load-path=node_modules/slick-carousel/slick/
  _assets/css/styles.scss _assets/css/styles.css

For JS, every script was passed to uglify in order:

uglifyjs 
  node_modules/jquery/dist/jquery.js 
  node_modules/dompurify/dist/purify.js  
  node_modules/velocity-animate/velocity.js
  node_modules/velocity-ui-pack/velocity.ui.js
  node_modules/slick-carousel/slick/slick.js
  _assets/js/tinypubsub.js 
  _assets/js/breakpoints.js
  _assets/js/parallax.js 
  _assets/js/parallaxFG.js 
  _assets/js/inview.js 
  _assets/js/youtubeplayer.js 
  _assets/js/main.js 
  -o _assets/js/bundle.js

This was clearly quite clunky, but it got JS and CSS working in development, albeit in a way that required a script to be run manually. However, with the CSS and JS bundles working, it was possible to focus on getting the homepage up and running without worrying about anything more complicated to start with.

With a few further tweaks the homepage built successfully:

Screenshot of the Firefox Extension Workshop homepage

Screenshot of the Extension Workshop Homepage built with Eleventy

Getting the entire site to build

With the homepage looking like it should, the next task was fixing the rest of the syntax. This was a pretty laborious process of updating all the templates and removing or replacing plugins and filters that Eleventy didn’t have.

After some work fixing up the files, finally the entire site could be built without error! 🎉

What was immediately apparent was how fast Eleventy is. The entire site was built in under 3 seconds. That is not a typo; it is 3 seconds not minutes.

A gif of Doc from Back to the Future

Doc from Back to the Future when he first realizes how fast Eleventy is: “Under 3 seconds… Great Scott!”

Improving Static Builds

Building JS and CSS is not part of Eleventy itself. This means it’s up to you to decide how you want to handle statics.

The goals were as follows:

  • Keep it fast.
  • Keep it simple (especially for content authors)
  • Have changes be reflected as quickly as possible.

The first approach moved the CSS and JS to node scripts. These replicated the crude CLI scripts using APIs.

Ultimately we decided to decouple asset building from Eleventy entirely. This meant that Eleventy could worry about building the content, and a separate process would handle the static assets. It also meant that the static asset scripts could write out to the build folder directly.

This made it possible to have the static assets built in parallel with the Eleventy build. The downside was that Eleventy couldn’t tell the browserSync instance (the server Eleventy uses in development) to update as it wasn’t involved with this process. It also meant that watching JS and SASS source files needed to be handled with a separate watching configuration, which in this case was handled via chokidar-cli. Here’s the command used in the package.json script for the CSS build:

chokidar 'src/assets/css/*.scss' -c 'npm run sass:build'

The sass:build script runs bin/build-styles.

Telling BrowserSync to update JS and CSS

With the JS and CSS being built correctly, we needed to tell the browserSync instance when the JS and CSS had changed. This way, when you make a CSS change, the new styles are applied without a full page refresh. Fast updates provide the ideal short feedback loop for iterative changes.

Fortunately, browserSync has a web API. We were able to use this to tell browserSync to update every time the CSS or JS is rebuilt in development.

For the style bundle, the URL called is http://localhost:8081/__browser_sync__?method=reload&args=styles.css

To handle this, the build script fetches this URL whenever new CSS is built.
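
For example, during development the build script effectively does the equivalent of the following (shown here as a one-liner in the same CLI style as the snippets above; in our case it is a small fetch call inside the build script):

curl "http://localhost:8081/__browser_sync__?method=reload&args=styles.css"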

Cleaning Up

Next, we needed to clean up everything and make sure the site looked right. Missing plugin functionality needed to be replaced and docs and templates needed several adjustments.

Here’s a list of some of the tasks required:

  • Build a replacement for the Jekyll SEO plugin in templating and computed data.
  • Clean-up syntax-highlighting and move to “code fences.”
  • Use the 11ty “edit this page on github” recipe. (A great feature for making it easier to accept contributions to document improvements).
  • Clean-up templates to fix some errant markup. This was mainly about whitespace control in Liquid: in some cases there were bad interactions with the Markdown that resulted in spurious extra elements.
  • Recreate the pages API plugin. Under Jekyll this was used by the search, so we needed to recreate this for parity in order to avoid re-implementing the search from scratch.
  • Build tag pages instead of using the search. This was also done for SEO benefits.

Building for Production

With the site looking like it should and a lot of the minor issues tidied up, one of the final steps was to look at how to handle building the static assets for production.

For optimal performance, static assets should not need to be fetched by the browser if that asset (JS, CSS, Image, fonts etc) is in the browser’s cache. To do that you can serve assets with expires headers set into the future (typically one year). This means that if foo.js is cached, when asked to fetch it, the browser won’t re-fetch it unless the cache is removed or the cached asset is older than the expiration date set via the expires header in the original response.

Once you’re caching assets in this way, the URL of the resource needs to be changed to “bust the cache”, otherwise the browser won’t make a request if the cached asset is considered “fresh.”

Cache-busting strategies

Here are a few standard approaches used for cache-busting URLs:

Whole Site Version Strings

The git revision of the entire site can be used as part of the URL. This means that every time a revision is added and the site is published, every asset URL changes. The downside is that this will cause clients to download every asset again, even if it hasn’t actually been updated since the last time the site was published.

Query Params

With this approach, a query string is appended to a URL with either a hash of the contents or some other unique string e.g:

foo.css?v=1

foo.css?v=4ab8sbc7

A downside of this approach is that in some cases caching proxies won’t consider the query string, which could result in stale assets being served. Even so, this approach is pretty common, and caching proxies don’t typically default to ignoring query params.

Content Hashes with Server-side Rewrites

In this scenario, you change your asset references to point to files with a hash as part of the URL. The server is configured to rewrite those resources internally to ignore the hash.

For example if your HTML refers to foo.4ab8sbc7.css the server will serve foo.css. This means you don’t need to update the actual filename of foo.css to foo.4ab8sbc7.css.

This requires server config to work, but it’s a pretty neat approach.

Content Hashes in Built Files

In this approach, once you know the hash of your content, you need to update the references to that file, and then you can output the file with that hash as part of the filename.

The upside of this approach is that once the static site is built this way it can be served anywhere and you don’t need any additional server config as per the previous example.

This was the strategy we decided to use.

Building an Asset Pipeline

Eleventy doesn’t have an asset pipeline, though this is something that is being considered for the future.

In order to deploy the Eleventy port, cache-busting would be needed so we could continue to deploy to S3 with far future expires headers.

With the Jekyll assets plugin, you used Liquid templating to control how assets were built. Ideally, I wanted to avoid content authors needing to know about cache-busting at all.

To make this work, all asset references would need to start with “/assets/” and could not be built with variables. Given the simplicity of the site, this was a reasonable limitation.

With asset references easily found, a solution to implement cache-busting was needed.

First attempt: Using Webpack as an asset pipeline

The first attempt used Webpack. We almost got it working using eleventy-webpack-boilerplate as a guide; however, this started to introduce differences in the webpack configs for dev and production, since it essentially used the built HTML as an entry point. This was because cache-busting and optimizations were not to be part of the dev process, to keep the local development build as fast as possible. Getting this working became more and more complex, requiring special patched forks of loaders because of limitations in the way HTML extraction worked.

Webpack also didn’t work well for this project because of the way it expects to understand relationships in JS modules in order to build a bundle. The JS for this site was written in an older style where the scripts are concatenated together in a specific order without any special consideration for dependencies (other than script order). This alone required a lot of workarounds to work in Webpack.

Second attempt: AssetGraph

On paper, AssetGraph looked like the perfect solution:

AssetGraph is an extensible, node.js-based framework for manipulating and optimizing web pages and web applications. The main core is a dependency graph model of your entire website, where all assets are treated as first class citizens. It can automatically discover assets based on your declarative code, reducing the configuration needs to a minimum.

From the AssetGraph README.md

The main concept is that the relationship between HTML documents and other resources is essentially a graph problem.

AssetGraph-builder uses AssetGraph, and using your HTML as a starting point it works out all the relationships and optimizes all of your assets along the way.

This sounded ideal. However, when I ran it on the built content of the site, Node ran out of memory. With no control over what it was doing and minimal feedback as to where it was stuck, this attempt was shelved.

That said, the overall goals of the AssetGraph project are really good, and this looks like it’s something worth keeping an eye on for the future.

Third attempt: Building a pipeline script

In the end, the solution that ended up working best was to build a script that would process the assets after the site build was completed by Eleventy.

The way this works is as follows:

  • Process all the binary assets (images, fonts etc), record a content hash for them.
  • Process the SVG and record a content hash for them.
  • Process the CSS, and rewrite references to resources within them, minify the CSS, and record a content hash for them.
  • Process the JS, and rewrite references to resources within them, minify the JS, and record a content hash for them.
  • Process the HTML and rewrite the references to assets within them.
  • Write anything out that hasn’t already been written to a new directory.

Note: there might be some edge cases not covered here. For example, if a JS file references another JS file,  this could break depending on the order of processing. The script could be updated to handle this, but that would mean files would need to be re-processed as changed and anything that referenced them would also need to be updated and so on. Since this isn’t a concern for the current site, this was left out for simplicity. There’s also no circular dependency detection. Again, for this site and most other basic sites this won’t be a concern.

There’s a reason optimizations and cache-busting aren’t part of the development build. This separation helps to ensure the site continues to build really fast when making changes locally.

This is a trade-off, but as long as you check the production build before you release it’s a reasonable compromise. In our case, we have a development site built from master, a staging site built from tags with a -stage suffix, and the production site. All of these deployments run the production deployment processes so there’s plenty of opportunity for catching issues with full builds.

Conclusion

Porting to Eleventy has been a positive change. It certainly took quite a lot of steps to get there, but it was worth the effort.

In the past, long build times with Jekyll made this site really painful for contributors and document authors to work with. We’re already starting to see some additional contributions as a result of lowering the barrier of entry.

With this port complete, it’s now starting to become clear what the next steps should be to minimize the boilerplate for document authors and make the site even easier to use and contribute to.

If you have a site running under Jekyll or are considering using a modern static site generator, then taking a look at Eleventy is highly recommended. It’s fast, flexible, well documented, and a joy to work with. ✨

The post To Eleventy and Beyond appeared first on Mozilla Hacks - the Web developer blog.

Open Policy & AdvocacyMozilla Partners with the African Telecommunications Union to Promote Rural Connectivity

Mozilla and the African Telecommunications Union (ATU) have signed a Memorandum of Understanding (MOU) for a joint project that will promote rural connectivity in the Africa region. “The project, pegged to the usage of spectrum policy, regulations and practices, is designed to ensure affordable access to communication across the continent,” said ATU Secretary-General John OMO. “Figuring out how to make spectrum accessible, particularly in rural areas, is critical to bringing people online throughout the African continent,” said Mitchell Baker, CEO of Mozilla, “I’m committed to Mozilla making alliances to address this challenge.”

While half the world is now connected to the internet, the existing policy, regulatory, financial, and technical models are not fit for purpose to connect the poorer and more sparsely populated rural areas. More needs to be done to achieve the United Nations’ universal access goals by 2030. Clear policy and regulatory interventions that can support innovation, and new business models to speed up progress, are urgently required.

Access to the internet should not be a luxury, but a global public resource that is open and accessible to all. This is particularly true during a global public health crisis, as it underpins the deployment of digital healthcare solutions for detecting COVID-19.

Rural connectivity in the African region presents a unique set of challenges. More than 60% of Africa’s population lives in rural areas, which lack the resources and infrastructure needed to connect them. Potential users are often spread out, making it difficult to support the traditional business case for the investments necessary to establish broadband infrastructure.

There are many factors that contribute to this digital divide, but one of the biggest challenges is making wireless spectrum available to low-cost operators, who are prepared to deploy new business models for rural access.

Spectrum licenses are bets, in the form of 10-15 year commitments to national coverage, made by mobile operators. As the demand for wireless spectrum continues to increase beyond its administrative availability, policy-makers and regulators have increasingly turned to spectrum auctions to assign a limited number of licenses. However, spectrum auctions act as a barrier to competition, creating financial obstacles for innovative, smaller service providers who could bring new technology and business models to rural areas. In addition, the high fees associated with these auctions are a disincentive for larger mobile operators to roll out services in rural areas, resulting in dramatically under-utilised spectrum.

To unlock innovation and investment, we must develop policy and regulatory instruments to address access to spectrum in rural areas. Mozilla has partnered with the ATU to facilitate a dialogue among regulators, policy-makers, and other stakeholders, to explore ways to unlock the potential of the unused spectrum. Mozilla and the ATU will develop recommendations based on these dialogues and good practice. The recommendations will be presented at the 2021 Annual ATU Administrative Council meeting.

The post Mozilla Partners with the African Telecommunications Union to Promote Rural Connectivity appeared first on Open Policy & Advocacy.

The Mozilla BlogThe internet needs our love

It’s noisy out there. We are inundated with sensational headlines every minute, of every day. You could almost make a full-time job of sorting the fun, interesting or useful memes, feeds and reels from those that should be trashed. It’s hard to know what to pay attention to, and where to put your energy. With so much noise, chaos and division, it seems that one of the only things we all have in common is relying on the internet to help us navigate everything that’s happening in the world, and in our lives.

But the internet isn’t working.

I’m not talking about whether you have a wi-fi signal or can get online for work or school — in that sense the internet is doing its job for most of us, connecting billions of people around the globe. What I’m talking about is the magic. This amazing portal into the human experience has become a place filled with misinformation, corruption and greed. And in recent years we’ve seen those with power — Big Tech, governments, and bad actors — become more dominant, more brazen, and more dangerous. That’s a shame, because there’s still a lot to celebrate and do online. Whether it’s enjoying the absurd — long live cat videos — or addressing the downright critical, like beating back a global pandemic, we all need an internet where people, not profits, come first.

So it’s time to sound the alarm.

The internet we know and love is fcked up.

Let’s unfck it together. 

We are asking you to join us, and start a movement to create a better internet.

Let’s take back control from those who violate our privacy just to sell us stuff we don’t need. Let’s work to stop companies like Facebook and YouTube from contributing to the disastrous spread of misinformation and political manipulation. It’s time to take control over what we do and see online, and not let the algorithms feed us whatever they want.

You probably don’t know the name Mozilla. You might know Firefox. But we’ve been here, fighting for a better internet, for almost twenty years. We’re a non-profit-backed organization that exists for the sole purpose of protecting the internet. Our products, like the Firefox browser, are designed with your privacy in mind. We’re here to prove that you can have an ethical tech business that works to make the internet a better place for all of us. We stand for people, not profit.

But we can’t fight this fight alone. Big tech has gotten too big. We need you. We need people who understand what it is to be part of something larger than themselves. People who love the internet and appreciate its magic. People who are looking for a company they can support because we are all on the same side.

We know it’s going to take more than provocative language to make this real. Which is why at the heart of this campaign are ways all of us can participate in changing the internet for the better. That’s what this is all about: working together to unfck the internet.

To start we’re giving you five concrete and shareable ways to reclaim what’s good about life online by clearing out the bad:

  1. Hold political ads accountable for misinformation: Download the Firefox extension that shares the political ads you see on Facebook to a public database so they can be tracked and monitored.
  2. Watch The Social Dilemma on Netflix & read our recommended readings from diverse voices: This #1 trending documentary unpacks the issues of the attention economy, and our compendium broadens the discussion by bringing more perspectives to the conversation.
  3. Get the Facebook Container extension: Prevent Facebook from following you around the rest of the web — which they can do even if you don’t have an account.
  4. Flag bad YouTube recommendations: This extension lets you report regrettable recos you’ve been served, so you can help make the engine better for everyone.
  5. Choose independent tech: Learn more about other independent tech companies and their products. Like shopping locally, using products like Firefox is a great way to vote your conscience online.

We’ll be updating the list frequently with new and timely ways you can take action, so check back regularly and bookmark or save the Unfck The Internet landing page.

Now, it’s time to get to work. We need you to speak up and own it. Tell your friends, your coworkers and your families. Tell the world that you’ve made the choice to “internet” with your values, and invite them to do the same.

It’s time to unfck the internet. For our kids, for society, for the climate. For the cats.

The post The internet needs our love appeared first on The Mozilla Blog.

Mozilla Add-ons BlogExpanded extension support in Firefox for Android Nightly

A few weeks ago, we mentioned that we were working on increasing extension support in the Firefox for Android Nightly pre-release channel. Starting September 30, you will be able to install any extension listed on addons.mozilla.org (AMO) in Nightly.

This override was created for extension developers and advanced users who are interested in testing for compatibility, so it’s not easily accessible. Installing untested extensions can lead to unexpected outcomes; please be judicious about the extensions you install. Also, since most developers haven’t been able to test and optimize their extensions for the new Android experience, please be kind if something doesn’t work the way it should. We will remove negative user reviews about extension performance in Nightly.

Currently, Nightly uses the Collections feature on AMO to install extensions. You will need to create a collection on AMO and change an advanced setting in Nightly in order to install general extensions.

Create a collection on AMO

Follow these instructions to create a collection on AMO. You can name the collection whatever you like as long as it does not contain any spaces in the name. When you are creating your collection, you will see a number in the Custom URL field. This number is your user ID. You will need the collection name and user ID to configure Nightly in the following steps.

Screenshot of the Create a Collection page

Once your collection has been created, you can add extensions to it. Note that you will only be able to add extensions that are listed on AMO.

You can edit this collection at any time.

Enable general extension support setting in Nightly

You will need to make a one-time change to Nightly’s advanced settings to enable general extension installation.

Step 1: Tap on the three dot menu and select Settings.

Screenshot of the Firefox for Android menu

Step 2: Tap on About Firefox Nightly.

Screenshot of the Fenix Settings Menu

Step 3. Tap the Firefox Nightly logo five times until the “Debug menu enabled” notification appears.

Screenshot of About Firefox Nightly

Screenshot - Debug menu enabled

Step 4: Navigate back to Settings. You will now see a new entry for “Custom Add-on collection.” Once a custom add-on collection has been set up, this menu item will always be visible.

Screenshot - Settings

Step 5: Configure your custom add-on collection. Use your user ID and the collection name from AMO for the Collection Owner (User ID) and Collection name fields, respectively.

Screenshot of interface for adding a custom add-on collection

Before and after screenshots of the custom add-on collection setting

After you tap “OK,” the application will close and restart.

WebExtensions API support

Most of the WebExtensions APIs supported on the previous Firefox for Android experience are supported in the current application. The notable exceptions are the downloads.download API (implementation in progress) and the browsingData API. You can see the current list of compatible APIs on MDN.

Extensions that use unsupported APIs may be buggy or not work at all on Firefox for Android Nightly.
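For illustration, this is the kind of call an extension might make that depends on the downloads.download API; while the implementation is in progress, such a call may not work on Firefox for Android Nightly. The URL and filename below are placeholders, not from any real extension.

// Attempt to download a file; extensions relying on this API may be buggy
// on Firefox for Android Nightly until support lands.
browser.downloads.download({
  url: "https://example.com/report.pdf",
  filename: "report.pdf",
  saveAs: true
}).then((downloadId) => {
  console.log(`Download started with id ${downloadId}`);
}).catch((error) => {
  console.error(`Download failed: ${error}`);
});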

User Experience

The new Firefox for Android has a much different look and feel than Firefox for desktop, or even the previous Android experience. Until now, we’ve worked with the developers of Recommended Extensions directly to optimize the user experience of their extensions for the release channel. We plan to share these UX recommendations with all developers on Firefox Extension Workshop in the upcoming weeks.

Coming next

We will continue to publish our plans for increasing extension support in Firefox for Android as they solidify. Stay tuned to this blog for future updates!

The post Expanded extension support in Firefox for Android Nightly appeared first on Mozilla Add-ons Blog.

Mozilla Add-ons BlogMore Recommended extensions added to Firefox for Android Nightly

As we mentioned recently, we’re adding Recommended extensions to Firefox for Android Nightly as a broader set of APIs become available to accommodate more add-on functionality. We just updated the collection with some new Recommended extensions, including…

Mobile favorites Video Background Play Fix (keeps videos playing in the background even when you switch tabs) and Google Search Fixer (mimics the Google search experience on Chrome) are now in the fold.

Privacy related extensions FoxyProxy (proxy management tool with advanced URL pattern matching) and Bitwarden (password manager) join popular ad blockers Ghostery and AdGuard.

Dig deeper into web content with Image Search Options (customizable reverse image search tool) and Web Archives (view archived web pages from an array of search engines). And if you end up wasting too much time exploring images and cached pages you can get your productivity back on track with Tomato Clock (timed work intervals) and LeechBlock NG (block time-wasting websites).

The new Recommended extensions will become available for Firefox for Android Nightly on 26 September. If you’re interested in exploring these new add-ons and others on your Android device, install Firefox Nightly and visit the Add-ons menu. Barring major issues while testing on Nightly, we expect these add-ons to be available in the release version of Firefox for Android in November.

The post More Recommended extensions added to Firefox for Android Nightly appeared first on Mozilla Add-ons Blog.

Blog of DataData Publishing @ Mozilla

Introduction

Mozilla’s history is steeped in openness and transparency – it’s simply core to what we do and how we see ourselves in the world. We are always looking for ways to bring our mission to life in ways that help create a healthy internet and support the Mozilla Manifesto. One of our commitments says “We are committed to an internet that elevates critical thinking, reasoned argument, shared knowledge, and verifiable facts”.

To this end, we have spent a good amount of time considering how we can publicly share our Mozilla telemetry data sets – it is one of the simplest and most effective ways we can enable collaboration and share knowledge. But only if it can be done safely and in a privacy-protecting, principled way. We believe we’ve designed a way to do this and we are excited to outline our approach here.

Making data public not only allows us to be transparent about our data practices, but directly demonstrates how our work contributes to our mission. Having a publicly available methodology for vetting and sharing our data demonstrates our values as a company. It will also enable other research opportunities with trusted scientists, analysts, journalists, and policymakers in a way that furthers our efforts to shape an internet that benefits everyone.

Dataset Publishing Process

We want our data publishing review process, as well as our review decisions, to be public and understandable, similar to our Mozilla Data Collection program. To that end, our full dataset publishing policy and details about what considerations we look at before determining what is safe to publish can be found on our wiki here. Below is a summary of the critical pieces of that process.

The goal of our data publishing process is to:

  • Reduce friction for data publishing requests with low privacy risk to users;
  • Have a review system of checks and balances that considers both data aggregations and data level sensitivities to determine privacy risk prior to publishing, and;
  • Create a public record of these reviews, including making data and the queries that generate it publicly available and putting a link to the dataset + metadata on a public-facing Mozilla property.

Having a dataset published requires filling out a publicly available request on Bugzilla. Requesters will answer a series of questions, including information about aggregation levels, data collection categories, and dimensions or metrics that include sensitive data.

A data steward will review the bug request. They will help ensure the questions are correctly answered and determine whether the data can be published or whether it requires review by our Trust & Security or Legal teams.

When a request is approved, our telemetry data engineering team will:

  • Write (or review) the query
  • Schedule it to update on the desired frequency
  • Include it in the public-facing dataset infrastructure, including metadata that links the public data back to the review bug.

Finally, once the dataset is published, we’ll announce it on the Data @ Mozilla blog. It will also be added to https://public-data.telemetry.mozilla.org/

Want to know more?

Questions? Contact us at publicdata@mozilla.com

Blog of DataThis Week in Glean: glean-core to Wasm experiment

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean.)

All “This Week in Glean” blog posts are listed in the TWiG index.

In the past week Alessio, Mike, Hamilton and I got together for the Glean.js workweek. Our purpose was to build a proof-of-concept of a Glean SDK that works on Javascript environments. You can expect a TWiG in the next few weeks about the outcome of that. Today I am going to talk about something that I tried out in preparation for that week: attempting to compile glean-core to Wasm.

A quick primer

glean-core

glean-core is the heart of the Glean SDK, where most of the logic and functionality of Glean lives. It is written in Rust and communicates with the language bindings in C#, Java, Swift or Python through an FFI layer. For a comprehensive overview of the Glean SDK’s architecture, please refer to Jan-Erik’s great blog post and talk on the subject.

wasm

From the WebAssembly website:

“WebAssembly (abbreviated Wasm) is a binary instruction format for a stack-based virtual machine. Wasm is designed as a portable compilation target for programming languages, enabling deployment on the web for client and server applications.”

Or, from Lin Clark’s “A cartoon intro to WebAssembly”:

“WebAssembly is a way of taking code written in programming languages other than JavaScript and running that code in the browser.”

Why did I decide to do this?

On the Glean team we make an effort to move as much of the logic as possible to glean-core, so that we don’t have too much code duplication on the language bindings and guarantee standardized behaviour throughout all platforms.

Since that is the case, it was counterintuitive to me that, when we set out to build a version of Glean for the web, we wouldn’t rely on the same glean-core as all our other language bindings. The hypothesis was: let’s make JavaScript just another language binding, by making our Rust core compile to a target that runs in the browser.

Rust is well known for its efforts to provide a great Rust-to-Wasm experience, and the Rust and WebAssembly working group has built awesome tools that make the boilerplate for such projects much leaner.

First try: compile glean-core “as is” to Wasm

Since this was my first try in doing anything Wasm, I started by following MDN’s guide “Compiling from Rust to WebAssembly”, but instead of using their example “Hello, World!” Rust project, I used glean-core.

From that guide I learned about wasm-pack, a tool that deals with the complexities of compiling a Rust crate to Wasm, and wasm-bindgen, a tool that exposes, among many other things, the #[wasm_bindgen] attribute which, when added to a function, will make that function accessible from Javascript.

The first thing that became obvious was that it would be much harder to try and compile glean-core directly to Wasm. Passing complex types to it has many limitations, and I was not able to add the #[wasm_bindgen] attribute to trait objects or to structs that contain trait objects or lifetime annotations. I needed a simpler API surface to make the connection between Rust and Javascript. Fortunately, I had that in hand: glean-ffi.

Our FFI crate exposes functions that rely on a global Glean singleton and have relatively simple signatures. These functions are the ones accessed by our language bindings through a C FFI. Most of the complex Rust structures are hidden from consumers by this layer.

Perfect! I proceeded to add the #[wasm_bindgen] attribute to one of our entrypoint functions: glean_initialize. This uncovered a limitation I didn’t know about: you can’t add this attribute to functions that are unsafe, which unfortunately this one is.

My assumption that I would be able to just expose the API of glean-ffi to Javascript by compiling it to Wasm, without making any changes to it, was not holding up. I would have to go through some refactoring to make that work. But until now, I hadn’t gotten to the actual compilation step; the error I was getting was a syntax error. I wanted to go through compilation and see if that completed before diving into any refactoring work. I just removed the #[wasm_bindgen] attribute for now and made a new attempt at compiling.

Now I got a new error. Progress! If you clone the Glean repository, install wasm-pack, and run wasm-pack build inside the glean-core/ffi/ folder right now, you are bound to get this same error and here is one important excerpt of it:

<...>

fatal error: 'sys/types.h' file not found
cargo:warning=#include <sys/types.h>
cargo:warning=         ^~~~~~~~~~~~~
cargo:warning=1 error generated.
exit code: 1

--- stderr

error occurred: Command "clang" "-Os" "-ffunction-sections" "-fdata-sections" "-fPIC" "--target=wasm32-unknown-unknown" "-Wall" "-Wextra" "-DMDB_IDL_LOGN=16" "-o" "<...>/target/wasm32-unknown-unknown/release/build/lmdb-rkv-sys-5e7282bb8d9ba64e/out/mdb.o" "-c" "<...>/.cargo/registry/src/github.com-1ecc6299db9ec823/lmdb-rkv-sys-0.11.0/lmdb/libraries/liblmdb/mdb.c" with args "clang" did not execute successfully (status code exit code: 1)

One of glean-core’s dependencies is rkv, a storage crate we use for persisting metrics before they are collected and sent in pings. This crate depends on LMDB, which is written in C, thus the clang error.

I do not have extensive experience in writing C/C++ programs, so this was not familiar to me. I figured out that the file this error points to as “not found”, <sys/types.h>, is a header file that should be part of libc. This compiles just fine when trying to compile for our usual targets, so I had a hunch that maybe I just didn’t have the proper libc files for compiling to Wasm targets.

Internet searching pointed me to wasi-libc, a libc for WebAssembly programs. Promising! With this, I retried compiling glean-ffi to Wasm. I just needed to run the build command with added flags:

CFLAGS="--sysroot=/path/to/the/newly/built/wasi-libc/sysroot" wasm-pack build

This didn’t work immediately: the error messages told me to add some extra flags to the command, which I did without thinking much, and the final command was:

CFLAGS="--sysroot=/path/to/wasi-sdk/clone/share/wasi-sysroot -D_WASI_EMULATED_MMAN -D_WASI_EMULATED_SIGNAL" wasm-pack build

I would advise the reader not to get too excited just yet. This command still doesn’t work. It will return yet another set of errors and warnings, mostly related to “usage of undeclared identifiers” or “implicit declaration of functions”. Most of the identifiers that were erroring started with the pthread_ prefix, which reminded me of something I had read in the README of wasi-sdk, a toolkit for compiling C programs to WebAssembly that includes wasi-libc:

“Specifically, WASI does not yet have an API for creating and managing threads yet, and WASI libc does not yet have pthread support”.

That was it. I was done trying to approach the problem of compiling glean-core to Wasm “as is” and decided to try another way. I could try to abstract away our usage of rkv so that depending on it didn’t block compilation to Wasm, but that was such a big refactoring task that I considered it a blocker for this experiment.

Second try: take a part of glean-core and compile that to Wasm

After learning that it would require way too much refactoring of glean-core and glean-ffi to get them to compile to Wasm, I decided to try a different approach and just take a small, self-contained part of glean-core and compile that to Wasm.

Earlier this year I had a small taste of trying to rewrite part of glean-core in Javascript for the distribution simulators that we added to The Glean Book. To make the simulators work I essentially had to reimplement histograms code and part of the distribution metrics code in Javascript.

The histograms code is very self-contained, so it was a perfect candidate to single out for this experiment. I did just that, and I was actually able to get it working as a standalone thing fairly quickly (you can check out the histogram code on the glean-to-wasm repo vs. the histogram code on the Glean repo).

After getting this to work I created three accumulation functions that mimic how each one of the distribution metric types works. These functions would then be exposed to Javascript. The resulting API looks like this:

#[wasm_bindgen]
pub fn accumulate_samples_custom_distribution(
    range_min: u32,
    range_max: u32,
    bucket_count: usize,
    histogram_type: i32,
    samples: Vec<u64>,
) -> String

#[wasm_bindgen]
pub fn accumulate_samples_timing_distribution(
    time_unit: i32,
    samples: Vec<u64>
) -> String

#[wasm_bindgen]
pub fn accumulate_samples_memory_distribution(
    memory_unit: i32,
    samples: Vec<u64>
) -> String

Each one of these functions creates a histogram, accumulates the given samples into it, and returns the resulting histogram as a JSON-encoded string. I tried getting them to return a HashMap<u64, u64> at first, but that is not supported.

For this I was still following MDN’s guide “Compiling from Rust to WebAssembly”, which I can’t recommend enough, and after I got my Rust code to compile to Wasm it was fairly straightforward to call the functions imported from the Wasm module inside my Javascript code.

Here is a little taste of what that looked like:

import("glean-wasm").then(Glean => {
    const data = JSON.parse(
        Glean.accumulate_samples_memory_distribution(
            unit, // A Number value between 0 - 3
            values // A BigUint64Array with the sample values
        )
    )
    // <Do something with data>
})

The only hiccup I ran into was that I needed to change my code to use the BigInt number type instead of the default Number type from Javascript. That is necessary because, in Rust, my functions expect a u64 and BigInt is the type that maps to that from Javascript.
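For example, converting an array of plain Numbers into the BigUint64Array that the Wasm function expects looks roughly like this (the sample values are made up for illustration):

// Plain Numbers have to be converted to BigInt before they can be stored
// in a BigUint64Array.
const samples = [1024, 2048, 4096];
const values = new BigUint64Array(samples.map((n) => BigInt(n)));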

This code can be checked out at: https://github.com/brizental/glean-wasm-experiment

And there is a demo of it working in: https://glean-wasm.herokuapp.com/

Final considerations

This was a very fun experiment, but does it validate my initial hypothesis:

Should we compile glean-core to Wasm and have Javascript be just another language binding?

We definitely can do that. Even though my first attempt was not completed, if we abstract away the dependencies that can’t be compiled to Wasm, refactor the unsafe functions out, and work through any other roadblocks we find along the way, we can do it. The effort that would take, though, I believe is not worth it. It would take us much less time to rewrite glean-core’s code in Javascript. Spoiler alert for our upcoming TWiG about the Glean.js workweek, but in just a week we were able to get a functioning prototype of that.

Our requirements for a Glean SDK for the web are different from our requirements for a native version of Glean. Different enough that the burden of maintaining two versions of glean-core, one in Rust and another in Javascript, is probably smaller than the amount of work and hacks it would take to build a single version that serves both platforms.

Another issue is compatibility: Wasm is very well supported, but there are still environments that don’t support it. It would be suboptimal if we went through the trouble of changing glean-core so that it compiles to Wasm and then still had to make a Javascript-only version for compatibility reasons.

My conclusion is that although we can compile glean-core to Wasm, it doesn’t mean that we should do that. The advantages of having a single source of truth for the Glean SDK are very enticing, but at the moment it would be more practical to rewrite something specific for the web.

Mozilla L10NL10n Report: September 2020 Edition

Welcome!

New localizers

  • Victor and Orif are teaming up to re-build the Tajik community.
  • Théo of Corsican (co).
  • Jonathan of Luganda (lg).
  • Davud of Central Kurdish (ckb).

Are you a locale leader and want us to include new members in our upcoming reports? Contact us!

New community/locales added

New content and projects

Infrastructure and l10n.mozilla.org

As part of the effort to streamline and rationalize the localization infrastructure, following the recent lay-offs, we have decided to decommission Elmo. Elmo is the project name of what has been the backbone of our localization infrastructure for over 10 years, and its public facing part was hosted on l10n.mozilla.org (el as in “el-10-en (l10n)”, m(ozilla), o(rg) = elmo).

The practical consequences of this change are:

  • There are no more sign-offs for Firefox. Beta builds are going to use the latest content available in the l10n repositories at the time of the build.
  • The deadline for localization moves to the Monday before Release Candidate week. That’s 8 days before release day, and 5 more full days available for localization compared to the previous schedule. For reference, the deadline will be set to the day before (Sunday) in Pontoon, since the actual merge happens in the middle of the day on Monday.
  • https://l10n.mozilla.org will be redirected to https://pontoon.mozilla.org/ (the 400 – Bad Gateway error currently displayed is a known problem).

What’s new or coming up in Firefox desktop

Upcoming deadlines:

  • Firefox 82 is currently in beta and will be released on October 20th. The deadline to update localization is on October 11 (see above to understand why it moved closer to the release date).

As you might have noticed, the number of new strings in Firefox has significantly decreased, with DevTools becoming less actively developed. Now more than ever it’s a good time to:

  • Test your builds.
  • Review pending suggestions in Pontoon for your locale, in Firefox but also in other projects. Firefox alone currently has over 12 thousand suggestions pending across teams, with several locales well over 500 unreviewed suggestions.

What’s new or coming up in mobile

This last month, as announced – and as you have probably noticed – we have been reducing the number, and priority, of mobile products to localize. We are now focusing much more on Firefox for Android and Firefox for iOS – our original flagship products for mobile. Please therefore refer to the “star” metric on Pontoon to prioritize your work for mobile.

The Firefox for Android schedule from now on should give two weeks out of four for localization work – as it did for Focus. This means strings will be landing during two weeks in Pontoon – and then you will have two weeks to work on those strings so they can make it into the next version. Check the deadline section in Pontoon to know when the l10n deadline for the next release is.

Concerning iOS: with iOS 14 we can now set Firefox as default! Thanks to everyone who has helped localize the new strings that enable this functionality globally.

What’s new or coming up in web projects

Common Voice

Support will continue with reduced staff. Though there won’t be new features introduced in the next six months, the team is still committed to fixing high-priority bugs, adding newly requested languages, and releasing updated datasets, although these will take longer to implement than before. Please follow the project’s latest updates on Discourse.

WebThings Gateway

The project is being spun out of Mozilla as an independent open source project. It will be renamed from Mozilla WebThings to WebThings and will move to a new home at webthings.io. For other FAQs, please check here. We will update everyone as soon as the transition is complete and the new home becomes available.

What’s new or coming up in SuMo

It would be great to get the following articles localized in Indonesian in the upcoming release for Firefox for iOS:

What’s new or coming up in Pontoon

  • Mentions. We have added the ability to mention users in comments. After you type @ followed by any character, a dropdown will show up allowing you to select users from the list using Tab, Enter or mouse. You can narrow down the list by typing more characters. Kudos to April who has done an excellent job from the design and research phase all the way to implementing the final details of this feature! (Screenshot: Pontoon, showing the new “mentions” feature)
  • Download Terminology and TM from dashboards. Thanks to our new contributor Anuj Pandey you can now download TBX and TMX files directly from Team and Localization dashboards, without needing to go to the Translate page. Anuj also fixed two other bugs that will make the Missing translation status more noticeable and remove hardcoded @mozilla.com email addresses from the codebase.

Useful Links

Questions? Want to get involved?

Did you enjoy reading this report? Let us know how we can improve by reaching out to any one of the l10n-drivers listed above.

Mozilla VR BlogFirefox Reality 12

The latest version of Firefox Reality for standalone VR headsets brings a host of long-awaited features we're excited to reveal, as well as improved stability and performance.

Add-on support

Firefox Reality is the first and only browser to bring add-on support to the immersive web. Now you can download powerful extensions that help you take control of your VR browsing experience. We started with favorites like uBlock, Dark Reader, and Privacy Badger.

Autofill

Ever get tired of typing your passwords in the browser? This can be tedious, especially using VR headset controllers. Now, your browser can do the work of remembering and entering your passwords and other frequent form text with our autofill feature.

Redesigned library and updated status bar

We’ve completely redesigned and streamlined our library and simplified our status bar. You can also find additional information on the status bar, including indicators for the battery levels of controllers and the headset, as well as time/date info.

Find the Bookmarks menu in our redesigned Library interface.
Indicators for controller and headset battery life.
Find the Addons list in our redesigned Library interface.

Redesigned Content Feed

We’ve also redesigned our content feed for ease of navigation and discovery of related content organized by the categories in the left menu. Stay tuned for this change rolling out to your platform of choice soon.

The future of Firefox Reality

Look for Firefox Reality 12 available now in the HTC, Pico and Oculus stores. This feature-packed release of Firefox Reality will be the last major feature release for a while as we gear up for a deeper investment in Hubs. But not to worry! Firefox Reality will still be well supported and maintained on your favorite standalone VR platform.

Contribute to Firefox Reality!

Firefox Reality is an open source project. We love hearing from and collaborating with our developer community. Check out Firefox Reality on GitHub and help build the open immersive web.

about:communityContributors to Firefox 81 (and 80, whoops)

Errata: In our release notes for Firefox 80, we forgot to mention all the developers who contributed their first code change to Firefox in this release, 10 of whom were brand new volunteers! We’re grateful for their efforts, and apologize for not giving them the recognition they’re due on time. Please join us in thanking each of these diligent and enthusiastic individuals, and take a look at their contributions:

As well, with the release of Firefox 81 we are once again honoured to welcome the developers who contributed their first code change to Firefox with this release, 18 of whom were brand new volunteers. Again, please join us in thanking each of these diligent and enthusiastic individuals, and take a look at their contributions:

The Mozilla BlogLaunching the European AI Fund

Right now, we’re in the early stages of the next phase of computing: AI. First we had the desktop. Then the internet. And smartphones. Increasingly, we’re living in a world where computing is built around vast troves of data and the algorithms that parse them. They power everything from the social platforms and smart speakers we use everyday, to the digital machinery of our governments and economies.

In parallel, we’re entering a new phase of  how we think about, deploy, and regulate technology. Will the AI era be defined by individual privacy and transparency into how these systems work? Or, will the worst parts of our current internet ecosystem — invasive data collection, monopoly, opaque systems — continue to be the norm?

A year ago, a group of funders came together at Mozilla’s Berlin office to talk about just this: how we, as a collective, could help shape the direction of AI in Europe. We agreed on the importance of a landscape where European public interest and civil society organisations — and not just big tech companies — have a real say in shaping policy and technology. The next phase of computing needs input from a diversity of actors that represent society as a whole.

Over the course of several months and with dozens of organizations around the table, we came up with the idea of a European AI Fund — a project we’re excited to launch this week.

The fund is supported by the Charles Stewart Mott Foundation, King Baudouin Foundation, Luminate, Mozilla, Oak Foundation, Open Society Foundations and Stiftung Mercator. We are a group of national, regional and international foundations in Europe that are dedicated to using our resources — financial and otherwise — to strengthen civil society. We seek to deepen the pool of experts across Europe who have the tools, capacity and know-how to catalogue and monitor the social and political impact of AI and data driven interventions — and hold them to account. The European AI Fund is hosted by the Network of European Foundations. I can’t imagine a better group to be around the table with.

Over the next five years, the European Commission and national governments across Europe will forge a plan for Europe’s digital transformation, including AI. But without a strong civil society taking part in the debate, Europe — and the world — risk missing critical opportunities and could face fundamental harms.

At Mozilla, we’ve seen first-hand the expertise that civil society can provide when it comes to the intersection of AI and consumer rights, racial justice, and economic justice. We’ve collaborated closely over the years with partners like European Digital Rights, Access Now, Algorithm Watch and Digital Freedom Fund. Conversely, we’ve seen what can go wrong when diverse voices like these aren’t part of important conversations: AI systems that discriminate, surveil, radicalize.

At Mozilla, we believe that philanthropy has a key role to play in Europe’s digital transformation and in keeping AI trustworthy, as we’ve laid out in our trustworthy AI theory of change. We’re honoured to be working alongside this group of funders in an effort to strengthen civil society’s capacity to contribute to these tech policy discussions.

In its first step, the fund will launch with a 1,000,000 € open call for funding, open until November 1. Our aim is to build the capacity of those who already work on AI and Automated Decision Making (ADM). At the same time, we want to bring in new civil society actors to the debate, especially those who haven’t worked on issues relating to AI yet, but whose domain of work is affected by AI.

To learn more about the European AI Fund visit http://europeanaifund.org/

The post Launching the European AI Fund appeared first on The Mozilla Blog.

Firefox UXFrom a Feature to a Habit: Why are People Watching Videos in Picture-in-Picture?

At the end of 2019, if you were using Firefox to watch a video, you saw a new blue control with a simple label: “Picture-in-Picture.” Even after observing and carefully crafting the feature with feedback from in-progress versions of Firefox (Nightly and Beta), our Firefox team wasn’t really sure how people would react to it. So we were thrilled when we saw signals that the response was positive.

Firefox’s Picture-in-Picture allows you to watch videos in a floating window (always on top of other windows) so you can keep an eye on what you’re watching while interacting with other sites, or applications.

Picture-in-Picture window

From a feature to a habit

About 6 months after PiP’s release, we started to see some trends from our data. We know from our internal data that people use Firefox to watch video. In fact, some people watch video over 60% of the time when they’re using Firefox. And, some of these people use PiP to do that. Further, our data shows that people who use Picture-in-Picture open more PiP windows over time. In short, we see that not everyone uses PiP, but those who do seem to be forming a habit with it.

A habit is a behaviour “done with little or no conscious thought.” So we asked ourselves:

  • Why is PiP becoming a habit for some people?
  • What are peoples’ motivations behind using PiP?

Fogg’s Behavior Model describes habits and how they form. We already knew two parts of this equation: Behavior and Ability. But we didn’t know Motivation and Trigger.

Fogg’s Behavior Model: Behavior = Motivation + Ability + Trigger.

To get at these “why” questions, we conducted qualitative research with people who use PiP. We conducted interviews with 11 people to learn more about how they discovered PiP and how they use it in their everyday browsing. We were even able to observe these people using PiP in action. It’s always a privilege to speak directly to people who are using the product. Talking to and observing peoples’ actions is an indispensable part of making something people find useful.

Now we’ll talk about the Motivation part of the habit equation by sharing how the people we interviewed use PiP.

Helps with my tasks

When we started to look at PiP, we were worried that the feature would bring some unintended consequences in peoples’ lives. Could PiP diminish their productivity by increasing distractibility? Surprisingly, from what we observed in these interviews, PiP helped some participants do their task, as opposed to being needlessly distracting. People are using PiP as a study tool, to improve their focus, or to motivate them to complete certain tasks.

PiP for note-taking

One of our participants was a student. He used Picture-in-Picture to watch lecture videos and take notes while doing his homework. PiP helped him complete and enhance a task.

PiP video open on the left with the Pages application in the main area of the screen. Taking notes in a native desktop application while watching a lecture video in Picture-in-Picture. (Recreation of what a participant did during an interview)

Breaks up the monotony of work

You might have this experience: listening to music or a podcast helps you “get in the zone” while you’re exercising or perhaps doing chores. It helps you lose yourself in the task, and make mundane tasks more bearable. Picture-in-Picture does the same for some people while they are at work, to avoid the surrounding silence.

“I just kind of like not having dead silence… I find it kind of motivating and I don’t know, it just makes the day seem less, less long.”
— Executive Assistant to a Real Estate Developer

Calms me down

Multiple people told us they watch videos in PiP to calm themselves down. If they are reading a difficult article for work or study, or doing some art, watching ASMR or trance-like videos feels therapeutic. Not only does this calm people down, they said it can help them focus.

Reading an article in a native desktop application while watching a soothing video of people running in PiP. (Recreation of what a participant did during an interview)

Keeps me entertained

And finally, some people use Picture-in-Picture for pure and simple entertainment. One person watches a comedic YouTuber talk about reptiles while playing a dragon-related browser game. Another person watches a friend’s live-streamed gaming while playing a game themself.

PiP video in the upper left with a game in the main area of the screen. Playing a browser game while watching a funny YouTube video. (Recreation of what a participant showed us during an interview)

Our research impact

Some people have habits with PiP for the reasons listed above, and we also learned there’s nothing gravely wrong with PiP to prevent habit-forming. Therefore, our impact is related to PiP’s strategy: Do not make “habit-forming” a measure of PiP’s success. Instead, better support what people already do with PiP. Particularly, PiP is getting more controls, for example, changing the volume.

Share your stories

While conducting these interviews, we also prepared an experiment to test different versions of Picture-in-Picture, with the goal of increasing the number of people who discover it. We’ll talk more on that soon!

In the meantime, we’d like to hear even more stories. Are you using Picture-in-Picture in Firefox? Are you finding it useful? Please share your stories in the comments below, or send us a tweet @firefoxUX with a screenshot. We’d love to hear from you.

This article was co-written by Jennifer Davidson and also published on the Firefox UX blog.

Thank you to Betsy Mikel for editing our blog post.


From a Feature to a Habit: Why are People Watching Videos in Picture-in-Picture? was originally published in Firefox User Experience on Medium, where people are continuing the conversation by highlighting and responding to this story.

Mozilla VR BlogYour Security and Mozilla Hubs

Mozilla and the Hubs team take internet security seriously. We do our best to follow best practices for web security and securing data. This post will provide an overview of how we secure access to your rooms and your data.

Room Authentication

In the most basic scenario, only people who know the URL of your room can access it. We use randomly generated strings in room URLs so they are hard to guess. If you need more security, you can limit your room so that only users with Hubs accounts can join (usually, anyone can join regardless of account status). This is a server-wide setting, so you have to run your own Hubs Cloud instance to enable it.

You can also make rooms “invite only” which generates an additional key that needs to be used on the link to allow access. While the room ID can’t be changed, an “invite only” key can be revoked and regenerated, allowing you to revoke access to certain users.
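To illustrate the general idea of unguessable identifiers (this is a conceptual sketch, not Hubs’ actual code), a room ID or invite key can be derived from cryptographically secure random bytes, which makes it impractical to guess and easy to replace when revoked:

// Illustrative sketch only: generate an unguessable, URL-safe token from
// cryptographically secure random bytes.
const crypto = require("crypto");

function generateToken(bytes = 16) {
  // 16 random bytes -> 32 hexadecimal characters.
  return crypto.randomBytes(bytes).toString("hex");
}

console.log(generateToken()); // e.g. "9f2c5d0a1b7e4c3d8a6f0b2e4d1c7a9b"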

Discord OAuth Integration

Alternatively, users can create a room via the Hubs Discord bot, and the room becomes bound to the security context of that Discord. In this scenario, a user’s identity is tied to their identity in Discord, and they only have access to rooms that are tied to channels they have access to. Users with “modify channel” permissions in Discord get corresponding “room owner” permissions in Hubs, which allows them to change room settings and kick users out of the room. For example, if I am a member of the private channel #standup, and there is a room tied to that channel, only members of that channel (including me) are allowed in the associated room. Anyone attempting to access the room will first need to authenticate via Discord.

How we secure your data

We collect minimal data on users. For any data that we do collect, all database data and backups are encrypted at rest. Additionally, we don’t store raw emails in our database; this means we can’t retrieve your email, we can only check whether the email you enter at log-in is in our database. All data is stored on a private subnet and is not accessible via the internet.

For example, let’s go through what happens when a user uploads a file inside a room. First, the user uploads a personal photo to the room to share with others. This generates a URL via a unique key, which is passed to all other users inside the room. Even if others find the URL of the file, they cannot decrypt the photo without this key (including the server operator!). The photo owner can choose to pin the photo to the room, which saves the encryption key in a database with the encrypted file. When you visit the room again, you can access the file, because the key is shared with room visitors. However, if the file owner leaves the room without pinning the photo, then the photo is considered ‘abandoned data’ and the key is erased. This means that no users can access the file anymore, and the data is erased within 72 hours.
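As a conceptual sketch of this kind of client-side encryption (this is not the actual Hubs implementation; the function name and parameters are assumptions for illustration), a browser can encrypt the file with a freshly generated key using the Web Crypto API, upload only the ciphertext, and keep the key on the client to share with the room:

// Conceptual sketch: encrypt a file in the browser so the server only ever
// stores ciphertext it cannot read. `fileBuffer` is an ArrayBuffer with the
// file's contents.
async function encryptForUpload(fileBuffer) {
  // Generate a fresh symmetric key for this one file.
  const key = await crypto.subtle.generateKey(
    { name: "AES-GCM", length: 128 },
    true, // extractable, so it can be shared with other room members
    ["encrypt", "decrypt"]
  );
  const iv = crypto.getRandomValues(new Uint8Array(12));
  const ciphertext = await crypto.subtle.encrypt({ name: "AES-GCM", iv }, key, fileBuffer);
  // Only the ciphertext (and IV) would be uploaded; the exported key stays
  // with the client and is shared with room members, or stored alongside the
  // file only when the object is pinned.
  const exportedKey = await crypto.subtle.exportKey("jwk", key);
  return { ciphertext, iv, exportedKey };
}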

All data is encrypted in transit via TLS. We do not currently support end-to-end encryption.

Hubs Cloud Security

When you deploy your own Hubs Cloud instance, you have full control over the instance and its data via AWS or DigitalOcean infrastructure; Mozilla simply provides the template and automatic updates. Therefore, you can integrate your own security measures and technology as you like. Everyone’s use case is different. Hubs Cloud is an as-is product, and we’re unable to predict its performance as you make changes to the template.

Server access is limited by SSH and sometimes two-factor authentication. For additional security, you can set stack template rules to restrict which IP addresses can SSH into the server.

How do we maintain Hubs Cloud with the latest security updates

We automatically update packages for security updates and update our version on a monthly cadence, but if a security issue is exposed (either in our software or in third-party software), we can immediately update all stacks. We inherit our network architecture from AWS, which includes load balancing and DDoS protection.

Your security on the web is non-negotiable. Between maintaining security updates, authenticating users, and encrypting data at rest and in transit, we prioritize our users’ security needs. For any additional questions, please reach out to us. To contribute to Hubs, visit https://github.com/mozilla/hubs.

SeaMonkeySeaMonkey 2.53.4 released!

Hi everyone,

Just want to quickly post that SeaMonkey 2.53.4 has been released!

This time around, it took a bit longer than anticipated due to unforeseen circumstances and my needing to do some workarounds.

Sorry for the delay.

:ewong

Mozilla VR BlogYour Privacy and Mozilla Hubs

At Mozilla, we believe that privacy is fundamental to a healthy internet. We especially believe that this is the case in social VR platforms, which process and transmit large amounts of personal information. What happens in Hubs should stay in Hubs.

Privacy expectations in a Hubs room

First, let’s discuss what your privacy expectations should be when you’re in a Hubs room. In general, anything transmitted in a room is available to everyone connected to that room. They can save anything that you send. This is why it’s so important to only give the Hubs link out to people you want to be in the room, or to use Discord authentication so only authorized users can access a room.

While some rooms may have audio falloff to declutter the audio in a room, users should still have the expectation that anyone in the room (or in the lobby) can hear what’s being said. Audio falloff is performed in the client, so anyone who modifies their client can hear you from anywhere in the room.

Other users in the room have the ability to create recordings. While recording, the camera tool will display a red icon, and your avatar will indicate to others with a red icon that you are filming and capturing audio. All users are notified when a photo or video has been taken. However, users should still be aware that others could use screen recorders to capture what happens in a Hubs room without their knowledge.

Minimizing the data we collect on you

The only data we need to create an account for you is your email address, which we store hashed in an encrypted database. We don’t collect any additional personal information like birthdate, real name, or telephone numbers. Accounts aren’t required to use Hubs, and many features are available to users without accounts.

Processing data instead of collecting data

There’s a certain amount of information that we have to process in order to provide you with the Hubs experience. For example, we receive and send to others the name and likeness of your avatar, its position in the room, and your interactions with objects in the room. If you create an account, you can store custom avatars and their names.

We receive data about the virtual objects and avatars in a room in order to share that data with others in the room, but we don’t monitor the individual objects that are posted in a room. Users have the ability to permanently pin objects to a room, which will store them in the room until they’re deleted. Unpinned files are deleted from Mozilla’s servers after 72 hours.

We do collect basic metrics about how many rooms are being created and how many users are in those rooms, but we don’t tie that data to specific rooms or users. What we don’t do is collect or store any data without the user's explicit consent.

Hubs versus Hubs Cloud

Hubs Cloud owners have the capability to implement additional server-side analytics. We provide Hubs Cloud instances with their own versions of Hubs, with minimal data collection and no user monitoring, which they can then modify to suit their needs. Unfortunately, this means that we can’t make any guarantees about what individual Hubs Cloud instances do, so you’ll need to consult with the instance owner if you have any privacy concerns.

Our promise to you

We will never perform user monitoring or deep tracking, particularly using VR data sources like gaze-tracking. We will continue to minimize the personal data we collect, and when we do need to collect data, we will invest in privacy preserving solutions like differential privacy. For full details, see our privacy policy. Hubs is an open source project–to contribute to Hubs, visit https://github.com/mozilla/hubs.

The Mozilla BlogUpdate on Firefox Send and Firefox Notes

As Mozilla tightens and refines its product focus in 2020, today we are announcing the end of life for two legacy services that grew out of the Firefox Test Pilot program: Firefox Send and Firefox Notes. Both services are being decommissioned and will no longer be a part of our product family. Details and timelines are discussed below.

Firefox Send was a promising tool for encrypted file sharing. Send garnered good reach, a loyal audience, and real signs of value throughout its life.  Unfortunately, some abusive users were beginning to use Send to ship malware and conduct spear phishing attacks. This summer we took Firefox Send offline to address this challenge.

In the intervening period, as we weighed the cost of our overall portfolio and strategic focus, we made the decision not to relaunch the service. Because the service is already offline, no major changes in status are expected. You can read more here.

Firefox Notes was initially developed to experiment with new methods of encrypted data syncing. Having served that purpose, we kept the product as a little utility tool for Firefox and Android users. In early November, we will decommission the Android Notes app and syncing service. The Firefox Notes desktop browser extension will remain available for existing installs, and we will include an option to export all notes; however, it will no longer be maintained by Mozilla and will no longer be installable. You can learn more about how to export your notes here.

Thank you for your patience as we’ve refined our product strategy and portfolio over the course of 2020. While saying goodbye is never easy, this decision allows us to sharpen our focus on experiences like Mozilla VPN, Firefox Monitor, and Firefox Private Network.

The post Update on Firefox Send and Firefox Notes appeared first on The Mozilla Blog.

Mozilla Add-ons BlogDownload Statistics Update

In June, we announced that we were making changes to add-on usage statistics on addons.mozilla.org (AMO).  Now, we’re making a similar change to add-on download statistics. These statistics are aggregated from the AMO server logs, do not contain any personally identifiable information, and are only available to add-ons developers via the Developer Hub.

Just like with usage stats, the new download stats will be less expensive to process and will be based on Firefox telemetry data. As users can opt out of telemetry reporting, the new download numbers will be generally lower than those reported from the server logs. Additionally, the download numbers are based on new telemetry introduced in Firefox 80, so they will be lower at first and increase as users update their Firefox. As before, we will only count downloads originating from AMO.

The good news is that it will now be easier to track download attribution. The old download stats were based on a custom src parameter in the URL. The new ones break down sources using the more standard UTM parameters, making it easier to measure the effect of social media and other online campaigns.
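
For illustration, here is roughly what a tagged campaign link looks like with standard UTM parameters; the add-on slug and campaign values below are made up for the example.

```ts
// Hypothetical example: tagging an AMO listing link with standard UTM
// parameters so a social media campaign shows up as a distinct source
// in the new download stats. The slug and values are placeholders.
const listing = new URL("https://addons.mozilla.org/en-US/firefox/addon/my-addon/");
listing.searchParams.set("utm_source", "twitter");
listing.searchParams.set("utm_medium", "social");
listing.searchParams.set("utm_campaign", "fall-launch");

console.log(listing.toString());
// https://addons.mozilla.org/en-US/firefox/addon/my-addon/?utm_source=twitter&utm_medium=social&utm_campaign=fall-launch
```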

Here’s a preview of what the new downloads dashboard will look like:

A screenshot of the updated statistics dashboard

We expect to turn on the new downloads data on October 8. Make sure to export your current download numbers if you’re interested in preserving them.

The post Download Statistics Update appeared first on Mozilla Add-ons Blog.

Open Policy & AdvocacyMozilla files comments with the European Commission on safeguarding democracy in the digital age

As in many parts of the world, EU lawmakers are eager to get greater insight into the ways in which digital technologies and online discourse can serve to both enhance and create friction in democratic processes. In the context of its recent ‘European Democracy Action Plan’ (EDAP), we’ve just filed comments with the European Commission, with the aim of informing thoughtful and effective EU policy responses to key issues surrounding democracy and digital technologies.

Our submission complements our recent EU Digital Services Act filing, and focuses on four key areas:

  • The future of the EU Code of Practice on Disinformation: Mozilla was a founding signatory of the Code of Practice, and we recognise it as a considerable step forward. However, in policy terms, the Code is a starting point. There is more work to be done, both to ensure that the Code’s commitments are properly implemented, and to ensure that it is situated within a more coherent general EU policy approach to platform responsibility.
  • Meaningful transparency to address disinformation: To ensure transparency and to facilitate accountability in the effort to address the impact and spread of disinformation online, the European Commission should consider a mandate for broad disclosure of advertising through publicly available ad archive APIs.
  • Developing a meaningful problem definition for microtargeting: We welcome the Commission’s consideration of the role of microtargeting with respect to political advertising and its contribution to the disinformation problem. The EDAP provides an opportunity to gather the systematic insight that is a prerequisite for thoughtful policy responses to limit the harms in microtargeting of political content.
  • Addressing disinformation on messaging apps while maintaining trust and security: In its endeavours to address misinformation on messaging applications, the Commission should refrain from any interventions that would weaken encryption. Rather, its focus should be on enhancing digital literacy; encouraging responsive product design; and enhancing redress mechanisms.

A high-level overview of our filing can be read here, and the substantive questionnaire response can be read here.

We look forward to working alongside policymakers in the European Commission to give practical meaning to the political ambition expressed in the EDAP and the EU Code of Practice on Disinformation. This, as well as our work on the EU Digital Services Act will be a key focus of our public policy engagement in Europe in the coming months.

The post Mozilla files comments with the European Commission on safeguarding democracy in the digital age appeared first on Open Policy & Advocacy.

Mozilla ServicesThe Future of Sync

Intro

There’s a new Sync back-end! The past year or so has brought a lot of changes, and some of those changes broke things. Our group reorganized, we moved from IRC to Matrix, and a few other things caught us off guard and needed to be addressed. None of those are excuses for why we kinda stopped keeping you up to date about Sync. We did write a lot about what we were going to do, but we forgot to share it outside of Mozilla. Again, not an excuse, but just letting you know why we felt like we had talked about all of this, even though we absolutely had not.

So, allow me to introduce you to the four-person “Services Engineering” team, whose job it is to keep a bunch of back-end services running, including Push Notifications, the Sync back-end, and a few other miscellaneous services.

For now, let’s focus on Sync.

Current Situation

Sync probably didn’t do what you thought it did.

Sync’s job is to make sure that the bookmarks, passwords, history, extensions, and other bits you want to synchronize get from one copy of Firefox to your other copies of Firefox. Those different copies of Firefox could be different profiles, or be on different devices. Not all of your copies of Firefox may be online or accessible all the time, though, so what Sync has to do is keep a temporary, encrypted copy on some backend servers, which it can use to coordinate later. Since it’s encrypted, Mozilla can’t read that data; we just know it belongs to you.

A side effect is that adding a new instance of Firefox (by installing and signing in on a new device, or uninstalling and reinstalling on the same device, or creating a new Firefox profile you then sign in to) just adds another copy of Firefox to Sync’s list of things to synchronize. It might be a bit confusing, but this is true even if you only had one copy of Firefox. If you “lost” a copy of Firefox because you uninstalled it, or your computer’s disk crashed, or your dog buried your phone in the backyard, then when you re-installed Firefox, you added another copy of Firefox to your account. Sync would then synchronize your data to that new copy. Sync would just never get an update from the “old” copy of Firefox you lost; instead, it would try to rebuild your data from the temporary echoes of the encrypted data that were still on our servers.

That’s great for short-term things, but kinda terrible if you, say, shut down Firefox while you go on walkabout, only to come back months later to a bad hard drive. You reinstall, try to set up Sync, and, due to an unexpected Sync server crash, we wound up losing your data echoes.

That was part of the problem. If we lost a server, we’d basically tell all the copies of Firefox that were using that server, “Whoops, go talk to this new server” and your copy of Firefox would then re-upload what it had. Sometimes this might result in you losing a line of history, sometimes you’d get a duplicate bookmark, but generally, Sync would tend to recover OK and you’d be none the wiser. If that happens when there are no other active copies of Firefox for your account, however, all bets were off and you’d probably lose everything since there were no other copies of your data anywhere.

A New Hope Service

A lot of folks expected Sync to be a backup service. The good news is, now it is one. Sync is more reliable now: we use a distributed database to store your data securely, so we no longer lose databases (or your data echoes). There’s a lot of benefit for us as well. We were able to rewrite the service in Rust, a more efficient programming language that lets us run on fewer machines.

Of course, there are a few challenges we face when standing up a service like this.

Sync needs to run with new versions of Firefox, as well as older ones. In some cases, very old ones, which had some interesting “quirks”. It needs to continue to be at least as secure as before while hopefully giving devs a chance to fix some of the existing weirdness as well as add new features. Oh, and switching folks to the new service should be as transparent as possible.

It’s a long, complicated list of requirements.

How we got here

First off, we had to decide a few things, like which data store we were going to use. We picked Google Cloud’s Spanner database for its own pile of reasons, some technical, some non-technical. Spanner provides a SQL-like database, which means that we don’t have to radically change the existing MySQL-based code. It also means that we can provide some level of abstraction, allowing those who want to self-host to do so without us radically altering internal data structures. In addition, Spanner provides us an overall cost savings in running our servers, and it should be able to handle what we need to do.

We then picked Rust as our development platform and Actix as the web framework, because we had pretty good experience with moving other Python projects to them. It hasn’t been magically easy, and there have been plenty of pain points we’ve hit, but by and large we’re confident in the code, and it’s proven to be easy enough to work with. Rust has also allowed us to reduce the number of servers we have to run in order to provide the service at the scale we need to offer it, which also helps us reduce costs.

For folks interested in following our progress, we’re working with the syncstorage-rs repo on Github. We also are tracking a bunch of the other issues at the services engineering repo.

Because Rust is ever evolving, massively useful features often roll out on different schedules. For instance, we HEAVILY use async/await, which landed in late 2019 and is taking a bit to percolate through all the libraries. As those libraries update, we’re going to need to rebuild bits of our server to take advantage of them.

How you can help

Right now, all we can ask for is some patience, and possibly help with some of our Good First Bugs. Google released a “stand-alone” Spanner emulator that may help you work with our new sync server if you want to play with that part, or you can help us work on the traditional, stand-alone MySQL side. That should let you start experimenting with the server and help us find bugs and issues.

To be honest, our initial focus was more on the Spanner integration work than the stand-alone SQL side. We have a number of existing unit tests that exercise both halves and there are a few of us who are very vocal about making sure we support stand-alone SQL databases, but we can use your help testing in more “real world” environments.

For now, folks interested in running the old python 2.7 syncserver still can while we continue to improve stand-alone support inside of syncstorage-rs.

Some folks who run stand-alone servers are well aware that Python 2.7 has officially reached end of life, meaning no further updates or support are coming from the Python developers. However, we have a bit of leeway here: the PyPy group has said that they plan on offering some support for Python 2.7 for a while longer. Unfortunately, the libraries that we use continue to move on or get abandoned in favor of Python 3. We’re trying to lock down versions as much as possible, but it’s not sustainable.

We finally have Rust-based sync storage working with our durable back end, running and hosting users. Our goal now is to focus on the “stand-alone” version, and we’re making fairly good progress.

I’m sorry that things have been too quiet here. While we’ve been putting together lots of internal documents explaining how we’re going to do this move, we’ve not shared them publicly. Hopefully we can clean them up and do that.

We’re excited to offer a new version of Sync and look forward to telling you more about what’s coming up. Stay tuned!

Open Policy & AdvocacyMozilla announces partnership to explore new technology ideas in the Africa Region

Mozilla and AfriLabs – a Pan-African community and connector of African tech hubs with over 225 technology innovation hubs spread across 47 countries – have partnered to convene a series of roundtable discussions with African startups, entrepreneurs, developers and innovators to better understand the tech ecosystem and identify new product ideas – to spur the next generation of open innovation.

This strategic partnership will help develop more relevant, sustainable support for African innovators and entrepreneurs to build scalable, resilient products, while leveraging honest and candid discussions to identify areas of common interest. There is no shortage of innovators and creative talent across the African continent, with diverse stakeholders coming together to form new ecosystems that solve social and economic problems unique to the region.

“Mozilla is pleased to be partnering with AfriLabs to learn more about the intersection of African product needs and capacity gaps and to co-create value with local entrepreneurs,” said Alice Munyua, Director of the Africa Innovation Program.

Mozilla is committed to supporting communities of technologists by putting people first while strengthening the knowledge-base. This partnership is part of Mozilla’s efforts to reinvest within the African tech ecosystem and support local innovators with scalable ideas that have the potential to impact across the continent.

The post Mozilla announces partnership to explore new technology ideas in the Africa Region appeared first on Open Policy & Advocacy.

Mozilla Add-ons BlogExtensions in Firefox 81

In Firefox 81, we have improved error messages for extension developers and updated user-facing notifications to provide more information on how extensions are modifying their settings.

For developers, the menus.create API now provides more meaningful error messages when it is supplied with an invalid match or URL pattern. The updated messages should make it easier for developers to quickly identify and fix the error. In addition, webNavigation.getAllFrames and webNavigation.getFrame now return a promise resolved with null if the tab is discarded, which is how these APIs behave in Chrome.
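
A rough sketch of these APIs is shown below; the IDs, titles, and match patterns are example values, and the snippet assumes the “menus” permission in manifest.json.

```ts
// Background script sketch for the APIs mentioned above (example values only).
// An invalid pattern such as "htp://example.com/*" now yields a clearer error
// from menus.create(). Assumes the "menus" permission.
browser.menus.create({
  id: "open-in-sidebar",
  title: "Open in sidebar",
  contexts: ["link"],
  documentUrlPatterns: ["*://*.example.com/*"], // must be a valid match pattern
});

// getAllFrames()/getFrame() now resolve with null for discarded tabs,
// matching Chrome, so callers should handle that case explicitly.
async function listFrames(tabId: number): Promise<void> {
  const frames = await browser.webNavigation.getAllFrames({ tabId });
  if (frames === null) {
    console.log(`Tab ${tabId} is discarded; no frame information available.`);
    return;
  }
  for (const frame of frames) {
    console.log(frame.frameId, frame.url);
  }
}
```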

For users, we’ve added a notification when an add-on is controlling the “Ask to save logins and passwords for websites” setting, using the privacy.services.passwordSavingEnabled settings API. Users can see this notification in their preferences or by navigating to about:preferences#privacy.
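
For reference, this is roughly how an extension ends up controlling that setting via the privacy API; it is a sketch that assumes the “privacy” permission, and the function names are only examples.

```ts
// Sketch of the API an add-on uses to control the "Ask to save logins and
// passwords for websites" setting; this is the action that now surfaces the
// user-facing notification. Assumes the "privacy" permission.
async function disablePasswordSaving(): Promise<void> {
  const changed = await browser.privacy.services.passwordSavingEnabled.set({ value: false });
  console.log("Setting changed:", changed); // false if another add-on has precedence
}

async function releasePasswordSavingControl(): Promise<void> {
  // Clearing the value hands control of the setting back to the user.
  await browser.privacy.services.passwordSavingEnabled.clear({});
}
```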

A screenshot of the notification shown in about:preferences when an add-on controls the “Ask to save logins and passwords for websites” setting

Thank you Deepika Karanji for improving the error messages, and our WebExtensions and security engineering teams for making these changes possible. We’re looking forward to seeing what is next for Firefox 82.

The post Extensions in Firefox 81 appeared first on Mozilla Add-ons Blog.

Open Policy & AdvocacyMozilla applauds TRAI for maintaining the status quo on OTT regulation, upholding a key aspect of net neutrality in India

Mozilla applauds the Telecom Regulatory Authority of India (TRAI) for its decision to maintain the existing regulatory framework for OTT services in India. The regulation of OTT services sparked the fight for net neutrality in India in 2015, leading to over a million Indians asking TRAI to #SaveTheInternet and over time becoming one of the most successful grassroots campaigns in the history of digital activism. Mozilla’s CEO, Mitchell Baker, wrote an open letter to Prime Minister Modi at the time stating: “We stand firm in the belief that all users should be able to experience the full diversity of the Web. For this to be possible, Internet Service Providers must treat all content transmitted over the Internet equally, regardless of the sender or the receiver.”

Since then, as we have stated in public consultations in both 2015 and 2019, we believe that imposing a new uniform regulatory framework for OTT services, akin to how telecom operators are governed, would irredeemably harm the internet ecosystem in India. It would create legal uncertainty, chill innovation, undermine security best practices, and eventually hurt the promise of Digital India. TRAI’s thoughtful and considered approach to the topic sets an example for regulators across the world and helps mitigate many of these concerns. It is a historic step for a country that already has among the strongest net neutrality regulations in the world. We look forward to continuing to work with TRAI to create a progressive regulatory framework for the internet ecosystem in India.

The post Mozilla applauds TRAI for maintaining the status quo on OTT regulation, upholding a key aspect of net neutrality in India appeared first on Open Policy & Advocacy.

Open Policy & AdvocacyIndia’s ambitious non personal data report should put privacy first, for both individuals and communities

After almost a year’s worth of deliberation, the Kris Gopalakrishnan Committee released its draft report on non-personal data regulation in India in July 2020. The report is one of the first comprehensive articulations of how non-personal data should be regulated by any country and breaks new ground in interesting ways. While seemingly well intentioned, many of the report’s recommendations leave much to be desired in both clarity and feasibility of implementation. In Mozilla’s response to the public consultation, we have argued for a consultative and rights respecting approach to non-personal data regulation that benefits communities, individuals and businesses alike while upholding their privacy and autonomy.

We welcome the consultation, and believe the concept of non-personal data will benefit from a robust public discussion. Such a process is essential to creating a rights-respecting law compatible with the Indian Constitution and its fundamental rights of equality, liberty and privacy.

The key issues outlined in our submission are:

  • Mitigating risks to privacy: Non-personal data can also often constitute protected trade secrets, and its sharing with third parties can raise significant privacy concerns. As we’ve stated before, information about sales location data from e-commerce platforms, for example, can be used to draw dangerous inferences and patterns regarding caste, religion, and sexuality.
  • Clarifying community data: Likewise, the paper proposes the nebulous concept of community data while failing to adequately provide for community rights. Replacing the fundamental right to privacy with a notion of ownership akin to property, vested in the individual but easily divested by state and non-state actors, leaves individual autonomy in a precarious position.
  • Privacy is not a zero-sum construct: More broadly, the paper focuses on how to extract data for the national interest, while continuing to ignore the urgent need to protect Indians’ privacy. Instead of contemplating how to force the transfer of non-personal data for the benefit of local companies, the Indian Government should leverage India’s place in the global economy by setting forth an interoperable and rights respecting vision of data governance.
  • Passing a comprehensive data protection law: The Indian government should prioritize the passage of a strong data protection law, accompanied by reform of government surveillance. Only after the implementation of such a law, which would make the fundamental right to privacy a reality for all Indians, should we begin to look into non-personal data.

The goal of data-driven innovation oriented towards societal benefit is a valuable one. However, any community-oriented data models must be predicated on a legal framework that secures the individual’s rights to their data, as affirmed by the Indian Constitution. As we’ve argued extensively to MeitY and the Justice Srikrishna Committee, such a law has the opportunity to build on the global standard of data protection set by Europe, and to position India as a leader in internet regulation.

We look forward to engaging with the Indian government as it deliberates how to regulate non-personal data over the coming years.

Our full submission can be found here.

The post India’s ambitious non personal data report should put privacy first, for both individuals and communities appeared first on Open Policy & Advocacy.

Firefox UXContent Strategy in Action on Firefox for Android

The behind-the-strings work that content strategy does to improve the user experience of our products.

The Firefox for Android team recently celebrated an important milestone. We launched a completely overhauled Android experience that’s fast, personalized, and private by design.

Image of white Android phone with Firefox logo over a purple background.

Firefox recently launched a completely overhauled Android experience.

When I joined the mobile team six months ago as its first embedded content strategist, I quickly saw the opportunity to improve our process by codifying standards. This would help us avoid reinventing solutions so we could move faster and ultimately develop a more cohesive product for end users. Here are a few approaches I took to integrate systems thinking into our UX process.

Create documentation to streamline decision making

I had an immediate ask to write strings for several snackbars and confirmation dialogs. Dozens of these already existed in the app. They appear when you complete actions like saving a bookmark, closing a tab, or deleting browsing data.

Screenshots of a snackbar message and confirmation dialog message in the Firefox for Android app.

Snackbars and confirmation dialogs appear when you take certain actions inside the app, such as saving a bookmark or deleting your browsing data.

All I had to do was review the existing strings and follow the already-established patterns. That was easier said than done. Strings live in two XML files. Similar strings, like snackbars and dialogs, are rarely grouped together. It’s also difficult to understand the context of the interaction from an XML file.

Screenshot of the app's XML file, which contains all strings inside the Firefox for Android app.

It’s difficult to identify content patterns and inconsistencies from an XML file.

To see everything in one, more digestible place, I conducted a holistic audit of the snackbars and dialogs.

I downloaded the XML files and pulled all snackbar and dialog-related strings into a spreadsheet. I also went through the app and triggered as many of the messages as I could to add screenshots to my documentation. I audited a few competitors, too.  As the audit came together, I began to see patterns emerge.
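
For teams who want to automate that extraction step, a rough sketch like the one below pulls snackbar- and dialog-related strings out of an Android strings.xml into a CSV; the file path, keyword filter, and sample entry are hypothetical examples, not the app’s actual layout.

```ts
// Rough sketch (Node.js/TypeScript): export strings whose names mention
// "snackbar" or "dialog" from an Android strings.xml into a CSV for auditing.
// The path and keyword filter are examples, not the app's actual structure.
import { readFileSync, writeFileSync } from "fs";

const xml = readFileSync("app/src/main/res/values/strings.xml", "utf8");
const rows: string[] = ["name,text"];

// Each entry looks like: <string name="bookmark_saved_snackbar">Bookmark saved!</string>
const entry = /<string name="([^"]+)">([\s\S]*?)<\/string>/g;
for (const [, name, text] of xml.matchAll(entry)) {
  if (/snackbar|dialog/i.test(name)) {
    rows.push(`${name},"${text.trim().replace(/"/g, '""')}"`);
  }
}

writeFileSync("snackbar-dialog-audit.csv", rows.join("\n"));
console.log(`Exported ${rows.length - 1} strings for review.`);
```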

Screenshot of spreadsheet for organizing strings for the Firefox for Android app.

Organizing and coding strings in a spreadsheet helped me identify patterns and inconsistencies.

I identified the following:

  • Inconsistencies in strings. Example: Some had terminal punctuation and others did not.
  • Inconsistencies in triggers and behaviors. Example: A snackbar should have appeared but didn’t, or should NOT have appeared but did.

I used this to create guidelines around our usage of these components. Now when a request for a snackbar or dialog comes up, I can close the loop much faster because I have documented guidelines to follow.

Define and standardize reusable patterns and components

Snackbars are one component of many within the app. Firefox for Android has buttons, permission prompts, error fields, modals, in-app messaging surfaces, and much more. Though the design team maintained a UI library, we didn’t have clear standards around the components themselves. This led to confusion and in some cases the same components being used in different ways.

I began to collect examples of various in-app components. I started small, tackling inconsistencies as I came across them and worked with individual designers to align our direction. After a final decision was made about a particular component, we shared back with the rest of the team. This helped to build the team’s institutional memory and improve transparency about a decision one or two people may have made.

Image of snackbar do's and don'ts with visual examples of how to properly use the component.

Example of guidance we now provide around when it’s appropriate to use a snackbar.

Note that you don’t need fancy tooling to begin auditing and aligning your components. Our team was in the middle of transitioning between tools, so I used a simple Google Slides deck with screenshots to start. It was also easy for other team members to contribute because a slide deck has a low barrier to entry.

Establish a framework for introducing new features

As we moved closer towards the product launch, we began to discuss which new features to add. This led to conversations around feature discoverability. How would users discover the features that served them best?

Content strategy advocated for a holistic approach; if we designed a solution for one feature independent of all others in the app, we could end up with a busy, overwhelming end experience that annoyed users. To help us define when, why, and how we would draw attention to in-app features, I developed a Feature Discovery Framework.

Bulleted list of 5 goals for the feature discoverability framework.

Excerpt from the Feature Discovery Framework we are piloting to provide users access to the right features at the right time.

The framework serves as an alignment tool between product owners and user experience to identify the best approach for a particular feature. It’s broken down into three steps. Each step poses a series of questions that are intended to create clarity around the feature itself.

Step 1: Understanding the feature

How does it map to user and business goals?

Step 2: Introducing the feature

How and when should the feature be surfaced?

Step 3: Defining success with data

How and when will we measure success?

After I had developed the first draft of the framework, I shared it in our UX design critique for feedback. I was surprised to discover that my peers working on other products in Firefox were enthusiastic about applying the framework on their own teams. The feedback I gathered during the critique session helped me make improvements and clarify my thinking. On the mobile team, we’re now piloting the framework for future features.

Wrapping up

The words you see on a screen are the most tangible output of a content strategist’s work, but they are a small sliver of what we do day-to-day. Developing documentation helps us align with our teams and move faster. Understanding user needs and business goals up front helps us define what approach to take. To learn more about how we work as content strategists at Firefox, check out Driving Value as a Tiny UX Content Team.

This post was originally published on Medium.

about:communityWeaving Safety into the Fabric of Open Source Collaboration

At Mozilla, with over 400 staff in community-facing roles and thousands of volunteer contributors across multiple projects, we believe that everyone deserves the right to work, contribute, and convene with the knowledge that their safety and well-being are at the forefront of how we operate and communicate as an organization.

In my 2017 research into the state of diversity and inclusion in open source, including qualitative interviews with over 90 community members and a survey of over 204 open source projects, we found that while a majority of projects had adopted a code of conduct, nearly half (47%) of community members did not trust (or doubted) its enforcement. That number jumped to 67% for those in underrepresented groups.

For mission-driven organizations like Mozilla, and others building open source into their product development workflows, I found a lack of cross-organizational strategy for enforcement: a strategy that considers the intertwined nature of open source, where staff and contributors regularly work together as teammates and colleagues.

It was clear that the success of enforcement depended on the organizational capacity to respond as a whole, and was not the sole responsibility of community managers. This blog post describes our journey to get there.

Why This Matters

Truly ‘open’ participation requires that everyone feel safe, supported, and empowered in their roles.

From an ‘organizational health perspective this is also critical to get right as there are unique risks associated with code of conduct enforcement in open collaboration:

  • Safety  – Physical/Sexual/Psychological for all involved.
  • Privacy – Privacy, confidentiality, security of data related to reports.
  • Legal – Failure to recognize applicable law, or follow required processes, resulting in potential legal liability. This includes the geographic region where reporter & reported individuals reside.
  • Brand – Mishandling (or not handling) can create mistrust in organizations, and narratives beyond their ability to manage.

Where We Started (Staff)

Looking down, you can see someone's feet, on concrete next to a purple spraypainted lettering that says 'Start Here'

Photo by Jon Tyson on Unsplash

I want to first acknowledge that across Mozilla’s many projects and communities, maintainers, and project leads were doing an excellent job of managing low to moderate risk cases, including  ‘drive by’ trolling.

That said, our processes and program were immature. Many of those same staff found themselves without the expertise, tools, and key partnerships required to manage escalations and high-risk situations. This caused stress and perhaps placed an unfair burden on those people to solve complex problems in real time. Specifically, the gaps were:

Investigative & HR skill set – Investigation requires both a mindset and a set of tactics to ensure that all relevant information is considered before making a decision. This and other skills related to supporting employees sit in the HR department.

Legal – Legal partnership for both product and employment issues is key to high-risk cases (in any of the previously mentioned categories) and to those which may involve staff, either as the reporter or the reported. The when and how of consulting legal wasn’t yet fully clear.

Incident Response – Incident response requires timing and a clear set of steps that ensure complex decisions, like a project ban, are executed in such a way that the safety and privacy of all involved are at the center. This includes access to expertise and tools that help keep people safe. There was no repeatable, predictable, and visible process to plug into.

Centralized Data Tracking – There was no single, cohesive way to track HR, legal, and community violations of the CPG across the project. This means, theoretically, that someone banned from the community could have applied for a MOSS grant or fellowship, or been invited to Mozilla’s bi-annual All Hands by another team, without that being readily flagged.

Where We Started (Community)

“Community does not pay attention to CPG, people don’t feel it will do anything.” – 2017 research into the state of diversity & inclusion in open source.

For those in our communities, 2017 research found little to no knowledge about how to report violations, and what to expect if they did. In situations perceived as urgent, contributors would look for help from multiple staff members they already had a rapport with, and/or from affiliated community leadership groups like the Mozilla Reps Council. Those community leaders were often heroic in their endeavors to help, but again lacked the same tools, processes, and visibility into how the organization was set up to support them.

In open source more broadly, we have a long timeline of unaddressed toxic behavior, especially from those in roles of influence. It seems fair to hypothesize that the human and financial cost of unaddressed behavior is not unlike the concerning numbers showing up in research about toxic work environments.

Where We Are Now

While this work is never done, I can say with a lot of confidence that the program we’ve built is solid: it systematically addresses the majority of the gaps I’ve mentioned, and it is set up to continually improve.

The investments required to build this program were partly practical, in that we required resources and budget, but also intangible: the emotional commitment to stand shoulder-to-shoulder with people in difficult circumstances, and to potentially endure the response of those for whom equality felt uncomfortable.

Over time, and as processes became more efficient, those investments have been gradually reduced from two people working full time to only 25% of one full-time employee’s time. Even with recent layoffs at Mozilla, these programs are now lean enough to continue as is.

The CPG Program for Enforcement

To date, we’ve triaged 282 reports, consulted on countless issues related to enforcement, and fully rolled out 19 complex full project bans, among other decisions ranked according to levels on our consequence ladder. We’ve also ensured that over 873 of Mozilla’s GitHub repositories use our template, which directs to our processes.

Customers/Users

Who uses this program? It might seem a bit odd to describe those who seek support in difficult situations as customers or users, but from the perspective of service design, thinking this way ensures we are designing with empathy and compassion, and providing value for the journey of open source collaboration.

“I felt very supported by Mozilla’s CPG process when I was being harassed. It was clear who was involved in evaluating my case, and the consequences ladder helped jump-start a conversation about what steps would be taken to address the harassment. It also helped that the team listened to and honor my requests during the process.”  – Mozilla staff member

“I am not sure how I would have handled this on my own. I am grateful that Mozilla provided support to manage and fix a CPG issues in my community”  – Mozilla community member.

Obviously I cannot be specific to protect privacy of individuals involved, but I can group ‘users’ into three groups:

People – contributors, community leaders, community-facing staff, and their managers.

Mozilla Communities & Projects – it’s hard to think of an area that has not leveraged this program in some capacity. Firefox, Developer Tools, SUMO, MDN, Fenix, Rust, Servo, Hubs, Add-ons, MozFest, All Hands, Mozilla Reps, Tech Speakers, Dev Rel, MOSS, L10N, and regional communities are top of mind.

External Communities & Projects – because we’ve shared our work openly, we’ve seen adoption more broadly in the ecosystem, including in the Contributor Covenant’s ‘Enforcement Guidelines’.

Policy & Standards

Answering: “How might we scale our processes, in a way that ensures quality, safety, stability, reproducibility and ultimately builds trust across the board (staff and communities)?”.

This includes continual improvement of the CPG itself. This year, after interviewing an expert on the topic of caste discrimination and its potentially negative impact on open communities, we added caste as a protected group. We’ve also translated the policy into 7 more languages, for a total of 15. Finally, we added a How to Report page, including best practices for accepting a report and for ensuring compliance based on whether staff are the ones reporting or being reported. All changes are tracked here.

For the enforcement policy itself we have the following standards:

  • A decision-making policy governs cases where contributors are involved, ensuring the right stakeholders are consulted and that the scope of that consultation is limited to protect the privacy of those involved.
  • The consequence ladder guides decision-making and, if required, provides guidance for escalation in cases of repeat violations.
  • For rare cases where a ban is temporary, we have a policy to invite members back through our CPG onboarding.
  • To roll out a decision across multiple systems, we’ve developed this project-wide process including communication templates.

To ensure unified understanding and visibility, we have the following supportive processes and tools:

Internal Partnerships

Answering:  “How might we unite people, and teams with critical skills needed to respond  efficiently and effectively (without that requiring a lot of formality)?”.

There were three categories of formal and informal partnerships internally:

Product Partnerships – those with accountability and skill sets related to product implementation of standards and policies. Primarily this is legal’s product team and those administering the Mozilla GitHub organization.

Safety Partnerships – those with specific skill sets required in emergency situations. At Mozilla, this is Security Assurance, HR, Legal, and Workplace Resources (WPR).

Enforcement Partnerships – specifically, this means alignment between HR and Legal on which actions belong to which team. That’s not to say we always need these partnerships; many smaller reports can easily be handled by the community team alone.

There are three circles, one that says 'Employee Support' and lists tasks of that role, the second says 'Investigation (HR)' and lists the tasks of that role, the third says 'Case Actions(community team)' and lists associated actions

An example of how a case involving an employee as reporter, or reported is managed between departments.

We also have less formalized partnerships (more of an intention to work together) across events like All Hands, and in collaboration with other enforcement leaders at Mozilla like those managing high-volume issues in places like Bugzilla.

Working Groups

Answering: “How can we convene representatives from different areas of the org around specific problems?”

Centralized CPG Enforcement Data – Working Group

To mitigate the risk identified above, a working group consisting of HR (for both Mozilla Corporation and Mozilla Foundation), Legal, and the community team comes together periodically to triage requests for things like MOSS grants, community leadership roles, and in-person invites to events (pre-COVID-19). This has reduced the potential for the re-emergence of those with CPG violations in a number of areas.

Safety  – Working Group

When Mozillians feel threatened (whether the threat is perceived or actual), we want to make sure there is an accountable team with the access and ability to trigger broader responses across the organization, based on risk. This started as a mailing list of accountable contacts from Security, Community, Workplace Resources (WPR), and HR; this group now has Security as its DRI, ensuring prompt incident response.

Each of these working groups started as an experiment; each, having demonstrated value, now has an accountable DRI (HR and Security Assurance, respectively).

Education

Answering: “How can we ensure an ongoing high standard of response through knowledge sharing and training for contributors in roles of project leadership, and for staff in community-facing roles (and their managers)?”

We created two courses:

Two course tiles are shown: one titled ‘(Contributor) Community Participation Guidelines’, the other ‘(Staff) Participation Guidelines’. Each shows a link to the FAQ.

These courses are not intended to make people ‘enforcement experts’. Instead, the curriculum covers, at a high level (think ‘first aid’ for enforcement!), those topics critical to mitigating risk and keeping people safe.

98% of the 501 staff who have completed this course said they understood how it applied to their role, and valued the experience.

Central to content is this triage process, for quick decision making, and escalation if needed.

A cropped infographic showing a triage process for CPG violations. The first four steps are for P1 (with descriptions of what those are), the next tile is for P2 (with descriptions of what those are), and there are two more tiles for P3 and P4, each with descriptions.

CPG Triage Infographic

Last (but not least), these courses encourage learners to prioritize self-care, with available resources and clear organizational support for doing so.

Supporting Systems

As part of our design and implementation we also found a need for systems to further our program’s effectiveness.  Those are:

Reporting Tool: We needed a way to effectively and consistently accept and document reports. We decided to use a third-party system that allows Mozilla to create records directly and allows contributors/community members to personally submit reports in the same tool. This helped make sure that we had one authorized system rather than a smattering of notes and documents kept in an unstructured way. It also allows people to report in their own language.

Learning Management System (LMS): No program is effective without meaningful training. To support this, we engaged a third-party tool that allows us to create content that is easy to digest, while also providing assessment opportunities (quizzes) and the ability to track course completion.

Finally…

This often invisible safety work is critical if open source is to reach its full potential. I want to thank the many, many people who cared, advocated, and contributed to this work, and those who trusted us to help.

If you have any questions about this program, including how to leverage our open resources, please do reach out via our Github repository, or Discourse.

___________________________________________________

NOTE: We defined community-facing roles as those with ‘Community Manager’ in their title, plus:

  • Engineers, designers, and others working regularly with contributors on platforms like Mercurial, Bugzilla and Github.
  • Anyone organizing, speaking or hosting events on behalf of Mozilla.
  • Those with jobs requiring social media interactions with external audiences including blogpost authorship.

Mozilla Add-ons BlogIntroducing the Promoted Add-ons Pilot

Today, we are launching a pilot program to give developers a way to promote their add-ons on addons.mozilla.org (AMO). This pilot program, which will run between the end of September and the end of November 2020, aims to expand the number of add-ons we can review and verify as compliant with Mozilla policies, and provides developers with options for boosting their discoverability on AMO.

 

Building Upon Recommended Extensions

We strive to maintain a balance between openness for our development ecosystem and security and privacy for our users. Last summer, we launched a program called Recommended Extensions consisting of a relatively small number of editorially chosen add-ons that are regularly reviewed for policy compliance and prominently recommended on AMO and other Mozilla channels. All other add-ons display a caution label on their listing pages letting users know that we may not have reviewed these add-ons.

We would love to review all add-ons on AMO for policy compliance, but the cost would be prohibitive because these reviews are performed by humans. Still, developers often tell us they would like to have their add-ons reviewed and featured on AMO, and some have indicated a willingness to pay for these services if we provide them.

Introducing Promoted Add-ons

To support these developers, we are adding a new program called Promoted Add-ons, where add-ons can be manually reviewed and featured on the AMO homepage for a fee. Offering these services as paid options will help us expand the number of add-ons that are verified and give developers more ways to gain users.

There will be two levels of paid services available:

  • “Verified” badging: Developers will have all new versions of their add-on reviewed for security and policy compliance. If the add-on passes, it will receive a Verified badge on AMO and in the Firefox Add-ons Manager (about:addons). The caution label will no longer appear on the add-on’s AMO listing page.

Add-on listing page example with verified badge

  • Sponsored placement on the AMO homepage. Developers of add-ons that have a Verified badge have the option to reach more users by paying an additional fee for placement in a new Sponsored section of the AMO homepage. The AMO homepage receives about two million unique visits per month.

AMO homepage with Sponsored section

During the pilot program, these services will be provided to a small number of participants without cost. More details will be provided to participants and the larger community about the program, including pricing, in the coming months.

Sign up for the Pilot Program

If you are interested in participating in this pilot program, click here to sign up. Please note that space will be limited based on the following criteria and restrictions:

  • Your add-on must be listed on addons.mozilla.org.
  • You (or your company) must be based in the United States, Canada, New Zealand, Australia, the United Kingdom, Malaysia, or Singapore, because once the pilot ends, we can only accept payment from these countries. (If you’re interested in participating but live outside these regions, please sign up to join the waitlist. We’re currently looking into how we can expand to more countries.)
  • Up to 12 add-ons will be accepted to the pilot program due to our current capacity for manual reviews. We will prioritize add-ons that are actively maintained and have an established user base.
  • Prior to receiving the Verified badge, a participating add-on will need to pass manual review. This may require some time commitment from developers to respond to potential review requests in a timely manner.
  • Add-ons in the Recommended Extensions program do not need to apply, because they already receive verification and discovery benefits.

We’ll begin notifying developers who are selected to participate in the program on September 16, 2020. We may expand the program in the future if interest grows, so the sign-up sheet will remain open if you would like to join the waitlist.

Next Steps

We expect Verified badges and homepage sponsorships for pilot participants to go live in early October. We’ll run the pilot for a few weeks to monitor its performance and communicate the next phase in November.

For developers who do not wish to participate in this program but are interested in more ways to support their add-ons, we plan to streamline the contribution experience later this year and explore features that make it easier for people to financially support the add-ons they use regularly. These features will be free to all add-on developers, and remain available whether or not the Promoted Add-ons pilot graduates.

We look forward to your participation, and hope you stay tuned for updates! If you have any questions about the program, please post them to our community forum.

The post Introducing the Promoted Add-ons Pilot appeared first on Mozilla Add-ons Blog.

Open Policy & AdvocacyMozilla offers a vision for how the EU Digital Services Act can build a better internet

Later this year the European Commission is expected to publish the Digital Services Act (DSA). These new draft laws will aim to radically transform the regulatory environment for tech companies operating in Europe. The DSA will deal with everything from content moderation, to online advertising, to competition issues in digital markets. Today, Mozilla filed extensive comments with the Commission to outline Mozilla’s vision for how the DSA can address structural issues facing the internet while safeguarding openness and fundamental rights.

The stakes at play for consumers and the internet ecosystem could not be higher. If developed carefully and with broad input from the internet health movement, the DSA could help create an internet experience for consumers that is defined by civil discourse, human dignity, and individual expression. In addition, it could unlock more consumer choice and consumer-facing innovation, by creating new market opportunities for small, medium, and independent companies in Europe.

Below are the key recommendations in Mozilla’s 90-page submission:

  • Content responsibility: The DSA provides a crucial opportunity to implement an effective and rights-protective framework for content responsibility on the part of large content-sharing platforms. Content responsibility should be assessed in terms of platforms’ trust & safety efforts and processes, and should scale depending on resources, business practices, and risk. At the same time, the DSA should avoid ‘down-stack’ content control measures (e.g. those targeting ISPs, browsers, etc). These interventions pose serious free expression risk and are easily circumventable, given the technical architecture of the internet.
  • Meaningful transparency to address disinformation: To ensure transparency and to facilitate accountability, the DSA should consider a mandate for broad disclosure of advertising through publicly available ad archive APIs.
  • Intermediary liability: The main principles of the E-Commerce directive still serve the intended purpose, and the European Commission should resist the temptation to weaken the directive in the effort to increase content responsibility.
  • Market contestability: The European Commission should consider how to best support healthy ecosystems with appropriate regulatory engagement that preserves the robust innovation we’ve seen to date — while also allowing for future competitive innovation from small, medium and independent companies without the same power as today’s major platforms.
  • Ad tech reform: The advertising ecosystem is a key driver of the digital economy, including for companies like Mozilla. However the ecosystem today is unwell, and a crucial first step towards restoring it to health would be for the DSA to address its present opacity.
  • Governance and oversight: Any new oversight bodies created by the DSA should be truly co-regulatory in nature, and be sufficiently resourced with technical, data science, and policy expertise.

Our submission to the DSA public consultation builds on this week’s open letter from our CEO Mitchell Baker to European Commission President Ursula von der Leyen. Together, they provide the vision and the practical guidance on how to make the DSA an effective regulatory tool.

In the coming months we’ll advance these substantive recommendations as the Digital Services Act takes shape. We look forward to working with EU lawmakers and the broader policy community to ensure the DSA succeeds in addressing the systemic challenges holding back the internet from what it should be.

A high-level overview of our DSA submission can be found here, and the complete 90-page submission can be found here.

The post Mozilla offers a vision for how the EU Digital Services Act can build a better internet appeared first on Open Policy & Advocacy.

The Mozilla Thunderbird BlogOpenPGP in Thunderbird 78

Updating to Thunderbird 78 from 68

Soon the Thunderbird automatic update system will start to deliver the new Thunderbird 78 to current users of the previous release, Thunderbird 68. This blog post is intended to share with you details about our OpenPGP support in Thunderbird 78, and some details Enigmail add-on users should consider when updating. If you are interested in reading more about the other features in the Thunderbird 78 release, please see our previous blog post.

Updating to Thunderbird 78 is highly recommended to ensure you will receive security fixes, because no more fixes will be provided for Thunderbird 68 after September 2020.

The traditional Enigmail Add-on cannot be used with version 78, because of changes to the underlying Mozilla platform Thunderbird is built upon. Fortunately, it is no longer needed: Thunderbird 78.2.1 enables a new built-in OpenPGP feature.

Not all of Enigmail’s functionality is offered by Thunderbird 78 yet – but there is more to come. And some functionality has been implemented differently, partly because of technical necessity, but also because we are simplifying the workflow for our users.

With the help of a migration tool provided by the Enigmail Add-on developer, users of Enigmail’s classic mode will get assistance to migrate their settings and keys. Users of Enigmail’s Junior Mode will be informed by Enigmail, upon update, about their options for using that mode with Thunderbird 78, which requires downloading software that isn’t provided by the Thunderbird project. Alternatively, users of Enigmail’s Junior Mode may attempt a manual migration to Thunderbird’s new integrated OpenPGP feature, as explained in our howto document listed below.

Unlike Enigmail, OpenPGP in Thunderbird 78 does not use GnuPG software by default. This change was necessary to provide a seamless and integrated experience to users on all platforms. Instead, the software of the RNP project was chosen for Thunderbird’s core OpenPGP engine. Because RNP is a newer project in comparison to GnuPG, it has certain limitations, for example it currently lacks support for OpenPGP smartcards. As a workaround, Thunderbird 78 offers an optional configuration for advanced users, which requires additional manual setup, but which can allow the optional use of separately installed GnuPG software for private key operations.

The Mozilla Open Source Support (MOSS) awards program has thankfully provided funding for an audit of the RNP library and Thunderbird’s related code, which was conducted by the Cure53 company.  We are happy to report that no critical or major security issues were found, all identified issues had a medium or low severity rating, and we will publish the results in the future.

More Info and Support

We have written a support article that lists questions that users might have, and it provides more detailed information on the technology, answers, and links to additional articles and resources. You may find it at: https://support.mozilla.org/en-US/kb/openpgp-thunderbird-howto-and-faq

If you have questions about the OpenPGP feature, please use Thunderbird’s discussion list for end-to-end encryption functionality at: https://thunderbird.topicbox.com/groups/e2ee

Several topics have already been discussed, so you might be able to find some answers in its archive.

The Mozilla BlogMozilla CEO Mitchell Baker urges European Commission to seize ‘once-in-a-generation’ opportunity

Today, Mozilla CEO Mitchell Baker published an open letter to European Commission President Ursula von der Leyen, urging her to seize a ‘once-in-a-generation’ opportunity to build a better internet through the opportunity presented by the upcoming Digital Services Act (“DSA”).

Mitchell’s letter coincides with the European Commission’s public consultation on the DSA, and sets out high-level recommendations to support President von der Leyen’s DSA policy agenda for emerging tech issues (more on that agenda and what we think of it here).

The letter sets out Mozilla’s recommendations to ensure:

  • Meaningful transparency with respect to disinformation;
  • More effective content accountability on the part of online platforms;
  • A healthier online advertising ecosystem; and,
  • Contestable digital markets

As Mitchell notes:

“The kind of change required to realise these recommendations is not only possible, but proven. Mozilla, like many of our innovative small and medium independent peers, is steeped in a history of challenging the status quo and embracing openness, whether it is through pioneering security standards, or developing industry-leading privacy tools.”

Mitchell’s full letter to Commission President von der Leyen can be read here.

The post Mozilla CEO Mitchell Baker urges European Commission to seize ‘once-in-a-generation’ opportunity appeared first on The Mozilla Blog.

SUMO BlogIntroducing the Customer Experience Team

A few weeks ago, Rina discussed the impact of the recent changes in Mozilla on the SUMO team. This change has resulted in a more focused team that combines Pocket Support and Mozilla Support into a single team that we’re calling Customer Experience, led by Justin Rochell. Justin has been leading the support team in Pocket and will now broaden his responsibilities to oversee Mozilla’s products as well as the SUMO community.

Here’s a short introduction from Justin:

Hey everyone! I’m excited and honored to be stepping into this new role leading our support and customer experience efforts at Mozilla. After heading up support at Pocket for the past 8 years, I’m excited to join forces with SUMO to improve our support strategy, collaborate more closely with our product teams, and ensure that our contributor community feels nurtured and valued. 

One of my first support jobs was for an email client called Postbox, which is built on top of Thunderbird. It feels as though I’ve come full circle, since support.mozilla.org was a valuable resource for me when answering support questions and writing knowledge base articles. 

You can find me on Matrix at @justin:mozilla.org – I’m eager to learn about your experience as a contributor, and I welcome you to get in touch. 

We’re also excited to welcome Olivia Opdahl, who is a Senior Community Support Rep at Pocket and has been on the Pocket Support team since 2014. She’s been responsible for many things in addition to support, including QA, curating Pocket Hits, and doing social media for Pocket.

Here’s a short introduction from Olivia:

Hi all, my name is Olivia and I’m joining the newly combined Mozilla support team from Pocket. I’ve worked at Pocket since 2014 and have seen Pocket evolve many times into what we’re currently reorganizing as a more integrated part of Mozilla. I’m excited to work with you all and learn even more about the rest of Mozilla’s products! 

When I’m not working, I’m probably playing video games, hiking, learning programming, taking photos or attending concerts. These days, I’m trying to become a Top Chef, well, not really, but I’d like to learn how to make more than mac and cheese :D

Thanks for welcoming me to the new team! 

Besides Justin and Olivia, JR/Joe Johnson, whom you might remember as maternity cover for Rina earlier this year, will step in as a Release/Insights Manager for the team and work closely with the product team. Joni will continue to be our Content Lead and Angela our Technical Writer. I will also stay on as Support Community Manager.

We’ll be sharing more information about our team’s focus as we learn more. For now, please join me in welcoming Justin and Olivia to the team!

 

On behalf of the Customer Experience Team,

Kiki

The Mozilla BlogA look at password security, Part V: Disk Encryption

The previous posts ( I, II, III, IV) focused primarily on remote login, either to multiuser systems or Web sites (though the same principles also apply to other networked services like e-mail). However, another common case where users encounter passwords is for login to devices such as laptops, tablets, and phones. This post addresses that topic.

Threat Model

We need to start by talking about the threat model. As a general matter, the assumption here is that the attacker has some physical access to your device. While some devices do have password-controlled remote access, that’s not the focus here.

Generally, we can think of two kinds of attacker access.

Non-invasive: The attacker isn’t willing to take the device apart, perhaps because they only have the device temporarily and don’t want to leave traces of tampering that would alert you.

Invasive: The attacker is willing to take the device apart. Within invasive, there’s a broad range of how invasive the attacker is willing to be, starting with “open the device and take out the hard drive” and ending with “strip the packaging off all the chips and examine them with an electron microscope”.

How concerned you should be depends on who you are, the value of your data, and the kinds of attackers you face. If you’re an ordinary person and your laptop gets stolen out of your car, then attacks are probably going to be fairly primitive, maybe removing the hard disk but probably not using an electron microscope. On the other hand, if you have high value data and the attacker targets you specifically, then you should assume a fairly high degree of capability. And of course people in the computer security field routinely worry about attackers with nation state capabilities.

It’s the data that matters

It’s natural to think of passwords as a measure that protects access to the computer, but in most cases it’s really a matter of access to the data on your computer. If you make a copy of someone’s disk and put it in another computer, that copy will be a pretty close clone of the original (that’s what a backup is, after all) and the attacker will be able to read all your sensitive data off the disk, and quite possibly impersonate you to cloud services.

This implies two very easy attacks:

  • Bypass the operating system on the computer and access the disk directly. For instance, on a Mac you can boot into recovery mode and just examine the disk. Many UNIX machines have something called single-user mode which boots up with administrative access.
  • Remove the disk and mount it in another computer as an external disk. This is trivial on most desktop computers, requiring only a screwdriver (if that), and on many laptops as well; if you have a Mac or a mobile device, the disk may be a soldered-in Flash drive, which makes things harder but still doable.

The key thing to realize is that nearly all of the access controls on the computer are just implemented by the operating system software. If you can bypass that software by booting into an administrative mode or by using another computer, then you can get past all of them and just access the data directly.1

If you’re thinking that this is bad, you’re right. And the solution to this is to encrypt your disk. If you don’t do that, then basically your data will not be secure against any kind of dedicated attacker who has physical access to your device.

Password-Based Key Derivation

The good news is that basically all operating systems support disk encryption. The bad news is that the details of how it’s implemented vary dramatically in some security-critical ways. I’m not talking here about the specific details of cryptographic algorithms and how each individual disk block is encrypted. That’s a fascinating topic (see here), but most operating systems do something mostly adequate. The most interesting question for users is how the disk encryption keys are handled and how the password is used to gate access to those keys.

The obvious way to do this — and the way things used to work pretty much everywhere — is to generate the encryption key directly from the password. [Technical Note: You probably really want to generate a random key and encrypt it with a key derived from the password. This way you can change your password without re-encrypting the whole disk. But from a security perspective these are fairly equivalent.] The technical term for this is a password-based key derivation function, which just means that it takes a password and outputs a key. For our purposes, this is the same as a password hashing function and it has the same problem: given an encrypted disk I can attempt to brute force the password by trying a large number of candidate passwords. The result is that you need to have a super-long password (or often a passphrase) in order to prevent this kind of attack. While it’s possible to memorize a long enough password, it’s no fun, as well as being a real pain to type in whenever you want to log in to your computer, let alone on your smartphone or tablet. As a result, most people use much shorter passwords, which of course weakens the security of disk encryption.
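To make that concrete, here is a minimal sketch of password-based key derivation using the Web Crypto API’s PBKDF2 to turn a password and salt into an AES key. The function name, salt handling, and iteration count are illustrative choices for this post, not what any particular operating system’s disk encryption actually does.

```typescript
// Minimal sketch: derive an AES-GCM key from a password with PBKDF2.
// The parameters below are illustrative only; real disk encryption systems
// choose their own KDF, salt handling, and iteration counts.
async function deriveDiskKey(password: string, salt: Uint8Array): Promise<CryptoKey> {
  const enc = new TextEncoder();

  // Import the raw password as key material for the KDF.
  const material = await crypto.subtle.importKey(
    "raw",
    enc.encode(password),
    "PBKDF2",
    false,
    ["deriveKey"]
  );

  // Stretch it: a high iteration count makes each brute-force guess expensive.
  return crypto.subtle.deriveKey(
    { name: "PBKDF2", salt, iterations: 600000, hash: "SHA-256" },
    material,
    { name: "AES-GCM", length: 256 },
    false,
    ["encrypt", "decrypt"]
  );
}
```

An attacker who copies the encrypted disk can run the same derivation offline against candidate passwords as fast as their hardware allows; the iteration count slows each guess down, but only a genuinely long password keeps the search out of reach.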

Hardware Security Modules

As we’ve seen before, the problem here is that the attacker gets to try candidate passwords very fast and the only real fix is to limit the rate at which they can try. This is what many modern devices do. Instead of just deriving the encryption key from the password, they generate a random encryption key inside a piece of secure hardware called a hardware security module (HSM).2 What “secure” means varies but ideally it’s something like:

  1. It can do encryption and decryption internally without ever exposing the keys.4
  2. It resists physical attacks to recover the keys. For instance it might erase them if you try to remove the casing from the HSM.

In order to actually encrypt or decrypt, you first unlock the HSM with the password. That doesn’t give you the keys; it just lets you use the HSM to do encryption and decryption. Until you enter the password, it won’t do anything.

The main function of the HSM is to limit the rate at which you can try passwords. This might happen by simply having a flat limit of X tries per second, or maybe it exponentially backs off the more passwords you try, or maybe it will only allow some small number of failures (10 is common) before it erases itself. If you’ve ever pulled your iPhone out of your pocket only to see “iPhone is disabled, try again in 5 minutes”, that’s the rate limiting mechanism in action. Whatever the technique, the idea is the same: prevent the attacker from quickly trying a large number of candidate passwords. With a properly designed rate limiting mechanism, you can get away with much shorter passwords. For instance, if you can only have 10 tries before the phone erases itself, then the attacker only has a 1/1000 chance of breaking a 4-digit PIN, let alone a 16-character password. Some HSMs can also do biometric authentication to unlock the encryption key, which is how features like TouchID and FaceID work.
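None of this is any vendor’s real firmware, but a toy sketch of the rate limiting logic described above might look like the following, combining exponential backoff with a wipe after too many failures (the class and method names are made up for illustration):

```typescript
// Toy model of HSM-style unlock rate limiting; not any real vendor's design.
class ToyHsm {
  private failedAttempts = 0;
  private lockedUntil = 0; // epoch milliseconds

  constructor(
    private readonly checkPin: (pin: string) => boolean, // verified inside the HSM
    private readonly wipeKeys: () => void,               // erase the key material
    private readonly maxAttempts = 10
  ) {}

  unlock(pin: string, now = Date.now()): "ok" | "wait" | "wiped" {
    if (now < this.lockedUntil) return "wait"; // still backing off

    if (this.checkPin(pin)) {
      this.failedAttempts = 0;
      return "ok"; // the HSM will now perform encrypt/decrypt operations
    }

    this.failedAttempts += 1;
    if (this.failedAttempts >= this.maxAttempts) {
      this.wipeKeys(); // e.g. "after 10 failures, erase everything"
      return "wiped";
    }

    // Exponential backoff: 1s, 2s, 4s, ... between allowed attempts.
    this.lockedUntil = now + 1000 * 2 ** (this.failedAttempts - 1);
    return "wait";
  }
}
```

With a limit like 10 attempts before a wipe, even a 4-digit PIN leaves the attacker with only a 10-in-10,000 chance of guessing correctly, which is where the 1/1000 figure above comes from.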

So, having the encryption keys in an HSM is a big improvement to security and it doesn’t require any change in the user interface — you just type in your password — which is great. What’s not so great is that it’s not always clear whether your device has an HSM or not. As a practical matter, new Apple devices do, as does the Google Pixel. The situation on Windows 10 is less clear-cut, but many modern devices will have one.

It needs to be said that an HSM isn’t magic: iPhones store their keys in HSMs and that certainly makes them much harder to decrypt, but there are also companies that sell technology for breaking into HSM-protected devices like iPhones (Cellebrite being probably the best known). Still, you’re far better off with a device like this than you are without one. And of course all bets are off if someone takes your device when it’s unlocked. This is why it’s a good idea to have your screen set to lock automatically after a fairly short time; obviously that’s a lot more convenient if you have fingerprint or face ID.3

Summary

OK, so this has been a pretty long series, but I hope it’s given you an appreciation for all the different settings in which passwords are used and where they are safe(r) versus unsafe.

As always, I can be reached at ekr-blog@mozilla.com if you have questions or comments.


  1. Some computers allow you to install a firmware password which will stop the computer from booting unless you enter the right password. This isn’t totally useless but it’s not a defense if the attacker is willing to remove the disk. 
  2. Also called a Secure Encryption Processor (SEP) or a Trusted Platform Module (TPM). 
  3. It’s not technically necessary to keep the keys in HSM in order to secure the device against password guessing. For instance, once the HSM is unlocked it could just output the key and let decryption happen on the main CPU. The problem is that this then exposes you to attacks on the non-tamper-resistant hardware that makes up the rest of the computer. For this reason, it’s better to have the key kept inside the HSM. Note that this only applies to the keys in the HSM, not the data in your computer’s memory, which generally isn’t encrypted, and there are ways to read that memory. If you are worried your computer might be seized and searched, as in a border crossing, do what the pros do and turn it off.
  4. Unfortunately, biometric ID also makes it a lot easier to be compelled to unlock your phone–whatever the legal situation in your jurisdiction, someone can just press your finger against the reader, but it’s a lot harder to make you punch in your PIN–so it’s a bit of a tradeoff. 

Update: 2020-09-07: Changed TPM to HSM once in the main text for consistency.

The post A look at password security, Part V: Disk Encryption appeared first on The Mozilla Blog.

about:communityFive years of Tech Speakers

Given the recent restructuring at Mozilla, many teams have been affected by the layoff. Unfortunately, this includes the Mozilla Tech Speakers program. As one of the volunteers who’s been part of the program since the very beginning, I’d like to share some memories of the last five years, watching the Tech Speakers program grow from a small group of people to a worldwide community.

Mozilla Tech Speakers' photos

Mozilla Tech Speakers – a program to bring together volunteer contributors who are already speaking to technical audiences (developers/web designers/computer science and engineering students) about Firefox, Mozilla and the Open Web in communities around the world. We want to support you and amplify your work!

It all started as an experiment in 2015 designed by Havi Hoffman and Dietrich Ayala from the Developer Relations team. They invited a handful of volunteers who were passionate about giving talks at conferences on Mozilla-related technologies and the Open Web in general to trial a program that would support their conference speaking activities, and amplify their impact. That’s how Mozilla Tech Speakers were born.

It was a perfect symbiosis. A small, scrappy  Developer Relations team can’t cover all the web conferences everywhere, but with help from trained and knowledgeable volunteers that task becomes a lot easier. Volunteer local speakers can share information at regional conferences that are distant or inaccessible for staff. And for half a decade, it worked, and the program grew in reach and popularity.

Mozilla Tech Speakers: Whistler Videoshoot

Those volunteers, in return, were given training and support, including funding for conference travel and cool swag as a token of appreciation.

From the first cohort of eight people, the program grew over the years to more than a hundred expert technical speakers around the world, giving top quality talks at the best web conferences. Sometimes you couldn’t attend an event without randomly bumping into one or two Tech Speakers. It was a globally recognizable brand of passionate, tech-savvy Mozillians.

The meetups

After several years of growth, we realized that connecting remotely is one thing, but meeting in person is a totally different experience. That’s why the idea for Tech Speakers meetups was born. We held meetups in three of the past four years: Berlin in 2016, Paris in 2018, and two events in 2019, in Amsterdam and Singapore, to accommodate speakers on opposite sides of the globe.

Mozilla Tech Speakers: Berlin

The first Tech Speakers meetup in Berlin coincided with the 2016 View Source Conference, hosted by Mozilla. It was only one year after the program started, but we already had a few new cohorts trained and integrated into the group. During the Berlin meetup we gave short lightning talks in front of each other, and received feedback from our peers, as well as professional speaking coach Denise Graveline.

Mozilla Tech Speakers: View Source

After the meetup ended, we joined the conference as volunteers, helping out at the registration desk, talking to attendees at the booths, and making the speakers feel welcome.

Mozilla Tech Speakers: Paris

The second meetup took place two years later in Paris – hosted by Mozilla’s unique Paris office, which literally looks like a palace. We participated in training workshops about Firefox Reality, IoT, WebAssembly, and Rust. We continued the approach of presenting lightning talks that were evaluated by experts from the web conference scene: Ada Rose Cannon, Jessica Rose, Vitaly Friedman, and Marc Thiele.

Mozilla Tech Speakers: Amsterdam

Mozilla hosted two meetups in 2019, before the 2020 pandemic put tech conferences and events on hold.  The European tech speakers met in Amsterdam, while folks from Asia and the Pacific region met in Singapore.

The experts giving feedback for our Amsterdam lightning talks were Joe Nash, Kristina Schneider, Jessica Rose, and Jeremy Keith, with support from Havi Hoffman, and Ali Spivak as well. The workshops included Firefox DevTools and Web Speech API.

The knowledge

The Tech Speakers program was intended to help developers grow and share their incredible knowledge with the rest of the world. We had various learning opportunities, from the initial training through to actually becoming a Tech Speaker. We had access to updates from Mozilla engineering staff about various technologies (Tech Briefings), sessions with experts from outside the company (Masterclasses), and monthly calls where we talked about our own lessons learned.

Mozilla Tech Speakers: Paris lecture

People shared links, usually tips about speaking, teaching and learning, and everything tech related in between.

The perks

Quite often we spoke at local or not-for-profit conferences organized by passionate people like us. With those costs covered, Mozilla was presented as a partner of the conference, which benefited all parties involved.

Mozilla Tech Speakers: JSConf Iceland

It was a fair trade – we were extending the reach of Mozilla’s Developer Relations team significantly, always happy to do it in our free time, while the costs of such activities were relatively low. Since we were properly appreciated by the Tech Speakers staff, it felt really professional at all times and we were happy with the outcomes.

The reports

At its peak, there were more than a hundred Tech Speakers  giving thousands of talks to tens of thousands of other developers around the world. Those activities were reported via a dedicated form, but writing trip reports was also a great way to summarize and memorialize  our involvement in a given event.

The statistics

In the last full year of the program, 2019, we had over 600 engagements (of which about 14% were workshops and the rest talks at conferences) from 143 active speakers across 47 countries. That added up to a total talk audience of about 70,000 and a workshop audience of about 4,000. We were collectively fluent in over 50 of the world’s most common languages.

The life-changing experience

I reported on more than one hundred events I attended as a speaker, workshop lead, or booth staff – many of which wouldn’t have been possible without Mozilla’s support for the Tech Speakers program. Last year I was invited to attend a W3C workshop on Web games in Redmond, and without the travel and accommodation coverage I received from Mozilla, I’d have missed a huge opportunity.

Mozilla Tech Speakers: W3C talk

At that particular event, I met Desigan Chinniah, who got me hooked on the concept of Web Monetization. I immediately went all in: we quickly announced the Web Monetization category in the js13kGames competition, I showcased monetized games at the MozFest Arcade in London, and I was later awarded a Grant for the Web. I don’t think any of it would have been possible without someone accepting my request to fly across the ocean to talk about an indie perspective on Web games as a Tech Speaker.

The family

Aside from the “work” part, Tech Speakers have become literally one big family – best friends for life, and welcome visitors in each other’s cities. This is stronger than anything a company can offer its volunteers, and for that I’m eternally grateful. Tech Speakers were, and always will be, a bunch of cool people doing stuff out of pure passion.

Mozilla Tech Speakers: MozFest 2019

I’d like to thank Havi Hoffman most of all, as well as Dietrich Ayala, Jason Weathersby, Sandra Persing, Michael Ellis, Jean-Yves Perrier, Ali Spivak, István Flaki Szmozsánszky, Jessica Rose, and many others shaping the program over the years, and every single Tech Speaker who made this experience unforgettable.

I know I’ll be seeing you fine folks at conferences when the current global situation settles. We’ll be bumping casually into each other, remembering the good old days, and continuing to share our passions, present and talk about the Open Web. Much love, see you all around!

Blog of DataThis Week in Glean: Leveraging Rust to build cross-platform mobile libraries

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean.)

All “This Week in Glean” blog posts are listed in the TWiG index.


A couple of weeks ago I gave a talk titled “Leveraging Rust to build cross-platform mobile libraries”. You can find my slides as a PDF. It was part of the Rusty Days Webference, an online conference that was initially planned to happen in Poland, but had to move online. Definitely check out the other talks.

One thing I wanted to achieve with that talk is putting that knowledge out there. While multiple teams at Mozilla are already building cross-platform libraries, with a focus on mobile integration, the available material and documentation is lacking. I’d like to see better guides online, and I probably have to start with what we have done. But that should also be encouragement for those out there doing similar things to blog, tweet & speak about it.

Who else is using Rust to build cross-platform libraries, targeting mobile?

I’d like to hear about it. Find me on Twitter (@badboy_) or drop me an email.

The Glean SDK

I won’t reiterate the full talk (go watch it, really!), so this is just a brief overview of the Glean SDK itself.

The Glean SDK is our approach to build a modern Telemetry library, used in Mozilla’s mobile products and soon in Firefox on Desktop as well.

The SDK consists of multiple components, spanning multiple programming languages for different implementations. All of the Glean SDK lives in the GitHub repository at mozilla/glean. This is a rough diagram of the Glean SDK tech stack:

Glean SDK Stack

On the very bottom we have glean-core, a pure Rust library that is the heart of the SDK. It’s responsible for controlling the database, storing data and handling additional logic (e.g. assembling pings, clearing data, etc.). As it is pure Rust we can rely on all Rust tooling for its development. We can write tests that cargo test picks up. We can generate the full API documentation thanks to rustdoc and we rely on clippy to tell us when our code is suboptimal. Working on glean-core should be possible for everyone who knows some Rust.

On top of that sits glean-ffi. This is the FFI layer connecting glean-core with everything else. While glean-core is pure Rust, it doesn’t actually provide the nice API we intend for users of Glean. That one is later implemented on top of it all. glean-ffi doesn’t contain much logic. It’s a translation between the proper Rust API of glean-core and C-compatible functions exposed into the dynamic library. In it we rely on the excellent ffi-support crate. ffi-support knows how to translate between Rust and C types, offers a nice (and safer) abstraction for C strings. glean-ffi holds some state: the instantiated global Glean object and metric objects. We don’t need to pass pointers back and forth. Instead we use opaque handles that index into a map held inside the FFI crate.
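The real handle map lives in Rust inside glean-ffi, but the pattern itself is language-agnostic. Purely as an illustration (none of these names appear in the Glean codebase), an opaque-handle registry looks roughly like this:

```typescript
// Illustration of the opaque-handle pattern used by glean-ffi, sketched in
// TypeScript: callers never hold a pointer or object, only a numeric handle.
class HandleMap<T> {
  private next = 1; // 0 is reserved as an "invalid handle" sentinel
  private readonly entries = new Map<number, T>();

  insert(value: T): number {
    const handle = this.next++;
    this.entries.set(handle, value);
    return handle; // this opaque number is what crosses the FFI boundary
  }

  get(handle: number): T {
    const value = this.entries.get(handle);
    if (value === undefined) throw new Error(`invalid handle ${handle}`);
    return value;
  }

  remove(handle: number): void {
    this.entries.delete(handle);
  }
}

// Hypothetical usage: register a metric object and hand back only its handle.
const counters = new HandleMap<{ add: (amount: number) => void }>();
const handle = counters.insert({ add: (amount) => console.log(`+${amount}`) });
counters.get(handle).add(1); // the "FFI call" resolves the handle internally
```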

The top layer of the Glean SDK are the different language implementations. Language implementations expose a nice ergonomic API to initialize Glean and record metrics in the respective language. Additionally each implementation handles some special cases for the platform they are running on, like gathering application and platform data or hooking into system events. The nice API calls into the Glean SDK using the exposed FFI functions of glean-ffi. Unfortunately at the moment different language implementations carry different amounts of actual logic in them. Sometimes metric implementations require this (e.g. we rely on the clock source of Kotlin for timing metrics), in other parts we just didn’t move the logic out of the implementations yet. We’re actively working on moving logic into the Rust part where we can and might eventually use some code generation to unify the other parts. uniffi is a current experiment for a multi-language bindings generator for Rust we might end up using.

Mozilla VR BlogWhy Researchers Should Conduct User Testing Sessions in Virtual Reality (VR): On Using Hubs by Mozilla for Immersive, Embodied User Feedback


Amidst the pandemic, our research team from Mozilla and The Extended Mind (www.extendedmind.io) performed user testing research entirely in a remote 3D virtual space where participants had to BYOD (Bring Your Own Device). This research aimed to test security concepts that could help users feel safe traversing links in the immersive web, the results of which are forthcoming in 2021. By utilizing a virtual space, we were able to get more intimate knowledge of how users would interact with these security concepts because they were immersed in a 3D environment.

The purpose of this article is to persuade you that Hubs and other VR platforms offer unique affordances for qualitative research. In this blog post, I’ll discuss three key benefits of using VR platforms for research: the ability to perform immersive and embodied research across distances, the ability to include global participants, and the ability to test out concepts prior to implementation. Additionally, I will discuss the unique accessibility of Hubs as a VR platform and the benefits it provided us in our research.

To perform security concept research in VR, The Extended Mind recruited nine Oculus Quest users and brought them into a staged Mozilla Hubs room where we walked them through each security concept design and asked them to rate their likelihood to click a link and continue to the next page. (Of the nine subjects, seven viewed the experience on the Quest and two did so on PC due to technical issues.) For each security concept, we walked them through the actual concept, as well as spoofs of the concept, to see how well people understood the indicators of safety (or lack thereof) they should be looking for.


Because we were able to walk the research subjects through each concept, in multiple iterations, we got a sense not only of their opinions of the concepts but also of what spoofed them. And giving our participants an embodied experience meant that we, as the researchers, did not have to do as much explaining of the concepts. Below, we walk through the key benefits of performing research in VR.

Immersive Research Across Distances

The number one affordance virtual reality offers qualitative researchers is the ability to perform immersive research remotely. Research participants can partake no matter where they live, and yet whatever concept is being studied can be approached as an embodied experience rather than a simple interview.

If researchers wanted qualitative feedback on a new product, for instance, they could provide participants with the opportunity to view the object in 360 degrees, manipulate it in space, and even create a mock up of its functionality for participants to interact with - all without having to collect participants in a single space or provide them with physical prototypes of the product.

Global Participants

The second affordance is that researchers can do global studies. The study we performed with Mozilla had participants from the USA, Canada, Australia and Singapore. Whether researchers want a global sampling or to perform a cross-cultural analysis on a certain subject, VR allows qualitative researchers to collect that data through an immersive medium.

Collect Experiential Qualitative Feedback on Concepts Pre-Implementation

The third affordance is that researchers can gather immersive feedback on concepts before they are implemented. These concepts may be mock-ups of buildings, public spaces, or concepts for virtual applications, but across the board virtual reality offers researchers a deeper dive into participants’ experiences of new concepts and designs than other platforms.

We used flat images to simulate link traversal inside of Hubs by Mozilla and just arranged them in a way that conveyed the storytelling appropriately (one concept per room). Using Hubs to test concepts allows for rapid and inexpensive prototyping. One parallel to this type of research is when people in the Architecture, Engineer, and Construction (AEC) fields use interactive virtual and augmented reality models to drive design decisions. Getting user feedback inside of an immersive environment, regardless of the level of fidelity within, can benefit the final product design.

Accessibility of Hubs by Mozilla

Hubs by Mozilla provided an on-demand immersive environment for research. Mozilla Hubs can be accessed through a web browser, which makes it more accessible than your average virtual reality platform. For researchers who want to perform immersive research in a more accessible way, Mozilla Hubs is a great option.

In our case, Mozilla Hubs allowed researchers to participate through their browser and screen share via Zoom with colleagues, which let a number of team members observe without crowding the actual virtual space. It also gave participants who had technical issues with their headsets an easy alternative.

Takeaway

Virtual reality is an exciting new platform for qualitative research. It offers researchers new affordances that simply aren’t available through telephone calls or video conferencing. The ability to share a space with the participant and direct their attention towards an object in front of them expands not only the scope of what can be studied remotely through qualitative means, but also the depth of data that can be collected from the participants themselves.


The more embodied an experience we can offer participants, the more detailed and nuanced their opinions, thoughts, and feelings towards a new concept will be. This will make it easier for designers and developers to integrate the voice of the user into product creation.

Authors: Jessica Outlaw, Diane Hosfelt, Tyesha Snow, and Sara Carbonneau

Mozilla L10NL10n Report: August 2020 Edition

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet. 

As you are probably aware, Mozilla just went through a massive round of layoffs. About 250 people were let go, reducing the overall size of the workforce by a quarter. The l10n-drivers team was heavily impacted, with Axel Hecht (aka Pike) leaving the company.

We are still in the process of understanding how the reorganization will affect our work and the products we localize. A first step was to remove some projects from Pontoon, and we’ll make sure to communicate any further changes in our communication channels.

Telegram channel and Matrix

The “bridge” between our Matrix and Telegram channel, i.e. the tool synchronizing content between the two, has been working only in one direction for a few weeks. For this reason, and given the unsupported status of this tool, we decided to remove it completely.

As of now:

  • Our Telegram and Matrix channels are completely independent from each other.
  • The l10n-community channel on Matrix is the primary channel for synchronous communications. The reason for this is that Matrix is supported as a whole by Mozilla, offering better moderation options among other things, and can be easily accessed from different platforms (browser, phone).

If you haven’t used Matrix yet, we encourage you to set it up following the instructions available in the Mozilla Wiki. You can also set an email address in your profile, to receive notifications (like pings) when you’re offline.

We plan to keep the Telegram channel around for now, but we might revisit this decision in the future.

New content and projects

What’s new or coming up in Firefox desktop

Upcoming deadlines:

  • Firefox 81 is currently in beta and will be released on September 22nd. The deadline to update localization is on September 8.

In terms of content and new features, most of the changes are around the new modal print preview, which can be currently tested on Nightly.

What’s new or coming up in mobile

The new Firefox for Android has been rolled out at 100%! You should therefore either have been upgraded from the older version already (or will be in just a little bit), or you can download it directly from the Play Store here.

Congratulations to everyone who has made this possible!

For the next Firefox for Android release, we are expecting string freeze to start towards the end of the week, which will give localizers two weeks to complete localizing and testing.

Concerning Firefox for iOS: v29 strings have been exposed on Pontoon. We are still working out screenshots for testing with iOS devs at the moment, but these should be available soon and as usual from the Pontoon project interface.

On another note, and as mentioned at the beginning of this blog post, due to the recent lay-offs, we have had to deactivate some projects from Pontoon. The mobile products are currently: Scryer, Firefox Lite and Lockwise iOS. More may be added to this list soon, so stay tuned. Once more, thanks to all the localizers who have contributed their time and effort to these projects across the years. Your help has been invaluable for Mozilla.

What’s new or coming up in web projects

Common Voice

The Common Voice team has been greatly impacted by the changes in the recent announcement. The team has stopped the two-week sprint cycle and is working in maintenance mode right now. String updates and new language requests will take longer to process due to resource constraints.

Some other changes to the project before the reorg:

  • New site name https://commonvoice.mozilla.org; All traffic from the old domain will be forwarded to the new domain automatically.
  • New GitHub repo name mozilla/common-voice and new branch name main. All traffic to the previous repo voice-web will be forwarded directly to the new repo, but you may need to manually update your git remote if you have a local copy of the site running.

Mozilla.org

An updated firefox/welcome/page4.ftl with new layout will be ready for localization in a few days. The turnaround time is rather short. Be on the lookout for it.

Along with this update is a temporary page called banners/firefox-daylight-launch.ftl that promotes Fenix. It will only be live for a few weeks, so please localize it as soon as possible. Once done, you will see the localized banner on https://www.mozilla.org/firefox/ in production.

The star priority ratings in Pontoon are also revised. The highest priority pages are firefox/all.ftl, firefox/new/*.ftl, firefox/whatsnew/*.ftl, and brands.ftl. The next level priority pages are the shared files. Unless a page has a hard deadline to complete, the rest are normal priority with a 3-star rating and you can take time to localize them.

WebThings Gateway

The team was completely dissolved in the reorg. At the moment, the project will not take any new language requests or update the repo with changes from Pontoon. The project is actively working to move into a community-maintained state. We will update everyone as soon as that information becomes available.

What’s new or coming up in Foundation projects

The Foundation website homepage got a major revamp, strings have been exposed to the relevant locales in the Engagement and Foundation website projects. There’s no strict deadline, you can complete this anytime. The content will be published live regularly, with a first push happening in a few days.

What’s new or coming up in Pontoon

Download Terminology as TBX

We’ve added the ability to download Terminology from Pontoon in the standardized TBX file format, which allows you to exchange it with other users and systems. To access the feature, click on your user icon in the top-right section of the translation workspace and select “Download Terminology”.

Improving Machinery with SYSTRAN

We have an update on the work we’ve been doing with SYSTRAN to provide you with better machine translation options in Pontoon.

SYSTRAN has published three NMT models (German, French, and Spanish) based on contributions of Mozilla localizers. They are available in the SYSTRAN Marketplace as free and open source and accessible to any existing SYSTRAN NMT customers. In the future, we hope to make those models available beyond the SYSTRAN system.

These models have been integrated with Pontoon and are available in the Machinery tab. Please report any feedback that you have for them, as we want to make sure these are a useful resource for your contributions.

We’ll be working with SYSTRAN to learn how to build the models for new language pairs in 2021, which should widely expand the language coverage.

Search with Enter

From now on you need to press Enter to trigger search in the string list. This change unifies the behaviour of the string list search box and the Machinery search box, which despite similar looks previously hadn’t worked the same way. The former searched after the last keystroke (with a 500 ms delay), while the latter only searched after Enter was pressed.

Search on every keystroke is great when it’s fast, but string list search is not always fast. It becomes really tedious if more than 500 ms pass between the keystrokes and search gets triggered too early.

Events

  • Want to showcase an event coming up that your community is participating in? Reach out to any l10n-driver and we’ll include that (see links to emails at the bottom of this report)

Useful Links

Questions? Want to get involved?

Did you enjoy reading this report? Let us know how we can improve by reaching out to any one of the l10n-drivers listed above.

Mozilla VR BlogUpdate on Mozilla Mixed Reality


The wider XR community has long supported the Mozilla Mixed Reality team, and we look forward to that continuing as the team restructures itself in the face of recent changes at Mozilla.

Charting the future with Hubs

Going forward we will be focusing much of our efforts on Hubs. Over the last few months we have been humbled and inspired by the thousands of community organizers, artists, event planners and educators who’ve joined the Hubs community. We are increasing our investment in this project, and Hubs is excited to welcome several new members from the Firefox Reality team. We are enthusiastic about the possibilities of remote collaboration, and look forward to making Hubs even better. If you are interested in sharing thoughts on new features or use-cases we would love your input here in our feedback form.

The state of Firefox Reality and WebXR

Having developed a solid initial Firefox Reality offering that brings the web to virtual reality, we are going to continue to invest in standards. We’ll also be supporting our partners, but in light of Covid-19 we have chosen to reduce our investment in broad new features at this time.

At the end of the month, we will release Firefox Reality v12 for standalone VR headsets, our last major release for a while. We’ll continue to support the browser (including security updates) and make updates to support Hubs and our partners. In addition, we’ll remain active in the Immersive Web standards group.

Two weeks ago, we released a new preview for Firefox Reality for PC, which we’ll continue to support. We’ll also continue to provide Firefox Reality for Hololens, and it will be accessible in the Microsoft store.

Finally, for iOS users, the WebXR Viewer will remain available but will no longer be actively maintained.

If anyone is interested in contributing to the work, we welcome open source contributions at:

We're looking forward to continuing our collaboration with the community and we'll continue to provide updates here on the blog, on the Mixed Reality Twitter, and the Hubs Twitter.

The Mozilla BlogFast, personalized and private by design on all platforms: introducing a new Firefox for Android experience

Big news for mobile: as of today, Firefox for Android users in Europe will find an entirely redesigned interface and a fast and secure mobile browser that was overhauled down to the core. Users in North America will receive the update on August 27. Like we did with our “Firefox Quantum” desktop browser revamp, we’re calling this release “Firefox Daylight” as it marks a new beginning for our Android browser. Included with this new mobile experience are lots of innovative features, an improved user experience with new customization options, and some massive changes under the hood. And we couldn’t be more excited to share it.

New Firefox features Android users will love

We have made some very significant changes that could revolutionize mobile browsing:

Privacy & security

  • Firefox for Android now offers Enhanced Tracking Protection, providing a better web experience. The revamped browsing app comes with our highest privacy protections ever – on by default. ETP keeps numerous ad trackers at bay and out of the users’ business, set to “Standard” mode right out of the box to put their needs first. Stricter protections are available to users who want to customize their privacy settings.
  • Additionally, we took the best parts of Firefox Focus, according to its users, and applied them to Private Mode: Now, Private Mode is easily accessible from the Firefox for Android homescreen and users have the option to create a private browsing shortcut on their Android homescreen, which will launch the browsing app automatically in the respective mode and allow users to browse privately on-the-go.


Enhanced Tracking Protection automatically blocks many known third-party trackers, by default, in order to improve user privacy online. Private Mode adds another layer for better privacy on device level.

Appearance & productivity

  • With regard to appearance, we redesigned the user interface of our Android browser completely so that it’s now even cleaner, easier to handle and to make it one’s own: users can set the URL bar at the bottom or top of the screen, improving the accessibility of the most important browser element especially for those with smartphones on the larger side.
  • Taking forever to enter a URL is therefore now a thing of the past, and so are chaotic bookmarks: Collections help to stay organized online, making it easy to return to frequent tasks, share across devices, personalize one’s browsing experience and get more done on mobile. As a working parent, for example, Collections may come in handy when organizing and curating one’s online searches based on type of activity such as kids, work, recipes, and many more. Multitaskers, who want to get more done while watching videos, will also enjoy the new Picture-in-Picture feature.


Productivity is key on mobile. That’s why the new Firefox for Android comes with an adjustable URL bar and a convenient solution to organize bookmarks: Collections.

  • Bright or dark, day or night: Firefox for Android simplifies toggling between Light and Dark Themes, depending on individual preferences, vision needs or environment. Those who prefer an automatic switch may also set Firefox to follow the Android setting, so that the browsing app will switch automatically to dark mode at a certain time of day.
  • Last but not least, we revamped the extensions experience. We know that add-ons play an important role for many Firefox users and we want to make sure to offer them the best possible experience when starting to use our newest Android browsing app. We’re kicking it off with the top 9 add-ons for enhanced privacy and user experience from our Recommended Extensions program. At the same time, we’re continuously working on offering more add-on choice in the future that will seamlessly fit into Firefox for Android.


Firefox users love add-ons! Our overhauled Android browser therefore comes with the top add-ons for enhanced privacy and user experience from our Recommended Extensions program.

What’s new under the hood

The improvements in Firefox for Android don’t just stop here: they even go way beyond the surface as Firefox for Android is now based on GeckoView, Mozilla’s own mobile browser engine. What does that mean for users?

  • It’s faster. The technology we used in the past limited our capability to further improve the browser as well as our options to implement new features. Now, we’re free to decide and our release cycle is flexible. Also, GeckoView makes browsing in Firefox for Android significantly speedier.
  • It’s built on our standards: private and secure. With our own engine we set the ground rules. We can decide independently which privacy and security features we want to make available for our mobile users and are entirely free to cater to our unique high standards.
  • It’s independent, just like our users. Unlike Edge, Brave and Chrome, Firefox for Android is not based on Blink (Google’s mobile engine). Instead Firefox for Android is based on GeckoView, Mozilla’s wholly-built engine. This allows us to have complete freedom of choice when it comes to implementation of standards and features. This independence lets us create a user interface that when combined with an overall faster browsing pace, enables unprecedented performance. Also, it protects our users if there are security issues with Blink as Firefox will not be affected.

User experience is key, in product and product development

Completely overhauling an existing product is a complex process that comes with a high potential for pitfalls. In order to avoid them and create a browsing experience users would truly appreciate, we looked closely at existing features and functionalities users love and we tested – a lot – to make sure we’d keep the promise to create a whole new browsing experience on Android.

  1. Bringing the best recent features from desktop to mobile. Over the last couple of years we’ve been very busy with continuously improving the Firefox desktop browsing experience: We did experiments, launched new tools like Firefox Monitor, Send and Lockwise and took existing features to a whole new level. This includes, amongst others, Dark Mode, the Picture-in-Picture feature, the support of extensions and add-ons as well as, last but not least, the core element of Firefox privacy technology: today, Enhanced Tracking Protection  protects Firefox users from up to 10 billion third-party tracking cookies, fingerprinters and cryptominers per day. Feedback from users showed that they like the direction Firefox is developing into, so we worked hard to bring the same level of protection and convenience to mobile, as well. As a result, users can now experience a better Firefox mobile experience on their Android devices than ever before.
  2. We tested extensively and emphasized direct feedback from mobile users. Over the course of several months, earlier versions of the new Firefox for Android were available as a separate app called Firefox Preview. This enabled us to adequately try out new features, examine the user experience, gather feedback and implement it in accordance with the users’ wishes and needs. And the result of this process is now available.

Try the new Firefox for Android!

We’re proud to say that we provided Firefox for Android with an entirely new shape and foundation and we’re equally happy to share the result with Android users now. Here’s how to get our overhauled browser:

  • Users who have the browser downloaded to their Android devices already will receive the revamp either as an automatic or manual update, depending on their device preferences. Their usage data, such as the browsing history or bookmarks, will be migrated automatically to the new app version, which might take a few moments. Users who have set a master password will need to disable the master password in order for their logins to automatically migrate over.
  • New users can download the update from the Google Play Store as of today. It’s easy to find it as ‘Firefox for Android’ through the search functionality and tapping ‘Install’ will get the process started. The new Firefox for Android supports a wide range of devices running Android 5.0 and above.

Make sure to let us know what you think about the overhauled browsing experience with Firefox for Android and stay tuned for more news in the upcoming months!

UPDATE: We’re all about moving forward. It’s why we overhauled Firefox for Android with a new engine, privacy protections, and a new look. But we know history and looking back is important. So, we’re bringing the “back button” back. We’ll continue to incorporate user feedback as we add new features. So if you haven’t already downloaded it, get the new Firefox for Android now. Added Sept. 2, 2020

The post Fast, personalized and private by design on all platforms: introducing a new Firefox for Android experience appeared first on The Mozilla Blog.

hacks.mozilla.orgAn Update on MDN Web Docs

Last week, Mozilla announced some general changes in our investments and we would like to outline how they will impact our MDN platform efforts moving forward. It hurts to make these cuts, and it’s important that we be candid on what’s changing and why.

First we want to be clear, MDN is not going away. The core engineering team will continue to run the MDN site and Mozilla will continue to develop the platform.

However, because of Mozilla’s restructuring, we have had to scale back our overall investment in developer outreach, including MDN. Our Co-Founder and CEO Mitchell Baker outlines the reasons why here. As a result, we will be pausing support for DevRel sponsorship, Hacks blog and Tech Speakers. The other areas we have had to scale back on staffing and programs include: Mozilla developer programs, developer events and advocacy, and our MDN tech writing.

We recognize that our tech writing staff drive a great deal of value to MDN users, as do partner contributions to the content. So we are working on a plan to keep the content up to date. We are continuing our planned platform improvements, including a GitHub-based submission system for contributors.

We believe in the value of MDN Web Docs as a premier web developer resource on the internet. We are currently planning how to move MDN forward long term, and will develop this new plan in close collaboration with our industry partners and community members.

Thank you all for your continued care and support for MDN,

— Rina Jensen, Director, Contributor Experience

The post An Update on MDN Web Docs appeared first on Mozilla Hacks - the Web developer blog.

The Mozilla BlogA look at password security, Part IV: WebAuthn

As discussed in part III, public key authentication is great in principle but in practice has been hard to integrate into the Web environment. However, we’re now seeing deployment of a new technology called WebAuthn (short for Web Authentication) that hopefully changes that.1

Previous approaches to public key authentication required the browser to provide the user interface. For a variety of reasons (the interfaces were bad, the sites wanted to control the experience) this didn’t work well for sites, and public key authentication didn’t get much adoption. WebAuthn takes a different approach, which is to provide a JavaScript API that the site can use to do public key authentication via the browser.

The key difference here is that previous systems tended to operate at a lower layer (typically HTTP or TLS), which made it hard for the site to control how and when authentication happened.2 By contrast, a JS API puts the site in control so it can ask for authentication when it wants to (e.g., after showing the home page and prompting for the username).

Some Technical Details

WebAuthn offers two new API points that are used by the server’s JavaScript [Technical note: These are buried in the credential management API.]:

  1. makeCredential: Creates a new public key pair and returns the public key.
  2. getAssertion: Signs a challenge provided by the server with an existing credential.

The way this is used in practice is that when the user first registers with the server — or as is more likely now, when the server first adds WebAuthn support or detects that a client has it — the server uses makeCredential() to create a new public key pair and stores the public key, possibly along with an attestation. An attestation is a provable statement such as, “this public key was minted by a YubiKey.” Note that unlike some public key authentication systems, each server gets its own public key so WebAuthn is harder to use for cross-site tracking (more on this later). Then when the user returns, the site uses getAssertion(), causing the browser to sign the server’s challenge using the private key associated with the public key. The server can then verify the assertion, allowing it to determine that the client is the same endpoint as originally registered (for some value of “the same”. More on this later too).
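
To make this concrete, here's a minimal sketch of how those two calls surface in browsers through the credential management API, as navigator.credentials.create() and navigator.credentials.get(). The challenge, user handle, and relying-party values below are placeholders for data a real server would supply, not a production configuration.

async function register() {
  // Registration: ask the browser/authenticator to mint a new key pair.
  const credential = await navigator.credentials.create({
    publicKey: {
      challenge: new Uint8Array(32),          // random bytes supplied by the server
      rp: { name: "Example Site" },           // the relying party, i.e. the site
      user: {
        id: new Uint8Array(16),               // opaque user handle chosen by the server
        name: "alice@example.com",
        displayName: "Alice",
      },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }], // -7 = ECDSA with SHA-256
    },
  });
  // The site sends credential.response (public key + optional attestation) to the server.
  return credential.rawId;
}

async function login(credentialId) {
  // Authentication: ask the browser to sign a fresh server-provided challenge.
  const assertion = await navigator.credentials.get({
    publicKey: {
      challenge: new Uint8Array(32),          // a new challenge from the server
      allowCredentials: [{ type: "public-key", id: credentialId }],
    },
  });
  // The server verifies assertion.response.signature with the public key it stored earlier.
  return assertion;
}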

The clever bit here is that because this is all hidden behind a JS API, the site can authenticate the client at any part of its login experience it wants without disrupting the user experience. In particular, WebAuthn can be used as a second factor in addition to a password or as a primary authenticator without a password.

Hardware Authenticators

The WebAuthn specification doesn’t require any particular mechanism for handling the key pair, so it’s technically possible to implement WebAuthn entirely in the browser, storing the key on the user’s disk. However, the designers of WebAuthn and its predecessor FIDO U2F were very concerned about the user’s machine being compromised and the private key being stolen, which would allow the attacker to impersonate the user indefinitely (just like if your password was compromised).

Accordingly, WebAuthn was explicitly designed around having the key pair in a hardware token. These tokens are designed to do all the cryptography internally and never expose the key, so if your computer is compromised, the attacker may be able to impersonate you temporarily, but they won’t be able to steal the key. This also has the advantage that the token is portable, so you can pull it out of your computer and carry it with you — thus minimizing the risk of your computer being stolen — or plug it into a second computer; it’s the token that matters not the computer it’s plugged into. We’re also starting to see hardware backed designs that don’t depend on a token. For instance, modern Macs have trusted hardware built in to power TouchID and FaceID and Apple is using this to implement WebAuthn. We have been looking at similar designs for Firefox.

While hardware key storage isn’t mandatory, WebAuthn was designed to allow sites to require it. Obviously you can’t just trust the browser when it says that it’s storing the key in hardware and so WebAuthn includes an attestation scheme that is designed to let the site determine the type of token/device being used for WebAuthn. However, there are privacy concerns about the attestation scheme 3 and many sites don’t actually insist on it. Firefox shows a separate prompt (shown below) when the site requests attestation.

Privacy Properties and User Interactivity

While as a technical matter a browser or token could just do all the WebAuthn computations automatically with no user interaction, that’s not really what you want for two reasons:

  1. It allows sites to track users without their consent (this already happens with login fields, which is why Firefox requires that users interact with the page before it fills in their username or password).
  2. It would allow an attacker who had compromised your computer to invisibly log in as you.

In order to prevent this, FIDO-compliant tokens require the user to do something (typically touch the token) before signing an assertion. This prevents invisible tracking or use of the key to log in. Apple’s use of FaceID/TouchID takes this one step further, requiring a specific user to authorize a login, thus protecting you in case your laptop is stolen.

Alternative Designs

If you’re familiar with Web technologies, you might be wondering why we need something new here. In particular, many of the properties of WebAuthn could be replicated with cookies or WebCrypto. However, WebAuthn offers a number of advantages over these alternatives.

First, because WebAuthn requires user interaction prior to authentication, it is much harder to use for tracking. This means the browser doesn't need to clear WebAuthn state when it clears cookie or WebCrypto state, both of which can be used for invisible tracking. It would be possible to add some kind of explicit user-action step before accessing cookies or WebCrypto, but then you would effectively have something new.

Second, when used with keys in hardware, WebAuthn is more resistant to machine compromise. By contrast, cookies and WebCrypto state are generally stored in storage which is available directly to the browser, so if it’s compromised they can be stolen. While this is a real issue, it’s unclear how important it is: many sites use cookies for authentication over fairly long periods (when was the last time Facebook made you actually log in?) and so an attacker who steals your cookies will still be able to impersonate you for a long period. And of course the cost of this is that you have to buy a token.

Adoption Status

Technically, WebAuthn is a pretty big improvement over pre-existing systems. However, authentication systems tend to rely pretty heavily on network effects: it's not worth users enabling it unless a lot of sites use it, and it's not worth sites enabling it unless a lot of users are willing to sign up. So far, indications are pretty promising: a number of important sites such as G Suite and GitHub already support WebAuthn, as do SSO vendors like Okta and Duo. All four major browsers support it as well. With any luck we'll be seeing a lot more WebAuthn deployment over the next few years — a big step forward for user security.

Up Next: Login and Device Encryption

This about wraps it up for remote authentication, but what about logging into your computer or phone? I’ll be covering that next.

Acknowledgement

Thanks to JC Jones and Chris Wood for help with this post.


  1. The WebAuthn spec is pretty hard to read. MDN’s article does a better job. 
  2. For instance, with TLS the easiest thing to do is to authenticate the user as soon as they connect, but this means you don’t get to show any UI, which is awkward for users who don’t yet have accounts. You can also do “TLS renegotiation” later in the connection but for a variety of technical reasons that has proven hard to integrate with servers. In addition, any TLS-level authentication is an awkward fit for CDNs because the TLS is terminated at the CDN, not at the origin. 
  3. The idea behind the attestation mechanism is that the device manufacturer issues a certificate to the device, and the device uses the corresponding private key to sign the newly generated authentication key. However, if that certificate is unique to the device and used for every site then it becomes a tracking vector. The specification suggests two (somewhat clunky) mechanisms for reducing the risk here, but neither is mandatory.

The post A look at password security, Part IV: WebAuthn appeared first on The Mozilla Blog.

Open Policy & AdvocacyPracticing Lean Data and Defending “Lean Data”

At Mozilla, we put privacy first. We do this in our own products with features like tracking protection. We also promote privacy in our public advocacy. A key feature of our privacy work is a commitment to reducing the amount of user data that is collected in the first place. Focusing on the data you really need lowers risk and promotes trust. Our Lean Data Practices page describes this framework and includes tools and tips for staying lean. For years, our legal and policy teams have held workshops around the world, advising businesses on how they can use lean data practices to reduce their data footprint and improve the privacy of their products and services.

Mozilla is not the only advocate for lean data. Many, many, many, many, many, many, many, many, many others use the term “lean data” to refer to the principle of minimizing data collection. Given this, we were very surprised to receive a demand letter from lawyers representing LeanData, Inc. claiming that Mozilla’s Lean Data Practices page infringes the company’s supposed trademark rights. We have responded to this letter to stand up for everyone’s right to use the words “lean data” in digital advocacy.

Our response to LeanData explains that it cannot claim ownership of a descriptive term such as “lean data.” In fact, when we investigated its trademark filings we discovered that the US Patent and Trademark Office (USPTO) had repeatedly rejected the company’s attempts to register a wordmark that covered the term. The USPTO has cited numerous sources, including the very Mozilla page LeanData accused of infringing, as evidence that “lean data” is descriptive. Also, the registration for LeanData’s logo cited in the company’s letter to Mozilla was recently cancelled (and it wouldn’t cover the words “lean data” in any event). LeanData’s demand is without merit.

In a follow-up letter, LeanData, Inc. acknowledged that it does not have any currently registered marks on “lean data.” LeanData’s lawyer suggested, however, the company will continue to pursue its application for a “LeanData” wordmark. We believe the USPTO should, and will, continue to reject this application. Important public policy discussions must be free from intellectual property overreach. Scholars, engineers, and commentators should be allowed to use a descriptive term like “lean data” to describe a key digital privacy principle.

The post Practicing Lean Data and Defending “Lean Data” appeared first on Open Policy & Advocacy.

SeaMonkeySeaMonkey 2.53.4 Beta 1 released!

Hi All,

The SeaMonkey project team would like to announce the release of SeaMonkey 2.53.4 Beta 1!

Please do keep safe and healthy.

:ewong

PS: had comments to add, but just wasn’t into it.  the situation around the world is just unsettling.

SUMO BlogAdjusting to changes at Mozilla

Earlier last week, Mozilla announced a number of changes and these changes include aspects of SUMO as well.

For a high level overview of these changes, we encourage you to read Mitchell’s address to the community. For Support, the most immediate change is that we will be creating a more focused team that combines Pocket Support and Mozilla Support into a single team.

We want to take a moment to stress that Mozilla remains fully committed to our Support team and community, and the team changes are in no way a reflection on Mozilla’s focus on Support moving forward. The entire organization is grateful for all the hard work the community does everyday to support the products we all love. Community is the heart of Mozilla, and that can be said for our support functions as well. As we make plans as a combined Support team, we’d love to hear from you as well, so please feel free to reach out to us.

We very much appreciate your patience while we adjust to these changes.

On behalf of the Support team – Rina

Firefox UXLean UR: How Firefox Lite finds user insights

The 4 steps our product team applies to find valuable insights that support us to make the right decisions.

Firefox Lite is a lightweight browser made for emerging-market users with storage and data constraints on their mobile devices. Last week, Firefox Lite reached 1.6M monthly active users, the highest number it has had so far. 🙌

To keep improving Firefox Lite and provide a better experience for our users, we did some A/B testing on Top Sites, one of the most used features in Firefox Lite. We looked into the telemetry data, listed assumptions, and ran A/B tests, which was taxing on the entire team (PM, UX, devs, QA, and data analysts). Unfortunately, the test failed.

<figcaption>Photo by Andrik Langfield on Unsplash</figcaption>

“Why did the test fail?” we asked ourselves during our sprint retrospective.

It turned out that our A/B test was too complex for a small team. Six groups with different Top Sites variants were rolled out at the same time to answer the three assumptions we listed. We were too ambitious in trying to find every answer that could influence the UX design, instead of narrowing down the questions by making our own decisions based on learnings from previous user research. In the end, the test scope was so aggressive that it left no room for human error as team members handed work from one to another.

We need to level up and transform how we do research: to take a lean approach that combines qualitative user research and quantitative data analysis on the same group of users, at the same time. And most importantly, to ask the right questions so we can make the right design decisions.

The Firefox Lite team

Our product team consists of dedicated team members and shared resources. The team is small but flexible in supporting each other because research resources are shared: product managers can dig into telemetry data, and designers can run user tests.

<figcaption>Our research resource is shared with other product teams in Mozilla Taipei</figcaption>

As at most fast-paced software companies, things change quickly. Sometimes we swap priorities based on changes in our environment or new goals that the organization sets. Therefore, moving fast to adapt to changes and efficiently finding answers to research questions is key. Mixing qualitative and quantitative methods in user research helps us reduce the time and effort required.

Our recent research projects for Firefox Lite mostly start with insights from telemetry data, then dig deeper through qualitative user research that follows up on the "why". We're lucky to have a talented user researcher and data scientists collaborating closely as a team while running research projects. Their qualitative and quantitative mindsets complement each other in the research they conduct.

If you put user needs in mind and orient yourself by asking “why” questions to the user scenarios, you will be able to translate the quantitative data with qualitative insights.
- Lany, senior data scientist in Firefox Lite team

How to Lean UR?

The spirit of Lean UR is to move fast and respect shared research resources. The following are the best practices we developed from our lessons learned:

Step 1: Frame research objectives and questions that match the project goals.

🧑‍💻 Who is involved: PM and UX

Product managers are usually the people who initiate a project and set goals for it based on the initial findings from the telemetry data. To make sure the research objectives are matching the project goals, we found it more efficient if the PM and UX draft research objectives and questions before inviting researchers to kick-start it.

Step 2: Prioritize the research questions.

🧑‍💻 Who is involved: PM and UX

As with the failed test I mentioned at the beginning, we had designed an A/B test with a scope that was too big and complicated for a small team. I would suggest asking the team some questions when deciding whether or not to run a research project:

  1. Is it really something that you need to know before making the right design decisions?
    Any previous research that can provide some pieces of knowledge which can guide you to make good decisions?
  2. Is the usage rate of the feature high enough to impact the entire product significantly if you find the best solution by doing user tests or A/B tests?
    Let’s say the feature only has 4% of the usage rate. How can it help with the overall retention rate if you enhance it to 100%?

Here’s another story: In the project “Home Customization”, we were considering running an A/B test on the feature button (which is next to the search bar on Firefox Lite Home) to see if the Smart Shopping Search or Private Mode is the best bet for app retention.

After looking at the usage data and previous user testing results, we made the decision without spending time and effort on running a test. We're confident in placing Private Mode next to the search bar, as it makes more sense in the UI layout and the usage of Private Mode is much higher than that of Smart Shopping Search.

The decision was made immediately, and our research resources were saved for the next big project. Wonderful! 🙌

Step 3: Sit together with user researchers and data scientists to identify the right methods to answer research questions.

🧑‍💻 Who is involved: PM, UX, UR, and Data scientists

After prioritizing the research questions, it's now clear to our user researcher and data scientists which important ones to move forward on and how to find the best combination of research methods. The following tips help keep tests lean and make effective use of research resources:

Quantitative research

Tip #1: Make a small number of A/B test groups and iterate multiple times.

Tip #2: Collect the data only when it’s essential to the research questions. Explore ways to collect data that is not sacrificing users’ privacy.

Collecting telemetry data consciously not only protects your users' privacy but also saves developers the effort of setting up unnecessary telemetry. Moreover, it helps you build trust with users and reduces operational risk in your organization.

If you don’t need a piece of data, don’t collect it.
If you need a piece of data, keep it for only as long as necessary and anonymize the data before you store it.
Lean Data Practices, Mozilla

Qualitative research

Tip #1: Evaluate the research scope and invite designers to support small and medium design validation.

Not all research has to be conducted rigorously to find valuable insights. We categorize user research with 3 levels in terms of scope:

<figcaption>Types of user research in small, medium, large scopes</figcaption>
  • [Small scope] Internal design validation
    This is a test that can be conducted by designers. Sometimes a quick-and-dirty user test in the office reveals the majority of the insights needed for an iteration. Examples like icon testing for a new feature or usability testing for a new menu panel can be conducted frequently at any stage of the design process.
  • [Medium scope] External design validation
    If the project includes big changes to a high-visibility feature or user flows that are not common in the industry, we'll consider running a remote user test or survey to validate the design with global participants. Unmoderated usability tests on UserTesting.com save time when testing with a number of participants. However, because there is no moderator, there is little tolerance for an unclear test script. Pilot tests to check the wording and task flow are essential to the success of the test. We found it easier to keep test scripts unbiased and direct when more than one person works on them, so sometimes a designer drafts the script, which is then reviewed and shipped by our user researcher.
  • [Large scope] Fundamental user research
    Firefox Lite has some large-scope fundamental user research that helps us understand our users more. Research projects like Persona, Push factors, and Pull factors require the expertise of our user researcher and highly rely on the collaboration with our data scientists. That’s why we have designers supporting the design validations so that our researchers can contribute more time and effort on building fundamental knowledge for Firefox Lite.

Tip #2: User testing with no more than 5 participants.

Three to five is the magic number for recruiting participants, because you'll find more and more overlapping insights if you test with additional participants in the same category. The article Jakob Nielsen wrote 20 years ago is still a good reference for today's user researchers. Highly recommended.

Step 4: Review results simultaneously to generate comprehensive insights.

🧑‍💻 Who is involved: UR, and Data scientists

It’s natural to have comprehensive insights from both ends because our user researcher and data scientists have worked as a team since the beginning of the planning stage. Take Project Persona as an example: deep-diving into telemetry data can help us categorize users into several groups of personas, and user interviews can reveal the “why” under their behavioral patterns.

Interested in the Project Persona? Click here to read more stories.

Research insights are one of the tools that help us get greater confidence in making product decisions.

There are always insights to be gained from tests, but you may not need every piece of evidence to avoid making the wrong product decisions. Our lean UR approach lowers the time and effort spent on unnecessary tests so we can focus more on important features and test and iterate on them more.

We’re continuously exploring and polishing our lean UR process. Feel free to share your thoughts with me on how your organization runs research projects :)

<figcaption>The picture of Firefox Lite team celebrating for a new achievement on our user base.</figcaption>

Special thanks to our user researcher, Ivonne Chen, and Data scientist, Lany Liu for generously sharing their work experiences and thoughts on research that were summarized in this post.

References:

  1. Simultaneous Triangulation: Mixing User Research & Data Science Methods — Colette Kolenda
  2. Why You Only Need to Test with 5 Users — Jakob Nielsen

Lean UR: How Firefox Lite finds user insights was originally published in Firefox User Experience on Medium, where people are continuing the conversation by highlighting and responding to this story.

hacks.mozilla.orgjs13kGames 2020: A lean coding challenge with WebXR and Web Monetization

Have you heard about the js13kGames competition? It’s an online code-golfing challenge for HTML5 game developers. The month-long competition has been happening annually since 2012; it runs from August 13th through September 13th. And the fun part? We set the size limit of the zip package to 13 kilobytes, and that includes all sources—from graphic assets to lines of JavaScript. For the second year in a row you will be able to participate in two special categories: WebXR and Web Monetization.

js13kGames 2020

WebXR entries

The WebXR category started in 2017, when it was introduced as the A-Frame category. We allowed the A-Frame framework to be used outside the 13k size limit. All you had to do was link to the provided JavaScript library in the head of your index.html file.

<!DOCTYPE HTML>
<html>
<head>
  <script src="https://js13kgames.com/webxr-src/aframe.js"></script>
  <!-- ... -->
</head>
<!-- ... -->

Then, the following year, 2018, we added Babylon.js to the list of allowed libraries. Last year, we made Three.js another library option.

js13kGames 2020 WebXR

Which brings us to 2020. Amazingly, the top prize in this year’s WebXR category is a Magic Leap device, thanks to the Mozilla Mixed Reality team.

Adding Web Monetization

Web Monetization is the newest category. Last year, it was introduced right after the W3C Workshop about Web Games. The discussion had identified discoverability and monetization as key challenges facing indie developers. Then, after the 2019 competition ended, some web monetized entries were showcased at MozFest in London. You could play the games and see how authors were paid in real time.

A few months later, Enclave Games was awarded a Grant for the Web, which meant the Web Monetization category in js13kGames 2020 would happen again. For a second year, Coil is offering free membership coupon codes to all participants, so anyone who submits an entry or just wants to play can become a paying web-monetized user.

js13kGames 2020 Web Monetization

Implementation details

Enabling the Web Monetization API in your entry is as straightforward as the WebXR implementation. Once again, all you do is add one tag to the head of your index:

<!DOCTYPE HTML>
<html>
<head>
  <meta name="monetization" content="your_payment_pointer">
  <!-- ... -->
</head>
<!-- ... -->

Voila, your game is web monetized! Now you can work on adding extra features to your own creations based on whether or not the visitor playing your game is monetized.

function startEventHandler(event){
  // user monetized, offer extra content
}
document.monetization.addEventListener('monetizationstart', startEventHandler);

The API allows you to detect this, and provide extra features like bonus points, items, secret levels, and much more. You can get creative with monetizable perks.
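
As a small illustration (based on the draft Web Monetization API at the time of writing), here's one way a game might gate a hypothetical bonus feature on the visitor's monetization state; unlockBonusLevel() is a placeholder for your own game logic:

function unlockBonusLevel() {
  // Placeholder for your own game logic: extra levels, skins, items, etc.
}

// document.monetization only exists for web-monetized visitors (e.g. Coil members).
if (document.monetization) {
  if (document.monetization.state === 'started') {
    // Payment is already streaming.
    unlockBonusLevel();
  } else {
    // Otherwise, wait for the payment stream to start.
    document.monetization.addEventListener('monetizationstart', unlockBonusLevel);
  }
}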

Learn from others

Last year, we had 28 entries in the WebXR category, and 48 entries in Web Monetization, out of the 245 total. Several submissions were entered into two categories. The best part? All the source code—from all the years, for all the entries—is available in a readable format in the js13kGames repo on GitHub, so you can see how anything was built.

js13kGames 2019 posts

Also, be sure to read lessons learned blog posts from previous years. Participants share what went well and what could have been improved. It’s a perfect opportunity to learn from their experiences.

Take the challenge

Remember: The 13 kilobyte zip size limit may seem daunting, but with the right approach it's doable. If you avoid big images and procedurally generate as much as possible, you should be fine. Plus, the WebXR libraries allow you to build a scene with components faster than starting from scratch. And most of those entries didn't even use all of the allocated zip file space!
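
As a tiny illustration of the procedural-generation idea (a sketch, not taken from any particular entry), a texture can be drawn on a canvas at runtime instead of being shipped as an image asset:

// Generate a 16x16 checkerboard tile at runtime: zero bytes of image assets.
const tile = document.createElement('canvas');
tile.width = tile.height = 16;
const ctx = tile.getContext('2d');
for (let y = 0; y < 16; y++) {
  for (let x = 0; x < 16; x++) {
    ctx.fillStyle = (x + y) % 2 ? '#223' : '#446';
    ctx.fillRect(x, y, 1, 1);
  }
}
// The tile can then be drawn onto the game canvas with drawImage(tile, x, y).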

Personally, I’m hoping to see more entries in the WebXR and Web Monetization categories this year. Good luck to all, and don’t forget to have fun!

The post js13kGames 2020: A lean coding challenge with WebXR and Web Monetization appeared first on Mozilla Hacks - the Web developer blog.

The Mozilla BlogChanging World, Changing Mozilla

This is a time of change for the internet and for Mozilla. From combatting a lethal virus and battling systemic racism to protecting individual privacy — one thing is clear: an open and accessible internet is essential to the fight.

Mozilla exists so the internet can help the world collectively meet the range of challenges a moment like this presents. Firefox is a part of this. But we know we also need to go beyond the browser to give people new products and technologies that both excite them and represent their interests. Over the last while, it has been clear that Mozilla is not structured properly to create these new things — and to build the better internet we all deserve.

Today we announced a significant restructuring of Mozilla Corporation. This will strengthen our ability to build and invest in products and services that will give people alternatives to conventional Big Tech. Sadly, the changes also include a significant reduction in our workforce by approximately 250 people. These are individuals of exceptional professional and personal caliber who have made outstanding contributions to who we are today. To each of them, I extend my heartfelt thanks and deepest regrets that we have come to this point. This is a humbling recognition of the realities we face, and what is needed to overcome them.

As I shared in the internal message sent to our employees today, our pre-COVID plan for 2020 included a great deal of change already: building a better internet by creating new kinds of value in Firefox; investing in innovation and creating new products; and adjusting our finances to ensure stability over the long term.  Economic conditions resulting from the global pandemic have significantly impacted our revenue. As a result, our pre-COVID plan was no longer workable. Though we’ve been talking openly with our employees about the need for change — including the likelihood of layoffs — since the spring, it was no easier today when these changes became real. I desperately wish there was some other way to set Mozilla up for long term success in building a better internet.

But to go further, we must be organized to be able to think about a different world. To imagine that technology will become embedded in our world even more than it is, and we want that technology to have different characteristics and values than we experience today.

So going forward we will be smaller. We’ll also be organizing ourselves very differently, acting more quickly and nimbly. We’ll experiment more. We’ll adjust more quickly. We’ll join with allies outside of our organization more often and more effectively. We’ll meet people where they are. We’ll become great at expressing and building our core values into products and programs that speak to today’s issues. We’ll join and build with all those who seek openness, decency, empowerment and common good in online life.

I believe this vision of change will make a difference — that it can allow us to become a Mozilla that excites people and shapes the agenda of the internet. I also realize this vision will feel abstract to many. With this in mind, we have mapped out five specific areas to focus on as we roll out this new structure over the coming months:

  1. New focus on product. Mozilla must be a world-class, modern, multi-product internet organization. That means diverse, representative, focused on people outside of our walls, solving problems, building new products, engaging with users and doing the magic of mixing tech with our values. To start, that means products that mitigate harms or address the kinds of the problems that people face today. Over the longer run, our goal is to build new experiences that people love and want, that have better values and better characteristics inside those products.
  2. New mindset. The internet has become the platform. We love the traits of it — the decentralization, its permissionless innovation, the open source underpinnings of it, and the standards part — we love it all. But to enable these changes, we must shift our collective mindset from a place of defending, protecting, sometimes even huddling up and trying to keep a piece of what we love to one that is proactive, curious, and engaged with people out in the world. We will become the modern organization we aim to be — combining product, technology and advocacy — when we are building new things, making changes within ourselves and seeing how the traits of the past can show up in new ways in the future.
  3. New focus on technology. Mozilla is a technical powerhouse of the internet activist movement. And we must stay that way. We must provide leadership, test out products, and draw businesses into areas that aren’t traditional web technology. The internet is the platform now with ubiquitous web technologies built into it, but vast new areas are developing (like Wasmtime and the Bytecode Alliance vision of nanoprocesses). Our vision and abilities should play in those areas too.
  4. New focus on community. Mozilla must continue to be part of something larger than ourselves, part of the group of people looking for a better internet. Our open source volunteers today — as well as the hundreds of thousands of people who donate to and participate in Mozilla Foundation’s advocacy work — are a precious and critical part of this. But we also need to go further and think about community in new ways. We must be increasingly open to joining others on their missions, to contribute to the better internet they’re building.
  5. New focus on economics. Recognizing that the old model where everything was free has consequences means we must explore a range of different business opportunities and alternate value exchanges. How can we lead towards business models that honor and protect people while creating opportunities for our business to thrive? How can we, or others who want a better internet, or those who feel a different balance should exist between social and public benefit and private profit, offer an alternative? We need to identify those people and join them. We must learn and expand different ways to support ourselves and build a business that isn't what we see today.

We’re fortunate that Firefox and Mozilla retain a high degree of trust in the world. Trust and a feeling of authenticity feel unusual in tech today. But there is a sense that people want more from us. They want to work with us, to build with us. The changes we are making today are hard. But with these changes we believe we’ll be ready to meet these people — and the challenges and opportunities facing the future of the internet — head on.

The post Changing World, Changing Mozilla appeared first on The Mozilla Blog.

Firefox UXSimplifying the Complex: Crafting Content for Meaningful Privacy Experiences

How content strategy simplified the language and improved content design around a core Firefox feature.

Image of a shield on purple background with the words “Enhanced Tracking Protection.”<figcaption>Enhanced Tracking Protection is a feature of the Firefox browser that automatically protects your privacy behind the scenes.</figcaption>

Firefox protects your privacy behind the scenes while you browse. These protections work invisibly, blocking advertising trackers, data collectors, and other annoyances that lurk quietly on websites. This core feature of the Firefox browser is called Enhanced Tracking Protection (ETP). When the user experience team redesigned the front-end interface, content strategy led efforts to simplify the language and improve content design.

Aligning around new nomenclature

The feature had been previously named Content Blocking. Content blockers are extensions that block trackers placed by ads, analytics companies, and social media. It’s a standardized term among tech-savvy, privacy-conscious users. However, it wasn’t well understood in user testing. Some participants perceived the protections to be too broad, assuming it blocked adult content (it doesn’t). Others thought the feature only blocked pop-ups.

This was an opportunity to improve the clarity and comprehension of the feature name itself. The content strategy team renamed the feature to Enhanced Tracking Protection.

A chart outlining the differences between Content Blocking and Enhanced Tracking Protection.<figcaption>The feature had been previously called Content Blocking, which was a reflection of its technical functionality. To bring focus to the feature’s end benefits to users, we renamed it to Enhanced Tracking Protection.</figcaption>

Renaming a core browser feature is 5 percent coming up with a name, 95 percent getting everyone on the same page. Content strategy led the communication and coordination of the name change. This included alignment efforts with marketing, localization, support, engineering, design, and legal.

Revisiting content hierarchy and information architecture

To see what Firefox blocked on a particular website, you can select the shield to the left of your address bar.

Image of shield in the Firefox address bar highlighted.<figcaption>To access the Enhanced Tracking Protection panel in Firefox, select the shield to the left of the address bar.</figcaption>

Previously, this opened a panel jam-packed with an overwhelming amount of information — only a portion of which pertained to the actual feature. User testing participants struggled to parse everything on the panel.

The new Enhanced Tracking Protection panel needed to do a few things well:

  • Communicate if the feature was on and working
  • Make it easy to turn the feature off and back on
  • Provide just-enough detail about which trackers Firefox blocks
  • Offer a path to adjust your settings and visit your Protections Dashboard
Image of the previous Content Blocking panel alongside the new Enhanced Tracking Protection Panel.<figcaption>The panel previously included information about a site’s connection, the Content Blocking feature, and site-level permissions. The redesigned panel focuses only on Enhanced Tracking Protection.</figcaption>

We made Enhanced Tracking Protection the panel’s only focus. Information pertaining to a site’s security and permissions were moved to a different part of the UI. We made design and content changes to reinforce the feature’s on/off state. Additional modifications were made to the content hierarchy to improve scannability.

Solving problems without words

A chart outlining variations of button copy and the recommendation to change the design element entirely.<figcaption>Users weren't able to quickly turn the feature on and off. Clearer button copy alone couldn't solve the problem.</figcaption>

Words alone can’t solve certain problems. The enable/disable button is a perfect example. Users can go to their settings and manually opt in to a stricter level of Enhanced Tracking Protection. Websites occasionally don’t work as expected in strict ETP. There’s an easy fix: Turn it off right from the panel.

User testing participants couldn’t figure out how to do it. Though there was a button on the panel, its function was far from obvious. A slight improvement was made by updating the copy from ‘enable/disable’ to ‘turn on/turn off.’ Ultimately, the best solution was not better button copy. It was removing the button entirely and replacing it with a different design element.

A chart outlining the differences between a button and a switch design element.<figcaption>A switch better communicated the on/off state of the feature.</figcaption>

We also moved this element to the top of the panel for easier access.

Image of the previous button, which read “Disable Blocking for this Site” beside the new switch element.<figcaption>User testing participants struggled to understand how to turn the feature off. The solution was not better button copy, but replacing the button with an on/off switch and moving it higher up on the panel for better visibility.</figcaption>

Lastly, we added a sub-panel to inform users how turning off ETP might fix their problem. We used one of the best-kept secrets of the content strategy trade: a bulleted list to make this sub-panel scannable.

Image of Enhanced Tracking Protection panel and its sub-panel, which explains reasons why a site might not be working.<figcaption>A sub-panel outlines reasons why you might want to turn Enhanced Tracking Protection off.</figcaption>

Improving the clarity of language on Protections Dashboard

An image of the previous Protections Dashboard beside the revised content and design.<figcaption>Adding clarifying language to Protections Dashboard provides an overview of the page, offers users a path to adjust their settings, and reinforces that the Enhanced Tracking Protection feature is always on.</figcaption>

Firefox also launched a Protections Dashboard to give users more visibility into their privacy and security protections. After user research was conducted, we made further changes to the content design and copy. All credit goes to my fellow content strategist Meridel Walkington for making these improvements.

A chart outlining issues identified in user research and changes that were made.<figcaption>Content strategy recommended improvements to the content design and language of the Protections Dashboard to improve comprehension.</figcaption>

Explaining jargon in clear and simple terms

Jargon can’t always be avoided. Terms like ‘cryptominers,’ ‘fingerprinters,’ and other trackers Firefox blocks are technical by nature. Most users aren’t familiar with these terms, so the Protections Dashboard offers a short definition to break down each in clear, simple language. We also offer a path for users to explore these terms in more detail. The goal was to provide just-enough information without overwhelming users when they landed on their dashboard.

Descriptions for types of trackers that Firefox blocks.<figcaption>Descriptions of each type of tracker help explain terms that are inherently technical.</figcaption>
“Mozilla doesn’t just throw stats at users. The dashboard has a minimalist design and uses color-coded bar graphs to provide a simple overview of the different types of trackers blocked. It also features explainers clearly describing what the different types of trackers do.” — Fast Company: Firefox at 15: its rise, fall, and privacy-first renaissance

Wrapping up

Our goal in creating meaningful privacy experiences is to educate and empower users without overwhelming and paralyzing them. It’s an often delicate dance that requires deep partnership between product management, engineering, design, research and content strategy. Enhanced Tracking Protection is just one example of this type of collaboration. For any product to be successful, it’s important that our cross-functional teams align early on the user problems so we can design the best experience to meet them where they are.

Acknowledgements

Thank you to Michelle Heubusch for your ongoing support in this work and to Meridel Walkington for making it even better. All user research conducted by Alice Rhee. Design by Bryan Bell and Eric Pang.


Simplifying the Complex: Crafting Content for Meaningful Privacy Experiences was originally published in Firefox User Experience on Medium, where people are continuing the conversation by highlighting and responding to this story.

Mozilla UXDriving Value as a Tiny UX Content Team: How We Spend Content Strategy Resources Wisely

Photo of a small ant carrying a green leaf across the ground.

Source: Vlad Tchompalov, Unsplash.


Our tiny UX content strategy team works to deliver the right content to the right users at the right time. We make sure product content is useful, necessary, and appropriate. This includes everything from writing an error message in Firefox to developing the full end-to-end content experience for a stand-alone product.

Mozilla has around 1,000 employees, and many of those are developers. Our UX team has 20 designers, 7 researchers, and 3 content strategists. We support the desktop and mobile Firefox browsers, as well as satellite products.

There’s no shortage of requests for content help, but there is a shortage of hours and people to tackle them. When the organization wants more of your time than you actually have, what’s a strategic content strategist to do?

1. Identify high-impact projects and work on those

We prioritize our time for the projects with high impact — those that matter most to the business and reach the most end users. Tactically, this means we do the following:

  • Review high-level organizational goals as a team at the beginning of every planning cycle.
  • Ask product managers to identify and rank order the projects needing content strategy support.
  • Evaluate those requests against the larger business priorities and estimate the work. Will this require 10 percent of one content strategist’s time? 25 percent?
  • Communicate out the projects that content strategy will be able to support.

By structuring our work this way, we aim to develop deep expertise on key focus areas rather than surface-level understanding of many.

2. Embed yourself and partner with product management

People sometimes think ‘content strategist’ is another name for ‘copywriter.’ While we do write the words that appear in the product, writing interface copy is about 10–20 percent of our work. The lion’s share of a content strategist’s day is spent laying the groundwork leading up to writing the words:

  • Driving clarity
  • Collaborating with designers
  • Understanding technical feasibility and constraints
  • Conducting user research
  • Developing processes
  • Aligning teams
  • Documenting and communicating decisions up and out

To do this work, we embed with cross-functional teams. Content strategy shows up early and stays late. We attend kickoffs, align on user problems, crystallize solutions, and act as connective tissue between the product experience and our partners in localization, legal, and the support team. We often stay involved after designs are handed off because content decisions have long tentacles.

We also work closely with our product managers to ensure that the strategy we bring to the table is the right one. A clear up-front understanding of user and business goals leads to more thoughtful content design.

3. Participate in and lead research

A key way we shape strategy is by participating in user research (as observers and note-takers) and leading usability studies.

“I’m not completely certain what you mean when you say, ‘found on your device.’ WHICH device?” — Usability study participant on a redesigned onboarding experience (User Research Firefox UX, Jennifer Davidson, 2020)

Designing a usability study not only requires close partnership with design and product management, but also forces you to tackle product clarity and strategy issues early. It gives you the opportunity to test an early solution and hear directly from your users about what’s working and what’s not. Hearing the language real people use is enlightening, and it can help you articulate and justify the decisions you make in copy later on.

4. Collaborate with each other

Just because your team works on different projects doesn’t mean you can’t collaborate. In fact, it’s more of a reason to share and align. We want to present our users with a unified end experience, no matter when they encounter our product — or which content strategist worked on a particular aspect of it.

  • Peer reviews. Ask one of your content strategy teammates to provide a second set of eyes on your work. This is especially useful for identifying confusing terminology or jargon that you, as the dedicated content strategist, may have adopted as an ‘insider.’ Reviews also keep team members in the loop so they can spread the word to other project teams.
  • Document your decisions and why you made them. This builds your team’s institutional memory, saving you time when you (inevitably) need to dig back into those decisions later on.
  • Share learnings. Did you just complete a usability testing study that revealed a key insight about language? Share it! Are you developing a framework to help your team think through how to approach product discovery? Share it! The more we can share what we’re each learning individually, the better we can learn and grow as a team.
  • Content-only team meetings. At our weekly team meetings, we bring requests for feedback, content challenges we’re struggling with, and other in-the-weeds content topics like how our capitalization rules are evolving. It’s important to have a space to focus on your specialized craft.
  • Legal and localization meetings. Our team also meets weekly with product counsel for legal review and monthly with our localization team to discuss terminology. What your teammates bring to those meetings provides visibility into content-related discussions outside of our day-to-day purview.
  • Team work days. A few times a year, we clear our calendars and spend the day together. This is an opportunity for us to check in. What’s going well? What would we like to improve? What would we like to accomplish as a team in the coming months?

Screenshot of a Slack message asking another team member to review a document about desktop and see where it aligns or diverges with mobile.

Cross-platform coordination between content strategy colleagues

5. Communicate the impact of your work

In addition to delivering the work, it’s critical that you evangelize it. Content strategists may love a long copy deck or beautifully formatted spreadsheet, but a lot of people…don’t. And many folks still don’t understand what a product content strategist or UX writer does, especially as an embedded team member.

To make your work and impact tangible, take the time to package it up for demo presentations and blogs. We’re storytellers. Put those skills to work to tell the story of how you arrived at the end result and what it took to get there.

Look for opportunities in your workplace to share — lunch-and-learns, book clubs, lightning talks, etc. When you get to represent your work alongside your team members, people outside of UX can begin to see that you aren’t just a Lorem ipsum magician.

6. Create and maintain a style guide

Creating a style guide is a massive effort, but it’s well worth the investment. First and foremost, it’s a tool that helps the entire team write from a unified perspective and with consistency.

Style guides need to be maintained and updated. The ongoing conversations you have with your team about those changes ensure you consider a variety of use cases and applications. A hard-and-fast rule established for your desktop product might not work so well on mobile or a web-based product, and vice versa. As a collective team, we can make better decisions as they apply across our entire product ecosystem.

A style guide also scales your content work — even if you aren’t the person actually doing the work. The reality of a small content team is that copy will need to be created by people outside of your group. With a comprehensive style guide, you can equip those people with guidance. And, while style guides require investment upfront, they can save you time in the long-term by automating decisions already made so you can focus on solving new problems.

Screenshot of the punctuation guidelines for the Firefox Photon Design System.

The Firefox Photon Design System includes copy guidelines.


7. Keep track of requests, even when you can’t cover them

Even though you won’t be able to cover it all, it’s a good idea to track and catalogue all of the requests you receive. It’s one thing to just say more content strategy support is needed — it’s another to quantify it. To prioritize your work, you already had to document the projects somewhere, anyway.

Our team captures every request in a staffing spreadsheet and estimates the percentage of one content strategist's time that the work would require. Does that mean you should be providing 100 percent coverage for everything? Unfortunately, it'd be impossible to deliver 100 percent on all of it. But it's eye opening to see that we'd need more people just to cover the business-critical requests.

Here’s a link for creating your own staffing requests tracker.

Screenshot of a spreadsheet for tracking staffing requests.

Template for tracking staffing requests

Closing thoughts

To realize the full breadth and depth of your impact, content strategists should prioritize strategically, collaborate deeply, and document and showcase their work. When you’re building your practice and trying to demonstrate value, it’s tough to take this approach. It’s hard to get a seat at the table initially, and it’s hard to prioritize requests. But even doing a version of these things can yield results — maybe you are able to embed on just one project, and the rest of the time you are tackling a deluge of strings. That’s okay. That’s progress.

Acknowledgements

Special thanks to Michelle Heubusch for making it possible for our tiny team to work strategically. And thanks to Sharon Bautista for editing help.

Blog of DataImproving Your Experience across Products

When you log into your Firefox Account, you expect a seamless experience across all your devices. In the past, we weren’t doing the best job of delivering on that experience, because we didn’t have the tools to collect cross-product metrics to help us make educated decisions in a way that fulfilled our lean data practices and our promise to be a trusted steward of your data. Now we do.

Firefox 83 will include new telemetry measurements that help us understand the experience of Firefox Account users across multiple products, answering questions such as: Do users who set up Firefox Sync spend more time on desktop or mobile devices? How is Firefox Lockwise, the password-manager built into the Firefox desktop browser, used differently than the Firefox Lockwise apps? We will use the unique privacy features of Firefox Accounts to answer questions like these while staying true to Mozilla’s data principles of necessity, privacy, transparency, and accountability–in particular, cross-product telemetry will only gather non-identifiable interaction data, like button clicks, used to answer specific product questions.

We achieve this by introducing a new telemetry ping that is specifically for gathering cross-product metrics. This ping will include a pseudonymous account identifier or pseudonym so that we can tell when two pings were created by the same Firefox Account. We manage this pseudonym in a way that strictly limits the ability to map it back to the real user account, by using the same technology that keeps your Firefox Sync data private (you can learn more in this post if you’re curious). Without getting into the cryptographic details, we use your Firefox Account password as the basis for creating a secret key on your client. The key is known only to your instances of Firefox, and only when you are signed in to your Firefox account, and we use that key to generate the pseudonym for your account.
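
As a rough conceptual sketch only (not Mozilla's actual implementation), a key-derivation function such as HKDF can turn a client-side secret into a stable, non-reversible identifier; here accountKeyBytes is a stand-in for the secret key derived from the account password, and the salt and info strings are illustrative:

// Conceptual sketch: derive a pseudonym from a client-side secret with HKDF.
async function derivePseudonym(accountKeyBytes) {
  const keyMaterial = await crypto.subtle.importKey(
    "raw", accountKeyBytes, "HKDF", false, ["deriveBits"]
  );
  const bits = await crypto.subtle.deriveBits(
    {
      name: "HKDF",
      hash: "SHA-256",
      salt: new Uint8Array(32),                              // fixed, public salt
      info: new TextEncoder().encode("telemetry-pseudonym"), // domain separation
    },
    keyMaterial,
    256
  );
  // The result is stable per account but cannot be reversed to recover the key.
  return Array.from(new Uint8Array(bits))
    .map(b => b.toString(16).padStart(2, "0"))
    .join("");
}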

We’ll have more to share soon. As we gain more insight into Firefox, we’ll come back to tell you what we learn!

[Edited on 2020-10-06 to change Firefox 81 to Firefox 83]

Mozilla VR BlogWhat's new in ECSY v0.4 and ECSY-THREE v0.1


We just released ECSY v0.4 and ECSY-THREE v0.1!

Since the initial release of ECSY we have been focusing on API stability and bug fixing, as well as providing some features (such as component schemas) to improve the developer experience and provide better validation and descriptive errors when working in development mode.

The API layer hasn't changed that much since its original release but we introduced some new concepts:

Component schemas: Components are now required to have a schema (unless you are defining a TagComponent).
Defining a component schema is really simple: you just need to define the properties of your component, their types, and, optionally, their default values:

import { Component, Types } from "ecsy";

class Velocity extends Component {}
Velocity.schema = {
  x: { type: Types.Number, default: 0 },
  y: { type: Types.Number, default: 0 }
};

We provide type definitions for the basic JavaScript types, and ecsy-three provides them for most of the three.js data types, but you can create your own custom types too.

Defining a schema for a component will give you some benefits for free like:

  • Default copy, clone, and reset implementations for the component, although you can still define your own implementations of these functions for maximum performance.
  • Components pool to reuse instances to improve performance and GC stability.
  • Tooling such as validators or editors which could show specific widgets depending on the data type and value.
  • Serializers for components so we can store them in JSON or ArrayBuffers, useful for networking, tooling or multithreading.

Registering components: One of the requirements we introduced recently is that you must always register your components before using them, and that includes registering them before registering the systems that use those components. This was initially done to support an optimization to queries, but we feel that making this an explicit rule will also allow us to add further optimizations down the road, as well as generally making things easier to reason about.
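
As a quick sketch of the registration order (reusing the Velocity component above and a hypothetical MovementSystem that queries it):

import { World } from "ecsy";

const world = new World();

// Components must be registered first...
world.registerComponent(Velocity);

// ...and only then the systems that query them.
world.registerSystem(MovementSystem);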

Benchmarks: When building a framework like ECSY that is expected to perform a lot of operations per frame, it’s hard to infer how a change in the implementation will impact overall performance. So we introduced a benchmark command that runs a set of tests and measures how long they take.


We can use this data to compare across different branches and get an estimate of how much specific changes affect performance.


ECSY-THREE

We created ecsy-three to facilitate developing applications using ECSY and three.js by providing a set of components and systems for interacting with three.js from ECSY. In this release it got a major refactor: we removed all the “extras” from the main repository to focus on a “core” API that is easy to use with three.js without adding unneeded abstractions.

We introduce extended implementations of ECSY's World and Entity classes (ECSYThreeWorld and ECSYThreeEntity) which provide helpers for interacting with three.js Object3Ds.
The main helper you will be using is Entity.addObject3DComponent(Object3D, parent), which adds the Object3DComponent (holding a reference to the three.js Object3D) to the entity, as well as extra tag components that identify the type of Object3D being added.

Please note that we have added tag components for almost every type of Object3D in three.js (Mesh, Light, Camera, Scene, …), but you can still add any others you may need.
For example:

// Create a three.js Mesh
let mesh = new THREE.Mesh(
    new THREE.BoxBufferGeometry(), 
    new THREE.MeshBasicMaterial()
);

// Attach the mesh to a new ECSY entity
let entity = world.createEntity().addObject3DComponent(mesh);

If we inspect the entity, it will have the following components:

  • Object3DComponent with { value: mesh }
  • MeshTagComponent, automatically added by addObject3DComponent

Then you can query for each one of these components in your systems and grab the Object3D by using entity.getObject3D(), which is an alias for entity.getComponent(Object3DComponent).value.

import { MeshTagComponent, Object3DComponent } from "ecsy-three";
import { System } from "ecsy";

class RandomColorSystem extends System {
    execute(delta) {
        this.queries.entities.results.forEach(entity => {
            // This will always return a Mesh since
            // we are querying for the MeshTagComponent
            const mesh = entity.getObject3D();
            mesh.material.color.setHex(Math.random() * 0xffffff);
        });
    }
}

RandomColorSystem.queries = {
    entities: {
        components: [MeshTagComponent, Object3DComponent]
    }
};

When removing Object3D components, we also introduced entity.removeObject3DComponent(unparent), which removes the Object3DComponent as well as all the tag components that were added automatically (for example the MeshTagComponent in the previous example).

One important difference between adding or removing the Object3D components yourself and using these new helpers is that the helpers can also handle parenting and unparenting.

For example, the following code will attach the mesh to the scene (internally it will call sceneEntity.getObject3D().add(mesh)):

let entity = world.createEntity().addObject3DComponent(mesh, sceneEntity);

When removing the Object3D component, we can pass true as a parameter to indicate that we want to unparent this Object3D from its current parent:

entity.removeObject3DComponent(true);

// The previous line is equivalent to:
entity.getObject3D().parent.remove(entity.getObject3D());

"Initialize" helper

Most ecsy & three.js applications will need a set of common components:

  • World
  • Camera
  • Scene
  • WebGLRenderer
  • Render loop

So we added a helper function initialize in ecsy-three that will create all these for you. It is completely optional, but is a great way to quickly bootstrap an application.

import { initialize } from "ecsy-three";

const { world, scene, camera, renderer } = initialize();
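
Putting these helpers together, a minimal bootstrap might look roughly like the sketch below (assuming, as in the parenting example above, that the scene returned by initialize is the ECSY entity wrapping the three.js Scene):

import { initialize } from "ecsy-three";
import { Mesh, BoxBufferGeometry, MeshBasicMaterial } from "three";

// Create the world, scene, camera, renderer and render loop in one call.
const { world, scene, camera, renderer } = initialize();

// Attach a mesh to the scene entity, exactly as shown earlier.
const mesh = new Mesh(new BoxBufferGeometry(), new MeshBasicMaterial());
world.createEntity().addObject3DComponent(mesh, scene);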

You can find a complete example using all these methods in the following glitch:

Developer tools

We updated the developer tools extension (read about them) to support the newest version of ECSY core, and it should still be backward compatible with previous versions.
We also fixed the remote debugging mode, which allows you to run the extension in one browser while debugging an application running in another browser or device (for example an Oculus Quest).
You can grab them on your favourite browser extension store:

The community

We are so happy with how the community has been helping us build ECSY, and with the projects that have emerged around it.

We have a bunch of cool experiments around physics, networking, games, and boilerplates and bindings for pixijs, phaser, babylon, three.js, react, and many more!

It’s been especially valuable for us to open a Discord server, where a lot of interesting discussions about both ECSY and ECS as a paradigm have taken place.

What's next?

There is still a long road ahead, and we have a lot of features and projects in mind to keep improving the ECSY ecosystem, for example:

  • Revisit the current reactive queries implementation and design, especially the deferred removal step.
  • Continue to experiment with using ECSY in projects internally at Mozilla, like Hubs and Spoke.
  • Improve the sandbox and API examples
  • Keep adding new systems and components for higher level functionality: physics, teleport, networking, hands & controllers, …
  • Keep building new demos to showcase the features we are releasing

Please feel free to use our GitHub repositories (ecsy, ecsy-three, ecsy-devtools) to follow the development, request new features, or file issues for bugs you find. Also, come participate in community discussions on our Discourse forum and Discord server.

SUMO BlogReview of the year so far, and looking forward to the next 6 months.

In 2019 we started looking into our experiences, and in 2020 we released the new responsive redesign, a new AAQ flow, the finalized Firefox Accounts migration, and a few other minor tweaks. We have also performed a Python and Django upgrade, carrying on the foundational work that will allow us to grow and expand our support platform. This was a huge win for our team and the first time we have improved this experience in years! The team is now working on tracking the impact of these improvements on our overall user experience.

We also know that contributors in Support have had to deal with an old, sometimes very broken, toolset, so we wanted to work on that this year. You may have already heard the updates from Kiki and Giulia through their monthly strategy updates. The research and opportunity identification the team did was hugely valuable, and the team identified onboarding as an immediate area for improvement. We are currently working through an improved onboarding process and look forward to implementing and launching it.

Apart from that, we’ve done quite a chunk of work on the Social Support side with the transition from Buffer Reply to Conversocial. The change had been planned since the beginning of this year, and we worked together with the Pocket team on the implementation. We’ve also collaborated closely with the marketing team to kick off the @FirefoxSupport Twitter account that we’ll be using to focus our Social Support community effort.

Now, the community managers are focusing on supporting the Fennec to Fenix migration. A community campaign to promote the Respond Tool is lined up in parallel with the migration rollout this week and will run until the end of August as we complete the rollout.

We plan to continue implementing the information architecture we developed late last year, which will improve our navigation and clean up a lot of the old categories that are clogging up our knowledge base editing tools. We’re also looking into redesigning our internal search architecture, re-implementing it from scratch, and expanding our search UI.

2020 is also the year we have decided to focus more on data. Roland and JR have been busy building out our product dashboards, all internal for now, and we are working on how to make some of this data publicly available. It is still a work in progress, but we hope to make this possible sometime in early 2021.

In the meantime, we welcome feedback, ideas, and suggestions. You can also fill out this form or reach out to Kiki/Giulia for questions. We hope you are all as excited about all the new things happening as we are!

Thanks,
Patrick, on behalf of the SUMO team

The Mozilla BlogVirtual Tours of the Museum of the Fossilized Internet

Let’s brainstorm a sustainable future together.

Imagine: We are in the year 2050 and we’re opening the Museum of the Fossilized Internet, which commemorates two decades of a sustainable internet. The exhibition can now be viewed in social VR. Join an online tour and experience what the coal and oil-powered internet of the past was like.

 

Visit the Museum from home

In March 2020, Michelle Thorne and I announced office tours of the Museum of the Fossilized Internet as part of our new Sustainability programme. Then the pandemic hit, and we teamed up with the Mozilla Mixed Reality team to make it more accessible while also demonstrating the capabilities of social VR with Hubs.

We now welcome visitors to explore the museum at home through their browsers.

The museum was created to be a playful source of inspiration and an invitation to imagine more positive, sustainable futures. Here’s a demo tour to help you get started on your visit.

Video Production: Dan Fernie-Harper; Spoke scene: Liv Erickson and Christian Van Meurs; Tour support: Elgin-Skye McLaren.

 

Foresight workshops

But that’s not all. We are also building on the museum with a series of foresight workshops. Once we know what preferable, sustainable alternatives look like, we can start building towards them so that in a few years, this museum is no longer just a thought experiment, but real.

Our first foresight workshop will focus on policy, with an emphasis on trustworthy AI. In a pilot, facilitators Michelle Thorne and Fieke Jansen will focus specifically on the strategic opportunity created by the European Commission currently defining its AI strategy, climate agenda, and COVID-19 recovery plans. Thought of together, these give the workshop room to develop options that advance both trustworthy AI and climate justice.

More foresight workshops should and will follow. We are currently looking at businesses, the technology sector, and the funders community as additional audiences. Updates will be available on the wiki.

You are also invited to join the sustainability team as well as our environmental champions on our Matrix instance to continue brainstorming sustainable futures. More updates on Mozilla’s journey towards sustainability will be shared here on the Mozilla Blog.

The post Virtual Tours of the Museum of the Fossilized Internet appeared first on The Mozilla Blog.

Blog of DataExperimental integration Glean with Unity applications

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean. You can find an index of all TWiG posts online.)

This is a special guest post by non-Glean-team member Daosheng Mu!

 

You might notice Firefox Reality PC Preview has been released in HTC’s Viveport store. It is a VR web browser that provides 2D overlay browsing alongside immersive content and supports web-based immersive experiences for PC-connected VR headsets. In order to easily deploy our product to the Viveport store, we take advantage of Unity to build our application launcher. That, however, brings another challenge: how do we use Mozilla’s existing telemetry system?

As we know, the Glean SDK provides language bindings for different programming language requirements, including Kotlin, Swift, and Python. However, when it comes to supporting applications that use Unity as their development toolkit, there are no existing bindings available to help us. Unity allows users to embed Python scripts in a Unity project through a Python interpreter; however, because Unity’s technology is based on the Mono framework, that is not the same as the familiar Python runtime for running Python scripts. So, what we needed to figure out was how to run Python on the .NET Framework, or more precisely on the Mono framework. If we consider approaches for running Python scripts in the main process, IronPython is the only solution. However, it is only available for Python 2.7, and the Glean SDK Python language binding needs Python 3.6. Hence, we started planning a new Glean binding for C#.

 

Getting started

The Glean team and I started discussions about the requirements for running Glean in Unity and what a C# binding for Glean would need. We followed a minimum viable product strategy and defined very simple goals to evaluate whether the plan was workable. Technically, we only need to send built-in and custom pings using the same mechanism as the current Glean Python binding, and we can use StringMetricType as the first metric in this experimental Unity project. Besides, the .NET Framework has various versions, and it is essential to consider compatibility with the Mono framework in Unity. Therefore, we decided the Glean C# binding would target .NET Standard 2.0. Thanks to these focused MVP goals and the Glean team’s rapid work, we got our first alpha version of the C# binding in a very short time. I really appreciate Alessio, Mike, Travis, and the other members of the Glean team. Their hard work made it happen so quickly, and they were patient with my concerns and requirements.

 

How it works

To begin, it is worth explaining how to integrate Glean into a Hello World C# application. We can either import the C# binding source code from glean-core/csharp, or build csharp.sln from the Glean repository and copy the generated Glean.dll into our own project. Then, in the C# project’s Dependencies settings, add a reference to this Glean.dll. Aside from this, we also need to copy glean_ffi.dll, which is produced after pulling Glean and running `cargo build`. Lastly, add the Serilog library to the project via NuGet. We can install it through the NuGet Package Manager Console as below:

PM> Install-Package Serilog

PM> Install-Package Serilog.Sinks.Console

Defining pings and metrics

Before we start to write our program, let’s design our metrics first. With the current capabilities of the Glean SDK C# language binding, we can create a custom ping and a string metric that is sent in it. Then, at the end of the program, we will submit this custom ping and the string metric will be collected and uploaded to our data server. The ping and metric description files are as below:

Testing and verifying it

Now, it is time for us to write our HelloWorld program.

As you can see, the code above is very straightforward. Although the Glean parser does not yet support the C# binding and cannot generate the metric APIs for us, we are still able to create these metrics manually in our program. One thing you might notice is Thread.Sleep(1000); at the bottom of main(). It pauses the current thread for 1000 milliseconds. Because this HelloWorld program is a console application, it will quit as soon as there is nothing left to do, and we need this line to make sure the Glean ping uploader has enough time to finish uploading before the main process quits. We could also replace it with Console.ReadKey(); so the user closes the application instead. The better solution would be to launch a separate process for the Glean uploader; this is a current limitation of our C# binding implementation, and we will work on it in the future. For the rest of the code you can go to glean/samples/csharp to see the other pieces; we will use the same code in the Unity section.

After that, we want to verify that our pings are uploaded to the data server successfully. We can set the Glean debug view tag by running set GLEAN_DEBUG_VIEW_TAG=samplepings in our command-line tool (samplepings can be anything you would like to use as the ping tag), and then run the executable in the same command-line window. Finally, you can see the result in the Glean Debug Ping Viewer, as below:

It looks great now. However, you might notice that os_version does not look right if you are running on Windows 10. This is because we use Environment.OSVersion to get the OS version, and it is not reliable; see Bug 1653897 for the follow-up discussion. The solution is to add a manifest file and then uncomment the Windows 10 compatibility lines.

 

Your first Glean program in Unity

Now that we have covered the general C# parts, let’s move on to the main point of this article: using Glean in Unity. Using Glean in Unity is a little different, but not too complicated. First of all, open the C# solution that your Unity project creates for you. Because Unity needs more dependencies when using Glean, we let the Glean NuGet package install them for us in advance. As the Glean NuGet package page describes, install it with the command below:

Install-Package Mozilla.Telemetry.Glean -Version 0.0.2-alpha

Then, check your Packages folder; you should see that some packages have already been downloaded into it.

However, the Unity project builder is not able to recognize this Packages folder, so we need to move the libraries from these packages into the Unity Assets\Scripts\Plugins folder. You might notice that these libraries target different runtime versions. The basic idea is that Unity is .NET Standard 2.0 compatible, so we can just grab them from the lib\netstandard2.0 folders. In addition, Unity allows users to distribute their builds for the x86 and x86_64 platforms. Therefore, when moving the Glean FFI library, glean_ffi.dll, into this Plugins folder, we have to put the x86 and x86_64 builds into Assets\Scripts\Plugins\x86 and Assets\Scripts\Plugins\x86_64 respectively. Fortunately, in Unity we don’t need to worry about the Windows compatibility manifest; Unity already handles it for us.

Now, we can copy the same code into our main C# script.

You might notice we didn’t add Thread.Sleep(1000); at the end of the Start() function. This is because Unity applications are generally windowed applications, and the usual way to exit one is by closing its window, so the Glean ping uploader has enough time to finish its task. However, if your application needs to use the Unity-specific API Application.Quit() to close the program, please make sure the main thread waits a couple of seconds so the Glean ping uploader has time to finish its job. For more details about this example code, please go to GleanUnity to see its source code.

 

Next steps

The C# binding is still at an early stage. Although we already support a few metric types, many metric types and features are still unavailable compared to the other bindings. We also look forward to providing a better solution for off-main-process ping upload, which would remove the need for the main thread to wait for the Glean ping uploader before the main process can quit. We will keep working on it!

Mozilla VR BlogIntroducing Firefox Reality PC Preview


Have you ever played a VR game and needed a tip for beating the game... but you didn’t want to take off your headset to find that solution? Or, have you wanted to watch videos while you played your game? Or, how about wanting to immerse yourself in a 360 video on YouTube?

Released today, Firefox Reality PC Preview enables you to do these things and more. This is the newest addition to the Firefox Reality family of products. Built upon the latest version of the well-known and trusted Firefox browser, Firefox Reality PC Preview works with tethered headsets as well as wireless headsets streaming from a PC.

Firefox Reality PC Preview in Viveport Origin environment

Firefox Reality PC Preview is a new VR web browser that provides 2D overlay browsing alongside immersive apps and supports web-based immersive experiences for PC-connected VR headsets. You can access the conventional web, floating in your virtual environment as you use the controller to interact with the web content. You can also watch 360 Videos from sites such as Vimeo or KaiXR, with many showcases of immersive experiences from around the world. With Firefox Reality PC Preview, you can explore 3D web content created with WebVR or WebXR, such as Mozilla Hubs, Sketchfab, and Hello WebXR!. And, of course, Firefox Reality PC Preview contains the same privacy and security that underpin regular Firefox on the desktop.

Browsing the web and playing 360 videos while inside Viveport Origin, and watching a gameplay video of Pirate Space Trainer on YouTube while playing it at the same time.

Firefox Reality PC Preview is available for download right now in HTC’s Viveport store and runs on any device accessible from the Viveport Store, including the HTC Vive Cosmos. Our Preview release is intended to give users the chance to test the features shared above. From this initial release, we plan to deliver updates that add more functionality and stability as well as expand to other VR platforms. We would love to get feedback about your experience with this Preview. Please install it and take it for a test drive!

Screenshot of Firefox Reality PC Preview in the HTC Viveport Store