Mozilla VR Blog: Bringing WebXR to iOS


The first version of the WebXR Device API is close to being finalized, and browsers will start implementing the standard soon (if they haven't already). Over the past few months we've been working on updating the WebXR Viewer (source on GitHub, new version available now on the iOS App Store) to be ready when the specification is finalized, giving developers and users at least one WebXR solution on iOS. The current release is a step along this path.

Most of the work we've been doing is hidden from the user; we've re-written parts of the app to be more modern, more robust and efficient. And we've removed little-used parts of the app, like video and image capture, that have been made obsolete by recent iOS capabilities.

There are two major parts to the recent update of the Viewer that are visible to users and developers.

A New WebXR API

We've updated the app to support a new implementation of the WebXR API based on the official WebXR Polyfill. This polyfill currently implements a version of the standard from last year, but when it is updated to the final standard, the WebXR API used by the WebXR Viewer will follow quickly behind. Keep an eye on the standard and polyfill to get a sense of when that will happen; keep an eye on your favorite tools, as well, as they will be adding or improving their WebXR support over the next few months. (The WebXR Viewer continues to support our early proposal for a WebXR API, but that API will eventually be deprecated in favor of the official one.)

We've embedded the polyfill into the app, so the API is automatically available to any web page loaded in the app, without the need to load a polyfill or custom API. Our goal is for any WebXR web application designed to use AR on a mobile phone or tablet to run in the WebXR Viewer. You can try this now by enabling the "Expose WebXR API" preference in the Viewer. Any loaded page will see the WebXR API exposed on navigator.xr, although most WebXR content on the web won't work right now because the standard is still in a state of constant change.
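To make this concrete, here is a minimal sketch (in TypeScript) of how a page might detect the exposed API and request an AR session. The method names follow the draft WebXR Device API, which was still changing at the time, so treat this as an approximation of the shape rather than the Viewer's exact surface.

```typescript
// Minimal sketch: detect the WebXR API and request an AR session.
// Method names follow the draft WebXR Device API and may differ from the
// exact version exposed by the WebXR Viewer's embedded polyfill.
async function startAR(): Promise<void> {
  const xr = (navigator as any).xr;
  if (!xr) {
    console.log("WebXR is not available in this browser");
    return;
  }
  const supported = await xr.isSessionSupported("immersive-ar");
  if (!supported) {
    console.log("immersive-ar sessions are not supported here");
    return;
  }
  // Requesting a session is what triggers the Viewer's permission prompt.
  const session = await xr.requestSession("immersive-ar");
  session.requestAnimationFrame((time: number, frame: any) => {
    // Render the scene for this XR frame...
  });
}

startAR();
```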

You can find the current code for our API in the webxr-ios-js repository, along with a set of examples we're creating to explore the current API and future additions to it. These examples are available online. A glance at the code, or the examples, will show that we are not only implementing the initial API, but also building implementations of a number of proposed additions to the standard, including anchors, hit testing, and access to real-world geometry. Implementing support for requesting geospatial coordinate system alignment allows integration with the existing web Geolocation API, enabling AR experiences that rely on geospatial data (illustrated simply in the banner image above). We will soon be exploring an API for camera access to enable computer vision.
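As a rough sketch of how the proposed hit-testing addition fits together (the exact names were still proposals at this point, so consider this illustrative rather than a final API), a page could ask for a hit-test source tied to the viewer's pose and query it each frame:

```typescript
// Sketch of the proposed hit-test flow: request a hit-test source tied to the
// viewer's pose, then query it every frame. Names are approximate; the
// proposal was still evolving when this was written.
async function watchHits(session: any): Promise<void> {
  const viewerSpace = await session.requestReferenceSpace("viewer");
  const hitTestSource = await session.requestHitTestSource({ space: viewerSpace });

  session.requestAnimationFrame(function onFrame(time: number, frame: any) {
    const results = frame.getHitTestResults(hitTestSource);
    if (results.length > 0) {
      // The first result is the closest detected surface along the ray; an
      // anchor could be created here to keep virtual content pinned to it.
      const pose = results[0].getPose(viewerSpace);
      console.log("hit at", pose.transform.position);
    }
    session.requestAnimationFrame(onFrame);
  });
}
```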

Most of these APIs were available in our previous WebXR API implementation, but the versions here align more closely with the work of the Immersive Web Community Group. (We have also kept a few of our previous ARKit-specific APIs, but marked them explicitly as not being on the standards track yet.)

A New Approach to WebXR Permissions

The most visible change to the application is a permission dialog that pops up when a web page requests a WebXR session. Previously, the app was an early experiment devoted to WebXR applications built with our custom API, so we did not explicitly ask for permission, inferring that anyone running an app in our experimental web browser intended to use WebXR capabilities.

When WebXR is released, browsers will need to obtain a user's permission before a web page can access the potentially sensitive data available via WebXR. We are particularly interested in what levels of permissions WebXR should have, so that users retain control of what data apps may require. One approach that seems reasonable is to differentiate between basic access (e.g., device motion, perhaps basic hit testing against the world), access to more detailed knowledge of the world (e.g., illumination, world geometry) and finally access to cameras and other similar sensors. The corresponding permission dialogs in the Viewer are shown here.


If the user gives permission, an icon in the URL bar shows the permission level, similar to the camera and microphone access icons in Firefox today.


Tapping on the icon brings up the permission dialog again, allowing the user to temporarily restrict the flow of data to the web app. This is particularly relevant for modifying access to cameras, where a mobile user (especially when HMDs are more common) may want to turn off sensors depending on the location, or who is nearby.

A final aspect of permissions we are exploring is the idea of a "Lite" mode. In each of the dialogs above, a user can select Lite mode, which brings up a UI allowing them to select a single ARKit plane.
The APIs that expose world knowledge to the web page (including hit testing at the most basic level, and geometry at the middle level) will only use that single plane as the source of their actions. Only that plane would produce hits, and only that plane would have its geometry sent to the page. This would allow the user to limit the information passed to a page, while still being able to access AR apps on the web.

Moving Forward

We are excited about the future of XR applications on the web, and will be using the WebXR Viewer to provide access to WebXR on iOS, as well as a testbed for new APIs and interaction ideas. We hope you will join us on the next step in the evolution of WebXR!

hacks.mozilla.org: Faster smarter JavaScript debugging in Firefox DevTools

Script debugging is one of the most powerful and complex productivity features in the web developer toolbox. Done right, it empowers developers to fix bugs quickly and efficiently. So the question for us, the Firefox DevTools team, has been, are the Firefox DevTools doing it right?

We’ve been listening to feedback from our community. Above everything, we heard the need for greater reliability and performance, especially with modern web apps. Moreover, script debugging is a hard-to-learn skill that should work in a similar fashion across browsers, but isn’t consistent because of feature and UI gaps.

With these pain points in mind, the DevTools Debugger team – with help from our tireless developer community – landed countless updates to design a more productive debugging experience. The work is ongoing, but Firefox 67 marks an important milestone, and we wanted to highlight some of the fantastic improvements and features. We invite you to open up Firefox Quantum: Developer Edition, try out the debugger on the examples below and on your own projects, and let us know if you notice the difference.

A rock-solid debugging experience

Fast and reliable debugging is the result of many smaller interactions. From initial loading and source mapping to breakpoints, console logging, and variable previews, everything needs to feel solid and responsive. The debugger should be consistent, predictable, and capable of understanding common tools like webpack, Babel, and TypeScript.

We can proudly say that all of those areas have improved in the past months:

  1. Faster load time. We’ve eliminated the worst performance cliffs that made the debugger slow to open. This has resulted in a 30% speedup in our performance test suite. We’ll share more of our performance adventures in a future post.
  2. Excellent source map support. A revamped and faster source-map backend perfects the illusion that you’re debugging your code, not the compiled output from Babel, webpack, TypeScript, Vue.js, etc.
    Generating correct source maps can be challenging, so we also contributed patches to build tools (e.g. Babel, Vue.js, regenerator), benefiting the whole ecosystem.
  3. Reduced overhead when the debugger isn’t focused. No need to worry any longer about keeping the DevTools open! We found and removed many expensive calculations that were running in the debugger while it was in the background.
  4. Predictable breakpoints, pausing, and stepping. We fixed many long-standing bugs deep in the debugger architecture, solving some of the most common and frustrating issues related to lost breakpoints, pausing in the wrong script, or stepping through pretty-printed code.
  5. Faster variable preview. Thanks to our faster source-map support (and lots of additional work), previews are now displayed much more quickly when you hover your mouse over a variable while execution is paused.

These are just a handful of highlights. We’ve also resolved countless bugs and polish issues.

Looking ahead

Foremost, we must maintain a high standard of quality, which we’ll accomplish by explicitly setting aside time for polish in our planning. Guided by user feedback, we intend to use this time to improve new and existing features alike.

Second, continued investment in our performance and correctness tests ensures that the ever-changing JavaScript ecosystem, including a wide variety of frameworks and compiled languages, is well supported by our tools.

Debug all the things with new features

Finding and pausing in just the right location can be key to understanding a bug. This should feel effortless, so we’ve scrutinized our own tools—and studied others—to give you the best possible experience.

Inline breakpoints for fine-grained pausing and stepping

Inline handlers are the perfect match for the extra granularity.

Why should breakpoints operate on lines, when lines can have multiple statements? Thanks to inline breakpoints, it’s now easier than ever to debug minified scripts, arrow functions, and chained method calls. Learn more about breakpoints on MDN or try out the demo.
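As a small invented example of why line granularity isn't enough, consider a chained call written on a single line; inline breakpoints let you pause inside each callback separately instead of once for the whole line:

```typescript
// One line, several useful pause locations: with inline breakpoints you can
// break inside the filter callback, the map callback, or the reduce callback
// individually, rather than once per line.
const total = [4, 8, 15, 16, 23, 42].filter(n => n % 2 === 0).map(n => n * 3).reduce((sum, n) => sum + n, 0);
console.log(total); // 210
```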

Logpoints combine the power of Console and Debugger

Adding Logpoints to monitor application state.

Console logging, also called printf() debugging, is a quick and easy way to observe your program’s flow, but it rapidly becomes tedious. Logpoints break that tiresome edit-build-refresh cycle by dynamically injecting console.log() statements into your running application. You can stay in the browser and monitor variables without pausing or editing any code. Learn more about log points on MDN.
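Conceptually, a logpoint behaves as if you had temporarily inserted a console.log() call at that line, without touching the source. A small invented example (none of these names come from the MDN demo):

```typescript
interface Item { name: string; price: number; }
const cart: Item[] = [];

function addToCart(item: Item): void {
  // A logpoint set on the line below with the expression
  //   "adding", item.name, "cart size", cart.length
  // behaves roughly like temporarily inserting this console.log() call,
  // except the file on disk never changes and no rebuild/refresh is needed.
  console.log("adding", item.name, "cart size", cart.length);
  cart.push(item);
}

addToCart({ name: "socks", price: 3 });
```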

Seamless debugging for JavaScript Workers

The new Threads panel in the Debugger for Worker debugging

Web Workers power the modern web and need to be first-class concepts in DevTools. Using the new Threads panel, you can switch between and independently pause different execution contexts. This allows workers and their scripts to be debugged within the same Debugger panel, similarly to other modern browsers. Learn more about Worker debugging on MDN.
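For a concrete picture of what the Threads panel lets you switch between, here is a minimal page script and worker (the file names are invented); each runs in its own thread and can be paused independently in the same Debugger panel:

```typescript
// --- main script (page thread) ---
// Pause here in the main thread, or switch to the worker's thread below.
const worker = new Worker("worker.js");
worker.onmessage = (event: MessageEvent<number>) => {
  console.log("sum from worker:", event.data);
};
worker.postMessage([1, 2, 3, 4]);

// --- worker.js (worker thread), shown as a comment since it lives in a separate file ---
// self.onmessage = (event) => {
//   const sum = event.data.reduce((a, b) => a + b, 0);
//   self.postMessage(sum);
// };
```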

Human-friendly variable names for source maps

Debugging bundled and compressed code isn’t easy. The Source Maps project, started and maintained by Firefox, bridges the gap between minified code running in the browser and its original, human-friendly version, but the translation isn’t perfect. Often, bits of the minified build output shine through and break the illusion. We can do better!

From build output back to original human-readable variables

By combining source maps with the Babel parser, Firefox’s Debugger can now preview the original variables you care about, and hide the extraneous cruft from compilers and bundlers. This can even work in the console, automatically resolving human-friendly identifiers to their actual, minified names behind the scenes. Due to its performance overhead, you have to enable this feature separately by clicking the “Map” checkbox in the Debugger’s Scopes panel. Read the MDN documentation on using the map scopes feature.
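To see why this matters, compare an invented snippet of original source with the kind of output a minifier produces; without mapped scopes, the Scopes panel can only show you the names in the second version:

```typescript
// Original source: the names you wrote and want to see while paused.
const shippingRate = 4.99;
function orderTotal(subtotal: number): number {
  return subtotal + shippingRate;
}
console.log(orderTotal(20));

// Roughly what a minifier ships: same logic, opaque names. With the "Map"
// checkbox enabled, the Scopes panel shows `shippingRate` and `subtotal`
// again instead of something like `r` and `t`:
//   const r=4.99;function o(t){return t+r}console.log(o(20));
```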

What’s next

Developers frequently need to switch between browsers to ensure that the web works for everyone, and we want our DevTools to be an intuitive, seamless experience. Though browsers have converged on the same broad organization for tools, we know there are still gaps in both features and UI. To help us address those gaps, please let us know where you experience friction when switching browsers in your daily work.

Your input makes a big difference

As always, we would love to hear your feedback on how we can improve DevTools and the browser.

While all these updates will be ready to try out in Firefox 67 when it’s released next week, we’ve polished them to perfection in Firefox 68 and added a few more goodies. Download Firefox Developer Edition (68) to try the latest DevTools and platform updates now.


Mozilla VR Blog: Making ethical decisions for the immersive web


One of the promises of immersive technologies is real time communication unrestrained by geography. This is as transformative as the internet, radio, television, and telephones—each represents a pivot in mass communications that provides new opportunities for information dissemination and creating connections between people. This raises the question, “what’s the immersive future we want?”

We want to be able to connect without traveling. Indulge our curiosity and creativity beyond our physical limitations. Revolutionize the way we visualize and share our ideas and dreams. Enrich everyday situations. Improve access to limited resources like healthcare and education.

The internet is an integral part of modern life—a key component in education, communication, collaboration, business, entertainment and society as a whole.
— Mozilla Manifesto, Principle 1

My first instinct is to say that I want an immersive future that brings joy. Do AR apps that help me maintain my car bring me joy? Not really.

What I really want is an immersive future that respects individual creators and users. Platforms and applications that thoughtfully approach issues of autonomy, privacy, bias, and accessibility in a complex environment. How do we get there? First, we need to understand the broader context of augmented and virtual reality in ethics, identifying overlap with both other technologies (e.g. artificial intelligence) and other fields (e.g. medicine and education). Then, we can identify the unique challenges presented by spatial and immersive technologies. Given the current climate of ethics and privacy, we can anticipate potential problems, identify the core issues, and evaluate different approaches.

From there, we have an origin for discussion and a path for taking practical steps that enable legitimate uses of MR while discouraging abuse and empowering individuals to make choices that are right for them.

For details and an extended discussion on these topics, see this paper.

The immersive web

Whether you have a $30 or $3000 headset, you should be able to participate in the same immersive universe. No person should be excluded due to their skin color, hairstyle, disability, class, location, or any other reason.

The internet is a global public resource that must remain open and accessible.
Mozilla Manifesto, Principle 2

The immersive web represents an evolution of the internet. Immersive technologies are already deployed in education and healthcare. It's unethical to limit their benefits to a privileged few, particularly when MR devices can improve access to limited resources. For example, Americans living in rural areas are underserved by healthcare, particularly specialist care. In an immersive world, location is no longer an obstacle. Specialists can be virtually present anywhere, as if they were in the room with the patient. Trained nurses and assistants would still be required for physical manipulations and interactions, but this could dramatically improve health coverage and reduce burdens on both patients and providers.

While we can build accessibility into browsers and websites, the devices themselves need to be created with appropriate accommodations, like settings that indicate a user is in a wheelchair. When we design devices and experiences, we need to consider how they'll work for people with disabilities. It's imperative to build inclusive MR devices and experiences, both because it's unethical to exclude users due to disability, and because there are so many opportunities to use MR as an assistive technology, including:

  • Real time subtitles
  • Gaze-based navigation
  • Navigation with vehicle and obstacle detection and warning

The immersive web is for everyone.

Representation and safety

Mixed reality offers new ways to connect with each other, enabling us to be virtually present anywhere in the world instantaneously. Like most technologies, this is both a good and a bad thing. While it transforms how we can communicate, it also offers new vectors for abuse and harassment.

All social VR platforms need to have simple and obvious ways to report abusive behavior and block the perpetrators. All social platforms, whether 2D or 3D, should have this, but the embodiment that VR enables intensifies the impact of harassment. Behavior that once required physical presence is no longer geographically limited, and identities can be more easily obfuscated. Safety is not a 'nice to have' feature — it's a requirement. Safety is a key component of inclusion and freedom of expression, as well as being a human right.

Freedom of expression in this paradigm includes both choosing how to present yourself and having the ability to maintain multiple virtual identities. Immersive social experiences allow participants to literally build their identities via avatars. Human identity is infinitely complex (and not always very human — personally, I would choose a cat avatar). Thoughtfully approaching diversity and representation in avatars isn't easy, but it is worthwhile.

Individuals must have the ability to shape the internet and their own experiences on it.
Mozilla Manifesto, Principle 5

Suppose Banksy, a graffiti artist known both for their art and their anonymity, is an accountant by day who uses an HMD to conduct virtual meetings. Outside of work, Banksy is a virtual graffiti artist. However, biometric data could tie the two identities together, stripping Banksy of their anonymity. Anonymity enables free speech; it removes the threats of economic retaliation and social ostracism and allows consumers to process ideas free of prejudices about the creators. There's a long history of women who wrote under assumed names to avoid being dismissed for their gender, including J.K. Rowling and George Sand.

Unique considerations in mixed reality

Immersive technologies differ from others in their ability to affect our physical bodies. To achieve embodiment and properly interact with virtual elements, devices use a wide range of data derived from user biometrics, the surrounding physical world, and device orientation. As the technology advances, the data sources will expand.

The sheer amount of data required for MR experiences to function requires that we rethink privacy. Earlier, I mentioned that gaze-based navigation can be used to allow mobility impaired users to participate more fully on the immersive web. Unfortunately, gaze tracking data also exposes large amounts of nonverbal data that can be used to infer characteristics and mental states, including ADHD and sexual arousal.

Individuals’ security and privacy on the internet are fundamental and must not be treated as optional.
Mozilla Manifesto, Principle 4

While there may be a technological solution to this problem, it highlights a wider social and legal issue: we've become too used to companies monetizing our personal data. It would be easy to conclude that, although users report privacy concerns, they don't really care, because they 'consent' to disclosing personal information for small rewards. The reality is that privacy is hard. It's hard to define and it's harder to defend. Processing privacy policies feels like it requires a law degree, and quantifying risks and tradeoffs is nondeterministic. In the US, we've focused on privacy as an individual's responsibility, while Europe (with the General Data Protection Regulation) shows that it's society's problem and should be tackled comprehensively.

Concrete steps for ethical decision making

Ethical principles aren't enough. We also need to take action — while some solutions will be technical, there are also legal, regulatory, and societal challenges that need to be addressed.

  1. Educate and assist lawmakers
  2. Establish a regulatory authority for flexible and responsive oversight
  3. Engage engineers and designers to incorporate privacy by design
  4. Empower users to understand the risks and benefits of immersive technology
  5. Incorporate experts from other fields who have addressed similar problems

Tech needs to take responsibility. We've built technology that has incredible positive potential, but also serious risks for abuse and unethical behavior. Mixed reality technologies are still emerging, so there's time to shape a more respectful and empowering immersive world for everyone.

hacks.mozilla.org: Empowering User Privacy and Decentralizing IoT with Mozilla WebThings

Smart home devices can make busy lives a little easier, but they also require you to give up control of your usage data to companies for the devices to function. In a recent article from the New York Times’ Privacy Project about protecting privacy online, the author recommended that people not buy Internet of Things (IoT) devices unless they’re “willing to give up a little privacy for whatever convenience they provide.”

This is sound advice, since smart home companies can not only know if you’re at home when you say you are, but will soon be able to listen for your sniffles through their always-listening microphones and recommend sponsored cold medicine from affiliated vendors. Moreover, by both requiring that users’ data go through their servers and by limiting interoperability between platforms, leading smart home companies are chipping away at people’s ability to make real, nuanced technology choices as consumers.

At Mozilla, we believe that you should have control over your devices and the data that smart home devices create about you. You should own your data, you should have control over how it’s shared with others, and you should be able to contest when a data profile about you is inaccurate.

Mozilla WebThings follows the privacy by design framework, a set of principles developed by Dr. Ann Cavoukian, that takes users’ data privacy into account throughout the whole design and engineering lifecycle of a product’s data process. Prioritizing people over profits, we offer an alternative approach to the Internet of Things, one that’s private by design and gives control back to you, the user.

User Research Findings on Privacy and IoT Devices

Before we look at the design of Mozilla WebThings, let’s talk briefly about how people think about their privacy when they use smart home devices and why we think it’s essential that we empower people to take charge.

Today, when you buy a smart home device, you are buying the convenience of being able to control and monitor your home via the Internet. You can turn a light off from the office. You can see if you’ve left your garage door open. Prior research has shown that users are passively, and sometimes actively, willing to trade their privacy for the convenience of a smart home device. When it seems like there’s no alternative between having a potentially useful device or losing their privacy, people often uncomfortably choose the former.

Still, although people are buying and using smart home devices, it does not mean they’re comfortable with this status quo. In one of our recent user research surveys, we found that almost half (45%) of the 188 smart home owners we surveyed were concerned about the privacy or security of their smart home devices.

Bar graph showing about 45% of the 188 current smart home owners we surveyed were concerned about their privacy or security at least a few times per month.

User Research Survey Results

In the fall of 2018, our user research team conducted a diary study with eleven participants across the United States and the United Kingdom. We wanted to know how usable and useful people found our WebThings software. So we gave each of our research participants some Raspberry Pis (loaded with the Things 0.5 image) and a few smart home devices.

User research participants were given a Raspberry Pi, a smart light, a motion sensor, a smart plug, and a door sensor.

Smart Home Devices Given to Participants for User Research Study

We watched, either in-person or through video chat, as each individual walked through the set up of their new smart home system. We then asked participants to write a ‘diary entry’ every day to document how they were using the devices and what issues they came across. After two weeks, we sat down with them to ask about their experience. While a couple of participants who were new to smart home technology were ecstatic about how IoT could help them in their lives, a few others were disappointed with the lack of reliability of some of the devices. The rest fell somewhere in between, wanting improvements such as more sophisticated rules functionality or a phone app to receive notifications on their iPhones.

We also learned more about people’s attitudes and perceptions around the data they thought we were collecting about them. Surprisingly, all eleven of our participants expected data to be collected about them. They had learned to expect data collection, as this has become the prevailing model for other platforms and online products. A few thought we would be collecting data to help improve the product or for research purposes. However, upon learning that no data had been collected about their use, a couple of participants were relieved that they would have one less thing, data, to worry about being misused or abused in the future.

By contrast, others said they weren’t concerned about data collection; they did not think companies could make use of what they believed was menial data, such as when they were turning a light on or off. They did not see the implications of how collected data could be used against them. This showed us that we can improve how we demonstrate to users what others can learn from their smart home data. For example, someone can find out when you’re not home based on when your door has opened and closed.


Door Sensor Logs can Reveal When Someone is Not Home

From our user research, we’ve learned that people are concerned about the privacy of their smart home data. And yet, when there’s no alternative, they feel the need to trade away their privacy for convenience. Others aren’t as concerned because they don’t see the long-term implications of collected smart home data. We believe privacy should be a right for everyone regardless of their socioeconomic or technical background. Let’s talk about how we’re doing that.

Decentralizing Data Management Gives Users Privacy

Vendors of smart home devices have architected their products to be more of a service to themselves than to their customers. Using the typical IoT stack, in which devices don’t easily interoperate, they can build a robust picture of user behavior, preferences, and activities from data they have collected on their servers.

Take the simple example of a smart light bulb. You buy the bulb, and you download a smartphone app. You might have to set up a second box to bridge data from the bulb to the Internet and perhaps a “user cloud subscription account” with the vendor so that you can control the bulb whether you’re home or away. Now imagine five years into the future when you have installed tens to hundreds of smart devices including appliances, energy/resource management devices, and security monitoring devices. How many apps and how many user accounts will you have by then?

The current operating model requires you to give your data to vendor companies for your devices to work properly. This, in turn, requires you to work with or around companies and their walled gardens.

Mozilla’s solution puts the data back in the hands of users. In Mozilla WebThings, there are no company cloud servers storing data from millions of users. User data is stored in the user’s home. Backups can be stored anywhere. Remote access to devices occurs from within one user interface. Users don’t need to download and manage multiple apps on their phones, and data is tunneled through a private, HTTPS-encrypted subdomain that the user creates.

The only data Mozilla receives is a ping when a user’s subdomain checks our server for updates to the WebThings software. And if a user only wants to control their devices locally and not have anything go through the Internet, they can choose that option too.

Decentralized distribution of WebThings Gateways in each home means that each user has their own private “data center”. The gateway acts as the central nervous system of their smart home. By having smart home data distributed in individual homes, it becomes more of a challenge for unauthorized hackers to attack millions of users. This decentralized data storage and management approach offers a double advantage: it provides complete privacy for user data, and it securely stores that data behind a firewall that uses best-of-breed HTTPS encryption.

The figure below compares Mozilla’s approach to that of today’s typical smart home vendor.

Comparison of Mozilla’s Approach to Typical Smart Home Vendor

Mozilla’s approach gives users an alternative to current offerings, providing them with data privacy and the convenience that IoT devices can provide.

Ongoing Efforts to Decentralize

In designing Mozilla WebThings, we have consciously insulated users from servers that could harvest their data, including our own Mozilla servers, by offering an interoperable, decentralized IoT solution. Our decision to not collect data is integral to our mission and additionally feeds into our Emerging Technology organization’s long-term interest in decentralization as a means of increasing user agency.

WebThings embodies our mission to treat personal security and privacy on the Internet as a fundamental right, giving power back to users. From Mozilla’s perspective, decentralized technology has the ability to disrupt centralized authorities and provide more user agency at the edges, to the people.

Decentralization can be an outcome of social, political, and technological efforts to redistribute the power of the few and hand it back to the many. We can achieve this by rethinking and redesigning network architecture. By enabling IoT devices to work on a local network without the need to hand data to connecting servers, we decentralize the current IoT power structure.

With Mozilla WebThings, we offer one example of how a decentralized, distributed system over web protocols can impact the IoT ecosystem. Concurrently, our team has an unofficial draft Web Thing API specification to support standardized use of the web for other IoT device and gateway creators.
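As a rough illustration of what that standardized use of the web looks like in practice, here is a sketch of reading and toggling a device property over a gateway's REST API. The gateway address, thing ID, property name, and token are placeholders, and the endpoint layout follows our reading of the draft Web Thing API specification, so check the spec for the authoritative shape.

```typescript
// Sketch: read and flip a smart plug's "on" property through a WebThings
// Gateway's HTTP API. Address, thing ID, property name, and token are
// placeholders; the endpoint layout follows the draft Web Thing API spec.
const GATEWAY = "https://gateway.local";   // or your private HTTPS subdomain
const TOKEN = "<local access token>";      // issued by your own gateway
const THING_ID = "smart-plug-1";           // hypothetical thing ID

async function togglePlug(): Promise<void> {
  const headers = {
    Authorization: `Bearer ${TOKEN}`,
    "Content-Type": "application/json",
  };
  const url = `${GATEWAY}/things/${THING_ID}/properties/on`;

  // Read the current state, then write back the opposite value. Nothing here
  // leaves the local network unless you choose to use the tunnel.
  const current = await (await fetch(url, { headers })).json();
  await fetch(url, {
    method: "PUT",
    headers,
    body: JSON.stringify({ on: !current.on }),
  });
}

togglePlug().catch(console.error);
```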

While this is one way we are making strides to decentralize, there are complementary projects, ranging from conceptual to developmental stages, with similar aims to put power back into the hands of users. Signals from other players, such as FreedomBox Foundation, Daplie, and Douglass, indicate that individuals, households, and communities are seeking the means to govern their own data.

By focusing on people first, Mozilla WebThings gives people back their choice: whether it’s about how private they want their data to be or which devices they want to use with their system.


This project is an ongoing effort. If you want to learn more or get involved, check out the Mozilla WebThings Documentation, contribute to our documentation, or get started on your own web things or Gateway.

If you live in the Bay Area, you can find us this weekend at Maker Faire Bay Area (May 17-19). Stop by our table. Or follow @mozillaiot to learn about upcoming workshops and demos.


hacks.mozilla.org: TLS 1.0 and 1.1 Removal Update

tl;dr Enable support for Transport Layer Security (TLS) 1.2 today!

 

As you may have read last year in the original announcement posts, Safari, Firefox, Edge and Chrome are removing support for TLS 1.0 and 1.1 in March of 2020. If you manage websites, this means there’s less than a year to enable TLS 1.2 (and, ideally, 1.3) on your servers; otherwise, all major browsers will display error pages rather than the content your users were expecting to find.

Screenshot of a Secure Connection Failed error page

In this article we provide some resources to check your sites’ readiness, and start planning for a TLS 1.2+ world in 2020.

Check the TLS “Carnage” list

Once a week, the Mozilla Security team runs a scan on the Tranco list (a research-focused top sites list) and generates a list of sites still speaking TLS 1.0 or 1.1, without supporting TLS ≥ 1.2.

Tranco list top sites with TLS <= 1.1

As of this week, there are just over 8,000 affected sites from the one million listed by Tranco.

There are a few potential gotchas to be aware of, if you do find your site on this list:

  • 4% of the sites are using TLS ≤ 1.1 to redirect from a bare domain (https://example.com) to www (https://www.example.com) on TLS ≥ 1.2 (or vice versa). If you were to only check your site post-redirect, you might miss a potential footgun.
  • 2% of the sites don’t redirect from bare to www (or vice versa), but do support TLS ≥ 1.2 on one of them.

The vast majority (94%), however, are just bad—it’s TLS ≤ 1.1 everywhere.

If you find that a site you work on is in the TLS “Carnage” list, you need to come up with a plan for enabling TLS 1.2 (and 1.3, if possible). However, this list only covers 1 million sites. Depending on how popular your site is, you might have some work to do even if you’re not listed by Tranco.

Run an online test

Even if you’re not on the “Carnage” list, it’s a good idea to test your servers all the same. There are a number of online services that will do some form of TLS version testing for you, but only a few will flag not supporting modern TLS versions in an obvious way. We recommend using one or more of the following:

Check developer tools

Another way to do this is to open up Firefox (versions 68+) or Chrome (versions 72+) DevTools and look for the following warnings in the console as you navigate around your site.

Firefox DevTools console warning

Chrome DevTools console warning
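If you'd rather check from a script than a browser, one approach is to attempt two handshakes, one capped at TLS 1.1 and one requiring at least TLS 1.2. The sketch below uses Node's built-in tls module; the host name is a placeholder.

```typescript
// Probe which TLS versions a server accepts using Node's tls module.
// The host below is a placeholder; run this against your own servers.
import * as tls from "tls";

function probe(host: string, minVersion: tls.SecureVersion, maxVersion: tls.SecureVersion): Promise<string | null> {
  return new Promise((resolve) => {
    const socket = tls.connect({ host, port: 443, servername: host, minVersion, maxVersion }, () => {
      const protocol = socket.getProtocol(); // e.g. "TLSv1.2"
      socket.end();
      resolve(protocol);
    });
    socket.on("error", () => resolve(null)); // handshake failed in this version range
  });
}

async function main(): Promise<void> {
  const host = "example.com";
  const modern = await probe(host, "TLSv1.2", "TLSv1.3");
  const legacy = await probe(host, "TLSv1", "TLSv1.1");
  console.log(`${host} supports TLS 1.2+: ${modern ? `yes (${modern})` : "NO"}`);
  console.log(`${host} still accepts TLS <= 1.1: ${legacy ? `yes (${legacy})` : "no"}`);
}

main();
```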

What’s Next?

This October, we plan on disabling old TLS in Firefox Nightly, and you can expect the same for Chrome and Edge Canaries. We hope this will give sites enough time to upgrade before the change affects their release users.


Mozilla VR Blog: Spoke, now on the Web


Spoke, the editor that lets you create 3D scenes for use in Hubs, is now available as a fully featured web app. When we announced the beta for Spoke back in October, it was the first step towards making the process of creating social VR spaces easier for everyone. At Mozilla, we believe in the power of the web, so it was a natural decision to make Spoke more widely available by putting the editor entirely online - no downloads required.

The way that we communicate is often guided by the spaces that we are in. We use our understanding of the environment to give us cues to the tone of the room, and understanding how to build environments that reflect different use cases for social collaboration is an important element of how we view the Hubs platform. With Spoke, we want everyone to have the creative control over their rooms from the ground (plane) up.

We’re constantly impressed by the content that 3D artists and designers create, and we think that Spoke is the next step in making it easier for everyone to learn how to make their own 3D environments. Spoke isn’t trying to replace the wide range of 3D modeling and animation software out there; we just want to make it easier to bring all of that awesome work into one place so that more people can build with the media all of these artists have shared so generously with the world.


When it comes to building rooms for Hubs, we want everyone to feel empowered to create custom spaces regardless of whether or not they already have experience building 3D environments. Spoke aims to make it as simple as possible to find, add, and remix what’s already out there on the web under the Creative Commons license. You can bring in your own glTF models, find models on Sketchfab and Google Poly, and publish your rooms to Hubs with just a single click, which makes it easier than ever before to create shared spaces. In addition to 3D models, the Spoke editor allows you to bring in images, gifs, and videos for your spaces. Create your own themed rooms in Hubs for movie nights, academic meetups, your favorite Twitch streams: the virtual world is yours to create!


Spoke works by providing an easy-to-use interface to consolidate content into a glTF 2.0 binary (.glb) file that is read by Hubs to generate the base environment for a room. Basing Spoke on glTF, an extensible, standardized 3D file format, makes it possible to export robust scenes that could also be used in other 3D applications or programs.

Once you’ve created scenes in Spoke, you can access them from your Projects page and use them as a base for making new rooms in Hubs. You can keep your scenes private for your own use, or publish them and make them available through the media browser for anyone in Hubs to enjoy. In the future, we plan on adding functionality that allows you to remix environments that others have marked as being open to derivative works.

We want to hear your feedback as you start using Spoke to create environments - try it out at https://hubs.mozilla.com/spoke. You can view the source code and file issues on the GitHub repo, or jump into our community Discord server and say hi in the #spoke channel! If you have specific thoughts or considerations that you want to share with us, feel free to send us an email at hubs@mozilla.com to get in touch. We can’t wait to see what you make!

SUMO Blog: Introducing Josh and Jeremy to the SUMO team

Today the SUMO team would like to welcome Josh and Jeremy who will be joining our team from Boise, Idaho.

Josh and Jeremy will be joining our team to help out with support for some of the new efforts Mozilla is working on towards creating a connected and integrated Firefox experience.

They will be helping out with new products, but also providing support on forums and social channels, as well as serving as an escalation point for hard-to-solve issues.

A bit about Josh:

Hey everyone! My name is Josh Wilson and I will be working as a contractor for Mozilla. I have been working in a variety of customer support and tech support jobs over the past ten years. I enjoy camping and hiking during the summers, and playing console RPG’s in the winters. I recently started cooking Indian food, but this has been quite the learning curve for me. I am so happy to be a part of the Mozilla community and look forward to offering my support.

A bit about Jeremy:

Hello! My name is Jeremy Sanders and I’m a contractor of Mozilla through a small company named PartnerHero. I’ve been working in the field of Information Technology since 2015 and have been working with a variety of government, educational, and private entities. In my free time, I like to get out of the office and go fly fishing, camping, or hiking. I also play quite a few video games such as Counterstrike: Global Offensive and League of Legends. I am very excited to start my time here with Mozilla and begin working in conjunction with the community to provide support for users!

Please say hi to them when you see them!

The Mozilla Blog: Google’s Ad API is Better Than Facebook’s, But…

… with a few important omissions. Google’s tool meets four of experts’ five minimum standards

 

Last month, Mozilla released an analysis of Facebook’s ad archive API, a tool that allows researchers to understand how political ads are being targeted to Facebook users. Our goal: To determine if Facebook had fulfilled its promise to make political advertising more transparent. (It did not.)

Today, we’re releasing an analysis of Google’s ad archive API. Google also promised the European Union it would release an ad transparency tool ahead of the 2019 EU Parliament elections.

Our finding: Google’s API is a lot better than Facebook’s, but is still incomplete. Google’s API meets four of experts’ five minimum standards. (Facebook met two.)

Google does much better than Facebook in providing access to the data in a format that allows for real research and analysis. That is a hugely important requirement; this is a baseline researchers need. But while the data is usable, it isn’t complete. Google doesn’t provide data on the targeting criteria advertisers use, making it more difficult to determine whom people are trying to influence or how information is really spreading across the platform.

Below are the specifics of our Google API analysis:

[1] ✅

Researchers’ guideline: A functional, open API should have comprehensive political advertising content.

Google’s API: The full list of ads, campaigns, and advertisers are available, and can be searched and filtered. The entire database can be downloaded in bulk and analyzed at scale. There are shortcomings, however: There is no data on the audience the ads reached, like their gender, age, or region. And Google has included fewer ads in their database than Facebook, perhaps due to a narrower definition of “political ads.”


[2] ❌

Researchers’ guideline: A functional, open API should provide the content of the advertisement and information about targeting criteria.

Google’s API: While Google’s API does provide the content of the advertisements, like Facebook, it provides no information on targeting criteria, nor does the API provide engagement data (e.g., clicks). Targeting and engagement data is critical for researchers because it lets them see what types of users an advertiser is trying to influence, and whether or not their attempts were successful.


[3] ✅

Researchers’ guideline: A functional, open API should have up-to-date and historical data access.

Google’s API: The API appears to be up to date.


[4] ✅

Researchers’ guideline: A functional, open API should be accessible to and shareable with the general public.

Google’s API: Public access to the API is available through the Google Cloud Public Datasets program.


[5] ✅

Researchers’ guideline: A functional, open API should empower, not limit, research and analysis.

Google’s API: The tool has components that facilitate research, like: bulk download capabilities; no problematic bandwidth limits; search filters; and unique URLs for ads.
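Because access comes through the Public Datasets program (see [4] above), researchers can analyze the archive with standard BigQuery tooling. Here is a rough sketch; the dataset, table, and column names are our assumptions and should be verified against Google's documentation, and running it requires your own Google Cloud credentials.

```typescript
// Sketch: query Google's political ads archive via BigQuery.
// Dataset/table/column names are assumptions to verify against the docs;
// authentication uses your own Google Cloud project credentials.
import { BigQuery } from "@google-cloud/bigquery";

async function topSpenders(): Promise<void> {
  const bigquery = new BigQuery();
  const query = `
    SELECT advertiser_name, spend_usd
    FROM \`bigquery-public-data.google_political_ads.advertiser_stats\`
    ORDER BY spend_usd DESC
    LIMIT 10`;
  const [rows] = await bigquery.query({ query });
  for (const row of rows) {
    console.log(`${row.advertiser_name}: $${row.spend_usd}`);
  }
}

topSpenders().catch(console.error);
```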

 

Overall: While the company gets a passing grade, Google doesn’t sufficiently allow researchers to study disinformation on its platform. The company also significantly delayed the release of their API, unveiling it only weeks before the upcoming EU elections and nearly two months after the originally promised deadline.

With the EU elections fewer than two weeks away, we hope Google (and Facebook) take action swiftly to improve their ad APIs — action that should have been taken months ago.


SUMO Blog: SUMO/Firefox Accounts integration

One of Mozilla’s goals is to deepen relationships with our users and better connect them with our products. For support, this means integrating Firefox Accounts (FxA) as the authentication layer on support.mozilla.org.

What does this mean?

Currently, support.mozilla.org uses its own auth/login system, where users log in with a username and password. We will replace this system with Firefox Accounts, and both users and contributors will be asked to connect their existing profiles to FxA.

This will not just help align support.mozilla.org with other Mozilla products but also be a vehicle for users to discover FxA and its many benefits.

In order to achieve this we are looking at the following milestones (the dates are tentative):

Transition period (May-June)

We will start with a transition period where users can log in using both their old username/password as well as Firefox Accounts. During this period new users registering to the site will only be able to create an account through Firefox Accounts. Existing users will get a recommendation to connect their Firefox Account through their existing profile but they will still be able to use their old username/password auth method if they wish. Our intention is to have banners across the site that will let users know about the change and how to switch to Firefox Accounts. We will also send email communications to active users (logged in at least once in the last 3 years).

Switching to Firefox Accounts will also bring a small change to our AAQ (Ask a Question) flow. Currently, when users go through the Ask a Question flow they are prompted to log in or create an account in the middle of the flow (which is a bit of a frustrating experience). As we’re switching to Firefox Accounts and that login experience will no longer work, we will be moving the login/sign-up step to the beginning of the flow, meaning users will have to log in before they can go through the AAQ. During the transition period, non-authenticated users will not be able to use the AAQ flow. This will return to normal during the Soft Launch period.

Soft Launch (end of June)

After the transition period we will enter a so-called “Soft Launch” period, where we integrate the full new login/sign-up experience and do the fine-tuning. By this time the AAQ flow should have been updated, and non-authenticated users will be able to use it again. We will also send more emails to active users who haven’t made the switch yet and continue showing banners on the site to inform people of the change.

Full Launch (July-August)

If the testing periods above go well, we should be ready to do the full switch in July or August. This means that no old SUMO logins will be accepted and all users will be asked to switch over to Firefox Accounts. We will also do a final round of communications.

Please note: As we’re only changing the authentication mechanism we don’t expect there to be any changes to your existing profile, username and contribution history. If you do encounter an issue please reach out to Madalina or Tasos (or file a bug through Bugzilla).

We’re excited about this change, but are also aware that we might encounter a few bumps on the way. Thank you for your support in making this happen.

If you want to help out, as always you can follow our progress on Github and/or join our weekly calls.

SUMO staff team

The Mozilla Blog: What we do when things go wrong

We strive to make Firefox a great experience. Last weekend we failed, and we’re sorry.

An error on our part prevented new add-ons from being installed, and stopped existing add-ons from working. Now that we’ve been able to restore this functionality for the majority of Firefox users, we want to explain a bit about what happened and tell you what comes next.

Add-ons are an important feature of Firefox. They enable you to customize your browser and add valuable functionality to your online experience. We know how important this is, which is why we’ve spent a great deal of time over the past few years coming up with ways to make add-ons safer and more secure. However, because add-ons are so powerful, we’ve also worked hard to build and deploy systems to protect you from malicious add-ons. The problem here was an implementation error in one such system, with the failure mode being that add-ons were disabled. Although we believe that the basic design of our add-ons system is sound, we will be working to refine these systems so similar problems do not occur in the future.

In order to address this issue as quickly as possible, we used our “Studies” system to deploy the initial fix, which requires users to be opted in to Telemetry. Some users who had opted out of Telemetry opted back in, in order to get the initial fix as soon as possible. As we announced in the Firefox Add-ons blog at 2019-05-08T23:28:00Z, there is no longer a need to have Studies enabled to receive updates; please check that your settings match your personal preferences before we re-enable Studies, which will happen sometime after 2019-05-13T16:00:00Z. In order to respect our users’ potential intentions as much as possible, based on our current setup, we will be deleting all of our source Telemetry and Studies data for our entire user population collected between 2019-05-04T11:00:00Z and 2019-05-11T11:00:00Z.

Our CTO, Eric Rescorla, shares more about what happened technically in this post.

We would like to extend our thanks to the people who worked hard to address this issue, including the hundred or so community members and employees localizing content and answering questions on https://support.mozilla.org/, Twitter, and Reddit.

There’s a lot more detail we will be sharing as part of a longer post-mortem which we will make public — including details on how we went about fixing this problem and why we chose this approach. You deserve a full accounting, but we didn’t want to wait until that process was complete to tell you what we knew so far. We let you down and what happened might have shaken your confidence in us a bit, but we hope that you’ll give us a chance to earn it back.


hacks.mozilla.org: Technical Details on the Recent Firefox Add-on Outage

Editor’s Note: May 9, 8:22 pt – Updated as follows: (1) Fixed verb tense (2) Clarified the situation with downstream distros. For more detail, see Bug 1549886.

Recently, Firefox had an incident in which most add-ons stopped working. This was due to an error on our end: we let one of the certificates used to sign add-ons expire which had the effect of disabling the vast majority of add-ons. Now that we’ve fixed the problem for most users and most people’s add-ons are restored, I wanted to walk through the details of what happened, why, and how we repaired it.

Background: Add-Ons and Add-On Signing

Although many people use Firefox out of the box, Firefox also supports a powerful extension mechanism called “add-ons”. Add-ons allow users to add third party features to Firefox that extend the capabilities we offer by default. Currently there are over 15,000 Firefox add-ons with capabilities ranging from blocking ads to managing hundreds of tabs.

Firefox requires that all add-ons that are installed be digitally signed. This requirement is intended to protect users from malicious add-ons by requiring some minimal standard of review by Mozilla staff. Before we introduced this requirement in 2015, we had serious problems with malicious add-ons.

The way that the add-on signing works is that Firefox is configured with a preinstalled “root certificate”. That root is stored offline in a hardware security module (HSM). Every few years it is used to sign a new “intermediate certificate” which is kept online and used as part of the signing process. When an add-on is presented for signature, we generate a new temporary “end-entity certificate” and sign that using the intermediate certificate. The end-entity certificate is then used to sign the add-on itself. Shown visually, this looks like this:

Diagram showing the digital signature workflow from Root to Add-on

Note that each certificate has a “subject” (to whom the certificate belongs) and an “issuer” (the signer). In the case of the root, these are the same entity, but for other certificates, the issuer of a certificate is the subject of the certificate that signed it.

An important point here is that each add-on is signed by its own end-entity certificate, but nearly all add-ons share the same intermediate certificate [1]. It is this certificate that encountered a problem: Each certificate has a fixed period during which it is valid. Before or after this window, the certificate won’t be accepted, and an add-on signed with that certificate can’t be loaded into Firefox. Unfortunately, the intermediate certificate we were using expired just after 1AM UTC on May 4, and immediately every add-on that was signed with that certificate became unverifiable and could not be loaded into Firefox.
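To illustrate the validity-window idea (this is not Firefox's internal verification code, just a small sketch using Node's crypto module and a placeholder file path), you can read a certificate's notBefore/notAfter dates and compare them to the clock, which is exactly the check the expired intermediate started failing:

```typescript
// Illustration of a certificate's validity window using Node's crypto module.
// This is not Firefox's verifier; it just shows the notBefore/notAfter check
// that the expired intermediate began to fail.
import { X509Certificate } from "crypto";
import { readFileSync } from "fs";

const pem = readFileSync("intermediate.pem", "utf8"); // placeholder path
const cert = new X509Certificate(pem);

const now = new Date();
const notBefore = new Date(cert.validFrom);
const notAfter = new Date(cert.validTo);

console.log(`subject: ${cert.subject}`);
console.log(`issuer:  ${cert.issuer}`);
if (now < notBefore || now > notAfter) {
  console.log("certificate is outside its validity window; signatures will not verify");
} else {
  console.log(`certificate is valid until ${notAfter.toISOString()}`);
}
```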

Although add-ons all expired around midnight, the impact of the outage wasn’t felt immediately. The reason for this is that Firefox doesn’t continuously check add-ons for validity. Rather, all add-ons are checked about every 24 hours, with the time of the check being different for each user. The result is that some people experienced problems right away, some people didn’t experience them until much later. We at Mozilla first became aware of the problem around 6PM Pacific time on Friday May 3 and immediately assembled a team to try to solve the issue.

Damage Limitation

Once we realized what we were up against, we took several steps to try to avoid things getting any worse.

First, we disabled signing of new add-ons. This was sensible at the time because we were signing with a certificate that we knew was expired. In retrospect, it might have been OK to leave it up, but it also turned out to interfere with the “hardwiring a date” mitigation we discuss below (though we eventually didn’t use that approach), so it’s good we preserved the option. Signing is now back up.

Second, we immediately pushed a hotfix which suppressed re-validating the signatures on add-ons. The idea here was to avoid breaking users who hadn’t re-validated yet. We did this before we had any other fix, and have removed it now that fixes are available.

Working in Parallel

In theory, fixing a problem like this looks simple: make a new, valid certificate and republish every add-on with that certificate. Unfortunately, we quickly determined that this wouldn’t work for a number of reasons:

  1. There are a very large number of add-ons (over 15,000) and the signing service isn’t optimized for bulk signing, so just re-signing every add-on would take longer than we wanted.
  2. Once add-ons were signed, users would need to get a new add-on. Some add-ons are hosted on Mozilla’s servers and Firefox would update those add-ons within 24 hours, but users would have to manually update any add-ons that they had installed from other sources, which would be very inconvenient.

Instead, we focused on trying to develop a fix which we could provide to all our users with little or no manual intervention.

After examining a number of approaches, we quickly converged on two major strategies which we pursued in parallel:

  1. Patching Firefox to change the date which is used to validate the certificate. This would make existing add-ons magically work again, but required shipping a new build of Firefox (a “dot release”).
  2. Generate a replacement certificate that was still valid and somehow convince Firefox to accept it instead of the existing, expired certificate.

We weren’t sure that either of these would work, so we decided to pursue them in parallel and deploy the first one that looked like it was going to work. At the end of the day, we ended up deploying the second fix, the new certificate, which I’ll describe in some more detail below.

A Replacement Certificate

As suggested above, there are two main steps we had to follow here:

  1. Generate a new, valid, certificate.
  2. Install it remotely in Firefox.

In order to understand why this works, you need to know a little more about how Firefox validates add-ons. The add-on itself comes as a bundle of files that includes the certificate chain used to sign it. The result is that the add-on is independently verifiable as long as you know the root certificate, which is configured into Firefox at build time. However, as I said, the intermediate certificate was broken, so the add-on wasn’t actually verifiable.

However, it turns out that when Firefox tries to validate the add-on, it’s not limited to just using the certificates in the add-on itself. Instead, it tries to build a valid chain of certificates starting at the end-entity certificate and continuing until it gets to the root. The algorithm is complicated, but at a high level, you start with the end-entity certificate and then find a certificate whose subject is equal to the issuer of the end-entity certificate (i.e., the intermediate certificate). In the simple case, that’s just the intermediate that shipped with the add-on, but it could be any certificate that the browser happens to know about. If we can remotely add a new, valid, certificate, then Firefox will try that as well. The figure below shows the situation before and after we install the new certificate.

a diagram showing two workflows, before and after we installed a new valid certificate

Once the new certificate is installed, Firefox has two choices for how to validate the certificate chain: use the old, expired certificate (which won’t work) or use the new, valid certificate (which will work). An important feature here is that the new certificate has the same subject name and public key as the old certificate, so that its signature on the end-entity certificate is valid. Fortunately, Firefox is smart enough to try both until it finds a path that works, so the add-on becomes valid again. Note that this is the same logic we use for validating TLS certificates, so it’s relatively well understood code that we were able to leverage.[2]
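To make the path-building idea concrete, here is a minimal TypeScript sketch of the search described above. It is an illustration only, under assumed types and helper names (Certificate, signatureVerifies, the pool of known certificates); Firefox’s real implementation lives in its native certificate verification code and is considerably more involved.

    // Illustrative sketch of certificate path building; not Firefox's actual code.
    interface Certificate {
      subject: string;
      issuer: string;
      notAfter: Date;        // expiry date
      isTrustedRoot: boolean;
    }

    // Assumed helper, stubbed here: does `child` carry a signature that verifies
    // against `parent`'s public key? The old and new intermediates share a subject
    // and public key, so a real check would accept either of them.
    function signatureVerifies(child: Certificate, parent: Certificate): boolean {
      return true; // placeholder: real code checks the cryptographic signature
    }

    // Try to build a chain from the end-entity certificate up to a trusted root.
    // `pool` is every certificate the browser knows about: the ones bundled with
    // the add-on plus anything later added to the local certificate database.
    function buildChain(
      endEntity: Certificate,
      pool: Certificate[],
      now: Date
    ): Certificate[] | null {
      const extend = (chain: Certificate[]): Certificate[] | null => {
        const tip = chain[chain.length - 1];
        if (tip.isTrustedRoot) return chain; // success: reached the configured root
        // Any known certificate whose subject matches the tip's issuer is a
        // candidate, which is why a freshly installed, unexpired intermediate can
        // stand in for the expired one shipped inside the add-on.
        const candidates = pool.filter(
          c => c.subject === tip.issuer && c.notAfter > now && signatureVerifies(tip, c)
        );
        for (const candidate of candidates) {
          const result = extend([...chain, candidate]);
          if (result) return result; // keep trying paths until one works
        }
        return null; // dead end: no valid path from here
      };
      return extend([endEntity]);
    }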

The great thing about this fix is that it doesn’t require us to change any existing add-on. As long as we get the new certificate into Firefox, then even add-ons which are carrying the old certificate will just automatically verify. The tricky bit then becomes getting the new certificate into Firefox, which we need to do automatically and remotely, and then getting Firefox to recheck all the add-ons that may have been disabled.

Normandy and the Studies System

Ironically, the solution to this problem is a special type of add-on called a system add-on (SAO). In order to let us do research studies, we have developed a system called Normandy which lets us serve SAOs to Firefox users. Those SAOs automatically execute in the user’s browser, and while they are usually used for running experiments, they also have extensive access to Firefox internal APIs. Importantly for this case, they can add new certificates to the certificate database that Firefox uses to verify add-ons.[3]

So the fix here is to build a SAO which does two things:

  1. Install the new certificate we have made.
  2. Force the browser to re-verify every add-on so that the ones which were disabled become active.

But wait, you say. Add-ons don’t work, so how do we get it to run? Well, we sign it with the new certificate!
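For a sense of scale, the whole job of that add-on fits in a few lines. The sketch below is hypothetical: the helper names stand in for Firefox-internal, privileged APIs, and this is not the actual hotfix code.

    // Hypothetical sketch of the hotfix system add-on's job. The helpers are
    // stand-ins for Firefox-internal privileged APIs, not real function names.

    // Base64 of the replacement intermediate (same subject and public key as the
    // expired one, but with a new validity period).
    declare const NEW_INTERMEDIATE_BASE64: string;

    declare function addCertToDatabase(base64Cert: string): Promise<void>;
    declare function reverifyAddonSignatures(): Promise<void>;

    async function applyHotfix(): Promise<void> {
      // Step 1: make the new, valid intermediate available to the chain builder.
      await addCertToDatabase(NEW_INTERMEDIATE_BASE64);

      // Step 2: force a signature re-check so add-ons disabled by the expired
      // certificate are validated again and re-enabled.
      await reverifyAddonSignatures();
    }

    applyHotfix().catch(err => console.error("hotfix failed:", err));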

Putting it all together… and what took so long?

OK, so now we’ve got a plan: issue a new certificate to replace the old one, build a system add-on to install it in Firefox, and deploy it via Normandy. Starting from about 6 PM Pacific on Friday, May 3, we were shipping the fix via Normandy by 2:44 AM, less than 9 hours later, and then it took another 6-12 hours before most of our users had it. This is actually quite good from a standing start, but I’ve seen a number of questions on Twitter about why we couldn’t get it done faster. There are a number of steps that were time consuming.

First, it took a while to issue the new intermediate certificate. As I mentioned above, the Root certificate is in a hardware security module which is stored offline. This is good security practice, as you use the Root very rarely and so you want it to be secure, but it’s obviously somewhat inconvenient if you want to issue a new certificate on an emergency basis. At any rate, one of our engineers had to drive to the secure location where the HSM is stored. Then there were a few false starts where we didn’t issue exactly the right certificate, and each attempt cost an hour or two of testing before we knew exactly what to do.

Second, developing the system add-on takes some time. It’s conceptually very simple, but even simple programs require taking some care, and we really wanted to make sure we didn’t make things worse. And before we shipped the SAO, we had to test it, and that takes time, especially because it has to be signed. But the signing system was disabled, so we had to find some workarounds for that.

Finally, once we had the SAO ready to ship, it still takes time to deploy. Firefox clients check for Normandy updates every 6 hours, and of course many clients are offline, so it takes some time for the fix to propagate through the Firefox population. However, at this point we expect that most people have received the update and/or the dot release we did later.

Final Steps

While the SAO that was deployed with Studies should fix most users, it didn’t get to everyone. In particular, there are a number of types of affected users who will need another approach:

  • Users who have disabled either Telemetry or Studies.
  • Users on Firefox for Android (Fennec), where we don’t have Studies.
  • Users of downstream builds of Firefox ESR that don’t opt in to telemetry reporting.
  • Users who are behind HTTPS man-in-the-middle proxies, because our add-on installation systems enforce key pinning for these connections, which proxies interfere with.
  • Users of very old builds of Firefox which the Studies system can’t reach.

We can’t really do anything about the last group — they should update to a new version of Firefox anyway because older versions typically have quite serious unfixed security vulnerabilities. We know that some people have stayed on older versions of Firefox because they want to run old-style add-ons, but many of these now work with newer versions of Firefox. For the other groups we have developed a patch to Firefox that will install the new certificate once people update. This was released as a “dot release” so people will get it — and probably have already — through the ordinary update channel. If you have a downstream build, you’ll need to wait for your build maintainer to update.

We recognize that none of this is perfect. In particular, in some cases, users lost data associated with their add-ons (an example here is the “multi-account containers” add-on).

We were unable to develop a fix that would avoid this side effect, but we believe this is the best approach for the most users in the short term. Long term, we will be looking at better architectural approaches for dealing with this kind of issue.

Lessons

First, I want to say that the team here did amazing work: they built and shipped a fix in less than 12 hours from the initial report. As someone who sat in the meeting where it happened, I can say that people were working incredibly hard in a tough situation and that very little time was wasted.

With that said, obviously this isn’t an ideal situation and it shouldn’t have happened in the first place. We clearly need to adjust our processes both to make this kind of incident less likely to happen and to make such incidents easier to fix.

We’ll be running a formal post-mortem next week and will publish the list of changes we intend to make, but in the meantime here are my initial thoughts about what we need to do. First, we should have a much better way of tracking the status of everything in Firefox that is a potential time bomb and making sure that we don’t find ourselves in a situation where one goes off unexpectedly. We’re still working out the details here, but at minimum we need to inventory everything of this nature.

Second, we need a mechanism to quickly push updates to our users even when — especially when — everything else is down. It was great that we were able to use the Studies system, but it was also an imperfect tool that we pressed into service, and that had some undesirable side effects. In particular, we know that many users have auto-updates enabled but would prefer not to participate in Studies, and that’s a reasonable preference (true story: I had it off as well!), but at the same time we need to be able to push updates to our users; whatever the internal technical mechanisms, users should be able to opt in to updates (including hotfixes) but opt out of everything else. Additionally, the update channel should be more responsive than what we have today. Even on Monday, we still had some users who hadn’t picked up either the hotfix or the dot release, which clearly isn’t ideal. There’s been some work on this problem already, but this incident shows just how important it is.

Finally, we’ll be looking more generally at our add-on security architecture to make sure that it’s enforcing the right security properties at the least risk of breakage.

We’ll be following up next week with the results of a more thorough post-mortem, but in the meantime, I’ll be happy to answer questions by email at ekr-blog@mozilla.com.

 

[1] A few very old add-ons were signed with a different intermediate.

[2] Readers who are familiar with the WebPKI will recognize that this is also the way that cross-certification works.
[3] Technical note: we aren’t adding the certificate with any special privileges; it gets its authority by being signed for the root. We’re just adding it to the pool of certificates which can be used by Firefox. So, it’s not like we are adding a new privileged certificate to Firefox.

The post Technical Details on the Recent Firefox Add-on Outage appeared first on Mozilla Hacks - the Web developer blog.

The Mozilla Thunderbird Blog: WeTransfer File Transfer Now Available in Thunderbird

WeTransfer’s file-sharing service is now available within Thunderbird for sending large files (up to 2GB) for free, without signing up for an account.

Even better, sharing large files can be done without leaving the composer. While writing an email, just attach a large file and you will be prompted to choose whether you want to use Filelink, which lets you share the file via a download link rather than as a direct attachment. Via this prompt you can select WeTransfer.

Filelink prompt in Thunderbird


You can also enable Filelink through the Preferences menu, under the Attachments section, on the Outgoing tab. Click “Add…” and choose “WeTransfer” from the drop-down menu.

WeTransfer in Preferences

Once WeTransfer is set up in Thunderbird, it will be the default Filelink method for files over the size threshold you have specified (you can see that it is set to 5MB in the screenshot above).

WeTransfer and Thunderbird are both excited to be able to work together on this great feature for our users. The Thunderbird team thinks that this will really improve the experience of collaboration and sharing for our users.

WeTransfer is also proud of this feature. Travis Brown, WeTransfer VP of Business Development, says of the collaboration:

“Mozilla and WeTransfer share similar values. We’re focused on the user and on maintaining our user’s privacy and an open internet. We’ll continue to work with their team across multiple areas and put privacy at the front of those initiatives.”

We hope that all our users will give this feature a try and enjoy being able to share the files they want with co-workers, friends, and family – easily.

QMO: Firefox 67 Beta 16 Testday Results

Hello Mozillians!

As you may already know, last Friday, May 3rd, we held a new Testday event for Firefox 67 Beta 16.

Thank you all for helping us make Mozilla a better place: Rok Žerdin, Fernando Espinoza, Kamila Kamciatek.

Results: Several test cases were executed for Track Changes M2 and WebExtensions compatibility & support.

Thanks for another awesome testday, we appreciate your contribution! 🙂

We hope to see you all in our next events, keep an eye on QMO. We will make announcements as soon as something shows up!

Mozilla India: Firefox is preventing existing and new add-ons from being installed

Latest updates:

 

  • Some users are reporting that they do not see the “hotfix-update-xpi-signing-intermediate-bug-1548973” study active in “about:studies”. Rather than using work-arounds, which can lead to problems later on, we strongly recommend that you continue to wait. If it is possible for you to receive the hotfix, you should get it by 6 AM EDT, 24 hours after it was first released. For everyone else, we are working on a permanent solution. (May 5, 00:54 EDT)
  • There are a number of work-arounds being discussed in the community. These are not recommended as they may conflict with the fixes we are deploying. We will let you know when updates that we recommend are available, and we appreciate your patience. (May 4, 15:01 EDT)

 

Late on Friday, May 3rd, we became aware of an issue that was preventing existing and new add-ons in Firefox from being installed or from running. We apologize for any inconvenience this has caused our Firefox users.

Our team has identified the issue and rolled out a fix for all Firefox users on Release, Beta and Nightly. The fix will be applied automatically in the background within a few hours. No steps need to be taken to make add-ons work again.

To provide this fix on short notice, we are using the Studies system. This system is enabled by default, and no action is needed unless Studies have been disabled. Firefox users can check whether they have Studies enabled as follows:

* Firefox Options/Preferences -> Privacy & Security -> Allow Firefox to install and run studies (scroll down to find the setting)

* Studies can be disabled again after the add-ons have been re-enabled.

 

It may take up to 6 hours for the Study to be applied to Firefox. To check whether the fix has been applied, you can enter “about:studies” in the location bar. If the fix is active, you will see “hotfix-update-xpi-signing-intermediate-bug-1548973”.

You may also see “hotfix-reset-xpi-verification-timestamp-1548973”, which is part of the fix and may appear in either the Active studies or Completed studies section.

We are working on a general fix that will not use the Studies system and will keep the blog post linked below updated.

We will keep updating this post about our efforts and will share information about a permanent fix in the coming days.

For more information, read: https://blog.mozilla.org/addons/2019/05/04/update-regarding-add-ons-in-firefox/?fbclid=IwAR0RfJTcs-SCyWeorZ-7IOpLBLEJA1Fi5TnLgiMkDixZMkO59jo1_3AwAbc

 

Additional sources of information:

– Chandan Kumar Baba

The Mozilla Blog: The Firefox EU Elections Toolkit helps you to prevent pre-vote online manipulation

What comes to your mind when you hear the term ‘online manipulation’? In the run-up to the EU parliamentary elections at the end of May, you probably think first and foremost of disinformation. But what about technical ways to manipulate voters on the internet? Although they are becoming more and more popular because they are so difficult to recognize and therefore particularly successful, they probably don’t come to mind first. Quite simply because they have not received much public attention so far. Firefox tackles this issue today: the ‘Firefox EU Elections Toolkit’ not only provides important background knowledge and tips – designed to be easily understood by non-techies – but also tools to enable independent online research and decision-making.

Manipulation on the web: ‘fake news’ isn’t the main issue (anymore)

Few other topics have been so present in public perception in recent years, so comprehensively discussed in everyday life, news and science, and yet have been demystified as little as disinformation. Also commonly referred to as ‘fake news’, it’s defined as “deliberate disinformation or hoaxes spread via traditional print and broadcast news media or online social media.” Right now, so shortly before the next big elections at the end of May, the topic seems to be bubbling up once more: According to the European Commission’s Eurobarometer, 73 percent of Internet users in Europe are concerned about disinformation in the run-up to the EU parliamentary elections.

However, research also shows that the public debate about disinformation is extensive and detailed, which significantly increases awareness of the ‘threat’. The fact that more and more initiatives against disinformation and fact-checking actors have been sprouting up for some time now – and that governments are getting involved, too – may be related to the zeitgeist, or connected to individuals’ impression that they are constantly confronted with ‘fake news’ and cannot protect themselves on their own.

It’s important to take action against disinformation. Also, users who research the elections and potential candidates on the Internet, for example, should definitely stay critical and cautious. After all, clumsy disinformation campaigns are still taking place, revealing some of the downsides of a global, always available Internet; and they even come with a wide reach and rapid dissemination. Countless actors, including journalists, scientists and other experts now agree that the impact of disinformation is extremely limited and traditional news is still the primary and reliable source of information. This does not, however, mean that the risk of manipulation has gone away; in fact, we must make sure to stay alert and not close our eyes to new, equally problematic forms of manipulation, which have just been less present in the media and science so far. At Firefox we understand that this may require some support – and we’re happy to provide it today.

A toolkit for well-informed voters

Tracking has recently been a topic of discussion in the context of intrusive advertising, big data and GDPR. To refresh your memory: when browsing from site to site, users’ personal information may be collected through scripts or widgets on the websites. These are called trackers. Many people don’t like that user information collected through trackers is used for advertising, often without their knowledge (find more info here). But there’s another issue far fewer people are aware of and which hasn’t been widely discussed so far: user data can also be used for manipulation attempts, micro-targeted at specific groups or individuals. We believe that this needs to change – and in order to make that happen, more people need to hear about it.

Firefox is committed to an open and free Internet that provides access to independent information to everyone. That’s why we’ve created the ‘Firefox EU Elections Toolkit’: a website where users can find out how tracking and opaque online advertising influence their voting behavior and how they can easily protect themselves – through browser add-ons and other tools. Additionally, disinformation and the voting process are well represented on the site. The toolkit is now available online in English, German and French. No previous technical or policy-related knowledge is required. Among other things, the toolkit contains:

  • background information on how tracking, opaque election advertising and other questionable online activities affect people on the web, including a short, easy-to-digest video.
  • selected information about the EU elections as well as the EU as an institution – only using trustworthy sources.
  • browser extensions, checked on and recommended by Firefox, that support independent research and opinion making.

Make an independent choice when it matters the most

Of course, manipulation on the web is not only relevant in times of major political votes. With the forthcoming parliamentary elections, however, we find ourselves in an exceptional situation that calls for practical measures, not least because there might be greater interest in the election, the programmes, parties and candidates than in recent years: more and more EU citizens are realizing how important the five-yearly parliamentary election is; the demands on parliamentarians are rising; and last but not least, there are numerous new voters again this May for whom internet issues play an important role, but who need to find out about the election, its background and consequences.

Firefox wants to make sure that everyone has the chance to make informed choices. That detailed technical knowledge is not mandatory for getting independent information. And that the internet, with all of its many advantages and (almost) unlimited possibilities, is open and available to everyone, independent of demographics. Firefox fights for you.

The post The Firefox EU Elections Toolkit helps you to prevent pre-vote online manipulation appeared first on The Mozilla Blog.

Mozilla Add-ons Blog: Add-ons disabled or failing to install in Firefox

Incident summary

Updates – Last updated 14:35 PST May 14, 2019. We expect this to be our final update.

  • If you are running Firefox versions 61 – 65 and 1) did not receive the deployed fix and 2) do not want to update to the current version (which includes the permanent fix): Install this extension to resolve the expired security certificate issue and re-enable extensions and themes.
  • If you are running Firefox versions 57 – 60: Install this extension to resolve the expired security certificate issue and re-enable extensions and themes.
  • If you are running Firefox versions 47 – 56: Install this extension to resolve the expired security certificate issue and re-enable extensions and themes.
  • A less technical blog post about the outage is also available. If you enabled telemetry to get the initial fix, we’re deleting all data collected since May 4. (May 9, 17:04 EDT)
  • Mozilla CTO Eric Rescorla posted a blog on the technical details of what went wrong last weekend. (May 9, 16:20 EDT)
  • We’ve released Firefox 66.0.5 for Desktop and Android, and Firefox ESR 60.6.3, which include the permanent fix for re-enabling add-ons that were disabled starting on May 3rd. The initial, temporary fix that was deployed May 4th through the Studies system is replaced by these updates, and we recommend updating as soon as possible. Users who enabled Studies to receive the temporary fix, and have updated to the permanent fix, can now disable Studies if they desire. For users who cannot update to the latest version of Firefox or Firefox ESR, we plan to distribute an update that automatically applies the fix to versions 52 through 60. This fix will also be available as a user-installable extension. For anyone still experiencing issues in versions 61 through 65, we plan to distribute a fix through a user-installable extension. These extensions will not require users to enable Studies, and we’ll provide an update when they are available. (May 8, 19:28 EDT)
  • Firefox 66.0.5 has been released, and we recommend that people update to that version if they continue to experience problems with extensions being disabled. You’ll get an update notification within 24 hours, or you can initiate an update manually. An update to ESR 60.6.3 is also available as of 16:00 UTC May 8th. We’re continuing to work on a fix for older versions of Firefox, and will update this post and on social media as we have more information. (May 8, 11:51 EDT)
  • A Firefox release has been pushed — version 66.0.4 on Desktop and Android, and version 60.6.2 for ESR. This release repairs the certificate chain to re-enable web extensions, themes, search engines, and language packs that had been disabled (Bug 1549061). There are remaining issues that we are actively working to resolve, but we wanted to get this fix out before Monday to lessen the impact of disabled add-ons before the start of the week. More information about the remaining issues can be found by clicking on the links to the release notes above. (May 5, 16:25 EDT)
  • Some users are reporting that they do not have the “hotfix-update-xpi-signing-intermediate-bug-1548973” study active in “about:studies”. Rather than using work-arounds, which can lead to issues later on, we strongly recommend that you continue to wait. If it’s possible for you to receive the hotfix, you should get it by 6am EDT, 24 hours after it was first released. For everyone else, we are working to ship a more permanent solution. (May 5, 00:54 EDT)
  • There are a number of work-arounds being discussed in the community. These are not recommended as they may conflict with fixes we are deploying. We’ll let you know when further updates are available that we recommend, and appreciate your patience. (May 4, 15:01 EDT)
  • Temporarily disabled commenting on this post given volume and duplication. They’ll be re-enabled as more updates become available. (May 4, 13:02 EDT)
  • Updated the post to clarify that deleting extensions can result in data loss, and should not be used to attempt a fix. (May 4, 12:58 EDT)
  • Clarified that the study may appear in either the Active studies or Completed studies of “about:studies” (May 4, 12:10 EDT)
  • We’re aware that some users are reporting that their extensions remain disabled with both studies active. We’re tracking this issue on Bugzilla in bug 1549078. (May 4, 12:03 EDT)
  • Clarified that the Studies fix applies only to Desktop users of Firefox distributed by Mozilla. Firefox ESR, Firefox for Android, and some versions of Firefox included with Linux distributions will require separate updates. (May 4, 12:03 EDT)


Late on Friday May 3rd, we became aware of an issue with Firefox that prevented existing and new add-ons from running or being installed. We are very sorry for the inconvenience caused to people who use Firefox.

Our team identified and rolled out a temporary fix for all Firefox Desktop users on Release, Beta and Nightly. The fix will be automatically applied in the background within 24 hours. No active steps need to be taken to make add-ons work again. In particular, please do not delete and/or re-install any add-ons as an attempt to fix the issue. Deleting an add-on removes any data associated with it, whereas disabling and re-enabling does not.

Please note: The fix does not apply to Firefox ESR or Firefox for Android. We’re working on releasing a fix for both, and will provide updates here and on social media.

To provide this fix on short notice, we are using the Studies system. This system is enabled by default, and no action is needed unless Studies have been disabled. Firefox users can check if they have Studies enabled by going to:

  • Firefox Options/Preferences -> Privacy & Security -> Allow Firefox to install and run studies (scroll down to find the setting)

  • Studies can be disabled again after the add-ons have been re-enabled

It may take up to six hours for the Study to be applied to Firefox. To check if the fix has been applied, you can enter “about:studies” in the location bar. If the fix is active, you’ll see “hotfix-update-xpi-signing-intermediate-bug-1548973” in either the Active studies or Completed studies section.

You may also see “hotfix-reset-xpi-verification-timestamp-1548973” listed, which is part of the fix and may be in the Active studies or Completed studies section(s).

We are working on a general fix that doesn’t use the Studies system and will keep this blog post updated accordingly. We will share a more substantial update in the coming days.

Additional sources of information:

The post Add-ons disabled or failing to install in Firefox appeared first on Mozilla Add-ons Blog.

Mozilla Add-ons Blog: Add-on Policy and Process Updates

As part of our ongoing work to make add-ons safer for Firefox users, we are updating our Add-on Policy to help us respond faster to reports of malicious extensions. The following is a summary of the changes, which will go into effect on June 10, 2019.

  • We will no longer accept extensions that contain obfuscated code. We will continue to allow minified, concatenated, or otherwise machine-generated code as long as the source code is included. If your extension is using obfuscated code, it is essential to submit a new version by June 10th that removes it to avoid having it rejected or blocked.
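To illustrate the distinction the policy draws (these snippets are invented for this post, not taken from any real extension): minified code is mechanically shortened but reviewable once the original source is provided, whereas obfuscated code deliberately hides what it does.

    // Original, readable source: what reviewers need to be able to inspect.
    function isAllowedDomain(url: string): boolean {
      return new URL(url).hostname.endsWith(".example.org");
    }

    // Minified: machine-generated from the source above and still acceptable,
    // as long as that original source is submitted alongside it.
    //   function i(u){return new URL(u).hostname.endsWith(".example.org")}

    // Obfuscated: strings and control flow are deliberately disguised so a
    // reviewer cannot tell what the code does without reverse-engineering it;
    // this is no longer accepted.
    //   var _0x4a=["\x68\x6f\x73\x74\x6e\x61\x6d\x65","\x65\x6e\x64\x73\x57\x69\x74\x68"];
    //   function i(u){/* strings decoded and logic dispatched at runtime */}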

We will also be clarifying our blocking process. Add-on or extension blocking (sometimes referred to as “blocklisting”) is a method for disabling extensions or other third-party software that has already been installed by Firefox users.

  • We will be blocking extensions more proactively if they are found to be in violation of our policies. We will be casting a wider net, and will err on the side of user security when determining whether or not to block.
  • We will continue to block extensions that intentionally violate our policies or that contain critical security vulnerabilities, and will also act on extensions that compromise user privacy or circumvent user consent or control.

You can preview the policy and blocking process documents and ensure your extensions abide by them to avoid any disruption. If you have questions about these updated policies or would like to provide feedback, please post to this forum thread.

 

May 4, 2019 9:09 AM PST update: A certificate expired yesterday and has caused add-ons to stop working or fail to install. This is unrelated to the policy changes. We will be providing updates about the certificate issue in other posts on this blog.

9:55 am PST: Because a lot of comments on this post are related to the certificate issue, we are temporarily turning off comments for this post. 

The post Add-on Policy and Process Updates appeared first on Mozilla Add-ons Blog.

Mozilla Add-ons Blog: May’s featured extensions


Pick of the Month: Google Translator for Firefox

by nobzol
Sleek translation tool. Just highlight text, hit the toolbar icon and your translation appears right there on the web page itself. You can translate selected text (up to 1100 characters) or the entire page.

Bonus feature: the context menu presents an option to search your highlighted word or phrase on Wikipedia.

“Very easy to use, correct translation of all texts.”

Featured: Google Container

by Perflyst
Isolate your Google identity into a container. Make it difficult for Google to track your moves around the web.

(NOTE: Though similarly titled to Mozilla’s Facebook Container and Multi-Account Containers, this extension is not affiliated with Mozilla.)

“Thanks a lot for making this. Works great! I’m only sorry I did not find this extension sooner.”

The post May’s featured extensions appeared first on Mozilla Add-ons Blog.

hacks.mozilla.org: Owning it: browser compatibility data and open source governance

What does it mean to “own” an open-source project? With the browser-compat-data project (“BCD”), the MDN (Mozilla Developer Network) community and I recently had the opportunity to find out.

In 2017, the MDN Web Docs team invited me to work on what was described to me as a small, but growing project (previously on Hacks). The little project had a big goal: to provide detailed and reliable structured data about what Web platform features are supported by different browsers. It sounded ambitious, but my part was narrow: convert hand-written HTML compatibility tables on MDN into structured JSON data.

As a technical writer and consultant, it was an unusual project to get to work on. Ordinarily, I look at data and code and use them to write words for people. For BCD, I worked in the opposite direction: reading what people wrote and turning it into structured data for machines. But I think I was most excited at the prospect of working on an open source project with a lot of reach, something I’d never done before.

Plus the project appealed to my sense of order and tidiness. Back then, most of the compatibility tables looked something like this:

A screenshot of a cluttered, inconsistent table of browser support for the CSS linear-gradient feature

In their inconsistent state, they couldn’t be updated in bulk and couldn’t be redesigned without modifying thousands upon thousands of pages on MDN. Instead, we worked to liberate the data in the tables to a structured, validated JSON format that we could publish in an npm package. With this change, new tables could be generated and other projects could use the data too.

A screenshot of a tidy, organized table of browser support for the CSS linear-gradient feature
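For a sense of what consuming that package looks like, here is a small sketch; the package name and property paths reflect my understanding of the data layout around this time and may have changed in later releases, so treat them as assumptions.

    // Sketch of reading compat data from the npm package (Node.js assumed).
    // Package name and property paths are assumptions based on the project's
    // layout at the time of writing.
    const bcd = require("mdn-browser-compat-data");

    // Each feature carries a __compat block with per-browser support information.
    const { support } = bcd.css.properties["justify-content"].__compat;

    for (const browser of ["chrome", "firefox", "safari"]) {
      // A browser's entry may be an array when the support history has several ranges.
      const entry = Array.isArray(support[browser]) ? support[browser][0] : support[browser];
      // version_added is a version string, true (supported, version unknown),
      // false (not supported), or null (unknown).
      console.log(`${browser}: version_added = ${entry.version_added}`);
    }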

Since then, the project has grown considerably. If there was a single inflection point, it was the Hack on MDN event in Paris, where we met in early 2018 to migrate more tables, build new tools, and play with the data. In the last year and a half, we’ve accomplished so many things, including replacing the last of the legacy tables on MDN with shiny, new BCD-derived tables, and seeing our data used in Visual Studio Code.

Building a project to last

We couldn’t have built BCD into what it is now without the help of the hundreds of new contributors that have joined the project. But some challenges have come along with that growth. My duties shifted from copying data into the repository to reviewing others’ contributions, learning about the design of the schema, and hacking on supporting tools. I had to learn so much about being a thoughtful, helpful guide for new and established contributors alike. But the increased size of the project also put new demands on the project as a whole.

Florian Scholz, the project leader, took on answering a question key to the long-term sustainability of the project: how do we make sure that contributors can be more than mere inputs, and can really be part of the project? To answer that question, Florian wrote and helped us adopt a governance document that defines how any contributor—not just MDN staff—can become an owner of the project.

Inspired by the JS Foundation’s Technical Advisory Committee, the ESLint project, and others, BCD’s governance document lays out how contributors can become committers (known as peers), how important decisions are made by the project leaders (known as owners), and how to become an owner. It’s not some stuffy rule book about votes and points of order; it speaks to the project’s ambition of being a community-led project.

Since adopting the governance document, BCD has added new peers from outside Mozilla, reflecting how the project has grown into a cross-browser community. For example, Joe Medley, a technical writer at Google, has joined us to help add and confirm data about Google Chrome. We’ve also added one new owner: me.

If I’m being honest, not much has changed: peers and owners still review pull requests, still research and add new data, and still answer a lot of questions about BCD, just as before. But with the governance document, we know what’s expected and what we can do to guide others on the journey to project ownership, like I experienced. It’s reassuring to know that as the project grows so too will its leadership.

More to come

We accomplished a lot in the past year, but our best work is ahead. In 2019, we have an ambitious goal: get 100% real data for Firefox, Internet Explorer, Edge, Chrome, Safari, mobile Safari, and mobile Chrome for all Web platform features. That means data about whether or not any feature in our data set is supported by each browser and, if it is, in what version it first appeared. If we achieve our goal, BCD will be an unparalleled resource for Web developers.

But we can’t achieve this goal on our own. We need to fill in the blanks, by testing and researching features, updating data, verifying pull requests, and more. We’d love for you to join us.

The post Owning it: browser compatibility data and open source governance appeared first on Mozilla Hacks - the Web developer blog.

Mozilla VR Blog: Firefox Reality coming to SteamVR

Firefox Reality coming to SteamVR

We are excited to announce that we’re working with Valve to bring the immersive web to SteamVR!

This January, we announced that we were bringing the Firefox Reality experience to desktop devices and the Vive stores. Since then, collaborating closely with Valve, we have been working to also bring Firefox Reality to the SteamVR immersive experience. In the coming months, users will be offered a way to install Firefox Reality via a new web dashboard button, and then launch a browser window over any OpenVR experience.

With a few simple clicks, users will be able to access web content such as tips or guides or stream a Twitch comment channel without having to exit their immersive experiences. In addition, users will be able to log into their Firefox account once, and access synced bookmarks and cookies across both Firefox and Firefox Reality — no need to log in twice!

Firefox Reality coming to SteamVR

We are excited to collaborate with Valve and release Firefox for SteamVR this summer.

Mozilla Gfx Team: WebRender newsletter #44

WebRender is a GPU-based 2D rendering engine for the web, written in Rust, currently powering Mozilla’s research web browser Servo and on its way to becoming Firefox‘s rendering engine.

WebRender on Linux in Firefox Nightly

Right after the previous newsletter was published, Andrew and Jeff enabled WebRender for Linux users on Intel integrated GPUs with Mesa 18.2 or newer on Nightly if their screen resolution is 3440×1440 or less.
We decided to start with Mesa thanks to the quality of its drivers. Users with 4K screens will have to wait a little longer though (or enable WebRender manually), as there are a number of specific optimizations we want to do before we are comfortable running WebRender on these very high resolution screens. While most recent discrete GPUs can stomach just about anything we throw at them, integrated GPUs operate on a much tighter budget and compete with the CPU for memory bandwidth. 4K screens are real little memory-bandwidth-eating monsters.

WebRender roadmap

Jessie put together a roadmap of the WebRender project and other graphics endeavors based on the items discussed during the week in Toronto.
It gives a good idea of the topics that we are focusing on for the coming months.

A week in Toronto – Part deux

In the previous newsletter I went over a number of the topics that we discussed during the graphics team’s last get-together in Toronto. Let’s continue here.

WebRender on Android

We went over a number of the items on WebRender’s Android TODO-list. Getting WebRender to work at all on Android is one thing in itself, requiring a lot of platform-specific low-level glue code, which Sotaro has been steadily improving lately.

On top of that come more questions:

  • Which portion of the Android user population supports the OpenGL features that WebRender relies on?
  • Which OpenGL features could we stop relying on in order to cover more users?
  • What do we do about the remaining users whose devices have such a small OpenGL feature set that we don’t plan to bring them WebRender in the foreseeable future?

Among the features that WebRender currently relies heavily on but which are (surprisingly) not universally supported in this day and age:

  • texture arrays,
  • float 32 textures,
  • texture fetches in vertex shaders,
  • instancing.

We discussed various workarounds. Some of them will be easy to implement, some harder, some will come at a cost, and some we are not sure will provide an acceptable user experience. As it turns out, building a modern rendering engine while also targeting devices that are anything but modern is quite a challenge, who would have thought!
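To give a feel for the kind of capability checks involved, here is a loose analogy from web content: probing for roughly comparable features through WebGL in TypeScript. WebRender queries the native OpenGL ES driver directly rather than WebGL, so this is only an illustration of why feature detection and fallbacks matter, not how WebRender actually does it.

    // Rough analogy only: WebGL probes for features loosely comparable to the
    // OpenGL ES capabilities listed above. WebRender's real checks go through
    // the native driver, not WebGL.
    const canvas = document.createElement("canvas");
    const gl2 = canvas.getContext("webgl2");
    const gl = gl2 ?? canvas.getContext("webgl");

    if (!gl) {
      console.log("No WebGL available on this device");
    } else {
      const report = {
        // Texture arrays and instancing are core in WebGL2 (GL ES 3.0).
        textureArrays: Boolean(gl2),
        instancing: Boolean(gl2) || Boolean(gl.getExtension("ANGLE_instanced_arrays")),
        // Sampling 32-bit float textures needs an extension on WebGL1.
        float32Textures: Boolean(gl2) || Boolean(gl.getExtension("OES_texture_float")),
        // Vertex shaders can only fetch from textures if the driver exposes
        // sampler units to the vertex stage.
        vertexTextureFetch: gl.getParameter(gl.MAX_VERTEX_TEXTURE_IMAGE_UNITS) > 0,
      };
      console.log("feature support:", report);
    }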

Frame scheduling

Rendering a frame, from a change of layout triggered by some JavaScript to photons flying out of the screen, goes through a long pipeline. Sometimes some steps in that pipeline take longer than we would want, but other parts of the pipeline absorb and hide the issue and all is mostly fine. Sometimes, though, slowdowns in particular places with the wrong timing can cause a chain of bad interactions, resulting in a back and forth between rapid bursts of a few frames and a couple of missed frames, as parts of the system oscillate between throttling themselves on and off.

I am describing this in the abstract because the technical description of how and why this can happen in Gecko is complicated. It’s a big topic that impacts the design of a lot of pieces in Firefox’s rendering engine. We talked about this and came up with some short and long term potential improvements.

Intel 4K performance

I mentioned this towards the beginning of this post. Integrated GPUs tend to be more limited in, well, most things, but most importantly in memory bandwidth, which is exacerbated by sharing RAM with the CPU. Jeff and Markus made the observation that when high resolution screens don’t fit in the integrated GPU’s dedicated caches, it can be significantly faster to split the screen into a few large regions and render them one by one. This comes at the cost of batch breaks and an increased number of draw calls; however, restricting rendering to smaller portions of the screen gives the GPU a more cache-friendly workload than rendering the entire screen in a single pass.

This approach is interestingly similar to the way tiled GPUs common on mobile devices work.
On top of that there are some optimizations that we want to investigate to reduce the number of batch breaks caused by text on platforms that do not support dual-source blending, as well as a continued investigation into what is slow specifically on Intel devices.

Other topics

We went over a number of other technical topics such as WebRender’s threading architecture, gory details of support for backface-visibility, where to get the best Thai food in downtown Toronto, and more. I won’t cover them here because they are somewhat hard and/or boring to explain (or because I wasn’t involved enough in the topics to do them justice on this blog).

In conclusion

It’s been a very useful and busy week. The graphics team will meet next in Whistler in June along with the rest of Mozilla. By then Firefox 67 will have shipped, enabling WebRender for a subset of Windows users in the release channel, which is a huge milestone for us.

Enabling WebRender in Firefox Nightly

In about:config, enable the pref gfx.webrender.all and restart the browser.

Reporting bugs

The best place to report bugs related to WebRender in Firefox is the Graphics :: WebRender component in bugzilla.

Note that it is possible to log in with a github account.

Using WebRender in a Rust project

WebRender is available as a standalone crate on crates.io (documentation)

QMO: Firefox 67 Beta 16 Testday, May 3rd

Hello Mozillians,

We are happy to let you know that on Friday, May 3rd, we are organizing the Firefox 67 Beta 16 Testday. We’ll be focusing our testing on Track Changes M2 and WebExtensions compatibility & support.

Check out the detailed instructions via this etherpad.

No previous testing experience is required, so feel free to join us on #qa IRC channel where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better!

See you on Friday! 🙂

The Mozilla Blog: $2.4 Million in Prizes for Schools Teaching Ethics Alongside Computer Science

Omidyar Network, Mozilla, Schmidt Futures, and Craig Newmark Philanthropies are announcing the Stage I winners of our Responsible Computer Science Challenge

 

Today, we are announcing the first winners of the Responsible Computer Science Challenge. We’re awarding $2.4 million to 17 initiatives that integrate ethics into undergraduate computer science courses.

The winners’ proposed curricula are novel: They include in-class role-playing games to explore the impact of technology on society. They embed philosophy experts and social scientists in computer science classes. They feature “red teams” that probe students’ projects for possible negative societal impacts. And they have computer science students partner with local nonprofits and government agencies.

The winners will receive awards of up to $150,000, and they span the following categories: public university, private university, liberal arts college, community college, and Jesuit university. Stage I winners are located across 13 states, with computer science programs ranging in size from 87 students to 3,650 students.

The Responsible Computer Science Challenge is an ambitious initiative by Omidyar Network, Mozilla, Schmidt Futures, and Craig Newmark Philanthropies. It aims to integrate ethics and responsibility into undergraduate computer science curricula and pedagogy at U.S. colleges and universities.

Says Kathy Pham, computer scientist and Mozilla Fellow co-leading the Challenge: “Today’s computer scientists write code with the potential to affect billions of people’s privacy, security, equality, and well-being. Technology today can influence what journalism we read and what political discussions we engage with; whether or not we qualify for a mortgage or insurance policy; how results about us come up in an online search; whether we are released on bail or have to stay; and so much more.”

Pham continues: “These 17 winners recognize that power, and take crucial steps to integrate ethics and responsibility into core courses like algorithms, compilers, computer architecture, neural networks, and data structures. Furthermore, they will release their materials and methodology in the open, allowing other individuals and institutions to adapt and use them in their own environment, broadening the reach of the work. By deeply integrating ethics into computer science curricula and sharing the content openly, we can create more responsible technology from the start.”

Says Yoav Schlesinger, principal at Omidyar Network’s Tech and Society Lab co-leading the Challenge: “Revamping training for the next generation of technologists is critical to changing the way tech is built now and into the future. We are impressed with the quality of submissions and even more pleased to see such outstanding proposals awarded funding as part of Stage I of the Responsible Computer Science Challenge. With these financial resources, we are confident that winners will go on to develop exciting, innovative coursework that will not only be implemented at their home institutions, but also scaled to additional colleges and universities across the country.”

Challenge winners are announced in two stages: Stage I (today), for concepts that deeply integrate ethics into existing undergraduate computer science courses, either through syllabi changes or teaching methodology adjustments. Stage I winners receive up to $150,000 each to develop and pilot their ideas. Stage II (summer 2020) supports the spread and scale of the most promising approaches developed in Stage I. In total, the Challenge will award up to $3.5 million in prizes.

The winners announced today were selected by a panel of 19 independent judges from universities, community organizations, and the tech industry. Judges deliberated over the course of three weeks.

The Winners

(School | Location | Principal Investigator)

Allegheny College | Meadville, PA | Oliver Bonham-Carter 

While studying fields like artificial intelligence and data analytics, students will investigate potential ethical and societal challenges. For example: They might interrogate how medical data is analyzed, used, or secured. Lessons will include relevant readings, hands-on activities, and talks from experts in the field.

 

Bemidji State University | Bemidji, MN | Marty J. Wolf, Colleen Greer

The university will lead workshops that guide faculty at other institutions in developing and implementing responsible computer science teaching modules. The workshops will convene not just computer science faculty, but also social science and humanities faculty.

 

Bowdoin College | Brunswick, ME | Stacy Doore

Computer science students will participate in “ethical narratives laboratories,” where they experiment with and test the impact of technology on society. These laboratories will include transformative engagement with real and fictional narratives including case studies, science fiction readings, films, shows, and personal interviews.

 

Columbia University | New York, NY | Augustin Chaintreau

This approach integrates ethics directly into the computer science curriculum, rather than making it a stand-alone course. Students will consult and engage with an “ethical companion” that supplements a typical course textbook, allowing ethics to be addressed immediately alongside key concepts. The companion provides examples, case studies, and problem sets that connect ethics with topics like computer vision and algorithm design.

 

Georgetown University | Washington, DC | Nitin Vaidya

Georgetown’s computer science department will collaborate with the school’s Ethics Lab to create interactive experiences that illuminate how ethics and computer science interact. The goal is to introduce a series of active-learning engagements across a semester-long arc into selected courses in the computer science curriculum.

 

Georgia Institute of Technology | Atlanta, GA | Ellen Zegura

This approach embeds social responsibility into the computer science curriculum, starting with the introductory courses. Students will engage in role-playing games (RPGs) to examine how a new technology might impact the public. For example: How facial recognition or self-driving cars might affect a community.

 

Harvard University | Cambridge, MA | Barbara Grosz

Harvard will expand the open-access resources of its Embedded EthiCS program which pairs computer science faculty with philosophy PhD students to develop ethical reasoning modules that are incorporated into courses throughout the computer science curriculum. Computer science postdocs will augment module development through design of activities relevant to students’ future technology careers.

 

Miami Dade College | Miami, FL | Antonio Delgado

The college will integrate social impact projects and collaborations with local nonprofits and government agencies into the computer science curriculum. Computer science syllabi will also be updated to include ethics exercises and assignments.

 

Northeastern University | Boston, MA | Christo Wilson

This initiative will embed an ethics component into the university’s computer science, cybersecurity, and data science programs. The ethics component will include lectures, discussion prompts, case studies, exercises, and more. Students will also have access to a philosophy faculty advisor with expertise in information and data ethics.

 

Santa Clara University | Santa Clara, CA | Sukanya Manna, Shiva Houshmand, Subramaniam Vincent

This initiative will help CS students develop a deliberative ethical analysis framework that complements their technical learning. It will develop software engineering ethics, cybersecurity ethics, and data ethics modules, with integration of case studies and projects. These modules will also be adapted into free MOOC materials, so other institutions worldwide can benefit from the curriculum.

 

University of California, Berkeley | Berkeley, CA | James Demmel, Cathryn Carson

This initiative integrates a “Human Contexts and Ethics Toolkit” into the computer science/data science curriculum. The toolkit helps students discover when and how their work intersects with social power structures. For example: bias in data collection, privacy impacts, and algorithmic decision making.

 

University at Buffalo | Buffalo, NY | Atri Rudra

In this initiative, freshmen studying computer science will discuss ethics in the first-year seminar “How the internet works.” Sophomores will study responsible algorithmic development for real-world problems. Juniors will study the ethical implications of machine learning. And seniors will incorporate ethical thinking into their capstone course.

 

University of California, Davis | Davis, CA | Annamaria (Nina) Amenta, Gerardo Con Díaz, and Xin Liu

Computer science students will be exposed to social science and humanities while pursuing their major, culminating in a “conscientious” senior project. The project will entail developing technology while assessing its impact on inclusion, privacy, and other factors, and there will be opportunities for projects with local nonprofits or government agencies.

 

University of Colorado, Boulder | Boulder, CO | Casey Fiesler

This initiative integrates an ethics component into introductory programming classes, and features an “ethics fellows program” that embeds students with an interest in ethics into upper division computer science and technical classes.

 

University of Maryland, Baltimore County | Baltimore, MD | Helena Mentis

This initiative uses three avenues to integrate ethics into the computer science curriculum: peer discussions on how technologies might affect different populations; negative implications evaluations, i.e. “red teams” that probe the potential negative societal impacts of students’ projects; and a training program to equip teaching assistants with ethics and equality literacy.

 

University of Utah | Salt Lake City, UT | Suresh Venkatasubramanian, Sorelle A. Friedler (Haverford College), Seny Kamara (Brown University)

Computer science students will be encouraged to apply problem solving and critical thinking not just to design algorithms, but also the social issues that their algorithms intersect with. For example: When studying bitcoin mining algorithms, students will focus on energy usage and environmental impact. The curriculum will be developed with the help of domain experts who have expertise in sustainability, surveillance, criminal justice, and other issue areas.

 

Washington University | St. Louis, MO | Ron Cytron

Computer science students will participate in “studio sessions,” or group discussions that unpack how their technical education and skills intersect with issues like individual privacy, data security, and biased algorithms.

 


The Responsible Computer Science Challenge is part of Mozilla’s mission to empower the people and projects on the front lines of internet health work. Learn more about Mozilla Awards.

Launched in October 2018, the Responsible Computer Science Challenge, incubated at Omidyar Network’s Tech and Society Solutions Lab, is part of Omidyar Network’s growing efforts to mitigate the unintended consequences of technology on our social fabric, and ensure products are responsibly designed and brought to market.

The post $2.4 Million in Prizes for Schools Teaching Ethics Alongside Computer Science appeared first on The Mozilla Blog.

about:community: Firefox 67 new contributors

With the release of Firefox 67, we are pleased to welcome the 75 developers who contributed their first code change to Firefox in this release, 66 of whom were brand new volunteers! Please join us in thanking each of these diligent and enthusiastic individuals, and take a look at their contributions:

The Mozilla Blog: Facebook’s Ad Archive API is Inadequate

Facebook’s tool meets only two of experts’ five minimum standards. That’s a failing grade.

 

Facebook pledged in February to release an ad archive API, in order to make political advertising on the platform more transparent. The company finally released this API in late March — and we’ve been doing a review to determine if it is up to snuff.

While we appreciate Facebook following through on its commitment to make the ad archive API public, its execution on the API leaves something to be desired. The European Commission also hinted at this last week in its analysis when it said that “further technical improvements” are necessary.

The fact is, the API doesn’t provide necessary data. And it is designed in ways that hinders the important work of researchers, who inform the public and policymakers about the nature and consequences of misinformation.

Last month, Mozilla and more than sixty researchers published five guidelines we hoped Facebook’s API would meet. Facebook’s API fails to meet three of these five guidelines. It’s too early to determine if it meets the two other guidelines. Below is our analysis:

[1] ❌

Researchers’ guideline: A functional, open API should have comprehensive political advertising content.

Facebook’s API: It’s impossible to determine if Facebook’s API is comprehensive, because it requires you to use keywords to search the database. It does not provide you with all ad data and allow you to filter it down using specific criteria or filters, the way nearly all other online databases do. And since you cannot download data in bulk and ads in the API are not given a unique identifier, Facebook makes it impossible to get a complete picture of all of the ads running on their platform (which is exactly the opposite of what they claim to be doing).
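To illustrate the constraint, here is a rough sketch of the query loop a researcher ends up writing. The endpoint, parameter, and field names are assumptions based on my reading of Facebook’s Graph API documentation at the time; check Facebook’s own docs before relying on them.

    // Sketch of querying the ad archive by keyword (Node 18+ or a browser, for
    // fetch); endpoint, parameter and field names are assumptions based on
    // Facebook's Graph API docs at the time.
    const ACCESS_TOKEN = "<access-token>"; // placeholder
    const BASE = "https://graph.facebook.com/v3.3/ads_archive";

    // With no bulk export and no stable ad identifiers, researchers must guess
    // keywords and paginate through each result set, hoping the union of searches
    // approximates "all political ads"; that is exactly the problem described above.
    async function searchAds(keyword: string): Promise<unknown[]> {
      const params = new URLSearchParams({
        search_terms: keyword,
        ad_reached_countries: "['DE']",
        fields: "page_name,ad_creative_body,ad_delivery_start_time",
        access_token: ACCESS_TOKEN,
      });
      const results: unknown[] = [];
      let url: string | undefined = `${BASE}?${params.toString()}`;
      while (url) {
        const page = await (await fetch(url)).json();
        results.push(...(page.data ?? []));
        url = page.paging?.next; // rate limits make long crawls take months
      }
      return results;
    }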


[2] ❌

Researchers’ guideline: A functional, open API should provide the content of the advertisement and information about targeting criteria.

Facebook’s API: The API provides no information on targeting criteria, so researchers have no way to tell the audience that advertisers are paying to reach. The API also doesn’t provide any engagement data (e.g., clicks, likes, and shares), which means researchers cannot see how users interacted with an ad. Targeting and engagement data is important because it lets researchers see what types of users an advertiser is trying to influence, and whether or not their attempts were successful.


[3]

Researchers’ guideline: A functional, open API should have up-to-date and historical data access.

Facebook’s API: Ad data will be available in the archive for seven years, which is actually pretty good. Because the API is new and still hasn’t been properly populated, we cannot yet assess whether it is up-to-date, whether bugs will be fixed, or whether Facebook will support long-term studies.


[4]

Researchers’ guideline: A functional, open API should be accessible to and shareable with the general public.

Facebook’s API: This data is now available as part of Facebook’s standard Graph API and governed by Facebook Developers Terms of Service. It is too early to determine what exact constraints this will create for public availability and disclosure of data.


[5] ❌

Researchers’ guideline: A functional, open API should empower, not limit, research and analysis.

Facebook’s API: The current API design puts huge constraints on researchers, rather than allowing them to discover what is really happening on the platform. The limitations in each of these categories, coupled with search rate limits, mean it could take researchers months to evaluate ads in a certain region or on a certain topic.

 

It’s not too late for Facebook to fix its API. We hope they take action soon. And, we hope bodies like the European Commission carefully scrutinize the tool’s shortcomings.

Mozilla will also be conducting an analysis of Google’s ad API when it is released in the coming weeks. Since Facebook’s ad archive API fails to let researchers do their jobs ahead of the upcoming European Parliamentary elections, we hope that Google will step up and deliver an API that enables this important research.

The post Facebook’s Ad Archive API is Inadequate appeared first on The Mozilla Blog.

The Mozilla BlogFirefox and Emerging Markets Leadership

Building on the success of Firefox Quantum, we have a renewed focus on better enabling people to take control of their internet-connected lives, with Firefox as their trusted personal agent — through continued evolution of the browser and web platform — and with new products and services that provide enhanced security, privacy and user agency across connected life.

To accelerate this work, we’re announcing some changes to our senior leadership team:

Dave Camp has been appointed SVP Firefox. In this new role, Dave will be responsible for overall Firefox product and web platform development.

As a long time Mozillian, Dave joined Mozilla in 2006 to work on Gecko, building networking and security features and was a contributor to the release of Firefox 3. After a short stint at a startup he rejoined Mozilla in 2011 as part of the Firefox Developer Tools team. Dave has since served in a variety of senior leadership roles within the Firefox product organization, most recently leading the Firefox engineering team through the launch of Firefox Quantum.

Under Dave’s leadership the new Firefox organization will pull together all product management, engineering, technology and operations in support of our Firefox products, services and web platform. As part of this change, we are also announcing the promotion of Marissa (Reese) Wood to VP Firefox Product Management, and Joe Hildebrand to VP Firefox Engineering. Both Joe and Reese have been key drivers of the continued development of our core browser across platforms, and the expansion of the Firefox portfolio of products and services globally.

In addition, we are increasing our investment and focus in emerging markets. Building on the early success of products like Firefox Lite, which we launched in India earlier this year, we are also formally establishing an emerging markets team based in Taipei:

Stan Leong has been appointed VP and General Manager, Emerging Markets. In this new role, Stan will be responsible for our product development and go-to-market strategy for the region. Stan joins us from DXC Technology, where he was Global Head of Emerging Product Engineering. He has a great combination of start-up and large company experience, having spent years at Hewlett Packard, and he has worked extensively in Asian markets.

As part of this, Mark Mayo, who has served as our Chief Product Officer (CPO), will move into a new role focused on strategic product development initiatives with an initial emphasis on accelerating our emerging markets strategy. We will be conducting an executive search for a CPO to lead the ongoing development and evolution of our global product portfolio.

I’m confident that with these changes, we are well positioned to continue the evolution of the browser and web platform and introduce new products and services that provide enhanced security, privacy and user agency across connected life.

The post Firefox and Emerging Markets Leadership appeared first on The Mozilla Blog.

The Mozilla BlogIt’s Complicated: Mozilla’s 2019 Internet Health Report

Our annual open-source report examines how humanity and the internet intersect. Here’s what we found

 

Today, Mozilla is publishing the 2019 Internet Health Report — our third annual examination of the internet, its impact on society and how it influences our everyday lives.

http://internethealthreport.org/2019/

The Report paints a mixed picture of what life online looks like today. We’re more connected than ever, with humanity passing the ‘50% of us are now online’ mark earlier this year. And, while almost all of us enjoy the upsides of being connected, we also worry about how the internet and social media are impacting our children, our jobs and our democracies.

When we published last year’s Report, the world was watching the Facebook-Cambridge Analytica scandal unfold — and these worries were starting to grow. Millions of people were realizing that widespread, laissez-faire sharing of our personal data, the massive growth and centralization of the tech industry, and the misuse of online ads and social media were adding up to a big mess.

Over the past year, more and more people started asking: what are we going to do about this mess? How do we push the digital world in a better direction?

As people asked these questions, our ability to see the underlying problems with the system — and to imagine solutions — has evolved tremendously. Recently, we’ve seen governments across Europe step up efforts to monitor and thwart disinformation ahead of the upcoming EU elections. We’ve seen the big tech companies try everything from making ads more transparent to improving content recommendation algorithms to setting up ethics boards (albeit with limited effect and with critics saying ‘you need to do much more!’). And, we’ve seen CEOs and policymakers and activists wrestling with each other over where to go next. We have not ‘fixed’ the problems, but it does feel like we’ve entered a new, sustained era of debate about what a healthy digital society should look like.

The 2019 Internet Health Report examines the story behind these stories, using interviews with experts, data analysis and visualization, and original reporting. It was also built with input from you, the reader: In 2018, we asked readers what issues they wanted to see in the next Report.

In the Report’s three spotlight articles, we unpack three big issues: One examines the need for better machine decision making — that is, asking questions like Who designs the algorithms? and What data do they feed on? and Who is being discriminated against? Another examines ways to rethink the ad economy, so surveillance and addiction are no longer design necessities.  The third spotlight article examines the rise of smart cities, and how local governments can integrate tech in a way that serves the public good, not commercial interests.

Of course, the Report isn’t limited to just three topics. Other highlights include articles on the threat of deepfakes, the potential of user-owned social media platforms, pornography literacy initiatives, investment in undersea cables, and the dangers of sharing DNA results online.

So, what’s our conclusion? How healthy is the internet right now? It’s complicated — the digital environment is a complex ecosystem, just like the planet we live on. There have been a number of positive trends in the past year that show that the internet — and our relationship with it — is getting healthier:

Calls for privacy are becoming mainstream. The last year brought a tectonic shift in public awareness about privacy and security in the digital world, in great part due to the Cambridge Analytica scandal. That awareness is continuing to grow — and also translate into action. European regulators, with help from civil society watchdogs and individual internet users, are enforcing the GDPR: In recent months, Google has been fined €50 million for GDPR violations in France, and tens of thousands of violation complaints have been filed across the continent.

There’s a movement to build more responsible AI. As the flaws with today’s AI become more apparent, technologists and activists are speaking up and building solutions. Initiatives like the Safe Face Pledge seek facial analysis technology that serves the common good. And experts like Joy Buolamwini, founder of the Algorithmic Justice League, are lending their insight to influential bodies like the Federal Trade Commission and the EU’s Global Tech Panel.

Questions about the impact of ‘big tech’ are growing. Over the past year, more and more people focused their attention on the fact that eight companies control much of the internet. As a result, cities are emerging as a counterweight, ensuring municipal technology prioritizes human rights over profit — the Cities for Digital Rights Coalition now has more than two dozen participants. Employees at Google, Amazon, and Microsoft are demanding that their employers don’t use or sell their tech for nefarious purposes. And ideas like platform cooperativism and collaborative ownership are beginning to be discussed as alternatives.

On the flipside, there are many areas where things have gotten worse over the past year — or where there are new developments that worry us:

Internet censorship is flourishing. Governments worldwide continue to restrict internet access in a multitude of ways, ranging from outright censorship to requiring people to pay additional taxes to use social media. In 2018, there were 188 documented internet shutdowns around the world. And a new form of repression is emerging: internet slowdowns. Governments and law enforcement restrict access to the point where a single tweet takes hours to load. These slowdowns diffuse blame, making it easier for oppressive regimes to deny responsibility.

Biometrics are being abused. When large swaths of a population don’t have access to physical IDs, digital ID systems have the potential to make a positive difference. But in practice, digital ID schemes often benefit heavy-handed governments and private actors, not individuals. In India, over 1 billion citizens were put at risk by a vulnerability in Aadhaar, the government’s biometric ID system. And in Kenya, human rights groups took the government to court over its soon-to-be-mandatory National Integrated Identity Management System (NIIMS), which is designed to capture people’s DNA information, the GPS location of their home, and more.

AI is amplifying injustice. Tech giants in the U.S. and China are training and deploying AI at a breakneck pace that doesn’t account for potential harms and externalities. As a result, technology used in law enforcement, banking, job recruitment, and advertising often discriminates against women and people of color due to flawed data, false assumptions, and lack of technical audits. Some companies are creating ‘ethics boards’ to allay concerns — but critics say these boards have little or no impact.

When you look at trends like these — and many others across the Report — the upshot is: the internet has the potential both to uplift and connect us. But it also has the potential to harm and tear us apart. This has become clearer to more and more people in the last few years. It has also become clear that we need to step up and do something if we want the digital world to net out as a positive for humanity rather than a negative.

The good news is that more and more people are dedicating their lives to creating a healthier, more humane digital world. In this year’s Report, you’ll hear from technologists in Ethiopia, digital rights lawyers in Poland, human rights researchers from Iran and China, and dozens of others. We’re indebted to these individuals for the work they do every day. And also to the countless people in the Mozilla community — 200+ staff, fellows, volunteers, like-minded organizations — who helped make this Report possible and who are committed to making the internet a better place for all of us.

This Report is designed to be both a reflection and resource for this kind of work. It is meant to offer technologists and designers inspiration about what they might build; to give policymakers context and ideas for the laws they need to write; and, most of all, to provide citizens and activists with a picture of where others are pushing for a better internet, in the hope that more and more people around the world will push for change themselves. Ultimately, it is by more and more of us doing something in our work and our lives that we will create an internet that is open, human and humane.

I urge you to read the Report, leave comments and share widely.

PS. This year, you can explore all these topics through reading “playlists,” curated by influential people in the internet health space like Esra’a Al Shafei, Luis Diaz Carlos, Joi Ito and others.

The post It’s Complicated: Mozilla’s 2019 Internet Health Report appeared first on The Mozilla Blog.

Mozilla VR BlogVoxelJS: Chunking Magic

VoxelJS: Chunking Magic

A couple of weeks ago I relaunched VoxelJS with modern ThreeJS and modules support. Today I'm going to talk a little bit about how VoxelJS works internally, specifically how voxels are represented and drawn. This is the key magic part of a voxel engine, and I owe a tremendous debt to Max Ogden, James Halliday and Mikola Lysenko.

Voxels are represented by numbers in a large three dimensional array. Each number says what type of block goes in that block slot, with 0 representing empty. The challenge is how to represent a potentially infinite set of voxels without slowing the computer to a crawl. The only way to do this is to load just a portion of the world at a time.

The Challenge

We could do this by generating a large structure, say 512 x 512 x 512 blocks, then setting entries in the array whenever the player creates or destroys a block.
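
To make that concrete, here is a minimal sketch (in TypeScript, with hypothetical names rather than VoxelJS's actual internals) of one big grid stored as a flat typed array. Even at one byte per voxel, 512 x 512 x 512 entries is roughly 128 MB, which already hints at why we won't want to keep the whole world resident at once.

// A minimal sketch of naive voxel storage as one flat typed array.
// Names and sizes are illustrative assumptions, not VoxelJS internals.
const SIZE = 512;

class VoxelGrid {
  // One byte per voxel: 0 = empty, any other value = a block type.
  private data = new Uint8Array(SIZE * SIZE * SIZE);

  private index(x: number, y: number, z: number): number {
    return x + SIZE * (y + SIZE * z);
  }

  get(x: number, y: number, z: number): number {
    return this.data[this.index(x, y, z)];
  }

  set(x: number, y: number, z: number, blockType: number): void {
    this.data[this.index(x, y, z)] = blockType;
  }
}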

This process would work, but we have a problem. A grid of numbers is great for representing voxels, but we don’t have a way to draw them. ThreeJS (and by extension, WebGL) only thinks in terms of polygons. So we need a process to turn the voxel grid into a polygon mesh. This process is called meshing.

The simple way to do meshing would be to create a cube object for every voxel in the original data. This would work but it would be incredibly slow. Plus most cubes are inside terrain and would never be seen, but they would still take up memory in the GPU and have to be culled from the view. Clearly we need to find a way to create more efficient meshes.

Even if we have a better way to do the meshing we still have another problem. When the player creates or destroys a block we need to update the data. Updating a single entry in an array is easy, but then we have to recreate the entire polygon mesh. That’s going to be incredibly slow if we must rebuild the entire mesh of the entire world every time the player creates a block, which will happen a lot in a game like Minecraft. VoxelJS has a solution to both of our problems: chunks.

Chunks

The world is divided into large cubes called chunks, each composed of a uniform set of blocks. By default each chunk is 16x16x16 blocks, though you can adjust this depending on your need. Each chunk is represented by an array of block types, just as before, and when any block in the chunk changes, the entire chunk is transformed into a new polygon mesh and sent to the GPU. Because the chunk is so much smaller than before, this shouldn’t take very long. In practice it can be done in less than a few milliseconds on a laptop. Only the single chunk which changed needs to be updated; the rest can stay in GPU memory and just be redrawn every frame (which is something modern GPUs excel at).
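
A rough sketch of the bookkeeping involved, assuming 16x16x16 chunks keyed by their chunk coordinates (the names below are hypothetical, not VoxelJS's real API):

const CHUNK_SIZE = 16;

interface Chunk {
  blocks: Uint8Array; // CHUNK_SIZE^3 block type IDs, 0 = empty
  dirty: boolean;     // true when the mesh needs to be rebuilt
}

// Chunks are looked up by a string key derived from their chunk coordinates.
const chunks = new Map<string, Chunk>();

function chunkKey(x: number, y: number, z: number): string {
  const cx = Math.floor(x / CHUNK_SIZE);
  const cy = Math.floor(y / CHUNK_SIZE);
  const cz = Math.floor(z / CHUNK_SIZE);
  return `${cx},${cy},${cz}`;
}

function setBlock(x: number, y: number, z: number, type: number): void {
  const chunk = chunks.get(chunkKey(x, y, z));
  if (!chunk) return; // outside the currently loaded region

  // Convert world coordinates to local coordinates inside the chunk.
  const lx = ((x % CHUNK_SIZE) + CHUNK_SIZE) % CHUNK_SIZE;
  const ly = ((y % CHUNK_SIZE) + CHUNK_SIZE) % CHUNK_SIZE;
  const lz = ((z % CHUNK_SIZE) + CHUNK_SIZE) % CHUNK_SIZE;
  chunk.blocks[lx + CHUNK_SIZE * (ly + CHUNK_SIZE * lz)] = type;

  // Only this one chunk needs to be re-meshed and re-uploaded to the GPU.
  chunk.dirty = true;
}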

As the player moves around the world VoxelJS maintains a list of chunks that are nearby. This is defined as chunks within a certain radius of the player (also configurable). As chunks go out of the nearby list they are deleted and new chunks are created. Thus at any given time only a small number of chunks are in memory.

This chunking system works pretty well with the naive algorithm that creates a cube for every voxel, but it still ends up using a tremendous amount of GPU memory and can be slow to draw, especially on mobile GPUs. We can do better.

Meshing

The first optimization is to only create cube geometry for voxels that are actually visible from the outside. All internal voxels can be culled. In VoxelJS this is implemented by the CullingMesher in the main src directory.
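
The core of the culling idea can be sketched in a few lines: walk every voxel in the chunk and emit a face only where a solid voxel touches an empty neighbour. This is an illustrative sketch with made-up helper functions, not the actual CullingMesher code.

// Emit geometry only for faces that are exposed to air.
const FACE_NORMALS: Array<[number, number, number]> = [
  [1, 0, 0], [-1, 0, 0],
  [0, 1, 0], [0, -1, 0],
  [0, 0, 1], [0, 0, -1],
];

function cullMeshChunk(
  getBlock: (x: number, y: number, z: number) => number, // 0 = empty
  size: number,
  emitFace: (x: number, y: number, z: number,
             normal: [number, number, number], type: number) => void
): void {
  for (let z = 0; z < size; z++) {
    for (let y = 0; y < size; y++) {
      for (let x = 0; x < size; x++) {
        const type = getBlock(x, y, z);
        if (type === 0) continue; // nothing to draw for empty voxels
        for (const normal of FACE_NORMALS) {
          const [dx, dy, dz] = normal;
          // Faces buried inside solid terrain are skipped entirely.
          if (getBlock(x + dx, y + dy, z + dz) === 0) {
            emitFace(x, y, z, normal, type);
          }
        }
      }
    }
  }
}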

Culling works a lot better than naive meshing, but we could still do better. If there is a 10 x 10 wall it would be represented by 100 cubes (multiplied by another 12 when turned into triangles); yet in theory the wall could be drawn with a single box object.

To handle this case VoxelJS has another algorithm called the GreedyMesher. This produces vastly more efficient meshes and takes only slightly longer to compute than the culling mesh.

There is a downside to greedy meshing, however. Texture mapping is very easy on cubes, especially when using a texture atlas. The same goes for ambient occlusion. However, for arbitrarily sized rectangular shapes the texturing and lighting calculations become a lot more complicated.

Texturing

Once we have the voxels turned into geometry we can upload it to the GPU, but this won’t let us draw different block types differently. We could do this with vertex colors, but for a real Minecraft like environment we need textures. A mesh can only be drawn with a single texture in WebGL. (Actually it is possible to use multiple textures per polygon using Texture Arrays but those are only supported with WebGL 2). In order to render multiple textures on a single mesh, which is required for chunks that have more than one block type, we must combine the different images into a single texture called a texture atlas.

To use a texture atlas we need to tell the GPU which part of the atlas to draw for each triangle in the mesh. VoxelJS does this by adding a subrects attribute to the buffer that holds the vertex data. Then the fragment shader can use this data to calculate the correct UV values for the required sub-texture.
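
As a sketch of the idea (assuming a simple grid-of-tiles atlas layout, which may not match VoxelJS's exact scheme), each block type maps to a sub-rectangle, and the shader remaps a face's local 0-1 texture coordinate into that sub-rectangle:

// Map a block type to a sub-rectangle of the texture atlas.
// Assumes a square atlas made of equally sized tiles.
interface SubRect {
  u: number; // left edge in UV space (0..1)
  v: number; // bottom edge in UV space (0..1)
  w: number; // width in UV space
  h: number; // height in UV space
}

function subRectFor(blockType: number, tilesPerRow: number): SubRect {
  const tile = 1 / tilesPerRow;
  const col = blockType % tilesPerRow;
  const row = Math.floor(blockType / tilesPerRow);
  return { u: col * tile, v: row * tile, w: tile, h: tile };
}

// In the fragment shader the face's local UV (0..1) is then remapped:
//   atlasUV = vec2(u, v) + localUV * vec2(w, h)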

Animated textures

VoxelJS can also handle animated textures for special blocks like water and lava. This is done by uploading all of the animation frames next to each other in the texture atlas. A frameCount attribute is passed to the shader along with a time uniform. The shader uses this information to offset the texture coordinates to get the particular frame that should be drawn.
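
The frame-selection math is simple enough to sketch here (the attribute and uniform names are assumptions; the real shader may differ):

// Given the shared time uniform and the per-block frameCount attribute,
// pick which animation frame to sample from the atlas.
function animationOffsetU(
  time: number,          // seconds, passed to the shader as a uniform
  frameCount: number,    // number of frames packed side by side in the atlas
  frameDuration: number, // seconds per frame (currently the same for all blocks)
  frameWidthUV: number   // width of a single frame in UV space
): number {
  const frame = Math.floor(time / frameDuration) % frameCount;
  // Shift the U coordinate right by a whole number of frames.
  return frame * frameWidthUV;
}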

Flaws

This animation system works pretty well but it does require all animations to have the same speed. To fix this I think we would need to have a speed attribute passed to the shader.

Currently VoxelJS supports both culled and greedy meshing. You can switch between them at runtime by swapping out the mesher and regenerating all chunks. Ambient occlusion lighting works wonderfully with culled meshing but does not currently work correctly with the greedy algorithm. Additionally the texturing code for greedy meshing is far more complicated and less efficient than it should be. All of these are open issues to be fixed.

Getting Involved

I hope this gives you insight into how VoxelJS Next works and how you can customize it to your needs.

If you'd like to get involved I've created a list of issues that are great for people new to the project.

Mozilla L10NL10n report: April edition

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet.

Welcome!

New localizers

Are you a locale leader and want us to include new members in our upcoming reports? Contact us!

New content and projects

What’s new or coming up in Firefox desktop

The deadline to ship localization updates in Firefox 67 is quickly approaching (April 30). Firefox 68 is going to be an ESR version, so it’s particularly important to ship the best localization possible. The deadline for that will be June 25.

The migration to Fluent is progressing steadily, and we are approaching two important milestones:

  • 3 thousand FTL messages.
  • Less than 2 thousand DTD strings.

What’s new or coming up in mobile

Lots of things have been happening on the mobile front, and much more is going to follow shortly.

One of the first things we’d like to call out is that Fenix browser strings have arrived for localization! While this work has been opened up to only a small subset of locales, you can expect us to add more progressively quite soon. More details around this can be found here and here.

We’ve also exposed strings for Firefox Reality, Mozilla’s mixed-reality browser! Also open to only a subset of locales, we expect to be able to add more locales once the in-app locale switcher is in place. Read more about this here.

There are more new and exciting projects coming up in the next few weeks, so as usual, stay tuned to the Dev.l10n mailing list for more announcements!

Concerning existing projects: the Firefox iOS v17 l10n cycle is going to start within the next few days, so keep an eye on your Pontoon folder.

And concerning Fennec, just like for Firefox desktop, the deadline to ship localization updates in Firefox 67 is quickly approaching (April 30). Please read the section above for more details.

What’s new or coming up in web projects

Mozilla.org
  • firefox/whatsnew_67.lang: The page must be fully localized by 3 May to be included in the Firefox 67 product release.
  • navigation.lang: The file has been available on Pontoon for more than 2 months. This newly designed navigation menu will be switched on whether it is localized or not. This means every page you browse on mozilla.org will show the new layout. If the file is not fully localized, you will see the menu mixed with English text.
  • Three new pages will be opened up for localization in select locales: adblocker, browser history and what is a browser. Be on the lookout on Pontoon.

What’s new or coming up in SuMo

What’s new or coming up in Fluent

Fluent Syntax 1.0 has been published! The syntax is now stable. Thanks to everyone who shared their feedback about Fluent in the past; you have made Fluent better for everyone. We published a blog post on Mozilla Hacks with more details about this release and about Fluent in general.

Fluent is already used in over 3000 messages in Firefox, as well as in Firefox Send and Common Voice. If you localize these projects, chances are you already localized Fluent messages. Thanks to the efforts of Matjaž and Adrian, Fluent is already well-supported in Pontoon. We continue to improve the Fluent experience in Pontoon and we’re open to your feedback about how to make it best-in-class.

You can learn more about the Fluent Syntax on the project’s website, through the Syntax Guide, and in the Mozilla localizer documentation. If you want to quickly see it in action, try the Fluent Playground—an online editor with shareable Fluent snippets.

Events

  • Want to showcase an event coming up that your community is participating in? Reach out to any l10n-driver and we’ll include that (see links to emails at the bottom of this report)

Friends of the Lion

  • Kudos to Sonia who introduced Mozilla and Pontoon to her fellow attendees. She ran a short workshop on localization at Dive into Open Source event held in Jalandhar, India in late March. After the event, she onboarded and mentored Anushka, Jasmine, and Sanja who have started contributing to various projects in Punjabi.

Know someone in your l10n community who’s been doing a great job and should appear here? Contact one of the l10n-drivers and we’ll make sure they get a shout-out (see list at the bottom)!

Useful Links

Questions? Want to get involved?

Did you enjoy reading this report? Let us know how we can improve by reaching out to any one of the l10n-drivers listed above.

The Mozilla BlogAndroid Browser Choice Screen in Europe

Today, Google announced a new browser choice screen in Europe. We love an opportunity to show more people our products, like Firefox for Android. Independent browsers that put privacy and security first (like Firefox) are great for consumers and an important part of the Android ecosystem.

There are open questions, though, about how well this implementation of a choice screen will enable people to easily adopt options other than Chrome as their default browser. The details matter, and the true measure will be the impact on competition and ultimately consumers. As we assess the results of this launch on Mozilla’s Firefox for Android, we’ll share our impressions and the impact we see.

The post Android Browser Choice Screen in Europe appeared first on The Mozilla Blog.

hacks.mozilla.orgIntroducing Mozilla WebThings

The Mozilla IoT team is excited to announce that after two years of development and seven quarterly software updates that have generated significant interest from the developer & maker community, Project Things is graduating from its early experimental phase and from now on will be known as Mozilla WebThings.

Mozilla’s mission is to “ensure the Internet is a global public resource, open and accessible to all. An Internet that truly puts people first, where individuals can shape their own experience and are empowered, safe and independent.”

The Mozilla IoT team’s mission is to create a Web of Things implementation which embodies those values and helps drive IoT standards for security, privacy and interoperability.

Mozilla WebThings is an open platform for monitoring and controlling devices over the web, including:

  • WebThings Gateway – a software distribution for smart home gateways focused on privacy, security and interoperability
  • WebThings Framework – a collection of reusable software components to help developers build their own web things

WebThings Gateway UI

 

We look forward to a future in which Mozilla WebThings software is installed on commercial products that can provide consumers with a trusted agent for their “smart”, connected home.

WebThings Gateway 0.8

The WebThings Gateway 0.8 release is available to download from today. If you have an existing Things Gateway it should have automatically updated itself. This latest release includes new features which allow you to privately log data from all your smart home devices, a new alarms capability and a new network settings UI.

Logs

Have you ever wanted to know how many times the door was opened and closed while you were out? Are you curious about energy consumption of appliances plugged into your smart plugs? With the new logs features you can privately log data from all your smart home devices and visualise that data using interactive graphs.

Logs UI

In order to enable the new logging features go to the main menu ➡ Settings ➡ Experiments and enable the “Logs” option.

Experiment Settings

You’ll then see the Logs option in the main menu. From there you can click the “+” button to choose a device property to log, including how long to retain the data.

Add log UI

The time series plots can be viewed by hour, day, or week, and a scroll bar lets users scroll back through time. This feature is still experimental, but viewing these logs will help you understand the kinds of data your smart home devices are collecting and think about how much of that data you are comfortable sharing with others via third party services.

Note: If booting WebThings Gateway from an SD card on a Raspberry Pi, please be aware that logging large amounts of data to the SD card may make the card wear out more quickly!

Alarms

Home safety and security are among the big potential benefits of smart home systems. If one of your “dumb” alarms is triggered while you are at work, how will you know? Even if someone in the vicinity hears it, will they take action? Do they know who to call? WebThings Gateway 0.8 provides a new alarms capability for devices like smoke alarms, carbon monoxide alarms or burglar alarms.

Alarm Capability

This means you can now check whether an alarm is currently active, and configure rules to notify you if an alarm is triggered while you’re away from home.

Network Settings

In previous releases, moving your gateway from one wireless network to another when the previous Wi-Fi access point was still active could not be done without console access and command line changes directly on the Raspberry Pi. With the 0.8 release, it is now possible to re-configure your gateway’s network settings from the web interface. These new settings can be found under Settings ➡ Network.

Network Settings UI

You can either configure the Ethernet port (with a dynamic or static IP address) or re-scan available wireless networks and change the Wi-Fi access point that the gateway is connected to.

WebThings Gateway for Wireless Routers

We’re also excited to share that we’ve been working on a new OpenWrt-based build of WebThings Gateway, aimed at consumer wireless routers. This version of WebThings Gateway will be able to act as a Wi-Fi access point itself, rather than just connect to an existing wireless network as a client.

This is the beginning of a new phase of development of our gateway software, as it evolves into a software distribution for consumer wireless routers. Look out for further announcements in the coming weeks.

Online Documentation

Along with a refresh of the Mozilla IoT website, we have made a start on some online user & developer documentation for the WebThings Gateway and WebThings Framework. If you’d like to contribute to this documentation you can do so via GitHub.

Thank you for all the contributions we’ve received so far from our wonderful Mozilla IoT community. We look forward to this new and exciting phase of the project!

The post Introducing Mozilla WebThings appeared first on Mozilla Hacks - the Web developer blog.

Open Policy & AdvocacyMozilla reacts to European Parliament plenary vote on EU Terrorist Content regulation

This evening lawmakers in the European Parliament voted to adopt the institution’s negotiating position on the EU Terrorist Content regulation.

Here is a statement from Owen Bennett, Internet Policy Manager, reacting to the vote –

As the recent atrocities in Christchurch underscored, terrorism remains a serious threat to citizens and society, and it is essential that we implement effective strategies to combat it. But the Terrorist Content regulation passed today in the European Parliament is not that. This legislation does little but chip away at the rights of European citizens and further entrench the same companies the legislation was aimed at regulating. By demanding that companies of all sizes take down ‘terrorist content’ within one hour, the EU has set a compliance bar that only the most powerful can meet.

Our calls for targeted, proportionate and evidence-driven policy responses to combat the evolving threat were not accepted by the majority, but we will continue to push for a more effective regulation in the next phase of this legislative process. The issue is simply too important to get wrong, and the present shortfalls in the proposal are too serious to let stand.

The post Mozilla reacts to European Parliament plenary vote on EU Terrorist Content regulation appeared first on Open Policy & Advocacy.

hacks.mozilla.orgFluent 1.0: a localization system for natural-sounding translations

Fluent is a family of localization specifications, implementations and good practices developed by Mozilla. It is currently used in Firefox. With Fluent, translators can create expressive translations that sound great in their language. Today we’re announcing version 1.0 of the Fluent file format specification. We’re inviting translation tool authors to try it out and provide feedback.

The Problem Fluent Solves

With almost 100 supported languages, Firefox faces many localization challenges. Using traditional localization solutions, these are difficult to overcome. Software localization has been dominated by an outdated paradigm: translations that map one-to-one to the source language. The grammar of the source language, which at Mozilla is English, imposes limits on the expressiveness of the translation.

Consider the following message which appears in Firefox when the user tries to close a window with more than one tab.

tabs-close-warning-multiple =
    You are about to close {$count} tabs.
    Are you sure you want to continue?

The message is only displayed when the tab count is 2 or more. In English, the word tab will always appear as plural tabs. An English-speaking developer may be content with this message. It sounds great for all possible values of $count.

In English, a single variant of the message is enough for all values of $count.

Many translators, however, will quickly point out that the word tab will take different forms depending on the exact value of the $count variable.

In traditional localization solutions, the onus of fixing this message is on developers. They need to account for the fact that other languages distinguish between more than one plural form, even if English doesn’t. As the number of languages supported in the application grows, this problem scales up quickly—and not well.

  • In some languages, nouns have genders which require different forms of adjectives and past participles. In French, connecté, connectée, connectés and connectées all mean connected.
  • Style guides may require that different terms be used depending on the platform the software runs on. In English Firefox, we use Settings on Windows and Preferences on other systems, to match the wording of the user’s operating system. In Japanese, the difference is starker: some computer-related terms are spelled with a different writing system depending on the user’s OS.
  • The context and the target audience of the application may require adjustments to the copy. In English, software used in accounting may format numbers differently than a social media website. Yet in other languages such a distinction may not be necessary.

There are many grammatical and stylistic variations that don’t map one-to-one between languages. Supporting all of them using traditional localization solutions isn’t straightforward. Some language features require trade-offs in order to support them, or aren’t possible at all.

Asymmetric Localization

Fluent turns the localization landscape on its head. Rather than require developers to predict all possible permutations of complexity in all supported languages, Fluent keeps the source language as simple as it can be.

We make it possible to cater to the grammar and style of other languages, independently of the source language. All of this happens in isolation; the fact that one language benefits from more advanced logic doesn’t require any other localization to apply it. Each localization is in control of how complex the translation becomes.

Consider the Czech translation of the “tab close” message discussed above. The word panel (tab) must take one of two plural forms: panely for counts of 2, 3, and 4, and panelů for all other numbers.

tabs-close-warning-multiple = {$count ->
    [few] Chystáte se zavřít {$count} panely. Opravdu chcete pokračovat?
   *[other] Chystáte se zavřít {$count} panelů. Opravdu chcete pokračovat?
}

Fluent empowers translators to create grammatically correct translations and leverage the expressive power of their language. With Fluent, the Czech translation can now benefit from correct plural forms for all possible values of the $count variable.

In Czech, $count values of 2, 3, and 4 require a special plural form of the noun.

At the same time, no changes are required to the source code nor the source copy. In fact, the logic added by the Czech translator to the Czech translation doesn’t affect any other language. The same message in French is a simple sentence, similar to the English one:

tabs-close-warning-multiple =
    Vous êtes sur le point de fermer {$count} onglets.
    Voulez-vous vraiment continuer ?

The concept of asymmetric localization is the key innovation of Fluent, built upon 20 years of Mozilla’s history of successfully shipping localized software. Many key ideas in Fluent have also been inspired by XLIFF and ICU’s MessageFormat.

At first glance, Fluent looks similar to other localization solutions that allow translations to use plurals and grammatical genders. What sets Fluent apart is the holistic approach to localization. Fluent takes these ideas further by defining the syntax for the entire text file in which multiple translations can be stored, and by allowing messages to reference other messages.

Terms and References

A Fluent file may consist of many messages, each translated into the translator’s language. Messages can refer to other messages in the same file, or even to messages from other files. In the runtime, Fluent combines files into bundles, and references are resolved in the scope of the current bundle.
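
To show what that looks like at runtime, here is a small sketch using the JavaScript implementation's bundle API (package and method names follow @fluent/bundle and may differ slightly between versions):

import { FluentBundle, FluentResource } from "@fluent/bundle";

// A term and a message that references it, parsed from Fluent Syntax.
const resource = new FluentResource(`
-sync-brand-name = Firefox Account
sync-signedout-account-title = Connect with your { -sync-brand-name }
`);

// useIsolating: false skips the Unicode bidi isolation marks around placeables,
// which keeps the example output easy to read.
const bundle = new FluentBundle("en-US", { useIsolating: false });
const errors = bundle.addResource(resource);
if (errors.length > 0) {
  // Broken messages are reported individually; they never break the whole file.
  console.warn(errors);
}

const message = bundle.getMessage("sync-signedout-account-title");
if (message && message.value) {
  console.log(bundle.formatPattern(message.value, {}));
  // "Connect with your Firefox Account"
}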

Referencing messages is a powerful tool for ensuring consistency. Once defined, a translation can be reused in other translations. Fluent even has a special kind of message, called a term, which is best suited for reuse. Term identifiers always start with a dash.

-sync-brand-name = Firefox Account

Once defined, the -sync-brand-name term can be referenced from other messages, and it will always resolve to the same value. Terms help enforce style guidelines; they can also be swapped in and out to modify the branding in unofficial builds and on beta release channels.

sync-dialog-title = {-sync-brand-name}
sync-headline-title =
    {-sync-brand-name}: The best way to bring
    your data always with you
sync-signedout-account-title =
    Connect with your {-sync-brand-name}

Using terms verbatim in the middle of a sentence may cause trouble for inflected languages or for languages with different capitalization rules than English. Terms can define multiple facets of their value, suitable for use in different contexts. Consider the following definition of the -sync-brand-name term in Italian.

-sync-brand-name = {$capitalization ->
   *[uppercase] Account Firefox
    [lowercase] account Firefox
}

Thanks to the asymmetric nature of Fluent, the Italian translator is free to define two facets of the brand name. The default one (uppercase) is suitable for standalone appearances as well as for use at the beginning of sentences. The lowercase version can be explicitly requested by passing the capitalization parameter, when the brand name is used inside a larger sentence.

sync-dialog-title = {-sync-brand-name}
sync-headline-title =
    {-sync-brand-name}: il modo migliore
    per avere i tuoi dati sempre con te

# Explicitly request the lowercase variant of the brand name.
sync-signedout-account-title =
    Connetti il tuo {-sync-brand-name(capitalization: "lowercase")}

Defining multiple term variants is a versatile technique which allows the localization to cater to the grammatical needs of many languages. In the following example, the Polish translation can use declensions to construct a grammatically correct sentence in the sync-signedout-account-title message.

-sync-brand-name = {$case ->
   *[nominative] Konto Firefox
    [genitive] Konta Firefox
    [accusative] Kontem Firefox
}

sync-signedout-account-title =
    Zaloguj do {-sync-brand-name(case: "genitive")}

Fluent makes it possible to express linguistic complexities when necessary. At the same time, simple translations remain simple. Fluent doesn’t impose complexity unless it’s required to create a correct translation.

sync-signedout-caption = Take Your Web With You
sync-signedout-caption = Il tuo Web, sempre con te
sync-signedout-caption = Zabierz swoją sieć ze sobą
sync-signedout-caption = So haben Sie das Web überall dabei.

Fluent Syntax

Today, we’re announcing the first stable release of the Fluent Syntax. It’s a formal specification of the file format for storing translations, accompanied by beta releases of parser implementations in JavaScript, Python, and Rust.

You’ve already seen a taste of Fluent Syntax in the examples above. It has been designed with non-technical people in mind, and to make the task of reviewing and editing translations easy and error-proof. Error recovery is a strong focus: it’s impossible for a single broken translation to break the entire file, or even the translations adjacent to it. Comments may be used to communicate contextual information about the purpose of a message or a group of messages. Translations can span multiple lines, which helps when working with longer text or markup.

Fluent files can be opened and edited in any text editor, lowering the barrier to entry for developers and localizers alike. The file format is also well supported by Pontoon, Mozilla’s open-source translation management system.

Fluent Playground is an online sandbox for trying out Fluent live inside the browser.

You can learn more about the syntax by reading the Fluent Syntax Guide. The formal definition can be found in the Fluent Syntax specification. And if you just want to quickly see it in action, try the Fluent Playground—an online editor with shareable Fluent snippets.

Request for Feedback

Firefox has been the main driver behind the development of Fluent so far. Today, there are over 3000 Fluent messages in Firefox. The migration from legacy localization formats started early last year and is now in full swing. Fluent has proven to be a stable and flexible solution for building complex interfaces, such as the UI of Firefox Preferences. It is also used in a number of Mozilla websites, such as Firefox Send and Common Voice.

We think Fluent is a great choice for applications that value simplicity and a lean runtime, and at the same time require that elements of the interface depend on multiple variables. In particular, Fluent can help create natural-sounding translations in size-constrained UIs of mobile apps; in information-rich layouts of social media platforms; and in games, to communicate gameplay statistics and mechanics to the player.

We’d love to hear from projects and localization vendors outside of Mozilla. Because we’re developing Fluent with a future standard in mind, we invite you to try it out and let us know if it addresses your challenges. With your help, we can iterate and improve Fluent to address the needs of many platforms, use cases, and industries.

We’re open to your constructive feedback. Learn more about Fluent on the project’s website and please get in touch on Fluent’s Discourse.

The post Fluent 1.0: a localization system for natural-sounding translations appeared first on Mozilla Hacks - the Web developer blog.

The Mozilla BlogLatest Firefox for iOS Now Available

Today’s Firefox for iPhone and iPad offers enhancements that make it easier to get to what you want faster: new links within your library, simpler ways to manage your logins and passwords, and the ability to delete your history as recent as the last hour.

Leave no trace with your web history

With today’s release, we made it easier to clear your web history with one tap on the history page. In the menu or on the Firefox Home page, tap ‘Your Library’, then ‘History’, and ‘Clear Recent History’.

Because we all make wrong turns on the web from time to time, you can now choose to delete your history from the last hour, from that specific day, from the day before, or, as it has always been, your full browsing history.

Clear your web history with one tap

Shortcuts in your library

Everyone likes a shortcut that gets you quickly to the place you need to go. We created links in your library to get you to your bookmarks, history, reading list and downloads all from the Firefox Home screen.

Get you to your bookmarks, history, reading list and downloads all from the Firefox Home screen

Get to your logins and passwords faster

We simplified the place where you can find your logins and passwords in the menu. Go to the menu and tap ‘Logins & Passwords’. Also, from there you can enable Face ID or password authentication in Settings to keep your passwords even more secure. It’s located in the Face ID & Passcode option.

Find your logins and passwords easily

To get the latest version of Firefox for iOS, visit the App Store.

 

The post Latest Firefox for iOS Now Available appeared first on The Mozilla Blog.

Mozilla ServicesMaking Firefox Accounts more transparent in Firefox

Over the past few months the Firefox Accounts team has been working on making users more aware of Firefox Accounts and the benefits of having an account. This phase had an emphasis on our desktop users because we believe that would have the highest impact.

The problem

Based on user testing and feedback, most of our users did not clearly understand all of the advantages of having a Firefox Account. Most users failed to understand the value proposition of an account and why they should create one. Additionally, if they had an account, users were not always aware of their current signed-in status in the browser. This meant that users could be syncing their private data without being fully aware they were doing so.

The benefits of an account we wanted to highlight were:

  • Sync bookmarks, passwords, tabs, add-ons and history among all your Firefox Browsers
  • Send tabs between Firefox Browsers
  • Stores your data encrypted
  • Use one account to log in to supported Firefox services (Ex. Monitor, Lockbox, Send)

Previously, users that downloaded Firefox would only see the outlined benefits at a couple of touch points in the browser, specifically the points below:

First run page (New installation)

What’s New page (Firefox installation upgraded)

Browsing to preferences and clicking Firefox Account menu

If a user failed to create an account and log in during the first two points, it was very unlikely that they would organically discover Firefox Accounts at point three. Having only these touch points meant that users could not always set up a Firefox Account at their own pace.

Our solution

Our team decided that we needed to make this easier and solicited input from our Growth and Marketing teams, particularly on how to best showcase and highlight our features. From these discussions, we decided to experiment with putting a top level account menu item next to the Firefox application menu. Our hypothesis was that having a top level menu would drive engagement and reinforce the benefits of Firefox Accounts.

We believed that having an account menu item at this location would give users more visibility into their account status and allow them to quickly manage it.

While most browsers have some form of a top level account menu, we decided to experiment with the feature because Firefox users are more privacy focused and might not behave as other browser users.

New Firefox Account Toolbar menu

Our designs

The initial designs for this experiment had a toolbar menu left of the Firefox application menu. This menu could not be removed and was always fixed. After consulting with engineering teams, we determined that a fixed menu could more easily be achieved as a native browser feature. However, because of the development cycle of the Firefox browser (new releases every 6 weeks), we would have to wait 6 weeks to test our experiment as a native feature.

Initial toolbar design

If the requirement that the menu be fixed was lifted, then we could ship a Shield web extension experiment and get results much more quickly (2-3 weeks). Shield experiments are not tied to a specific Firefox release schedule and users can opt in and out of them. This means that Firefox can install shield experiments, run them and then uninstall them at the end of the experiment.

After discussions with product and engineering teams, we decided to develop and ship the Shield web extension experiment (with these known limitations) and, while that experiment was gathering data, start development of the native browser version for the originally specced design. There was a consensus between product and engineering teams that if the experiment was successful, it should eventually live as a native feature in the browser and not as a web extension.

Account toolbar Web Extension Version

Experiment results

Our experiment ran for 28 days and at the end of it, users were given a survey to fill out. There was a treatment (installed web extension) and control (did not install web extension) group. Below is a summary of the results

  • Treatment users authenticated into Firefox Account roughly 8% more
  • From survey
    • 45% of users liked the avatar
    • 45% of users were indifferent
    • 10% didn’t like it

The overall feedback for the top level menu was positive and most users were not bothered by it. Based on this we decided to update and iterate on the design for the final native browser version.

Iterating designs

While the overall feedback was positive for the new menu item, there were a few tweaks to the final design. Most notably

  • New menu is fully customizable (can be placed anywhere in Firefox browser vs fixed position)
    • After discussing with the team, we decided not to have the menu fixed, even though a fixed position was in the original designs, because that is how the experiment ran and it was more in line with the Firefox brand of being personalizable.
  • Ability to easily connect another device via SMS
  • View current sync tabs
  • Added ability to send current and multiple tabs to a device
  • Added ability to force a sync of data

Because we started working on the native feature while the experiment was running, we did not have to dedicate as many development resources to complete new requirements.

Final toolbar design

Check out the development bug here for more details.

Next steps

We are currently researching ways to best surface new account security features and services for Firefox Accounts using the new toolbar menu. Additionally, there are a handful of usability tweaks that we would like to add in future versions. Stay tuned!

The Firefox Account toolbar menu will be available starting in desktop Firefox 67. If you want to experiment early with it, you can download Firefox Beta 67 or Firefox Nightly.

Big thank you to all the teams and folks who helped to make this feature happen!

Open Policy & AdvocacyBrussels Mozilla Mornings: A policy blueprint for internet health

On 14 May, Mozilla will host the next installment of our Mozilla Mornings series – regular breakfast meetings where we bring together policy experts, policymakers and practitioners for insight and discussion on the latest EU digital policy developments.

This event will coincide with the launch of the 2019 Mozilla Foundation Internet Health Report. We’re bringing together an expert panel to discuss some of the report’s highlights, and their vision for how the next EU political mandate can enhance internet health in Europe.

Featuring

Prabhat Agarwal
Deputy Head of Unit, E-Commerce & Platforms
European Commission, DG CNECT

Claudine Vliegen
Telecoms & Digital Affairs attaché
Permanent Representation of the Netherlands to the EU

Mark Surman
Executive Director, Mozilla Foundation

Introductory remarks by Solana Larsen
Editor in Chief, Internet Health Report

Moderated by Jennifer Baker, EU tech journalist

Logistical information

14 May, 2019
08:00-10:00
Radisson Red, Rue d’Idalie 35, 1050 Brussels

hacks.mozilla.orgPyodide: Bringing the scientific Python stack to the browser

Pyodide is an experimental project from Mozilla to create a full Python data science stack that runs entirely in the browser.

Density of 311 calls in Oakland, California

The impetus for Pyodide came from working on another Mozilla project, Iodide, which we presented in an earlier post.  Iodide is a tool for data science experimentation and communication based on state-of-the-art web technologies.  Notably, it’s designed to perform data science computation within the browser rather than on a remote kernel.

Unfortunately, the “language we all have” in the browser, JavaScript, doesn’t have a mature suite of data science libraries, and it’s missing a number of features that are useful for numerical computing, such as operator overloading. We still think it’s worthwhile to work on changing that and moving the JavaScript data science ecosystem forward. In the meantime, we’re also taking a shortcut: we’re meeting data scientists where they are by bringing the popular and mature Python scientific stack to the browser.

It’s also been argued more generally that Python not running in the browser represents an existential threat to the language—with so much user interaction happening on the web or on mobile devices, it needs to work there or be left behind. Therefore, while Pyodide tries to meet the needs of Iodide first, it is engineered to be useful on its own as well.

Pyodide gives you a full, standard Python interpreter that runs entirely in the browser, with full access to the browser’s Web APIs. In the example above (50 MB download), the density of calls to the City of Oakland, California’s “311” local information service is plotted in 3D. The data loading and processing is performed in Python, and then it hands off to JavaScript and WebGL for the plotting.

For another quick example, here’s a simple doodling script that lets you draw in the browser window:


from js import document, iodide

canvas = iodide.output.element('canvas')
canvas.setAttribute('width', 450)
canvas.setAttribute('height', 300)
context = canvas.getContext("2d")
context.strokeStyle = "#df4b26"
context.lineJoin = "round"
context.lineWidth = 5

pen = False
lastPoint = (0, 0)

def onmousemove(e):
    global lastPoint

    if pen:
        newPoint = (e.offsetX, e.offsetY)
        context.beginPath()
        context.moveTo(lastPoint[0], lastPoint[1])
        context.lineTo(newPoint[0], newPoint[1])
        context.closePath()
        context.stroke()
        lastPoint = newPoint

def onmousedown(e):
    global pen, lastPoint
    pen = True
    lastPoint = (e.offsetX, e.offsetY)

def onmouseup(e):
    global pen
    pen = False

canvas.addEventListener('mousemove', onmousemove)
canvas.addEventListener('mousedown', onmousedown)
canvas.addEventListener('mouseup', onmouseup)

And this is what it looks like:

Interactive doodle example

The best way to learn more about what Pyodide can do is to just go and try it! There is a demo notebook (50MB download) that walks through the high-level features. The rest of this post will be more of a technical deep-dive into how it works.

Prior art

There were already a number of impressive projects bringing Python to the browser when we started Pyodide.  Unfortunately, none addressed our specific goal of supporting a full-featured mainstream data science stack, including NumPy, Pandas, Scipy, and Matplotlib.

Projects such as Transcrypt transpile (convert) Python to JavaScript. Because the transpilation step itself happens in Python, you either need to do all of the transpiling ahead of time, or communicate with a server to do that work. This doesn’t really meet our goal of letting the user write Python in the browser and run it without any outside help.

Projects like Brython and Skulpt are rewrites of the standard Python interpreter to JavaScript, so they can run strings of Python code directly in the browser. Unfortunately, since they are entirely new implementations of Python, and in JavaScript to boot, they aren’t compatible with Python extensions written in C, such as NumPy and Pandas. Therefore, there’s no data science tooling.

PyPyJs is a build of the alternative just-in-time compiling Python implementation, PyPy, to the browser, using emscripten. It has the potential to run Python code really quickly, for the same reasons that PyPy does. Unfortunately, it has the same performance issues with C extensions that PyPy does.

All of these approaches would have required us to rewrite the scientific computing tools to achieve adequate performance.  As someone who used to work a lot on Matplotlib, I know how many untold person-hours that would take: other projects have tried and stalled, and it’s certainly a lot more work than our scrappy upstart team could handle.  We therefore needed to build a tool that was based as closely as possible on the standard implementations of Python and the scientific stack that most data scientists already use.  

After a discussion with some of Mozilla’s WebAssembly wizards, we saw that the key to building this was emscripten and WebAssembly: technologies to port existing code written in C to the browser.  That led to the discovery of an existing but dormant build of Python for emscripten, cpython-emscripten, which was ultimately used as the basis for Pyodide.

emscripten and WebAssembly

There are many ways of describing what emscripten is, but most importantly for our purposes, it provides two things:

  1. A compiler from C/C++ to WebAssembly
  2. A compatibility layer that makes the browser feel like a native computing environment

WebAssembly is a new language that runs in modern web browsers, as a complement to JavaScript.  It’s a low-level, assembly-like language that runs with near-native performance and is intended as a compilation target for low-level languages like C and C++.  Notably, the most popular interpreter for Python, called CPython, is implemented in C, so this is exactly the kind of thing emscripten was created for.

Pyodide is put together by:

  • Downloading the source code of the mainstream Python interpreter (CPython), and the scientific computing packages (NumPy, etc.)
  • Applying a very small set of changes to make them work in the new environment
  • Compiling them to WebAssembly using emscripten’s compiler

If you were to just take this WebAssembly and load it in the browser, things would look very different to the Python interpreter than they do when running directly on top of your operating system. For example, web browsers don’t have a file system (a place to load and save files). Fortunately, emscripten provides a virtual file system, written in JavaScript, that the Python interpreter can use. By default, these virtual “files” reside in volatile memory in the browser tab, and they disappear when you navigate away from the page.  (emscripten also provides a way for the file system to store things in the browser’s persistent local storage, but Pyodide doesn’t use it.)
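From Python’s point of view, this just looks like ordinary file I/O. Here is a minimal sketch (the file name is only an example): plain open() calls under Pyodide read and write emscripten’s in-memory virtual file system rather than a real disk, so anything written this way disappears when you leave the page.

with open("notes.txt", "w") as f:
    f.write("stored in the emscripten virtual file system")

with open("notes.txt") as f:
    print(f.read())    # reads back from the in-memory file system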

By emulating the file system and other features of a standard computing environment, emscripten makes moving existing projects to the web browser possible with surprisingly few changes. (Some day, we may move to using WASI as the system emulation layer, but for now emscripten is the more mature and complete option).

Putting it all together, to load Pyodide in your browser, you need to download:

  • The compiled Python interpreter as WebAssembly.
  • The JavaScript code, provided by emscripten, that performs the system emulation.
  • A packaged file system containing all the files the Python interpreter will need, most notably the Python standard library.

These files can be quite large: Python itself is 21MB, NumPy is 7MB, and so on. Fortunately, these packages only have to be downloaded once, after which they are stored in the browser’s cache.

Using all of these pieces in tandem, the Python interpreter can access the files in its standard library, start up, and then start running the user’s code.

What works and doesn’t work

We run CPython’s unit tests as part of Pyodide’s continuous testing to get a handle on what features of Python do and don’t work.  Some things, like threading, don’t work now, but with the newly-available WebAssembly threads, we should be able to add support in the near future.  

Other features, like low-level networking sockets, are unlikely to ever work because of the browser’s security sandbox.  Sorry to break it to you, but your hopes of running a Python Minecraft server inside your web browser are probably still a long way off. Nevertheless, you can still fetch things over the network using the browser’s APIs (more details below).

How fast is it?

Running the Python interpreter inside a JavaScript virtual machine adds a performance penalty, but that penalty turns out to be surprisingly small — in our benchmarks, around 1x-12x slower than native on Firefox and 1x-16x slower on Chrome. Experience shows that this is very usable for interactive exploration.

Notably, code that runs a lot of inner loops in Python tends to be slower by a larger factor than code that relies on NumPy to perform its inner loops. Below are the results of running various pure Python and NumPy benchmarks in Firefox and Chrome compared to running natively on the same hardware.

Pyodide benchmark results: Firefox and Chrome vs. native
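To make that distinction concrete, here is a small, illustrative timing sketch you can paste into a Python console: the pure-Python loop runs every iteration in the interpreter, while the NumPy call performs the same sum in compiled code (WebAssembly, when running under Pyodide). The exact numbers will vary by machine and browser; the point is the gap between the two.

import time
import numpy as np

values = np.random.rand(100_000)

start = time.time()
total = 0.0
for v in values:              # pure-Python inner loop, runs in the interpreter
    total += v
print("pure Python:", time.time() - start, "seconds")

start = time.time()
total = values.sum()          # NumPy performs the inner loop in compiled code
print("NumPy:", time.time() - start, "seconds")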

Interaction between Python and JavaScript

If all Pyodide could do is run Python code and write to standard out, it would amount to a cool trick, but it wouldn’t be a practical tool for real work.  The real power comes from its ability to interact with browser APIs and other JavaScript libraries at a very fine level. WebAssembly has been designed to easily interact with the JavaScript running in the browser.  Since we’ve compiled the Python interpreter to WebAssembly, it too has deep integration with the JavaScript side.

Pyodide implicitly converts many of the built-in data types between Python and JavaScript.  Some of these conversions are straightforward and obvious, but as always, it’s the corner cases that are interesting.

Conversion of data types between Python and JavaScript

Python treats dicts and object instances as two distinct types. dicts (dictionaries) are just mappings of keys to values.  On the other hand, objects generally have methods that “do something” to those objects. In JavaScript, these two concepts are conflated into a single type called Object.  (Yes, I’ve oversimplified here to make a point.)

Without really understanding the developer’s intention for the JavaScript Object, it’s impossible to efficiently guess whether it should be converted to a Python dict or object.  Therefore, we have to use a proxy and let “duck typing” resolve the situation.

Proxies are wrappers around a variable in the other language.  Rather than simply reading the variable in JavaScript and rewriting it in terms of Python constructs, as is done for the basic types, the proxy holds on to the original JavaScript variable and calls methods on it “on demand”.  This means that any JavaScript variable, no matter how custom, is fully accessible from Python. Proxies work in the other direction, too.

Duck typing is the principle that rather than asking a variable “are you a duck?” you ask it “do you walk like a duck?” and “do you quack like a duck?” and infer from that that it’s probably a duck, or at least does duck-like things.  This allows Pyodide to defer the decision on how to convert the JavaScript Object: it wraps it in a proxy and lets the Python code using it decide how to handle it. Of course, this doesn’t always work; the duck may actually be a rabbit. Thus, Pyodide also provides ways to explicitly handle these conversions.
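As a rough sketch of what that deferral looks like from the Python side (the config global below is hypothetical, and the exact fallback behavior of the proxy is an assumption rather than documented API), the Python code simply tries the access pattern it needs and handles the case where the value turns out not to quack:

from js import window

cfg = window.config               # hypothetical JS object; cfg is a proxy, not a copy

# Duck typing: try the attribute access we need rather than asking what cfg "is".
try:
    timeout = cfg.timeout         # works if the proxied object has a timeout property
except AttributeError:
    timeout = 30                  # assumed fallback when the "duck" can't quack

print(timeout)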

It’s this tight level of integration that allows a user to do their data processing in Python, and then send it to JavaScript for visualization. For example, in our Hipster Band Finder demo, we show loading and analyzing a data set in Python’s Pandas, and then sending it to JavaScript’s Plotly for visualization.

Accessing Web APIs and the DOM

Proxies also turn out to be the key to accessing the Web APIs, or the set of functions the browser provides that make it do things.  For example, a large part of the Web API is on the document object. You can get that from Python by doing:

from js import document

This imports the document object in JavaScript over to the Python side as a proxy.  You can start calling methods on it from Python:

document.getElementById("myElement")

All of this happens through proxies that look up what the document object can do on-the-fly.  Pyodide doesn’t need to include a comprehensive list of all of the Web APIs the browser has.
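For instance, here is a minimal sketch that builds a small piece of the page entirely from Python, using nothing beyond standard DOM calls made through the document proxy:

from js import document

# Each attribute and method here is looked up on the JavaScript document object
# on demand, through the proxy.
heading = document.createElement("h1")
heading.textContent = "Hello from Python"
document.body.appendChild(heading)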

Of course, using the Web API directly doesn’t always feel like the most Pythonic or user-friendly way to do things.  It would be great to see the creation of a user-friendly Python wrapper for the Web API, much like how jQuery and other libraries have made the Web API easier to use from JavaScript.  Let us know if you’re interested in working on such a thing!

Multidimensional Arrays

There are important data types that are specific to data science, and Pyodide has special support for these as well.  Multidimensional arrays are collections of (usually numeric) values, all of the same type. They tend to be quite large, and knowing that every element is the same type has real performance advantages over Python’s lists or JavaScript’s Arrays that can hold elements of any type.

In Python, NumPy arrays are the most common implementation of multidimensional arrays. JavaScript has TypedArrays, which contain only a single numeric type, but they are single dimensional, so the multidimensional indexing needs to be built on top.

Since in practice these arrays can get quite large, we don’t want to copy them between language runtimes.  Not only would that take a long time, but having two copies in memory simultaneously would tax the limited memory the browser has available.

Fortunately, we can share this data without copying.  Multidimensional arrays are usually implemented with a small amount of metadata that describes the type of the values, the shape of the array and the memory layout. The data itself is referenced from that metadata by a pointer to another place in memory. Conveniently, this memory lives in a special area called the “WebAssembly heap,” which is accessible from both JavaScript and Python.  We can simply copy the metadata (which is quite small) back and forth between the languages, keeping the pointer to the data referring to the WebAssembly heap.

Sharing memory for arrays between Python and Javascript
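A plain NumPy sketch (ordinary NumPy behavior, nothing Pyodide-specific) makes the metadata-plus-pointer split concrete:

import numpy as np

a = np.arange(12, dtype=np.float32).reshape(3, 4)

# The small metadata described above: cheap to copy between languages.
print(a.dtype, a.shape, a.strides)

# A sliced view shares the same underlying buffer; no data is copied.
b = a[::2]
print(b.base is a)                    # True: b refers back into a's memory
print(a.__array_interface__["data"])  # (buffer address, read-only flag)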

This idea is currently implemented for single-dimensional arrays, with a suboptimal workaround for higher-dimensional arrays.  We need improvements to the JavaScript side to have a useful object to work with there. To date there is no one obvious choice for JavaScript multidimensional arrays. Promising projects such as Apache Arrow and xnd’s ndarray are working exactly in this problem space, and aim to make the passing of in-memory structured data between language runtimes easier.  Investigations are ongoing to build off of these projects to make this sort of data conversion more powerful.

Real-time interactive visualization

One of the advantages of doing the data science computation in the browser rather than in a remote kernel, as Jupyter does, is that interactive visualizations don’t have to communicate over a network to reprocess and redisplay their data.  This greatly reduces the latency — the round trip time it takes from the time the user moves their mouse to the time an updated plot is displayed to the screen.

Making that work requires all of the technical pieces described above to work in tandem.  Let’s look at this interactive example that shows how log-normal distributions work, using Matplotlib. First, the random data is generated in Python using NumPy. Next, Matplotlib takes that data and draws it using its built-in software renderer. It sends the pixels back to the JavaScript side using Pyodide’s support for zero-copy array sharing, where they are finally rendered into an HTML canvas.  The browser then handles getting those pixels to the screen. Mouse and keyboard events used to support interactivity are handled by callbacks that call from the web browser back into Python.

Interacting with distributions in matplotlib
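For a sense of the Python side of that pipeline, here is a plain NumPy-plus-Matplotlib sketch of the underlying plot. The interactive demo adds sliders and mouse callbacks on top of this; those details are specific to the demo and omitted here.

import numpy as np
import matplotlib.pyplot as plt

# Generate random log-normal samples and draw a histogram with Matplotlib's
# built-in software renderer; under Pyodide the resulting pixels end up in an
# HTML canvas.
mu, sigma = 0.0, 0.5
samples = np.random.lognormal(mean=mu, sigma=sigma, size=10_000)

fig, ax = plt.subplots()
ax.hist(samples, bins=100, density=True)
ax.set_title(f"Log-normal samples (mu={mu}, sigma={sigma})")
plt.show()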

Packaging

The Python scientific stack is not a monolith—it’s actually a collection of loosely-affiliated packages that work together to create a productive environment.  Among the most popular are NumPy (for numerical arrays and basic computation), Scipy (for more sophisticated general-purpose computation, such as linear algebra), Matplotlib (for visualization) and Pandas (for tabular data or “data frames”).  You can see the full and constantly updated list of the packages that Pyodide builds for the browser here.

Some of these packages were quite straightforward to bring into Pyodide. Generally, anything written in pure Python without any extensions in compiled languages is pretty easy. In the moderately difficult category are projects like Matplotlib, which required special code to display plots in an HTML canvas. On the extremely difficult end of the spectrum, Scipy has been and remains a considerable challenge.  

Roman Yurchak worked on making the large amount of legacy Fortran in Scipy compile to WebAssembly. Kirill Smelkov improved emscripten so shared objects can be reused by other shared objects, bringing Scipy to a more manageable size. (The work of these outside contributors was supported by Nexedi.)  If you’re struggling to port a package to Pyodide, please reach out to us on GitHub: there’s a good chance we may have run into your problem before.

Since we can’t predict which of these packages the user will ultimately need to do their work, they are downloaded to the browser individually, on demand.  For example, when you import NumPy:

import numpy as np

Pyodide fetches the NumPy library (and all of its dependencies) and loads them into the browser at that time.  Again, these files only need to be downloaded once, and are stored in the browser’s cache from then on.

Adding new packages to Pyodide is currently a semi-manual process that involves adding files to the Pyodide build. We’d prefer, long term, to take a distributed approach to this so anyone could contribute packages to the ecosystem without going through a single project.  The best-in-class example of this is conda-forge. It would be great to extend their tools to support WebAssembly as a platform target, rather than redoing a large amount of effort.

Additionally, Pyodide will soon have support to load packages directly from PyPI (the main community package repository for Python), if that package is pure Python and distributes its package in the wheel format.  This gives Pyodide access to around 59,000 packages, as of today.

Beyond Python

The early success of Pyodide has already inspired developers from other language communities, including Julia, R, OCaml, and Lua, to make their language runtimes work well in the browser and integrate with web-first tools like Iodide.  We’ve defined a set of levels to encourage implementors to create tighter integrations with the JavaScript runtime:

  • Level 1: Just string output, so it’s useful as a basic console REPL (read-eval-print-loop).
  • Level 2: Converts basic data types (numbers, strings, arrays and objects) to and from JavaScript.
  • Level 3: Sharing of class instances (objects with methods) between the guest language and JavaScript.  This allows for Web API access.
  • Level 4: Sharing of data-science-related types (n-dimensional arrays and data frames) between the guest language and JavaScript.

We definitely want to encourage this brave new world, and are excited about the possibilities of having even more languages interoperating together.  Let us know what you’re working on!

Conclusion

If you haven’t already tried Pyodide in action, go try it now! (50MB download)

It’s been really gratifying to see all of the cool things that have been created with Pyodide in the short time since its public launch.  However, there’s still lots to do to turn this experimental proof-of-concept into a professional tool for everyday data science work. If you’re interested in helping us build that future, come find us on gitter, github and our mailing list.


Huge thanks to Brendan Colloran, Hamilton Ulmer and William Lachance, for their great work on Iodide and for reviewing this article, and Thomas Caswell for additional review.

The post Pyodide: Bringing the scientific Python stack to the browser appeared first on Mozilla Hacks - the Web developer blog.

Mozilla VR BlogAnnouncing the Hubs Discord Bot

Announcing the Hubs Discord Bot

We’re excited to announce an official Hubs integration with Discord, a platform that provides text and voice chat for communities. In today's digital world, the ways we stay connected with our friends, family, and co-workers are evolving. Our established social networks span different platforms, and we believe that shared virtual reality should build on those relationships and enhance the way we communicate with the people we care about. Being co-present as avatars in a shared 3D space is a natural progression for the tools we use today, and we’re building on that idea with Hubs to allow you to create private spaces where your conversations, content, and data are protected.

In recent years, Discord has grown in popularity for communities organized around games and technology, and is the platform we use internally on the Hubs development team for product development discussions. Using Discord as a persistent platform that is open to the public gives us the ability to be open about our ongoing work and initiatives on the Hubs team and integrate the community’s feedback into our product planning and development. If you’re a member of the Discord server for Hubs, you may have already seen the bot in action during our internal testing this month!

The Hubs Discord integration allows members to use their Discord identity to connect to rooms and connects a Discord channel with a specific Hubs room in order to capture the text chat, photos taken, and media shared between users in each space. With the ability to add web content to rooms in Hubs, users who are co-present together are able to collaborate and entertain one another, watch videos, chat, share their screen / webcam feed, and pull in 3D objects from Sketchfab and Google Poly. Users will be able to chat in the linked Discord channel to send messages, see the media added to the connected Hubs room, and easily get updates on who has joined or left at any given time.


We believe that embodied avatar presence will empower communities to be more creative, communicative, and collaborative - and that all of that should be doable without replacing or excluding your existing networks. Your rooms belong to you and the people you choose to share them with, and we feel strongly that everyone should be able to meet in secure spaces where their privacy is protected.

In the coming months, we plan to introduce additional platform integrations and new tools related to room management, authentication, and identity. While you will be able to continue to use Hubs without a persistent identity or login, having an account for the Hubs platform or using your existing identity on a platform such as Discord grants you additional permissions and abilities for the rooms you create. We plan to work closely with communities who are interested in joining the closed beta to help us understand how embodied communication works for them in order to focus our product planning on what meets their needs.

You can see the Hubs Discord integration in action live on the public Hubs Community Discord server for our weekly meetings. If you run a Discord server and are interested in participating in the closed beta for the Hubs Discord bot, you can learn more on the Hubs website.

Web Application SecurityMozilla’s Common CA Database (CCADB) promotes Transparency and Collaboration

The Common CA Database (CCADB) is helping us protect individuals’ security and privacy on the internet and deliver on our commitment to use transparent community-based processes to promote participation, accountability and trust. It is a repository of information about Certificate Authorities (CAs) and their root and subordinate certificates that are used in the web PKI, the publicly-trusted system which underpins secure connections on the web. The Common CA Database (CCADB) paves the way for more efficient and cost-effective management of root stores and helps make the internet safer for everyone. For example, the CCADB automatically detects and alerts root store operators when a root CA has outdated audit statements or a gap between audit periods. This is important, because audit statements provide assurance that a CA is following required procedures so that they do not issue fraudulent certificates.

Through the CCADB we are extending the checks and balances on root CAs to subordinate CAs to provide similar assurance that the subordinate CAs are not issuing fraudulent certificates. Root CAs, who are directly included in Mozilla’s program, can have subordinate CAs who also issue SSL/TLS certificates that are trusted by Firefox. There are currently about 150 root certificates in Mozilla’s root store, which leads to over 3,100 subordinate CA certificates that are trusted by Firefox. In our efforts to ensure that all subordinate CAs follow the rules, we require that they be disclosed in the CCADB along with their audit statements.

Additionally, the CCADB is making it possible for Mozilla to implement Intermediate CA Preloading in Firefox, with the goal of improving performance and privacy. Intermediate CA Preloading is a new way to handle websites that are not properly configured to serve up the intermediate certificate along with its SSL/TLS certificate. When other browsers encounter such websites, they use a mechanism to connect to the CA and download the certificate just-in-time. Preloading the intermediate certificate data (aka subordinate CA data) from the CCADB avoids the just-in-time network fetch, which delays the connection. Avoiding the network fetch improves privacy, because it prevents disclosing user browsing patterns to the CA that issued the certificate for the misconfigured website.

Mozilla created and runs the CCADB, which is also used and contributed to by Microsoft, Google, Cisco, and Apple. Even though the common CA data is shared, each root store operator has a customized experience in the CCADB, allowing each root store operator to see the data sets that are important for managing root certificates included in their program.

The CCADB:

  • Makes root stores more transparent through public-facing reports, encouraging community involvement to help ensure that CAs and subordinate CAs are correctly issuing certificates.
    • For example, the crt.sh website combines information from the CCADB and Certificate Transparency (CT) logs to identify problematic certificates.
  • Adds automation to improve the level and accuracy of management and rule enforcement; for example, the CCADB automatically detects and alerts root store operators when a root CA has outdated audit statements or a gap between audit periods.
  • Enables CAs to provide their annual updates in one centralized system, rather than communicating those updates to each root store separately; and in the future will enable CAs to apply to multiple root stores with a single application process.

Maintaining a root store containing only credible CAs is vital to the security of our products and the web in general. The major root store operators are using the CCADB to promote efficiency in maintaining root stores, to improve internet security by raising the quality and transparency of CA and subordinate CA data, and to make the internet safer by enforcing regular and contiguous audits that provide assurances that root and subordinate CAs do not issue fraudulent certificates. As such, the CCADB is enabling us to help ensure individuals’ security and privacy on the internet and deliver on our commitment to use transparent community-based processes to promote participation, accountability and trust.

The post Mozilla’s Common CA Database (CCADB) promotes Transparency and Collaboration appeared first on Mozilla Security Blog.

QMOFirefox 67 Beta 10 Testday Results

Hello Mozillians!

As you may already know, last Friday, April 12th, we held a new Testday event for Firefox 67 Beta 10.

Thank you all for helping us make Mozilla a better place: Rok Žerdin, noelonassis, gaby2300, Kamila kamciatek.

From Mozilla Bangladesh Community: Sayed Ibn Masud, Md.Rahimul Islam, Shah Yashfique Bhuian, Maruf Rahman, Niyaz Bin Hashem, Md. Almas Hossain, Ummay hany eiti, Kazi Ashraf Hossain.

Results:

– 2 bugs verified: 1522078, 1522237.

– 1 bug triaged: 1544069

– several test cases executed for: Graphics compatibility & support and Session Restore.

Thanks for yet another awesome testday, we appreciate your contribution 🙂

We hope to see you all in our next events. Keep an eye on QMO; we will make announcements as soon as something shows up!

The Mozilla BlogThe Bug in Apple’s Latest Marketing Campaign

Apple’s latest marketing campaign — “Privacy. That’s iPhone” — made us raise our eyebrows.

It’s true that Apple has an impressive track record of protecting users’ privacy, from end-to-end encryption on iMessage to anti-tracking in Safari.

But a key feature in iPhones has us worried, and makes their latest slogan ring a bit hollow.

Each iPhone that Apple sells comes with a unique ID (called an “identifier for advertisers” or IDFA), which lets advertisers track the actions users take when they use apps. It’s like a salesperson following you from store to store while you shop and recording each thing you look at. Not very private at all.

The good news: You can turn this feature off. The bad news: Most people don’t know that feature even exists, let alone that they should turn it off. And we think that they shouldn’t have to.

That’s why we’re asking Apple to change the unique IDs for each iPhone every month. You would still get relevant ads — but it would be harder for companies to build a profile about you over time.

If you agree with us, will you add your name to Mozilla’s petition?

If Apple makes this change, it won’t just improve the privacy of iPhones — it will send Silicon Valley the message that users want companies to safeguard their privacy by default.

At Mozilla, we’re always fighting for technology that puts users’ privacy first: We publish our annual *Privacy Not Included shopping guide. We urge major retailers not to stock insecure connected devices. And our Mozilla Fellows highlight the consequences of technology that makes publicity, and not privacy, the default.

The post The Bug in Apple’s Latest Marketing Campaign appeared first on The Mozilla Blog.

Firefox UXPaying Down Enterprise Content Debt: Part 3

Paying Down Enterprise Content Debt

Part 3: Implementation & governance

Summary: This series outlines the process to diagnose, treat, and manage enterprise content debt, using Firefox add-ons as a case study. Part 1 frames the Firefox add-ons space in terms of enterprise content debt. Part 2 lists the eight steps to develop a new content model. This final piece describes the deliverables we created to support that new model.

<figcaption>@neonbrand via Unsplash</figcaption>

Content guidelines for the “author experience”

“Just as basic UX principles tell us to help users achieve tasks without frustration or confusion, author experience design focuses on the tasks and goals that CMS users need to meet — and seeks to make it efficient, intuitive, and even pleasurable for them to do so.” — Sara Wachter-Boettcher, Content Everywhere

A content model is a useful tool for organizations to structure, future-proof, and clean up their content. But that content model is only brought to life when content authors populate the fields you have designed with actual content. And the quality of that content is dependent in part on how the content system supports those authors in their endeavor.

We had discovered through user research that developers create extensions for a great variety of reasons — including as a side hobby or for personal enjoyment. They may not have the time, incentive, or expertise to produce high-quality, discoverable content to market their extensions, and they shouldn’t be expected to. But, we can make it easier for them to do so with more actionable guidelines, tools, and governance.

An initial review of the content submission flow revealed that the guidelines for developers needed to evolve. Specifically, we needed to give developers clearer requirements, explain why each content field mattered and where that content showed up, and provide them with examples. On top of that, we needed to give them writing exercises and tips when they hit a dead end.

So, to support our developer authors in creating our ideal content state, I drafted detailed content guidelines that walked extension developers through the process of creating each content element.

<figcaption>Draft content guidelines for extension elements, mocked up in a rough Google Site for purposes of feedback and testing.</figcaption>

Once a draft was created, we tested it with Mozilla extension developer, Dietrich Ayala. Dietrich appreciated the new guidelines, and more importantly, they helped him create better content.

<figcaption>Sample of previous Product Page content</figcaption>
<figcaption>Sample of revised Product Page content</figcaption>
<figcaption>Sample of revised Product Page content: New screenshots to illustrate how extension works</figcaption>

We also conducted interviews with a cohort of developers in a related project to redesign the extensions submission flow (i.e., the place in which developers create or upload their content). As part of that process, we solicited feedback from 13 developers about the new guidelines:

  • Developers found the guidelines to be helpful and motivating for improving the marketing and SEO of their extensions, thereby better engaging users.
  • The clear “do this/not that” section was very popular.
  • They had some suggestions for improvement, which were incorporated into the next version.

Excerpts from developer interviews:

“If all documentation was like this, the world would be a better place…It feels very considered. The examples of what to do, what not do is great. This extra mile stuff is, frankly, something I don’t see on developer docs ever: not only are we going to explain what we want in human language, but we are going to hold your hand and give you resources…It’s [usually] pulling teeth to get this kind of info [for example, icon sizes] and it’s right here. I don’t have to track down blogs or inscrutable documentation.”
“…be more upfront about the stuff that’s possible to change and the stuff that would have consequences if you change it.”

Finally, to deliver the guidelines in a useful, usable format, we partnered with the design agency, Turtle, to build out a website.

<figcaption>Draft content guidelines page</figcaption>
<figcaption>Draft writing exercise on the subtitle content guidelines page</figcaption>

Bringing the model to life

Now that the guidelines were complete, it was time to develop the communication materials to accompany the launch.

To bring the content to life, and put a human face on it, we created a video featuring our former Director of Firefox UX, Madhava Enros, and an extension developer. The video conveys the importance of creating good content and design for product pages, as well as how to do it.

<figcaption>Product Page Content & Design Video</figcaption>

Preliminary results

Our content model had a tall order to fill, as detailed in our objectives and measurements template.

So, how did we do against those objectives? While the content guidelines had not yet been published at the time of this blog post, here’s a snapshot of preliminary results:

<figcaption>Snapshot of preliminary results</figcaption>

And, for illustration, some examples in action:

1. Improved Social Share Quality

<figcaption>Previous Facebook Share example</figcaption>
<figcaption>New Facebook Share example</figcaption>

2. New Google search snippet model reflective of SEO best practices and what we’d learned from user research — such as the importance of social proof via extension ratings.

<figcaption>Previous Google “search snippet”</figcaption>
<figcaption>New Google “search snippet”</figcaption>

3. Promising, but early, SEO findings:

  • Organic search traffic decline slowed down by 2x compared to summer 2018.
  • Impressions in search for the AMO site are much higher (30%) than before. Shows potential of the domain.
  • Overall rankings are stable, and top 10 rankings are a bit up.
  • Extension/theme installs via organic search are stable after previous year declines.
  • Conversion rate to installs via organic search is growing, from 42% to 55%.

Conclusion & resources

There isn’t a quick or easy fix to enterprise content debt, but investing the time and energy to think about your content as structured data, to cultivate a content model based upon your business goals, and to develop the guidance and guardrails to realize that model with your content authors pays dividends in the long haul.

Hopefully this series has provided a set of steps and tools to figure out your individual payment plan. For more on this topic:

And, if you need more help, and you’re fortunate enough to have a content strategist on your team, you can always ask that friendly content nerd for some content credit counseling.

Thank you to Michelle Heubusch, Jennifer Davidson, Emanuela Damiani, Philip Walmsley, Kev Needham, Mike Conca, Amy Tsay, Jorge Villalobos, Stuart Colville, Caitlin Neiman, Andreas Wagner, Raphael Raue, and Peiying Mo for their partnership in this work.

Paying Down Enterprise Content Debt: Part 3 was originally published in Firefox User Experience on Medium, where people are continuing the conversation by highlighting and responding to this story.

Firefox UXPaying Down Enterprise Content Debt: Part 2

Paying Down Enterprise Content Debt

Part 2: Developing Solutions

Summary: This series outlines the process to diagnose, treat, and manage enterprise content debt, using Firefox add-ons as a case study. In Part 1 , I framed the Firefox add-ons space in terms of an enterprise content debt problem. In this piece, I walk through the eight steps we took to develop solutions, culminating in a new content model. See Part 3 for the deliverables we created to support that new model.

<figcaption>Source: Sam Truong Dan</figcaption>
  • Step 1: Stakeholder interviews
  • Step 2: Documenting content elements
  • Step 3: Data analysis — content quality
  • Step 4: Domain expert review
  • Step 5: Competitor compare
  • Step 6: User research — What content matters?
  • Step 7: Creating a content model
  • Step 8: Refine and align

Step 1: Stakeholder interviews

To determine a payment plan for our content debt, we needed to first get a better understanding of the product landscape. Over the course of a couple of weeks, the team’s UX researcher and I conducted stakeholder interviews:

Who: Subject matter experts, decision-makers, and collaborators. May include product, engineering, design, and other content folks.

What: Schedule an hour with each participant. Develop a spreadsheet with questions that get at the heart of what you are trying to understand. Ask the same set of core questions to establish trends and patterns, as well as a smaller set specific to each interviewee’s domain expertise.

<figcaption>Sample question template, including content-specific inquiries below</figcaption>

After completing the interviews, we summarized the findings and walked the team through them. This helped build alignment with our stakeholders around the issues and prime them for the potential UX and content solutions ahead.

Stakeholder interviews also allowed us to clarify our goals. To focus our work and make ourselves accountable to it, we broke down our overarching goal — improve Firefox users’ ability to discover, trust, install, and enjoy extensions — into detailed objectives and measurements using an objectives and measurements template. Our main objectives fell into three buckets: improved user experience, improved developer experience, and improved content structure. Once the work was done, we could measure our progress against those objectives using the measurements we identified.

Step 2: Documenting content elements

Product environment surveyed, we dug into the content that shaped that landscape.

Extensions are recommended and accessed not only through AMO, but in a variety of places, including the Firefox browser itself, in contextual recommendations, and in external content. To improve content across this large ecosystem, we needed to start small…at the cellular content level. We needed to assess, evolve, and improve our core content elements.

By “content elements,” I mean all of the types of content or data that are attached to an extension — either by developers in the extension submission process, by Mozilla on the back-end, or by users. So, very specifically, these are things like description, categories, tags, ratings, etc. For example, the following image contains three content elements: icon, extension name, summary:

Using Excel, I documented existing content elements. I also documented which elements showed up where in the ecosystem (i.e., “content touchpoints”):

<figcaption>Excerpt of content elements documentation</figcaption>
<figcaption>Excerpt of content elements documentation: content touchpoints</figcaption>

The content documentation Excel served as the foundational document for all the work that followed. As the team continued to acquire information to shape future solutions, we documented those learnings in the Excel, evolving the recommendations as we went.

Step 3: Data analysis — content quality

Current content elements identified, we could now assess the state of said content. To complete this analysis, we used a database query (created by our product manager) that populated all of the content for each content element for every extension and theme. Phew.

We developed a list of queries about the content…

<figcaption>Sample selection of data questions</figcaption>

…and then answered those queries for each content element field.

<figcaption>Sample data analysis for “Extension Name”</figcaption>
  • For quantitative questions (like minimum/maximum content length per element), we used Excel formulas.
  • For questions of content quality, we analyzed a sub-section of the data. For example, what’s the content quality state of extension names for the top 100 extensions? What patterns, good and bad, do we see?

Step 4: Domain expert review

I also needed input from domain experts on the content elements, including content reviewers, design, and localization. Through this process, we discovered pain points, areas of opportunity, and considerations for the new requirements.

For example, we had been contemplating a 10-character minimum for our long description field. Conversations with localization expert, Peiying Mo, revealed that this would not work well for non-English content authors…while 10 characters is a reasonable expectation in English, it’s asking for quite a bit of content when we are talking about 10 Chinese characters.

Because improving search engine optimization (SEO) for add-ons was a priority, review by SEO specialist, Raphael Raue, was especially important. Based on user research and analytics, we knew users often find extensions, and make their way to the add-ons site, through external search engines. Thus, their first impression of an add-on, and the basis upon which they may assess their interest to learn more, is an extension title and description in Google search results (also called a “search snippet”). So, our new content model needed to be optimized for these search listings.

<figcaption>Sample domain expert review comments for “Extension Name”</figcaption>

Step 5: Competitor compare

A picture of the internal content issues and needs was starting to take shape. Now we needed to look externally to understand how our content compared to competitors and other successful commercial sites.

Philip Walmsley, UX designer, identified those sites and audited their content elements, identifying surplus, gaps, and differences from Firefox. We discussed the findings and determined what to add, trim, or tweak in Firefox’s content element offerings depending on value to the user.

<figcaption>Excerpt of competitive analysis</figcaption>

Step 6: User research — what content matters?

A fair amount of user research about add-ons had already been done before we embarked on this journey, and Jennifer Davidson, our UX researcher, led additional, targeted research over the course of the year. That research informed the content element issues and needs. In particular, a large site survey, add-ons team think-aloud sessions, and in-person user interviews identified how users discover and decide whether or not to get an extension.

Regarding extension product pages in particular, we asked:

  • Do participants understand and trust the content on the product pages?
  • What type of information is important when deciding whether or not to get an extension?
  • Is there content missing that would aid in their discovery and comprehension?

Through this work, we deepened our understanding of the relative importance of different content elements (for example: extension name, summary, long description were all important), what elements were critical to decision making (such as social proof via ratings), and where we had content gaps (for example, desire for learning-by-video).

Step 7: Creating a content model

“…content modeling gives you systemic knowledge; it allows you to see what types of content you have, which elements they include, and how they can operate in a standardized way — so you can work with architecture, rather than designing each one individually.” — Sara Wachter-Boettcher, Content Everywhere, 31

Learnings from steps 1–6 informed the next, very important content phase: identifying a new content model for an add-ons product page.

A content model defines all of the content elements in an experience. It details the requirements and restrictions for each element, as well as the connections between elements. Content models take diverse shapes and forms depending on project needs, but the basic steps often include documentation of the content elements you have (step 2 above), analysis of those elements (steps 3–6 above), and then charting new requirements based on what you’ve learned and what the organization and users need.

Creating a content model takes quite a bit of information and input upfront, but it pays dividends in the long-term, especially when it comes to addressing and preventing content debt. The add-ons ecosystem did not have a detailed, updated content model and because of that, developers didn’t have the guardrails they needed to create better content, the design team didn’t have the content types it needed to create scalable, user-focused content, and users were faced with varying content quality.

A content model can feel prescriptive and painfully detailed, but each content element within it should provide the flexibility and guidance for content creators to produce content that meets their goals and the goals of the system.

<figcaption>Sample content model for “Extension Name”</figcaption>

Step 8: Refine and align

Now that we had a draft content model — in other words, a list of recommended requirements for each content element — we needed review and input from our key stakeholders.

This included conversations with add-ons UX team members, as well as partners from the initial stakeholder interviews (like product, engineering, etc.). It was especially important to talk through the content model elements with designers Philip and Emanuela, and to pressure test whether each new element’s requirements and file type served design needs across the ecosystem. One of the ways we did this was by applying the new content elements to future designs, with both best and worst-case content scenarios.


<figcaption>Re-designed product page with new content elements (note — not a final design, just a study). Design lead: Philip Walmsley</figcaption>
<figcaption>Draft “universal extension card” with new content elements (note — not a final design, just a study). This card aims to increase user trust and learnability when user is presented with an extension offering anywhere in the ecosystem. Design lead: Emanuela Damiani</figcaption>

Based on this review period and usability testing on usertesting.com, we made adjustments to our content model.

Okay, content model done. What’s next?

Now that we had our new content model, we needed to make it a reality for the extension developers creating product pages.

In Part 3, I walk through the creation and testing of deliverables, including content guidelines and communication materials.

Thank you to Michelle Heubusch, Jennifer Davidson, Emanuela Damiani, Philip Walmsley, Kev Needham, Mike Conca, Amy Tsay, Jorge Villalobos, Stuart Colville, Caitlin Neiman, Andreas Wagner, Raphael Raue, and Peiying Mo for their partnership in this work.

Paying Down Enterprise Content Debt: Part 2 was originally published in Firefox User Experience on Medium, where people are continuing the conversation by highlighting and responding to this story.

Firefox UXPaying Down Enterprise Content Debt: Part 1

Paying Down Enterprise Content Debt

Part 1: Framing the problem

Summary: This series outlines the process to diagnose, treat, and manage enterprise content debt, using Firefox add-ons as a case study. This first part frames the enterprise content debt issue. Part 2 lists the eight steps to develop a new content model. Part 3 describes the deliverables we created to support that new model.

<figcaption>QuinceMedia via Wikimedia</figcaption>

Background

If you want to block annoying ads or populate your new tab with sassy cats, you can do it…with browser extensions and themes. Users can download thousands of these “add-ons” from Firefox’s host site, addons.mozilla.org (“AMO”), to customize their browsing experience with new functionality or a dash of whimsy.

<figcaption>The Tabby Cat extension takes over your new tab with adorable cats</figcaption>

Add-ons can be a useful and delightful way for people to improve their web experience — if they can discover, understand, trust, and appreciate their offerings. Over the last year, the add-ons UX pod at Firefox, in partnership with the larger add-ons team, worked on ways to do just that.

One of the ways we did this was by looking at these interconnected issues through the lens of content structure and quality. In this series, I’ll walk you through the steps we took to develop a new content model for the add-ons ecosystem.

Understanding the problem

Add-ons are largely created by third-party developers, who also create the content that describes the add-ons for users. That content includes things like extension name, icon, summary, long description, screenshots, etcetera:

<figcaption>Sample developer-provided content for the Momentum extension</figcaption>

With 10,000+ extensions and 400,000+ themes, we are talking about a lot of content. And while the add-ons team completely appreciated the value of the add-ons themselves, we didn’t really understand how valuable the content was, and we didn’t use it to its fullest potential.

The first shift we made was recognizing that what we had was enterprise content — structured content and metadata stored in a formal repository, reviewed, sometimes localized, and published in different forms in multiple places.

Then, when we assessed the value of it to the enterprise, we uncovered something called content debt.

Content debt is the hidden cost of not managing the creation, maintenance, utility, and usability of digital content. It accumulates when we don’t treat content like an asset with financial value, when we value expediency over the big picture, and when we fail to prioritize content management. You can think of content debt like home maintenance. If you don’t clean your gutters now, you’ll pay in the long term with costly water damage.

AMO’s content debt included issues of quality (missed opportunities to communicate value and respond to user questions), governance (varying content quality with limited organizational oversight), and structure (the need for new content types to evolve site design and improve social shares and search descriptions).

A few examples of content debt in action:

<figcaption>Facebook social share experience: Confusing image accompanied by text describing how to report an issue with the extension</figcaption>
<figcaption>Google Search results example for an extension. Lacks description of basic functionality and value proposition. No SEO-optimized keywords or social proof like average rating.</figcaption>
<figcaption>A search for “Investing” doesn’t yield the most helpful results</figcaption>

All of this equals quite a bit of content debt that prevents developers and users from being as successful as they could be in achieving their interconnected goals: offering and obtaining extensions. It also hinders the ecosystem as a whole when it comes to things like SEO (Search Engine Optimization), which the team wanted to improve.

Given the AMO site’s age (15 years), and the amount of content it contains, debt is to be expected. And it’s rare for a content strategist to be given a content challenge that doesn’t involve some debt because it’s rare that you are building a system completely from scratch. But, that’s not the end of the story.

When we considered the situation from a systems-wide perspective, we realized that we needed to move beyond thinking of the content as something created by an individual developer in a vacuum. Yes, the content is developer-generated — and with that a certain degree of variation is to be expected — but how could we provide developers with the support and perspective to create better content that could be used not only on their product page, but across the content ecosystem? While the end goal is more users with more extensions by way of usable content, we needed to create the underpinning rules that allowed for that content to be expressed across the experience in a successful way.

In part 2, I walk through the specific steps the team took to diagnose and develop solutions for our enterprise content debt. Meanwhile, speaking of the team…

Meet our ‘UX supergroup’

“A harmonious interface is the product of functional, interdisciplinary communication and clear, well-informed decision making.” — Erika Hall, Conversational Design, 108

While the problem space is framed in terms of content strategy, the diagnosis and solutions were very much interdisciplinary. For this work, I was fortunate to be part of a “UX supergroup,” i.e., an embedded project team that included a user researcher (Jennifer Davidson), a designer with strength in interaction design (Emanuela Damiani), a designer with strength in visual design (Philip Walmsley), and a UX content strategist (me).

We were able to do some of our best work by bringing to bear our discipline-specific expertise in an integrated and deeply collaborative way. Plus, we had a mascot — Pusheen the cat — inspired in part by our favorite tab-changing extension, Tabby Cat.

<figcaption>Left to right: Jennifer, Emanuela, Philip, Meridel #teampusheen</figcaption>

See Part 2, Paying Down Enterprise Content Debt: Developing Solutions

Thank you to Michelle Heubusch, Jennifer Davidson, Emanuela Damiani, Philip Walmsley, Kev Needham, Mike Conca, Amy Tsay, Jorge Villalobos, Stuart Colville, Caitlin Neiman, Andreas Wagner, Raphael Raue, and Peiying Mo for their partnership in this work.

Paying Down Enterprise Content Debt: Part 1 was originally published in Firefox User Experience on Medium, where people are continuing the conversation by highlighting and responding to this story.

Mozilla VR BlogFirefox Reality 1.1.3

Firefox Reality 1.1.3

Firefox Reality 1.1.3 will soon be available for all users in the Viveport, Oculus, and Daydream app stores.

This release includes some major new features including support for 6DoF controllers, new environments, option for curved browser window, greatly improved YouTube support, and many bug fixes.

Highlights:

  • Improved support for 6DoF Oculus controllers and user height.
  • Added support for 6DoF VIVE Focus (WaveVR) controllers.
  • Updated the Meadow environment and added new Offworld, Underwater, and Winter environments (Settings > Environments).
  • Added new option for curved browser window (Settings > Display).
  • Improved default playback quality of YouTube videos (defaults to 1440p HD or next best).

Improvements/Bug Fixes:

  • Fixed User-Agent override to fix Delight-VR video playback.
  • Changed the layout of the Settings window so it’s easier and faster to find the option you need to change.
  • Performance improvements, including dynamic clock levels and Fixed Foveated Rendering on Oculus.
  • Improved resolution of text rendering in UI widgets.
  • Plus a myriad of web-content handling improvements from GeckoView 68.
  • … and numerous other fixes

Full release notes can be found in our GitHub repo here.

Looking ahead, we are exploring content sharing and syncing across browsers (including bookmarks), multiple windows, as well as continuing to invest in baseline features like performance. We appreciate your ongoing feedback and suggestions — please keep them coming!

Firefox Reality is available right now.

Download for Oculus
(supports Oculus Go)

Download for Daydream
(supports all-in-one devices)

Download for Viveport (Search for Firefox Reality in Viveport store)
(supports all-in-one devices running VIVE Wave)

Mozilla VR BlogWrapping up a week of WebVR experiments

Wrapping up a week of WebVR experiments

Earlier this week, we kicked off a week of WebVR experiments with our friends at Glitch.com. Glitch creator and WebVR expert Andrés Cuervo put together seven projects that are fun, unique, and will challenge you to learn advanced techniques for building Virtual Reality experiences on the web.

If you are just getting started with WebVR, we recommend you check out this WebVR starter kit which will walk you through creating your very first WebVR experience.

Today, we launched the final experiment. If you haven't been following along, you can catch up on all of them below:

Motion Capture Dancing in VR

Learn how to use free motion capture data to animate a character running, dancing, or cartwheeling across a floor.


Adding Models and Shaders

Learn about how to load common file types into your VR scene.


Using 3D Shapes like Winding Knots

Learn how to work with the torus knot shape and the animation component that is included with A-Frame.


Animated Torus Knot Rings

Learn about template and layout components while you continue to build on the previous Winding Knots example.


Generated Patterns

Create some beautiful patterns using some flat geometry in A-Frame with clever tilting.


Creating Optical Illusions

This is a simple optical illusion that is made possible with virtual reality.


Including Dynamic Content

Learn how to use an API to serve random images that are used as textures in this VR scene.


We hope you enjoyed learning and remixing these experiments (We really enjoyed putting them together). Follow Andrés Cuervo on Glitch for even more WebVR experiments.

Mozilla UXWhat data types do real people care about protecting?

At Mozilla, helping users protect their privacy and ensuring that they have control over the data collected about them is a critical pillar of everything we do. It is so important that Principle #4 of the Mozilla Manifesto states, “Individuals’ security and privacy on the internet are fundamental and must not be treated as optional.” It’s also why we build tools such as Firefox Monitor, which helps users know when their data has been accessed without their permission.

There are MANY different categories of data (here’s a list of 50 distinct categories) that constitute your online self: your name, your address, the size of your paycheck. Individually they are just small data points, but collectively, they can add up to a unique individual. Obviously, you are probably more concerned with some data categories than others. You may not care if the fact that you are single or married is public information, but you probably don’t want your home address or the size of your paycheck being easily found by just anyone.

To help us understand the types of data that people like you are most concerned about, we decided to run a survey using the MaxDiff methodology. MaxDiff is a great way to take a giant list of almost anything, and ask people to rate the attributes in the list against each other. Survey participants were presented with four random attributes from our list, and asked to choose which one they are most worried about, and which one they are least worried about. In practice, it looks like this:

An example of how a MaxDiff question appears to a survey taker
We then show random sets of four attributes over and over again until the user has seen each attribute multiple times, and has had the chance to rank each attribute relative to several combinations. This gives us a nice, ranked list of attributes, as well as the following information:

  • How often the participant selected “Most Worried About”
  • How often the participant selected “Least Worried About”
  • How often the attribute was not selected at all

Using this data, we got a pretty good idea of what sorts of data users were most concerned about, which gives us a starting place to build tools and education about how to protect those most vulnerable types of data.
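
To make the scoring concrete, here is a minimal sketch of how best/worst selections could be tallied into a ranked list. The attribute names and responses are hypothetical, and real MaxDiff analysis typically fits a statistical model (such as a logit or hierarchical Bayes model) rather than using raw counts; a simple best-minus-worst count is shown only to illustrate the idea.

from collections import Counter

# Hypothetical responses: each entry is (most worried about, least worried about),
# chosen from a set of four attributes shown to the participant.
responses = [
    ("Bank account number", "Marital status"),
    ("Passwords", "Marital status"),
    ("Bank account number", "Browsing history"),
    ("Home address", "Browsing history"),
]

most = Counter(m for m, _ in responses)
least = Counter(l for _, l in responses)
attributes = set(most) | set(least)

# Simple best-minus-worst score; higher means more sensitive.
scores = {a: most[a] - least[a] for a in attributes}
for attribute, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{attribute:>20}: {score:+d}")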

The Survey

We ran our survey on a paid panel of people in the United States. We tried to match a balance of age and gender that mirrored the US Census, and we surveyed 1,500 participants. We plan to expand this research to other countries in the future. All respondents were shown the MaxDiff question type shown above, as well as a variety of questions around data breaches and the bad actors that they were most worried about gaining access to their data.

The Results

The top ten most sensitive types of data:

A chart displaying the top ten most sensitive data types

For all participants, the top 10 data types from our MaxDiff question can be seen above. Some summary notes on what we found participants rated highly:

  • Financial Information
  • Government issued IDs
  • Personal / Private information (Health, address, etc.)

This exercise lets us quantify the responses so we can better protect users by prioritizing the data types above in our products and tools. For example, Firefox Lockbox is a project dedicated to better protecting your passwords (the third most critical data type), starting with secure, safe access to your Firefox-saved passwords on mobile.

Who are participants most concerned will gain unauthorized access to their data?

A chart displaying who users are most concerned will gain access to their data

Another question we asked participants is who they are most worried about gaining unauthorized access to their data. The third top choice is “Companies that collect and sell info about me,” closely followed by “Social networking companies.” Facebook Container is an awesome example of a tool we developed that helps prevent a social networking company from knowing things about you that you might not want it to know.

Something interesting to note here is 54% of respondents said they were worried about “Companies that collect and sell data about me,” but only 22.7% of respondents were worried about advertisers. This implies that there is a large portion of the population that might be OK with online advertising, if they knew that the advertising company was just that–an advertising company–and not an advertising company that also was dedicated to collecting as much information about them as possible.

These results (combined with other research) are currently being used by our Security and Privacy teams to build products that help put you more in control of what data is collected about you, and how that data is used. Look out for exciting announcements later this year, both in Firefox, and outside the browser.

I hope this dive into the user research we are doing at Mozilla was an interesting and insightful peek behind the curtain and that it highlights how we use the opinions of real people to help guide our efforts to keep the web open, safe and free.

 

-Tyler

 

Acknowledgements

Thanks to everyone who assisted both with the development and analysis of this survey, and review of this post. In no particular order: Rob Rayborn, Rosanne Scholl, Josh Gaunt, Gemma Petrie, Jennifer Davidson, Alice Rhee, Sharon Bautista, and Sandy Sage.

hacks.mozilla.orgDeveloper Roadshow 2019 returns with VR, IoT and all things web

Mozilla Developer Roadshow is a meetup-style, Mozilla-focused event series for people who build the web. In 2017, the Roadshow reached more than 50 cities around the world. We shared highlights of the latest and greatest Mozilla and Firefox technologies. Now, we’re back to tell the story of how the web continues to democratize opportunities for developers and digital creators.

New events in New York and Los Angeles

To open our 2019 series, Mozilla presents two events with VR visionary Nonny de la Peña and the Emblematic Group in Los Angeles (April 23) and in New York (May 20-23). de la Peña’s pioneering work in virtual reality, widely credited with helping create the genre of immersive journalism, has been featured in Wired, Inc., The New York Times, and on the cover of The Wall Street Journal. Emblematic will present their latest project, REACH in WebVR. Their presentation will include a short demo of their product. During the social hour, the team will be available to answer questions and share their learnings and challenges of developing for the web.

Funding and resource scarcity continue to be key obstacles in helping the creative community turn their ideas into viable products. Within the realm of cutting edge emerging technologies, such as mixed reality, it’s especially challenging for women. Because women receive less than 2% of total venture funding, the open distribution model of the web becomes a viable and affordable option to build, test, and deploy their projects.

Upcoming DevRoadshow events

The DevRoadshow continues on the road with eight more upcoming sessions in Europe and the Asia Pacific regions throughout 2019. Locations and dates will be announced soon. We’re eager to invite coders and creators around the world to join us this year. The Mozilla Dev Roadshow is a great way to make new friends and stay up to date on new products. Come learn about services and opportunities that extend the power of the web as the most accessible and inclusive platform for immersive experiences.

Check back on this post for updates, visit our DevRoadshow site for up-to-date registration opportunities, and follow along on our journey on @mozhacks or sign up for the weekly Mozilla Developer Newsletter. We’ll keep you posted!

The post Developer Roadshow 2019 returns with VR, IoT and all things web appeared first on Mozilla Hacks - the Web developer blog.

Mozilla L10NImplementing Fluent in a localization tool

In order to produce natural sounding translations, Fluent syntax supports gender, plurals, conjugations, and virtually any other grammatical category. The syntax is designed to be simple to read, but translators without a developer background might find the more complex concepts harder to deal with.

That’s why we designed a Fluent-specific user interface in Pontoon, which unlocks the power of Fluent for localizers who aren’t coders. Any other general purpose Translation Management System (TMS) with support for popular localization formats can follow the example. Let’s have a closer look at the Fluent implementation in Pontoon.

Storing Fluent strings in the DB

We store Fluent translations in the Translation table differently than translations in other file formats.

In the string column we usually only store the translatable content, which is sufficient for representing the translation across the application. For example, the string hello = Hello, world! in the .properties file format is stored as Hello, world! in Translation.string. In Fluent, we store the entire string: hello = Hello, world!.

The main reason for storing full Fluent strings is attributes, which can be present in Fluent Messages and Terms in addition to their values. Having multiple localizable values stored together allows us to build advanced UIs like the ones we use for access keys (see below).

The alternative to storing the serialized string is to store its AST. At the time of implementing Fluent support in Pontoon, the AST wasn’t as stable as the syntax, but today you should be fine with using either of these approaches.
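
To illustrate what the stored string gives us, here is a minimal sketch using the python-fluent fluent.syntax package (Pontoon’s backend is written in Python). The message content is hypothetical and exact API details may vary between versions, but the point is that parsing the full stored string exposes the value and every attribute separately, which is what the dedicated editing UIs described below rely on.

from fluent.syntax import FluentParser, ast

# Hypothetical Translation.string value: the entire Fluent entry,
# not just the translatable text, so attributes survive the round trip.
stored = (
    "hello = Hello, world!\n"
    "    .title = A tooltip for the greeting\n"
)

resource = FluentParser().parse(stored)
for entry in resource.body:
    if isinstance(entry, ast.Message):
        print("id:", entry.id.name)                  # hello
        print("has value:", entry.value is not None)  # True
        print("attributes:", [attr.id.name for attr in entry.attributes])  # ['title']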

Different presentation formats

Due to their potential complexity, Fluent strings can be presented in 4 different ways in Pontoon:

  1. In simplified form: used in the string list, the History tab, and various other places across Pontoon, it represents the string as it appears to the end-user in the application. We generate it using FluentSerializer.serializeExpression(). For example, hello = Hello, world! would appear simply as Hello, world!.
  2. In read-only form: used to represent the source string.
  3. In editable form: used to represent the translation in the editor.
  4. As source: used to represent the translation in the source editor.
Presentation formats

Where the different presentation formats are used.
You can access the source view by clicking on the FTL button.

Example walk-through

The following is a list of various Fluent strings and how they appear in the Pontoon translation interface, with the source string panel on top and the translation editor below. The first batch shows strings that use the existing Pontoon UI, shared with other file formats.

Simple strings
Simple strings

title = About Localization

Multiline strings
Multiline strings

feedbackUninstallCopy =
    Your participation in Firefox Test Pilot means
    a lot! Please check out our other experiments,
    and stay tuned for more to come!

Strings with a single attribute
Strings with a single attribute

emailOptInInput =
    .placeholder = email goes here :)

Next up are Fluent strings that have attributes in addition to the value, or multiple attributes. A separate text input is available for each of them.

Strings with a value and attributes
Strings with a value and an attribute

shotIndexNoExpirationSymbol = ∞
    .title = This shot does not expire

Since Fluent lets you combine labels and access keys within the same string, we designed a dedicated UI that lists all the allowed access keys as you translate the label. You can also type in a custom key if none of the candidates meets your criteria.

Access keys
Access keys

file-menu =
    .label = File
    .accesskey = F

Terms are similar to regular messages but they can only be used as references in other messages. Hence, they are best used to define vocabulary and glossary items which can be used consistently across the localization of the entire product. Terms can have multiple variants, and Pontoon makes it easy to add/remove them.

Terms
Terms

# Translated string
-brandShortName = { $case ->
    [nominative] Firefox Račun
   *[genitive] Firefox Računa
}

Selectors are a powerful feature of Fluent. They are used when there’s a need for multiple variants of the string based on an external variable. In this case, PLATFORM().

Selectors
Selectors

platform = { PLATFORM() ->
    [win] Options
   *[other] Preferences
}

A common use case for selectors is plurals. When you translate a pluralized source string, Pontoon renders an empty field for each CLDR plural category used in the target locale, each accompanied by an example number.

Plurals
Plurals

delete-all-message = { $num ->
    [one] Delete this download?
   *[other] Delete { $num } downloads?
}

Selectors can also be used in attributes. Pontoon hides most of the complexity.

Selectors in attributes
Selectors in attributes

download-choose-folder =
    .label = { PLATFORM() ->
        [macos] Choose…
       *[other] Browse…
    }
    .accesskey = { PLATFORM() ->
        [macos] e
       *[other] o
    }

If a value or an attribute contains multiple selectors, Pontoon indents them for better readability.

Strings with multiple selectors
Strings with multiple selectors

selector-multi = There { $num ->
    [one] is one email
   *[other] are many emails
} for { $gender ->
   *[masculine] him
    [feminine] her
}

Strings with nested selectors are not supported in the Fluent editor, in which case Pontoon falls back to the source editor. The source editor is always available, and you can switch to it by clicking on the FTL button.

Unsupported strings
Unsupported strings

Next steps

The current state of the Fluent UI in Pontoon allows us to use Fluent to localize Firefox and various other projects. We don’t want to stop here, though. Instead, we’re looking to improve the Fluent experience further and make it the best among available localization formats. You can track the list of Fluent-related Pontoon bugs in Bugzilla. Some of the highlights include:

  • Ability to change default variant
  • Ability to add/remove variants
  • Ability to add selectors to simple messages
  • Making machinery usable with more complex strings
  • Prefilling complex message skeleton in source editor

Mozilla Gfx TeamWebRender newsletter #43

WebRender is a GPU-based 2D rendering engine for the web written in Rust, currently powering Mozilla’s research web browser Servo and on its way to becoming Firefox’s rendering engine.

A week in Toronto

The gfx team got together in Mozilla’s Toronto office last week. These gatherings are very valuable since the team is spread over many timezones (in no particular order, there are graphics folks in Canada, Japan, France, various parts of the US, Germany, England, Australia and New Zealand).

It was an intense week, filled with technical discussions and planning. I’ll go over some of them below:

New render task graph

Nical continues investigating a more powerful and expressive render task graph for WebRender. The work is happening in the toy render-graph repository, and during the week a lot of task scheduling and texture memory allocation strategies were discussed. It’s an interesting and open problem space where various trade-offs will play out differently on different platforms.
One of the things that came out of these discussions is the need for tools to understand the effects of the different graph scheduling and allocation strategies, and to help debug them once we get to integrating a new system into WebRender. As a result, Nical started building a standalone command-line interface for the experimental task graph that generates SVG visualizations.

rendergraph-example

Battery life

So far our experimentation has shown that energy consumption in WebRender (as well as in Firefox’s painting and compositing architecture) is strongly correlated with the amount of pixels that are manipulated. In other words, it is dominated by memory bandwidth, which is stressed by high screen resolutions. This is perhaps no surprise for someone who has worked with graphics rendering systems, but what’s nice about this observation is that it gives us a very simple metric to measure and to build optimizations and heuristics around.

Avenues for improvements in power consumption therefore include Doug’s work on document splitting and Markus’s work on better integration with macOS’s compositing window manager via the Core Animation API. There is no need to tell the window manager to redraw the whole window when only the upper part changed (the document containing the browser’s UI, for example).
The browser can break the window up into several surfaces and let the window manager composite them together which saves some work and potentially substantial amounts of memory bandwidth.
On Windows the same can be achieved with the Direct Composition API. On Linux with Wayland we could use sub-surfaces, although in our preliminary investigation we couldn’t find a reliable and portable way to obtain the composited content for the whole browser window, which is important for our testing infrastructure and other browser functionality. Only recently did Android introduce similar functionality with the SurfaceControl API.

We made some short and long term plans around the theme of better integration with the various window manager APIs. This is an area where I hope to see WebRender improve a lot this year.

Fission

Ryan gave us an overview of the architecture and progress of project Fission, which he has been involved with for some time. The goal of the project is to further isolate content from different origins by dramatically increasing the number of content processes. There are several challenging aspects to this. Reducing the per-process memory overhead is an important one, as we really want to remain better than Chrome in overall memory usage. WebRender actually helps in this area, as it moves most of the rendering out of content processes. There are also fun (with some notion of “fun”) edge cases to deal with, such as a page from domain A nested in an iframe from domain B, itself nested in an iframe from domain A, and what this type of sandwichery implies in terms of processes, communication, and what should happen when a CSS filter is applied to that kind of stack.

Fun stuff.

WebRender and software rendering

There will always be hardware and driver configurations that are too old, too buggy, or both, for us to support with WebRender using the GPU. For some time Firefox will fall back to the pre-WebRender architecture, but we’ll eventually want to get to a point where we can phase out this legacy code while still working for all of our users. So WebRender needs some way to work when the GPU can’t be used.

We discussed several avenues for this, one of which is to leverage WebRender’s OpenGL implementation with a software emulation such as SwiftShader. It’s unclear at this point whether we’ll be able to get acceptable performance this way, but Glenn’s early experiments show that there is a lot of low-hanging fruit in optimizing such a software implementation, hopefully to the point where it provides a good user experience.

Other options include a dedicated CPU rendering backend, which we could safely expect to reach at least Firefox’s current software rendering performance, at the expense of more engineering resources.

Removing code

As WebRender replaces Firefox’s current rendering architecture, we’ll be able to remove large amounts of fairly complex code in the gfx and layout modules, which is an exciting prospect. We discussed how much we can simplify and at which stages of WebRender’s progressive rollout.

WebGPU status

Kvark gave us an update on the status of the WebGPU specification effort. Things are coming along in a lot of areas, although the binding model and shader format are still very much under debate. Apple proposes to introduce a brand new shading language called WebHLSL, while Mozilla and Google want a bytecode format based on a safe subset of SPIR-V (the Khronos group’s shader bytecode standard used in Vulkan, OpenCL, and OpenGL through an extension). Having both a bytecode and a high-level language is also on the table, although fierce debates continue around the merits of introducing a new language instead of using and/or extending GLSL, which is already used in WebGL.

From what I have seen of the already-agreed-upon areas so far, WebGPU’s specification is shaping up to be a very nice and modern API, leaning towards Metal in terms of level of abstraction. I’m hopeful that it’ll eventually be extended into providing some of Vulkan’s lower level controls for performance.

Display list improvements

Display lists can be quite large and costly to process. Gankro is working on compression opportunities to reduce the IPC overhead, and Glenn presented plans to move some of the interning infrastructure to the API endpoints so as to send deltas instead of the entire display lists at each layout update, further reducing IPC cost.

Debugging workshops

Kvark presented his approach to investigating WebRender bugs and shared clever tricks around manipulating and simplifying frame recordings.

Glenn presented his Android development and debugging workflow and shared his observations about WebRender’s performance characteristics on mobile GPUs so far.

To be continued

We’ve covered about half of the week’s topics and it’s already a lot for a single newsletter. We will go over the rest in the next episode.

Enabling WebRender in Firefox Nightly

In about:config, enable the pref gfx.webrender.all and restart the browser.

Reporting bugs

The best place to report bugs related to WebRender in Firefox is the Graphics :: WebRender component in bugzilla.

Note that it is possible to log in with a GitHub account.

Using WebRender in a Rust project

WebRender is available as a standalone crate on crates.io (documentation).

Open Policy & AdvocacyUS House Votes to Save the Internet

Today, the House took a firm stand on behalf of internet users across the country. By passing the Save the Internet Act, members have made it clear that Americans have a fundamental right to access the open internet. Without these protections in place, big corporations like Comcast, Verizon, and AT&T could block, slow, or levy tolls on content at the expense of users and small businesses. We hope that the Senate will recognize the need for strong net neutrality protections and pass this legislation into law. In the meantime, we will continue to fight in the courts as the DC Circuit considers Mozilla v. FCC, our effort to restore essential net neutrality protections for consumers through litigation.

The post US House Votes to Save the Internet appeared first on Open Policy & Advocacy.

Open Policy & AdvocacyWhat we think about the UK government’s ‘Online Harms’ white paper

The UK government has just outlined its plans for sweeping new laws aimed at tackling illegal and harmful content and activity online, described by the government as ‘the toughest internet laws in the world’. While the UK proposal has some promising ideas for what the next generation of content regulation should look like, there are several aspects that would have a worrying impact on individuals’ rights and the competitive ecosystem. Here we provide our preliminary assessment of the proposal, and offer some guidance on how it could be improved.

According to the UK white paper, companies of all sizes would be under a ‘duty of care’ to protect their users from a broad class of so-called ‘online harms’, and a new independent regulator would be established to police them. The proposal responds to legitimate public policy concerns around how platforms deal with illegal and harmful content online, as well as the general public demand for tech companies to ‘do more’. We understand that in many respects the current regulatory paradigm is not fit for purpose, and we support an exploration of what codified content ‘responsibility’ might look like.

The UK government’s proposed regulatory architecture (a duty of care overseen by an independent regulator) has some promising potential. It could shift focus to regulating systems and instilling procedural accountability, rather than a focus on individual pieces of content and the liability/no liability binary. If implemented properly, its principles-based approach could allow for scalability, fine tailoring, and future-proofing; features that are presently lacking in the European content regulation paradigm (see for instance the EU Copyright directive and the EU Terrorist Content regulation).

Yet while the high-level architecture has promise, the UK government’s vision for how this new regulatory model could be realised in practice contains serious flaws. These must be addressed if this proposal is to reduce rather than encourage online harms.

  • Scope issues: The duty of care would apply to an extremely broad class of online companies. While this is not necessarily problematic if the approach is a principles-based one, there is a risk that smaller companies will be disproportionately burdened if the Codes of Practice are developed with only the tech incumbents in mind. In addition, the scope would include both hosting services and messaging services, despite the fact that they have radically different technical structures.
  • A conflation of terms: The duty of care would apply not only to a range of types of content – from illegal content like child abuse material to legal but harmful material like disinformation – but also harmful activities – from cyber bullying, to immigration crime, to ‘intimidation’. This conflation of content/activities and legal/harmful is concerning, given that many content-related ‘activities’ are almost impossible to proactively identify, and there is rarely a shared understanding of what ‘harmful’ means in different contexts.
  • The role of the independent regulator: Given that this regulator will have unprecedented power to determine how online content control works, it is worrying that the proposal doesn’t spell out safeguards that will be put in place to ensure its Codes of Practice are rights-protective and workable for different types of companies. In addition, it doesn’t give any clarity as to how the development of the codes will be truly co-regulatory.

Yet as we noted earlier, the UK government’s approach holds some promise, and many of the above issues could be addressed if the government is willing. There’s some crucial changes that we’ll be encouraging the UK government to adopt when it brings forward the relevant legislation. These relate to:

  • The legal status: There is a whole corpus of jurisprudence around duties of care and negligence law that has developed over centuries; it is therefore essential that the UK government clarifies how this proposal would interact with and relate to existing duties of care.
  • The definitions: There needs to be much more clarity on what is meant by ‘harmful’ content. Similarly, there must be much more clarity on what is meant by ‘activities’ and the duty of care must acknowledge that each of these categories of ‘online harms’ requires a different approach.
  • The independent regulator: The governance structure must be truly co-regulatory, to ensure the measures are workable for companies and protective of individuals’ rights.

We look forward to engaging with the UK government as it enters into a consultation period on the new white paper. Our approach to the UK government will mirror the one that we are taking vis-à-vis the EU – that is, building out a vision for a new content regulation paradigm that addresses lawmakers’ legitimate concerns, but in a way which is rights and ecosystem protective. Stay tuned for our consultation response in late June.

 

The post What we think about the UK government’s ‘Online Harms’ white paper appeared first on Open Policy & Advocacy.

hacks.mozilla.orgTeaching machines to triage Firefox bugs

Many bugs, not enough triage

Mozilla receives hundreds of bug reports and feature requests from Firefox users every day. Getting bugs to the right eyes as soon as possible is essential in order to fix them quickly. This is where bug triage comes in: until a developer knows a bug exists, they won’t be able to fix it.

Given the large number of bugs filed, it is unworkable to make each developer look at every bug (at the time of writing, we’d reached bug number 1536796!). This is why, on Bugzilla, we group bugs by product (e.g. Firefox, Firefox for Android, Thunderbird, etc.) and component (a subset of a product, e.g. Firefox::PDF Viewer).

Historically, the product/component assignment has been mostly done manually by volunteers and some developers. Unfortunately, this process fails to scale, and it is effort that would be better spent elsewhere.

Introducing BugBug

To help get bugs in front of the right Firefox engineers quickly, we developed BugBug, a machine learning tool that automatically assigns a product and component for each new untriaged bug. By presenting new bugs quickly to triage owners, we hope to decrease the turnaround time to fix new issues. The tool is based on another technique that we have implemented recently, to differentiate between bug reports and feature requests. (You can read more about this at https://marco-c.github.io/2019/01/18/bugbug.html).

High-level architecture of bugbug training and operation

High-level architecture of BugBug training and operation


Training a model

We have a large training set of data for this model: two decades worth of bugs which have been reviewed by Mozillians and assigned to products and components.

Clearly we can’t use the bug data as-is: any change to the bug after triage has been completed would be inaccessible to the tool during real operation. So, we “roll back” the bug to the time it was originally filed. (This sounds easy, but in practice there are a lot of corner cases to take into account!)

Also, although we have thousands of components, we only really care about a subset of these. In the past ~2 years, out of 396 components, only 225 components had more than 49 bugs filed. Thus, we restrict the tool to only look at components with a number of bugs that is at least 1% of the number of bugs of the largest component.

We use features collected from the title, the first comment, and the keywords/flags associated with each bug to train an XGBoost model.

High-level overview of the bugbug model

High-level overview of the BugBug model


During operation, we only perform the assignment when the model is confident enough of its decision: at the moment, we are using a 60% confidence threshold. With this threshold, we are able to assign the right component with a very low false positive ratio (> 80% precision, measured using a validation set of bugs that were triaged between December 2018 and March 2019).
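
As a rough illustration of this kind of pipeline, here is a minimal sketch in Python. The bug titles, component labels, and model settings are hypothetical toy stand-ins for BugBug’s real feature extraction and training data (see the bugbug repository for the actual implementation), but the overall shape (text features feeding an XGBoost classifier, gated by a confidence threshold) is the same.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from xgboost import XGBClassifier

# Hypothetical, tiny training set: title text per bug, labelled with the
# product::component it was eventually triaged into.
bugs = [
    ("Crash when opening PDF attachment", "Firefox::PDF Viewer"),
    ("PDF form fields render blank", "Firefox::PDF Viewer"),
    ("Address bar suggestions are slow", "Firefox::Address Bar"),
    ("Typing in the URL bar lags", "Firefox::Address Bar"),
]
texts, labels = zip(*bugs)
label_ids = {name: i for i, name in enumerate(sorted(set(labels)))}
y = [label_ids[name] for name in labels]

model = make_pipeline(
    TfidfVectorizer(),             # stand-in for BugBug's richer features
    XGBClassifier(n_estimators=50),
)
model.fit(texts, y)

# Only auto-assign when the model is confident enough (60% threshold).
probs = model.predict_proba(["PDF pages show up empty"])[0]
best = probs.argmax()
if probs[best] >= 0.6:
    component = [name for name, i in label_ids.items() if i == best][0]
    print("assign to", component, f"({probs[best]:.0%} confidence)")
else:
    print("leave for manual triage")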

BugBug’s results

Training the model on 2+ years of data (around 100,000 bugs) takes ~40 minutes on a 6-core machine with 32 GB of RAM. The evaluation time is in the order of milliseconds. Given that the tool does not pause and is always ready to act, the tool’s assignment speed is much faster than manual assignment (which, on average, takes around a week).

Since we deployed BugBug in production at the end of February 2019, we’ve triaged around 350 bugs. The median time for a developer to act on triaged bugs is 2 days. (9 days is the average time to act, but it’s only 4 days when we remove outliers.)

BugBug in action, performing changes on a newly opened bug

BugBug in action


Plans for the future

We have plans to use machine learning to assist in other software development processes, for example:

  • Identifying duplicate bugs. In the case of bugs which crash Firefox, users will typically report the same bug several times, in slightly different ways. These duplicates are eventually found by the triage process, and resolved, but finding duplicate bugs as quickly as possible provides more information for developers trying to diagnose a crash.
  • Providing additional automated help for developers, such as detecting bugs in which “steps to reproduce” are missing and asking reporters to provide them, or detecting the type of bug (e.g. performance, memory usage, crash, and so on).
  • Detecting bugs that might be important for a given Firefox release as early as possible.

Right now our tool only assigns components for Firefox-related products. We would like to extend BugBug to automatically assign components for other Mozilla products.

We also encourage other organizations to adopt BugBug. If you use Bugzilla, adopting it will be very easy; otherwise, we’ll need to add support for your bug tracking system. File an issue on https://github.com/mozilla/bugbug and we’ll figure it out. We are willing to help!

The post Teaching machines to triage Firefox bugs appeared first on Mozilla Hacks - the Web developer blog.

Web Application SecurityDNS-over-HTTPS Policy Requirements for Resolvers

Over the past few months, we’ve been experimenting with DNS-over-HTTPS (DoH), a protocol which uses encryption to protect DNS requests and responses, with the goal of deploying DoH by default for our users. Our plan is to select a set of Trusted Recursive Resolvers (TRRs) that we will use for DoH resolution in Firefox. Those resolvers will be required to conform to a specific set of policies that put privacy first.

To that end, today we are releasing a list of DoH requirements, available on the Mozilla wiki, that we will use to vet potential resolvers for Firefox. The requirements focus on three areas: 1) limiting data collection and retention from the resolver, 2) ensuring transparency for any data retention that does occur, and 3) limiting any potential use of the resolver to block access or modify content. This is intended to cover resolvers that Firefox will offer by default and resolvers that Firefox might discover on the local network.

In publishing this policy, our goal is to encourage adherence to practices for DNS that respect modern standards for privacy and security, not just for our potential DoH partners, but for all DNS resolvers.

The post DNS-over-HTTPS Policy Requirements for Resolvers appeared first on Mozilla Security Blog.

Firefox UXDesigning Better Security Warnings

Security messages are very hard to get right, but it’s very important that you do. The world of internet security is increasingly complex and scary for the layperson. While in-house security experts play a key role in identifying the threats, it’s up to UX designers and writers to communicate those threats in ways that enlighten and empower users to make more informed decisions.

We’re still learning what works and what doesn’t in the world of security messages, but there are some key insights from recent studies from the field at large. We had a chance to implement some of those recommendations, as well as learnings from our own in-house research, in a recent project to overhaul Firefox’s most common security certificate warning messages.

Background

Websites prove their identity via security certificates (i.e., www.example.com is in fact www.example.com, and here’s the documentation to show it). When you try to visit any website, your browser will review the certificate’s authenticity. If everything checks out, you can safely proceed to the site.

If something doesn’t check out, you’ll see a security warning. 3% of Firefox users encounter a security certificate message on a daily basis. Nearly all users who see a security message see one of five different message types. So, it’s important that these messages are clear, accurate, and effective in educating and empowering users to make the informed (ideally, safer) choice.

These error messages previously included some vague, technical jargon nestled within a dated design. Given their prevalence, and Firefox’s commitment to user agency and safety, the UX and security team partnered up to make improvements. Using findings from external and in-house research, UX Designer Bram Pitoyo and I collaborated on new copy and design.

Old vs. New Designs

Example of an old Firefox security certificate message
Example of a new Firefox security message

Goals

Business goals:

  1. User safety: Prevent users from visiting potentially unsafe sites.
  2. User retention: Keep Firefox users who encounter these errors from switching to another browser.

User experience goals:

  1. Comprehension: The user understands the situation and can make an informed decision.
  2. Adherence: The user makes the informed, pro-safety choice. In the case of security warnings, this means the user does not proceed to a potentially unsafe site, especially if the user does not fully understand the situation and implications at hand.(1)

Results

We met our goals, as illustrated by three different studies:

1. A qualitative usability study (remote, unmoderated on usertesting.com) of a first draft of redesigned and re-written error pages. The study evaluated the comprehensibility, utility, and tone of the new pages. Our internal user researcher, Francis Djabri, tested those messages with eight participants and we made adjustments based on the results.

2. A quantitative survey comparing Firefox’s new error pages, Firefox’s current error pages, and Chrome’s current comparative pages. This was a paid panel study that asked users about the source of the message, how they felt about the message, and what actions they would take as a result of the message. Here’s a snapshot of the results:

When presented with the redesigned error message, we saw a 22–50% decrease in users stating they would attempt to ignore the warning message.
When presented with the redesigned error message, we saw a 29–60% decrease in users stating they would attempt to access the website via another browser. (Only 4.7–8.5% of users who saw the new Firefox message said they would try another browser, in contrast to 10–11.3% of users who saw a Chrome message).
(Source: Firefox Strategy & Insights, Tyler Downer, November 2018 Survey Highlights)

3. A live study comparing the new and old security messages with Firefox users confirmed that the new messages did not significantly impact usage or retention in a negative way. This gave us the green light to go live with the redesigned pages for all users.

How we did it:

The process of creating new security messages

In this blog post, I identify the eight design and content tips — based on outside research and our own — for creating more successful security warning messages.

Content & Design Tips

1. Avoid technical jargon, and choose your words carefully

Unless your particular users are more technical, it’s generally good practice to avoid technical terms — they aren’t helpful or accessible for the general population. Words like “security credentials,” “encrypted,” and even “certificate” are too abstract and thus ineffective in achieving user understanding.(2)

It’s hard to avoid some of these terms entirely, but when you do use them, explain what they mean. In our new messages, we don’t use the technical term, “security certificates,” but we do use the term “certificates.” On first usage, however, we explain what “certificate” means in plain language:

Some seemingly common terms can also be problematic. Our own user study showed that the term, “connection,” confused people. They thought, mistakenly, that the cause of the issue was a bad internet connection, rather than a bad certificate.(3) So, we avoid the term in our final heading copy:

2. Keep copy concise and scannable…because people are “cognitive misers”

When confronted with decisions online, we all tend to be “cognitive misers.” To minimize mental effort, we make “quick decisions based on learned rules and heuristics.” This efficiency-driven decision making isn’t foolproof, but it gets the job done. It means, however, that we cut corners when consuming content and considering outcomes.(4)

Knowing this, we kept our messages short and scannable.

  • Since people tend to read in an F-shaped pattern, we served up our most important content in the prime real estate of the heading and upper left-hand corner of the page.
  • We used bolded headings and short paragraphs so the reader can find and consume the most important information quickly and easily. Employing headers and prioritizing content into a hierarchy in this way also makes your content more accessible:

We also streamlined the decision-making process with opinionated design and progressive disclosure (read on below).

3. Employ opinionated design, to an appropriate degree

“Safety is an abstract concept. When evaluating alternatives in making a decision, outcomes that are abstract in nature tend to be less persuasive than outcomes that are concrete.” — Ryan West, “The Psychology of Security”

When users encounter a security warning, they can’t immediately access content or complete a task. Between the two options — proceed and access the desired content, or retreat to avoid some potential and ambiguous threat — the former provides a more immediate and tangible award. And people like rewards.(5)

Knowing that safety may be the less appealing option, we employed opinionated design. We encourage users to make the safer choice by giving it a design advantage as the “clear default choice.”(6) At the same time, we have to be careful that we aren’t being a big brother browser. If users want to proceed, and take the risk, that’s their choice (and in some cases, the informed user can do so knowing they are actually safe from the particular certificate error at hand). It might be tempting to add ten click-throughs and obscure the unsafe choice, but we don’t want to frustrate people in the process. And, the efficacy of additional hurdles depends on how difficult those hurdles are.(7)

Striving for balance, we:

  • Made the pro-safety choice the most prominent and accessible. The blue button pops against the gray background, and contains text to indicate it is indeed the “recommended” course of action. The color blue is also often used in traffic signals to indicate guidance and direction, which is fitting for the desired pro-safety path.
  • In contrast, the “Advanced…” button is a muted gray, and, after selecting this button, the user is presented with one last barrier. That barrier is additional content explaining the risk. It’s followed by the button to continue to the site in a muted gray with the foreboding text, “Accept the risk…” We used the word “risk” intentionally to capture the user’s attention and be clear that they are putting themselves in a potentially precarious position.

4. Communicate the risk, and make it tangible

In addition to “safety” being an abstract concept, users tend to believe that they won’t be the ones to fall prey to the potential threat (i.e., those kinds of things happen to other people…they won’t happen to me).(8) And, save for our more tech-savvy users, the general population might not care what particular certificate error is taking place and its associated details.

So, we needed to make the risk as concrete as possible, and communicate it in more personal terms. We did the following:

  • Use the word “Warning” in our header to capture the user’s attention.
  • Explain the risk in terms of real potential outcomes. The old message simply said, “To protect your information from being stolen…” Our new message is much more explicit, including examples of what was at risk of being stolen. Google Chrome employs similarly concrete wording.
  • Communicate the risk early in your content hierarchy — in our case, this meant the first paragraph (rather than burying it under the “Advanced” section).

5. Practice progressive disclosure

While the general population might not need or want to know the technical details, you should provide them for the users that do…in the right place.

Users rarely click on links like “Learn more” and “More Information.”(9) Our own usability study confirmed this, as half of the participants did not notice or feel compelled to select the “Advanced” button.(10) So, we privileged content that is more broadly accessible and immediately important on our first screen, but provided more detail and technical information on the second half of the screen, or behind the “Advanced” button. Knowing users aren’t likely to click on “Advanced,” we moved any information that was more important, such as content about what action the user could take, to the first screen.

The “Advanced” section thus serves as a form of progressive disclosure. We avoided cluttering up our main screen with technical detail, while preserving a less obtrusive place for that information for the users who want it.

6. Be transparent (no one likes the internet browser who cried wolf)

In the case of security errors, we don’t know for sure if the issue at hand is the result of an attack, or simply a misconfigured site. Hackers could be hijacking the site to steal credit card information…or a site may just not have its security certificate credentials in order, for example.

When there is chance of attack, communicate the potential risk, but be transparent about the uncertainty. Our messages employ language like “potential” and “attackers could,” and we acknowledge when there are two potential causes for the error (the former unsafe, the latter safe):

The website is either misconfigured or your computer clock is set to the wrong time.

Explain why you don’t trust a site, and offer the ability to learn more in a support article:

Websites prove their identity via certificates. Firefox does not trust example.com because its certificate issuer is unknown, the certificate is self-signed, or the server is not sending the correct intermediate certificates. Learn more about this error

A participant in our usability study shared his appreciation for this kind of transparency:

“I’m not frustrated, I’m enlightened. Often software tries to summarize things; they think the user doesn’t need to know, and they’ll just say something a bit vague. As a user, I would prefer it to say ‘this is what we think and this is how we determined it.’”— Participant from a usability study on redesigned error messages (User Research Firefox UX, Francis Djabri, 2018)

7. Choose imagery and color carefully

Illustration, iconography, and color treatment are important communication tools to accompany the copy. Visual cues can be even “louder” than words and so it’s critical to choose these carefully.

We wanted users to understand the risk at hand but we didn’t want to overstate the risk so that browsing feels like a dangerous act. We also wanted users to know and feel that Firefox was protecting them from potential threats.

Some warning messages employ more dramatic imagery like masked eyes, a robber, or police officer, but their efficacy is disputed.(11) Regardless, that sort of explicit imagery may best be reserved for instances in which we know the danger to be imminent, which was not our case.

The imagery must also be on brand and consistent with your design system. At Firefox, we don’t use human illustration within the product — we use whimsical critters. Critters would not be an appropriate choice for error messages communicating a threat. So, we decided to use iconography that telegraphs risk or danger.

We also selected color scaled according to threat level. At Firefox, yellow is a warning and red signifies an error or threat. We used a larger yellow icon for our messages as there is a potential risk but the risk is not guaranteed. We also added a yellow border as an additional deterrent for messages in which the user had the option to proceed to the unsafe site (this isn’t always the case).

Example of a yellow border around one of the new error messages

8. Make it human

Any good UX copy uses language that sounds and feels human, and that’s an explicit guiding principle in Firefox’s own Voice and Tone guidelines. By “human,” I mean language that’s natural and accessible.

If the context is right, you can go a bit farther and have some fun. One of our five error messages did not actually involve risk to the user — the user simply needed to adjust her clock. In this case, Communication Design Lead, Sean Martell, thought it appropriate to create an “Old Timey Berror” illustration. People in our study responded well… we even got a giggle:

New clock-related error message

Conclusion

The field of security messaging is challenging on many levels, but there are things we can do as designers and content strategists to help users navigate this minefield. Given the amount of frustration error messages can cause a user, and the risk these obstructions pose to business outcomes like retention, it’s worth the time and consideration to get these oft-neglected messages right…or, at least, better.

Thank you

Special thanks to my colleagues: Bram Pitoyo for designing the messages and being an excellent thought partner throughout, Johann Hofmann and Dana Keeler for their patience and security expertise, and Romain Testard and Tony Cinotto for their mad PM skills. Thank you to Sharon Bautista, Betsy Mikel, and Michelle Heubusch for reviewing an earlier draft of this post.

References

Footnotes

  1. Adrienne Porter Felt et al., “Improving SSL Warnings: Comprehension and Adherence.”(Philadelphia: Google, 2015).
  2. Ibid.
  3. User Research, Firefox UX, Francis Djabri, 2018.
  4. West, Ryan. “The Psychology of Security.” Communications of the ACM 51, no. 4 (April 2008): 34–40. doi:10.1145/1330311.1330320.
  5. West, Ryan. “The Psychology of Security.” Communications of the ACM 51, no. 4 (April 2008): 34–40. doi:10.1145/1330311.1330320.
  6. Adrienne Porter Felt et al., “Experimenting At Scale With Google Chrome’s SSL Warning.” (Toronto: CHI2014, April 26 — May 01 2014). https://dl.acm.org/citation.cfm?doid=2556288.2557292
  7. Ibid.
  8. West, Ryan. “The Psychology of Security.” Communications of the ACM 51, no. 4 (April 2008): 34–40. doi:10.1145/1330311.1330320.
  9. Devdatta Akhawe and Adrienne Porter Felt, “Alice in Warningland: A Large-Scale Field Study of Browser Security Warning Effectiveness.” Semantic Scholar, 2015.
  10. User Research, Firefox UX, Francis Djabri, 2018.
  11. Devdatta Akhawe and Adrienne Porter Felt, “Alice in Warningland: A Large-Scale Field Study of Browser Security Warning Effectiveness.” Semantic Scholar, 2015.

Designing Better Security Warnings was originally published in Firefox User Experience on Medium, where people are continuing the conversation by highlighting and responding to this story.

Mozilla VR BlogVoxelJS Reboot

VoxelJS Reboot

If you’ve ever played Minecraft then you have used a voxel engine. My 7 year old son is a huge fan of Minecraft and asked me to make a Minecraft for VR. After some searching I found VoxelJS, a great open source library created by Max Ogden (@maxogden) and James Halliday (@substack). Unfortunately it hasn’t been updated for about five years and doesn't work with newer libraries.

So what to do? Simple: I dusted it off, ported it to modern ThreeJS & JavaScript, then added WebXR support. I call it VoxelJS Next.

VoxelJS Reboot

What is it?

VoxelJS Next is a graphics engine, not a game. I think of Minecraft as a category of game, not a particular instance. I’d like to see Minecraft-style voxels used for all sorts of experiences. Chemistry, water simulation, infinite runners, and many other fun experiences.

VoxelJS lets you build your own Minecraft-like game easily on the web. It can run in desktop mode, on a touch screen, full screen, and even in VR thanks to WebXR support. VoxelJS is built on top of ThreeJS.

How Does it Work:

I’ll talk about how data is stored and drawn on screen in a future blog post, but the short answer is this:

The world is divided into chunks. Each chunk contains a grid of blocks and is created on demand. These chunks are turned into ThreeJS meshes which are then added to the scene. The chunks come and go as the player moves around the world. So even if you have an infinitely large world only a small number of chunks need to be loaded at a time.
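
The bookkeeping behind this is mostly integer arithmetic: a block’s world position maps to a chunk coordinate, and only chunks within some radius of the player stay resident. Here is a rough, conceptual sketch of that idea (VoxelJS Next itself is JavaScript, and its real chunk pipeline also meshes blocks into ThreeJS geometry; the chunk size and load radius below are arbitrary).

CHUNK_SIZE = 16   # blocks per chunk edge (arbitrary for this sketch)
LOAD_RADIUS = 2   # how many chunks around the player stay resident

loaded = {}       # (cx, cy, cz) -> block data for that chunk

def chunk_coord(x, y, z):
    """Map a world-space block position to the coordinate of its chunk."""
    return (x // CHUNK_SIZE, y // CHUNK_SIZE, z // CHUNK_SIZE)

def generate_chunk(coord):
    # Stand-in for real terrain generation and meshing into a ThreeJS mesh.
    return {"coord": coord, "blocks": [0] * CHUNK_SIZE ** 3}

def update_chunks(player_pos):
    """Load chunks near the player and drop the ones that fell out of range."""
    pcx, pcy, pcz = chunk_coord(*player_pos)
    wanted = {
        (pcx + dx, pcy + dy, pcz + dz)
        for dx in range(-LOAD_RADIUS, LOAD_RADIUS + 1)
        for dy in range(-LOAD_RADIUS, LOAD_RADIUS + 1)
        for dz in range(-LOAD_RADIUS, LOAD_RADIUS + 1)
    }
    for coord in wanted - loaded.keys():
        loaded[coord] = generate_chunk(coord)   # add its mesh to the scene
    for coord in loaded.keys() - wanted:
        del loaded[coord]                       # remove its mesh from the scene

update_chunks((40, 10, -25))
print(len(loaded), "chunks resident")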

VoxelJS is built as ES6 modules with a simple entity system. This lets you load only the parts you want. There is a module each for desktop controls, touch controls, VR, and more. Thanks to modern browser module support you don’t need to use a build tool like Webpack. Everything works just by importing modules.

How Do I get it?

Check out a live demo and get the code.

Next steps:

I don’t want to oversell VoxelJS Next. This is a super alpha release. The code is incredibly buggy, performance isn’t even half of what it should be, there are only a few textures, and tons of features are missing. VoxelJS Next is just a start. However, it’s better to get early feedback than to build features no one wants, so please try it out.

You can find the full list of features and issues here. And here is a good set of issues for beginners to start with.

I also created a #voxels channel in the ThreeJS slack group. You can join it here.

I find voxels really fun to play with, as they give you great freedom to experiment. I hope you'll enjoy building web creations with VoxelJS Next.

hacks.mozilla.orgSharpen your WebVR skills with experiments from Glitch and Mozilla


Join us for a week of WebVR experiments.

Earlier this year, we partnered with Glitch.com to produce a WebVR starter kit. In case you missed it, the kit includes a free, 5-part video course with interactive code examples that teach the fundamentals of WebVR using A-Frame. The kit is intended to help anyone get started – no coding experience required.

Today, we are kicking off a week of WebVR experiments. These experiments build on the basic fundamentals laid out in the starter kit. Each experiment is unique and is meant to teach and inspire as you craft your own WebVR experiences.

To build these, we once again partnered with the awesome team at Glitch.com as well as Glitch creator Andrés Cuervo. Andrés has put together seven experiments that range from incorporating motion capture to animating torus knots in 3D.

Today we are releasing the first 3 WebVR experiments, and we’ll continue to release a new one every day. If you want to follow along, you can click here and bookmark this page. The first three are shared below:


Dancing VR Scene

This example takes free motion capture data and adds it to a VR scene.

Noisy Sphere

The Noisy Sphere will teach you how to incorporate 3D models and textures into your project.

Winding Knots

In this example, you get a deeper dive on the torus knot shape and the animation component that is included in A-Frame.

If you are enjoying these experiments, we invite you to visit this page throughout the week as we’ll add a new experiment every day. We will post the final experiment this Friday.

Visit a week of WebVR Experiments.

The post Sharpen your WebVR skills with experiments from Glitch and Mozilla appeared first on Mozilla Hacks - the Web developer blog.

Mozilla UXPaying Down Enterprise Content Debt: Part 3

Gray box with number 1 and the text, "Framing the Problem." Signals first post in three-part series. Gray box with number 2 and the text, "Developing Solutions." Signals second post in three-part series. Purple box with number 3 and the text, "Implementation & Guidance." Signals third post in three-part series.

Summary: This series outlines the process to diagnose, treat, and manage enterprise content debt, using Firefox add-ons as a case study. In this third and final piece, I describe the deliverables we created to support a new content model.

 

Content guidelines for the “author experience”

“Just as basic UX principles tell us to help users achieve tasks without frustration or confusion, author experience design focuses on the tasks and goals that CMS users need to meet—and seeks to make it efficient, intuitive, and even pleasurable for them to do so.” —Sara Wachter-Boettcher, Content Everywhere

 

A content model is a useful tool for organizations to structure, future-proof, and clean up their content. But that content model is only brought to life when content authors populate the fields you have designed with actual content. And the quality of that content is dependent in part on how the content system supports those authors in their endeavor.

We had discovered through user research that developers create extensions for a great variety of reasons—including as a side hobby or for personal enjoyment. They may not have the time, incentive, or expertise to produce high-quality, discoverable content to market their extensions, and they shouldn’t be expected to. But, we can make it easier for them to do so with more actionable guidelines, tools, and governance.

An initial review of the content submission flow revealed that the guidelines for developers needed to evolve. Specifically, we needed to give developers clearer requirements, explain why each content field mattered and where that content showed up, and provide them with examples. On top of that, we needed to give them writing exercises and tips when they hit a dead end.

So, to support our developer authors in creating our ideal content state, I drafted detailed content guidelines that walked extension developers through the process of creating each content element.

Screenshot of a draft content guidelines page for creating a good extension name. Includes a description of why it's important to create a good extension name, the requirements, and a list of "do's" and "don'ts."

Draft content guidelines for extension elements, mocked up in a rough Google Site for purposes of feedback and testing.

 

Once a draft was created, we tested it with Mozilla extension developer, Dietrich Ayala. Dietrich appreciated the new guidelines, and more importantly, they helped him create better content.

 

Screenshot of an section of an old, blue product page for the extension, Always Right. The icon is a photo of the developer's face. The subtitle repeats the first line of the long description, with dated installation details.

Sample of previous Product Page content

 

Screenshot of a section of the revised product information for the extension, "Always Right." Incliudes a new icon of a hand pointing right, with the caption:"New icon from Noun Project is clean, simple, and communicates what the extension does." The revised subtitle, "Open new tabs to the immediate right of your current tab," with the annotation: "New subtitle communicates what extension does and follows SEO recommendations for length."

Sample of revised Product Page content

 

New screenshots for the extension, Always Right. They show the tab strip of the browser. The top tab strip shows the default new tab placement at the end of the list of tabs. The bottom tab strip shows how the extension opens a new tab to the immediate right of your current tab.

Sample of revised Product Page content: New screenshots to illustrate how extension works

We also conducted interviews with a cohort of developers in a related project to redesign the extensions submission flow (i.e., the place in which developers create or upload their content). As part of that process, we solicited feedback from 13 developers about the new guidelines:

  • Developers found the guidelines to be helpful and motivating for improving the marketing and SEO of their extensions, thereby better engaging users.
  • The clear “do this/not that” section was very popular.
  • They had some suggestions for improvement, which were incorporated into the next version.

 

Excerpts from developer interviews:

 

“If all documentation was like this, the world would be a better place…It feels very considered. The examples of what to do, what not do is great. This extra mile stuff is, frankly, something I don’t see on developer docs ever: not only are we going to explain what we want in human language, but we are going to hold your hand and give you resources…It’s [usually] pulling teeth to get this kind of info [for example, icon sizes] and it’s right here. I don’t have to track down blogs or inscrutable documentation.”

“…be more upfront about the stuff that’s possible to change and the stuff that would have consequences if you change it.”

 

Finally, to deliver the guidelines in a useful, usable format, we partnered with the design agency, Turtle, to build out a website.

Screenshot of a sample draft content guidelines page for Extension Name. Includes the extension card for Remembear extension, examples of other successful extension names, a list of "Do's" and "Don'ts" and requirements.

Draft content guidelines page

Screenshot of an example writing exercise for developers creating subtitles. The heading, "Stuck? Try this," is followed by five fill-in-the-blank exercises with examples:

  • Adjective + Action: The ______ [adjective/user benefit] to do _____ [action/what extension does]. Example, “Remembear”: The easiest way to remember passwords.
  • Features: ____ [feature], _____ [feature], ____ [feature]. Example, “Honey”: Automatic Coupons, Promo Codes, & Deals.
  • Keywords: ____ [keyword] _____ [keyword] ____ [keyword]. Example, “Ghostery”: Privacy ad blocker.
  • Action + More Detail: ____ [action/what your extension does] where/how/by ___ [more detail about extension]. Example, “Spotter”: Automatic connecting with public rooms.
  • Benefit + How: ____ [user benefit] ___ [how it achieves that benefit]. Example, “Privacy Badger”: Protects your privacy by blocking spying ads & trackers.

Draft writing exercise on the subtitle content guidelines page

Bringing the model to life

 

Now that the guidelines were complete, it was time to develop the communication materials to accompany the launch.

To bring the content to life, and put a human face on it, we created a video featuring our former Director of Firefox UX, Madhava Enros, and an extension developer. The video conveys the importance of creating good content and design for product pages, as well as how to do it.

Screenshot of a video. A hand is sketching a rough product page on a piece of paper. Next to the paper is a table of contents that includes an introduction and the main content elements on a product page, such as Developer Name, Extension Name, etcetera.

Product Page Content & Design Video

 

Preliminary results

 

Our content model had a tall order to fill, as detailed in our objectives and measurements template.

So, how did we do against those objectives? While the content guidelines have not yet been published as of this writing, here’s a snapshot of preliminary results:

Screenshot of an "Objectives" scorecard. The first column includes the objectives of improved user experience, improved developer experience, and improved content structure. The second column includes the scored measurements. The measurements that have been achieved, such as "improved search traffic, rankings, impressions" have a green checkmark; the others are labeled "TBD."

Snapshot of preliminary results

 

And, for illustration, some examples in action:

1. Improved Social Share Quality

Example of a Facebook share for the extension, uBlock Origin. Includes a confusing image, stats, the title "uBlock Origin—Add-ons for Firefox," and this text: "If you think this add-on violates Mozilla's add-on policies or has security or privacy issues, please report these issues to Mozilla using this form."

Previous Facebook Share example

Sample Facebook share message for the extension, Tabby Cat. Includes a screenshot of a Tabby Cat new tab: it's blue with an orange kitty named "Fluffy MipMop." Beneath the image is the add-ons website URL, the title, "Tabby Cat—Get this Extension for (fox emoji) Firefox" and the line "Download Tabby Cat for Firefox. A new friend in every tab."

New Facebook Share example

2. New Google search snippet model reflective of SEO best practices and what we’d learned from user research—such as the importance of social proof via extension ratings.

Previous Google “search snippet”

 

Screenshot of a Google search snippet, or summary, for the extension, uBlock Origin. Includes the title, "uBlock Origin—Get this Extension for (fox emoji) Firefox (en-US)", the URL, the average star rating, and a subtitle.

New Google “search snippet”

3. Promising, but early, SEO findings:

  • The decline in organic search traffic slowed by 2x compared to summer 2018.
  • Search impressions for the AMO site are much higher (30%) than before, showing the potential of the domain.
  • Overall rankings are stable, and top 10 rankings are slightly up.
  • Extension/theme installs via organic search are stable after the previous year’s declines.
  • The conversion rate to installs via organic search is growing, from 42% to 55%.

 

Conclusion & resources

 

There isn’t a quick or easy fix to enterprise content debt, but investing the time and energy to think about your content as structured data, to cultivate a content model based upon your business goals, and to develop the guidance and guardrails to realize that model with your content authors, pays dividends in the long haul.

Hopefully this series has provided a set of steps and tools to figure out your individual payment plan. For more on this topic:

And, if you need more help, and you’re fortunate enough to have a content strategist on your team, you can always ask that friendly content nerd for some content credit counseling.

Screenshot of a commercial from the 1980s. Includes the text, "Consolidate and reduce your monthly payments," and there is a still of a woman in a red blazer and headset. A 1-800 phone number is at the bottom of the screen.

Thank you to Michelle Heubusch, Jennifer Davidson, Emanuela Damiani, Philip Walmsley, Kev Needham, Mike Conca, Amy Tsay, Jorge Villalobos, Stuart Colville, Caitlin Neiman, Andreas Wagner, Raphael Raue, and Peiying Mo for their partnership in this work.

Mozilla UXPaying Down Enterprise Content Debt: Part 2

Gray box with number 1 and the text, "Framing the Problem." Signals first post in three-part series. Purple box with number 2 and the text, "Developing Solutions." Signals second post in three-part series. Gray box with number 3 and the text, "Implementation & Guidance." Signals third post in three-part series.

Summary: This series outlines the process to diagnose, treat, and manage enterprise content debt, using Firefox add-ons as a case study. This piece walks through the eight steps to develop a new content model:

  • Step 1: Stakeholder interviews
  • Step 2: Documenting content elements
  • Step 3: Data analysis—content quality
  • Step 4: Domain expert review
  • Step 5: Competitor compare
  • Step 6: User research—What content matters?
  • Step 7: Creating a content model
  • Step 8: Refine and align

 

Step 1: Stakeholder interviews

 

To determine a payment plan for our content debt, we needed to first get a better understanding of the product landscape. Over the course of a couple of weeks, the team’s UX researcher and I conducted stakeholder interviews:

Who: Subject matter experts, decision-makers, and collaborators. May include product, engineering, design, and other content folks.

What: Schedule an hour with each participant. Develop a spreadsheet with questions that get at the heart of what you are trying to understand. Ask the same set of core questions to establish trends and patterns, as well as a smaller set specific to each interviewee’s domain expertise.

Screenshot of an Excel sheet, with three columns: "Topic," "Questions," "Answer." "Topics" include "Role," "Product," and "Users, Developers, Volunteers." Each topic contains questions. The questions range from general ("Please tell me a bit about your role and work on add-ons, in your own words") to specific ("When developers create extension descriptions, what is that experience like? What kinds of challenges or opportunities do you see in this space?"). The "Answer" column is blank.

Sample question template, including content-specific inquiries below

Screenshot of an Excel sheet with content-specific questions like "How would you describe the current state of content in the experience today? I've noticed some inconsistency—why do you think this is?"

 

After completing the interviews, we summarized the findings and walked the team through them. This helped build alignment with our stakeholders around the issues and prime them for the potential UX and content solutions ahead.

Stakeholder interviews also allowed us to clarify our goals. To focus our work and make ourselves accountable to it, we broke down our overarching goal—improve Firefox users’ ability to discover, trust, install, and enjoy extensions—into detailed objectives and measurements using an objectives and measurements template. Our main objectives fell into three buckets: improved user experience, improved developer experience, and improved content structure. Once the work was done, we could measure our progress against those objectives using the measurements we identified.

 

Step 2: Documenting content elements

 

Product environment surveyed, we dug into the content that shaped that landscape.

Extensions are recommended and accessed not only through AMO, but in a variety of places, including the Firefox browser itself, in contextual recommendations, and in external content. To improve content across this large ecosystem, we needed to start small…at the cellular content level. We needed to assess, evolve, and improve our core content elements.

By “content elements,” I mean all of the types of content or data that are attached to an extension—either by developers in the extension submission process, by Mozilla on the back-end, or by users. So, very specifically, these are things like description, categories, tags, ratings, etc. For example, the following image contains three content elements: icon, extension name, summary:

Example extension content: A green circular icon with the letter 'M' sits next to the extension's name, "Momentum." Beneath this is a sentence summary: "Replace your new tab with a personal dashboard featuring to-do, weather, daily inspiration, and more!"

Using Excel, I documented existing content elements. I also documented which elements showed up where in the ecosystem (i.e., “content touchpoints”):

Screenshot of an Excel that contains the columns: "Content Element," "In UI?," "Required?," "Length," and "Guidelines." These columns are filled in for four content elements: Developer Name, Extension Name, Extension Summary, Long Description.

Excerpt of content elements documentation

Screenshot of an Excel sheet with two columns: "Touchpoint" and "Content Elements." The two touchpoints are "Website Landing Page" and "Website Categories Page." Beneath each of these touchpoints you see various content elements listed if they appear on that touchpoint, such as "Extension icon," "Extension name," etcetera.

Excerpt of content elements documentation: content touchpoints

 

The content documentation Excel served as the foundational document for all the work that followed. As the team continued to acquire information to shape future solutions, we documented those learnings in the Excel, evolving the recommendations as we went.

 

Step 3: Data analysis—content quality

 

Current content elements identified, we could now assess the state of said content. To complete this analysis, we used a database query (created by our product manager) that populated all of the content for each content element for every extension and theme. Phew.

We developed a list of queries about the content…

Screenshot of an Excel sheet with two columns: "Content Type" and "Query." The Content Type includes Extensions and Themes. Next to either of these, there are queries like "Total number extensions" or "Average number of users."

Sample selection of data questions

…and then answered those queries for each content element field.

Screenshot of an Excel sheet for the content element of "Extension Name." Includes columns labeled, "In UI?," "Required?," "Length" and "Data Analysis."

Sample data analysis for “Extension Name”

  • For quantitative questions (like minimum/maximum content length per element), we used Excel formulas.
  • For questions of content quality, we analyzed a sub-section of the data. For example, what’s the content quality state of extension names for the top 100 extensions? What patterns, good and bad, do we see?

 

Step 4: Domain expert review

 

I also needed input from domain experts on the content elements, including content reviewers, design, and localization. Through this process, we discovered pain points, areas of opportunity, and considerations for the new requirements.

For example, we had been contemplating a 10-character minimum for our long description field. Conversations with localization expert, Peiying Mo, revealed that this would not work well for non-English content authors…while 10 characters is a reasonable expectation in English, it’s asking for quite a bit of content when we are talking about 10 Chinese characters.

Because improving search engine optimization (SEO) for add-ons was a priority, review by SEO specialist, Raphael Raue, was especially important. Based on user research and analytics, we knew users often find extensions, and make their way to the add-ons site, through external search engines. Thus, their first impression of an add-on, and the basis upon which they may assess their interest to learn more, is an extension title and description in Google search results (also called a “search snippet”). So, our new content model needed to be optimized for these search listings.

Screenshot of an Excel sheet with the Content Element of "Extension Name." Includes columns for "Reviewer Comments," "Design Comments," "Localization Comments," and "SEO Comments" and these have been filled in with text analysis.

Sample domain expert review comments for “Extension Name”

 

Step 5: Competitor compare

 

A picture of the internal content issues and needs was starting to take shape. Now we needed to look externally to understand how our content compared to competitors and other successful commercial sites.

Philip Walmsley, UX designer, identified those sites and audited their content elements, identifying surplus, gaps, and differences from Firefox. We discussed the findings and determined what to add, trim, or tweak in Firefox’s content element offerings depending on value to the user.

Screenshot of an Excel. The left column is entitled, "Item Name" and includes things like "Main CTA," "Category," "Category Rank," etcetera. The columns across identify different websites, including the Firefox add-ons site. If the site contains one of the items, its cell is shaded green with an "X." If the item is not present on the site, the cell is left empty.

Excerpt of competitive analysis

 

Step 6: User research—what content matters?

 

A fair amount of user research about add-ons had already been done before we embarked on this journey, and Jennifer Davidson, our UX researcher, led additional, targeted research over the course of the year. That research informed the content element issues and needs. In particular, a large site survey, add-ons team think-aloud sessions, and in-person user interviews identified how users discover and decide whether or not to get an extension.

Regarding extension product pages in particular, we asked:

  • Do participants understand and trust the content on the product pages?
  • What type of information is important when deciding whether or not to get an extension?
  • Is there content missing that would aid in their discovery and comprehension?

Through this work, we deepened our understanding of the relative importance of different content elements (for example: extension name, summary, long description were all important), what elements were critical to decision making (such as social proof via ratings), and where we had content gaps (for example, desire for learning-by-video).

 

Step 7: Creating a content model

 

“…content modeling gives you systemic knowledge; it allows you to see what types of content you have, which elements they include, and how they can operate in a standardized way—so you can work with architecture, rather than designing each one individually.” —Sara Wachter-Boettcher, Content Everywhere, 31

 

Learnings from steps 1-6 informed the next, very important content phase: identifying a new content model for an add-ons product page.

A content model defines all of the content elements in an experience. It details the requirements and restrictions for each element, as well as the connections between elements. Content models take diverse shapes and forms depending on project needs, but the basic steps often include documentation of the content elements you have (step 2 above), analysis of those elements (steps 3-6 above), and then charting new requirements based on what you’ve learned and what the organization and users need.

Creating a content model takes quite a bit of information and input upfront, but it pays dividends in the long-term, especially when it comes to addressing and preventing content debt. The add-ons ecosystem did not have a detailed, updated content model and because of that, developers didn’t have the guardrails they needed to create better content, the design team didn’t have the content types it needed to create scalable, user-focused content, and users were faced with varying content quality.

A content model can feel prescriptive and painfully detailed, but each content element within it should provide the flexibility and guidance for content creators to produce content that meets their goals and the goals of the system.

Excerpt of an Excel that shows the content model for the content element, "Extension Name." It includes the columns, "Required?," "Min. length," "Max length," "Allowed file type," "Quality & Guidelines," and "Clean-Up." These have all been filled in with information and analysis specific to the Extension Name element.

Sample content model for “Extension Name”
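
For readers who like to see structure expressed as code, here is a hypothetical sketch of how a single content-model entry could be captured as structured data. The field names, limits, and guidance text below are invented for illustration and are not AMO’s actual requirements.

```rust
// Hypothetical sketch: one content-model entry expressed as structured data.
// The limits and guidance strings are invented examples, not the real AMO rules.
struct ContentElementSpec {
    name: &'static str,
    required: bool,
    min_length: Option<usize>, // in characters; None means no minimum
    max_length: Option<usize>, // in characters; None means no maximum
    shown_in_ui: bool,
    guidance: &'static str,
}

const EXTENSION_NAME_SPEC: ContentElementSpec = ContentElementSpec {
    name: "Extension name",
    required: true,
    min_length: Some(3),
    max_length: Some(30),
    shown_in_ui: true,
    guidance: "Say what the extension does; avoid keyword stuffing and version numbers.",
};

fn main() {
    // A content system could validate author input against the spec at submission time.
    let submitted_name = "Always Right";
    let length = submitted_name.chars().count();
    let within_limits = length >= EXTENSION_NAME_SPEC.min_length.unwrap_or(0)
        && length <= EXTENSION_NAME_SPEC.max_length.unwrap_or(usize::MAX);
    println!("\"{}\" meets the length requirements: {}", submitted_name, within_limits);
}
```

Expressing each element this way is part of what lets requirements travel with the content: the same spec can drive validation in the submission flow, hints in the author UI, and clean-up reports for existing content.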

 

Step 8: Refine and align

 

Now that we had a draft content model—in other words, a list of recommended requirements for each content element—we needed review and input from our key stakeholders.

This included conversations with add-ons UX team members, as well as partners from the initial stakeholder interviews (like product, engineering, etc.). It was especially important to talk through the content model elements with designers Philip and Emanuela, and to pressure test whether each new element’s requirements and file type served design needs across the ecosystem. One of the ways we did this was by applying the new content elements to future designs, with both best and worst-case content scenarios.

Screenshot of a redesigned extension product page for the extension, "Remembear." It has a black border across the top with the Firefox logo. Below this, set against a vibrant orange banner, is the extension card, which includes basic information about the extension like its name, author, subtitle, star ratings, etcetera. There is a blue button to add the extension, and beneath this large screenshots and the first section of a long description. The different content elements on the page are annotated with red arrows.

Re-designed product page with new content elements (note—not a final design, just a study). Design lead: Philip Walmsley.

Sample "universal extension card," which is a rectangular image that contains basic information about the Facebook Container extension. It includes the extension name, the author name, subtitle, average star rating, number of users, icon, and a blue "Add to Firefox" button. The different elements are annotated with red arrows.

Draft “universal extension card” with new content elements (note—not a final design, just a study). This card aims to increase user trust and learnability when a user is presented with an extension offering anywhere in the ecosystem. Design lead: Emanuela Damiani.

Based on this review period and usability testing on usertesting.com, we made adjustments to our content model.

 

Okay, content model done. What’s next?

 

Now that we had our new content model, we needed to make it a reality for the extension developers creating product pages.

In part 3, I’ll walk through the creation and testing of deliverables, including content guidelines and communication materials.

 

Thank you to Michelle Heubusch, Jennifer Davidson, Emanuela Damiani, Philip Walmsley, Kev Needham, Mike Conca, Amy Tsay, Jorge Villalobos, Stuart Colville, Caitlin Neiman, Andreas Wagner, Raphael Raue, and Peiying Mo for their partnership in this work.

Mozilla UXPaying Down Enterprise Content Debt: Part 1

Purple box with number 1 and the text, "Framing the Problem." Signals first post in three-part series. Gray box with number 2 and the text, "Developing Solutions." Signals second post in three-part series. Gray box with number 3 and the text, "Implementation & Guidance." Signals third post in three-part series.

Summary: This series outlines the steps to diagnose, treat, and manage enterprise content debt, using Firefox add-ons as a case study. Part 1 identifies and frames the enterprise content debt issue.

Background

If you want to block annoying ads or populate your new tab with sassy cats, you can do it…with browser extensions and themes. Users can download thousands of these “add-ons” from Firefox’s host site, addons.mozilla.org (“AMO”), to customize their browsing experience with new functionality or a dash of whimsy.

Screenshot of the Tabby Cat extension new tab. The background is blue, and in the center is a large cat with a sweat headband, sitting next to a kitten playing with a ball of yarn. A caption beneath the cats reads, "Criminal Winky."

The Tabby Cat extension takes over your new tab with adorable cats

Add-ons can be a useful and delightful way for people to improve their web experience—if they can discover, understand, trust, and appreciate their offerings. Over the last year, the add-ons UX pod at Firefox, in partnership with the larger add-ons team, worked on ways to do just that.

One of the ways we did this was by looking at these interconnected issues through the lens of content structure and quality. In this series, I’ll walk you through the steps we took to develop a new content model for the add-ons ecosystem.

 

Understanding the problem

 

Add-ons are largely created by third-party developers, who also create the content that describes the add-ons for users. That content includes things like extension name, icon, summary, long description, screenshots, etcetera:

Example extension content: A green circular icon with the letter 'M' sits next to the extension's name, "Momentum." Beneath this is a sentence summary: "Replace your new tab with a personal dashboard featuring to-do, weather, daily inspiration, and more!"

Sample developer-provided content for the Momentum extension

With 10,000+ extensions and 400,000+ themes, we are talking about a lot of content. And while the add-ons team completely appreciated the value of the add-ons themselves, we didn’t really understand how valuable the content was, and we didn’t use it to its fullest potential.

The first shift we made was recognizing that what we had was enterprise content—structured content and metadata stored in a formal repository, reviewed, sometimes localized, and published in different forms in multiple places.

Then, when we assessed the value of it to the enterprise, we uncovered something called content debt.

Illustration of a man leaning forward in a business suit. He’s holding a large gold bag on his back, entitled, "Debt," clearly struggling under the weight. Blue mountains rise in the background behind him.

Content debt is the hidden cost of not managing the creation, maintenance, utility, and usability of digital content. It accumulates when we don’t treat content like an asset with financial value, when we value expediency over the big picture, and when we fail to prioritize content management. You can think of content debt like home maintenance. If you don’t clean your gutters now, you’ll pay in the long term with costly water damage.

AMO’s content debt included issues of quality (missed opportunities to communicate value and respond to user questions), governance (varying content quality with limited organizational oversight), and structure (the need for new content types to evolve site design and improve social shares and search descriptions).

 

A few examples of content debt in action:

 

Screenshot of a Facebook social share example of an extension named "uBlock Origin." There is a random image from the extension containing some small square photos, with stats about "requests blocked." Beneath this, the title "uBlock Origin - Add-ons for Firefox" with the body copy, "If you think this add-on violates Mozilla's add-on policies or has security or privacy issues, please report these issues to Mozilla using this form."

Facebook social share experience: Confusing image accompanied by text describing how to report an issue with the extension

Screenshot of a Google search "snippet" for the extension, uBlock Origin. It contains the line "uBlock Origin—Add-ons for Firefox - Firefox Add-ons - Mozilla," the add-on's link, and then body copy, which begins with "Illustrated overview of its efficiency," followed by a link to a bug. Then the text: "Usage: the big power button in the popup is to permanently disable/enable uBlock for the current web site. It applies to the current web site only. It is not a global power button. ***Flexible, it's more than an "ad..."

Google Search results example for an extension. Lacks description of basic functionality and value proposition. No SEO-optimized keywords or social proof like average rating.

Screenshot of search results in the add-ons website for the query, "Investing." Search results include, "Ecosia - The search engine that plants trees! extension," "Booking Flights & Hotel Deals" extension, and then the theme, "Invest in Morelos."

A search for “Investing” doesn’t yield the most helpful results

All of this equals quite a bit of content debt that prevents developers and users from being as successful as they could be in achieving their interconnected goals: offering and obtaining extensions. It also hinders the ecosystem as a whole when it comes to things like SEO (Search Engine Optimization), which the team wanted to improve.

Given the AMO site’s age (15 years), and the amount of content it contains, debt is to be expected. And it’s rare for a content strategist to be given a content challenge that doesn’t involve some debt because it’s rare that you are building a system completely from scratch. But, that’s not the end of the story.

When we considered the situation from a systems-wide perspective, we realized that we needed to move beyond thinking of the content as something created by an individual developer in a vacuum. Yes, the content is developer-generated—and with that a certain degree of variation is to be expected—but how could we provide developers with the support and perspective to create better content that could be used not only on their product page, but across the content ecosystem? While the end goal is more users with more extensions by way of usable content, we needed to create the underpinning rules that allowed for that content to be expressed across the experience in a successful way.

In part 2, I walk through the specific steps the team took to diagnose and develop solutions for our enterprise content debt. Meanwhile, speaking of the team…

 

Meet our ‘UX supergroup’

 

“A harmonious interface is the product of functional, interdisciplinary communication and clear, well-informed decision making.” —Erika Hall, Conversational Design, 108

 

While the problem space is framed in terms of content strategy, the diagnosis and solutions were very much interdisciplinary. For this work, I was fortunate to be part of a “UX supergroup,” i.e., an embedded project team that included a user researcher (Jennifer Davidson), a designer with strength in interaction design (Emanuela Damiani), a designer with strength in visual design (Philip Walmsley), and a UX content strategist (me).

We were able to do some of our best work by bringing to bear our discipline-specific expertise in an integrated and deeply collaborative way. Plus, we had a mascot—Pusheen the cat—inspired in part by our favorite tab-changing extension, Tabby Cat.

Photo of the four team members, standing in a row. All four wear black t-shirts that read, "Serene Pusheen." They are smiling and have their hands under their chins.

Left to right: Jennifer, Emanuela, Philip, Meridel #teampusheen

See Part 2, Paying Down Enterprise Content Debt: Developing Solutions

 

Thank you to Michelle Heubusch, Jennifer Davidson, Emanuela Damiani, Philip Walmsley, Kev Needham, Mike Conca, Amy Tsay, Jorge Villalobos, Stuart Colville, Caitlin Neiman, Andreas Wagner, Raphael Raue, and Peiying Mo for their partnership in this work.

Mozilla Add-ons BlogRecommended Extensions program — coming soon

In February, we blogged about the challenge of helping extension users maintain their safety and security while preserving their ability to choose their browsing experience. The blog post outlined changes to the ecosystem to better protect users, such as making them more aware of the risks associated with extensions, reducing the visibility of extensions that haven’t been vetted, and putting more emphasis on curated extensions.

One of the ways we’re helping users discover vetted extensions will be through the Recommended Extensions program, which we’ll roll out in phases later this summer. This program will foster a curated list of extensions that meet our highest standards of security, utility, and user experience. Recommended extensions will receive enhanced visibility across Mozilla websites and products, including addons.mozilla.org (AMO).

We anticipate this list will eventually number in the hundreds, but we’ll start smaller and build the program carefully. We’re currently in the process of identifying candidates and will begin reaching out to selected developers later this month. You can expect to see changes on AMO by the end of June.

How will Recommended extensions be promoted?

On AMO, Recommended extensions will be visually identifiable by distinct badging. Furthermore, AMO search results and filtering will be weighted more heavily toward Recommended extensions.

Recommended extensions will also supply the personalized recommendations on the “Get Add-ons” page in the Firefox Add-ons Manager (about:addons), as well as any extensions we may include in Firefox’s Contextual Feature Recommender.

How are extensions selected to be part of the program?

Editorial staff will select the initial batch of extensions for the Recommended list. In time, we’ll provide ways for people to nominate extensions for inclusion.

When evaluating extensions, curators are primarily concerned with the following:

  • Is the extension really good at what it does? All Recommended extensions should not only do what they promise, but be very good at it. For instance, there are many ad blockers out there, but not all ad blockers are equally effective.
  • Does the extension offer an exceptional user experience? Recommended extensions should be delightful to use. Curators look for content that’s intuitive to manage and well-designed. Common areas of concern include the post-install experience (i.e. once the user installs the extension, is it clear how to use it?), settings management, user interface copy, etc.
  • Is the extension relevant to a general audience? The tightly curated nature of Recommended extensions means we will be selective, and will only recommend extensions that are appealing to a general Firefox audience.
  • Is the extension safe? We’re committed to helping protect users against third-party software that may—intentionally or otherwise—compromise user security. Before an extension receives Recommended status, it undergoes a security review by staff reviewers. (Once on the list, each new version of a Recommended extension must also pass a full review.)

Participation in the program will require commitment from developers in the form of active development and a willingness to make improvements.

How will the list be maintained?

It’s our intent to develop a Recommended list that can remain relevant over time, which is to say we don’t anticipate frequent turnover in the program. The objective is to promote Recommended extensions that users can trust to be useful and safe for the lifespan of the software they install.

We recognize the need to keep the list current, and will make room for new, emerging extensions. Firefox users want the latest, greatest extensions. Talented developers all over the world continue to find creative ways to leverage the powerful capabilities of extensions and deliver fantastic new features and experiences. Once the program launches later this summer, we’ll provide ways for people to suggest extensions for inclusion in the program.

Will the community be involved?

We believe it’s important to maintain community involvement in the curatorial process. The Community Advisory Board—which for years has contributed to helping identify featured content—will continue to be involved in the Recommended extensions program.

We’ll have more details to share in the coming months as the Recommended extensions program develops. Please feel free to post questions or comments on the add-ons Discourse page.

The post Recommended Extensions program — coming soon appeared first on Mozilla Add-ons Blog.

QMOFirefox 67 Beta 10 Testday, April 12th

Hello Mozillians,

We are happy to let you know that Friday, April 12th, we are organizing Firefox 67 Beta 10 Testday. We’ll be focusing our testing on: Graphics compatibility & support and Session Restore.

Check out the detailed instructions via this etherpad.

No previous testing experience is required, so feel free to join us on #qa IRC channel where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better!

See you on Friday!

Web Application SecurityBackward-Compatibility FIDO U2F support shipping soon in Firefox

Web Authentication (WebAuthn), a recent web standard blending public-key cryptography into website logins, is our best technical response to credential phishing. That’s why we’ve championed it as a technology. The FIDO U2F API is the spiritual ancestor of WebAuthn; to date, it’s still much more commonly used. Firefox has had experimental support for the JavaScript FIDO U2F API since version 57, as it was used to validate our Web Authentication implementation that then shipped in Firefox 60. Both technologies can help secure the logins of millions of users already in possession of FIDO U2F USB tokens.

We encourage the adoption of Web Authentication rather than the FIDO U2F API. However, some large web properties are encountering difficulty migrating: WebAuthn works with security credentials produced by the FIDO U2F API, but WebAuthn-produced credentials cannot be used with the FIDO U2F API. For the entities affected, this could lead to poor user experiences and inhibit overall adoption of this critical technology.

To smooth out this migration, after discussion on the mozilla.dev.platform mailing list, we have decided to enable our support for the FIDO U2F API by default for all Firefox users. It’s enabled now in Firefox Nightly 68, and we plan for it to be uplifted into Firefox Beta 67 in the coming week.

Enabling FIDO U2F API in Firefox

A FIDO U2F API demo website being activated

Firefox’s implementation of the FIDO U2F API accommodates only the common cases of the specification; for details, see the mailing list discussion. For those interested in using the FIDO U2F API before updating to version 68: Firefox power users have successfully used it since Quantum shipped in 2017 by enabling the “security.webauth.u2f” preference in about:config.

Currently, the places where Firefox’s implementation is incomplete are expected to remain so. With the increasing use of biometric mechanisms such as face recognition or fingerprints in devices, we are focusing our support on WebAuthn, which provides a sophisticated level of authentication and cryptography that will protect Firefox users.

The future of anti-phishing is Web Authentication

It’s important that the Web move to Web Authentication rather than building new capabilities with the deprecated, legacy FIDO U2F API. Now a published Recommendation at the W3C, Web Authentication has support for many more use cases than the legacy technology, and a much more robustly-examined browser security story.

Ultimately, it’s most important that Firefox users be able to protect their accounts with the strongest protections possible. We believe the strongest of these to be Web Authentication, as it has improved usability via platform authenticators, capabilities for “passwordless” logins, and more advanced security keys and tokens.

The post Backward-Compatibility FIDO U2F support shipping soon in Firefox appeared first on Mozilla Security Blog.

Open Policy & AdvocacyA Path Forward: Rights and Rules to Protect Privacy in the United States

Privacy is on the tip of everyone’s tongue. Lawmakers are discussing how to legislate it, big tech is desperate to show they care about it, and everyday people are looking for tools and tips to help them reclaim it.

That’s why today, we are publishing our blueprint for strong federal privacy legislation in the United States. Our goals are straightforward: put people back in control of their data; establish clear, effective, and enforceable rules for those using that data; and move towards greater global alignment on governing data and the role of the internet in our lives.

For Mozilla, privacy is not optional. It’s fundamental to who we are and the work we do. It’s also fundamental to the health of the internet. Without privacy protections, we cannot trust the internet as a safe place to explore, transact, connect, and create. But thanks to a rising tide of abusive privacy practices and data breaches, trust in the internet is at an all-time low.

We’ve reached this point because data practices and public policies have failed. Data has helped spur remarkable innovation and new products, but the long-standing ‘notice-and-consent’ approach to privacy has served people poorly. And the lack of truly meaningful safeguards and user protections has led to our social, financial and even political information being misused and manipulated without our understanding.

What’s needed to combat this is strong privacy legislation in the U.S. that codifies real protections for consumers and ensures more accountability from companies.

While we have seen positive movements on privacy and data protection around the globe, the United States has fallen behind. But this conversation about the problematic data practices of some companies has sparked promising interest in Congress.

Our framework spells out specifically what that law needs to accomplish. It must establish strong rights for people, rights that provide meaningful protection; it must provide clear rules for companies to limit how data is collected and used; and it must empower enforcement with clear authority and effective processes and remedies.

Clear rules for companies

  • Purposeful and limited collection and use – end the era of blanket collection and use, including collecting data for one purpose and then using it for another, by adopting clear rules for purposeful and limited collection and use of personal data.
  • Security – ensure that our data is carefully maintained and secured, and provide clear expectations around inactive accounts.

Strong rights for people

  • Access – we must be able to view the information that has been collected or generated about us, and know how it’s being used.
  • Delete – we should be able to delete our data when reasonable, and we should understand the policies and practices around our data if our accounts and services become inactive.
  • Granular, revocable consent – stop the practice of generic consent to data collection and use; limit consents to apply to specific collection and use practices, and allow them to be revoked.

Empowered enforcement

  • Clear mandate – empower the Federal Trade Commission with strong authority and resources to keep up with advances in technology and evolving threats to privacy.
  • Civil penalties – streamline and strengthen the FTC’s enforcement through direct civil investigation and penalty authority, without the need for time- and resource-intensive litigation.
  • Rulemaking authority – empower the FTC to set proactive obligations to secure personal information and limits on the use of personal data in ways that may harm users.

We need real action to pass smart, strong privacy legislation that codifies real protections for consumers while preserving innovation. And we need it now, more than ever.

Mozilla U.S. Consumer Privacy Bill Blueprint 4.4.19

 

 

Photo by Louis Velazquez on Unsplash

The post A Path Forward: Rights and Rules to Protect Privacy in the United States appeared first on Open Policy & Advocacy.

QMOFirefox 67 Beta 6 Testday Results

Hello Mozillians!

As you may already know, last Friday, March 29th, we held a new Testday event for Firefox 67 Beta 6.

Thank you all for helping us make Mozilla a better place: amirtha V, Shanthi Priya G, Rok Žerdin, Aishwarya Narasimhan, Mohamed Bawas.

From the Mozilla Bangladesh Community: Maruf Rahman, Sayed Ibn Masud, Reazul Islam.

Results:

– 2 bugs verified: 1540097, 1531658.

– several test cases executed for: Anti-tracking (Fingerprinting and Cryptominers) and Media playback & support.

Thanks for yet another awesome testday, we appreciate your contribution 🙂

We hope to see you all at our next events. Keep an eye on QMO; we will make announcements as soon as something shows up!

The Bugzilla UpdateDemo of new Bugzilla features and UX at April 3 project meeting

Our regular monthly project meeting video conference call for April will be Wednesday, April 3 at 8pm UTC / 4pm EDT / 1pm PDT.  We’re getting close (relatively) with Bugzilla 6 – currently estimating end of summer or early fall, and we’ll be having some live demos of some of the recent new features and user experience improvements in Bugzilla at this month’s meeting, so you won’t want to miss it.

Firefox UXAn exception to our ‘No Guerrilla Research’ practice: A tale of user research at MozFest

Authors: Jennifer Davidson, Emanuela Damiani, Meridel Walkington, and Philip Walmsley

Sometimes, when you’re doing user research, things just don’t quite go as planned. MozFest was one of those times for us.

MozFest, the Mozilla Festival, is a vibrant conference and week-long “celebration for, by, and about people who love the internet.” Held at Ravensbourne University in London, the festival features nine floors of simultaneous sessions. The Add-ons UX team had the opportunity to host a workshop at MozFest about co-designing a new submission flow for browser extensions and themes. The workshop was a version of the Add-ons community workshop we held the previous day.

On the morning of our workshop, we showed up bright-eyed, bushy-tailed, and fully caffeinated. Materials in place, slides loaded…we were ready. And then, no one showed up.

Perhaps because 1) there was too much awesome stuff going on at the same time as our workshop, 2) we were in a back corner, and 3) we didn’t proactively advertise our talk enough.

After processing our initial heartache and disappointment, Emanuela, a designer on the team, suggested we try something we don’t do often at Mozilla, if at all: guerrilla research. Guerrilla user research usually means getting research participants from “the street.” For example, a researcher could stand in front of a grocery store with a tablet computer and ask people to use a new app. This type of research method is different than “normal” user research methods (e.g. field research in a person’s home, interviewing someone remotely over video call, conducting a usability study in a conference room at an office) because there is much less control in screening participants, and all the research is conducted in the public eye [1].

Guerrilla research is a polarizing topic in the broader user research field, and with good reason: while there are many reasons one might employ this method, it can be and is used as a means of getting research done on the cheap. The same thing that makes guerrilla research cheaper — convenient, easy data collection in one location — can also mean compromised research quality. Because we don’t have the same level of care and control with this method, the results of the work can be less effective compared to a study conducted with careful recruitment and facilitation.

For these reasons, we intentionally don’t use guerrilla research as a core part of the Firefox UX practice. However, on that brisk London day, with no participants in sight and our dreams of a facilitated session dashed, guerrilla research presented itself as a last-resort option, as a way we could salvage some of our time investment AND still get some valuable feedback from people on the ground. We were already there, we had the materials, and we had a potential pool of participants — people who, given their conference attendance, were more informed about our particular topic than those you would find in a random coffee shop. It was time to make the best of a tough situation and learn what we could.

And that’s just what we did.

Thankfully, we had four workshop facilitators, which meant we could evolve our roles to the new reality. Emanuela and Jennifer were traveling workshop facilitators. Phil, another designer on the team, took photos. Meridel, our content strategist, stationed herself at our original workshop location in case anyone showed up there, taking advantage of that time to work through content-related submission issues with our engineers.

A photo of Jennifer and Emanuela taking the workshop “on the road”

We boldly went into the halls at MozFest and introduced ourselves and our project. We had printouts and pens to capture ideas and get some sketching done with our participants.

A photo of Jennifer with four participants

In the end, seven people participated in our on-the-road co-design workshop. Because MozFest tends to attract a diverse group of attendees from journalists to developers to academics, some of these participants had experience creating and submitting browser extensions and themes. For each participant we:

  1. Approached them and introduced ourselves and the research we were trying to do. Asked if they wanted to participate. If they did, we proceeded with the following steps.
  2. Asked them if they had experience with extensions and themes and discussed that experience.
  3. Introduced the sketching exercise, where we asked them to come up with ideas about an ideal process to submit an extension or theme.
  4. Watched as the participant sketched ideas for the ideal submission flow.
  5. Asked the participant to explain their sketches, and took notes during their explanation.
  6. Asked participants if we could use their sketches and our discussion in our research. If they agreed, we asked them to read and sign a consent form.

The participants’ ideas, perspectives, and workflows fed into our subsequent team analysis and design time. Combined with materials and learnings from a prior co-design workshop on the same topic, the Add-ons UX team developed creator personas. We used those personas’ needs and workflows to create a submission flow “blueprint” (like a flowchart) that would match each persona.

A snippet of our submission flow “blueprint”

In the end, the lack of workshop attendance was a good opportunity to learn from MozFest attendees we may not have otherwise heard from, and grow closer as a team as we had to think on our feet and adapt quickly.

Guerrilla research needs to be handled with care, and the realities of its limitations taken into account. Happily, we were able to get real value out of our particular application, with the note that the insights we gleaned from it were just one input into our final deliverables and not the sole source of research. We encourage other researchers and designers doing user research to consider adapting their methods when things don’t go as planned. It could recoup some of the resources and time you have invested, and potentially give you some valuable insights for your product design process.

Questions for us? Want to share your experiences with being adaptable in the field? Leave us a comment!

Acknowledgements

Much gratitude to our colleagues who created the workshop with us and helped us edit this blog post! In alphabetical order, thanks to Stuart Colville, Mike Conca, Kev Needham, Caitlin Neiman, Gemma Petrie, Mara Schwarzlose, Amy Tsay, and Jorge Villalobos.

Reference

[1] “The Pros and Cons of Guerrilla Research for Your UX Project.” The Interaction Design Foundation, 2016, www.interaction-design.org/literature/article/the-pros-and-cons-of-guerrilla-research-for-your-ux-project.

Also published on Firefox UX Blog.



Mozilla UXAn exception to our ‘No Guerrilla Research’ practice: A tale of user research at MozFest

Sometimes, when you’re doing user research, things just don’t quite go as planned. MozFest was one of those times for us.

MozFest, the Mozilla Festival, is a vibrant conference and week-long “celebration for, by, and about people who love the internet.” Held at Ravensbourne University in London, the festival features nine floors of simultaneous sessions. The Add-ons UX team had the opportunity to host a workshop at MozFest about co-designing a new submission flow for browser extensions and themes. The workshop was a version of the Add-ons community workshop we held the previous day.

On the morning of our workshop, we showed up bright-eyed, bushy-tailed, and fully caffeinated. Materials in place, slides loaded…we were ready. And then, no one showed up.

Perhaps because 1) there was too much awesome stuff going on at the same time as our workshop, 2) we were in a back corner, and 3) we didn’t proactively advertise our talk enough.

After processing our initial heartache and disappointment, Emanuela, a designer on the team, suggested we try something we don’t do often at Mozilla, if at all: guerrilla research. Guerrilla user research usually means getting research participants from “the street.” For example, a researcher could stand in front of a grocery store with a tablet computer and ask people to use a new app. This type of research method is different than “normal” user research methods (e.g. field research in a person’s home, interviewing someone remotely over video call, conducting a usability study in a conference room at an office) because there is much less control in screening participants, and all the research is conducted in the public eye [1].

Guerrilla research is a polarizing topic in the broader user research field, and with good reason: while there are many reasons one might employ this method, it can be and is used as a means of getting research done on the cheap. The same thing that makes guerrilla research cheaper—convenient, easy data collection in one location—can also mean compromised research quality. Because we don’t have the same level of care and control with this method, the results of the work can be less effective compared to a study conducted with careful recruitment and facilitation.

For these reasons, we intentionally don’t use guerrilla research as a core part of the Firefox UX practice. However, on that brisk London day, with no participants in sight and our dreams of a facilitated session dashed, guerrilla research presented itself as a last-resort option, as a way we could salvage some of our time investment AND still get some valuable feedback from people on the ground. We were already there, we had the materials, and we had a potential pool of participants—people who, given their conference attendance, were more informed about our particular topic than those you would find in a random coffee shop. It was time to make the best of a tough situation and learn what we could.

And that’s just what we did.

Thankfully, we had four workshop facilitators, which meant we could evolve our roles to the new reality. Emanuela and Jennifer were traveling workshop facilitators. Phil, another designer on the team, took photos. Meridel, our content strategist, stationed herself at our original workshop location in case anyone showed up there, taking advantage of that time to work through content-related submission issues with our engineers.

Jennifer on the left holding papers and Emanuela on the right flexing her arm and wearing a sign with the title of the workshop on it.

A photo of Jennifer and Emanuela taking the workshop “on the road”

We boldly went into the halls at MozFest and introduced ourselves and our project. We had printouts and pens to capture ideas and get some sketching done with our participants.

Jennifer crouching at a table with 4 participants in chairs. Participants are writing, speaking, and at their computers.

A photo of Jennifer with four participants

In the end, seven people participated in our on-the-road co-design workshop. Because MozFest tends to attract a diverse group of attendees from journalists to developers to academics, some of these participants had experience creating and submitting browser extensions and themes. For each participant we:

  1. Approached them and introduced ourselves and the research we were trying to do. Asked if they wanted to participate. If they did, we proceeded with the following steps.
  2. Asked them if they had experience with extensions and themes and discussed that experience.
  3. Introduced the sketching exercise, where we asked them to come up with ideas about an ideal process to submit an extension or theme.
  4. Watched as the participant sketched ideas for the ideal submission flow.
  5. Asked the participant to explain their sketches, and took notes during their explanation.
  6. Asked participants if we could use their sketches and our discussion in our research. If they agreed, we asked them to read and sign a consent form.

The participants’ ideas, perspectives, and workflows fed into our subsequent team analysis and design time. Combined with materials and learnings from a prior co-design workshop on the same topic, the Add-ons UX team developed creator personas. We used those personas’ needs and workflows to create a submission flow “blueprint” (like a flowchart) that would match each persona.

A screenshot of part of a flowchart. There are boxes and lines connecting them. The image is too small to read the details.

A snippet of our submission flow “blueprint”

In the end, the lack of workshop attendance was a good opportunity to learn from MozFest attendees we may not have otherwise heard from, and to grow closer as a team as we thought on our feet and adapted quickly.

Guerrilla research needs to be handled with care, and the realities of its limitations taken into account. Happily, we were able to get real value out of our particular application, with the caveat that the insights we gleaned from it were just one input into our final deliverables and not the sole source of research. We encourage other researchers and designers doing user research to consider adapting their methods when things don’t go as planned. It could help recoup some of the resources and time already invested, and potentially provide some valuable insights for your product design process.

Acknowledgements

Much gratitude to our colleagues who created the workshop with us and helped us edit this blog post! In alphabetical order, thanks to Stuart Colville, Mike Conca, Kev Needham, Caitlin Neiman, Gemma Petrie, Mara Schwarzlose, Amy Tsay, and Jorge Villalobos.

Reference

[1] “The Pros and Cons of Guerrilla Research for Your UX Project.” The Interaction Design Foundation, 2016, www.interaction-design.org/literature/article/the-pros-and-cons-of-guerrilla-research-for-your-ux-project.

Also published on medium.com.

hacks.mozilla.orgCrossing the Rust FFI frontier with Protocol Buffers

My team, the application services team at Mozilla, works on Firefox Sync, Firefox Accounts and WebPush.

These features are currently shipped on Firefox Desktop, Android and iOS browsers. They will soon be available in our new products such as our upcoming Android browser, our password manager Lockbox, and Firefox for Fire TV.

Solving for one too many targets

Until now, for historical reasons, these functionalities have been implemented in widely different fashions. They’ve been tightly coupled to the product they are supporting, which makes them hard to reuse. For example, there are currently three Firefox Sync clients: one in JavaScript, one in Java and another one in Swift.

Considering the size of our team, we quickly realized that our current approach to shipping products would not scale across more products and would lead to quality issues such as bugs or uneven feature-completeness across platform-specific implementations. About a year ago, we decided to plan for the future. As you may know, Mozilla is making a pretty big bet on the new Rust programming language, so it was natural for us to follow suit.

A cross-platform strategy using Rust

Our new strategy is as follows: We will build cross-platform components, implementing our core business logic using Rust and wrapping it in a thin platform-native layer, such as Kotlin for Android and Swift for iOS.

Rust component bundled in an Android wrapper

The biggest and most obvious advantage is that we get one canonical code base, written in a safe and statically typed language, deployable on every platform. Every upstream business logic change becomes available to all our products with a version bump.

How we solved the FFI challenge safely

However, one of the challenges of this new approach is passing richly structured data across the FFI boundary in a way that’s memory-safe and works well with Rust’s ownership system. We wrote the ffi-support crate to help with this; if you write code that does FFI (Foreign Function Interface) work, you should strongly consider using it. Our initial versions were implemented by serializing our data structures as JSON strings and, in a few cases, returning a C-shaped struct by pointer. The procedure for returning, say, a bookmark from the user’s synced data looked like this:

Returning Bookmark data from Rust to Kotlin using JSON (simplified)
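The original post shows this step as a code screenshot. As a rough, minimal sketch of the pattern only (not the actual application-services code, which uses the ffi-support crate and real data types; the `Bookmark` fields and function names here are hypothetical), returning a JSON string across the FFI might look like this:

```rust
use serde::Serialize;
use std::ffi::CString;
use std::os::raw::c_char;

// Hypothetical bookmark record; field names are purely illustrative.
#[derive(Serialize)]
struct Bookmark {
    guid: String,
    url: String,
    title: String,
}

/// Serialize a bookmark to JSON and hand ownership of the C string to the caller.
/// The Kotlin side (via JNA) reads the string, parses the JSON field by field,
/// and must call `bookmark_string_free` so Rust can reclaim the memory.
#[no_mangle]
pub extern "C" fn bookmark_get_example() -> *mut c_char {
    let bookmark = Bookmark {
        guid: "abc123".into(),
        url: "https://example.com/".into(),
        title: "Example".into(),
    };
    let json = serde_json::to_string(&bookmark).expect("serializable struct");
    CString::new(json).expect("no interior NUL bytes").into_raw()
}

/// Free a string previously returned by `bookmark_get_example`.
#[no_mangle]
pub extern "C" fn bookmark_string_free(s: *mut c_char) {
    if !s.is_null() {
        unsafe { drop(CString::from_raw(s)) };
    }
}
```

On the consumer side, every field of that JSON has to be parsed back out by hand, which is where the fragility and the extra UTF-8 to UTF-16 copy described below come from.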

 

So what’s the problem with this approach?

  • Performance: JSON serialization and deserialization are notoriously slow, because performance was not a primary design goal for the format. On top of that, an extra string copy happens in the Java layer, since Rust strings are UTF-8 and Java strings are UTF-16-ish. At scale, this can introduce significant overhead.
  • Complexity and safety: every data structure is manually parsed and deserialized from JSON strings. A data structure field modification on the Rust side must be reflected on the Kotlin side, or an exception will most likely occur.
  • Even worse, in some cases we were not returning JSON strings but C-shaped Rust structs by pointer: forget to update the Kotlin Structure subclass or the Objective-C struct and you have serious memory corruption on your hands.

We quickly realized there was probably a better approach, one that was both faster and safer than our current solution.

Data serialization with Protocol Buffers v.2

Thankfully, there are many data serialization formats out there that aim to be fast. The ones with a schema language will even auto-generate data structures for you!

After some exploration, we ended up settling on Protocol Buffers version 2.

The (relative) safety comes from the automated generation of data structures in the languages we care about. There is only one source of truth, the .proto schema file, from which all our data classes are generated at build time, on both the Rust side and the consumer side.

protoc (the Protocol Buffers code generator) can emit code in more than 20 languages! On the Rust side, we use the prost crate which outputs very clean-looking structs by leveraging Rust derive macros.

Returning Bookmark data from Rust to Kotlin using Protocol Buffers 2 (simplified)
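Again, the figure above is a screenshot; here is a hedged sketch of what the prost-generated struct and the encode step roughly look like. The message fields and the ByteBuffer layout are assumptions for illustration, not the actual application-services schema:

```rust
use prost::Message;

// Roughly what prost generates from a hypothetical `Bookmark` message in the
// .proto schema (the schema file remains the single source of truth).
#[derive(Clone, PartialEq, prost::Message)]
pub struct Bookmark {
    #[prost(string, tag = "1")]
    pub guid: String,
    #[prost(string, tag = "2")]
    pub url: String,
    #[prost(string, tag = "3")]
    pub title: String,
}

/// A simple (length, pointer) pair passed over the FFI; the Kotlin/Swift side
/// decodes the bytes with its own protoc-generated `Bookmark` class.
#[repr(C)]
pub struct ByteBuffer {
    pub len: i64,
    pub data: *mut u8,
}

fn bookmark_to_buffer(bookmark: &Bookmark) -> ByteBuffer {
    let mut bytes = Vec::with_capacity(bookmark.encoded_len());
    bookmark.encode(&mut bytes).expect("Vec<u8> grows as needed");
    let mut bytes = bytes.into_boxed_slice();
    let buffer = ByteBuffer {
        len: bytes.len() as i64,
        data: bytes.as_mut_ptr(),
    };
    // Ownership of the allocation passes to the caller; a matching FFI
    // destructor must reconstruct and drop the slice to free it.
    std::mem::forget(bytes);
    buffer
}
```

Because both sides generate their classes from the same schema, adding or renaming a field is a change to the .proto file rather than a manual edit in three codebases.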

 

And of course, on top of that, Protocol Buffers are faster than JSON.

There are a few downsides to this approach: it is more work to convert our internal types to the generated protobuf structs (e.g. a url::Url has to be converted to a String first), whereas back when we were using serde_json serialization, any struct implementing serde::Serialize was one line away from being sent over the FFI barrier. It also adds one more step to our build process, although it was pretty easy to integrate.
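For instance, building on the hypothetical Bookmark message sketched above, an explicit conversion like the following is now needed where a single serde derive used to be enough (the internal BookmarkRecord type is made up for this example):

```rust
use url::Url;

// Internal model used inside the Rust component (illustrative).
pub struct BookmarkRecord {
    pub guid: String,
    pub url: Url,
    pub title: String,
}

// Manual conversion into the generated protobuf struct (`Bookmark` from the
// sketch above). With serde_json, `#[derive(Serialize)]` on `BookmarkRecord`
// would have been all that was required.
impl From<BookmarkRecord> for Bookmark {
    fn from(record: BookmarkRecord) -> Self {
        Bookmark {
            guid: record.guid,
            url: record.url.to_string(), // protobuf has no url::Url type
            title: record.title,
        }
    }
}
```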

One thing I ought to mention: Since we ship both the producer and consumer of these binary streams as a unit, we have the freedom to change our data exchange format transparently without affecting our Android and iOS consumers at all.

A look ahead

Looking forward, there’s probably a high-level system that could be used to exchange data over the FFI, maybe based on Rust macros. There’s also been talk about using FlatBuffers to squeeze out even more performance. In our case Protobufs provided the right trade-off between ease-of-use, performance, and relative safety.

So far, our components are present on iOS in both Firefox and Lockbox, and on Android in Lockbox, and they will soon be in our upcoming Android browser.

Firefox for iOS has started to oxidize by replacing its password syncing engine with the one we built in Rust. The plan is to eventually do the same on Firefox Desktop as well.

If you are interested in helping us build the future of Firefox Sync and more, or simply following our progress, head to the application services Github repository.

The post Crossing the Rust FFI frontier with Protocol Buffers appeared first on Mozilla Hacks - the Web developer blog.

Mozilla L10NChanging the Language of Firefox Directly From the Browser

Language Settings

In Firefox there are two main user facing settings related to languages:

  • Web content: when you visit a web page, the browser will communicate to the server which languages you’d like to see content in. Technically, this is done by sending an Accept-Language HTTP header, which contains a list of locale codes in the user’s preferred order.
  • User interface: the language in which you want to see the browser (menus, preferences, etc.).

The difference between the two is not as intuitive as it might seem. A lot of users change the web content settings, and expect the user interface to change.

The Legacy Chaos

While the preferences for web content have been exposed in Firefox since its first versions, changing the language used for the user interface has always been challenging. Here’s the painful journey users had to face until a few months ago:

  • First of all, you need to be aware that there are other languages available. There are no preferences exposing this information, and documentation is spread across different websites.
  • You need to find and install a language pack – a special type of add-on – from addons.mozilla.org.
  • If the language used in the operating system is different from the one you’re trying to install in Firefox, you need to create a new preference in about:config, and set it to the correct locale code. Before Firefox 59 and intl.locale.requested, you would need to manually set the general.useragent.locale pref in any case, not just when there’s a discrepancy.

The alternative is to install a build of Firefox already localized in your preferred language. But, once again, they’re not so easy to find. Imagine you work in a corporate environment that provides you with an operating system in English (en-US). You search the Web for “Firefox download”, and automatically end up on this download page, which doesn’t provide information on the language you’re about to download. You need to pay a lot of attention and notice the link to other languages.

If you already installed Firefox in the wrong language, you need to uninstall it, find the installer in the correct language, and reinstall it. As part of the uninstall process on Windows, we ask users to volunteer for a quick survey to explain why they’re uninstalling the browser, and the number of them saying something along the lines of “wrong language” or “need to reinstall in the right language” is staggering, especially considering this is an optional comment within an optional survey. It’s a clear signal, if one was ever needed, that things need to improve.

Everything Changes with Firefox 65

Fast forward to Firefox 65:

We introduced a new Language section. Directly from the General pane it’s now possible to switch between languages already available in Firefox, removing the need for manually setting preferences in about:config. What if the language is not available? Then you can simply Search for more languages… from the dropdown menu.

Add your preferred language to the list (French in this case).

And restart the browser to have the interface localized in French. Notice how the message is displayed in both languages, to give users another hint that they selected the right language.

If you’re curious, you can see a diagram of the complete user interaction here.

A lot happens under the hood for this brief interaction:

  • When the user asks for more languages, Firefox connects to addons.mozilla.org via API to retrieve the list of languages available for the version in use.
  • When the user adds the language, Firefox downloads and installs the language pack for the associated locale code.
  • To improve the user experience, it also downloads any dictionaries associated with the requested language, if available.
  • When the browser is restarted, the new locale code is set as first in the intl.locale.requested preference.

This feature is enabled by default in Beta and Release versions of Firefox. Language packs are not reliable on Nightly, given that strings change frequently, and a language still considered compatible but incomplete could lead to a completely broken browser (a condition known as the “yellow screen of death”, where the XUL breaks due to missing DTD entities).

What’s Next

First of all, there are still a few bugs to fix (you can find a list in the dependencies of the tracking bug). Sadly, there are still several places in the code that assume the language will never change, and cache the translated content too aggressively.

The path forward has yet to be defined. Completing the migration to Fluent would drastically improve the user experience:

  • Language switching would be completely restartless.
  • Firefox would support a list of fallback locales. With the old technology, if a translation is missing it can only fall back to English. With Fluent, we can set a fallback chain, for example Ligurian->Italian->English.

There are also areas of the browser that are not covered by language packs, and would need to be rewritten (e.g. the profile manager). For these, the language used always remains the one packaged in the build.

The user experience could also be improved. For example, can we make selecting the language part of a multiplatform onboarding experience? There are a lot of hints that we could take from the host operating system, and prompt the user about installing and selecting a different language. What if the user browses a lot of German content, but they’re using the browser in English? Should we suggest to them that it’s possible to switch languages?

With Firefox 66 we’re also starting to collect Telemetry data about internationalization settings – take a look at the table at the bottom of about:support – to understand more about our users, and how these changes in Preferences impact language distribution and possibly retention.

For once, Firefox is well ahead of the competition. For example, for Chrome it’s only possible to change language on Windows and Chromebook, while Safari only uses the language of your macOS.

This is the result of an intense cross-team effort. Special thanks to Zibi Braniecki for the initial idea and push, and all the work done under the hood to improve Firefox internationalization infrastructure, Emanuela Damiani for UX, and Mark Striemer for the implementation.

Mozilla Add-ons BlogApril’s featured extensions

Firefox Logo on blue background

Pick of the Month: Disable WebRTC

by Chris Antaki
Do you use a VPN? This extension prevents your IP address from leaking through WebRTC.

“Simple and effective!”

Featured: CSS Exfil Protection

by Mike Gualtieri
Gain protection against a particular type of attack that occurs through Cascading Style Sheets (CSS).

“I had no idea this was an issue until reading about it recently.”

Featured: Cookie Quick Manager

by Ysard
Take full control of the cookies you’ve accumulated while browsing.

“The best cookie manager I have tested (and I have tested a lot, if not them all!)”

Featured: Amazon Container

by JackymanCS4
Prevent Amazon from tracking your movements around the web.

(NOTE: Though similarly titled to Mozilla’s Facebook Container and Multi-Account Containers, this extension is not affiliated with Mozilla.)

“Thank you very much.”

If you’d like to nominate an extension for featuring, please send it to amo-featured [at] mozilla [dot] org for the board’s consideration. We welcome you to submit your own add-on!

The post April’s featured extensions appeared first on Mozilla Add-ons Blog.

The Mozilla Thunderbird BlogAll Thunderbird Bugs Have Been Fixed!

April Fools!

We still have open bugs, but we’d like your help to close them!

We are grateful to have a very active set of users who generate a lot of bug reports and we are requesting your help in sorting them, an activity called bug triage. We’re holding “Bug Days” on April 8th (all day, EU and US timezones) and April 13th (EU and US timezones until 4pm EDT). During these bug days we will log on and work as a community to triage as many bugs as possible. All you’ll need is a Bugzilla account, Thunderbird Daily, and we’ll teach you the rest! With several of us working at the same time, we can help each other in real time – answering questions, sharing ideas, and enjoying being with like-minded people.

No coding or special skills are required, and you don’t need to be an expert or long term user of Thunderbird.

Some things you’ll be doing if you participate:

  • Help other users by checking their bug reports to see if you can reproduce the behavior of their reported problem.
  • Get advice about your own bug report(s).
  • Learn the basics about Thunderbird troubleshooting and how to contribute.

We’re calling this the “Game of Bugs”, named after the popular show Game of Thrones, because we will try to “slay” all the bugs. Those who participate fully in the event will get a Thunderbird Game of Bugs t-shirt (with the design below).

Thunderbird: Game of Bugs T-shirt design

Thunderbird: Game of Bugs

Sorry for the joke! But we hope you’ll join us on the 8th or the 13th via #tb-qa on Mozilla’s IRC so that we can put these bugs in their place, which helps make Thunderbird even better. If you have any questions, feel free to email ryan@thunderbird.net.

P.S. If you are unable to participate in bug day you can still help by checking out our Get Involved page on the website and contributing in the way you’d like!

SUMO BlogFirefox services experiments on SUMO

Over the last week or so, we’ve been promoting Firefox services on support.mozilla.org.

In this experiment, which we’re running for the next two weeks, we are promoting the free services Sync, Send and Monitor. These services fit perfectly into our mission: to help people take control of their online lives.

  • Firefox Sync allows Firefox users to instantly share preferences, bookmarks, history, passwords, open tabs, and add-ons with their other devices.
  • Firefox Send is a free encrypted file transfer service that allows people to safely share files from any browser.
  • Firefox Monitor allows you to check your email address against known data breaches across the globe. Optionally you can sign up to receive a full report of past breaches and new breach alerts.

The promotions are minimal and intended to not distract people from getting help with Firefox. So why promote anything at all on a support website when people are there to get help? People visit the support site when they have a problem, sure. But just as many are there to learn. Of the top articles that brought Firefox users to support.mozilla.org in the past month, half were about setting up Firefox and understanding its features.

This experiment is about understanding whether Firefox users on the support site can discover our connected services and find value in them. We are also monitoring whether the promotions are too distracting or interfere with the mission of support.mozilla.org. In the meantime, if you find issues with the content please report it.

The test will run for the next two weeks and we will report back here and in our weekly SUMO meeting on the results and next steps.

hacks.mozilla.orgA Real-Time Wideband Neural Vocoder at 1.6 kb/s Using LPCNet

This is an update on the LPCNet project, an efficient neural speech synthesizer from Mozilla’s Emerging Technologies group. In an earlier demo from late last year, we showed how LPCNet combines signal processing and deep learning to improve the efficiency of neural speech synthesis.

This time, we turn LPCNet into a very low-bitrate neural speech codec that’s actually usable on current hardware and even on phones (as described in this paper). It’s the first time a neural vocoder is able to run in real-time using just one CPU core on a phone (as opposed to a high-end GPU)! The resulting bitrate — just 1.6 kb/s — is about 10 times less than what wideband codecs typically use. The quality is much better than existing very low bitrate vocoders. In fact, it’s comparable to that of more traditional codecs using higher bitrates.

LPCNet sample player

Screenshot of a demo player that demonstrates the quality of LPCNet-coded speech

This new codec can be used to improve voice quality in countries with poor network connectivity. It can also be used as redundancy to improve robustness to packet loss for everyone. In storage applications, it can compress an hour-long podcast to just 720 kB (so you’ll still have room left on your floppy disk). With some further work, the technology behind LPCNet could help improve existing codecs at very low bitrates.
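For reference, the arithmetic behind that podcast figure works out: at 1.6 kb/s, an hour of audio is 1.6 kb/s × 3,600 s ≈ 5,760 kilobits, or roughly 720 kB.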

Learn more about our ongoing work and check out the playable demo in this article.

The post A Real-Time Wideband Neural Vocoder at 1.6 kb/s Using LPCNet appeared first on Mozilla Hacks - the Web developer blog.

hacks.mozilla.orgScroll Anchoring in Firefox 66

Firefox 66 was released on March 19th with a feature called scroll anchoring.

It’s based on a new CSS specification that was first implemented by Chrome, and is now available in Firefox.

Have you ever had this experience before?

You’re reading headlines, but then an ad loads and moves what you were reading off the screen.

Or how about this?!

You rotate your phone, but now you can’t find the paragraph that you were just reading.

There’s a common cause for both of these issues.

Browsers scroll by tracking the distance you are from the top of the page. As you scroll around, the browser increases or decreases your distance from the top.

But what happens if an ad loads on a page above where you are reading?

The browser keeps you at the same distance from the top of the page, but now there is more content between what you’re reading and the top. In effect, this moves the visible part of the page up away from what you’re reading (and oftentimes into the ad that’s just loaded).

Or, what if you rotate your phone to portrait mode?

Now there’s much less horizontal space on the screen, and a paragraph that was 100px tall may now be 200px tall. If the paragraph you were reading was 1000px from the top of the page before rotating, it may now be 2000px from the top of the page after rotating. If the browser is still scrolled to 1000px, you’ll be looking at content far above where you were before.

The key insight to fixing these issues is that users don’t care what distance they are from the top of the page. They care about their position relative to the content they’re looking at!

Scroll anchoring anchors you to the content you’re looking at. As that content is moved by ads, screen rotations, screen resizes, or other causes, the page now scrolls to keep you in the same position relative to it.

Demos

Let’s take a look at some examples of scroll anchoring in action.

Here’s a page with a slider that changes the height of an element at the top of the page. Scroll anchoring prevents the element above the viewport from changing what you’re looking at.

Here’s a page using CSS animations and transforms to change the height of elements on the page. Scroll anchoring keeps you looking at the same paragraph even though it’s been moved by animations.

And finally, here’s the original video of screen rotation with scroll anchoring disabled, in contrast to the view with scroll anchoring enabled.

Notice how we jump to an unrelated section when scroll anchoring is disabled?

How it works

Scroll anchoring works by first selecting an element of the DOM to be the anchor node and then attempting to keep that node in the same relative position on the screen.

To choose an anchor node, scroll anchoring uses the anchor selection algorithm. The algorithm attempts to pick content that is small and near the top of the page. The exact steps are slightly complicated, but roughly it works by iterating over the elements in the DOM and choosing the first one that is visible on the screen.

When a new element is added to the page, or the screen is rotated/resized, the page’s layout needs to be recalculated. During this process, we check to see if the anchor node has been moved to a new location. If so, we scroll to keep the page in the same relative position to the anchor node.
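As a very rough sketch of the two steps just described (not Firefox’s actual implementation, and a simplification of the anchor selection spec; the node type and thresholds are made up for illustration), the logic looks something like this:

```rust
// Toy node type for illustration only; the real code walks Gecko's frame tree.
struct Node {
    top: f64,    // offset from the top of the document, in CSS pixels
    height: f64,
    children: Vec<Node>,
}

/// Simplified anchor selection: walk the tree in document order and return the
/// deepest first node that is at least partially visible in the viewport.
fn select_anchor(node: &Node, viewport_top: f64, viewport_height: f64) -> Option<&Node> {
    let visible = node.top + node.height > viewport_top
        && node.top < viewport_top + viewport_height;
    if !visible {
        return None;
    }
    // Prefer small content near the top: descend into the first visible child.
    for child in &node.children {
        if let Some(anchor) = select_anchor(child, viewport_top, viewport_height) {
            return Some(anchor);
        }
    }
    Some(node)
}

/// Adjustment after a layout change: scroll by however far the anchor moved,
/// so it stays at the same position relative to the viewport.
fn adjust_scroll(scroll_top: f64, anchor_top_before: f64, anchor_top_after: f64) -> f64 {
    scroll_top + (anchor_top_after - anchor_top_before)
}
```

In this simplified picture, if an ad above the anchor grows by 300px, the anchor’s document offset increases by 300px and the scroll position is bumped by the same amount, so what you’re reading stays put on screen.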

The end result is that changes to the layout of a page above the anchor node are not able to change the relative position of the anchor node on the screen.

Web compatibility

New features are great, but do they break websites for users?

This feature is an intervention. It breaks established behavior of the web to fix an annoyance for users.

It’s similar to how browsers worked to prevent pop-up ads in the past, and to the ongoing work to prevent autoplaying audio and video.

This type of workaround comes with some risk, as existing websites have expectations about how scrolling works.

Scroll anchoring mitigates the risk with several heuristics to disable the feature in situations that have caused problems with existing websites.

Additionally, a new CSS property has been introduced, overflow-anchor, to allow websites to opt-out of scroll anchoring.

To use it, just add overflow-anchor: none on any scrolling element where you don’t want to use scroll anchoring. You can also add overflow-anchor: none to specific elements that you want to exclude from being selected as anchor nodes.

Of course there are still possible incompatibilities with existing sites. If you see a new issue caused by scroll anchoring, please file a bug!

Future work

The version of scroll anchoring shipping now in Firefox 66 is our initial implementation. In the months ahead we will continue to improve it.

The most significant effort will involve improving the algorithm used to select an anchor.

Scroll anchoring is most effective when it selects an anchor that’s small and near the top of your screen.

  1. If the anchor is too large, it’s possible that content inside of it will expand or shrink in a way that we won’t adjust for.
  2. If the anchor is too far from the top of the screen, it’s possible that content below what you’re looking at will expand and cause unwanted scroll adjustments.

We’ve found that our implementation of the specification can select inadequate anchors on pages with table layouts or significant content inside of overflow: hidden.

This is due to a fuzzy area of the specification where we chose an approach different than Chrome’s implementation. This is one of the values of multiple browser implementations: We have gained significant experience with scroll anchoring and hope to bring that to the specification, to ensure it isn’t defined by the implementation details of only one browser.

The scroll anchoring feature in Firefox has been developed by many people. Thanks go out to Daniel Holbert, David Baron, Emilio Cobos Álvarez, and Hiroyuki Ikezoe for their guidance and many reviews.

The post Scroll Anchoring in Firefox 66 appeared first on Mozilla Hacks - the Web developer blog.