The Mozilla Blog: Thoughts on the Latest Development in the U.S. Administration Travel Ban case

This morning, the U.S. Supreme Court decided to hear the lawfulness of the U.S. Administration’s revised Travel Ban. We’ve opposed this Executive Order from the beginning as it undermines immigration law and impedes the travel necessary for people who build, maintain, and protect the Internet to come together.

Today’s new development means that until the legal case is resolved the travel ban cannot be enforced against people from the six predominantly Muslim countries who have legitimate ties or relationships to family or business in the U.S. This includes company employees and those visiting close family members.

However, the Supreme Court departed from lower court opinions by allowing the ban to be enforced against visa applicants with no connection to the U.S. We hope that the Government will apply this standard so that qualified visa applicants who demonstrate valid reasons for travel to the U.S. are not discriminated against, and that these decisions are made reliably, to avoid the chaos that travelers, families, and businesses experienced earlier this year.

Ultimately, we would like the Court to hold that blanket bans targeted at people of particular religions or nationalities are unlawful under the U.S. Constitution and harmfully impact families, businesses, and the global community.  We will continue to follow this case and advocate for the free flow of information and ideas across borders, of which travel is a key part.

The post Thoughts on the Latest Development in the U.S. Administration Travel Ban case appeared first on The Mozilla Blog.

hacks.mozilla.org: Opus audio codec version 1.2 released

The Opus audio codec just got another major upgrade with the release of version 1.2 (see demo). Opus is a totally open, royalty-free, audio codec that can be used for all audio applications, from music streaming and storage to high-quality video-conferencing and VoIP. Its standardization by the Internet Engineering Task Force (IETF) in 2012 (RFC 6716) was a major victory for open standards. Opus is the default codec for WebRTC and is now included in all major web browsers.

This new release brings many speech and music quality improvements, especially at low bitrates. The result is that Opus can now push stereo music bitrates down to 32 kb/s and encode full-band speech down to 14 kb/s. All that is achieved while remaining fully compatible with RFC 6716. The new release also includes optimizations, new options, as well as many bug fixes. This demo shows a few of the upgrades that users and implementers will care about the most, including audio samples. For those who haven’t used Opus yet, now’s a good time to give it a try.

hacks.mozilla.org: An inside look at Quantum DOM Scheduling

Multi-tab browsing is heavier than ever as people spend more time on services like Facebook, Twitter, YouTube, Netflix, and Google Docs, making them part of their daily life and work on the Internet.

Quantum DOM: Scheduling is a significant piece of Project Quantum, which focuses on making Firefox more responsive, especially when lots of tabs are open. In this article, we’ll describe problems we identified in multi-tab browsing, the solutions we figured out, the current status of Quantum DOM, and opportunities for contribution to the project.

Problem 1: Task prioritization in different categories

Since multiprocess Firefox (e10s) was first enabled in Firefox 48, web content tabs now run in separate content processes in order to reduce overcrowding of OS resources in a given process. However, after further research, we found that the task queue of the main thread in the content process was still crowded with tasks in multiple categories. The tasks in the content process can come from a number of possible sources: through IPC (interprocess communication) from the main process (e.g. for input events, network data, and vsync), directly from web pages (e.g. from setTimeout, requestIdleCallback, or postMessage), or internally in the content process (e.g. for garbage collection or telemetry tasks). For better responsiveness, we’ve learned to prioritize tasks for user inputs and vsync above tasks for requestIdleCallback and garbage collection.

Problem 2: Lack of task prioritization between tabs

Inside Firefox, tasks running in foreground and background tabs are executed in first-come-first-served order, in a single task queue. It is quite reasonable to prioritize foreground tasks over background ones in order to increase the responsiveness of the user experience for Firefox users.

Goals & solutions

Let’s take a look at how we approached these two scheduling challenges, breaking them into a series of actions leading to achievable goals:

  • Classify and prioritize tasks on the main thread of the content processes in two dimensions (categories and tab groups), to provide better responsiveness.
  • Preempt tasks running in background tabs when the preemption is not noticeable to the user.
  • Provide an alternative to multiple content processes (e10s multi) when fewer content processes are available due to limited resources.

Task categorization

To resolve our first problem, we divide the task queue of the main thread in the content processes into 3 prioritized queues: High (User Input and Refresh Driver), Normal (DOM Event, Networking, TimerCallback, WorkerMessage), and Low (Garbage Collection, IdleCallback). Note: The order of tasks of the same priority is kept unchanged.
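To make the three-queue idea concrete, here is a minimal, self-contained sketch (our own names, not Gecko's internal classes) of a prioritized main-thread queue that runs higher-priority tasks first while keeping FIFO order within each priority:

```cpp
#include <array>
#include <cassert>
#include <deque>
#include <functional>
#include <string>
#include <vector>

// Illustrative sketch only: three prioritized queues feeding one main thread.
enum class Priority { High = 0, Normal = 1, Low = 2 };

class PrioritizedTaskQueue {
 public:
  void Dispatch(Priority p, std::function<void()> task) {
    queues_[static_cast<size_t>(p)].push_back(std::move(task));
  }

  // Runs the oldest task of the highest non-empty priority.
  // Returns false when every queue is empty.
  bool RunOne() {
    for (auto& q : queues_) {
      if (!q.empty()) {
        std::function<void()> task = std::move(q.front());
        q.pop_front();
        task();
        return true;
      }
    }
    return false;
  }

 private:
  std::array<std::deque<std::function<void()>>, 3> queues_;
};
```

With this scheme, a user-input task dispatched after a garbage-collection task still runs first, while two input tasks keep their relative order.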

Task grouping

Before describing the solution to our second problem, let’s define a TabGroup as a set of open tabs that are associated via window.opener and window.parent. In the HTML standard, this is called a unit of related browsing contexts. Tasks are isolated and cannot affect each other if they belong to different TabGroups. Task grouping ensures that tasks from the same TabGroup are run in order while allowing us to interrupt tasks from background TabGroups in order to run tasks from a foreground TabGroup.

In Firefox internals, each window/document contains a reference to the TabGroup object it belongs to, which provides a set of useful dispatch APIs. These APIs make it easier for Firefox developers to associate a task with a particular TabGroup.
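As an illustration of the grouping idea (the types and function below are ours, not the actual TabGroup dispatch APIs), a scheduler can prefer foreground groups while still running each group's tasks in FIFO order:

```cpp
#include <cassert>
#include <deque>
#include <functional>
#include <string>
#include <vector>

// Illustrative sketch: each task is dispatched through the group it belongs
// to, so the scheduler knows which tasks may be deferred.
struct TabGroupQueue {
  bool foreground = false;
  std::deque<std::function<void()>> tasks;

  void Dispatch(std::function<void()> task) { tasks.push_back(std::move(task)); }
};

// Runs one pending task, preferring foreground TabGroups over background ones.
bool RunOne(std::vector<TabGroupQueue*>& groups) {
  for (bool wantForeground : {true, false}) {
    for (TabGroupQueue* g : groups) {
      if (g->foreground == wantForeground && !g->tasks.empty()) {
        std::function<void()> task = std::move(g->tasks.front());
        g->tasks.pop_front();
        task();
        return true;
      }
    }
  }
  return false;
}
```

Even if background tasks were dispatched first, the foreground group's tasks run first, and each group's own tasks never reorder relative to each other.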

How tasks are grouped inside Firefox

Here are several examples to show how we group tasks in various categories inside Firefox:

  1. Inside the implementation of window.postMessage(), an asynchronous task called PostMessageEvent will be dispatched to the task queue of the main thread:
void nsGlobalWindow::PostMessageMozOuter(...) {
  RefPtr<PostMessageEvent> event = new PostMessageEvent(...);
  // ...the event is then dispatched to the task queue of the main thread.
}

With the new association of DOM windows to their TabGroups and the new dispatching API provided in TabGroup, we can now associate this task with the appropriate TabGroup and specify the TaskCategory:

void nsGlobalWindow::PostMessageMozOuter(...) {
  RefPtr<PostMessageEvent> event = new PostMessageEvent(...);
  // nsGlobalWindow::Dispatch() finds the TabGroup of this window for dispatching.
  Dispatch("PostMessageEvent", TaskCategory::Other, event);
}
  2. In addition to the tasks that can be associated with a TabGroup, there are several kinds of tasks inside the content process, such as telemetry data collection and resource management via garbage collection, which have no relationship to any web content. Here is how garbage collection starts:
void GCTimerFired() {
  // A timer callback that starts garbage collection.
}

void nsJSContext::PokeGC(...) {
  // GCTimerFired() will be invoked asynchronously by enqueuing a task into
  // the task queue of the main thread after the timeout.
  sGCTimer->InitWithFuncCallback(GCTimerFired, ...);
}

To group tasks that have no TabGroup dependencies, a special group called SystemGroup is introduced. Then, the PokeGC() method can be revised as shown here:

void nsJSContext::PokeGC(...) {
  // Sketch: run the timer callback on the SystemGroup event target for
  // TaskCategory::GC (the exact calls differ slightly in the Gecko source).
  sGCTimer->SetTarget(SystemGroup::EventTargetFor(TaskCategory::GC));
  sGCTimer->InitWithFuncCallback(GCTimerFired, ...);
}

We have now grouped this GCTimerFired task to the SystemGroup with TaskCategory::GC specified. This allows the scheduler to interrupt the task to run tasks for any foreground tab.

  3. In some cases, the same task can be requested either by specific web content or by an internal Firefox script with system privileges in the content process. We then have to decide whether the SystemGroup makes sense for a request that is not tied to any window/document. For example, in the implementation of DNSService in the content process, an optional TabGroup-versioned event target can be provided for dispatching the result callback after the DNS query is resolved. If the optional event target is not provided, the SystemGroup event target in TaskCategory::Network is chosen: we assume the request was fired from an internal script or internal service that has no relationship to any window/document.
nsresult ChildDNSService::AsyncResolveExtendedNative(
    const nsACString& hostname,
    nsIDNSListener* listener,
    nsIEventTarget* target_,
    nsICancelable** result)
{
  nsCOMPtr<nsIEventTarget> target = target_;
  if (!target) {
    target = SystemGroup::EventTargetFor(TaskCategory::Network);
  }

  RefPtr<DNSRequestChild> childReq =
      new DNSRequestChild(hostname, listener, target);

  return NS_OK;
}

TabGroup categories

Once the task grouping is done inside the scheduler, we assign a cooperative thread per tab group from a pool to consume the tasks inside a TabGroup. Each cooperative thread is pre-emptable by the scheduler via JS interrupt at any safe point. The main thread is then virtualized via these cooperative threads.

In this new cooperative-thread approach, we ensure that only one thread at a time can run a task. This allocates more CPU time to the foreground TabGroup and also ensures internal data correctness in Firefox, which includes many services, managers, and data designed intentionally as singleton objects.
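The mutual-exclusion property can be sketched with ordinary threads and a mutex standing in for the "main-thread token" (real Gecko uses cooperative scheduling with safe interruption points rather than a plain lock):

```cpp
#include <cassert>
#include <mutex>
#include <thread>
#include <vector>

// Illustrative sketch: several worker threads exist, but a single token
// guarantees that only one of them executes main-thread work at any moment,
// preserving the single-threaded invariants that singleton services rely on.
class VirtualMainThread {
 public:
  // Runs `work` while holding the token; records how many threads were
  // running concurrently so mutual exclusion can be verified.
  template <typename F>
  void Run(F work) {
    std::lock_guard<std::mutex> hold(token_);
    ++running_;
    if (running_ > maxConcurrent_) maxConcurrent_ = running_;
    work();
    --running_;
  }

  int MaxConcurrent() const { return maxConcurrent_; }

 private:
  std::mutex token_;
  int running_ = 0;
  int maxConcurrent_ = 0;
};
```

However many threads contend for the token, at most one is ever inside `Run()` at a time, which is the invariant the cooperative-thread design depends on.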

Obstacles to task grouping and scheduling

It’s clear that the performance of Quantum DOM scheduling depends heavily on task grouping. Ideally, each task would be associated with exactly one TabGroup. In reality, however, some tasks are designed to serve multiple TabGroups and require refactoring before they can be grouped, and not all tasks can be grouped before the scheduler is ready to be enabled. Hence, to enable the scheduler before all tasks are grouped, we temporarily disable preemption whenever an ungrouped task arrives, because we cannot know which TabGroup that task belongs to.
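A minimal sketch of that fallback rule, using our own hypothetical types:

```cpp
#include <cassert>
#include <vector>

// Sketch of the interim fallback: preemption stays enabled only while every
// pending task is attributed to some TabGroup; a single ungrouped task forces
// the scheduler back into run-to-completion mode, since we cannot tell which
// TabGroup it might touch.
struct PendingTask {
  const void* tabGroup = nullptr;  // nullptr means "ungrouped"
};

bool PreemptionAllowed(const std::vector<PendingTask>& pending) {
  for (const PendingTask& t : pending) {
    if (t.tabGroup == nullptr) {
      return false;  // an ungrouped task disables preemption temporarily
    }
  }
  return true;
}
```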

Current status of task grouping

We’d like to send thanks to the many engineers from various sub-modules, including DOM, Graphics, ImageLib, Media, Layout, Networking, and Security, who’ve helped clear these ungrouped (unlabeled) tasks, prioritized by the frequency shown in telemetry results.

The table below shows telemetry records of tasks running in the content process, providing a better picture of what Firefox is actually doing:

The good news is that over 80% of tasks (weighted by frequency) have been cleared recently. However, there is still a fair number of anonymous tasks to be cleared. Additional telemetry will help us check the mean time between two ungrouped tasks arriving on the main thread: the larger the mean time, the more performance gain we’ll see from the Quantum DOM Scheduler.

Contribute to Quantum DOM development

As mentioned above, the more tasks are grouped (labeled), the more benefit we gain from the scheduler. If you are interested in contributing to Quantum-DOM, here are some ways you can help:

  • Pick any unassigned bug from the labeling meta-bug and follow this guideline for labeling.
  • If you are not familiar with these unlabeled bugs but want to help name the tasks, reducing the anonymous tasks in the telemetry results and improving future analysis, this guideline will be helpful. (Update: naming anonymous tasks is going to be addressed by an automation tool in this bug.)

If you get started fixing bugs and run into issues or questions, you can usually find the Quantum DOM team in Mozilla’s #content IRC channel.

Air Mozilla: A Welcome to All Hands San Francisco 2017 - Chris Beard


Firefox UX: Let’s tackle the same challenge again, and again.

Actually, let’s not!

The products we build get more design attention as our Firefox UX team has grown from about 15 to 45 people. Designers can now continue to focus on their product after the initial design is finished, instead of having to move on to the next project. This is great, as it helps us improve our products step by step. But it also increases the effort needed to keep this growing team in sync and able to answer all questions posed to us in a timely manner.

Scaling communication from small to big teams leads to massive effort for a few.

Especially for engineers and new designers, it is often difficult to get timely answers to simple questions. Those answers are often in the original spec, which too often is hard to locate. Or worse, they may exist only in the mind of a designer who has since left, or who receives too many questions to respond promptly.

In a survey we ran in early 2017, developers reported feeling that they

  • spend too much time identifying the right specs to build from,
  • spend too much time waiting for feedback from designers, and
  • spend too much time mapping new designs to existing UI elements.

In the same survey, designers reported feeling that they

  • spend too much time identifying current UI to re-use in their designs, and
  • spend too much time re-building current UI to use in their designs.

All those repetitive tasks that people feel they spend too much time on ultimately keep us from tackling newer and bigger challenges. So, actually, let’s not spend our time on those.

Let’s help people spend time on what they love to do.

Shifting some communication to a central tool can reduce load on people and lower the barrier for entry.

Let’s build tools that help developers know what a given UI should look like, without them needing to wait for feedback from designers. And let’s use that system for designers to identify UI we already built, and to learn how they can re-use it.

We call this the Photon Design System,
and its first beta version is ready to be used:

We are happy to receive feedback and contributions on the current content of the system, as well as on what content to add next.

Photon Design System

Based on what we learned from people, we are building our design system to help people:

  • easily find what they are looking for,
  • quickly understand its context, and
  • more deeply understand Firefox design.

Currently, the Photon Design System covers fundamental design elements like icons, colors, typography, and copywriting, as well as our design principles and guidelines on how to design for scale. Defining those has already helped designers align better across products and features, and developers have a definitive source to fall back on when a design does not specify a color, icon, or other detail.


With all the design fundamentals in place we are starting to combine them into defined components that can easily be reused to create consistent Firefox UI across all platforms, from mobile to desktop, and from web-based to native. This will add value for people working on Firefox products, as well as help people working on extensions for Firefox.

If you are working on Firefox UI

We would love to learn from you what principles, patterns & components your team’s work touches, and what you feel is worth documenting for others to learn from, and use in their UI.

Share your principle/pattern/component with us!

And if you haven’t yet, ask yourself where you could use what’s already documented in the Photon Design System and help us find more and more synergies across our products to utilize.

If you are working on a Firefox extension

We would love to learn where you would have wanted design support when building your extension, and when you had to spend more time on design than you intended.

Share with us!

Let‘s tackle the same challenge again, and again. was originally published in Firefox User Experience on Medium, where people are continuing the conversation by highlighting and responding to this story.

Air Mozilla: Mozilla Gigabit Eugene Open House

Hello Eugene, Oregon! Come meet with local innovators, educators, entrepreneurs, students, and community advocates and learn about what it means to be a “Mozilla Gigabit...

Air Mozilla: Gigabit Community Fund June 2017 RFP Webinar

This summer, we're launching a new round of the Mozilla Gigabit Community Fund. We're funding projects that explore how high-speed networks can be leveraged for...

hacks.mozilla.org: Powerful New Additions to the CSS Grid Inspector in Firefox Nightly

CSS Grid is revolutionizing web design. It’s a flexible, simple design standard that can be used across all browsers and devices. Designers and developers are rapidly falling in love with it and so are we. That’s why we’ve been working hard on the Firefox Developer Tools Layout panel, adding powerful upgrades to the CSS Grid Inspector and Box Model. The latest improvements are now available in Firefox Nightly.

Layout Panel Improvements

The new Layout Panel lists all the available CSS Grid containers on the page and includes an overlay to help you visualize the grid itself. Now you can customize the information displayed on the overlay, including grid line numbers and dimensions.

This is especially useful if you’re still getting to know CSS Grid and how it all works.

There’s also a new interactive grid outline in the sidebar. Mouse over the outline to highlight parts of the grid on the page and display size, area, and position information.

The new “Display grid areas” setting shows the bounding areas and the associated area name in every cell. This feature was inspired by CSS Grid Template Builder, which was created by Anthony Dugois.

Finally, the Grid Inspector is capable of visualizing transformations applied to the grid container. This lets developers accurately see where their grid lines are on the page for any grids that are translated, skewed, rotated or scaled.

Improved Box Model Panel

We also added a Box Model Properties component that lists properties that affect the position, size and geometry of the selected element. In addition, you’ll be able to see and edit the top/left/bottom/right position and height/width properties—making live layout tweaks quick and easy.

Finally, you’ll also be able to see the offset parent for any positioned element, which is useful for quickly finding nested elements.

As always, we want to hear what you like or don’t like and how we can improve Firefox Dev Tools. Find us on Discourse or @firefoxdevtools on Twitter.

Thanks to the Community

Many people were influential in shipping the CSS Layout panel in Nightly, especially the Firefox Developer Tools and Developer Relations teams. We thank them for all their contributions to making Firefox awesome.

We also got a ton of help from the amazing people in the community, and participants in programs like Undergraduate Capstone Open Source Projects (UCOSP) and Google Summer of Code (GSoC). Many thanks to all the contributors who helped land features in this release including:

Micah Tigley – Computer science student at the University of Lethbridge, Winter 2017 UCOSP student, Summer 2017 GSoC student. Micah implemented the interactive grid outline and grid area display.

Alex Lockhart – Dalhousie University student, Winter 2017 UCOSP student. Alex contributed to the Box Model panel with the box model properties and position information.

Sheldon Roddick – Student at Thompson Rivers University, Winter 2017 UCOSP student. Sheldon did a quick contribution to add the ability to edit the width and height in the box model.

If you’d like to become a contributor to Firefox Dev Tools, hit us up on GitHub, Slack, or the #devtools IRC channel. There you will find all the resources you need to get started.

Air Mozilla: Reps Weekly Meeting Jun. 22, 2017

This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Air Mozilla: Community Participation Guidelines Revision Brownbag (APAC)

A revised version of Mozilla's Community Participation Guidelines was released in May 2017. Please join Larissa Shapiro (Head of D&I) and Lizz Noonan (D&I Coordinator)...

Air Mozilla: The Joy of Coding - Episode 103

mconley livehacks on real Firefox bugs while thinking aloud.

Mozilla Add-ons Blog: Upcoming changes for add-on usage statistics

We’re changing the way we calculate add-on usage statistics on AMO so they better reflect their real-world usage. This change will go live on the site later this week.

The user count is a very important part of AMO. We show it prominently on listing and search pages. It’s a key factor in determining add-on popularity and search ranking.

Most popular add-ons on AMO

However, there are a couple of problems with it:

  • We count both enabled and disabled installs. This means some add-ons with high disable rates have a higher ranking than they should.
  • It’s an average over a period of several weeks. Add-ons that are rapidly growing in users have user counts that lag behind.

We’ll be calculating the new average based on enabled installs for the past two weeks of activity. We believe this will reflect add-on usage more accurately.
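As a hypothetical illustration of the new metric (the post does not spell out AMO's exact formula), averaging daily enabled-install counts over the two-week window might look like:

```cpp
#include <cassert>
#include <numeric>
#include <vector>

// Hypothetical sketch: average the daily *enabled* install counts over the
// last 14 days; disabled installs are simply never counted.
long AverageEnabledUsers(const std::vector<long>& dailyEnabledCounts) {
  if (dailyEnabledCounts.empty()) {
    return 0;
  }
  long total = std::accumulate(dailyEnabledCounts.begin(),
                               dailyEnabledCounts.end(), 0L);
  return total / static_cast<long>(dailyEnabledCounts.size());
}
```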

What it means for add-on developers

We expect most add-ons to experience a small drop in their user numbers, due to the removal of disabled installs. Most add-on rankings on AMO won’t change significantly. This change also doesn’t affect the detailed statistics dashboard developers have access to. Only the number displayed on user-facing sections of the site will change.

If you notice any problems with the statistics or anything else on AMO, please let us know by creating an issue.

The post Upcoming changes for add-on usage statistics appeared first on Mozilla Add-ons Blog.

hacks.mozilla.org: Designing for performance: A data-informed approach for Quantum development

When we announced Project Quantum last October, we talked about how users would benefit from our focus on “performance gains…that will be so noticeable that your entire web experience will feel different.”

We shipped the first significant part of this in Firefox 53, and continue to work on the engineering side. Now let’s dive into the performance side and the work we’re doing to ensure that our users will enjoy a faster Web experience.

What makes work on performance so challenging and why is it so important to include the user from the very beginning?

Performance — a contested subject, to say the least!

Awareness of performance as a UX issue often begins with a negative experience – when things get slow or don’t work as expected. In fact, good performance is already table stakes, something that everyone expects from an online product or service. Outstanding performance will very soon become the new baseline point of reference.

The other issue is that there are different perspectives on performance. For users, performance is about their experience and is very often unspecific. For them, perception of good performance can range from “this is amazingly fast” to “SLOW!”, from “WOW!” to “NO!”. For engineers, performance is about numbers and processes. The probes that collect data in the code often measure one specific task in the pipeline. Measuring and tracking capabilities like Garbage Collection (GC) enables engineers to react to regressions in the data quickly, and work on fixing the root causes.

This is why there can be a disconnect between user experience and engineering efforts at mitigation. We measure garbage collection, but it’s often measured without context, such as whether it runs during page load, while the user interacts with a website, or during event queue idle time. Often, GC is within budget, which means that users will hardly perceive it. More generally, specific aspects of what we measure with our probes can be hard to map to the unspecific experience of performance that users have.

Defining technical and perceived performance

To describe an approach for optimizing performance for users, let us start by defining what performance means. For us, there are two sides to performance: technical performance and perceived performance.

Under technical performance, we include the things that we can measure in the browser: how long page elements take to render, how fast we can parse JavaScript or — and that is often more important to understand — how slow certain things are. Technical performance can be measured and the resulting data can be used to investigate performance issues. Technical performance represents the engineer’s viewpoint.

On the other hand, there is the topic of how users experience performance. When users talk about their browser’s performance, they talk about perceived performance or “Quality of Experience” (QoE). Users express QoE in terms of any perceivable, recognized, and nameable characteristic of the product. In the QoE theory, these are called QoE features. We may assume that these characteristics are related to factors in the product that impact technical performance, the QoE factors, but this is not necessarily given.

A promising approach to user-perceived optimization of performance is to identify those factors that have the biggest impact on QoE features and focus on optimizing their technical performance.

Understanding perception

The first step towards optimizing Quantum for perceived performance is to understand how human perception works. We won’t go into details here, but it’s important to know that there are perceptual thresholds of duration that we can leverage. The most prominent ones for Web interactions were defined by Jakob Nielsen back in the 1990s, and even today they inform user-centric performance models like RAIL. Following Nielsen’s thresholds gives a first good estimate of the budget available for certain tasks performed by the browser engine.

With our user research team, we are validating and investigating these perceptual thresholds for modern web content. We are running experiments with users, both in the lab and remotely. Of course, this will only happen with users’ consent and everybody will be able to opt in and opt out of these studies at any time. With tools like Shield, we run a set of experiments that allow us to learn about performance and how to improve it for users.

However, knowing the perceptual thresholds and the respective budget is just an important first step. Next, we’ll go into a bit more detail about how we use a data-informed approach for benchmarking and optimizing performance during the development of our new browser engine.

Three pillars of perceived Web performance

The challenge with optimizing perceived performance of a browser engine is that there are many components involved in bringing data from the network to our screens. All these components may have an impact on the perceived performance and on the underlying perceptual thresholds. However, users don’t know about this structure and the engine. From their point of view, we can define three main pillars for how users perceive performance on the Web: page load, smoothness and responsiveness.

  • Page load: This is what people notice each time when loading a new page. Users care about fast page loads, and we have seen in user research that this is often the way users determine good or bad performance in their browser. Key events defining the perceptual budget during page load are: an immediate response to the user request for a new page, also known as “First Render” or “First non-blank Paint“, and the moment when all important elements are displayed, currently discussed as Hero Element Timing.
  • Smoothness: Scrolling and panning have become challenging activities on modern websites, with infinite scrolling, parallax effects, and dynamic sticky elements. Animations create a better user experience when interacting with the page. Our users want to enjoy a smooth experience for scrolling the web and web animations, be it on social media pages or when shopping for the latest gadget. Often, people nowadays also refer to smoothness as “always 60 fps”.
  • Responsiveness: Beyond scrolling and panning, the other big group of user interactions on websites are mouse, touch, and keyboard inputs. As modern web services create a native-like experience, user expectations for web services are more demanding, based on what they have come to expect for native apps on their laptops and desktop computers. Users have become sensitive to input latency, so we are currently looking at an ideal maximum delay of 100ms.

Targeted optimization for the whole Web

But how do we optimize these three pillars for the whole of the Web? It’s a bigger job than optimizing the performance of a single web service. In building Firefox, we face the challenge of optimizing our browser engine without knowing which pages our users visit or what they do on the Web, due to our commitment to user privacy. This also limits us in collecting data for specific websites or specific user tasks. However, we want to create the best Quality of Experience for as many users and sites as possible.

To start, we decided to focus on the types of content that are currently most popular with Web users. These categories are:

  • Search (e.g. Yahoo Search, Google, Bing)
  • Productivity (e.g. Yahoo Mail, Gmail, Outlook, GSuite)
  • Social (e.g. Facebook, LinkedIn, Twitter, Reddit)
  • Media (e.g. YouTube, Netflix, SoundCloud, Amazon Video)
  • E-commerce (e.g. eBay or Amazon)
  • News & Reference (e.g. NYTimes, BBC, Wikipedia)

Our goal is to learn from this initial set of categories and the most used sites within them and extend our work on improvements to other categories over time. But how do we now match technical to perceived performance and fix technical performance issues to improve the perceived ones?

A data-informed approach to optimizing a browser engine

The goal of our approach here is to take what matters to users and apply that knowledge to achieve technical impact in the engine. With the basics defined above, our iterative approach for optimizing the engine is as follows:

  1. Identification: Based on the set of categories in focus, we specify scenarios for page load, smoothness, and responsiveness that exceed the performance budget and negatively impact perceived performance.
  2. Benchmarks: We define test cases for the identified scenarios so that they become reproducible and quantifiable in our benchmarking testbeds.
  3. Performance profiles: We record and analyze performance profiles to create a detailed view into what’s happening in the browser engine and guide engineers to identify and fix technical root causes.

Identification of scenarios exceeding performance budget

Input for identifying those scenarios comes from different sources: they are either informed by results from user research or reported through bugs and user feedback. Here are two examples of such scenarios:

  • Scenario: browser startup
  • Category: a special case of page load
  • Performance budget: 1000ms for First Paint and 1500ms for Hero Element
  • Description: Open the browser by clicking the icon > wait for the browser to be fully loaded as a maximized window
  • What to measure: First Paint: browser window appears on the desktop; Hero Element: “Search” placeholder in the search box of the content window

  • Scenario: Open chat window on Facebook
  • Category: Responsiveness
  • Performance budget: 150ms
  • Description: Log in to Facebook > wait for the homepage to be fully loaded > click on a name in the chat panel to open the chat window
  • What to measure: time from the mouse-click input event to the chat window appearing on screen
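
Scenario definitions like these lend themselves to being treated as plain data. Here is a small JavaScript sketch (scenario names and measured values are invented for illustration) of how measurements could be checked against their perceptual budgets:

```javascript
// Scenarios with their perceptual budgets, as described above.
const scenarios = [
  { name: "browser startup: First Paint", budgetMs: 1000 },
  { name: "browser startup: Hero Element", budgetMs: 1500 },
  { name: "open Facebook chat window", budgetMs: 150 },
];

// Invented example measurements in milliseconds:
const measurements = {
  "browser startup: First Paint": 1180,
  "browser startup: Hero Element": 1420,
  "open Facebook chat window": 210,
};

// Scenarios whose measured time exceeds the perceptual budget:
const overBudget = scenarios.filter(s => measurements[s.name] > s.budgetMs);
console.log(overBudget.map(s => s.name));
```

Keeping budgets and measurements as data makes it trivial to re-run the same check after every benchmark run.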


We have built different testbeds that allow us to obtain valid and reproducible results, to create a baseline for each of the scenarios, and to track improvements over time. Talos is a Python-driven performance testing framework that, among many other things, includes a defined set of tests for browser startup and page load. It has recently been updated to match the new requirements and to measure events closer to user perception, like First Paint.

Hasal, on the other hand, focuses on benchmarks around responsiveness and smoothness. It runs a defined set of scripts that perform the defined scenarios (like the “open chat window” scenario above) and extracts the required timing data by analyzing videos captured during the interaction.

Additionally, there is still a lot of non-automated, manual testing involved, especially for the first rounds of baselining new scenarios before scripting them for automated testing. For this, we use an HDMI capture card and analyze the recorded videos frame by frame manually.
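
The heart of that frame-by-frame analysis can be sketched in a few lines of JavaScript. This is a toy version in which frames are plain arrays; real tools such as Hasal do far more work to extract reliable timings from captured video:

```javascript
// Find the first frame that differs from the initial frame and convert its
// index to a timestamp, given the capture frame rate.
function firstChangeMs(frames, fps) {
  const baseline = JSON.stringify(frames[0]);
  for (let i = 1; i < frames.length; i++) {
    if (JSON.stringify(frames[i]) !== baseline) {
      return (i / fps) * 1000; // frame index -> milliseconds
    }
  }
  return null; // no visual change detected
}

// Toy 60 fps capture: the "screen" first changes at frame 30 (i.e. 500 ms).
const frames = Array.from({ length: 60 }, (_, i) =>
  i < 30 ? [0, 0, 0] : [255, 255, 255]
);
console.log(firstChangeMs(frames, 60));
```

The same idea extends to Hero Elements: instead of comparing whole frames, compare only the region of the screen where the element is expected to appear.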

All these testbeds give us data about how critical the identified scenarios are in terms of exceeding their respective perceptual budgets. Running benchmarks regularly (once a week or even more often) for critical scenarios like browser startup also tracks improvements over time and shows clearly when improvements have brought a scenario within its perceptual budget.

Performance profiles

Now that we have defined our scenarios and understand how much improvement is required to create good Quality of Experience, the last step is to enable engineers to achieve these improvements. The way that engineers look at performance problems in the browser engine is through performance profiles. Performance profiles are a snapshot of what happens in the browser engine during a specific user task such as one of our defined scenarios.

A performance profile using the Gecko Profiler. The profile shows Gecko’s main thread, four content threads, and the compositor main thread. Below is the call stack.


A profile consists of a timeline with tracing markers, different thread timelines and the call tree. The timeline consists of several rows that indicate interesting events in terms of tracing markers (colored segments). With the timeline, you can also zoom in to get more details for marked areas. The thread timelines show a list of profiled threads, like Gecko’s Main Thread, four content process threads (thanks to multi-process), and the main thread of the compositor process, as seen in the profile above. The x-axis is synced to the timeline above, and the y-axis shows the stack depth at a given point in time. Finally, the call tree shows the collected samples within a given timeframe organized by ‘Running Time’.

It requires some experience to be able to read these performance profiles and translate them into actions. However, because they map critical user scenarios directly to technical performance, performance profiles serve as a good tool to improve the browser engine according to what users care about. The challenge here is to identify root causes to improve performance broadly, rather than focus on specific sites and individual bugs. This is also the reason why we focus on categories of pages and not an individual set of initial websites.

For in-depth information about performance profiles, here is an article and a talk from Ehsan Akhgari. We are continuously working on improving the profiler add-on, which is now written in React/Redux.

Iterative testing and profiling performance

The initial round of baselining and profiling performance for the scenarios above helps us go from identifying user performance issues to fixing those issues in the browser engine. However, only iterative testing and profiling can ensure that patches landing in the code actually deliver the expected benefits in terms of the performance budget.

Additionally, iterative benchmarking will also help identify the impact that a patch has on other critical scenarios. Looking across different performance profiles and capturing comparable interactions or page load scenarios actually leads to fixing root causes. By fixing root causes rather than focusing on one-off cases, we anticipate that we will be able to improve QoE and benefit entire categories of websites and activities.

Continuous performance monitoring with Telemetry

Ultimately, we want to go beyond a specific set of web categories and look at the Web as a whole. We also want to go beyond manual testing, as this is expensive and time-consuming. And we want to apply knowledge that we have obtained from our initial data-driven approach and extend it to monitoring performance across our user base through Telemetry.

We recently added probes to our Telemetry system that will help us track events that matter to the user, such as first non-blank paint during page load, in the wild across all websites. Over time, we will extend the set of probes meaningfully. A good first attempt to define and include probes that are closer to what users perceive has been made by the Google Chrome team with their Progressive Web Metrics.
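
Once such probes report timings from the field, percentiles are usually more telling than averages, because page load distributions have long tails. A small sketch with invented sample values:

```javascript
// Nearest-rank percentile over a list of samples.
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const index = Math.min(sorted.length - 1, Math.floor((p / 100) * sorted.length));
  return sorted[index];
}

// Invented first-non-blank-paint samples (in ms) "from the field":
const paintTimesMs = [420, 380, 950, 1200, 610, 505, 700, 830, 460, 2900];
console.log(`median: ${percentile(paintTimesMs, 50)} ms, p95: ${percentile(paintTimesMs, 95)} ms`);
```

A single slow outlier barely moves the median but dominates the mean, which is why telemetry dashboards typically report p50/p95 rather than averages.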

A visualization of Progressive Web Metrics during page load and page interaction. The upper field shows the user interaction level and critical interactions related to the technical measures.


As mentioned in the beginning, for users performance is table stakes, something that they expect. In this article, we have explored how we capture issues in perceived performance, how we use benchmarks to measure the criticality of performance issues, and how we fix those issues by looking at performance profiles.

Beyond the scope of the current approach to performance, there’s an even more interesting question: Will improved performance lead to more usage of the browser or changes to how users use their browser? Can performance improvements increase user engagement?

But these are topics that still need more research — and, at some point in time, will be the subject for another blog post.

Meanwhile, if you are interested in following along with performance improvements and experiencing the enhanced performance of the Firefox browser, download and install the latest Firefox Nightly build and see what you think of its QoE.

Air MozillaCommunity Participation Guidelines Revision Brownbag (EMEA)

Community Participation Guidelines Revision Brownbag (EMEA) A revised version of Mozilla's Community Participation Guidelines was released in May 2017. Please join Larissa Shapiro (Head of D&I) and Lizz Noonan (D&I Coordinator)...

The Mozilla BlogA $2 Million Prize to Decentralize the Web. Apply Today

We’re fueling a healthy Internet by supporting big ideas that keep the web accessible, decentralized and resilient. What will you build?


Mozilla and the National Science Foundation are offering a $2 million prize for big ideas that decentralize the web. And we’re accepting applications starting today.

Mozilla believes the Internet is a global public resource that must be open and accessible to all.  In the 21st century, a lack of Internet access is far more than an inconvenience — it’s a staggering disadvantage. Without access, individuals miss out on substantial economic and educational opportunities, government services and the ability to communicate with friends, family and peers.

Currently, 34 million people in the U.S. — 10% of the country’s population — lack access to high-quality Internet connectivity. This number jumps to 39% in rural communities and 41% on Tribal lands. And when disasters strike, millions more can lose vital connectivity right when it’s needed most.

To connect the unconnected and disconnected across the U.S., Mozilla today is accepting applications for the Wireless Innovation for a Networked Society (WINS) challenges. Sponsored by NSF, a total of $2 million in prize money is available for wireless solutions that get people online after disasters, or that connect communities lacking reliable Internet access.

The details:

Off-the-Grid Internet Challenge

When disasters like earthquakes and hurricanes strike, communications networks are among the first pieces of critical infrastructure to overload or fail. How can we leverage both the Internet’s decentralized design and current wireless technology to keep people connected to each other — and vital messaging and mapping services — in the aftermath of a disaster?

Challenge applicants will be expected to design both the means to access the wireless network (i.e. hardware) and the applications provided on top of that network (i.e. software). Projects should be portable, easy to power and simple to access.

Here’s an example: A backpack containing a hard drive computer, battery and Wi-Fi router. The router provides access, via a Wi-Fi network, to resources on the hard drive like maps and messaging applications.

Smart Community Networks Challenge

Many communities across the U.S. lack reliable Internet access. Sometimes commercial providers don’t supply affordable access; sometimes a particular community is too isolated; sometimes the speed and quality of access is too slow. How can we leverage existing infrastructure — physical or network — to provide high-quality wireless connectivity to communities in need?

Challenge applicants should plan for a high density of users, far-reaching range and robust bandwidth. Projects should also aim to make a minimal physical footprint and uphold users’ privacy and security.

Here’s an example: A neighborhood wireless network where the nodes are housed in, and draw power from, disused phone booths or similarly underutilized infrastructure.

These challenges are open to individuals and teams, nonprofits and for-profits. Applicants could be academics, technology activists, entrepreneurs or makers. We’re welcoming anyone with big ideas and passion for a healthy Internet to apply. Prizes will be available for both early-stage design concepts and fully-working prototypes.

To learn more and apply, visit the WINS challenges website. This challenge is one of Mozilla’s open innovation competitions, which also include the Equal Rating Innovation Challenge.

Related Reading: Internet access is an essential part of life, but the quality of that access can vary wildly, writes Mozilla’s Executive Director Mark Surman in Quartz

The post A $2 Million Prize to Decentralize the Web. Apply Today appeared first on The Mozilla Blog.

Air MozillaRain of Rust -3rd online meeting

Rain of Rust -3rd online meeting This event belongs to a series of online Rust events that we run in the month of June, 2017

Air MozillaMartes Mozilleros, 20 Jun 2017

Martes Mozilleros Reunión bi-semanal para hablar sobre el estado de Mozilla, la comunidad y sus proyectos. Bi-weekly meeting to talk (in Spanish) about Mozilla status, community and...

The Mozilla BlogFirefox Focus New to Android, blocks annoying ads and protects your privacy

Last year, we introduced Firefox Focus, a new browser for the iPhone and iPad, designed to be fast, simple and always private. A lot has happened since November; and more than ever before, we’re seeing consumers play an active role in trying to protect their personal data and save valuable megabytes on their data plans.

While we knew that Focus provided a useful service for those times when you want to keep your web browsing to yourself, we were floored by your response – it’s the highest-rated browser from a trusted brand for the iPhone and iPad, earning a 4.6 average rating on the App Store.

Today, I’m thrilled to announce that we’re launching our Firefox Focus mobile app for Android.

Like the iPhone and iPad version, the Android app is free of tabs and other visual clutter, and erasing your sessions is as easy as a simple tap. Firefox Focus allows you to browse the web without being followed by tracking ads, which are notorious for slowing down your mobile experience. Why do we block these ad trackers? Because they not only track your behavior without your knowledge, they also slow down the web on your mobile device.

Check out this video to learn more:


New Features for Android

For the Android release of Firefox Focus, we added the following features:

  • Ad tracker counter – For the curious, there’s a counter to list the number of ads that are blocked per site while using the app.
  • Disable tracker blocker – For sites that are not loading correctly, you can disable the tracker blocker to quickly fix the problem and get back to where you left off.
  • Notification reminder – When Focus is running in the background, we’ll remind you through a notification and you can easily tap to erase your browsing history.

For Android users we also made Focus a great default browser experience. Since we support both custom tabs and the ability to disable the ad blocking as needed, it works great with apps like Facebook when you just want to read an article without being tracked. We built Focus to empower you on the mobile web, and we will continue to introduce new features that make our products even better. Thanks for using Firefox Focus for a faster and more private mobile browsing experience.


Firefox Focus Settings View

You can download Firefox Focus on Google Play and in the App Store.

The post Firefox Focus New to Android, blocks annoying ads and protects your privacy appeared first on The Mozilla Blog.

QMOFirefox 55 Beta 4 Testday, June 23rd

Hello Mozillians,

We are happy to let you know that Friday, June 23rd, we are organizing Firefox 55 Beta 4 Testday. We’ll be focusing our testing on the following new features: Screenshots and Simplify Page.

Check out the detailed instructions via this etherpad.

No previous testing experience is required, so feel free to join us on #qa IRC channel where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better!

See you on Friday!

Air MozillaCommunity Participation Guidelines Revision Brownbag (NALA)

Community Participation Guidelines Revision Brownbag (NALA) A revised version of Mozilla's Community Participation Guidelines was released in May 2017. Please join Larissa Shapiro (Head of D&I) and Lizz Noonan (D&I Coordinator)...

Air MozillaMozilla Weekly Project Meeting, 19 Jun 2017

Mozilla Weekly Project Meeting The Monday Project Meeting

Air MozillaRep. Eshoo Net Neutrality Roundtable

Rep. Eshoo Net Neutrality Roundtable Congresswoman Anna Eshoo (D-CA) will convene a roundtable to discuss the impacts of net neutrality and the consequence of eviscerating the policy. Eshoo will hear...

Mozilla L10NParis Localization Workshop

From May 6th-7th, 42 participants (coincidence? I think not) gathered in the beautiful city of Paris for another successful edition of our l10n workshops. This was one of our larger scale events with a total of twelve localization communities from four continents: Acholi, Fulah, Songhay, Xhosa, Wolof, Azerbaijani, Turkish, Uzbek, Arabic, Persian, Hebrew and Urdu. What a diverse group! This was in fact the broadest geographical coverage in a single l10n event.

Marcia Knous and Pascal Chevrel, both from the Release Management team and working on Firefox Nightly, joined us and held a Nightly workshop on the side with members from the French community – as well as some joint sessions around Project Dawn and Nightly testing updates that were of equal interest to both groups present (the l10n and Nightly communities).

Some may have noticed that, this year, our workshops have slowly started to evolve with each iteration. For example, although spectrograms have been a big part of our past workshops and have proved to be very useful, we decided not to do them this time around. With time, we’ve realized they are unfortunately quite time-consuming, and that we needed to shift our focus a bit (after all, we had been doing these for the past two years).

Instead, the l10n-drivers present held a Q&A session, which resembled the Mozilla fireside chats in a way. Overall this proved to be very valuable, as it let localizers get real clarifications and dive deeper into whichever topics they were interested in. In fact, we had l10n-drivers from across the board present, which helped make this session broad enough, and technical enough, to interest everyone present.

(picture by Emin Mastizada)

To give an idea, the l10n-drivers at the event were:

Delphine Lebédel: L10n Firefox mobile project manager

Peiying Mo: L10n project manager

Théo Chevalier: Mozilla Foundation l10n project manager

Axel Hecht: L10n tech lead

Francesco Lodolo: L10n Firefox desktop project manager

Drivers also gave short updates on current projects’ status and general Mozilla plans and goals for the year.

(Picture by Christophe Villeneuve)

We also made room for community presentations this time around. Any community who wanted to present their work was welcome to do so. One of these was a presentation about RTL (right-to-left) on mobile. It was great to see the Arabic, Hebrew, Persian and Urdu communities spontaneously pair up and start collaborating not only for this presentation, but during the workshop as a whole.

Ibrahima from the Fulah localization team also gave a presentation on Unicode and UTF-8 from the perspective of his own community, sharing insights and lessons learned from years of diving into the topic.

Communities spent most of the rest of their time on accomplishing the goals they had set themselves for the weekend (goals and general agenda details here).

Thanks to the spontaneous initiative of Flore, a long-time Mozilla contributor, there was even a cultural activity organized, which included walking to the Trocadéro and seeing the beautiful Tour Eiffel – a must when you are in Paris. Led by Flore, participants bravely faced the Parisian rain in order to get a breath of fresh air.

Although logistics were somewhat trickier to plan for such a large and diverse group, it was an incredible experience that allowed communities from completely different backgrounds to share insights and tips amongst each other – which, as always, was a beautiful and humbling moment to witness.

Thanks to the community feedback that we always gather from our final survey, we are able to learn and grow – making each edition an improvement over the previous ones, and building upon each event we hold. One take-away from this and past events is that we are currently looking into how other organizations make their events “food-friendly” and accessible to as many people as possible.

We are also tweaking our format a bit with each workshop we do this year. We have realized that we need change, and that we can improve things much more quickly by being flexible and adapting with each workshop we hold. Communities are different, issues are different, cultures are different. One format does not fit them all! So we are eager to continue exploring this year and reporting back what we have found. And then building upon the lessons we learn with each new event. Maybe tweaking things to be more like a localization unconference for the next iteration is going to be our next playground…

In any case, it seems to become clearer each time we organize one of these events that communities need diversity, and need more meet-ups that let them mingle with a larger crowd. Two days may not be enough, especially given we are flying people in from so far away. We’ve learned from our surveys that the community misses the larger MozCamp and MozSummit formats: they need to exchange more with like-minded people, from a much more diverse group, in order to thrive and progress effectively. Exploring the idea of global localization workshops, maybe with l10n communities from other open source projects, is one idea we are also currently playing with.

As usual, thoughts and feedback on improvements are always welcome. We want to hear from you! Always feel free to pitch any ideas by reaching out to anyone in our team.

Next up… Paraguay in August! Stay tuned 🙂

(Picture by Christophe Villeneuve)

(Picture by Christophe Villeneuve)

(Picture by Christophe Villeneuve)

Air MozillaWebdev Beer and Tell: June 2017

Webdev Beer and Tell: June 2017 Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on in...

SeaMonkey4 non blondes’ song…

Hi all,

No, the blog hasn’t been hacked.  You may wonder what “4 non blondes’ song” means.

They released a song some time ago called “What’s going on?”  And I’m sure that’s exactly what everyone’s thinking of.

Yes.  What’s going on?  Some might even say, “WTF is going on? Where the hell are the new releases?”

I had wanted to write out an analogy of how it’s like walking up a large sand dune in slippers and then being transported/teleported into a jungle where you’re sinking in quicksand only to then find yourself trekking up a sand dune again….

Then I gave up and felt it’d be easier just to say that I am doing my best in getting the following done:

  1. set up a new infra somewhere
  2. get us migrated to balrog
  3. Keep our current infra ‘working’.

“Hey, dude.. there are ONLY three things you need to do?  What the hell’s the hold up?”

Also note I have a 9 to 6 job.

So there’s really only so much time I can spend on this.  Well, to be fair, it isn’t so much ‘time’ as ‘brain cells’.  My brain isn’t as good as it used to be… (hey, what’s that laughter in the background?).   The “Cogito ergo sum” is starting to be quite questionable, since while I think, I’m not sure I am.  And if I am, I am what?

Anyway, I just want to apologize for not getting things done quickly. I am certainly feeling the pressure that the trains are still rolling.







hacks.mozilla.orgNetwork Monitor Reloaded (Part 1)

The Network Monitor tool has been available in Firefox since the earliest days of Firefox Dev Tools. It’s an invaluable tool for anyone who cares about page load performance and fast modern web pages. This tool went through extensive refactoring recently (under the project codename Netmonitor.html), and this post is intended as an explanation of how we designed the new architecture and what cool new technologies we used.

See the Network Monitor running inside the Firefox Developer Toolbox:


One of the main goals of the refactoring was to rebuild the entire tool on top of standard web technologies. We removed all Firefox-specific legacy code like XUL (XML User Interface Language) but also the code that used Firefox-specific APIs. This is a great step forward since using web standards now allows you to run the entire tool’s code base in two different environments:

  • The Developer Toolbox
  • Any web page

The first case is well known to anyone who’s familiar with Firefox Developer Tools (see also the screenshot above). The Developer Toolbox can easily be opened at the bottom of a browser window with various tools, Network Monitor included, at your fingertips.

The second use case is new. Now the tool can be loaded within a browser tab just like any other standard web application. See how it looks in the next screenshot:

Note that the page is loaded from localhost:8000. This is where the development server is running.

The ability to run the tool as a web app is a big deal! Now we can use all in-browser tools for the development workflow. Although it was possible to use DevTools to debug DevTools before (with the Browser Toolbox), it is now so much easier and more convenient to simply use the in-browser tools. And of course, we can also load the tool in other browsers. The development is also simpler since we don’t have to build Firefox. Instead, a simple tab-refresh is enough to get Network Monitor reloaded and test your code changes.


We’ve built the new Network Monitor front-end on top of the following technologies:

Firefox Developer Tools need complex UI features and we are using the popular React & Redux combo for all of our tools to build a clean and consistent code base. The Network Monitor is no exception. We’ve implemented a set of React components that are responsible for rendering the view (UI), a store with all data collected by HTTP interception and finally a set of actions the user might want to execute.

We’ve also changed the way we write tests. Instead of using the Firefox-specific test harness, we are slowly shifting towards well-known libraries like Mocha and Enzyme. This makes it easier to understand our code base and to contribute to it.

We are using Webpack to build a bundle when running inside a web page. The bundle is then served through localhost:8000.

The general architecture follows the unidirectional data flow introduced by the React & Redux pattern.

  • The root component representing the NetMonitorApp can be rendered within Developer Toolbox or a web page.
  • Actions are responsible for things like filtering, clearing the list of requests, sorting and opening a side panel with detailed information.
  • All of our data is stored within a store object, including all the collected data about HTTP traffic.
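
To make that flow concrete, here is a minimal Redux-style reducer in plain JavaScript. The action names and state shape are invented for illustration; the actual Network Monitor code is organized differently:

```javascript
// A pure reducer: given the current state and an action, return the next state.
function requestsReducer(state = { requests: [], filter: "" }, action) {
  switch (action.type) {
    case "ADD_REQUEST":
      // Append a newly intercepted HTTP request.
      return { ...state, requests: [...state.requests, action.request] };
    case "CLEAR_REQUESTS":
      // The "clear the list" action mentioned above.
      return { ...state, requests: [] };
    case "SET_FILTER":
      // Filtering is just another piece of store state the view renders from.
      return { ...state, filter: action.filter };
    default:
      return state;
  }
}

let state = requestsReducer(undefined, { type: "@@INIT" });
state = requestsReducer(state, {
  type: "ADD_REQUEST",
  request: { url: "/index.html", status: 200 },
});
state = requestsReducer(state, { type: "SET_FILTER", filter: "larger-than:50" });
console.log(state.requests.length, state.filter);
```

Because the reducer is a pure function, the store logic can be unit-tested in isolation, which fits well with the Mocha/Enzyme testing approach described earlier.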

New Features

We’ve been focused mostly on codebase refactoring, but there were some new features/UI improvements implemented along the way as well. Let’s see some of them.

Column Picker

There are new columns with additional information about individual requests and the user can use the context menu to select those that are important.

Summary Data

We’ve implemented a better summary for currently displayed requests in the list. It’s now located at the bottom of the panel.

  • Number of requests in the list
  • Size/transferred size of all requests
  • Total time needed to load all requests
  • Time when the DOMContentLoaded event occurred
  • Time when the load event occurred

Filtering By Properties

The existing Filter UI is now a lot more powerful. It’s possible to filter the list of requests according to various properties. For example, you can type: larger-than:50 into the Filter input box to see only those requests that are larger than 50 bytes.

Read more about filtering by properties on MDN.
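
Conceptually, a filter like larger-than:50 splits into a property name and a value, which then becomes a predicate over the request list. The following sketch is illustrative only (the property names and request shape are invented); the real implementation supports many more properties:

```javascript
// Parse "property:value" filter expressions, e.g. "larger-than:50".
function parseFilter(input) {
  const match = /^([a-z-]+):(.+)$/.exec(input.trim());
  if (!match) return null;
  return { property: match[1], value: match[2] };
}

// Apply a parsed filter to a list of request objects.
function applyFilter(requests, filter) {
  switch (filter.property) {
    case "larger-than":
      return requests.filter(r => r.sizeBytes > Number(filter.value));
    case "status":
      return requests.filter(r => String(r.status) === filter.value);
    default:
      return requests; // unknown property: leave the list unchanged
  }
}

const requests = [
  { url: "/app.js", sizeBytes: 120, status: 200 },
  { url: "/ping", sizeBytes: 12, status: 204 },
];
console.log(applyFilter(requests, parseFilter("larger-than:50")));
```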

Learn More in MDN

There are links in many places in the UI pointing to MDN for more information. For example, you can quickly learn how various HTTP headers are used.


We believe that building the new generation of Firefox Developer Tools on top of web standards is the right way to go since it means the tools can run in different environments and integrate more effectively with other projects (e.g., IDEs). Building on web standards makes many things possible: Now we can also think about shipping our tools as an online web service that can benefit from the internet platform. We can share collected data as well as debugging context across the web, opening doors to a real social debugging world.

The Netmonitor.html team has done a tremendous amount of work on the refactoring. Big thanks to the core team:

  • Ricky Chien
  • Fred Lin

But there have been many external contributors as well:

  • Jaroslav Snajdr
  • Leonardo Couto
  • Tim Nguyen
  • Deepjyoti Mondal
  • Locke Chen
  • Michael Brennan
  • Ruturaj Vartak
  • Vangelis Katsikaros
  • Adrien Enault
  • And many more…

Let us know what you think. You can join us on the devtools-html Slack.

Jan ‘Honza’ Odvarko

Read next: Hacking on the Network Monitor

hacks.mozilla.orgHacking on the Network Monitor Developer Tool (Part 2)

In the previous post, Network Monitor Reloaded, we walked through the reasoning for refactoring the Network Monitor tool. We also learned that using web standards for building Dev Tools enables us to run them in different environments – loaded either within the Firefox Developer Toolbox or within a browser tab as a standard web application.

In this companion article, we’ll show you how to try these things and see the Network Monitor in action.

Get to the Source

The Firefox Developer Tools code base is currently part of the Firefox source repository, so downloading it means downloading the entire repo. There are several ways to get the source code and work on it. You might want to start with our GitHub docs for detailed instructions.

One option is to use Mercurial and clone the mozilla-central repository to get a local copy.

# This may take a while...
hg clone https://hg.mozilla.org/mozilla-central/
cd mozilla-central

Part of our strategy to use web standards to build tools for the web also involves moving our code base from Mercurial to Git (on GitHub). So, ultimately, the way to get the source code will change permanently, and it will become easier and faster to clone and work with.

Run Developer Toolbox

For now, if you want to build the Network Monitor and run it inside the Firefox Developer Toolbox, follow these detailed instructions.

Essentially, all you need to do is use the mach command.

cd mozilla-central
./mach build

After the build is complete, start the compiled binary and open the Developer Toolbox (Tools -> Web Developer -> Toggle Tools).

You can rebuild quickly after making changes to the source code as follows:

./mach build faster

Run Development Server

In order to run the Net Monitor inside a web page (experimental), you’ll need to install the following packages:

  • Node.js
  • Yarn

We’ve developed a simple container that allows running Firefox Dev Tools (not only the Network Monitor) inside a web page. This is called Launchpad. The Launchpad is responsible for making a connection to the instance of Firefox being debugged and loading our Network Monitor tool.

The following diagram depicts the entire concept:

  • The Net Monitor tool (client) is running inside a Browser tab just like any other standard web application.
  • The app is served by the development server (server) through localhost:8000
  • The Net Monitor tool (client) is connecting to the target (debugged) Firefox instance through a WebSocket.
  • The target Firefox instance needs to listen on port 6080 to allow the WebSocket connection to be created.
  • The development server is started using yarn start

Let’s take a closer look at how to set up the development environment.

First we need to install dependencies for our development server:

cd mozilla-central
cd devtools/client/netmonitor
yarn install

Now we can run it:

yarn start

If all is ok, you should see the following message:

Development Server Listening at http://localhost:8000

Next, we need to listen for incoming connection in the target Firefox browser we want to debug. Open Developer Toolbar (Tools -> Web Developer -> Developer Toolbar) and type the following command into it. This will start listening so tools can connect to this browser.

listen 6080

The Developer Toolbar UI should be opened at the bottom of the browser window.

Finally, you can load localhost:8000

You should see the Launchpad user interface now. It lists the opened browser tabs in the target Firefox browser. You should also see that one of these tabs is the Launchpad itself (the last net monitor tab running from localhost:8000).

All you need to do is to click one of the tabs you want to debug. As soon as the Launchpad and Network monitor tools connect to the selected browser tab, you can reload the connected tab and see a list of HTTP requests.

If you change the underlying source code and refresh the page you’ll see your changes immediately.

Check out the following screencast for a detailed walk-through of running the Network monitor tool on top of the Launchpad and utilizing the hot-reload feature to see code changes instantly.

You might also want to read the README in mozilla-central/devtools/client/netmonitor/ for more detailed info about how to build and run the Network Monitor tool.

Future Plans

We believe that building tools for the web using standard web technologies is the right way to go! Our tools are for web developers. We’d like you to be able to work with our tools using the same skills and knowledge that you already apply when developing web apps and services.

We are planning many more powerful features for Firefox Dev Tools, and we believe that the future holds a lot of exciting things. Here’s a teaser for what’s ahead on the roadmap.

  • Connecting to Chrome
  • Connecting to NodeJS
  • Integration with existing IDEs

Stay tuned!

Jan ‘Honza’ Odvarko

Air MozillaReps Weekly Meeting Jun. 15, 2017

Reps Weekly Meeting Jun. 15, 2017 This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Mozilla Add-ons BlogWebExtensions in Firefox 55

Firefox 55 landed in Beta this week, so it’s time for another update on WebExtensions. Because the development period for this latest release was about twice as long as normal, we have many more updates. Documentation for the APIs discussed here can be found on MDN.


The webRequest API has seen many improvements. Empty types and URLs in webRequest filters are now rejected. Requests can be cancelled before cookie processing occurs. WebSockets can be processed through webRequest using the ws:// and wss:// protocols. Requests from the top frame now have the correct frameId, and more error conditions on requests are picked up by the onErrorOccurred event.

The sidebar API now re-opens automatically if you reload the add-on using about:debugging or web-ext. If you are following along with project Photon, you’ll note that the sidebar works great with the new Photon designs. Shiny!

The runtime.onMessageExternal API has been implemented, which allows WebExtensions add-ons to communicate with other WebExtensions add-ons. The runtime.onInstalled API will now activate if an add-on is installed temporarily, and the event will now include the previousVersion of the extension.

In order to limit the amount of CSS that a developer has to write and provide some degree of uniformity, there is a browser_style option for the browserAction API. We’ve also provided this to options V2 and the sidebar APIs.

Context menus now work in browserAction popups. The onClickData event in the context menu also gets the frameId. Context menu clicks can now open browser actions, page actions and sidebars. To do this, specify _execute_browser_action, _execute_page_action or _execute_sidebar_action in the command field for creating a context menu.

If you load a page from your extension, you get a long moz-extension://…. URL in the URL bar. We’ve added a notification in the identity box to indicate which extension loaded the page.

Other changes include:

A new API is now available for nsIProfiler. This allows the Gecko Profiler to be used without legacy add-on support. This was essential for the Quantum Flow work happening in Firefox. Because of the sensitive nature of the content and the limited appeal of this API, access to it is currently restricted.


With Firefox 55, the user interface for required and optional permissions is now enabled for WebExtensions add-ons. Required permissions and hosts will trigger a prompt on installation for the user. Here’s an example:

When an extension is updated and the hosts or permissions have changed, the current extension remains enabled, but the user has to accept the updated permissions in order to continue.

There is also a new user interface for side loading add-ons that is more consistent with other installation methods. Side loading is when extensions are installed outside of Firefox, by other software. It now appears in the hamburger menu as a notification:

This permissions dialog is slightly different as well:

Once an extension has been installed, if it would like more permissions or hosts, it can ask for those as needed — these are called optional permissions. They are accessible using the browser.permissions.request API. An example of using optional permissions is available in the example repository on github.
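For illustration only (the permission strings below are hypothetical placeholders, not taken from the post), the flow has two halves: declare the optional permissions in manifest.json, then ask for them at runtime with browser.permissions.request, which must be called in response to a user action and resolves to a boolean:

```js
// manifest.json fragment (hypothetical values):
//   "optional_permissions": ["*://*.example.com/*", "notifications"]

// At runtime, inside a user-input handler (required for the prompt to show):
browser.permissions.request({
  permissions: ["notifications"],
  origins: ["*://*.example.com/*"]
}).then((granted) => {
  // granted is true if the user accepted the prompt
});
```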

Developer tools

With the introduction of devtools.inspectedWindow.eval bindings, many more add-ons are now able to support WebExtensions APIs. The developer tools team at Mozilla has been reaching out to developers with add-ons that might be affected as you can see on Twitter. For example, the Redux DevTools extension is now a WebExtensions add-on using the same code base as other browsers.

An API for devtools.panels.themeName has been implemented. The devtools panel icon is no longer inverted if a light theme is chosen.

There have been some improvements to the about:debugging page:

These changes are aimed at improving the ease of development. Temporary extensions now appear at the top of the page, a remove button is present, help is shown if the extension has a temporary ID, the location of the extension on the file system is shown, and the internal UUID is shown.


Firefox for Android has gained browserAction support. Currently, a textual menu item is added to the bottom of the menu on Android. It supports browserAction.onClicked, setTitle, and getTitle. Tabs support was added to pageAction.


The beginnings of theme support, as detailed in this blog post, have landed in Firefox. In Firefox 55 you can use the browser.theme.update API. The theme API allows you to set some key values in Firefox, such as:

{
  images: {
    headerURL: "header.png",
  },
  colors: {
    accentcolor: "#000",
    textcolor: "#fff",
  }
}
This WebExtensions API will apply the theme to Firefox by setting the header image and some CSS colors. At this point the theme API applies a very similar set of functionality as the existing lightweight theme. However, using this API you can do this dynamically in your extension.

Additionally, APIs have been implemented for themes. These allow you to enable and disable themes by only using the management API. For an example check out the example repository on github.


The proxy API allows extension authors to insert proxy configuration files into Firefox. This API implementation is quite different from the one in Chrome to take advantage of some of the improved support in Firefox for proxies. As a result, to prevent confusion, this API is not present in the chrome namespace.

The proxy configuration file will contain a function for dealing with the incoming request:

function FindProxyForURL(url, host) {
  // ...
}

And this will then be registered in the API:
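The registration snippet is missing from this copy of the post; as a hedged sketch, the Firefox API at the time was along these lines (assumption: browser.proxy.register takes the path of the proxy script within the extension, here a hypothetical proxy.js):

```js
// Hypothetical registration of the PAC-style script above
// (assumption: the script is shipped as proxy.js in the extension).
browser.proxy.register("proxy.js");
```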


For an example of using the proxy API, please see the repository on github.


One focus of the Firefox 55 release was the performance of WebExtensions, particularly the scenario where there is at least one WebExtension on startup.

These performance improvements include speeding up host matching, limiting the cloning of add-on messages, and lazily loading APIs only when they are needed. We’ve also been adding telemetry measurements to Firefox, such as background page load times and extension startup times.

The next largest performance gain is the moving of WebExtensions add-ons to their own process. To enable this, we made the debugging tools work seamlessly with out-of-process add-ons. We are hoping to enable this feature for Windows users in Firefox 56 once the remaining graphics issues have been resolved.

You can see some of the results of these performance improvements in the Quantum newsletter, which Ehsan posts to his blog. These improvements aren’t limited to WebExtensions add-ons. For example, the introduction of off-main-thread script decoding brought a large improvement in startup measurements for all Firefox users, as well as those with WebExtensions add-ons:


As ever we need to thank the community who contributed to this release. This includes: Tushar Saini, Tomislav Jovanovic, Rob Wu, Martin Giger and Geoff Lankow. Thank you to you all.

The post WebExtensions in Firefox 55 appeared first on Mozilla Add-ons Blog.

Air MozillaThe Joy of Coding - Episode 102

The Joy of Coding - Episode 102 mconley livehacks on real Firefox bugs while thinking aloud.

Mozilla Add-ons BlogAdd-ons Update – 2017/06

Here’s the monthly update of the state of the add-ons world.

The Road to Firefox 57 explains what developers should look forward to in regards to add-on compatibility for the rest of the year. So please give it a read if you haven’t already.

The Review Queues

In the past month, our team reviewed 2,209 listed add-on submissions:

  • 1,202 in fewer than 5 days (54%).
  • 173 between 5 and 10 days (8%).
  • 834 after more than 10 days (38%).

235 listed add-ons are awaiting review.

If you compare these numbers with last month’s, you’ll see a very clear difference, both in reviews done and add-ons still awaiting review. The admin reviewers have been doing an excellent job clearing the queues of add-ons that use the WebExtensions API, which are generally safer and can be reviewed more easily. There’s still work to do so we clear the review backlog, but we’re on track to being in a good place by the end of the month.

However, this doesn’t mean we won’t need volunteer reviewers in the future. If you’re an add-on developer and are looking for contribution opportunities, please consider joining us. Visit our wiki page for more information.

Compatibility Update

We published the blog post for 55 and the bulk validation script will be run in a week or so. The compatibility post for 56 is still a few weeks away.

Make sure you’ve tested your add-ons and either use WebExtensions or set the multiprocess compatible flag in your manifest to ensure they continue working in Firefox. And as always, we recommend that you test your add-ons on Beta.

You may also want to review the post about upcoming changes to the Developer Edition channel. Firefox 55 is the first version that will move directly from Nightly to Beta.

If you’re an add-ons user, you can install the Add-on Compatibility Reporter to identify and report any add-ons that aren’t working anymore.


We would like to thank the following people for their recent contributions to the add-ons world:

  • Tushar Saini
  • harikishen
  • Geoff Lankow
  • Trishul Goel
  • Andrew Truong
  • raajitr
  • Christophe Villeneuve
  • zombie
  • Perry Jiang
  • vietngoc

You can read more about their work in our recognition page.

The post Add-ons Update – 2017/06 appeared first on Mozilla Add-ons Blog.

hacks.mozilla.orgA crash course in memory management

This is the 1st article in a 3-part series:

  1. A crash course in memory management
  2. A cartoon intro to ArrayBuffers and SharedArrayBuffers
  3. Avoiding race conditions in SharedArrayBuffers with Atomics

To understand why ArrayBuffer and SharedArrayBuffer were added to JavaScript, you need to understand a bit about memory management.

You can think of memory in a machine as a bunch of boxes. I think of these like the mailboxes that you have in offices, or the cubbies that pre-schoolers have to store their things.

If you need to leave something for one of the other kids, you can put it inside a box.

A column of boxes with a child putting something in one of the boxes

Next to each one of these boxes, you have a number, which is the memory address. That’s how you tell someone where to find the thing you’ve left for them.

Each one of these boxes is the same size and can hold a certain amount of info. The size of the box is specific to the machine. That size is called word size. It’s usually something like 32 or 64 bits. But to make it easier to show, I’m going to use a word size of 8 bits.

A box with 8 smaller boxes in it

If we wanted to put the number 2 in one of these boxes, we could do it easily. Numbers are easy to represent in binary.

The number two, converted to binary 00000010 and put inside the boxes

What if we want something that’s not a number though? Like the letter H?

We’d need to have a way to represent it as a number. To do that, we need an encoding, something like UTF-8. And we’d need something to turn it into that number… like an encoder ring. And then we can store it.

The letter H, put through an encoder ring to get 72, which is then converted to binary and put in the boxes

When we want to get it back out of the box, we’d have to put it through a decoder to translate it back to H.
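In JavaScript, the encoder ring is available as the standard TextEncoder and TextDecoder; a quick sketch of the round trip described above:

```javascript
// TextEncoder is the "encoder ring": characters in, UTF-8 bytes out.
const bytes = new TextEncoder().encode("H");

console.log(bytes[0]); // 72
console.log(bytes[0].toString(2).padStart(8, "0")); // "01001000"

// TextDecoder runs the stored bytes back through the ring to get H again.
console.log(new TextDecoder().decode(bytes)); // "H"
```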

Automatic memory management

When you’re working in JavaScript you don’t actually need to think about this memory. It’s abstracted away from you. This means you don’t touch the memory directly.

Instead, the JS engine acts as an intermediary. It manages the memory for you.

A column of boxes with a rope in front of it and the JS engine standing at that rope like a bouncer

So let’s say some JS code, like React, wants to create a variable.

Same as above, with React asking the JS engine to create a variable

What the JS engine does is run that value through an encoder to get the binary representation of the value.

The JS engine using an encoder ring to convert the string to binary

And it will find space in the memory that it can put that binary representation into. This process is called allocating memory.

The JS engine finding space for the binary in the column of boxes

Then, the engine will keep track of whether or not this variable is still accessible from anywhere in the program. If the variable can no longer be reached, the memory is going to be reclaimed so that the JS engine can put new values there.

The garbage collector clearing out the memory

This process of watching the variables—strings, objects, and other kinds of values that go in memory—and clearing them out when they can’t be reached anymore is called garbage collection.

Languages like JavaScript, where the code doesn’t deal with memory directly, are called memory-managed languages.

This automatic memory management can make things easier for developers. But it also adds some overhead. And that overhead can sometimes make performance unpredictable.

Manual memory management

Languages with manually managed memory are different. For example, let’s look at how React would work with memory if it were written in C (which would be possible now with WebAssembly).

C doesn’t have that layer of abstraction that JavaScript does on the memory. Instead, you’re operating directly on memory. You can load things from memory, and you can store things to memory.

A WebAssembly version of React working with memory directly

When you’re compiling C or other languages down to WebAssembly, the tool that you use will add in some helper code to your WebAssembly. For example, it would add code that handles encoding and decoding bytes. This code is called a runtime environment. The runtime environment will help handle some of the stuff that the JS engine does for JS.

An encoder ring being shipped down as part of the .wasm file

But for a manually managed language, that runtime won’t include garbage collection.

This doesn’t mean that you’re totally on your own. Even in languages with manual memory management, you’ll usually get some help from the language runtime. For example, in C, the runtime will keep track of which memory addresses are open in something called a free list.

A free list next to the column of boxes, listing which boxes are free right now

You can use the function malloc (short for memory allocate) to ask the runtime to find some memory addresses that can fit your data. This will take those addresses off of the free list. When you’re done with that data, you have to call free to deallocate the memory. Then those addresses will be added back to the free list.

You have to figure out when to call those functions. That’s why it’s called manual memory management—you manage the memory yourself.

As a developer, figuring out when to clear out different parts of memory can be hard. If you do it at the wrong time, it can cause bugs and even lead to security holes. If you don’t do it, you run out of memory.

This is why many modern languages use automatic memory management—to avoid human error. But that comes at the cost of performance. I’ll explain more about this in the next article.

hacks.mozilla.orgA cartoon intro to ArrayBuffers and SharedArrayBuffers

This is the 2nd article in a 3-part series:

  1. A crash course in memory management
  2. A cartoon intro to ArrayBuffers and SharedArrayBuffers
  3. Avoiding race conditions in SharedArrayBuffers with Atomics

In the last article, I explained how memory-managed languages like JavaScript work with memory. I also explained how manual memory management works in languages like C.

Why is this important when we’re talking about ArrayBuffers and SharedArrayBuffers?

It’s because ArrayBuffers give you a way to handle some of your data manually, even though you’re working in JavaScript, which has automatic memory management.

Why is this something that you would want to do?

As we talked about in the last article, there’s a trade-off with automatic memory management. It is easier for the developer, but it adds some overhead. In some cases, this overhead can lead to performance problems.

A balancing scale showing that automatic memory management is easier to understand, but harder to make fast

For example, when you create a variable in JS, the engine has to guess what kind of variable this is and how it should be represented in memory. Because it’s guessing, the JS engine will usually reserve more space than it really needs for a variable. Depending on the variable, the memory slot may be 2–8 times larger than it needs to be, which can lead to lots of wasted memory.

Additionally, certain patterns of creating and using JS objects can make it harder to collect garbage. If you’re doing manual memory management, you can choose an allocation and de-allocation strategy that’s right for the use case that you’re working on.

Most of the time, this isn’t worth the trouble. Most use cases aren’t so performance sensitive that you need to worry about manual memory management. And for common use cases, manual memory management may even be slower.

But for those times when you need to work at a low-level to make your code as fast as possible, ArrayBuffers and SharedArrayBuffers give you an option.

A balancing scale showing that manual memory management gives you more control for performance fine-tuning, but requires more thought and planning

So how does an ArrayBuffer work?

It’s basically like working with any other JavaScript array. Except, when using an ArrayBuffer, you can’t put any JavaScript types into it, like objects or strings. The only thing that you can put into it are bytes (which you can represent using numbers).

Two arrays, a normal array which can contain numbers, objects, strings, etc, and an ArrayBuffer, which can only contain bytes

One thing I should make clear here is that you aren’t actually adding this byte directly to the ArrayBuffer. By itself, this ArrayBuffer doesn’t know how big the byte should be, or how different kinds of numbers should be converted to bytes.

The ArrayBuffer itself is just a bunch of zeros and ones all in a line. The ArrayBuffer doesn’t know where the division should be between the first element and the second element in this array.

A bunch of ones and zeros in a line

To provide context, and to actually break this up into boxes, we need to wrap it in what’s called a view. These views on the data can be added with typed arrays, and there are lots of different kinds of typed arrays to work with.

For example, you could have an Int8 typed array which would break this up into 8-bit bytes.

Those ones and zeros broken up into boxes of 8

Or you could have an unsigned Int16 array, which would break it up into 16-bit chunks, and also handle this as if it were an unsigned integer.

Those ones and zeros broken up into boxes of 16

You can even have multiple views on the same base buffer. Different views will give you different results for the same operations.

For example, if we get elements 0 & 1 from the Int8 view on this ArrayBuffer, it will give us different values than element 0 in the Uint16 view, even though they contain exactly the same bits.

Those ones and zeros broken up into boxes of 16
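The two-view idea above can be sketched concretely (the byte values are chosen so the 16-bit result is the same regardless of the machine’s byte order):

```javascript
const buffer = new ArrayBuffer(2);          // two bytes of raw zeros and ones
const int8View = new Int8Array(buffer);     // 8-bit boxes: 2 elements
const uint16View = new Uint16Array(buffer); // 16-bit boxes: 1 element

int8View[0] = 1;
int8View[1] = 1;

// Same bits, different box sizes, different answers.
console.log(int8View[0], int8View[1]); // 1 1
console.log(uint16View[0]);            // 257 (0x0101), read 16 bits at a time
```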

In this way, the ArrayBuffer basically acts like raw memory. It emulates the kind of direct memory access that you would have in a language like C.

You may be wondering why don’t we just give programmers direct access to memory instead of adding this layer of abstraction. Giving direct access to memory would open up some security holes. I will explain more about this in a future article.

So, what is a SharedArrayBuffer?

To explain SharedArrayBuffers, I need to explain a little bit about running code in parallel and JavaScript.

You would run code in parallel to make your code run faster, or to make it respond faster to user events. To do this, you need to split up the work.

In a typical app, the work is all taken care of by a single individual—the main thread. I’ve talked about this before… the main thread is like a full-stack developer. It’s in charge of JavaScript, the DOM, and layout.

Anything you can do to remove work from the main thread’s workload helps. And under certain circumstances, ArrayBuffers can reduce the amount of work that the main thread has to do.

The main thread standing at its desk with a pile of paperwork. The top part of that pile has been removed

But there are times when reducing the main thread’s workload isn’t enough. Sometimes you need to bring in reinforcements… you need to split up the work.

In most programming languages, the way you usually split up the work is by using something called a thread. This is basically like having multiple people working on a project. If you have tasks that are pretty independent of each other, you can give them to different threads. Then, both those threads can be working on their separate tasks at the same time.

In JavaScript, the way you do this is using something called a web worker. These web workers are slightly different than the threads you use in other languages. By default they don’t share memory.

Two threads at desks next to each other. Their piles of paperwork are half as tall as before. There is a chunk of memory below each, but not connected to the other's memory

This means if you want to share some data with the other thread, you have to copy it over. This is done with the function postMessage.

postMessage takes whatever object you put into it, serializes it, and sends it over to the other web worker, where it’s deserialized and put in memory.

Thread 1 shares memory with thread 2 by serializing it, sending it across, where it is copied into thread 2's memory

That’s a pretty slow process.

For some kinds of data, like ArrayBuffers, you can do what is called transferring memory. That means moving that specific block of memory over so that the other web worker has access to it.

But then the first web worker doesn’t have access to it anymore.

Thread 1 shares memory with thread 2 by transferring it. Thread 1 no longer has access to it

That works for some use cases, but for many use cases where you want to have this kind of high performance parallelism, what you really need is to have shared memory.

This is what SharedArrayBuffers give you.

The two threads get some shared memory which they can both access

With the SharedArrayBuffer, both web workers, both threads, can be writing data and reading data from the same chunk of memory.

This means they don’t have the communication overhead and delays that you would have with postMessage. Both web workers have immediate access to the data.

There is some danger in having this immediate access from both threads at the same time though. It can cause what are called race conditions.

Drawing of two threads racing towards memory

I’ll explain more about those in the next article.

What’s the current status of SharedArrayBuffers?

SharedArrayBuffers will be in all of the major browsers soon.

Logos of the major browsers high-fiving

They’ve already shipped in Safari (in Safari 10.1). Both Firefox and Chrome will be shipping them in their July/August releases. And Edge plans to ship them in their fall Windows update.

Even once they are available in all major browsers, we don’t expect application developers to be using them directly. In fact, we recommend against it. You should be using the highest level of abstraction available to you.

What we do expect is that JavaScript library developers will create libraries that give you easier and safer ways to work with SharedArrayBuffers.

In addition, once SharedArrayBuffers are built into the platform, WebAssembly can use them to implement support for threads. Once that’s in place, you’d be able to use the concurrency abstractions of a language like Rust, which has fearless concurrency as one of its main goals.

In the next article, we’ll look at the tools (Atomics) that these library authors would use to build up these abstractions while avoiding race conditions.

Layer diagram showing SharedArrayBuffer + Atomics as the foundation, and JS libraries and WebAssembly threading building on top

hacks.mozilla.orgAvoiding race conditions in SharedArrayBuffers with Atomics

This is the 3rd article in a 3-part series:

  1. A crash course in memory management
  2. A cartoon intro to ArrayBuffers and SharedArrayBuffers
  3. Avoiding race conditions in SharedArrayBuffers with Atomics

In the last article, I talked about how using SharedArrayBuffers could result in race conditions. This makes working with SharedArrayBuffers hard. We don’t expect application developers to use SharedArrayBuffers directly.

But library developers who have experience with multithreaded programming in other languages can use these new low-level APIs to create higher-level tools. Then application developers can use these tools without touching SharedArrayBuffers or Atomics directly.

Layer diagram showing SharedArrayBuffer + Atomics as the foundation, and JS libraries and WebAssembly threading building on top

Even though you probably shouldn’t work with SharedArrayBuffers and Atomics directly, I think it’s still interesting to understand how they work. So in this article, I’ll explain what kinds of race conditions concurrency can bring, and how Atomics help libraries avoid them.

But first, what is a race condition?

Drawing of two threads racing towards memory


Race conditions: an example you may have seen before

A pretty straightforward example of a race condition can happen when you have a variable that is shared between two threads. Let’s say one thread wants to load a file and the other thread checks whether it exists. They share a variable, fileExists, to communicate.

Initially, fileExists is set to false.

Two threads working on some code. Thread 1 is loading a file if fileExists is true, and thread 2 is setting fileExists

As long as the code in thread 2 runs first, the file will be loaded.

Diagram showing thread 2 going first and file load succeeding

But if the code in thread 1 runs first, then it will log an error to the user, saying that the file does not exist.

Diagram showing thread 1 going first and file load failing

But that’s not the problem. It’s not that the file doesn’t exist. The real problem is the race condition.

Many JavaScript developers have run into this kind of race condition, even in single-threaded code. You don’t have to understand anything about multithreading to see why this is a race.

However, there are some kinds of race conditions which aren’t possible in single-threaded code, but that can happen when you’re programming with multiple threads and those threads share memory.

Different classes of race conditions and how Atomics help

Let’s explore some of the different kinds of race conditions you can have in multithreaded code and how Atomics help prevent them. This doesn’t cover all possible race conditions, but should give you some idea why the API provides the methods that it does.

Before we start, I want to say again: you shouldn’t use Atomics directly. Writing multithreaded code is a known hard problem. Instead, you should use reliable libraries to work with shared memory in your multithreaded code.

Caution sign

With that out of the way…

Race conditions in a single operation

Let’s say you had two threads that were incrementing the same variable. You might think that the end result would be the same regardless of which thread goes first.

Diagram showing two threads incrementing a variable in turn

But even though, in the source code, incrementing a variable looks like a single operation, when you look at the compiled code, it is not a single operation.

At the CPU level, incrementing a value takes three instructions. That’s because the computer has both long-term memory and short-term memory. (I talk more about how this all works in another article).

Drawing of a CPU and RAM

All of the threads share the long-term memory. But the short-term memory—the registers—are not shared between threads.

Each thread needs to pull the value from memory into its short-term memory. After that, it can run the calculation on that value in short-term memory. Then it writes that value back from its short-term memory to the long-term memory.

Diagram showing a variable being loaded from memory to a register, then being operated on, and then being stored back to memory

If all of the operations in thread 1 happen first, and then all the operations in thread 2 happen, we will end up with the result that we want.

Flow chart showing instructions happening sequentially on one thread, then the other

But if they are interleaved in time, the value that thread 2 has pulled into its register gets out of sync with the value in memory. This means that thread 2 doesn’t take thread 1’s calculation into consideration. Instead, it just clobbers the value that thread 1 wrote to memory with its own value.

Flow chart showing instructions interleaved between threads

One thing atomic operations do is take these operations that humans think of as being single operations, but which the computer sees as multiple operations, and makes the computer see them as single operations, too.

This is why they’re called atomic operations. It’s because they take an operation that would normally have multiple instructions—where the instructions could be paused and resumed—and it makes it so that they all happen seemingly instantaneously, as if it were one instruction. It’s like an indivisible atom.

Instructions encased in an atom

Using atomic operations, the code for incrementing would look a little different.

Atomics.add(sabView, index, 1)

Now that we’re using Atomics.add, the different steps involved in incrementing the variable won’t be mixed up between threads. Instead, one thread will finish its atomic operation and prevent the other one from starting. Then the other will start its own atomic operation.

Flow chart showing atomic execution of the instructions
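Here’s that call in runnable form, as a single-threaded demonstration (the sabView and index names mirror the snippet above):

```javascript
const sabView = new Int32Array(new SharedArrayBuffer(4));

// Each Atomics.add is a read-modify-write done as one indivisible step,
// so interleaved threads could never clobber each other's increment.
Atomics.add(sabView, 0, 1);
Atomics.add(sabView, 0, 1);
console.log(Atomics.load(sabView, 0)); // 2
```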

The Atomics methods that help avoid this kind of race are:

  • Atomics.add
  • Atomics.sub
  • Atomics.and
  • Atomics.or
  • Atomics.xor
  • Atomics.exchange

You’ll notice that this list is fairly limited. It doesn’t even include things like division and multiplication. A library developer could create atomic-like operations for other things, though.

To do that, the developer would use Atomics.compareExchange. With this, you get a value from the SharedArrayBuffer, perform an operation on it, and only write it back to the SharedArrayBuffer if no other thread has updated it since you first checked. If another thread has updated it, then you can get that new value and try again.
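As a sketch of that retry loop, here’s a hypothetical atomicMultiply built on compareExchange (Atomics provides no multiply, so the helper name is made up for illustration):

```javascript
const shared = new Int32Array(new SharedArrayBuffer(4));
shared[0] = 6;

// Read the value, compute the product, and only write it back if no
// other thread changed the slot in between. compareExchange returns the
// value it found; if that isn't what we read, another thread won, so retry.
function atomicMultiply(view, index, factor) {
  let oldValue;
  do {
    oldValue = Atomics.load(view, index);
  } while (
    Atomics.compareExchange(view, index, oldValue, oldValue * factor) !== oldValue
  );
}

atomicMultiply(shared, 0, 7);
console.log(shared[0]); // 42
```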

Race conditions across multiple operations

So those Atomic operations help avoid race conditions during “single operations”. But sometimes you want to change multiple values on an object (using multiple operations) and make sure no one else is making changes to that object at the same time. Basically, this means that during every pass of changes to an object, that object is on lockdown and inaccessible to other threads.

The Atomics object doesn’t provide any tools to handle this directly. But it does provide tools that library authors can use to handle this. What library authors can create is a lock.

Diagram showing two threads and a lock

If code wants to use locked data, it has to acquire the lock for the data. Then it can use the lock to lock out the other threads. Only it will be able to access or update the data while the lock is active.

To build a lock, library authors would use Atomics.wait and Atomics.wake, plus others such as Atomics.compareExchange and Atomics.store. If you want to see how these would work, take a look at this basic lock implementation.
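As a rough sketch of that idea, a minimal lock over a single Int32 slot might look like this. (Note: engines later standardized Atomics.wake under the name Atomics.notify, which is what this sketch uses.)

```javascript
// A minimal mutex sketch: one shared Int32 slot, 0 = unlocked, 1 = locked.
class Lock {
  constructor(view, index) { this.view = view; this.index = index; }

  lock() {
    // Try to flip 0 -> 1. If another thread already holds the lock,
    // block until it notifies us, then try again.
    while (Atomics.compareExchange(this.view, this.index, 0, 1) !== 0) {
      Atomics.wait(this.view, this.index, 1);
    }
  }

  unlock() {
    Atomics.store(this.view, this.index, 0);  // release the lock
    Atomics.notify(this.view, this.index, 1); // wake one waiting thread
  }
}
```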

In this case, thread 2 would acquire the lock for the data and set the value of locked to true. This means thread 1 can’t access the data until thread 2 unlocks.

Thread 2 gets the lock and uses it to lock up shared memory

If thread 1 needs to access the data, it will try to acquire the lock. But since the lock is already in use, it can’t. The thread would then wait—so it would be blocked—until the lock is available.

Thread 1 waits until the lock is unlocked

Once thread 2 is done, it would call unlock. The lock would notify one or more of the waiting threads that it’s now available.

Thread 1 is notified that the lock is available

That thread could then scoop up the lock and lock up the data for its own use.

Thread 1 uses the lock

A lock library would use many of the different methods on the Atomics object, but the methods that are most important for this use case are:

  • Atomics.wait
  • Atomics.wake

Race conditions caused by instruction reordering

There’s a third synchronization problem that Atomics take care of. This one can be surprising.

You probably don’t realize it, but there’s a very good chance that the code you’re writing isn’t running in the order you expect it to. Both compilers and CPUs reorder code to make it run faster.

For example, let’s say you’ve written some code to calculate a total. You want to set a flag when the calculation is finished.

subTotal = price + fee;
total += subTotal;
isDone = true;

To compile this, we need to decide which register to use for each variable. Then we can translate the source code into instructions for the machine.

Diagram showing what that would equal in mock assembly

So far, everything is as expected.

What’s not obvious if you don’t understand how computers work at the chip level (and how the pipelines that they use for executing code work) is that line 2 in our code needs to wait a little bit before it can execute.

Most computers break down the process of running an instruction into multiple steps. This makes sure all of the different parts of the CPU are busy at all times, so it makes the best use of the CPU.

Here’s one example of the steps an instruction goes through:

  1. Fetch the next instruction from memory
  2. Figure out what the instruction is telling us to do (aka decode the instruction), and get the values from the registers
  3. Execute the instruction
  4. Write the result back to the register

Pipeline Stage 1: fetch the instruction
Pipeline Stage 2: decode the instruction and fetch register values
Pipeline Stage 3: Execute the operation
Pipeline Stage 4: Write back the result

So that’s how one instruction goes through the pipeline. Ideally, we want to have the second instruction following directly after it. As soon as it has moved into stage 2, we want to fetch the next instruction.

The problem is that there is a dependency between instruction #1 and instruction #2.

Diagram of a data hazard in the pipeline

We could just pause the CPU until instruction #1 has updated subTotal in the register. But that would slow things down.

To make things more efficient, what a lot of compilers and CPUs will do is reorder the code. They will look for other instructions which don’t use subTotal or total and move those in between those two lines.

Drawing of line 3 of the assembly code being moved between lines 1 and 2

This keeps a steady stream of instructions moving through the pipe.

Because line 3 didn’t depend on any values in line 1 or 2, the compiler or CPU figures it’s safe to reorder like this. When you’re running in a single thread, no other code will even see these values until the whole function is done, anyway.

But when you have another thread running at the same time on another processor, that’s not the case. The other thread doesn’t have to wait until the function is done to see these changes. It can see them almost as soon as they are written back to memory. So it can tell that isDone was set before total.

If you were using isDone as a flag that the total had been calculated and was ready to use in the other thread, then this kind of reordering would create race conditions.

Atomics attempt to solve some of these bugs. When you use an Atomic write, it’s like putting a fence between two parts of your code.

Atomic operations aren’t reordered relative to each other, and other operations aren’t moved around them. In particular, two operations that are often used to enforce ordering are:

  • Atomics.store
  • Atomics.load

All variable updates above Atomics.store in the function’s source code are guaranteed to be done before Atomics.store is done writing its value back to memory. Even if the non-Atomic instructions are reordered relative to each other, none of them will be moved below a call to Atomics.store which comes below in the source code.

And all variable loads after Atomics.load in a function are guaranteed to be done after Atomics.load fetches its value. Again, even if the non-atomic instructions are reordered, none of them will be moved above an Atomics.load that comes above them in the source code.

Diagram showing Atomics.store and Atomics.load maintaining order
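Putting the two guarantees together, the isDone example might be sketched like this. (A simplified single-file sketch; in real code the writer and reader would be separate threads sharing the same buffer.)

```javascript
// Publishing a result with Atomics.store / Atomics.load so the "done"
// flag can't be reordered around the data it guards.
const sab = new SharedArrayBuffer(8);
const view = new Int32Array(sab); // view[0] = total, view[1] = isDone

// Writer thread:
view[0] = 42;              // plain write: the computed total
Atomics.store(view, 1, 1); // every write above is visible before this flag

// Reader thread:
while (Atomics.load(view, 1) === 0) {
  // spin until the flag is set
}
const total = view[0];     // guaranteed to observe 42
```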

Note: The while loop I show here is called a spinlock and it’s very inefficient. And if it’s on the main thread, it can bring your application to a halt. You almost certainly don’t want to use that in real code.

Once again, these methods aren’t really meant for direct use in application code. Instead, libraries would use them to create locks.


Programming multiple threads that share memory is hard. There are many different kinds of race conditions just waiting to trip you up.

Drawing of shared memory with a dragon and "Here be dragons" above

This is why you don’t want to use SharedArrayBuffers and Atomics in your application code directly. Instead, you should depend on proven libraries by developers who are experienced with multithreading, and who have spent time studying the memory model.

It is still early days for SharedArrayBuffer and Atomics. Those libraries haven’t been created yet. But these new APIs provide the basic foundation to build on top of.

The Mozilla BlogMozilla Launches Campaign to Raise Awareness for Internet Health

Today, Mozilla unveils several initiatives including an event focused on Internet Health with special guests DeRay McKesson, Lauren Duca and more, a brand new podcast, new tech to help create a voice database, as well as some local SF pop-ups.

Mozilla is doing this to draw the public’s attention to mounting concern over the consolidation of power online, including the Federal Communications Commission’s proposed actions to kill net neutrality.

New Polling

60 percent of people in the U.S. are worried about online services being owned by a small number of companies, according to a new Mozilla/Ipsos poll released today.

“The Internet is a vital tool that touches every aspect of modern life,” said Mark Surman, Mozilla’s Executive Director. “If you care about freedom of speech, economic growth and a level playing field, then you care about guarding against those who would throttle, lock down or monopolize the web as if they owned it.”

According to another Mozilla/Ipsos poll, seventy-six percent of people in the U.S. support net neutrality.

“At Mozilla, we’re fueling a movement to ensure the web is something that belongs to all of us. Forever,” Surman added.

“A Night for Internet Health”

On Thursday, June 29, Mozilla will host “A Night for Internet Health” — a free live event featuring prominent thinkers, performers, and political voices discussing power, progress, and life on the Web.

Mozilla will be joined by musician Neko Case, Pod Save the People host DeRay McKesson, Teen Vogue columnist Lauren Duca, comedian Moshe Kasher, tech media personality Veronica Belmont, and Sens. Al Franken and Ron Wyden via video.

The event is from 7-10 p.m. (PDT), June 29 at the SFJazz Center in San Francisco. Tickets will be available through the Center’s Box Office starting on June 15.

Credentials are available for media.

IRL podcast

On June 26, Mozilla will debut the podcast IRL: Because Online Life is Real Life. Host Veronica Belmont will share stories from the wilds of the Web, and real talk about online issues that affect us all.

People can listen to the IRL trailer or pre-subscribe to IRL on Apple Podcasts, Stitcher, Pocket Casts, Overcast, or RadioPublic.

Project Common Voice: The World’s First Crowdsourced Voice Database

Voice-enabled devices represent the next major disruption, but access to databases is expensive and doesn’t include a diverse set of accents and languages. Mozilla’s Project Common Voice aims to solve the problem by inviting people to donate samples of their voices to a massive global project that will allow anyone to quickly and easily train voice-enabled applications. Mozilla will make this resource available to the public later this year.

The project will be featured at guerilla pop-ups in San Francisco, where people can also create custom tote bags or grab a T-shirt that expresses their support for a healthy Internet and net neutrality.

To get started, you can download the Common Voice iOS app and visit the project’s website.


  • Wednesday, June 28: From noon – 6 p.m. PDT at Justin Herman Plaza in San Francisco.
  • Thursday, June 29: From 7 – 10 p.m. PDT at SFJazz in San Francisco.
  • Friday, June 30 – July 1:  From noon – 6 p.m. PDT at Union Square in San Francisco.

SF Take-Over

Beginning on Monday, June 19, Mozilla will launch a provocative advertising campaign across San Francisco and online, highlighting what’s at stake with the attacks on net neutrality and power consolidation on the web.

The advertisements juxtapose opposing messages, highlighting the power dynamics of the Internet and offering steps people can take to create a healthier Internet. For example, one advertisement contrasts “Let’s Kill Innovation” with “Actually, let’s not. Raise your voice for net neutrality.”

San Franciscans and visitors will see the ads across the city: placed along Market and Embarcadero Streets, at San Francisco Airport, and projected on buildings, as well as online, on radio, on social media, and on prominent websites.

About Mozilla

Mozilla has been a pioneer and advocate for the open web for more than 15 years. We promote open standards that enable innovation and advance the Web as a platform for all. Today, hundreds of millions of people worldwide use Mozilla Firefox to experience the Web on computers, tablets and mobile devices. For more information, visit

The post Mozilla Launches Campaign to Raise Awareness for Internet Health appeared first on The Mozilla Blog.

Air MozillaRust Bay Area Meetup June 2017

Rust Bay Area Meetup June 2017 Tentative agenda will be: - Andrew Stone from VMWare talking about Haret - William Morgan from Buoyant talking about linkerd-tcp

Air MozillaRust Libs Meeting 2017-06-13

Rust Libs Meeting 2017-06-13 walkdir crate evaluation

The Mozilla BlogThe Best Firefox Ever

With E10s, our new version of Firefox nails the “just right” balance between memory and speed

On the Firefox team, one thing we always hear from our users is that they rely on the web for complex tasks like trip planning and shopping comparisons. That often means having many tabs open. And the sites and web apps running in those tabs often have lots of things going on – animations, videos, big pictures and more. Complex sites are more and more common. The average website today is nearly 2.5 megabytes – the same size as the original version of the game Doom, according to Wired. Up until now, a complex site in one Firefox tab could slow down all the others. That often meant a less than perfect browsing experience.

To make Firefox run even complex sites faster, we’ve been changing it to run using multiple operating system processes. Translation? The old Firefox used a single process to run all the tabs in a browser. Modern browsers split the load into several independent processes. We named our project to split Firefox into multiple processes ‘Electrolysis’ (or E10s) after the chemical process that divides water into its core elements. E10s is the largest change to Firefox code in our history. And today we’re launching our next big phase of the E10s initiative.

A Faster Firefox With Four Content Processes

With today’s release, Firefox uses up to four processes to run web page content across all open tabs. This means that a heavy, complex web page in one tab has a much lower impact on the responsiveness and speed in other tabs. By separating the tabs into separate processes, we make better use of the hardware on your computer, so Firefox can deliver you more of the web you love, with less waiting.

I’ve been living with this turned on by default in the pre-release version of Firefox (Nightly). The performance improvements are remarkable. Besides running faster and crashing less, E10s makes websites feel smoother. Even busy pages, like Facebook newsfeeds, spool out smoothly and cleanly. After making the switch to Firefox with E10s, now I can’t live without it.

Firefox 54 with E10s makes sites run much better on all computers, especially on computers with less memory. Firefox aims to strike the “just right” balance between speed and memory usage. To learn more about Firefox’s multi-process architecture, and how it’s different from Chrome’s, check out Ryan Pollock’s post about the search for the Goldilocks browser.

Multi-Process Without Memory Bloat

Firefox Wins Memory Usage Comparison

In our tests comparing memory usage for various browsers, we found that Firefox used significantly less RAM than other browsers on Windows 10, macOS, and Linux. (RAM stands for Random Access Memory, the type of memory that stores the apps you’re actively running.) This means that with Firefox you can browse freely, but still have enough memory left to run the other apps you want to use on your computer.

The Best Firefox Ever

This is the best release of Firefox ever, with improvements that will be very noticeable to even casual users of our beloved browser. Several other enhancements are shipping in Firefox today, and you can visit our release notes to see the full list. If you’re a web developer, or if you’ve built a browser extension, check out the Hacks Blog to read about all the new Web Platform and WebExtension APIs shipping today.

As we continue to make progress on Project Quantum, we are pushing forward in building a completely revamped browser made for modern computing. It’s our goal to make Firefox the fastest and smoothest browser for PCs and mobile devices. Through the end of 2017, you’ll see some big jumps in capability and performance from Team Firefox. If you stopped using Firefox, try it again. We think you’ll be impressed. Thank you and let us know what you think.

The post The Best Firefox Ever appeared first on The Mozilla Blog.

Air MozillaSelling Your Attention: The Web and Advertising with Tim Wu

Selling Your Attention: The Web and Advertising with Tim Wu You don't need cash to search Google or to use Facebook, but they're not free. We pay for these services with our attention and with...

hacks.mozilla.orgFirefox 54: E10S-Multi, WebExtension APIs, CSS clip-path

“E10S-Multi:” A new multi-process model for Firefox

Today’s release completes Firefox’s transformation into a fully multi-process browser, running many simultaneous content processes in addition to a UI process and, on Windows, a special GPU process. This design makes it easier to utilize all of the cores available on modern processors and, in the future, to securely sandbox web content. It also improves stability, ensuring that a single content process crashing won’t take out all of your other tabs, nor the rest of the browser.

Illustration of Firefox's new multi-process architecture, showing one Firefox UI process talking to four Content Processes. Each content process has several tabs within it.

An initial version of multi-process Firefox (codenamed “Electrolysis”, or “e10s” for short) debuted with Firefox 48 last August. This first version moved Firefox’s UI into its own process so that the browser interface remains snappy even under load. Firefox 54 takes this further by running many content processes in parallel: each one with its own RAM and CPU resources managed by the host operating system.

Additional processes do come with a small degree of memory overhead, no matter how well optimized, but we’ve worked wonders to reduce this to the bare minimum. Even with those optimizations, we wanted to do more to ensure that Firefox is respectful of your RAM. That’s why, instead of spawning a new process with every tab, Firefox sets an upper limit: four by default, but configurable by users (dom.ipc.processCount in about:config). This keeps you in control, while still letting Firefox take full advantage of multi-core CPUs.

To learn more about Firefox’s multi-process architecture, check out this Medium post about the search for the “Goldilocks” browser.

New WebExtension APIs

Firefox continues its rapid implementation of new WebExtension APIs. These APIs are designed to work cross-browser, and will be the only APIs available to add-ons when Firefox 57 launches this November.

Most notably, it’s now possible to create custom DevTools panels using WebExtensions. For example, the screenshot below shows the Chrome version of the Vue.js DevTools running in Firefox without any modifications. This dramatically reduces the maintenance burden for authors of devtools add-ons, ensuring that no matter which framework you prefer, its tools will work in Firefox.

Screenshot of Firefox showing the Vue.js DevTools extension running in Firefox


Read about the full set of new and changed APIs on the Add-ons Blog, or check out the complete WebExtensions documentation on MDN.

CSS shapes in clip-path

The CSS clip-path property allows authors to define which parts of an element are visible. Previously, Firefox only supported clipping paths defined as SVG files. With Firefox 54, authors can also use CSS shape functions for circles, ellipses, rectangles or arbitrary polygons (Demo).

Like many CSS values, clipping shapes can be animated. There are some rules that control how the interpolation between values is performed, but long story short: as long as you are interpolating between the same shapes, or polygons with the same number of vertices, you should be fine. Here’s how to animate a circular clipping:

See the Pen Animated clip-path by ladybenko (@ladybenko) on CodePen.

You can also dynamically change clipping according user input, like in this example that features a “periscope” effect controlled by the mouse:

See the Pen clip-path (periscope) by ladybenko (@ladybenko) on CodePen.

To learn more, check our article on clip-path from last week.

Project Dawn

Lastly, the release of Firefox 54 marks the completion of the Project Dawn transition, eliminating Firefox’s pre-beta release channel, codenamed “Aurora.” Firefox releases now move directly from Nightly into Beta every six weeks. Firefox Developer Edition, which was based on Aurora, is now based on Beta.

For early adopters, we’ve also made Firefox Nightly for Android available on Google Play.

Air MozillaMozilla Weekly Project Meeting, 12 Jun 2017

Mozilla Weekly Project Meeting The Monday Project Meeting

Air MozillaRain of Rust - 2nd online meeting

Rain of Rust - 2nd online meeting This event belongs to a series of online Rust events that we run in the month of June, 2017

SUMO BlogEvent Report: SUMO Community Meeting – Abidjan (3-4 June 2017)

Hey there, SUMO Nation!

You may remember Abbackar’s previous post about meetings in Ivory Coast. I am very happy to inform you that the community there is going strong and keeps supporting Mozilla’s mission. Read Abbackar’s report from the recent meeting in Abidjan below.

On the weekend of 3rd and 4th of June, the community members of Côte d’Ivoire met in Abidjan for a SUMO Community Meetup. The event was attended by 21 people, six of whom were new contributors interested in participating in Mozilla’s mission through SUMO.

The Saturday meeting started at 9 and went on for six hours, with a small lunch break. During that time we talked about the state of SUMO and the Mozilla updates that had an impact on our community over the past months.

We also introduced new contributors to the website and the philosophy of SUMO – as well as the Respond social support tool. New contributors had a chance to see both sites in action, learn how they worked and discuss their future contributions.

After that, we had a practical session in Respond, allowing existing and new contributors to exchange knowledge and experiences.

An important fact to mention is that the computer we used for the event is a “Jerry” – a computer in a can – built from recycled materials by our community members.

After the training and a session of answering questions, we ended the first day of the meetup.

Sunday started with the analysis of the 2016 balance sheet and a discussion of our community’s roadmap for 2017. We talked about ways of increasing our community engagement in SUMO in 2017. Several solutions were discussed at length, allowing us to share and assign tasks to people present at the event.

We decided to train together on a single theme each month to increase focus. We also acknowledged the cancellation of our Nouchi localization project, due to the difficulties with creating a new technical vocabulary within that language. Our localization efforts will be focused on French from now on.

The Sunday lunch was held in a great atmosphere as we shared a local dish called garba. The meeting ended with a Q&A session focused on addressing the concerns and doubts of the new contributors.

The meeting in Abidjan was a great opportunity to catch up, discuss the most recent updates, motivate existing contributors and recruit new ones for Mozilla’s mission. We ended the whole event with a family photo of all the people present.

We are all looking forward to the second session in Bouake, in the center of Côte d’Ivoire.

We are humbled and grateful for the effort and passion of the community in Ivory Coast. Thank you for your inspiring report and local leadership, Abbackar :-) Onwards and forwards, to Bouake!

hacks.mozilla.orgCSS Shapes, clipping and masking – and how to use them

The release of Firefox 54 is just around the corner and it will introduce new features into an already cool CSS property: clip-path.

clip-path is a property that allows us to clip (i.e., cut away) parts of an element. Up until now, in Firefox you could only use an SVG to clip an element:

But with Firefox 54, you will be able to use CSS shapes as well: insets, circles, ellipses and polygons!

Note: this post contains many demos, which require support for clip-path and mask. To be able to see and interact with every demo in this post, you will need Firefox 54 or higher.

Basic usage

It’s important to take into account that clip-path does not accept “images” as input, but <clipPath> elements:

See the Pen clip-path (static SVG mask) by ladybenko (@ladybenko) on CodePen.

A cool thing is that these <clipPath> elements can contain SVG animations:

See the Pen clip-path (animated SVG) by ladybenko (@ladybenko) on CodePen.

However, with the upcoming Firefox release we will also have CSS shape functions at our disposal. These allow us to define shapes within our stylesheets, so there is no need for an SVG. The shape functions we have at our disposal are: circle, ellipse, inset and polygon. You can see them in action here:

See the Pen oWJBwW by ladybenko (@ladybenko) on CodePen.

And not only that, but we can animate them with CSS as well. The only restrictions are that we cannot “mix” function shapes (i.e., morphing from a circle to an inset), and that when animating polygons, the polygons must preserve the same number of vertices during the whole animation.

Here’s a simple animation using a circle shape:

See the Pen Animated clip-path by ladybenko (@ladybenko) on CodePen.

And here is another animation using polygon. Note: Even though we are restricted to preserving our set number of vertices, we can “merge” them by repeating the values. This creates the illusion of animating to a polygon with any number of sides.

See the Pen Animated clip-path (polygon) by ladybenko (@ladybenko) on CodePen.

Note that clip-path also opens new possibilities layout-wise. The following demo uses clipping to make an image more interesting in a multi-column article:

See the Pen Layout example by ladybenko (@ladybenko) on CodePen.

Spicing things up with JavaScript

Clipping opens up cool possibilities. In the following example, clip-path has been used to isolate elements of a site –in this case, simulating a tour/tutorial:

See the Pen tour with clip-path by ladybenko (@ladybenko) on CodePen.

It’s done with JavaScript by fetching the dimensions of an element on the fly, and calculating the distance with respect to a reference container, and then using that distance to update the inset shape used on the clip-path property.
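That calculation might be sketched like this (insetFor is a hypothetical helper; in the page, the two rectangles would come from getBoundingClientRect()):

```javascript
// Turn the bounding boxes of a container and a target element into an
// inset() shape that reveals only the target.
function insetFor(container, target) {
  // Distance from each container edge to the matching target edge:
  const top = target.top - container.top;
  const right = container.right - target.right;
  const bottom = container.bottom - target.bottom;
  const left = target.left - container.left;
  return `inset(${top}px ${right}px ${bottom}px ${left}px)`;
}

// In the page, something like:
//   el.style.clipPath = insetFor(containerRect, targetRect);
```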

We can now also dynamically change the clipping according to user input, like in this example that features a “periscope” effect controlled by the mouse:

See the Pen clip-path (periscope) by ladybenko (@ladybenko) on CodePen.

clip-path or mask?

There is a similar CSS property, mask, but it is not identical to clip-path. Depending on your specific use case, you should choose one or the other. Also note that support varies across browsers, and currently Firefox is the only browser that fully supports all the mask features, so you will need to run Firefox 54 to interact with the demos below on Codepen.

Masking can use an image or a <mask> element in an SVG. clip-path, on the other hand, uses an SVG path or a CSS shape.

Masking modifies the appearance of the element it masks. For instance, here is a circular mask filled with a linear gradient:

Linear gradient mask

And remember that you can use bitmap images as well even if they don’t have an alpha channel (i.e., transparency), by tweaking the mask-mode:

mask-mode example

The key concept of masking is that it modifies the pixels of an image, changing their values – to the point of making some of them fully transparent.

On the other hand, clipping “cuts” the element, and this includes its collision surface. Check out the following demo showing two identical pictures masked and clipped with the same cross shape. Try hovering over the pictures and see what happens. You will notice that in the masked image the collision area also contains the masked parts. In the clipped image, the collision area is only the visible part (i.e., the cross shape) of the element.

Mask vs clip comparison

Is masking then superior to clipping, or vice versa? No, they are just used for different things.

I hope this post has made you curious about clip-path. Check out the upcoming version of Firefox to try it!

Mozilla Add-ons BlogKeeping Up with the Add-ons Community

With the add-ons community spread out among multiple projects and several communication platforms, it can feel difficult to stay connected and informed.

To help bridge some of these gaps, here is a quick refresher guide on our most-used communication channels and how you can use them to stay updated about the areas you care about most.


Announcements will continue to be posted to the Add-ons Blog and cross-posted to Discourse.

Find Documentation

MDN Web Docs has great resources for creating and publishing extensions and themes.

You can also find documentation and additional information about specific projects on the Add-ons wiki and the WebExtensions wiki.

Get Technical Help

Join a Public Meeting

Please feel welcome to join any or all of the following public meetings:

Add-ons Community Meeting (every other Tuesday at 17:00 UTC)

Join the add-ons community as we discuss current and upcoming happenings in the add-ons world. Agendas will be posted in advance to the Add-ons > Contribute category on Discourse. See the wiki for the next meeting date and call-in information.

Good First Bugs Triage (every other Tuesday at 17:00 UTC)

Come and help triage good first bugs for new contributors! See the wiki for the next meeting date and call-in information.

WebExtensions API Triage (every Tuesday at 17:30 UTC)

Join us to discuss proposals for new WebExtension APIs. Agendas are distributed in advance to the dev-addons mailing list and the Add-ons > Contribute category on Discourse. See the wiki for the next meeting date and call-in information. To request a new API, please read this first.

Be Social With Us

Get Involved

Check out the Contribute wiki for ways you can get involved.

The post Keeping Up with the Add-ons Community appeared first on Mozilla Add-ons Blog.

hacks.mozilla.orgCSS shapes, clipping and masking

The release of Firefox 54, is just around the corner and it will introduce new features into an already cool CSS property: clip-path.

clip-path is a property that allows us to clip (i.e. cut away) parts of an element. Until today, in Firefox you could only use an SVG to clip an element:

But with Firefox 54, you will be able to use CSS shapes as well: insets, circles, ellipses and polygons!

Basic usage

It’s important to take into account that clip-path does not accept “images” as input, but <clipPath> elements:

See the Pen clip-path (static SVG mask) by ladybenko (@ladybenko) on CodePen.

A cool thing is that these <clipPath> elements can contain SVG animations:

See the Pen clip-path (animated SVG) by ladybenko (@ladybenko) on CodePen.

However, with the upcoming Firefox release we will also have CSS shape functions at our disposal. These allow us to define shapes within our stylesheets, so there is no need for an  SVG. The shape functions we have at our disposal are: circle, ellipse, inset and polygon. You can see them in action here:

See the Pen oWJBwW by ladybenko (@ladybenko) on CodePen.

And not only that, but we can animate them with CSS as well. The only restriction is that we cannot “mix” function shapes (i.e., morphing from a circle to an inset), and that when animating polygons they must preserve the same number of vertices during the whole animation.

Here’s a simple animation using a circle shape:

See the Pen Animated clip-path by ladybenko (@ladybenko) on CodePen.

And here is another animation using polygon. Note: Even though we are restricted to preserving  our set number of vertices, we can “merge” them by repeating the values. This creates the illusion of animating to a polygon with any number of sides.

Note that clip-path also opens new possibilities layout-wise. The following demo uses clipping to make an image more interesting in a multi-column article:


Spicing things up with JavaScript

Clipping opens up cool possibilities. In the following example, clip-path has been used to isolate elements of a site –in this case, simulating a tour/tutorial:

It’s done with JavaScript by fetching the dimensions of an element on the fly, and calculating the distance respect a reference container, and then using that distance to update the inset shape used on the clip-path property.

We can now also  dynamically change the clipping according to user input, like in this example that features a “periscope” effect controlled by the mouse:

clip-path or mask?

There is a similar CSS property, mask, but it is not identical to clip-path. Depending on your specific use case, you should choose one or the other. Also note that support varies across browsers, and currently Firefox is the only browser that fully supports all the mask features, so you will need Firefox to interact with the demos below on Codepen.

Masking can use an image or a <mask> element in an SVG. clip-path, on the other hand, uses an SVG path or a CSS shape.

Masking modifies the appearance of the element it masks. For instance, here is a circular mask filled with a linear gradient:

And remember that you can use bitmap images as well even if they don’t have an alpha channel (i.e. transparency), by tweaking the mask-mode:

The key concept of masking is that it modifies the pixels of an image, changing their values, to the point of making some of them fully transparent.

On the other hand, clipping “cuts” the element, and this includes its collision surface. Check out the following demo showing two identical pictures masked and clipped with the same cross shape. Try hovering over them and see what happens. You will notice that in the masked image the collision area also contains the masked parts. In the clipped image, the collision area is only the visible part (i.e. the cross shape) of the element.


Is masking then superior to clipping, or vice versa? No, they are just used for different things.

I hope this post has made you curious about clip-path. Stay tuned for the upcoming version of Firefox and give it a try!

Air MozillaMozilla Science Lab June 2017 Bi-Monthly Community Call

Mozilla Science Lab June 2017 Bi-Monthly Community Call Mozilla Science Lab Bi-monthly Community Call

hacks.mozilla.orgCross-browser extensions, available now in Firefox

We’re modernizing the way developers build extensions for Firefox! We call the new APIs WebExtensions, because they’re written using the technologies of the Web: HTML, CSS, and JavaScript. And just like the technologies of the Web, you can write one codebase that works in multiple places.

WebExtensions APIs are inspired by the existing Google Chrome extension APIs, and are supported by Opera, Firefox, and Microsoft Edge. We’re working to standardize these existing APIs as well as proposing new ones! Our goal is to make extensions as easy to share between browsers as the pages they browse, and powerful enough to let people customize their browsers to match their needs.

Want to know more?

Build WebExtensions

Port an existing Chrome extension

Air MozillaReps Weekly Meeting Jun. 08, 2017

Reps Weekly Meeting Jun. 08, 2017 This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

The Mozilla BlogIncreasing Momentum Around Tech Policy

Mozilla’s new tech policy fellowship brings together leading experts to advance Internet health around the world


Strong government policies and leadership are key to making the Internet a global public resource that is open and accessible to all.

To advance this work from the front lines, some of the world’s experts on these issues joined government service. These dedicated public servants have made major progress in recent years on issues like net neutrality, open data and the digital economy.

But as governments transition and government leaders move on, we risk losing momentum or even backsliding on progress made. To sustain that momentum and invest in those leaders, today the Mozilla Foundation officially launches a new Tech Policy Fellowship. The program is designed to give individuals with deep expertise in government and Internet policy the support and structure they need to continue their Internet health work.

The fellows, who hail from around the globe, will spend the next year working independently on a range of tech policy issues. They will collaborate closely with Mozilla’s policy and advocacy teams, as well as the broader Mozilla network and other key organizations in tech policy. Each fellow will bring their expertise to important topics currently at issue in the United States and around the world.

For example:

Fellow Gigi Sohn brings nearly 30 years of experience, most recently at the Federal Communications Commission (FCC), dedicated to defending and preserving fundamental competition and innovation policies for broadband Internet access. At a time when we are moving closer to a closed Internet in the United States, her expertise is more valuable than ever.

Fellow Alan Davidson will draw on his extensive professional history working to advance a free and open digital economy to support his work on education and advocacy strategies to combat Internet policy risks.

With the wave of data collection and use fast growing in government and the private sector, fellow Linet Kwamboka will analyze East African government practices for the collection, handling and publishing of data. She will develop contextual best practices for data governance and management.

Meet the initial cohort of the Tech Policy Fellows here and below, and keep an eye on the Tech Policy Fellowship website for ways to collaborate in this work.


Our Mozilla Tech Policy Fellows


Alan Davidson | @abdavdson

Alan will work to produce a census of major Internet policy risks and will engage in advocacy and educational strategy to minimize those risks. Alan is also a Fellow at New America in Washington, D.C. Until January 2017, he served as the first Director of Digital Economy at the U.S. Department of Commerce and a Senior Advisor to the Secretary of Commerce. Prior to joining the department, Davidson was the director of the Open Technology Institute at New America. Earlier, Davidson opened Google’s Washington policy office in 2005 and led the company’s public policy and government relations efforts in North and South America. He was previously Associate Director of the Center for Democracy and Technology. Alan has a bachelor’s degree in mathematics and computer science and a master’s degree in technology and policy from the Massachusetts Institute of Technology (MIT). He is a graduate of Yale Law School.


Credit: New America

Amina Fazlullah

Amina Fazlullah will work to promote policies that support broadband connectivity in rural and vulnerable communities in the United States. Amina joins the fellowship from her most recent role as Policy Advisor to the National Digital Inclusion Alliance, where she led efforts to develop policies that support broadband deployment, digital inclusion, and digital equity efforts across the United States. Amina has worked on a broad range of Internet policy issues including Universal Service, consumer protection, antitrust, net neutrality, spectrum policy and children’s online privacy. She has testified before Congress, the Federal Communications Commission, the Department of Commerce and Federal Trade Commission. Amina was formerly the Benton Foundation’s Director of Policy in Washington, D.C., where she worked to further government policies to address communication needs of vulnerable communities. Before that, Amina worked with the U.S. Public Interest Research Group, for the Honorable Chief Judge James M. Rosenbaum of the U.S. District Court of Minnesota and at the Federal Communications Commission. She is a graduate of the University of Minnesota Law School and Pennsylvania State University.


Camille Fischer | @camfisch

Camille will be working to promote individual rights to privacy, security and free speech on the Internet. In the last year of the Obama Administration, Camille led the National Economic Council’s approach to consumers’ economic and civil rights on the Internet and in emerging technologies. She represented consumers’ voices in discussions with other federal agencies regarding law enforcement access to data, including encryption and international law enforcement agreements. She has run commercial privacy and security campaigns, like the BuySecure Initiative to increase consumers’ financial security, and also worked to promote an economic voice within national security policy and to advocate for due process protections within surveillance and digital access reform. Before entering the government as a Presidential Management Fellow, Camille graduated from Georgetown University Law Center where she wrote state legislation for the privacy-protective commercial use of facial recognition technology. Camille is also an amateur photographer in D.C.


Caroline Holland

Caroline will be working to promote a healthy internet by exploring current competition issues related to the Internet ecosystem. Caroline served most recently as Chief Counsel for Competition Policy and Intergovernmental Relations at the U.S. Department of Justice Antitrust Division. In that role, she was involved in several high-profile matters while overseeing the Division’s competition policy and advocacy efforts, interagency policy initiatives, and congressional relations. Caroline previously served as Chief Counsel and Staff Director of the Senate Antitrust Subcommittee where she advised the committee chairmen on a wide variety of competition issues related to telecommunications, technology and intellectual property. Before taking on this role, she was a counsel on the Senate Judiciary Committee and an attorney in private practice focusing on public policy and regulatory work. Caroline holds a J.D. from Georgetown University Law Center and a B.A. in Public Policy from Trinity College in Hartford, Connecticut. Between college and law school, Caroline served in the Antitrust Division as an honors paralegal and as Clerk of the Senate Antitrust Subcommittee.


Linet Kwamboka | @linetdata

Linet will work on understanding the policies that guide data collection and dissemination in East Africa (Kenya, Uganda, Tanzania and Rwanda). Through this, she aims to publish policy recommendations on existing policies, proposed policy amendments and a report outlining her findings. Linet is the Founder and CEO of DataScience LTD, which builds information systems to generate and use data to discover intelligent insights about people, products and services for resource allocation and decision making. She was previously the Kenya Open Data Initiative Project Coordinator for the Government of Kenya at the Kenya ICT Authority. Linet is also a director of the World Data Lab–Africa, working to make data personal, tangible and actionable to help citizens make better informed choices about their lives. She also consults with the UNDP in the Strengthening Electoral Processes in Kenya Project, offering support to the Independent Electoral Boundaries Commission in information systems and technology. She has worked at the World Bank as a GIS and Technology Consultant and was a Software Engineering Fellow at Carnegie Mellon University, Pittsburgh. Her background is in computer science, data analysis and Geographical Information Systems. Linet has been recognized as an unsung hero by the American Embassy in Kenya for her efforts to encourage more women into technology and computing, was a finalist for the Bloomberg award for global open data champions, and is a member of the Open Data Institute Global Open Data Leaders’ Network.


Terah Lyons | @terahlyons

Terah will work on advancing policy and governance around the future of machine intelligence, with a specific focus on coordination in international governance of AI. Her work targets questions related to the responsible development and deployment of AI and machine learning, including how society can minimize the risks of AI while maximizing its benefits, and what AI development and advanced automation means for humankind across cultural and political boundaries. Terah is a former Policy Advisor to the U.S. Chief Technology Officer in the White House Office of Science and Technology Policy (OSTP). She most recently led a policy portfolio in the Obama Administration focused on machine intelligence, including AI, robotics, and intelligent transportation systems. In her work at OSTP, Terah helped establish and direct the White House Future of Artificial Intelligence Initiative, oversaw robotics policy and regulatory matters, led the Administration’s work from the White House on civil and commercial unmanned aircraft systems/drone integration into the U.S. airspace system, and advised on Federal automated vehicles policy. She also advised on issues related to diversity and inclusion in the technology industry and entrepreneurial ecosystem. Prior to her work at the White House, Terah was a Fellow with the Harvard School of Engineering and Applied Sciences based in Cape Town, South Africa. She is a graduate of Harvard University, where she currently sits on the Board of Directors of the Harvard Alumni Association.


Marilia Monteiro

Marilia will be analyzing consumer protection and competition policy to contribute to the development of sustainable public policies and innovation. From 2013-15, she was Policy Manager at the Brazilian Ministry of Justice’s Consumer Office, coordinating public policies for consumer protection in digital markets and law enforcement actions targeting ISPs and Internet applications. She has researched the intersection between innovation technologies and society in different areas: current democratic innovations in Latin America regarding e-participation at the Wissenschaftszentrum Berlin für Sozialforschung and the development of public policies on health privacy and data protection at the “Privacy Brazil” project with the Internet Lab in partnership with the Ford Foundation in Brazil. She is a board member at Coding Rights, a Brazilian-born, women-led, think-and-do tank, and is active in Internet Governance fora. Marilia holds a Master in Public Policy from the Hertie School of Governance in Berlin focusing on policy analysis and a bachelor’s degree in Law from Fundação Getulio Vargas School of Law in Rio de Janeiro, and specialises in digital rights.


Jason Schultz | @lawgeek

Jason will analyze the impacts and effects of new technologies such as artificial intelligence/machine learning and the Internet of Things through the lenses of consumer protection, civil liberties, innovation, and competition. His research aims to help policymakers navigate these important legal concerns while still allowing for open innovation and for competition to thrive. Jason is a Professor of Clinical Law, Director of NYU’s Technology Law & Policy Clinic, and Co-Director of the Engelberg Center on Innovation Law & Policy. His clinical projects, research, and writing primarily focus on the ongoing struggles to balance traditional areas of law such as intellectual property, consumer protection, and privacy with the public interest in free expression, access to knowledge, civil rights, and innovation in light of new technologies and the challenges they pose. During the 2016-2017 academic year, Jason was on leave at the White House Office of Science and Technology Policy, where he served as Senior Advisor on Innovation and Intellectual Property to former U.S. Chief Technology Officer Megan Smith. With Aaron Perzanowski, he is the author of The End of Ownership: Personal Property in the Digital Economy (MIT Press 2016), which argues for retaining consumer property rights in a marketplace that increasingly threatens them. Prior to joining NYU, Jason was an Assistant Clinical Professor of Law and Director of the Samuelson Law, Technology & Public Policy Clinic at the UC Berkeley School of Law (Boalt Hall). Before joining Boalt Hall, he was a Senior Staff Attorney at the Electronic Frontier Foundation and before that practiced intellectual property law at the firm of Fish & Richardson, PC. He also served as a clerk to the Honorable D. Lowell Jensen of the Northern District of California. He is a member of the American Law Institute.


Gigi Sohn | @gigibsohn

Gigi will be working to promote an open Internet in the United States. She is one of the nation’s leading public advocates for open, affordable, and democratic communications networks. Gigi is also a Distinguished Fellow at the Georgetown Law Institute for Technology Law & Policy and an Open Society Foundations Leadership in Government Fellow. For nearly 30 years, Gigi has worked across the United States to defend and preserve the fundamental competition and innovation policies that have made broadband Internet access more ubiquitous, competitive, affordable, open, and protective of user privacy. Most recently, Gigi was Counselor to the former Chairman of the U.S. Federal Communications Commission, Tom Wheeler, who she advised on a wide range of Internet, telecommunications and media issues. Gigi was named by the Daily Dot in 2015 as one of the “Heroes Who Saved the Internet” in recognition of her role in the FCC’s adoption of the strongest-ever net neutrality rules. Gigi co-founded and served as CEO of Public Knowledge, the leading communications policy advocacy organization. She was previously a Project Specialist in the Ford Foundation’s Media, Arts and Culture unit and Executive Director of the Media Access Project, the first public interest law firm in the communications space. Gigi holds a B.S. in Broadcasting and Film, Summa Cum Laude, from the Boston University College of Communication and a J.D. from the University of Pennsylvania Law School.


Cori Zarek | @corizarek

Cori is the Senior Fellow leading the Tech Policy Fellows team and serving as a liaison with the Mozilla Foundation. Her work as a fellow will focus on the intersection of tech policy and transparency. Before joining Mozilla, Cori was Deputy U.S. Chief Technology Officer at the White House where she led the team’s work to build a more digital, open, and collaborative government. Cori also coordinated U.S. involvement with the global Open Government Partnership, a 75-country initiative driving greater transparency and accountability around the world. Previously, she was an attorney at the U.S. National Archives, working on open government and freedom of information policy.  Before joining the U.S. government, Cori was the Freedom of Information Director at The Reporters Committee for Freedom of the Press where she assisted journalists with legal issues, and she also practiced for a Washington law firm. Cori received her B.A. from the University of Iowa where she was editor of the student-run newspaper, The Daily Iowan. Cori also received her J.D. from the University of Iowa where she wrote for the Iowa Law Review and The Des Moines Register. She was inducted into the Freedom of Information Hall of Fame in 2016. Cori is also the President of the D.C. Open Government Coalition and teaches a media law class at American University.

The post Increasing Momentum Around Tech Policy appeared first on The Mozilla Blog.

about:communityFirefox 54 new contributors

With the release of Firefox 54, we are pleased to welcome the 36 developers who contributed their first code change to Firefox in this release, 33 of whom were brand new volunteers! Please join us in thanking each of these diligent and enthusiastic individuals, and take a look at their contributions:

Air MozillaBugzilla Project Meeting, 07 Jun 2017

Bugzilla Project Meeting The Bugzilla Project Developers meeting.

Air MozillaThe Joy of Coding - Episode 101

The Joy of Coding - Episode 101 mconley livehacks on real Firefox bugs while thinking aloud.

Air MozillaWeekly SUMO Community Meeting June 07, 2017

Weekly SUMO Community Meeting June 07, 2017 This is the sumo weekly call

Open Policy & AdvocacyEngaging on e-Privacy at the European Parliament

Last week, I participated in the European Parliament’s Technical Roundtable regarding the draft e-Privacy Regulation currently under consideration – specifically, I joined the discussion on “cookies”. The Roundtable was hosted by lead Rapporteur on the file, MEP Marju Lauristin (Socialists and Democrats, Estonia), and MEP Michal Boni (European People’s Party, Poland). It was designed to bring together a range of stakeholders to inform the Parliament’s consideration of what could be a major change to how Europe regulates the privacy and security of communications online, related to but with a different scope and purpose than the recently adopted General Data Protection Regulation (GDPR).

Below the fold is a brief overview of my intervention, which describes our proposed changes for some of the key aspects of the Regulation, including how it handles “cookies”, and more generally how to deliver maximal benefits for the privacy and security of communications, with minimum unnecessary or problematic complexities for technology design and engineering. I covered the following three points:

  1. We support incentives for companies to offer privacy protective options to users.
  2. The e-Privacy Regulation must be future-proofed by ensuring technological neutrality.
  3. Browsers are not gatekeepers nor ad-blockers; we are user agents.

The current legal instrument on this topic, the e-Privacy Directive, leaves much to be desired when it comes to effective privacy protections and user benefit, illustrated quite prominently by the “cookie banner” which users click through to “consent” to the use of cookies by a Web site. The e-Privacy Regulation is an important piece of legislation – for Mozilla, for Europe, and ultimately, for the health of the global internet. We support the EU’s ambitious vision, and we will continue working with the Parliament, the Commission, and the Council by sharing our views and experiences with investing in privacy online. We hope that the Regulation will contribute to a better communications ecosystem, one that offers meaningful control, transparency, and choice to individuals, and helps to rebuild trust online.

1 – We support incentives for companies to offer privacy protective options to users.

We view one of the primary objectives of the Regulation to be catalyzing more offerings of privacy protective technologies and services for users. We strongly support this objective. This is the approach we take with Firefox: Users can browse in regular mode, which permits Web sites to place cookies, or in private browsing mode, which has our Tracking Protection technology built in. We invest in making sure that both options are desirable user experiences, and the user is free to choose which they go with – and can switch between them at will, and use both at the same time. We’d like to see more of this in the industry, and welcome the language in Article 10(1) of the draft Regulation which we believe is intended to encourage this.

2 – The e-Privacy Regulation must be future-proofed by ensuring technological neutrality.

One of the principles that shaped the current e-Privacy Directive was technological neutrality. It’s critical that the Regulation similarly follow this principle, to ensure practical application and to keep it future-proof. It should therefore focus on the underlying privacy risk to users created by cross-site and cross-device tracking, rather than on specific technologies that create that risk. To achieve that, the current draft of the Regulation would benefit from two changes.

First, the Parliament should revise references to specific tracking techniques, like first and third party cookies, to ensure that other forms of tracking aren’t overlooked. While blocking third party cookies may seem at first glance to be low-hanging fruit for better protecting user privacy and security online — see this Firefox add-on called Lightbeam, which demonstrates the amount of first and third party sites that can “follow” you online — there are a number of different ways a user can be tracked online; third party cookies are only one implementation (albeit a common one). Device fingerprinting, for example, creates a unique, persistent identifier that undermines user consent mechanisms and that requires a regulatory solution. Similarly, advertising identifiers are a pervasive tracking tool on mobile platforms that is currently not addressed. The Regulation should use terminology that more accurately captures the targeted behavior, and not only one possible implementation of tracking.

Second, the Regulation includes a particular focus on Web browsers (such as Recitals 22-24), without proper consideration of the diversity of forms of online communications today. We aren’t suggesting that the Regulation exclude Web browsing, of course. But to focus on one particular client-side software technology risks missing other technology with significant privacy implications, such as tracking facilitated by mobile operating systems or cloud services accessed via mobile apps. Keeping a principle-based approach will ensure that the Regulation doesn’t impose a specific solution that does not meaningfully deliver on transparency, choice, and control outside of the Web browsing context.

3 – Browsers are not gatekeepers nor ad-blockers; we are user agents.

Building on the above, the Parliament ought to view the Web browser in a manner that reflects its place in the technology ecosystem. Web browsers are user agents facilitating the communication between internet users and Web sites. For example, Firefox offers deep customisation options, and its goal is to put the user in the driver’s seat. Similarly, Firefox private browsing mode includes Tracking Protection technology, which blocks certain third party trackers through a blacklist (learn more about our approach to content blocking here). Both of these are user agent features, embedded in code shipped to users and run on their local devices – neither is a service that we functionally intermediate or operate as it is used. It’s not constructive from a regulatory perspective, nor an accurate understanding of the technology, to describe Web browsers as gatekeepers in the way the Regulation does today.

The post Engaging on e-Privacy at the European Parliament appeared first on Open Policy & Advocacy.

hacks.mozilla.orgIntroducing FilterBubbler: A WebExtension built using React/Redux

A  few months ago my long time Free Software associate, Don Marti, called me about an idea for a WebExtension. WebExtensions is the really cool new standard for browser extensions that Mozilla and the Chrome team are collaborating on (as well as Opera, Edge and a number of other major browsers). The WebExtensions API lets you write add-ons using the same JavaScript and HTML methodologies you use to implement any other web site.

Don’s idea was basically to build a text analysis toolkit with the new WebExtensions API. This toolkit would let you monitor various browser activities and resources (history, bookmarks, etc.) and then let you use text analysis modules to discover patterns in your own browsing history. The idea was to turn the tables on the kinds of sophisticated analysis that advertisers do with the everyday browsing activities we take for granted. Big companies are using advanced techniques to model user behavior and control the content they receive, in order to manipulate outcomes like the time a user spends on the system or the ads they see. If we provided tools for users to do this with their own browsing data, they would have a better chance to understand their own behaviors and a greater awareness of when external systems are trying to manipulate them.

The other major goal would be to provide a well-documented example of using the new WebExtensions API.  The more I read about WebExtensions the more I realized they represent a game-changer for moving web browsing intelligence “out to the edge”. All sorts of analysis and automation can be done with WebExtensions in a way that potentially lets the tools be used on any of the popular modern web browsers. About the only thing I saw missing was a way to easily collaborate around these “recipes” for analysing web content. I suggested we create a WordPress plugin that would supply a RESTful interface for sharing classifications and the basic plan for “FilterBubbler” was born.

Our initial prototype was a proof of concept that used an extremely basic HTML pop-up and a Bayesian classifier. This version proved that we could provide useful classification of web page content based on hand-loaded corpora, but it was clear that we would need additional tooling to get to a “consumer” feel. Before we could start adding important features like remote recipe servers, classification displays and configuration screens, we clearly needed to make some decisions about our infrastructure. In this article, I will cover our efforts to provide a modern UI environment and the challenges that posed when working with WebExtensions.


The React framework and its associated Flux implementation took the HTML UI world by storm when Facebook released the tool as Free Software in 2013. React was originally deployed in 2011 as part of the newsfeed display infrastructure on Facebook itself. Since then the library has found use in Instagram, Netflix, AirBnB and many other popular services. The tool revolves around a strategy called Flux which tightly defines the way state is updated in an application.

Flux is a strategy not an actual implementation, and there are many libraries that provide its functionality. One of the most popular libraries today is Redux. The Redux core value is a simplified universal view of the application state. Because there is a single state for the application, the behavior that results from a series of action events is completely deterministic and predictable. This makes your application easier to reason about, test and debug. A full discussion of the concepts behind React and Redux is beyond the scope of this article so if you are just getting started, I would recommend that you read the Redux introductory material or check out Dan Abramov’s excellent introductory course at Egghead.
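To make that determinism concrete, here is a minimal reducer in plain JavaScript, no library required. It’s an illustrative sketch, not FilterBubbler’s actual code: the same sequence of actions always yields the same state.

```javascript
// A reducer is a pure function: (state, action) -> newState.
// Replaying the same actions always produces the same final state.
const initialState = { classifications: [] };

function reducer(state = initialState, action) {
  switch (action.type) {
    case 'ADD_CLASSIFICATION':
      // Never mutate: return a new state object
      return {
        ...state,
        classifications: [...state.classifications, action.label],
      };
    default:
      return state;
  }
}

// Deterministic: the final state depends only on the action sequence.
const actions = [
  { type: 'ADD_CLASSIFICATION', label: 'news' },
  { type: 'ADD_CLASSIFICATION', label: 'sports' },
];
const finalState = actions.reduce(reducer, undefined);
// finalState.classifications -> ['news', 'sports']
```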

Integrating with WebExtensions

Digging into the WebExtensions framework, one of the first hurdles is that the UI pop-up and config page context is separate from the background context. The state of the UI context is recreated each time you open and close the UI displays. Communication between the UI context and the background script context is achieved via a message-passing architecture.

The state of the FilterBubbler extension will be stored in the background context but we’ll need to bind that state to UI elements in the pop-up and config page context. Alexander Ivantsov’s Redux-WebExt project offers one solution for this problem. His tool provides an abstraction layer between the UI and background pages with a proxy. The proxy gives the appearance of direct access to the Redux store in the UI, but it actually forwards actions to the background context, and then sends resulting state modifications generated by the reducers back to the UI context.

Action mapping

It took me some effort to get things working across the Redux-WebExt bridge. The React components that run in the UI contexts think they are talking to a Redux store; in fact, it’s a facade that is exchanging messages with your background context. The action objects that you think are headed to your reducers are actually serialized into messages, sent to the background context and then unpacked and delivered to the store. Once the reducers finish their state modifications, the resulting state is packed up and sent back to the proxy so that it can update the state of the UI peers.

Redux-WebExt puts a mapping table in the middle of this process that lets you modify how action events from the front-end get delivered to the store. In some cases (i.e., asynchronous operations) you really need this mapping to separate out actions that can’t be serialized into message objects (like callback functions).
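To see why callbacks can’t make the trip, consider what serialization does to an action. The action shapes below are made up for this sketch; the round-trip through JSON stands in for the message-passing between contexts:

```javascript
// Actions crossing the Redux-WebExt proxy are serialized into messages,
// so anything non-serializable (like a callback) is lost in transit.
const plainAction = { type: 'SET_URL', url: 'https://example.com' };
const badAction = { type: 'REQUEST_URL', onDone: (url) => console.log(url) };

// Simulate the message round-trip between UI and background contexts:
const roundTrip = (action) => JSON.parse(JSON.stringify(action));

roundTrip(plainAction);      // survives intact
roundTrip(badAction).onDone; // undefined -- the function was dropped
```

This is exactly the kind of action the mapping table lets you reroute to a background-only implementation instead.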

In some cases this may be a straight mapping that only copies the data from a UI action event, like this one from FilterBubbler’s mappings in store.js:

actions[formActionTypes.TOUCH] = (data) => {
    return { type: formActionTypes.TOUCH, ...data };
};

Or, you may need to map that UI action to something completely different, like this mapping that calls an asynchronous function only available in the background store:

actions[UI_REQUEST_ACTIVE_URL] = requestActiveUrl;

In short, be mindful of the mapper! It took me a few hours to get my head wrapped around its purpose. Understanding this is critical if you want to use React/Redux in your extension as we are.

This arrangement makes it possible to use standard React/Redux tooling with minimal changes and configuration. Existing sophisticated libraries for form-handling and other major UI tasks can plug into the WebExtension environment without any knowledge of the underlying message-based connectivity. One example tool we have already integrated is Redux Form, which provides a full system for managing form input with validation and the other services you would expect in a modern development effort.

Having established that we can use a major UI toolkit without starting from scratch, our next concern is to make things look good. Google’s Material Design is one popular look and feel standard and the React platform has the popular Material UI, which implements the Google standard as a set of React/Redux components. This gives us the ability to produce great looking UI popups and config screens without having to develop a new UI toolkit.

Get thunky

Some of the operations we need to perform are callback-based, which makes them asynchronous. In the React/Redux model this presents some issues. Action generator functions and reducers are designed to do their work immediately when called. Solutions like providing access to the store within an action generator and calling dispatch in a callback are considered an anti-pattern. One popular solution to this problem is the Redux-Thunk middleware. Adding Redux-Thunk to your application is easy: you just pass it in when you create the store.

import { createStore, applyMiddleware } from 'redux'
import thunk from 'redux-thunk'

// rootReducer is your application's combined reducer
const store = createStore(rootReducer, applyMiddleware(thunk))

With Redux-Thunk installed you are provided with a new style of action generators in which you return a function to the store that will later be passed a dispatch function. This inversion of control allows Redux to stay in the driver’s seat when it comes to sequencing your asynchronous operations with other actions in the queue. As an example, here’s a function that requests the URL of the current tab and then dispatches a request to set the result in the UI:

export function requestActiveUrl() {
    return dispatch => {
        return browser.tabs.query({active: true}, tabs => {
            return dispatch(activeUrl(tabs[0].url));
        });
    };
}

The activeUrl() function looks more typical:

export function activeUrl(url) {
    return {
        type: ACTIVE_URL,
        url
    };
}

Since WebExtensions span several different contexts and communicate with asynchronous messaging, a tool like Redux-Thunk is indispensable.
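
The core of what Redux-Thunk does can be sketched in a few lines. The following is an illustrative reimplementation (not the library’s source) run against a toy store, just to show the inversion of control:

```javascript
// Minimal thunk middleware: if an action is a function, invoke it with
// dispatch and getState instead of passing it on to the reducers.
const thunkMiddleware = ({ dispatch, getState }) => next => action =>
  typeof action === 'function' ? action(dispatch, getState) : next(action);

// A tiny fake store so the flow can be demonstrated without a browser.
const dispatched = [];
const store = { dispatch: a => dispatched.push(a), getState: () => ({}) };
const next = store.dispatch;
const dispatch = thunkMiddleware(store)(next);

// Plain actions pass straight through to the store...
dispatch({ type: 'PING' });

// ...while function actions are invoked and may dispatch on their own schedule.
dispatch(d => d({ type: 'ACTIVE_URL', url: 'https://example.com' }));

console.log(dispatched.map(a => a.type)); // ['PING', 'ACTIVE_URL']
```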

Debugging WebExtensions

Debugging WebExtensions presents a few new challenges that work a little differently depending on the browser you are using. Whichever browser you use, the first major difference is that the background context of the extension has no visible page and must be specifically selected for debugging. Let’s walk through getting started with that process on Firefox and Chrome.


On Firefox, you can access your extension by typing “about:debugging” into the browser’s URL field. This page will allow you to load an unpacked extension with the “Load Temporary Add-On” button (or you can use the handy web-ext tool that allows you to start the process from the command line). Pressing the “Debug” button here will bring up a source debugger for your extension. With FilterBubbler, we are using the flexible webpack build tool to take advantage of the latest JavaScript features. Webpack uses the babel transpiler to convert new JavaScript language features into code that is compatible with current browsers. This means that the sources run by the browser are significantly altered from their originals. Be sure to select the “Show Original Sources” option from the preferences menu in the debugger or your code will seem very unfamiliar!

Once you have that selected you should see something more like what you expect:

From here you can set breakpoints and do everything else you would expect.


On Chrome it’s all basically the same idea, just a few small differences in the UI. First you will go to the main menu, dig down to “more tools” and then select “extensions”:

That will take you to the extension listing page.

The “Inspect views” section will allow you to bring up a debugger for your background scripts.

Where the Firefox debugger shows all of your background and foreground activity in one place, the Chrome environment does things a little differently. The foreground UI view is activated by right-clicking the icon of your WebExtension and selecting the “Inspect popup” option.

From there things are pretty much as you would expect. If you have written JavaScript applications and used the browser’s built-in functionality then you should find things pretty familiar.

Classification materials

With all our new infrastructure in place and a working debugger we were back on track adding features to FilterBubbler.  One of our goals for the prototype is to provide the API that recipes will run in. The main ingredients for FilterBubbler recipes are:

One or more sources: A source provides a stream of classification events on given URLs. The prototype provides a simple source which will emit a classification request any time the browser switches to a particular page. Other possible sources could include a source that scans a social network or a newsfeed for content, a stream of email messages or a segment of the user’s browser history.

A classifier: The classifier takes content from a source and returns an array of label and strength value pairs. If the array is empty, the system assumes that the classifier was not able to produce a match.

One or more corpora: FilterBubbler corpora provide a list of URLs with label and strength values. The labels and strength values are used to train the classifier.

One or more sinks: A sink is the destination for the classification events. The prototype includes a simple sink that connects a given classifier to a UI widget, which displays the classifications in the WebExtensions pop-up. Other possible sinks could generate outbound emails for certain classification label matches or a sink that writes URLs into a bookmark folder named with the classification label.
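
As a concrete illustration of the classifier contract described above (content in, an array of label/strength pairs out, empty array meaning no match), here is a toy keyword-based classifier. The names and scoring are invented for this sketch and are not FilterBubbler’s actual API:

```javascript
// Build a classify(text) function from a corpus mapping labels to keywords.
function keywordClassifier(corpus) {
  return function classify(text) {
    const words = text.toLowerCase().split(/\W+/);
    const results = [];
    for (const [label, keywords] of Object.entries(corpus)) {
      const hits = words.filter(w => keywords.includes(w)).length;
      if (hits > 0) {
        // Strength: fraction of the words that matched this label's keywords.
        results.push({ label, strength: hits / words.length });
      }
    }
    return results; // empty array => no match
  };
}

const classify = keywordClassifier({
  awesome: ['great', 'excellent'],
  stupid: ['boring', 'awful']
});

console.log(classify('a great and excellent page')); // one "awesome" match
console.log(classify('nothing notable here'));       // [] -- no match
```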

Maybe a diagram helps. The following configuration could tell you whether the page you are currently looking at is “awesome” or “stupid”!

Passing on the knowledge

The configurations for these arrangements are called “recipes” and can be loaded into your local configuration. A recipe is defined with a simple JSON format like so:

{
    "recipe-version": "0.1",
    "name": "My Recipe",
    "classifier": "naive-bayes",
    "corpora": [ … ],
    "source": [ "default" ],
    "sink": [ "default" ]
}

The simple bit of demo code above can help a user differentiate between fake news, UFO sightings and food blogs (more of a problem than you might expect in some cities). Currently the classifiers, sources and sinks must be one of the provided implementations and cannot be loaded dynamically in the initial prototype. In upcoming articles, we will expand this functionality and describe the challenges these activities present in the WebExtensions environment.


FilterBubbler on GitHub:
React based Material UI:
Redux WebExt:
Redux Form:
Mozilla browser polyfill:

Open Policy & Advocacy: Mozilla Poll Shows Americans Agree: Protect the Internet, Net Neutrality

With the partisan fight over net neutrality raging in Washington, we thought it would be interesting to see what others – people who use and depend on the internet – thought. We commissioned a poll, and the results are very interesting, but not surprising! Americans support net neutrality and the protections that the FCC’s Title II reclassification in 2015 provide. But the FCC is trying to remove those protections.

Title II classification for Internet Service Providers is the only consistent and legally viable way to protect net neutrality with enforceable rules under the Communications Act. Pai’s optimism that Title I classification would protect internet users runs counter to three D.C. Circuit Court decisions – and counter to the late Justice Antonin Scalia’s opinion on the Brand X Supreme Court case that described Title I classification as “an implausible reading of the statute.” There’s no reason to believe that the courts will change course now.

One of the most interesting questions in our poll asked respondents who they thought net neutrality was good for. For example, the Republican respondents said it was good for everyone: big business (53%), small business (75%), ISPs (60%), innovators (64%), and people like them (70%). Democrats pretty much agreed – and so do we! Small businesses and innovators shouldn’t have to choose whether to pay for “fast lanes” to compete with bigger players – and users should be able to choose what they do, see, and make online. And bigger players should welcome regulatory certainty of clear rules that are set out ahead of time – which a D.C. Circuit opinion said the FCC couldn’t do under Title I.

Your ISP shouldn’t be picking winners and losers online, and you should always have access to the entire internet. As our Manifesto says: The internet is a global public resource that must remain open and accessible. That’s definitely good for you! Net neutrality is key to ensuring that the internet is a global public resource that belongs to all users, not a few corporations. We already have cable television – the internet is, by design, different.

It was also notable to see strong bipartisan support of net neutrality – 76% of respondents support it. But people aren’t sure whether the U.S. government will protect their access to the internet: 70% don’t trust the Administration, 58% don’t trust the FCC, and a whopping 78% don’t trust Congress. It’s sad – but not surprising – that the debate in Washington isn’t reflective of that support across party lines; the discussion inside the beltway doesn’t match public sentiment nationally. This poll makes it clear that the public wants a government that protects, rather than guts, net neutrality. Access and openness should be an issue we all agree on, instead of a political experiment.

The more data we have available about what Americans outside the beltway really think, the clearer it becomes that they want the FCC to protect net neutrality – not punt. We’ll keep fighting to keep your internet access neutral – here and globally. That’s why we’re encouraging all internet users to file a comment with the FCC to explain why THEY think net neutrality matters and must be protected. And it’s why we’ll be joining a massive, internet-wide Day of Action next month to support net neutrality and to push back on the FCC’s proposed rules. And we’ll keep working for net neutrality – stay tuned for more on that front.

The post Mozilla Poll Shows Americans Agree: Protect the Internet, Net Neutrality appeared first on Open Policy & Advocacy.

The Mozilla Blog: New Mozilla Poll: Americans from Both Political Parties Overwhelmingly Support Net Neutrality

Our survey also reveals that a majority of Americans do not trust the government to protect Internet access


There’s something that Americans of varied political affiliations — Democrats, Republicans and Independents — largely agree on: the need to protect net neutrality.

A recent public opinion poll carried out by Mozilla and Ipsos revealed overwhelming support across party lines for net neutrality, with over three quarters of Americans (76%) supporting net neutrality. Eighty-one percent of Democrats and 73% of Republicans are in favor of it.

Another key finding: Most Americans do not trust the U.S. government to protect access to the Internet. Seventy percent of Americans place no or little trust in the Trump administration or Congress (78%) to do so.

Mozilla and Ipsos carried out the poll in late May, on the heels of the FCC’s vote to begin dismantling Obama-era net neutrality rules. We polled approximately 1,000 American adults across the U.S., a sample that included 354 Democrats, 344 Republicans, and 224 Independents.

At Mozilla, we believe net neutrality is integral to a healthy Internet: it enables Americans to say, watch and make what they want online, without meddling or interference from ISPs (Internet Service Providers, such as AT&T, Verizon, and Time Warner). Net neutrality is fundamental to free speech, competition, innovation and choice online.

As you may have seen, the FCC has proposed rolling back net neutrality protections that were enacted in 2015, and will collect public comments on net neutrality through August 18th. Then, hopefully drawing on those comments, the FCC will vote on whether to adopt the order, which would strip its own ability to create net neutrality rules.

In the coming months, Mozilla will continue to work with the majority of Americans who endorse net neutrality. We will directly engage with key policymakers. We will continue our advocacy work — like our net neutrality petition, which has garnered more than 100,000 signatures and over 50 hours of voicemail messages for the FCC (just a few of the almost five million comments on the order). And Mozilla will participate in the July 12 Day of Action, joining Fight for the Future, Free Press, Demand Progress and others to call for all Internet users to defend net neutrality.

Below, more key findings from the poll:

Respondents across the political spectrum (78%) believe that equal access to the Internet is a right, with large majorities of Democrats (88%), Independents (71%), and Republicans (67%) in agreement

Respondents have little trust in government institutions to protect their access to the Internet. The highest levels of distrust were reported for the Trump administration (70%), Congress (78%) and the FCC (58%)

When it comes to corporations protecting access to the Internet, 54% of respondents distrust ISPs

Americans view net neutrality as having a positive impact on most of society. Respondents said it is a “good thing” for small businesses (70%), individuals (69%), innovators (65%) and ISPs (55%), but fewer think that it will benefit big businesses (46%)

Below, the full results from our poll.

Q1. How much do you trust the following institutions, if at all, to protect your access to the internet?

ISPs (Internet service providers, such as AT&T, Verizon, Time Warner, etc.)

Total Democrat Republican Independent
Trust completely 9% 9% 10% 8%
Mostly trust 35% 38% 39% 27%
Trust a little bit 38% 38% 37% 37%
Do not trust at all 16% 12% 13% 26%
Don’t know 3% 3% 2% 2%

The Trump Administration

Total Democrat Republican Independent
Trust completely 10% 5% 21% 6%
Mostly trust 15% 4% 31% 14%
Trust a little bit 20% 10% 31% 24%
Do not trust at all 50% 78% 15% 46%
Don’t know 5% 2% 3% 9%

The Federal Communications Commission

Total Democrat Republican Independent
Trust completely 6% 7% 9% 3%
Mostly trust 28% 30% 32% 21%
Trust a little bit 34% 34% 35% 37%
Do not trust at all 24% 21% 16% 32%
Don’t know 9% 8% 8% 7%

Internet Companies

Total Democrat Republican Independent
Trust completely 8% 6% 11% 7%
Mostly trust 29% 34% 33% 21%
Trust a little bit 44% 43% 42% 42%
Do not trust at all 16% 12% 12% 28%
Don’t know 4% 4% 2% 1%


Congress

Total Democrat Republican Independent
Trust completely 6% 6% 8% 3%
Mostly trust 13% 13% 16% 10%
Trust a little bit 34% 37% 35% 30%
Do not trust at all 44% 41% 38% 52%
Don’t know 4% 3% 3% 5%

Q2. Which of the following statements do you agree more with?

Total Democrat Republican Independent
Consumers should be able to freely and quickly access their preferred content on the internet 86% 88% 81% 85%
ISPs should be able to offer fast lanes with quicker load times to websites that pay a premium 14% 12% 19% 15%

Q3. Based on all the things you know or have heard, do you support or oppose net neutrality?

(Note: Participants saw net neutrality defined as: “Net neutrality is the principle that internet service providers providing consumer connection to the Internet should treat all data on the internet the same, not giving specific advantages or penalties in access by user, content, website, platform, or application.”)

Total Democrat Republican Independent
Strongly support 30% 35% 25% 29%
Somewhat support 46% 46% 48% 42%
Somewhat oppose 20% 17% 20% 24%
Strongly oppose 4% 2% 6% 5%

Q4. Do you think that net neutrality is a good thing or a bad thing for the following groups?

Small businesses

Total Democrat Republican Independent
Bad thing 9% 9% 10% 10%
Good thing 70% 68% 75% 72%
Makes no difference 21% 23% 15% 18%

Big business

Total Democrat Republican Independent
Bad thing 21% 29% 15% 20%
Good thing 46% 41% 53% 50%
Makes no difference 33% 30% 32% 31%


Innovators

Total Democrat Republican Independent
Bad thing 10% 10% 11% 12%
Good thing 65% 68% 64% 64%
Makes no difference 25% 22% 25% 24%

Internet service providers

Total Democrat Republican Independent
Bad thing 18% 20% 18% 20%
Good thing 55% 55% 60% 55%
Makes no difference 26% 25% 22% 25%

People like me

Total Democrat Republican Independent
Bad thing 8% 6% 9% 11%
Good thing 69% 70% 70% 68%
Makes no difference 23% 24% 21% 21%

Q5. To what extent do you agree or disagree with the following statements?

Internet services providers will voluntarily look out for consumers’ best interests

Total Democrat Republican Independent
Strongly agree 11% 12% 10% 11%
Somewhat agree 26% 28% 28% 21%
Somewhat disagree 33% 32% 35% 33%
Strongly disagree 26% 22% 26% 33%
Don’t know 4% 6% 2% 3%

Equal access to the internet is a right

Total Democrat Republican Independent
Strongly agree 41% 52% 27% 44%
Somewhat agree 37% 36% 40% 31%
Somewhat disagree 10% 6% 17% 9%
Strongly disagree 8% 3% 13% 9%
Don’t know 4% 3% 3% 7%


About the Study

These are findings from an Ipsos poll conducted May 24-25, 2017 on behalf of Mozilla. For the survey, a sample of roughly 1,008 adults age 18+ from the continental U.S., Alaska and Hawaii was interviewed online in English. The sample includes 354 Democrats, 344 Republicans, and 224 Independents.

The sample for this study was randomly drawn from Ipsos’s online panel (see link below for more info on “Access Panels and Recruitment”), partner online panel sources, and “river” sampling (see link below for more info on the Ipsos “Ampario Overview” sample method) and does not rely on a population frame in the traditional sense. Ipsos uses fixed sample targets, unique to each study, in drawing sample. After a sample has been obtained from the Ipsos panel, Ipsos calibrates respondent characteristics to be representative of the U.S. Population using standard procedures such as raking-ratio adjustments. The source of these population targets is U.S. Census 2013 American Community Survey data. The sample drawn for this study reflects fixed sample targets on demographics. Post-hoc weights were made to the population characteristics on gender, age, race/ethnicity, region, and education.

Statistical margins of error are not applicable to online polls. All sample surveys and polls may be subject to other sources of error, including, but not limited to coverage error and measurement error. Where figures do not sum to 100, this is due to the effects of rounding. The precision of Ipsos online polls is measured using a credibility interval. In this case, the poll has a credibility interval of plus or minus 3.5 percentage points for all respondents. Ipsos calculates a design effect (DEFF) for each study based on the variation of the weights, following the formula of Kish (1965). This study had a credibility interval adjusted for design effect of the following (n=1,008, DEFF=1.5, adjusted Confidence Interval=5.0).

The poll also has a credibility interval plus or minus 5.9 percentage points for Democrats, plus or minus 6.0 percentage points for Republicans, and plus or minus 7.5 percentage points for Independents.

For more information about conducting research intended for public release or Ipsos’ online polling methodology, please visit our Public Opinion Polling and Communication page where you can  download our brochure, see our public release protocol, or contact us.

The post New Mozilla Poll: Americans from Both Political Parties Overwhelmingly Support Net Neutrality appeared first on The Mozilla Blog.

hacks.mozilla.org: Announcing WebVR on Mac via Firefox Nightly

Mozilla is pleased to announce WebVR is now available for macOS today via Firefox Nightly. This follows our announcement last week that WebVR is shipping in Firefox 55 for Windows.

More than 20% of Hacks readers (on desktop) and a quarter of web developers accessing the Mozilla Developer Network are on macOS. Many developers go to great lengths to test and develop in familiar environments and platforms. Our hope with this announcement is to reduce friction and empower this community with the first wave of official VR hardware support for macOS.

Our team has worked closely with the engineers at Valve to ensure that a high-performance implementation of WebVR is available on macOS just as Valve releases SteamVR into beta for macOS.

“With the combination of SteamVR and room-scale from Valve, and WebVR in Firefox for Mac from Mozilla, developers are now able to bring their content to all three PC platforms,” said Joe Ludwig from Valve. “We look forward to seeing all the new experiences enabled by these updates.”

The cross-platform nature of web development means that all the web applications, engines, and frameworks that we mentioned in last week’s post are available today in every WebVR-capable browser. Frameworks like A-Frame (used over 10 million times a month) make it easy to build and release cross-platform VR content on the web. And with the availability of WebVR on macOS, web developers can build and test content on their preferred platform, whether it’s Windows, macOS, or Linux (also available!)

If you haven’t worked in Firefox Nightly yet, check out WebVR in Firefox for macOS and discover a new world of VR on the Web! If you run into problems, have questions, or just want to show off what you’re doing, please tweet to @MozillaVR and say hi in the WebVR Slack.

Blog of Data: Measuring Search in Firefox

Today we are launching a new search data collection initiative in Firefox. This data will allow us to greatly improve the Firefox search experience while still respecting user privacy.

Search is both a fundamental method for navigating the Web and how Mozilla makes much of its revenue. Our research shows users have complicated search workflows. We know from internal user research studies that users often start a search from places like the Awesome Bar or search bar and then continue to refine their search on the search engine results page. We call these additional searches follow-on searches.

Firefox telemetry already includes a count of the searches users perform in all Firefox search bars. Firefox does not yet count follow-on searches. This is a real challenge for Mozilla, because we don’t understand how well the Firefox search experience works for our users.

A new experiment launching today will measure follow-on searches. When you search with one of the search engines that we include in Firefox, we will increment a counter for each follow-on search. Our telemetry system will count follow-on searches the same way we already count direct searches from our search bars. We won’t collect search queries (the words you type into the search box) nor any other Web browsing activity.
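
The counting model itself is deliberately simple. As an illustrative sketch only (this is not Firefox’s actual telemetry code, and the names are invented), direct and follow-on searches can be tallied per engine under separate keys, with no query text ever recorded:

```javascript
// Per-engine search counters keyed by kind; no query text is stored.
const searchCounts = new Map();

function recordSearch(engine, kind /* 'direct' | 'follow-on' */) {
  const key = `${engine}.${kind}`;
  searchCounts.set(key, (searchCounts.get(key) || 0) + 1);
}

// A user searches from the search bar, then refines twice on the
// search engine results page:
recordSearch('example-engine', 'direct');
recordSearch('example-engine', 'follow-on');
recordSearch('example-engine', 'follow-on');

console.log(searchCounts.get('example-engine.follow-on')); // 2
```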

We will roll out the new experiment to a random sample of 10% of Firefox release users. If successful, we will extend these follow-on search measurements to our entire release population as a part of our normal telemetry system.

We seek these new measurements to gain missing insight into a crucial browser interaction. These new measurements are consistent with our data collection principles. Data helps us decide where to apply our limited resources to improve Firefox, while also safeguarding user privacy.  Mozilla will continue to provide public documentation and user controls for all telemetry collected within Firefox. With better insight into search behavior, we can improve Firefox and continue to sustain Mozilla’s mission.

Javaun Moradi,  Sr. Product Manager, Firefox Search

Air Mozilla: Mozilla Weekly Project Meeting, 05 Jun 2017

The Monday Project Meeting.

Mozilla Add-ons Blog: June’s Featured Add-ons


Pick of the Month: lesspass

by Guillaume Vincent
Generate complex passwords and sign in automatically to all of your secure sites.

“I’ve been looking for this kind of password manager for years.”

Featured: LanguageTool Grammar Checker

by Daniel Naber
Get grammar help with this robust language tool available in more than two dozen languages.

“Easy to use and open source. A must-have if you want to write in a language that is not 100% in your grasp.”

Featured: Fraudscore

by Fraudscore
Block malware and shady ads that try to peep your personal info.

“Blocks a lot of popups and bad websites. Happy with product.”

Featured: Wikipedia Peek

by NiklasG
Displays handy pop-up previews of linked articles on Wikipedia pages.

“Saves me a LOT of time by presenting the essence of the linked page in a flash. A lookup that would often take a minute or more in the past now takes just a few seconds.”

Featured: Speed Dial (Lite)

by Mark Lumnert
Quickly access your favorite websites through a stylish panel layout.

“Very good.”

Featured: Honey

by Honey
A shopping assistant that saves you money! Honey helps you find coupons and price codes for thousands of stores throughout North America, India, Australia, and the U.K.

“I use this multiple times a day. Probably the best add-on I’ve ever downloaded on my browser.”

Nominate your favorite add-ons

Featured add-ons are selected by a community board made up of add-on developers, users, and fans. Board members change every six months. Here’s further information on AMO’s featured content policies.

If you’d like to nominate an add-on for featuring, please send it to amo-featured [at] mozilla [dot] org for the board’s consideration. We welcome you to submit your own add-on!

The post June’s Featured Add-ons appeared first on Mozilla Add-ons Blog.

Mozilla India: Rain Of Rust Campaign – Live

Hey awesome Mozillians!

As announced in our earlier email, the Rain of Rust campaign starts today. It is a month-long global campaign focused specifically on the Rust language, taking place throughout June 2017 in collaboration with the Rust community.

Here are all the things you need to start with the campaign

You can join the campaign Telegram channel

Interested in joining us? Let us know by adding a comment on this discourse topic.

Boots up! It’s gonna get Rusty! 🙂

[Teaching kits link]
[Github Link]
[Telegram group]

Mehul Patel.
On behalf of Mozilla India


The Mozilla Blog: Mozilla Brings Virtual Reality to all Firefox Users

We are delighted to announce that WebVR will be enabled by default for all Windows users with an HTC VIVE or Oculus Rift headset in Firefox 55 (currently scheduled for August 8th). WebVR transforms Virtual Reality (VR) into a first-class experience on the web, giving it the infinite possibilities found in the openness and interoperability of the Web Platform. When coupled with WebGL to render 3D graphics, these APIs transform the browser into a platform that allows VR content to be published to the Web and instantaneously consumed from any capable VR device.

We would like to invite content developers to use A-Frame, a framework used over 10 million times a month, or one of the other amazing web engines and frameworks, such as React VR, to target WebVR and begin to develop VR.

In the eight months since we announced an initial version of WebVR in Firefox Nightly edition (hidden behind a user preference), we’ve co-organized a workshop and incredible cross-vendor and community collaboration on the WebVR specification. This widespread adoption means that you can target WebVR today and expect it to work on every major device:

The browsers covered are Mozilla Firefox, Google Chrome, Microsoft Edge, Oculus Browser, Samsung Internet, and Safari on iOS. Headset support:

HTC Vive: Developer Edition; Chromium Experimental build
Oculus Rift: Developer Edition; Chromium Experimental build
Windows Mixed Reality: Windows 10 with Creators Update and Developer Mode enabled
Samsung Gear VR: Supported (Oculus Browser and Samsung Internet)
Google Daydream: Chrome for Android (with Origin Trial)
Cardboard: Chrome for Android via polyfill


What’s next

In the coming months, all of the browser makers are working to share our WebVR compliance tests, led by Microsoft and rapidly picked up by other engines. This collaborative effort will help ensure that content works across multiple platforms and implementations, increasing the reach of the content.

See WebVR Rocks for detailed instructions for browsers on all platforms, along with pointers to some incredible content that showcases the power of each of those platforms.

Whether it’s adding 360° video content to an existing site, walking through a museum, exploring temples in Cambodia, or even grooving in a virtual dance hall, it’s awesome to see what has already been built. Visit the A-Frame school to learn how to build your own WebVR experiences today!

If you do build something with A-Frame, we’d love to see it! Share on Twitter and mention @aframevr. We include the best submissions in our weekly newsletter, A Week of A-Frame.

The post Mozilla Brings Virtual Reality to all Firefox Users appeared first on The Mozilla Blog.

Air Mozilla: Weekly SUMO Community Meeting May 31, 2017

This is the SUMO weekly call.

hacks.mozilla.org: VR development from the comfort of your regular environment

If, like me, you’re new at developing VR content, maybe you’ve recently switched to a Windows PC. Coming from Mac and Linux systems, I find switching to and from Windows can be an annoying experience. If this is your situation too, I’ve researched some setups that minimize and sometimes avoid disruptive context switches. Here’s a walkthrough of my setup for virtual reality development, that maintains the comfort of a familiar context.

Run on Windows; use on Mac/Linux

Remote Desktop Protocol or RDP enables local computers to connect and control a desktop session on a remote Windows machine. RDP has very light overhead and is perfect for programming. However, VR development usually involves working with 3D modelling solutions, the kind of software that does not interact well with RDP.

In this brief video tutorial, I walk you through every step of the process for setting up RDP to enable your familiar dev environment. I’ll show you how to configure the MacOS RDP client, establish a remote session to a Windows PC, and overcome some graphic issues when launching modelling tools like Magica Voxel or Blender. This may require some trade-off between comfort and responsiveness.

Check the downloads section at the end of the article to find all the software you may need and pay special attention if your Windows version is Home Edition.

Sharing your keyboard and mouse

I was working with this setup, but while trying to record some videos I started to notice the effect of the RDP overhead, which caused unacceptable frame drops. To avoid this overhead while continuing to work with your regular mouse and keyboard, you can use a virtual KVM (keyboard, video, mouse) switch like Share Mouse to let your Mac peripherals control the Windows PC. However, if you go this route, you will need to physically connect your Windows PC to your monitor.

If your regular development environment is Linux, you can use Synergy instead, although note that there is no free version.

Edit: thanks to Avi Kac for pointing out that Synergy’s software is open source, so you can clone the repository and compile and install it yourself.

What’s next?

Much has changed since the last time I developed on Windows. I strongly recommend you read Windows Development Environment before you begin. The guide includes installing a package provider for Windows, terminal setup, as well as other useful tools, tips, and tricks, and offers a complete tutorial for getting started with modern Windows development.

Is it a perfect setup? Probably not for everyone, but it works for me. I would be interested in hearing what works for you. Please, join the conversation and tell us about your favorite setup in the comments or join the WebVR Slack to share with other practitioners.


If coming from Mac: RDP client, TeamViewer and Share Mouse for mouse and keyboard sharing.

If coming from Linux (not tested): several RDP clients available, TeamViewer and Synergy for mouse and keyboard sharing.

For the RDP solution to work in Windows Home Edition, you’ll need to install RDP Wrapper. Note that it is not yet compatible with the latest Windows Creators Update.

Open Policy & AdvocacyFair Use is Essential to Software Development

Creativity and innovation do not occur in a vacuum. The greatest songs, movies, and art are frequently at least inspired by and sometimes borrow heavily from predecessors and contemporaries. Software is no different. For this reason, today, we filed an amicus brief with the Federal Circuit advocating that the implementation of APIs should be considered a fair use under U.S. copyright law.

The story of the technology at stake in Oracle v. Google is about interoperability. When Google created the Android smartphone environment, it wanted to ensure that developers accustomed to writing Java-based software could immediately develop software for Android. To achieve this compatibility, Google used the “declaring code” of 37 Java API packages and implemented them with its own code.

Years later, shortly after it acquired Sun Microsystems and the Java platform, Oracle sued Google for copyright infringement. In 2014, the Federal Circuit Court of Appeals held that the declaring code and “structure, sequence, and organization” of the Java APIs were subject to copyright. We had filed a joint amicus brief, in that case, advocating for the opposite conclusion and were disappointed by the Court’s decision. Since then, the outstanding question has been whether Google’s use of the declaring code is nevertheless a fair use. Despite a jury unanimously deciding ‘yes’ last year, Oracle is arguing on its second appeal that the law requires the answer always be ‘no’.

Fair use is essential for software: it allows for new and creative re-implementations of code and is critical for interoperability between technologies and systems. Mozilla has been a consistent advocate for the importance of fair use rights, so much so that even the Mozilla Public License explicitly acknowledges that it “is not intended to limit any rights” under applicable copyright doctrines such as fair use. We hope that the Federal Circuit will recognize the value of fair use, and reach a decision that allows it to continue to play an important role in software innovation.

The post Fair Use is Essential to Software Development appeared first on Open Policy & Advocacy.

Air MozillaHands-On Parsing in Rust! - Toronto Rust Meetup

Hands-On Parsing in Rust! - Toronto Rust Meetup Michael Layzell provides a guided introduction to writing parsers using the popular Rust library called nom.

Air MozillaRust Libs Team Meeting 2017-05-30

Rust Libs Team Meeting 2017-05-30

The Mozilla BlogMozilla’s Giant, Distributed, Open-Source Hackathon

Our annual Global Sprint is June 1 and 2. Scientists, developers, artists and educators will swap ideas and code to make the web a better place


Your skills and expertise are needed to help combat fake news, empower people to protect their privacy online, and build a healthier Internet.

This Thursday and Friday, we’re hosting a giant, distributed, open-source hackathon. We want to fuel the network of people who are rolling up their sleeves to make the web a safer, more secure, more inclusive place.

Will you join us?

Mozilla’s annual Global Sprint is scheduled for June 1 and 2. It’s an international public event: an opportunity for anyone, anywhere to energize their open-source projects with fresh insight and input from around the world.

Participants include biostatisticians from Brazil, research scientists from Canada, engineers from Nepal, gamers from the U.S., and fellows from Princeton University. In years past, hundreds of individuals in more than 35 cities have participated in the Global Sprint.


Who’s invited

You. And anyone else who wants to make the web a better place. The Global Sprint isn’t just about open source code—teachers, artists and activists will share their ideas and projects, too.

So far, attendees span more than 20 countries. Come June 1, we will assemble on GitHub, on Twitter, in Gitter and elsewhere to collaborate. The Global Sprint has real-world locations, too. There are already more than 65 venues booked in Australia, Bangladesh, Brazil, Japan and beyond.

What we’re making

Presently, the Global Sprint has more than 84 registered open-source projects seeking contributions. Here are just a handful:

EchoBurst, a browser extension that uses natural language processing to combat fake news, polarization and toxicity online, and bolster constructive dialogue. “Finding common ground is the only antidote to the poison of polarization,” says EchoBurst’s Tyler K. “EchoBurst takes the first step of finding that common ground, which is engaging in discussion with people you don’t agree with.”

Cryptomancer, an RPG akin to Dungeons & Dragons that also teaches users about online security. “At Global Sprint, you can invent role-playing adventures for Cryptomancer,” says Chad Sansing. “The game that asks, ‘What if dragons, dwarves, and elves had an Internet of their own?’ If you need a copy of the game, we have a free and DRM-free copy waiting for you.”

Internet Safety Driving License, an online curriculum that teaches privacy best practices, webiquette and other digital skills. “Right now we are working on Module 1: Cyberbullying Awareness Skills,” says co-creator Lisa Wright. “This is the module that we are looking for help with at the Global Sprint.”

Aerogami, an interactive paper plane workshop that teaches the principles of aerodynamics. “Explaining engineering concepts is very difficult—there is too much new information and complex concepts that a student is supposed to learn in an hour of class,” says Aerogami’s Kshitiz Khanal. “I wanted to change this and make learning more interactive, intuitive and fun.”

GirlScript, a nonprofit project that empowers young Indian women through technology workshops. “We impart skills online and offline,” says GirlScript’s Anubha. “We are still preparing the online curriculum, however, offline trainings and workshops have already started in three cities of India: Nagpur, Bhopal and Ahmedabad.”

The Embryo Digital Atlas, an open-source, web-based platform to visualize complex experimental datasets of embryogenesis in an easy (and beautiful) way. “The aim of The Embryo Digital Atlas is to make public datasets of embryogenesis easily accessible for an audience ranging from curious citizen, to students, to professional researchers,” says creator Paul Villoutreix.

You can find all Global Sprint projects here.

How to participate

First, take a moment to learn about the basics of open source participation:


Register to collaborate at a site in your area, or as a virtual participant. Registration info is here.

Select a project that’s a good fit for your interests and skills. Information for Participants is here.

Bring a Project to the Sprint. Information for Project Leads is here.

See you there!

The post Mozilla’s Giant, Distributed, Open-Source Hackathon appeared first on The Mozilla Blog.

Mozilla L10NReuse Mozilla translations in external projects

Translation memory is now available for download in Translation Memory eXchange (TMX) file format for every project you localize in Pontoon. This functionality allows you to reuse Mozilla translations when translating other projects, even if you use other translation tools.

To download the TMX file, select Download Translation Memory from the profile menu of the translation interface. Files are generated on demand, which means translations submitted right before downloading the file are also included.
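As a sketch of what reusing the export looks like outside Pontoon: TMX is plain XML, so a few lines of Python are enough to pull source/target pairs out of the downloaded file. The helper name `extract_pairs` and the sample document below are invented for illustration; a real Pontoon export will contain many more translation units.

```python
import xml.etree.ElementTree as ET

# A tiny stand-in for a downloaded TMX file (contents invented).
SAMPLE_TMX = """<?xml version="1.0" encoding="UTF-8"?>
<tmx version="1.4">
  <header srclang="en-US" datatype="plaintext" segtype="sentence"
          adminlang="en" o-tmf="unknown" creationtool="pontoon"
          creationtoolversion="0.1"/>
  <body>
    <tu>
      <tuv xml:lang="en-US"><seg>Cancel</seg></tuv>
      <tuv xml:lang="sl"><seg>Prekliči</seg></tuv>
    </tu>
  </body>
</tmx>"""

def extract_pairs(tmx_text, source_lang, target_lang):
    """Return (source, target) segment pairs from a TMX document."""
    # ElementTree exposes xml:lang under the XML namespace URI.
    XML_LANG = "{http://www.w3.org/XML/1998/namespace}lang"
    pairs = []
    for tu in ET.fromstring(tmx_text).iter("tu"):
        segs = {tuv.get(XML_LANG): tuv.findtext("seg") for tuv in tu.iter("tuv")}
        if source_lang in segs and target_lang in segs:
            pairs.append((segs[source_lang], segs[target_lang]))
    return pairs

print(extract_pairs(SAMPLE_TMX, "en-US", "sl"))  # → [('Cancel', 'Prekliči')]
```

The resulting pairs can then be imported into whatever translation tool you use.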

How does Translation memory work in Pontoon?

Every approved translation is stored in Translation memory, whether it’s submitted through Pontoon or imported from VCS. Pontoon’s metasearch engine then displays exact and fuzzy matches from Translation memory in the Machinery tab and the Machinery page.
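The matching itself happens server-side in Pontoon, but the idea behind exact and fuzzy matches can be sketched in a few lines of Python with the standard library’s difflib. The memory contents, the `tm_matches` helper, and the 0.7 threshold below are made up for the example and are not Pontoon’s actual algorithm:

```python
from difflib import SequenceMatcher

# A toy translation memory: source string -> stored translation.
MEMORY = {
    "Open a new tab": "Odpri nov zavihek",
    "Close the window": "Zapri okno",
}

def tm_matches(query, memory, threshold=0.7):
    """Return (source, translation, score) entries whose source is an
    exact or fuzzy match for the query, best matches first."""
    results = []
    for source, translation in memory.items():
        # ratio() is 1.0 for an exact match, lower for fuzzy ones.
        score = SequenceMatcher(None, query.lower(), source.lower()).ratio()
        if score >= threshold:
            results.append((source, translation, round(score, 2)))
    return sorted(results, key=lambda r: r[2], reverse=True)

print(tm_matches("Open new tab", MEMORY))
```

Here the query “Open new tab” fuzzily matches the stored “Open a new tab” while the unrelated entry falls below the threshold.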

Open Policy & AdvocacyAadhaar isn’t progress — it’s dystopian and dangerous

This opinion piece by Mozilla Executive Chairwoman Mitchell Baker and Mozilla community member Ankit Gadgil first appeared in the Business Standard.

Imagine your government required you to consent to ubiquitous stalking in order to participate in society — to do things such as log into a wifi hotspot, register a SIM card, get your pension, or even obtain a food ration of rice. Imagine your government was doing this in ways your Supreme Court had indicated were illegal.

This isn’t some dystopian future, this is happening in India right now. The government of India is pushing relentlessly to roll out a national biometric identity database called Aadhaar, which it wants India’s billion-plus population to use for virtually all transactions and interactions with government services.

The Indian Supreme Court has directed that Aadhaar is only legal if it’s voluntary and restricted to a limited number of schemes. Seemingly disregarding this directive, Prime Minister Narendra Modi’s government has made verification through Aadhaar mandatory for a wide range of government services, including vital subsidies that some of India’s poorest citizens rely on to survive. Vital subsidies aren’t voluntary.

Even worse, the government of India is selling access to this database to private companies to use and combine with other datasets as they wish. This would allow companies to have access to some of your most intimate details and create detailed profiles of you, in ways you can’t necessarily see or control. The government can also share user data “in the interest of national security,” a term that remains dangerously undefined. There are little to no protections on how Aadhaar data is used, and certainly no meaningful user consent. Individual privacy and security cannot be adequately protected and users cannot have trust in systems when they do not have transparency or a choice in how their private information will be used.

This is all possible because India currently does not have any comprehensive national law protecting personal security through privacy. India’s Attorney General has recently cast doubt on whether a right to privacy exists in arguments before the Supreme Court, and has not addressed how individual citizens can enjoy personal security without privacy.

We have long argued that enacting a comprehensive privacy and data protection law should be a national policy priority for India. While it is encouraging to see the Attorney General also indicate to the Supreme Court in a separate case that the government of India intends to develop a privacy and data protection law by Diwali, it is not at all clear that the draft law the government will put forward will contain the robust protections needed to ensure the security and privacy of individuals in India. At the same time, the government of India is still exploiting this vacuum in legal protections by continuing to push ahead with a massive initiative that systematically threatens individuals’ security and privacy. The world is looking to India to be a leader on internet policy, but it is unclear if Prime Minister Modi’s government will seize this opportunity and responsibility for India to take its place as a global leader on protecting individual security and privacy.

The protection of individual security and privacy is critical to building safe online systems. It is the lifeblood of the online ecosystem, without which online efforts such as Aadhaar and Digital India are likely to fail or become deeply dangerous.

One of Mozilla’s founding principles is the idea that security and privacy on the internet are fundamental and must not be treated as optional. This core value underlines and guides all of Mozilla’s work on online privacy and security issues—including our product development and design decisions and policies, and our public policy and advocacy work. The Mozilla Community in India has also long sought to empower Indians to protect their privacy themselves including through national campaigns with privacy tips and tools. Yet, we also need the government to do its part to protect individual security and privacy.

The Mozilla Community in India has further been active in promoting the use, development, and adoption of open source software. Aadhaar fails here as well.

The Government of India has sought to soften the image of Aadhaar by wrapping it in the veneer of open source. It refers to the Aadhaar API as an “Open API” and its corporate partners as “volunteers.” As executive chairwoman and one of the leading contributors to Mozilla, one of the largest open source projects in the world, let us be unequivocally clear: There’s nothing open about this. The development was not open, the source code is not open, and companies that pay to get a license to access this biometric identity database are not volunteers. Moreover, requiring Indians to use Aadhaar to access so many services dangerously intensifies the already worrying trend toward centralisation of the internet. This is disappointing given the government of India’s previous championing of open source technologies and the open internet.

Prime Minister Modi and the government of India should pause the further roll out of Aadhaar until a strong, comprehensive law protecting individual security and privacy is passed. We further urge a thorough and open public process around these much-needed protections; India’s privacy law should not be passed in a rushed manner in the dead of night, as the original Aadhaar Act was. As an additional act of openness and transparency and to enable an informed debate, the government of India should make Aadhaar actually open source rather than use the language of open source for an initiative that has little if anything “open” about it. We hope India will take this opportunity to be a beacon to the world on how citizens should be protected.

The post Aadhaar isn’t progress — it’s dystopian and dangerous appeared first on Open Policy & Advocacy.

Mozilla Add-ons BlogView Source links removed from listing pages

Up until a few weeks ago, AMO had “View Source” links that allowed users to inspect the contents of listed add-ons. Due to performance issues, the viewer didn’t work reliably, so we disabled it to investigate the issue. Unfortunately, the standard error message that was shown when people tried to click the links led to some confusion, and we decided to remove them altogether.

What’s Next for Source Viewing

The open issue has most of the background and some ideas on where to go from here. It’s likely we won’t support this feature again. Instead, we can make it easier for developers to point users to their code repositories. I think this is an improvement, since sites like GitHub can provide a much better source-viewing experience. If you still want to inspect the actual package hosted on AMO, you can download it, decompress it, and use your tool of preference to give it a look.
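If you want to script that last step, an add-on package (.xpi) is just a zip archive. Here is a minimal Python sketch that lists what’s inside one; the in-memory archive and its file names are invented to stand in for a downloaded package, and `list_addon_files` is a hypothetical helper:

```python
import io
import zipfile

# Build a tiny stand-in for a downloaded .xpi (add-on packages are zip files).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("manifest.json", '{"name": "example"}')
    z.writestr("background.js", "// background script")

def list_addon_files(fileobj):
    """Return the sorted file names inside an add-on package (.xpi)."""
    with zipfile.ZipFile(fileobj) as z:
        return sorted(z.namelist())

print(list_addon_files(buf))  # → ['background.js', 'manifest.json']
```

For a real package you would open the downloaded .xpi path instead of the in-memory buffer, then extract and read whichever files you want to inspect.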

The post View Source links removed from listing pages appeared first on Mozilla Add-ons Blog.

Air MozillaLocalization Community Bi-Monthly Call, 25 May 2017

Localization Community Bi-Monthly Call These calls will be held in the Localization Vidyo room every second (14:00 UTC) and fourth (20:00 UTC) Thursday of the month and will be...

Mozilla L10NTaipei Localization Workshop

In front of the iconic Taipei 101.

In discussing our plans for this year’s event, the city of Taipei was on a short list of preferred locations. Peter from our Taipei community helped us solidify the plan. We set the date for April 21-22, in favour of cooler weather and to avoid typhoon season! This would be my third visit to Taiwan.

Working with our community leaders, we developed nomination criteria and sent out invitations. In addition to contributions to localized content, we also reviewed community activities in other areas, such as testing Pontoon, leading and managing community projects, and active participation in community channels.

360° view of the meetup.

In total, we invited representatives from 12 communities and all were represented at our event. We had a terrific response, more than 80% of the invitees accepted the invitation and were able to join us. It was a good mix of familiar faces and newcomers. We asked everyone to set personal goals in addition to team goals. Flod and Gary joined me for the second year in a row, while this was Axel’s first meeting with these communities in Asia.

Based on the experience and feedback from last year’s event, we switched things up, balancing discussion and presentation sessions with community-oriented breakout sessions throughout the weekend. These changes were well received.

Our venue was the Mozilla Taipei office, right at the heart of the financial centre, a few minutes from Taipei 101. On Saturday morning, Axel covered the removal of the Aurora branch and cross-channel, while later Flod talked about Quantum and Photon and their impact on localization. We then held a panel Q&A session with the localisers and l10n-drivers. Though we solicited questions in advance, most questions were spontaneous, both technical and non-technical. They covered a broad range of subjects, including Firefox, the new brand design, vendor management, and crowdsourcing practices at other companies. We hoped this new format would be interactive. And it was! We loved it, and from the survey, the response was positive too. In fact, we were asked to conduct another session the following day, so more questions could be answered.

Localisers were briefed on product updates.

The upcoming Firefox browser launch in autumn creates new challenges for our communities, including promoting the product in their languages. In anticipation, we are developing a Firefox l10n marketing kit for the communities. We took advantage of the event to collect input on local experiences that worked well and that didn’t. We covered communication channels, materials needed for organising an event, and key messages to promote the localised product. Flod shared the design of Photon, with a fun, new look and feel.

On Sunday, Flod demonstrated all the new development on Pontoon, including how to use the tool to work more efficiently. He covered the basic activities for the different roles: suggester, translator, and locale manager. He also covered advanced features such as batch processing, filters, and referencing other languages for inspiration, before describing future feature improvements. Though it was unplanned, many localisers tried their hand at the tool while they listened in attentively. It worked out better than expected!

Quality was the focus and theme for this year’s event. We shared test plans for desktop and mobile, then allowed the communities to spend the breakout sessions testing their localisation work. Axel also made a laptop available for testing the Windows installer. Each community worked on its group goals between sessions for the rest of the weekend.

Last stop of the 貓空纜車 (Maokong Gondola ride)

Of course, we found some time to play. Though the weather was not cooperative, we braved unseasonably cold, wet, and windy weather to take a ride on the 貓空纜車 (Taipei Maokong Gondola) over the Taipei Zoo in the dark. Irvin introduced the visitors to the local community contributors at 摩茲工寮 (Mozilla community space). Gary led a group to visit Taipei’s famed night markets. Others followed Joanna to her workplace at 三七茶堂 (7 Tea House) for an informative session on tea culture. Many brought home some local teas, the perfect souvenir from Taiwan.

Observing the making of the famous dumplings at 鼎泰豐 (Din Tai Fung at Taipei 101)

We were also spoiled by the abundance of food Taipei has to offer. The local community put a lot of thought into the planning. Among the challenges were the size of the group, the diversity of dietary needs, and the desire for a variety of cuisines. Flod and Axel had an eye-opening experience with all the possible food options! There was no shortage, between snacks, lunches, and dinners. Many of us gained a few pounds before heading home.

All of us were pleased with the active participation of all the attendees and their collaboration within the community and beyond. We hope you achieved your personal goals. We are especially grateful for the tremendous support from Peter, Joanna and Lora, who helped with each step of the planning: hotel selection, transportation directions, the visa application process, food and restaurant selections, and cultural activities. We could not have done it without their knowledge, patience, and advice in planning and execution. Behind the scenes, community veterans Bob and Irvin lent their support to make sure things went as seamlessly as possible. It was a true team effort to host a successful event of this size. Thanks to you all for creating this wonderful experience together.

We look forward to another event in Asia next year. In which country, using what format? We want to hear from you!