Blog of Data: Two Days, or How Long Until the Data is In

Two days.

It doesn’t seem like long, but that is how long you need to wait before looking at a day’s Firefox data and being sure that 95% of it has been received.

There are some caveats, of course. This only applies to current versions of Firefox (55 and later). This will very occasionally be wrong (like, say, immediately after Labour Day when people finally get around to waking up their computers that have been sleeping for quite some time). And if you have a special case (like trying to count nearly everything instead of just 95% of it) you might want to wait a bit longer.

But for most cases: Two Days.

As part of my 2017 Q3 Deliverables I looked into how long it takes clients to send their anonymous usage statistics to us using Telemetry. This was a culmination of earlier ponderings on client delay, previous work in establishing Telemetry client health, and an eighteen-month (or more!) push to actually look at our data from a data perspective (meta-data).

This led to a meeting in San Francisco where :mreid, :kparlante, :frank, :gfritzsche, and I settled upon a list of metrics that we ought to measure to determine how healthy our Telemetry system is.

Number one on that list: latency.

It turns out there’s a delay between a user doing something (opening a tab, for instance) and them sending that information to us. This is client delay and is broken into two smaller pieces: recording delay (how long from when the user does something until when we’ve put it in a ping for transport), and submission delay (how long it takes that ready-for-transport ping to get to Mozilla).
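
To make that split concrete, here is a toy sketch (in JavaScript, purely for illustration; the real analysis ran against our Telemetry datasets, and the field names here are made up) of how you could compute the 95th percentile of total client delay from per-ping timestamps:

function clientDelayPercentile(pings, percentile = 0.95) {
  // Each ping is assumed to carry three timestamps (in ms): when the user activity
  // happened, when the ping was assembled, and when our servers received it.
  const totals = pings
    .map(({ activityTime, createdTime, receivedTime }) => {
      const recordingDelay = createdTime - activityTime;   // activity -> ping ready
      const submissionDelay = receivedTime - createdTime;  // ping ready -> received
      return recordingDelay + submissionDelay;
    })
    .sort((a, b) => a - b);
  return totals[Math.min(totals.length - 1, Math.floor(percentile * totals.length))];
}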

If you want to know how many tabs were opened on Tuesday, September the 5th, 2017, you can’t tell on the day itself. All the tabs users open late at night won’t even be in pings yet, and anyone who puts their computer to sleep won’t send their pings until they wake their computer on the morning of the 6th.

This is where “Two Days” comes in: On Thursday the 7th you can be reasonably sure that we have received 95% of all pings containing data from the 5th. In fact, by the 7th, you should even have that data in some scheduled datasets like main_summary.

How do we know this? We measured it:

Chart: Client “main” Ping Delay for Latest Version. (Remember what I said about Labour Day? That’s the exceptional case on beta 56.)

Most data, most days, comes in within a single day. Add a day to get it into your favourite dataset, and there you have it: Two Days.

Why is this such a big deal? Currently the only information circulating in Mozilla about how long you need to wait for data is received wisdom from a pre-Firefox-55 (pre-pingsender) world. Some teams wait up to ten full days (!!) before trusting that the data they see is complete enough to make decisions about.

This slows Mozilla down. If we are making decisions on data, our data needs to be fast and reliably so.

It just so happens that, since Firefox 55, it has been.

Now comes the hard part: communicating that it has changed and changing those long-held rules of thumb and idées fixes to adhere to our new, speedy reality.

Which brings us to this blog post. Consider this your notice that we have looked into the latency of Telemetry data and it looks pretty darn quick these days. If you want to know about what happened on a particular day, you don’t need to wait for ten days any more.

Just Two Days. Then you can have your answers.

:chutten

(Much thanks to :gsvelto and :Dexter’s work on pingsender and using it for shutdown pings, :Dexter’s analyses on ping delay that first showed these amazing improvements, and everyone in the data teams for keeping the data flowing while I poked at SQL and rearranged words in documents.)

(This is a cross-post from chuttenblog. I have quite a few posts on there that you might like, including a series on Windows XP in Firefox, a bunch of Satisfying Graphs, and Reasons Why Data Science Is Hard)

Open Policy & Advocacy: Mozilla’s Cyber(in)security Summit

We’re excited to announce Mozilla’s Cyber(in)security Summit on October 24th in Washington, D.C. and streaming on Air Mozilla. Join us for a discussion on how we can all help secure the internet ecosystem.

Mozilla is excited to announce Cyber(in)security, a half-day policy summit that will explore the key issues surrounding the U.S. Government’s role in cybersecurity, the full cycle process of how the U.S. Government acquires, discloses and exploits vulnerabilities and what steps it can take to make Americans more secure. This is an important part of securing the global internet.

“With nonstop news of data breaches and ransomware attacks, it is critical to discuss the U.S. Government’s role in cybersecurity,” said Denelle Dixon, Mozilla’s Chief Business and Legal Officer. “User security is a priority and we believe it is necessary to have a conversation about the reforms needed to strengthen and improve the Vulnerabilities Equities Process to ensure that it is properly transparent and doesn’t compromise our national security or our fellow citizens’ privacy. Protecting cybersecurity is a shared responsibility and governments, tech companies and users all need to work together to make the internet as secure as possible.”

Cyber(in)security, to be held on Tuesday, October 24th at the Loft at 600 F in Washington, D.C., will take place from 1:00 pm to 7:00 pm ET. There will be four one-hour sessions followed by a networking happy hour.

You can RSVP here to attend.

The post Mozilla’s Cyber(in)security Summit appeared first on Open Policy & Advocacy.

Mozilla Gfx Team: WebRender newsletter #4

We skipped the newsletter for a few weeks (sorry about that!), but we are back. I don’t have a lot to report today, in part because I don’t yet have a good workflow to track the interesting changes (especially in Gecko), so I am most likely missing a lot of them, and in part because many of us are working on big pieces of the project that are taking time to come together; I am waiting for those to be completed before they make it into the newsletter.

Notable WebRender changes

  • Glenn started reorganizing the shader sources to make them compile faster (important for startup time).
  • Morris implemented the backface-visibility property.
  • Glenn added some optimizations to the clipping code.
  • Glenn improved the scheduling/batching of alpha passes to reduce the number of render target switches.
  • Sotaro improved error handling.
  • Glenn improved the transfer of the primitive data to the GPU by using pixel buffer objects instead of texture uploads.
  • Glenn added a web-based debugger UI to WebRender. It can inspect display lists, batches and can control various other debugging options.

Notable Gecko changes

  • Kats enabled layers-free mode for async scrolling reftests.
  • Kats and Morris enabled rendering tables in WebRender.
  • Gankro fixed a bug with invisible text not casting shadows.
  • Gankro improved the performance of generating text display items.

Air Mozilla: Mozilla Weekly Project Meeting, 18 Sep 2017

The Monday Project Meeting.

Blog of Data: Recording new Telemetry from add-ons

One of the successes for Firefox Telemetry has been the introduction of standardized data types; histograms and scalars.

They are well defined and allow teams to autonomously add new instrumentation. As they are listed in machine-readable files, our data pipeline can support them automatically and new probes just start showing up in different tools. A definition in one of those machine-readable files enables views like this:

The distribution view for the max_concurrent_tabs scalar on the TMO dashboard.

This works great when shipping probes in the Firefox core code, going through our normal release and testing channels, which takes a few weeks.

Going faster

However, often we want to ship code faster using add-ons: this may mean running experiments through Test Pilot and SHIELD or deploying Firefox features through system add-ons.

When adding new instrumentation in add-ons, there are two options:

  • Instrumenting the code in Firefox core code, then waiting a few weeks until it is in release.
  • Implementing a custom ping and submitting it through Telemetry, requiring additional client and pipeline work.

Neither is satisfactory; both involve significant manual effort just to run simple experiments or add features.

Filling the gap

This is one of the main pain points that comes up when adding new data collection, so over the last few months we have been planning how to solve it.

As the scope of an end-to-end solution is rather large, we are currently focused on getting the support built into Firefox first. This can enable some use-cases right away. We can then later add better and automated integration in our data pipeline and tooling.

The basic idea is to use the existing Telemetry APIs and seamlessly allow them to record data from new probes as well. To enable this, we will extend the API with registration of new probes from add-ons at runtime.

The recorded data will be submitted with the main ping, but in a separate bucket to tell them apart.

What we have now

We now support add-on registration of events from Firefox 56 on. We expect event recording to mostly be used with experiments, so it made sense to start here.

With this new addition, events can be registered at runtime by Mozilla add-ons instead of using a registry file like Events.yaml.

When starting, add-ons call nsITelemetry.registerEvents() with information on the events they want to record:

Services.telemetry.registerEvents("myAddon.ui", {
  "click": {
    methods: ["click"],
    objects: ["redButton", "blueButton"],
  }
});

Now, events can be recorded using the normal Telemetry API:

Services.telemetry.recordEvent("myAddon.ui", "click",
                               "redButton");

This event will be submitted with the next main ping in the “dynamic” process section. We can inspect them through about:telemetry:

The event view in about:telemetry, showing that an event ["myAddon.ui", "click", "redButton"] was successfully recorded with a timestamp.

On the pipeline side, the events are available in the events table in Redash. Custom analysis can access them in the main pings under payload/processes/dynamic/events.
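
If you are poking at pings directly, here is a hedged sketch of what that access looks like; the per-event array layout of [timestamp, category, method, object, value, extra] is an assumption worth checking against a real ping in about:telemetry:

function countAddonClicks(ping) {
  // `ping` is a parsed "main" ping; events registered at runtime live under the
  // "dynamic" process, per the payload/processes/dynamic/events path above.
  const processes = (ping.payload && ping.payload.processes) || {};
  const events = (processes.dynamic && processes.dynamic.events) || [];
  let clicks = 0;
  for (const [timestamp, category, method, object] of events) {
    if (category === "myAddon.ui" && method === "click") {
      console.log(`${object} clicked ${timestamp}ms after process start`);
      clicks++;
    }
  }
  return clicks;
}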

The larger plan

As mentioned, this is the first step of a larger project that consists of multiple high-level pieces. Not all of them are feasible in the short-term, so we intend to work towards them iteratively.

The main driving goals here are:

  1. Make it easy to submit new Telemetry probes from Mozilla add-ons.
  2. New Telemetry probes from add-ons are easily accessible, with minimal manual work.
  3. Uphold our standards for data quality and data review.
  4. Add-on probes should be discoverable from one central place.

This larger project then breaks down into roughly these main pieces:

Phase 1: Client work.

This is currently happening in Q3 & Q4 2017. We are focusing on adding & extending Firefox Telemetry APIs to register & record new probes.

Events are supported in Firefox 56, scalars will follow in 57 or 58, then histograms on a later train. The add-on probe data is sent out with the main ping.

Phase 2: Add-on tooling work.

To enable pipeline automation and data documentation, we want to define a variant of the standard registry formats (like Scalars.yaml). By providing utilities we can make it easier for add-on authors to integrate them.

Phase 3: Pipeline work.

We want to pull the probe registry information from add-ons together in one place, then make it available publicly. This will enable automation of data jobs, data discovery and other use-cases. From there we can work on integrating this data into our main datasets and tools.

The later phases are not set in stone yet, so please reach out if you see gaps or overlap with other projects.

Questions?

As always, if you want to reach out or have questions:

QMO: Firefox Developer Edition 56 Beta 12 Testday Results

Hello Mozillians!

As you may already know, last Friday – September 15th – we held a new Testday event for Developer Edition 56 Beta 12.

Thank you all for helping us make Mozilla a better place – Athira Appu.

From India team: Baranitharan & BaraniCool, Abirami & AbiramiSD, Vinothini.K, Surentharan, vishnupriya.v, krishnaveni.B, Nutan sonawane, Shubhangi Patil, Ankita Lahoti, Sonali Dhurjad, Yadnyesh Mulay, Ankitkumar Singh.

From Bangladesh team: Nazir Ahmed Sabbir, Tanvir Rahman, Maruf Rahman, Saddam Hossain, Iftekher Alam, Pronob Kumar Roy, Md. Raihan Ali, Sontus Chandra Anik, Saheda Reza Antora, Kazi Nuzhat Tasnem, Md. Rahimul Islam, Rahim Iqbal, Md. Almas Hossain, Ali sarif, Md.Majedul islam, JMJ Saquib, Sajedul Islam, Anika Alam, Tanvir Mazharul, Azmina Akter Papeya, sayma alam mow. 

Results:

– several test cases executed for the Preferences Search, CSS Grid Inspector Layout View and Form Autofill features.

– 6 bugs verified: 1219725, 1373935, 1391014, 1382341, 1383720, 1377182

– 1 new bug filed: 1400203

Thanks for another successful testday 🙂

We hope to see you all in our next events, all the details will be posted on QMO!

The Mozilla Blog: Busting the myth that net neutrality hampers investment

This week I had the opportunity to share Mozilla’s vision for an Internet that is open and accessible to all with the audience at MWC Americas.

I took this opportunity because we are at a pivotal point in the debate between the FCC, companies, and users over the FCC’s proposal to roll back protections for net neutrality. Net neutrality is a key part of ensuring freedom of choice to access content and services for consumers.

Earlier this week Mozilla’s Heather West wrote a letter to FCC Chairman Ajit Pai highlighting how net neutrality has fueled innovation in Silicon Valley and can still do so across the United States.

The FCC claims these protections hamper investment and are bad for business. And they may vote to end them as early as October. Chairman Pai calls his rule rollback “restoring internet freedom” but that’s really the freedom of the 1% to make decisions that limit the rest of the population.

At Mozilla we believe the current rules provide vital protections to ensure that ISPs don’t act as gatekeepers for online content and services. Millions of people commented on the FCC docket, including many who commented through Mozilla’s portal, to say that removing these core protections will hurt consumers and small businesses alike.

Mozilla is also very much focused on the issues preventing people from coming online beyond the United States. Before addressing the situation in the U.S., journalist Rob Pegoraro asked me what we discovered in the research we recently funded in seven other countries into the impact of zero rating on Internet use:


(Video courtesy: GSMA)

If you happen to be in San Francisco on Monday 18th September please consider joining Mozilla and the Internet Archive for a special night: The Battle to Save Net Neutrality. Tickets are available here.

You’ll be able to watch a discussion featuring former FCC Chairman Tom Wheeler; Representative Ro Khanna; Mozilla Chief Legal and Business Officer Denelle Dixon; Amy Aniobi, Supervising Producer, Insecure (HBO); Luisa Leschin, Co-Executive Producer/Head Writer, Just Add Magic (Amazon); Malkia Cyril, Executive Director of the Center for Media Justice; and Dane Jasper, CEO and Co-Founder of Sonic. The panel will be moderated by Gigi Sohn, Mozilla Tech Policy Fellow and former Counselor to Chairman Wheeler. It will discuss how net neutrality promotes democratic values, social justice and economic opportunity, what the current threats are, and what the public can do to preserve it.

The post Busting the myth that net neutrality hampers investment appeared first on The Mozilla Blog.

Air Mozilla: Webdev Beer and Tell: September 2017, 15 Sep 2017

Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on in...

Mozilla Add-ons Blog: Add-ons Update – 2017/09

Here’s your monthly add-ons update.

The Review Queues

In the past month, our team reviewed 2,490 listed add-on submissions:

  • 2,074 in fewer than 5 days (83%).
  • 89 between 5 and 10 days (4%).
  • 327 after more than 10 days (13%).

244 listed add-ons are awaiting review.

If you’re an add-on developer and are looking for contribution opportunities, please consider joining us. Visit our wiki page for more information.

Compatibility Update

We published the blog post for 56 and the bulk validation has been run. This is the last one of these we’ll do, since compatibility is a much smaller problem with the WebExtensions API.

Firefox 57 is now on the Nightly channel and will soon hit Beta, only accepting WebExtension add-ons by default. Here are some changes we’re implementing on AMO to ease the transition to 57.

Recognition

We would like to thank the following people for their recent contributions to the add-ons world:

  • Amola Singh
  • yfdyh000
  • bfred-it
  • Tiago Morais Morgado
  • Divya Rani
  • angelsl
  • Tim Nguyen
  • Atique Ahmed Ziad
  • Apoorva Pandey
  • Kevin Jones
  • ljbousfield
  • asamuzaK
  • Rob Wu
  • Tushar Sinai
  • Trishul Goel
  • zombie
  • tmm88
  • Christophe Villeneuve
  • Hemanth Kumar Veeranki

You can read more about their work in our recognition page.

The post Add-ons Update – 2017/09 appeared first on Mozilla Add-ons Blog.

Air Mozilla: Measuring the Subjective: The Performance Dashboard with Estelle Weyl

Performance varies quite a bit depending on the site, the environment and yes, the user. And users don't check your performance metrics. Instead, they perceive...

about:community: Firefox 56 new contributors

With the upcoming release of Firefox 56, we are pleased to welcome the 37 developers who contributed their first code change to Firefox in this release, 29 of whom were brand new volunteers! Please join us in thanking each of these diligent and enthusiastic individuals, and take a look at their contributions:

Air Mozilla: Reps Weekly Meeting Sep. 14, 2017

This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

hacks.mozilla.org: Building the DOM faster: speculative parsing, async, defer and preload

In 2017, the toolbox for making sure your web page loads fast includes everything from minification and asset optimization to caching, CDNs, code splitting and tree shaking. However, you can get big performance boosts with just a few keywords and mindful code structuring, even if you’re not yet familiar with the concepts above and you’re not sure how to get started.

The fresh web standard <link rel="preload">, which allows you to load critical resources faster, is coming to Firefox later this month. You can already try it out in Firefox Nightly or Developer Edition, and in the meantime, this is a great chance to review some fundamentals and dive deeper into performance associated with parsing the DOM.

Understanding what goes on inside a browser is the most powerful tool for every web developer. We’ll look at how browsers interpret your code and how they help you load pages faster with speculative parsing. We’ll break down how defer and async work and how you can leverage the new keyword preload.

Building blocks

HTML describes the structure of a web page. To make any sense of the HTML, browsers first have to convert it into a format they understand – the Document Object Model, or DOM. Browser engines have a special piece of code called a parser that’s used to convert data from one format to another. An HTML parser converts data from HTML into the DOM.

In HTML, nesting defines the parent-child relationships between different tags. In the DOM, objects are linked in a tree data structure capturing those relationships. Each HTML tag is represented by a node of the tree (a DOM node).

The browser builds up the DOM bit by bit. As soon as the first chunks of code come in, it starts parsing the HTML, adding nodes to the tree structure.

The DOM has two roles: it is the object representation of the HTML document, and it acts as an interface connecting the page to the outside world, like JavaScript. When you call document.getElementById(), the element that is returned is a DOM node. Each DOM node has many functions you can use to access and change it, and what the user sees changes accordingly.
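
For example, a quick sketch (assuming the page has an element with id="title"):

// Grab a DOM node and change it; what the user sees updates to match.
const heading = document.getElementById("title"); // hypothetical element
heading.textContent = "Hello, DOM!";
heading.style.color = "rebeccapurple";
console.log(heading.tagName, heading.childNodes.length);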

CSS styles found on a web page are mapped onto the CSSOM – the CSS Object Model. It is much like the DOM, but for the CSS rather than the HTML. Unlike the DOM, it cannot be built incrementally. Because CSS rules can override each other, the browser engine has to do complex calculations to figure out how the CSS code applies to the DOM.


The history of the <script> tag

As the browser is constructing the DOM, if it comes across a <script>...</script> tag in the HTML, it must execute it right away. If the script is external, it has to download the script first.

Back in the old days, in order to execute a script, parsing had to be paused. It would only start up again after the JavaScript engine had executed code from a script.

Why did the parsing have to stop? Well, scripts can change both the HTML and its product―the DOM. Scripts can change the DOM structure by adding nodes with document.createElement(). To change the HTML, scripts can add content with the notorious document.write() function. It’s notorious because it can change the HTML in ways that can affect further parsing. For example, the function could insert an opening comment tag making the rest of the HTML invalid.
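
As a small illustration of both APIs mentioned above (the markup involved is made up):

// Adding a node through the DOM API:
const item = document.createElement("li");
item.textContent = "added via the DOM";
document.querySelector("ul").appendChild(item); // assumes a <ul> exists

// Writing raw markup into the document while it is still being parsed
// (legacy behaviour that can change how the rest of the HTML is interpreted):
document.write("<li>added via document.write</li>");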

Scripts can also query something about the DOM, and if that happens while the DOM is still being constructed, it could return unexpected results.

document.write() is a legacy function that can break your page in unexpected ways and you shouldn’t use it, even though browsers still support it. For these reasons, browsers have developed sophisticated techniques to get around the performance issues caused by script blocking that I will explain shortly.

What about CSS?

JavaScript blocks parsing because it can modify the document. CSS can’t modify the document, so it seems like there is no reason for it to block parsing, right?

However, what if a script asks for style information that hasn’t been parsed yet? The browser doesn’t know what the script is about to execute—it may ask for something like the DOM node’s background-color which depends on the style sheet, or it may expect to access the CSSOM directly.
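
For instance, a one-liner like this (the element id is hypothetical) can only be answered once the relevant CSS has been downloaded and parsed:

// The background colour comes from the CSSOM, so the browser must have the
// style sheets processed before this script can run to completion.
const box = document.getElementById("box"); // hypothetical element
console.log(getComputedStyle(box).backgroundColor);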

Because of this, CSS may block parsing depending on the order of external style sheets and scripts in the document. If there are external style sheets placed before scripts in the document, the construction of DOM and CSSOM objects can interfere with each other. When the parser gets to a script tag, DOM construction cannot proceed until the JavaScript finishes executing, and the JavaScript cannot be executed until the CSS is downloaded, parsed, and the CSSOM is available.

Another thing to keep in mind is that even if the CSS doesn’t block DOM construction, it blocks rendering. The browser won’t display anything until it has both the DOM and the CSSOM. This is because pages without CSS are often unusable. If a browser showed you a messy page without CSS, then a few moments later snapped into a styled page, the shifting content and sudden visual changes would make a turbulent user experience.

See the Pen Flash of Unstyled Content by Milica (@micikato) on CodePen.

That poor user experience has a name – Flash of Unstyled Content, or FOUC.

To get around these issues, you should aim to deliver the CSS as soon as possible. Recall the popular “styles at the top, scripts at the bottom” best practice? Now you know why it was there!

Back to the future – speculative parsing

Pausing the parser whenever a script is encountered means that every script you load delays the discovery of the rest of the resources that were linked in the HTML.

If you have a few scripts and images to load, for example–

<script src="slider.js"></script>
<script src="animate.js"></script>
<script src="cookie.js"></script>
<img src="slide1.png">
<img src="slide2.png">

–the process used to go like this:


That changed around 2008 when IE introduced something they called “the lookahead downloader”. It was a way to keep downloading the files that were needed while the synchronous script was being executed. Firefox, Chrome and Safari soon followed, and today most browsers use this technique under different names. Chrome and Safari have “the preload scanner” and Firefox – the speculative parser.

The idea is: even though it’s not safe to build the DOM while executing a script, you can still parse the HTML to see what other resources need to be retrieved. Discovered files are added to a list and start downloading in the background on parallel connections. By the time the script finishes executing, the files may have already been downloaded.

The waterfall chart for the example above now looks more like this:

The download requests triggered this way are called “speculative” because it is still possible that the script could change the HTML structure (remember document.write ?), resulting in wasted guesswork. While this is possible, it is not common, and that’s why speculative parsing still gives big performance improvements.

While other browsers only preload linked resources this way, in Firefox the HTML parser also runs the DOM tree construction algorithm speculatively. The upside is that when a speculation succeeds, there’s no need to re-parse a part of the file to actually compose the DOM. The downside is that there’s more work lost if and when the speculation fails.

(Pre)loading stuff

This manner of resource loading delivers a significant performance boost, and you don’t need to do anything special to take advantage of it. However, as a web developer, knowing how speculative parsing works can help you get the most out of it.

The set of things that can be preloaded varies between browsers. All major browsers preload:

  • scripts
  • external CSS
  • and images from the <img> tag

Firefox also preloads the poster attribute of video elements, while Chrome and Safari preload @import rules from inlined styles.

There are limits to how many files a browser can download in parallel. The limits vary between browsers and depend on many factors, like whether you’re downloading all files from one or from several different servers and whether you are using HTTP/1.1 or HTTP/2 protocol. To render the page as quickly as possible, browsers optimize downloads by assigning priority to each file. To figure out these priorities, they follow complex schemes based on resource type, position in the markup, and progress of the page rendering.

While doing speculative parsing, the browser does not execute inline JavaScript blocks. This means that it won’t discover any script-injected resources, and those will likely be last in line in the fetching queue.

var script = document.createElement('script');
script.src = "//somehost.com/widget.js";
document.getElementsByTagName('head')[0].appendChild(script);

You should make it easy for the browser to access important resources as soon as possible. You can either put them in HTML tags or include the loading script inline and early in the document. However, sometimes you want some resources to load later because they are less important. In that case, you can hide them from the speculative parser by loading them with JavaScript late in the document.

You can also check out this MDN guide on how to optimize your pages for speculative parsing.

defer and async

Still, synchronous scripts blocking the parser remains an issue. And not all scripts are equally important for the user experience; think of scripts for tracking and analytics. Solution? Make it possible to load these less important scripts asynchronously.

The defer and async attributes were introduced to give developers a way to tell the browser which scripts to handle asynchronously.

Both of these attributes tell the browser that it may go on parsing the HTML while loading the script “in background”, and then execute the script after it loads. This way, script downloads don’t block DOM construction and page rendering. Result: the user can see the page before all scripts have finished loading.

The difference between defer and async is the moment at which they start executing the scripts.

defer was introduced before async. Its execution starts after parsing is completely finished, but before the DOMContentLoaded event. It guarantees scripts will be executed in the order they appear in the HTML and will not block the parser.

async scripts execute at the first opportunity after they finish downloading and before the window’s load event. This means it’s possible (and likely) that async scripts are not executed in the order in which they appear in the HTML. It also means they can interrupt DOM building.

Wherever they are specified, async scripts load at a low priority. They often load after all other scripts, without blocking DOM building. However, if an async script finishes downloading sooner, its execution can block DOM building and all synchronous scripts that finish downloading afterwards.

Note: Attributes async and defer work only for external scripts. They are ignored if there’s no src.
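
Putting the three behaviours side by side (file names are placeholders, not recommendations):

<script src="analytics.js" async></script>   <!-- runs as soon as it has downloaded -->
<script src="ui-widgets.js" defer></script>  <!-- runs after parsing, in document order -->
<script src="critical.js"></script>          <!-- blocks the parser while it downloads and runs -->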

preload

async and defer are great if you want to put off handling some scripts, but what about stuff on your web page that’s critical for user experience? Speculative parsers are handy, but they preload only a handful of resource types and follow their own logic. The general goal is to deliver CSS first because it blocks rendering. Synchronous scripts will always have higher priority than asynchronous. Images visible within the viewport should be downloaded before those below the fold. And there are also fonts, videos, SVGs… In short – it’s complicated.

As an author, you know which resources are the most important for rendering your page. Some of them are often buried in CSS or scripts and it can take the browser quite a while before it even discovers them. For those important resources you can now use <link rel="preload"> to communicate to the browser that you want to load them as soon as possible.

All you need to write is:

<link rel="preload" href="very_important.js" as="script">

You can link pretty much anything and the as attribute tells the browser what it will be downloading. Some of the possible values are:

  • script
  • style
  • image
  • font
  • audio
  • video

You can check out the rest of the content types on MDN.

Fonts are probably the most important thing that gets hidden in the CSS. They are critical for rendering the text on the page, but they don’t get loaded until the browser is sure that they are going to be used. That check happens only after the CSS has been parsed and applied, and the browser has matched CSS rules to the DOM nodes. This happens fairly late in the page loading process and it often results in an unnecessary delay in text rendering. You can avoid that delay by preloading fonts with <link rel="preload">.

One thing to pay attention to when preloading fonts is that you also have to set the crossorigin attribute even if the font is on the same domain:

<link rel="preload" href="font.woff" as="font" crossorigin>

The preload feature has limited support at the moment as the browsers are still rolling it out, but you can check the progress here.

Conclusion

Browsers are complex beasts that have been evolving since the 90s. We’ve covered some of the quirks from that legacy and some of the newest standards in web development. Writing your code with these guidelines will help you pick the best strategies for delivering a smooth browsing experience.

If you’re excited to learn more about how browsers work here are some other Hacks posts you should check out:

Quantum Up Close: What is a browser engine?
Inside a super fast CSS engine: Quantum CSS (aka Stylo)

The Mozilla Blog: Public Event: The Fate of Net Neutrality in the U.S.

Mozilla is hosting a free panel at the Internet Archive in San Francisco on Monday, September 18. Hear top experts discuss why net neutrality matters and what we can do to protect it


Net neutrality is under siege.

Despite protests from millions of Americans, FCC Chairman Ajit Pai is moving forward with plans to dismantle hard-won open internet protections.

“Abandoning these core protections will hurt consumers and small businesses alike,” Mozilla’s Heather West penned in an open letter to Pai earlier this week, during Pai’s visit to San Francisco.

The FCC may vote to gut net neutrality as early as October. What does this mean for the future of the internet?

Join Mozilla and the nation’s leading net neutrality experts at a free, public event on September 18 to discuss just this. We will gather at the Internet Archive to discuss why net neutrality matters to a healthy internet — and what can be done to protect it.

RSVP: The Battle to Save Net Neutrality

Net neutrality is under siege. Mozilla is hosting a public panel in San Francisco to explore what’s ahead

<WHAT>

The Battle to Save Net Neutrality, a reception and discussion in downtown San Francisco. Register for free tickets

<WHO>

Mozilla Tech Policy Fellow and former FCC Counselor Gigi Sohn will moderate a conversation with the nation’s leading experts on net neutrality, including Mozilla’s Chief Legal and Business Officer, Denelle Dixon, and:

Tom Wheeler, Former FCC Chairman who served under President Obama and was architect of the 2015 net neutrality rules

Representative Ro Khanna, (D-California), who represents California’s 17th congressional district in the heart of Silicon Valley

Amy Aniobi, Supervising Producer of HBO’s “Insecure”

Luisa Leschin, Co-Executive Producer/Head Writer of Amazon’s “Just Add Magic”

Malkia Cyril, Executive Director of the Center for Media Justice

and Dane Jasper, CEO and Co-Founder of Sonic.

<WHEN>

Monday, September 18, 2017 from 6 p.m. to 9 p.m. PT

<WHERE>

The Internet Archive, 300 Funston Avenue San Francisco, CA 94118

RSVP: The Battle to Save Net Neutrality

The post Public Event: The Fate of Net Neutrality in the U.S. appeared first on The Mozilla Blog.

Web Application Security: Verified cryptography for Firefox 57

Traditionally, software is produced in this way: write some code, maybe do some code review, run unit tests, and then hope it is correct. Hard experience shows that it is very difficult for programmers to write bug-free software. These bugs are sometimes caught in manual testing, but many still make it to users and must then be fixed in patches or subsequent versions. This works for most software, but it’s not a great way to write cryptographic software; users expect and deserve assurances that the code providing security and privacy is well written and bug free.

Even innocuous looking bugs in cryptographic primitives can break the security properties of the overall system and threaten user security. Unfortunately, such bugs aren’t uncommon. In just the last year, popular cryptographic libraries have issued dozens of CVEs for bugs in their core cryptographic primitives or for incorrect use of those primitives. These bugs include many memory safety errors, some side-channel leaks, and a few correctness errors, for example, in bignum arithmetic computations… So what can we do?

Fortunately, recent advances in formal verification allow us to significantly improve the situation by building high assurance implementations of cryptographic algorithms. These implementations are still written by hand, but they can be automatically analyzed at compile time to ensure that they are free of broad classes of bugs. The result is that we can have much higher confidence that our implementation is correct and that it respects secure programming rules that would usually be very difficult to enforce by hand.

This is a very exciting development and Mozilla has partnered with INRIA and Project Everest  (Microsoft Research, CMU, INRIA) to bring components from their formally verified HACL* cryptographic library into NSS, the security engine which powers Firefox. We believe that we are the first major Web browser to have formally verified cryptographic primitives.

The first result of this collaboration, an implementation of the Curve25519 key establishment algorithm (RFC7748), has just landed in Firefox Nightly. Curve25519 is widely used for key-exchange in TLS, and was recently standardized by the IETF. As an additional bonus, besides being formally verified, the HACL* Curve25519 implementation is also almost 20% faster on 64 bit platforms than the existing NSS implementation (19500 scalar multiplications per second instead of 15100), which represents an improvement in both security and performance for our users. We expect to ship this new code as part of our November Firefox 57 release.

Over the next few months, we will be working to incorporate other HACL* algorithms into NSS, and will also have more to say about the details of how the HACL* verification works and how it gets integrated into NSS.

Benjamin Beurdouche, Franziskus Kiefer & Tim Taubert

The post Verified cryptography for Firefox 57 appeared first on Mozilla Security Blog.

Air Mozilla: The Joy of Coding - Episode 112

mconley livehacks on real Firefox bugs while thinking aloud.

Open Policy & Advocacy: Announcing the 2017 Ford-Mozilla Open Web Fellows!

At the foundation of our net policy and advocacy platforms at Mozilla is our support for the growing network of leaders all over the world. For the past two years, Mozilla and the Ford Foundation have partnered with over fourteen organizations, pairing them with progressive technologists operating at the intersection of open web security and policy; and in 2017-2018 we plan to continue our Open Web Fellows Program with our largest cohort yet! Following months of deliberation, and a recruitment process that included close to 300 competitive applicants from our global community, we’re delighted to introduce you to our 2017-2018 Open Web Fellows:

This year, we’ll host an unprecedented set of eleven fellows embedded in four incumbent and seven new host organizations! These fellows will partner with their host organizations over the next 10 months to work on independent research and project development that amplifies issues of Internet Health, privacy and security, as well as net neutrality and open web policy on/offline.

If you’d like to learn more about our fellows, we encourage you to browse their bios, read up on their host organizations, and follow them on Twitter! We look forward to updating you on our Fellows’ progress, and can’t wait to learn more from them over the coming months. Stay tuned!

The post Announcing the 2017 Ford-Mozilla Open Web Fellows! appeared first on Open Policy & Advocacy.

The Mozilla Blog: Mozilla Announces 15 New Fellows for Science, Advocacy, and Media

These technologists, researchers, activists, and artists will spend the next 10 months making the Internet a better place


Today, Mozilla is announcing 15 new Fellows in the realms of science, advocacy, and media.

Fellows hail from Mexico, Bosnia & Herzegovina, Uganda, the United States, and beyond. They are multimedia artists and policy analysts, security researchers and ethical hackers.

Over the next several months, Fellows will put their diverse abilities to work making the Internet a healthier place. Among their many projects are initiatives to make biomedical research more open; uncover technical solutions to online harassment; teach privacy and security fundamentals to patrons at public libraries; and curtail mass surveillance within Latin American countries.


<Meet our Ford-Mozilla Open Web Fellows>


The 2017 Ford-Mozilla Open Web Fellows

Ford-Mozilla Open Web Fellows are talented technologists who are passionate about privacy, security, and net neutrality. Fellows embed with international NGOs for 10 months to work on independent research and project development.

Past Open Web Fellows have helped build open-source whistle-blowing software, and analyzed discriminatory police practice data.

Our third cohort of Open Web Fellows was selected from more than 300 applications. Our 11 Fellows for 2017 and their host organizations are:

Sarah Aoun | Hollaback!

Carlos Guerra | Derechos Digitales

Sarah Kiden | Research ICT Africa

Bram Abramson | Citizen Lab

Freddy Martinez | Freedom of the Press Foundation

Rishab Nithyanand | Data & Society

Rebecca Ricks | Human Rights Watch

Aleksandar Todorović | Bits of Freedom

Maya Wagoner | Brooklyn Public Library

Orlando Del Aguila | Majal

Nasma Ahmed | MPower Change

Learn more about our Open Web Fellows.


<Meet our Mozilla Fellows in Science>

Mozilla’s Open Science Fellows work at the intersection of research and openness. They foster the use of open data and open source software in the scientific community, and receive training and support from Mozilla to hone their skills around open source, participatory learning, and data sharing.

Past Open Science fellows have developed online curriculum to teach the command line and scripting languages to bioinformaticians. They’ve defined statistical programming best-practices for instructors and open science peers. And they’ve coordinated conferences on the principles of working open.

Our third cohort of Open Science Fellows — supported by the Siegel Family Endowment — was selected from a record pool of 1,090 applications. Our two 2017 fellows are:

Amel Ghouila

A computer scientist by background, Amel earned her PhD in Bioinformatics and is currently a bioinformatician at Institut Pasteur de Tunis. She works on the frame of the pan-African bioinformatics network H3ABionet, supporting researchers and their projects while developing bioinformatics capacity throughout Africa. Amel is passionate about knowledge transfer and working open to foster collaborations and innovation in the biomedical research field. She is also passionate about empowering and educating young girls — she launched the Technovation Challenge Tunisian chapter to help Tunisian girls learn how to address community challenges by designing mobile applications.

Follow Amel on Twitter and Github.


Chris Hartgerink

Chris is an applied statistics PhD-candidate at Tilburg University, as part of the Metaresearch group. He has contributed to open science projects such as the Reproducibility Project: Psychology. He develops open-source software for scientists. And he conducts research on detecting data fabrication in science. Chris is particularly interested in how the scholarly system can be adapted to become a sustainable, healthy environment with permissive use of content, instead of a perverse system that promotes unreliable science. He initiated Liberate Science to work towards such a system.

Follow Chris on Twitter and Github.

Learn more about our Open Science Fellows.


<Meet our Mozilla Fellows in Media>

This year’s Mozilla Fellows cohort will also be joined by media producers. These makers and activists have created public education and engagement work that explores topics related to privacy and security. Their work incites curiosity and inspires action, and over their fellowship year they will work closely with the rest of the Mozilla Fellows cohort to understand and explain the most urgent issues facing the open Internet. Through a partnership with the Open Society Foundation, these fellows join other makers who have benefited from Mozilla’s first grants to media makers. Our two 2017 fellows are:

Hang Do Thi Duc

Hang Do Thi Duc is a media maker whose artistic work is about the social web and the effect of data-driven technologies on identity, privacy, and society. As a German Fulbright and DAAD scholar, Hang received an MFA in Design and Technology at Parsons in New York City. She most recently created Data Selfie, a browser extension that aims to provide users with a personal perspective on data mining and predictive analytics through their Facebook consumption.

Joana Varon

Joana is Executive Directress and Creative Chaos Catalyst at Coding Rights, a women-run organization working to expose and redress the power imbalances built into technology and its application. Coding Rights focuses on imbalances that reinforce gender and North/South inequalities.


Meet more Mozilla fellows. The Mozilla Tech Policy Fellowship, launched in June 2017, brings together tech policy experts from around the world. Tech Policy Fellows participate in policy efforts to improve the health of the Internet. Learn more about the Tech Policy Fellowship and the individuals involved.

The post Mozilla Announces 15 New Fellows for Science, Advocacy, and Media appeared first on The Mozilla Blog.

Mozilla VR Blog: SHA Hacker Camp: Learning a byte about Virtual Reality on the Web

SHA (Still Hacking Anyways) is a nonprofit, outdoor hacker-camp series organized every four years. SHA2017 was held this August 4-8 in Zeewolde, Netherlands.

Attended by more than 3500 hackers, SHA was a fun, knowledge-packed four-day festival. The festival featured a wide range of talks and workshops, including sessions related to Internet of Things (IoT), hardware and software hacking, security, privacy, and much more!

Ram Dayal Vaishnav, a Tech Speaker from Mozilla’s Indian community, presented a session on WebVR, Building a Virtual-Reality Website using A-Frame. Check out a video recording of Ram’s talk:

Head on over to Ram’s personal blog to catch a few more highlights from SHA2017.

Air Mozilla: Rust Berlin Meetup September 2017

Talks: An overview of the Servo architecture by Emilio and rust ❤️ sensors by Claus

hacks.mozilla.org: Experimenting with WebAssembly and Computer Vision

This past summer, four time-crunched engineers with no prior WebAssembly experience began experimenting. The result after six weeks of exploration was WebSight: a real-time face detection demo based on OpenCV.

By compiling OpenCV to WebAssembly, the team was able to reuse a well-tested C/C++ library directly in the browser and achieve performance an order of magnitude faster than a similar JavaScript library.

I asked the team members—Brian Feldman, Debra Do, Yervant Bastikian, and Mark Romano—to write about their experience.

Note: The report that follows was written by the team members mentioned above.

WebAssembly (“wasm”) made a splash this year with its MVP release, and eager to get in on the action, we set out to build an application that made use of this new technology.

We’d seen projects like WebDSP compile their own C++ video filters to WebAssembly, an area where JavaScript has historically floundered due to the computational demands of some algorithms. This got us interested in pushing the limits of wasm, too. We wanted to use an existing, specialized, and time-tested C++ library, and after much deliberation, we landed on OpenCV, a popular open-source computer vision library.

Computer vision is highly demanding on the CPU, and thus lends itself well to wasm. Building off of some incredible work put forward by the UC Irvine SysArch group and Github user njor, we were able to update outdated asm.js builds of OpenCV to compile with modern versions of Emscripten, exposing much of OpenCV’s core functionality in JavaScript callable formats.

Working with these Emscripten builds went much differently than we expected. As Web developers, we’re used to writing code and being able to iterate and test very quickly. Introducing a large C++ library with 10-15 minute build times was a foreign experience, especially when our normal working environments are Webpack, Nodemon, and hot reloading everywhere. Once compiled, we approached the wasm build as a bit of a black box: the module started as an immutable beast of an object, and though we understood it more and more throughout the process, it never became ‘transparent’.

The efforts spent on compiling the wasm file, and then incorporating it into our JavaScript were worthwhile: it outperformed JavaScript with ease, and was significantly quicker than WebAssembly’s predecessor, asm.js.

We compared these formats through the use of a face detection algorithm. The architecture of the functions that drove these algorithms was the same, the only difference was the implementation language for each algorithm. Using web workers, we passed video stream data into the algorithms, which returned with the coordinates of a rectangle that would frame any faces in the image, and calculated an FPS measure. While the range of FPS is dependent on the user’s machine and the browser being used (Firefox takes the cake!), we noted that the FPS of the wasm-powered algorithm was consistently twice as high as the FPS of the asm.js implementation, and twenty times higher than the JS implementation, solidifying the benefits of web assembly.
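
For a rough idea of the shape of that round-trip, here is a hedged sketch; the worker file name and the message fields are placeholders rather than WebSight's actual API:

// Main thread: ship a video frame to the worker, get back face rectangles, and time it.
const worker = new Worker("face-detect.worker.js"); // hypothetical worker script
const canvas = document.createElement("canvas");
const ctx = canvas.getContext("2d");

worker.onmessage = ({ data }) => {
  // data.faces is assumed to be [{ x, y, width, height }, ...], one per detected face,
  // and startedAt is assumed to be echoed back by the worker with its result.
  const fps = 1000 / (performance.now() - data.startedAt);
  console.log(`~${fps.toFixed(1)} FPS`, data.faces);
};

function sendFrame(video) {
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  ctx.drawImage(video, 0, 0);
  const frame = ctx.getImageData(0, 0, canvas.width, canvas.height);
  worker.postMessage(
    { pixels: frame.data.buffer, width: frame.width, height: frame.height,
      startedAt: performance.now() },
    [frame.data.buffer] // transfer the pixel buffer instead of copying it
  );
}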

Building in cutting edge technology can be a pain, but the reward was worth the temporary discomfort. Being able to use native, portable, C/C++ code in the browser, without third-party plugins, is a breakthrough. Our project, WebSight, successfully demonstrated the use of OpenCV as a WebAssembly module for face and eye detection. We’re really excited about the  future of WebAssembly, especially the eventual addition of garbage collection, which will make it easier to efficiently run other high-level languages in the browser.

You can view the demo’s GitHub repository at github.com/Web-Sight/WebSight.

Air Mozilla: Martes Mozilleros, 12 Sep 2017

Bi-weekly meeting to talk (in Spanish) about the state of Mozilla, the community and its projects.

Open Policy & Advocacy: Welcome to San Francisco, Chairman Pai – We Depend on Net Neutrality

This is an open letter to FCC Chairman Ajit Pai as he arrives in San Francisco for an event. He has said that Silicon Valley is a magically innovative place – and we agree. An open internet makes that possible, and enables other geographical areas to grow and innovate too.

Welcome to San Francisco, Chairman Pai! As you have noted in the past, the Bay Area has been a hub for many innovative companies. Our startups, technology companies, and service providers have added value for billions of users online.

The internet is a powerful tool for the economy and creators. No one owns the internet – we can all create, shape, and benefit from it. And for the future of our society and our economy, we need to keep it that way – open and distributed.

We are very concerned by your proposal to roll back net neutrality protections that the FCC enacted in 2015 and that are currently in place. That enforceable policy framework provides vital protections to ensure that ISPs don’t act as gatekeepers for online content and services. Abandoning these core protections will hurt consumers and small businesses alike.

As network engineers have noted, your proposal mischaracterizes many aspects of the internet, and does not show that the 2015 open internet order would benefit anyone other than major broadband providers. Instead, this seems like a politically loaded decision made about rules that have not been tested, either in the courts or in the field. User rights, the American economy, and free speech should not be used as political footballs. We deserve more from you, an independent regulator.

Broadband providers are in a position to restrict internet access for their own business objectives: favoring their own products, blocking sites or brands, or charging different prices (either to users or to content providers) and offering different speeds depending on content type. Net neutrality prohibits network providers from discriminating based on content, so everyone has equal access to potential users – whether you are a powerful incumbent or an up-and-coming disruptive service. That’s key to a market that works.

The open internet aids free speech, competition, innovation and user choice. We need more than the hollow promises and wishful thinking of your proposal – we must have enforceable rules. And net neutrality enforcement under non-Title II theories has been roundly rejected by the courts.

Politics is a terrible way to decide the future of the internet, and this proceeding increasingly has the makings of a spectator sport, not a serious debate. Protecting the internet should not be a political, or partisan, issue. The internet has long served as a forum where all voices are free to be heard – which is critical to democratic and regulatory processes. These suffer when the internet is used to feed partisan politics. This partisanship also damages the Commission’s strong reputation as an independent agency. We don’t believe that net neutrality, internet access, or the open internet is – or ever should be – a partisan issue. It is a human issue.

Net neutrality is most essential in communities that don’t count giant global businesses as their neighbors, like your hometown in Kansas. Without it, consumers and businesses will not be able to compete by building and utilizing new, innovative tools. Proceed carefully – and protect the entire internet, not just giant ISPs.

The post Welcome to San Francisco, Chairman Pai – We Depend on Net Neutrality appeared first on Open Policy & Advocacy.

Air Mozilla: Automating Web Accessibility Testing

A conclusion to my internship on automating web accessibility testing.

QMO: Firefox 56 Beta 8 Testday Results

As you may already know, last Friday – September 1st – we held a new Testday event for Firefox 56 Beta 8.

Thank you Fahima Zulfath A,  Surentharan, P Avinash Sharma and Surentharan R.A for helping us make Mozilla a better place.

It seems that, due to technical problems, the Bangladesh team did not receive a reminder for this event; we hope to see you at our next events.

Note that on September 15th we are organizing Firefox Developer Edition 56 Beta 12 Testday.

Results:
– several test cases executed for Form Autofill and Media Block Autoplay features;

Thanks for another successful testday 🙂

We hope to see you all in our next events, all the details will be posted on QMO!

The Mozilla Blog: A Copyright Vote That Could Change the EU’s Internet

On October 10, EU lawmakers will vote on a dangerous proposal to change copyright law. Mozilla is urging EU citizens to demand better reforms.


On October 10, the European Parliament Committee on Legal Affairs (JURI) will vote on a proposal to change EU copyright law.

The outcome could sabotage freedom and openness online. It could make filtering and blocking online content far more routine, affecting the hundreds of millions of EU citizens who use the internet everyday.

Dysfunctional copyright reform is threatening Europe’s internet

Why Copyright Reform Matters

The EU’s current copyright legal framework is woefully outdated. It’s a framework created when the postcard, and not the iPhone, was a reigning communication method.

But the EU’s proposal to reform this framework is in many ways a step backward. Titled “Directive on Copyright in the Digital Single Market,” this backward proposal is up for an initial vote on October 10 and a final vote in December.

“Many aspects of the proposal and some amendments put forward in the Parliament are dysfunctional and borderline absurd,” says Raegan MacDonald, Mozilla’s Senior EU Policy Manager. “The proposal would make filtering and blocking of online content the norm, effectively undermining innovation, competition and freedom of expression.”

Under the proposal:

  • If the most dangerous amendments pass, everything you put on the internet will be filtered, and even blocked. It doesn’t even need to be commercial — some proposals are so broad that even photos you upload for friends and family would be included.
  • Linking to and accessing information online is also at stake: extending copyright to cover news snippets will restrict our ability to learn from a diverse selection of sources. Sharing and accessing news online would become more difficult through the so-called “neighbouring right” for press publishers.
  • The proposal would remove crucial protections for intermediaries, and would force most online platforms to monitor all content you post — like Wikipedia, eBay, software repositories on Github, or DeviantArt submissions.
  • Only scientific research institutions would be allowed to mine text and datasets. This means countless other beneficiaries — including librarians, journalists, advocacy groups, and independent scientists — would not be able to make use of mining software to understand large data sets, putting Europe at a competitive disadvantage in the world.

Mozilla’s Role

In the weeks before the vote, Mozilla is urging EU citizens to phone their lawmakers and demand better reform. Our website and call tool — changecopyright.org — makes it simple to contact Members of European Parliament (MEPs).

This isn’t the first time Mozilla has demanded common-sense copyright reform for the internet age. Earlier this year, Mozilla and more than 100,000 EU citizens dropped tens of millions of digital flyers on European landmarks in protest. And in 2016, we collected more than 100,000 signatures calling for reform.

Well-balanced, flexible, and creativity-friendly copyright reform is essential to a healthy internet. Agree? Visit changecopyright.org and take a stand.

Note: This blog has been updated to include a link to the reform proposal.

The post A Copyright Vote That Could Change the EU’s Internet appeared first on The Mozilla Blog.

QMOFirefox Developer Edition 56 Beta 12, September 15th

Hello Mozillians!

We are happy to let you know that Friday, September 15th, we are organizing Firefox Developer Edition 56 Beta 12 Testday. We’ll be focusing our testing on the following new features: Preferences Search, CSS Grid Inspector Layout View, and Form Autofill.

Check out the detailed instructions via this etherpad.

No previous testing experience is required, so feel free to join us on #qa IRC channel where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better!

See you on Friday!

hacks.mozilla.orgMeta 2 AR Headset with Firefox

One of the biggest challenges in developing immersive WebVR experiences today is that immersion takes you away from your developer tools. With Meta’s new augmented reality headset, you can work on and experience WebVR content today without ever taking a headset on or off, or connecting developer tools to a remote device. Our friends at Meta have just released their Meta 2 developer kit and it works right out of the box with the latest 64-bit Firefox for Windows.

The Meta 2 is a tethered augmented reality headset with six degrees of freedom (6DOF). Unlike existing 3D mobile experiences like Google Cardboard, the Meta 2 can track both your orientation (three degrees of freedom) and your position (another three degrees). This means that not only can you look at 3D content, you can also move towards and around it. (3+3 = 6DOF).

In the video above, talented Mozilla engineer Kip Gilbert is editing the NYC Snowglobe demo with the A-Frame inspector on his desktop. After he edits the project, he just lifts his head up to see the rendered 3D scene in the air in front of him.  Haven’t tried A-Frame yet? It’s the easiest way for web developers to build interactive 3D apps on the web. Best of all, Kip didn’t have to rewrite the snowglobe demo to support AR. It just works! Meta’s transparent visor combined with Firefox enables this kind of seamless 3D development.

The Meta 2 is stereoscopic and also has a 90-degree field of view, creating a more immersive experience on par with a traditional VR headset. However, because of the see-through visor, you are not isolated from the real world. The Meta 2 attaches to your existing desktop or laptop computer, letting you work at your desk without obstructing your view, then just look up to see virtual windows and objects floating around you.

In this next video, Kip is browsing a Sketchfab gallery. When he sees a model he likes he can simply look up to see the model live in his office. Thanks to the translucent visor optics, anything colored black in the original 3D scene automatically becomes transparent in the Meta 2 headset.

Meta 2 is designed for engineers and other professionals who need to both work at a computer and interact with high performance visualizations like building schematics or a detailed 3D model of a new airplane. Because the Meta 2 is tethered it can use the powerful GPU in your desktop or laptop computer to render high definition 3D content.

Currently, the Meta team has released Steam VR support and is working to add support for hands as controllers. We will be working with the Meta engineers to transform their native hand gestures into Javascript events that you can interact with in code. This will let you build fully interactive high performance 3D apps right from the comfort of your desktop browser. We are also using this platform to help us develop and test proposed extensions for AR devices to the existing WebVR specification.

You can get your own Meta 2 developer kit and headset on the Meta website. WebVR is supported in the latest release version of Firefox for Windows, with other platforms coming soon.
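If you want to check whether a visitor’s browser can run experiences like this, a minimal feature-detection sketch using the WebVR 1.1 API might look like the following (the log messages are just illustrative):

if (navigator.getVRDisplays) {
  // WebVR 1.1 is available; see which displays (if any) are connected.
  navigator.getVRDisplays().then(function (displays) {
    if (displays.length > 0) {
      console.log('VR display found: ' + displays[0].displayName);
    } else {
      console.log('WebVR is available, but no headset is connected.');
    }
  });
} else {
  console.log('WebVR is not supported in this browser.');
}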

Mozilla Add-ons BlogLast chance to migrate your legacy user data

If you are working on transitioning your add-on to use the WebExtensions API, you have until about mid-October (a month before Firefox 57 lands, to allow time for testing and migration) to port your legacy user data using an Embedded WebExtension.

This is an important step in giving your users a smooth transition because they can retain their custom settings and preferences when they update to your WebExtensions version. After Firefox 57 reaches the release channel on November 13, you will no longer be able to port your legacy data.

If you release your WebExtensions version after the release of Firefox 57, your add-on will be enabled again for your users, and they will still keep their settings if you ported the data beforehand. This is because legacy add-ons are disabled in Firefox 57 and WebExtensions APIs cannot read legacy user settings. In other words, even if your WebExtensions version won’t be ready until after Firefox 57, you should still publish an Embedded WebExtension before Firefox 57 in order to retain user data.
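As a rough sketch of what that hand-off can look like (the message name and the readLegacySettings helper are placeholders, not from this post): the legacy side answers a message with its old settings, and the embedded WebExtension stores them with browser.storage.local.

// bootstrap.js (legacy side): start the embedded WebExtension and
// answer its one-time request for the old settings.
function startup({webExtension}) {
  webExtension.startup().then(({browser}) => {
    browser.runtime.onMessage.addListener((message, sender, sendReply) => {
      if (message === 'export-legacy-settings') {
        sendReply(readLegacySettings()); // hypothetical helper that reads the old prefs
      }
    });
  });
}

// background.js (embedded WebExtension): ask once, then persist the
// settings where the future WebExtensions version can read them.
browser.runtime.sendMessage('export-legacy-settings').then((settings) => {
  return browser.storage.local.set(settings);
});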

When updating to your new version, we encourage you to adopt these best practices to ensure a smooth transition for your users.

The post Last chance to migrate your legacy user data appeared first on Mozilla Add-ons Blog.

Blog of DataFirefox data platform & tools update, Q2 2017

The Firefox data platform and tools teams work on our core Telemetry system, the data pipeline, providing core datasets and maintaining some central data viewing tools.

What’s new in the last few months?

A lot of the work in recent months went into reducing latency, supporting experimentation, and providing a more reliable experience of the data platform.

On the data collection side, we have significantly improved reporting latency from Firefox 55, with preliminary results from Beta showing we receive 95% of the “main” ping within 8 hours (compared to previously over 90 hours). Curious for more detail? #1 and #2 should have you covered.

We also added a “new-profile” ping, which gives a clear and timely signal for new clients.

There is a new API to record active experiments in Firefox. This allows annotating experiments or interesting populations in a standard way.

The record_in_processes field is now required for all histograms. This removes ambiguity about which processes they are recorded in.
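For illustration only (the probe name and values are made up), a histogram definition with the now-required field looks roughly like this when written out as a JavaScript object:

// Sketch of a Histograms.json-style entry; record_in_processes lists
// every process the probe may be recorded in.
const EXAMPLE_HISTOGRAM = {
  "EXAMPLE_TAB_OPEN_MS": {
    "record_in_processes": ["main", "content"],
    "expires_in_version": "60",
    "kind": "exponential",
    "high": 10000,
    "n_buckets": 50,
    "description": "Time (ms) to open a new tab."
  }
};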

The data documentation moved to a new home: docs.telemetry.mozilla.org. Are there gaps in the documentation you want to see filled? Let us know by filing a bug.

For datasets, we added telemetry_new_profile_parquet, which makes the data from the “new-profile” ping available.

Additionally, the main_summary dataset now includes all scalars and uses a whitelist for histograms, making it easy to add them. Important fields like active_ticks and Quantum release criteria were also added and backfilled.

For custom analysis on ATMO, cluster lifetimes can now be extended self-serve in the UI. The stability of scheduled jobs also saw major improvements.

There were first steps towards supporting Zeppelin notebooks better; they can now be rendered as Markdown in Python.

The data tools work is focused on making our data available in a more accessible way. Here, our main tool re:dash saw multiple improvements.

Large queries should no longer show the slow script dialog. A new Athena data source was introduced, which contains a subset of our Telemetry-based derived datasets. This brings huge performance and stability improvements over Presto.

Finally, scheduled queries can now have an expiration date.

What is up next?

For the next few months, interesting projects in the pipeline include:

  • The experiments viewer & pipeline, which will make it much easier to run pref-flipping experiments in Firefox.
  • Recording new probes from add-ons into the main ping (events, scalars, histograms).
  • We will define and monitor basic guarantees for the Telemetry client data (like reporting latency ranges).
  • A re-design of about:telemetry is currently on-going, with more improvements on the way.
  • A first version of Mission Control will be available, a tool for more real-time release monitoring.
  • Analyzing the results of the Telemetry survey (thanks everyone!) to inform our planning.
  • Extending the main_summary dataset to include all histograms.
  • Adding a pre-release longitudinal dataset, which will include all measures on those channels.
  • Looking into additional options to decrease the Firefox data reporting latency.

How to contact us.

Please reach out to us with any questions or concerns.

Web Application SecurityMozilla Releases Version 2.5 of Root Store Policy

Recently, Mozilla released version 2.5 of our Root Store Policy, which continues our efforts to improve standards and reinforce public trust in the security of the Web. We are grateful to all those in the security and Certificate Authority (CA) communities who contributed constructively to the discussions surrounding the new provisions.

The changes of greatest note in version 2.5 of our Root Store Policy are as follows:

  • CAs are required to follow industry best practice for securing their networks, for example by conforming to the CA/Browser Forum’s Network Security Guidelines or a successor document.
  • CAs are required to use only those methods of domain ownership validation which are specifically documented in the CA/Browser Forum’s Baseline Requirements version 1.4.1.
  • Additional requirements were added for intermediate certificates that are used to sign certificates for S/MIME. In particular, such intermediate certificates must be name constrained in order to be considered technically-constrained and exempt from being audited and disclosed on the Common CA Database.
  • Clarified that point-in-time audit statements do not replace the required period-of-time assessments. Mozilla continues to require full-surveillance period-of-time audits that must be conducted annually, and successive audit periods must be contiguous.
  • Clarified the information that must be provided in each audit statement, including the distinguished name and SHA-256 fingerprint for each root and intermediate certificate in scope of the audit.
  • CAs are required to follow and be aware of discussions in the mozilla.dev.security.policy forum, where Mozilla’s root program is coordinated, although they are not required to participate.
  • CAs are required at all times to operate in accordance with the applicable Certificate Policy (CP) and Certificate Practice Statement (CPS) documents, which must be reviewed and updated at least once every year.
  • Our policy on root certificates being transferred from one organization or location to another has been updated and included in the main policy. Trust is not transferable; Mozilla will not automatically trust the purchaser of a root certificate to the level it trusted the previous owner.

The differences between versions 2.5 and 2.4.1 may be viewed on Github. (Version 2.4.1 contained exactly the same normative requirements as version 2.4 but was completely reorganized.)

As always, we re-iterate that participation in Mozilla’s CA Certificate Program is at our sole discretion, and we will take whatever steps are necessary to keep our users safe. Nevertheless, we believe that the best approach to safeguard that security is to work with CAs as partners, to foster open and frank communication, and to be diligent in looking for ways to improve.

Mozilla Security Team

The post Mozilla Releases Version 2.5 of Root Store Policy appeared first on Mozilla Security Blog.

Mozilla Add-ons BlogTell your users what to expect in your WebExtensions version

The migration to WebExtensions APIs is picking up steam, with thousands of compatible add-ons now available on addons.mozilla.org (AMO). To ensure a good experience for the growing number of users whose legacy add-ons have been updated to WebExtensions versions, we’re encouraging developers to adopt the following best practices.

(If your new version has the same features and settings as your legacy version, your users should get a seamless transition once you update your listing, and you can safely ignore the rest of this post.)

If your new version has different features, is missing legacy features, or requires additional steps to recover custom settings, please do one or both of the following.

Update your AMO listing description

If your new version did not migrate with all of its legacy features intact, or has different features, please let your users know in the “About this Add-on” section of your listing.

If your add-on is losing some of its legacy features, let your users know if it’s because they aren’t possible with the WebExtensions API, or if you are waiting on bug fixes or new APIs to land before you can provide them. Include links to those bugs, and feel free to send people to the forum to ask about the status of bug fixes and new APIs.

Retaining your users’ settings after upgrade makes for a much better experience, and there’s still time to do it using Embedded WebExtensions. But if this is not possible for you and there is a way to recover them after upgrade, please include instructions on how to do that, and refer to them in the Version notes. Otherwise, let your users know which settings and preferences cannot be recovered.

Add an announcement with your update

If your new version is vastly different from your legacy version, consider showing a new tab to your users when they first get the update. It can be the same information you provide in your listing, but it will be more noticeable if your users don’t have to go to your listing page to see it. Be sure to show it only on the first update so it doesn’t annoy your users.

To do this, you can use the runtime.onInstalled API which can tell you when an update or install occurs:

function update(details) {
  // Open the release notes page on both a fresh install and an update.
  if (details.reason === 'install' || details.reason === 'update') {
    browser.tabs.create({url: 'update-notes.html'});
  }
}

browser.runtime.onInstalled.addListener(update);

This will open the page update-notes.html in the extension when the install or update occurs.

For greater control, the runtime.onInstalled event also lets you know when the user updated and what their previous version was so you can tailor your release notes.
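A minimal sketch of that (the page name and query parameter are illustrative): only open the notes on updates, and pass the previous version along so the page can tailor what it shows.

function handleUpdate(details) {
  // details.previousVersion is only provided when details.reason is 'update'.
  if (details.reason === 'update') {
    browser.tabs.create({
      url: 'update-notes.html?from=' + encodeURIComponent(details.previousVersion)
    });
  }
}

browser.runtime.onInstalled.addListener(handleUpdate);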

Thank you

A big thanks to all the developers who have put in the effort to migrate to the WebExtensions API. We are here to support you, so please reach out if you need help.

The post Tell your users what to expect in your WebExtensions version appeared first on Mozilla Add-ons Blog.

Mozilla L10NL10n Report: September Edition

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet.

Welcome!

New localizers

Are you a locale leader and want us to include new members in our upcoming reports? Contact us!

New community/locales added

In the past weeks we’ve added several languages to Pontoon, in particular from the Mozilla Nativo project:

  • Mixteco Yucuhiti (meh)
  • Mixtepec Mixtec (mix)
  • Quechua Chanka (quy)
  • Quichua (qvi)

We’ve also started localizing Firefox in Interlingua (ia), while Shuar (jiv) will be added soon for Firefox for Android.

New content and projects

What’s new or coming up in Firefox desktop

A few deadlines are approaching:

  • September 13 is the last day to make changes to Beta projects.
  • September 20 is merge day, and all strings move from Central to Beta. There are currently a few discussions about moving this date, but nothing has been decided yet. We’ll communicate through all channels if anything changes.

Photon in Nightly is almost ready for Firefox 57; only a few small changes need to land for the onboarding experience. Please make sure to test your localization on clean profiles, and ask your friends to test and report bugs like mistranslations, strings not displayed completely in the interface, etc.

What’s new or coming up in Test Pilot

Firefox Send holds the record for the highest number of localizations in the Test Pilot family (together with SnoozeTabs), with 38 languages completely translated.

For those interested in more technical details, Pontoon is now committing localizations for the Test Pilot project in a l10n branch. This also means that the DEV server URL has changed. Note that the link is also available in the project resources in Pontoon.

What’s new or coming up in mobile
  • Have you noticed that Photon is slowly but surely arriving on Firefox for Android Nightly version? The app is getting a visual refresh and things are looking bright and shiny! There’s a new onboarding experience, icons are different, the awesome bar has never been this awesome, tabs have a new look… and the whole experience is much smoother already! Come check it out.
  • Zapoteco and Belarusian are going to make it to release with the upcoming Firefox Android 56 release.
What’s new or coming up in web projects
  • Mozilla.org:
    • This past month, we continued the trend of creating new pages to replace the old ones, with a new layout and color scheme. We will have several new pages in the works in September. Some are customized for certain markets and others will have two versions to test the markets.
    • Thanks to all the communities that have completed the new Firefox pages released for localization in late June. The pages will be moved to the new location at Firefox/… replacing the obsolete pages.
    • Germany is the focus market, with a few more customized pages than other locales.
    • New pages are expected for mobile topic in September and in early October. Check web dashboard and email communications for pending projects.
  • Snippets: We will have a series of snippets campaigns starting in early September, targeting users of many Mozilla products.
  • MOSS: the landing page was made available in Hindi, along with a press release, in time for the partnership announcement on August 31.
  • Legal: the Firefox Privacy Notice will be rewritten. Once localization is complete in a few locales, we will invite communities to review it.
What’s new or coming up in Foundation projects
  • Our call tool at changecopyright.org is live! Many thanks to everyone who participated in the localization of this campaign, let’s call some MEPs!
  • The IoT survey has been published, and adding new languages plus snippets made a huge difference. You can learn more in the accomplishments section below.
What’s new or coming up in Pontoon
  • Check out the brand new Pontoon Tools Firefox extension, which you can install from AMO! It brings notifications from Pontoon directly to your Firefox, but that’s just the beginning. It also shows you your team’s statistics and allows you to search for strings straight from Mozilla.org and SUMO. A huge shout out to its creator Michal Stanke, a long time Pontoon user and contributor!
  • We changed the review process by introducing the ability to reject suggestions instead of deleting them. Each suggestion can now be approved, unreviewed or rejected. This will finally make it easy to list all suggestions needing a review using the newly introduced Unreviewed Suggestions filter. To make the filter usable out of the box, all existing suggestions were automatically rejected if an approved translation was available that had been approved after the suggestion was submitted. The final step in making unreviewed suggestions truly discoverable is to show them in dashboards. Thanks to Adrian, who only joined the Pontoon team in July and already managed to contribute this important patch!
  • The Pontoon homepage will now redirect you to the team page you contribute to the most. You can also pick a different team page or the default Pontoon homepage in your user settings. Thanks to Jarosław for the patch!
  • Editable team info is here! If you have manager permission, you can now edit the content of the Info tab on your team page:

  • Most teams use this place to give some basic information to newcomers. Thanks to Axel, who started the effort of implementing this feature and Emin, who took over!
  • The notification popup (opened by clicking on the bell icon) is no longer limited to unread notifications. It now displays the latest 7 notifications, both read and unread. If there are more than 7 unread notifications, all of them are displayed.
  • Sync with version control systems is now 10 times faster and uses 12 times less computing power. Average sync time dropped from around 20 minutes to less than 2.
  • For teams that localize all projects in Pontoon, we no longer pull Machinery suggestions from Transvision, because they are already included in Pontoon’s internal translation memory. This has a positive impact on Machinery performance and on overall string navigation performance. Transvision is still enabled for the following locales: da, de, es-ES, it, ja, nl, pl.
  • Thanks to Michal Vašíček, Pontoon logo now looks much better on HiDPI displays.
  • Background issues have been fixed on in-context pages with a transparent background like the Thimble feature page.
  • What’s coming up next? We’re working on making searching and filtering of strings faster, which will also allow for loading, searching and filtering of strings across projects. We’re also improving the experience of localizing FTL files, adding support for using Microsoft Terminology during the translation process and adding API support.
Newly published localizer facing documentation
  • Community Marketing Kit: showcases ways to leverage existing marketing content, use approved graphic assets, and utilize social channels to market Mozilla products in your language.
  • AMO: details the product development cycle that impacts localization. The AMO frontend will be revamped in Q4, and the documentation will be updated accordingly.
  • Snippets: illustrates the process of creating locale-relevant snippets, or launching snippets in languages that are not on the default snippet locale list.
  • SUMO: covers the process to localize the product, which is different from localizing the articles.
Events
  • Want to showcase an event coming up that your community is participating in? Reach out to any l10n-driver and we’ll include that (see links to emails at the bottom of this report)

 

Accomplishments

We would like to share some good results

Responses by country (not locale) for the 32,000 responses to the privacy survey run by the Advocacy team back in March, localized in French and German:

It was good, but now let’s compare that with the responses by country for our IoT survey How connected are you?, which received over 190,000 responses! We can see that the survey performed better in France, Germany and Italy than in the US. Spanish is underrepresented because it’s spread across several countries, but we expect the participation to be similar. These major differences are explained by the fact that we added support for three more languages, and promoted it with snippets in Firefox. This will give us way more diverse results, so thanks for your hard work everyone! This also helped get new people subscribed to our newsletter, which is really important for our advocacy activities, to fuel a movement for a healthier Internet.
The survey results might also be reused by scientists and included in the next edition of the Internet Health Report. How cool is that? Stay tuned for the results.

 

Friends of the Lion

Image by Elio Qoshi

  • Kabyle (kab) organized Kab Mozilla Days on August 18-19 in Algeria, discussing localization, the Mozilla mission, open source, and the promotion of indigenous languages.
  • The Triqui (trs) community has made significant progress since the Asunción workshop; Triqui is now officially supported on mozilla.org. Congratulations!!
  • Wolof (wo): Congrats to Ibra and Ibra (!) who have been keeping up with Firefox for Android work. They have now been added to multi-locale builds, which means they reach release at the same time as Firefox 57! Congrats guys!
  • Eduardo (eo): thanks for catching the mistake in a statement that appeared on mozilla.org. The paragraph has since been corrected, published and localized.
  • Manuel (azz) from Spain and Misael (trs) from Mexico met for the first time at the l10n workshop in Asunción, Paraguay. They bonded instantly! Misael will introduce his friends who are native speakers of Highland Puebla Nahuatl, the language Manuel is working on all by himself. He can’t wait to be connected with these professionals, to collaborate, and promote the language through Mozilla products.

 

Know someone in your l10n community who’s been doing a great job and should appear here? Contact one of the l10n-drivers and we’ll make sure they get a shout-out (see list at the bottom)!

Useful Links

Questions? Want to get involved?

 

Did you enjoy reading this report? Let us know how we can improve by reaching out to any one of the l10n-drivers listed above.

 

Open Policy & AdvocacyMaking Privacy More Transparent

How do you make complex privacy information easily accessible and understandable to users?  At Mozilla, we’ve been thinking through this for the past several months from different perspectives: user experience, product management, content strategy, legal, and privacy.  In Firefox 56 (which releases on September 26), we’re trying a new idea, and we’d love your feedback.

Many companies, including Mozilla, present a Privacy Notice to users prior to product installation.  You’ll find a link to the Firefox Privacy Notice prominently displayed under the Firefox download button on our websites.

Our testing showed that less than 1% of users clicked the link to view the “Firefox Privacy Notice” before downloading Firefox.  Another source of privacy information in Firefox is a notification bar displayed within the first minute of a new installation.  We call this the “Privacy Info Bar.”

User testing showed this was a confusing experience for many users, who often just ignored it.  Users who clicked the button ended up in the advanced settings of Firefox.  Once there, some people made unintentional changes that impacted browser performance without understanding the consequences.  And because this confusing experience occurred within the first few minutes of using a brand new browser, it took away from the primary purpose of installing a new browser: to navigate the web.

We know that many Firefox users care deeply about privacy, and we wanted to find a way to increase engagement with our privacy practices.  So we went back to the drawing board to provide users with more meaningful interactions. And after further discovery and iteration, our solution, which we’re implementing in Firefox 56, is a combination of several product and experience changes.  Here are our improvements:

  1. Displaying the Privacy Notice as the second tab of Firefox for all new installs;
  2. Reformatting and improving the Firefox Privacy Notice; and
  3. Improving the language in the preferences menu.

We reformatted the Privacy Notice to make it more obvious what data Firefox uses and sends to Mozilla and others.  Not everyone uses the same features or cares about the same things, so we layered the notice with high-level data topics and expanders to let you dig into details based on your interest.  All of this is now on the second tab of Firefox after a new installation, so it’s much more accessible and user-friendly.  The Privacy Info Bar became redundant with these changes, so we removed it.

We also improved the language in the Firefox preferences menu to make data collection and choices more clear to users.  We also used the same data terms in the preferences menu and privacy notice that our engineers use internally for data collection in Firefox.

These are just a few changes we made recently, but we are continuously seeking innovative ways to make the privacy and data aspects of our products more transparent.  Internally at Mozilla, data and privacy are topics we discuss constantly.  We challenge our engineers and partners to find alternative approaches to solving difficult problems with less data.  We have review processes to ensure the end-result benefits from different perspectives.  And we always consider issues from the user perspective so that privacy controls are easy to find and data practices are clear and understandable.

You can join the conversation on GitHub, or by commenting on our governance mailing list.

Special thanks to Michelle Heubusch, Peter Dolanjski, Tina Hsieh, Elvin Lee, and Brian Smith for their invaluable contributions to our revised privacy notice structure.

The post Making Privacy More Transparent appeared first on Open Policy & Advocacy.

Air MozillaThe Joy of Coding - Episode 111

The Joy of Coding - Episode 111 mconley livehacks on real Firefox bugs while thinking aloud.

hacks.mozilla.orgI built something with A-Frame in 2 days (and you can too)

A few months ago, I had the opportunity to try out several WebVR experiences for the first time, and I was blown away by the possibilities. Using just a headset and my Firefox browser, I was able to play games, explore worlds, paint, create music and so much more. All through the open web. I was hooked.

A short while later, I was introduced to A-Frame, a web framework for building virtual reality experiences. The “Hello World” demo is a mere 15 lines of code. This blew my mind. Building an experience in Virtual Reality seems like a task reserved for super developers, or that guy from Mr. Robot. After glancing through the A-Frame documentation, I realized that anyone with a little front-end experience can create something for Virtual Reality…even me – a marketing guy who likes to build websites in his spare time.

My team had an upcoming presentation to give. Normally we would create yet another slide deck. This time, however, I decided to give A-Frame a shot, and use Virtual Reality to tell our story and demo our work.

Within two days I was able to teach myself how to build this (slightly modified for sharing purposes). You can view the GitHub repo here.

The result was a presentation that was fun and unique. People were far more engaged in Virtual Reality than they would have been watching us flip through slides on a screen.

This isn’t a “how-to get started with A-Frame” post (there are plenty of great resources for that). I did, however, find solutions for a few “gotchas” that I’ll share below.

Walking through walls

One of the first snags I ran into was that the camera would pass through objects and walls. After some research, I came across a-frame-extras. It includes an add-on called “kinematic-body” that helped solve this issue for me.

Controls

A-frame extras also has helpers for controls. It gave me an easy way to implement controls for keyboard, mouse, touchscreen, etc.

Generating rooms

It didn’t take me long to figure out how to create and position walls to create a room. I didn’t just want a room though. I wanted multiple rooms and hallways. Manually creating them would take forever. During my research I came across this post, where the author created a maze using an array of numbers. This inspired me to generate my own map using a similar method:


const map = {
  "data": [
    0, 0, 0, 0, 0, 0, 0, 0, 2, 2, 2, 0, 0, 0, 0, 0, 0, 0, 0,
    0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0,
    0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0,
    0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0,
    0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0,
    0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0,
    0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0,
    0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0,
    0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0,
    0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0,
    0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0,
    0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0,
    0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0,
    0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0,
    0, 4, 4, 4, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0,
    4, 0, 0, 0, 4, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0,
    4, 0, 0, 0, 4, 4, 4, 1, 0, 8, 0, 0, 0, 0, 0, 1, 0, 0, 0,
    4, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0,
    0, 4, 4, 4, 4, 4, 4, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0
  ],
  "height":19,
  "width":19
}

0 = no walls
1 – 4 = walls with various textures
8 = user start position
9 = log position to console

This would allow me to try different layouts, start at different spots around the map, and quickly get coordinates for positioning items and rooms (you’ll see why this is useful below). You can view the rest of the code here.
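To give an idea of how such a map can be turned into geometry, here is a rough sketch (the cell size, wall height and entity setup are all illustrative, not the author’s actual code):

// Walk the flat array; derive x/z from the index, drop a wall for values 1-4,
// and log the start position for value 8.
var sceneEl = document.querySelector('a-scene');
var CELL_SIZE = 2; // assumed spacing between cells, in meters

map.data.forEach(function (cell, index) {
  var x = (index % map.width) * CELL_SIZE;
  var z = Math.floor(index / map.width) * CELL_SIZE;

  if (cell >= 1 && cell <= 4) {
    var wall = document.createElement('a-box');
    wall.setAttribute('position', x + ' 1.5 ' + z);
    wall.setAttribute('height', '3');
    wall.setAttribute('material', 'src: #wall-texture-' + cell); // one texture per wall type
    sceneEl.appendChild(wall);
  } else if (cell === 8) {
    console.log('Start position:', x, z);
  }
});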

Duplicating rooms

Once I created a room, I wanted to recreate a variation of this room at different locations around the map. This is where I learned to embrace the <a-entity> object. When you use <a-entity> as a container, it allows entities inside the container to be positioned relative to that parent entity object. I found this post about relative positioning to be helpful in understanding the concept. This allowed me to duplicate the code for a room, and simply provide new position coordinates for the parent entity.
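A small sketch of that idea in JavaScript (ids, sizes and positions are made up): build one room as a parent entity, then clone it and only change the parent’s position.

var sceneEl = document.querySelector('a-scene');

// One room: walls are positioned relative to the parent entity.
var room = document.createElement('a-entity');
room.setAttribute('id', 'room-1');
room.setAttribute('position', '0 0 0');

var wall = document.createElement('a-box');
wall.setAttribute('position', '-2 1.5 0'); // relative to the room, not the scene
wall.setAttribute('width', '0.1');
wall.setAttribute('height', '3');
wall.setAttribute('depth', '4');
room.appendChild(wall);
sceneEl.appendChild(room);

// A duplicate room: same children, different parent position.
var room2 = room.cloneNode(true);
room2.setAttribute('id', 'room-2');
room2.setAttribute('position', '10 0 5');
sceneEl.appendChild(room2);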

Conclusion

I have no doubt that there are better and more efficient ways to create something like this, but the fact that a novice like myself was able to build something in just a couple of days speaks volumes to the power of A-Frame and WebVR. The A-Frame community also deserves a lot of credit. I found libraries, code examples, and blog posts for almost every issue and question I had.

Now is the perfect time to get started with WebVR and A-Frame, especially now that it’s supported for anyone using the latest version of Firefox on Windows. Check out the website, join the community, and start building.

The Mozilla BlogMozilla and the Washington Post Are Reinventing Online Comments

To engage readers, build community, and strengthen journalism, Mozilla’s open-source commenting platform will be integrated across washingtonpost.com

 

Digital journalism has revolutionized how we engage with the news, from the lightning speed at which it’s delivered to different formats on offer.

But the comments section beneath that journalism? It’s… broken. Trolls, harassment, enmity and abuse undermine meaningful discussion and push many people away. Many major newsrooms are removing their comments. Many new sites are launching without them.

Instead, newsrooms are directing interaction and engagement to social media. As a result, tools are limited, giant corporations control the data, and news organizations cannot build a direct relationship with their audience. 

At Mozilla, we’re not giving up on online comments. We believe that engaging readers and building community around the news strengthens not just journalism, but also open society. We believe comments are a fundamental part of the decentralized web.

Mozilla has been researching, testing, and building software in this area since 2015. Today, our work is taking a huge step forward as the Washington Post integrates Talk — Mozilla’s open-source commenting platform — across washingtonpost.com.

Talk is currently deployed across the Washington Post’s Politics, Business, and The Switch (technology) sections, and will roll out to more sections in the coming weeks.

Talk is open-source commenting software developed by Mozilla.

What is Talk?

Talk is developed by The Coral Project, a Mozilla creation that builds open-source tools to make digital journalism more inclusive and more engaging, both for audience members and journalists. Starting this summer, Talk will also be integrated across Fairfax Media’s websites in Australia, including the Sydney Morning Herald and The Age. One of The Coral Project’s other tools, Ask, is currently being used by 13 newsrooms, including the Miami Herald, Univision, and PBS Frontline.

“Trust in journalism relies on building effective relationships with your audience,” says Andrew Losowsky, project lead of The Coral Project. “Talk rethinks how moderation, comment display and conversation can function on news websites. It encourages more meaningful interactions between journalists and the people they serve.”

“Talk is informed by a huge amount of research into online communities,” Losowsky adds. “We’ve commissioned academic studies and held workshops around the world to find out what works, and also published guides to help newsrooms change their strategies. We’ve interviewed more than 300 people from 150 newsrooms in 30 countries, talking to frequent commenters, people who never comment, and even trolls. We’ve learned how to turn comments — which have so much potential — into a productive space for everyone.”

“Commenters and comment viewers are among the most loyal readers The Washington Post has,” said Greg Barber, The Post’s director of newsroom product. “Through our work with Mozilla, The New York Times, and the Knight Foundation in The Coral Project, we’ve invested in a set of tools that will help us better serve them, powering fruitful discussion and debate for years to come.”

The Coral Project was created thanks to a generous grant from the Knight Foundation and is currently funded by the Democracy Fund, the Rita Allen Foundation, and Mozilla. It also offers hosting and consulting services for newsrooms who need support in running their software.

Here’s what makes Talk different

It’s filled with features that improve interactions, including functions that show the best comments first, ignore specific users, find great commenters, give badges to staff members, filter out unreliable flaggers, and offer a range of audience reactions.

You own your data. Unlike the most popular systems, every organization using Talk runs its own version of the software, and keeps its own data. Talk doesn’t contain any tracking, or digital surveillance. This is great for journalistic integrity, good for privacy, and important for the internet.

It’s fast. Talk is small — about 300kb — and lightweight. Only a small number of comments initially load, to keep the page load low. New comments and reactions update instantaneously.

It’s flexible. Talk uses a plugin architecture, so each newsroom can make their comments act in a different way. Plugins can be written by third parties — the Washington Post has already written and open sourced several — and applied within the embed code, in order to change the functionality for particularly difficult topics.

It’s easy to moderate. Based on feedback from moderators at 12 different companies, we’ve created a simple moderation system with keyboard shortcuts and a feature-rich configuration.

It’s great for technologists. Talk is fully extensible with a RESTful and Graph API, and a plugin architecture that includes webhooks. The CSS is also fully customizable.

It’s 100% free. The code is public and available for you to download and run. And if you want us to help you host or integrate Talk into your site, we offer paid services that support the project.

Learn more about The Coral Project.

The post Mozilla and the Washington Post Are Reinventing Online Comments appeared first on The Mozilla Blog.

SUMO BlogArmy of Awesome’s Retirement and Mozilla Social Support’s Participation Outreach

Twitter is a social network used by millions of users around the world for many reasons – and this includes helping Firefox users when they need it. If you have a Twitter account, like to help people, like to share your knowledge, and want to be a Social Support member for Firefox – join us!

We aim to have the engine that has been powering Army of Awesome (AoA) officially disabled before the end of 2017. To continue with the incredible work that has been accomplished for several years, we plan a new approach to supporting users on Twitter using TweetDeck.

TweetDeck is a web tool made available by Twitter allowing you to post to your timeline and manage your user profile within the social network, additionally boasting several features and filters to improve the general experience.

Through the application of filters in TweetDeck you can view comments, questions, and problems of Firefox users. Utilizing simple, but at the same time rather advanced tools, we can offer quality support to the users right where they are.

If you are interested, please take a look at the project’s guidelines, which take careful note of the successes and failures of past programs. Once you are filled with the amazing-ness of the guidelines, fill out this form with your email and we will send you more information about everything you need to know about the program’s shared purpose. After completing the form you can start configuring TweetDeck to display the issues to be answered and the users to be helped.

We are sure this will be an incredible experience for all of you who are passionate about Mozilla and Twitter – and we can hardly wait to see the great results of your actions!

Air MozillaWebdev Beer and Tell: September 2017, 05 Sep 2017

Webdev Beer and Tell: September 2017 Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on in...

Mozilla Add-ons BlogSeptember’s Featured Extensions


Pick of the Month: Search Image

by Didier Lafleur
Highlight any text and perform a Google image search with a couple clicks.

“I’ve been looking for something like this for years, to the point I wrote my own script. This WORKS for me.”

Featured: Cookie AutoDelete

by Kenny Do
Automatically delete stagnant cookies from your closed tabs. Offers whitelist capability, as well.

“Very good replacement for Self-Destructing Cookies.”

Featured: Tomato Clock

by Samuel Jun
A super simple but effective time management tool. Use Tomato Clock to break your work bursts into meaningful 25-minute “tomato” intervals.

“A nice way to track my productivity for the day.”

Featured: Country Flags & IP Whois

by Andy Portmen
This extension will display the country flag of a website’s server location. Simple, informative.

“It does what it should.”

Nominate your favorite add-ons

Featured add-ons are selected by a community board made up of add-on developers, users, and fans. Board members change every six months. Here’s further information on AMO’s featured content policies.

If you’d like to nominate an add-on for featuring, please send it to amo-featured [at] mozilla [dot] org for the board’s consideration. We welcome you to submit your own add-on!

The post September’s Featured Extensions appeared first on Mozilla Add-ons Blog.

Air MozillaBay Area Rust Meetup August 2017

Bay Area Rust Meetup August 2017 https://www.meetup.com/Rust-Bay-Area/ Siddon Tang from PingCAP will be speaking about Futures and gRPC in Rust. Sean Leffler will be talking about Rust's Turing-complete type system.

The Mozilla BlogStatement on U.S. DACA Program

We believe that the young people who benefit from the Deferred Action for Childhood Arrivals (DACA) program deserve the opportunity to take their full and rightful place in the U.S. The possible changes to DACA that were recently reported would remove all benefits and force people out of the U.S. – that is simply unacceptable.

Removing DREAMers from classrooms, universities, internships and workforces threatens to put the very innovation that fuels our technology sector at risk. Just as we said with previous Executive Orders on Immigration, the freedom for ideas and innovation to flow across borders is something we strongly believe in as a tech company. More importantly, it is something we know is necessary to fulfill our mission to protect and advance the internet as a global public resource that is open and accessible to all.

We can’t allow talent to be pushed out or forced into hiding. We also shouldn’t stand by and allow families to be torn apart. More importantly, as employers, industry leaders and Americans — we have a moral obligation to protect these children from ill-willed policies and practices. Our future depends on it.

We want DREAMers to continue contributing to this country’s future and we do not want people to live in fear.  We urge the Administration to keep the DACA program intact. At the same time, we urge leaders in government to enact a bipartisan permanent solution, one that will allow these bright minds to prosper in the country we know and love.

The post Statement on U.S. DACA Program appeared first on The Mozilla Blog.

Air MozillaIntern Presentations: Round 7: Thursday, August 31st

Intern Presentations: Round 7: Thursday, August 31st Intern Presentations 6 presenters Time: 1:00PM - 2:30PM (PDT) - each presenter will start every 15 minutes 6 SF

Mozilla VR BlogglTF Exporter in three.js and A-Frame

A brief introduction


When creating WebVR experiences, developers usually face a common problem: it’s hard to find assets other than just basic primitives. There are several 3D packages for generating custom objects and scenes, each with its own custom file format, and although they give you the option to export to a common file format like Collada or OBJ, each exporter saves the information in a slightly different way. Because of these differences, when we try to import these files into the 3D engine we are using, we often find that the result we see on screen is quite different from what we created initially.


The Khronos Group created the glTF 3D file format to have an open, application-agnostic and well-defined structure that can be imported and exported in a consistent way. The resulting file is smaller than most of the available alternatives, and it’s also optimized for real-time applications: it’s fast to read since we don’t need to consolidate the data. Once we’ve read the buffers we can push them directly to the GPU.
The main features that glTF provides, along with a 3D file format comparison, can be found in this article by Juan Linietsky.

A few months ago feiss wrote an introduction to the glTF workflow he used to create the assets for our A-Saturday-Night demo.
Many things have improved since then. The glTF Blender exporter is now stable and has glTF 2.0 support. The same goes for three.js and A-Frame: both have much better support for 2.0.
Now, most of the pain he experienced converting from Blender to Collada and then to glTF is gone, and we can export directly to glTF from Blender.

glTF is here to stay and its support has grown widely in recent months; it is available in most of the 3D web engines and applications out there, like three.js, babylonjs, cesium, sketchfab, blocks...
The following video from the first glTF BOF (held at Siggraph this year) illustrates how the community has embraced the format:


glTF Exporter on the web

One of the most requested features for A-Painter has been the ability to export to some standard format so people could reuse the drawing as an asset or placeholder in 3D content creation software (3ds Max, Maya,...) or engines like Unity or Unreal.
I started playing with the idea of exporting to OBJ, but a lot of changes were required to the original three.js exporter because of the lack of full triangle_strip support, so I put it on standby.

After seeing all the industry support and adoption of glTF at Siggraph 2017 I decided to give it a second try.

The work was much easier than expected thanks to the nice THREE.js / A-Frame loaders that Don McCurdy and Takahiro have been driving. I thought it would be great to export content created directly on the web to glTF, and it would serve as a great excuse to go deep on the spec and understand it better.

glTF Exporter in three.js

Thanks to the great glTF spec documentation and examples, I got a glTF exporter working pretty fast.

The first version of the exporter has already landed in r87, but it is still at an early stage and under development. There’s an open issue if you want to get involved and follow the conversations about the missing features: https://github.com/mrdoob/three.js/issues/11951

API

The API follows the same structure as the existing exporters available in three.js:

  • Create an instance of THREE.GLTFExporter.
  • Call parse with the objects or scene that you want to export.
  • Get the result in a callback and use it as you want.
var gltfExporter = new THREE.GLTFExporter();  
gltfExporter.parse( input, function( result ) {

  var output = JSON.stringify( result, null, 2 );
  console.log( output );
  downloadJSON( output, 'scene.gltf' );

}, options );

More detailed and updated information for the API can be found on the three.js docs

Together with the exporter I created a simple example in three.js that combines the different types of primitives, helpers, rendering modes and materials, and exposes all the options the exporter has, so we could use it as a testing scene throughout development.


Integration in three.js editor

The integration with the three.js editor was pretty straightforward, and I think it’s one of the most useful features: since the editor supports importing plenty of 3D formats, it can be used as an advanced converter from these formats to glTF, allowing the user to delete unneeded data, tweak parameters, modify materials, etc. before exporting.



glTF Exporter on A-Frame

Please note that since three.js r87 is required to use the GLTFExporter, currently only the master branch of A-Frame is supported; the first compatible stable version will be 0.7.0, to be released later this month.

Integration with A-Frame inspector

After the successful integration with the three.js editor, the next step was to integrate the same functionality into the A-Frame inspector.
I’ve added two options to export content to glTF:

  • Clicking on the export icon on the scenegraph will export the whole scene to glTF


  • Clicking on the entity’s attributes panel will export the selected entity to glTF


Exporter component in A-Frame

Last but not least, I’ve created an A-Frame component so users could export scenes and entities programmatically.

The API is quite simple, just call the export function from the gltf-exporter system:

sceneEl.systems['gltf-exporter'].export(input, options);  

The function accepts several different input values: none (export the whole scene), a single entity, an array of entities, or a NodeList (e.g. the result of a querySelectorAll).

The options accepted are the same as the original three.js function.
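For example, a minimal usage sketch (the .exportable selector is just illustrative):

var sceneEl = document.querySelector('a-scene');

// Export only the entities matching a selector; calling export() with no
// input would export the whole scene instead.
var entities = sceneEl.querySelectorAll('.exportable');
sceneEl.systems['gltf-exporter'].export(entities);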

A-Painter exporter

The whole story wouldn’t be complete if the initial issue that got me into glTF weren’t addressed :) After all the previous work described above, it was trivial to add support for exporting to glTF in A-Painter.

  • Include the aframe-gltf-exporter-component script:
<script src="https://unpkg.com/aframe-gltf-exporter-component/dist/aframe-gltf-exporter-component.min.js"></script>  
  • Attach the component to a-scene:
<a-scene gltf-exporter/>  
  • And finally register a shortcut (g) to save the current drawing to glTF:
if (event.keyCode === 71) {  
  // Export to GTF (g)
  var drawing = document.querySelector('.a-drawing');
  self.sceneEl.systems['gltf-exporter'].export(drawing);
}



Extra: Exporter bookmarklet

While developing the exporter I found it very useful to create a bookmarklet to inject the exporter code into every A-Frame or three.js page. This way I could just export the whole scene by clicking on it.
If A-FRAME is defined it will export AFRAME.scenes[0], which is the default scene loaded. If not, it will try to look for the global variable scene, which is the one most commonly used in three.js examples.
It is not bulletproof, so you may need to make some changes if it doesn’t work on your app, probably by looking for something other than scene.

To use it you should create a new bookmark on your browser and paste the following code on the URL input box:


What’s next?

At Mozilla we are committed to helping improve the glTF specification and its ecosystem.
glTF will keep evolving, and many interesting features are being proposed in the roadmap discussion. If you have any suggestions, don't hesitate to comment there, since all proposals are being discussed and taken into account.

As I stated before, the glTF exporter is still at an early stage, but it’s being actively developed, so please feel free to jump into the discussion to help prioritize new features.

Finally: wouldn't it be great to see more content creation tools on the web with glTF support, so you don't depend on a desktop application to generate your assets?

Air MozillaReps Weekly Meeting Aug. 31, 2017

Reps Weekly Meeting Aug. 31, 2017 This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

The Mozilla BlogA ₹1 Crore Fund to Support Open Source Projects in India

Today Mozilla is announcing the launch of “Global Mission Partners: India”, an award program specifically focused on supporting open source and free software.  The new initiative builds on the existing “Mission Partners” program. Applicants based in India can apply for funding to support any open source/free software projects which significantly further Mozilla’s mission.

Our mission, as embodied in our Manifesto, is to ensure the Internet is a global public resource, open and accessible to all; an Internet that truly puts people first, where individuals can shape their own experience and are empowered, safe and independent.

We know that many other software projects around the world, and particularly in India, share the goals of a free and open Internet with us, and we want to use our resources to help and encourage others to work towards this end.

If you are based in India and you think your project qualifies, Mozilla encourages you to apply.  You can find the complete guidelines about this exciting award program on Mozilla’s wiki page.

The minimum award for a single application to the “Global Mission Partners: India” initiative is ₹1,25,000, and the maximum is ₹50,00,000.

The deadline for applications for the initial batch of “Global Mission Partners: India” is the last day of September 2017, at midnight Indian Time. Organizations can apply beginning today, in English or Hindi.

You can find a version of this post in Hindi here.

The post A ₹1 Crore Fund to Support Open Source Projects in India appeared first on The Mozilla Blog.

Web Application SecurityRemoving Disabled WoSign and StartCom Certificates from Firefox 58

In October 2016, Mozilla announced that, as of Firefox 51, we would stop validating new certificates chaining to the root certificates listed below that are owned by the companies WoSign and StartCom.

The announcement also indicated our intent to eventually completely remove these root certificates from Mozilla’s Root Store, so that we would no longer validate any certificates issued by those roots. That time has now arrived. We plan to release the relevant changes to Network Security Services (NSS) in November, and then the changes will be picked up in Firefox 58, due for release in January 2018. Websites using certificates chaining up to any of the following root certificates need to migrate to another root certificate.

This announcement applies to the root certificates with the following names:

  • CA 沃通根证书
  • Certification Authority of WoSign
  • Certification Authority of WoSign G2
  • CA WoSign ECC Root
  • StartCom Certification Authority
  • StartCom Certification Authority G2

Mozilla Security Team

The post Removing Disabled WoSign and StartCom Certificates from Firefox 58 appeared first on Mozilla Security Blog.

Air MozillaIntern Presentations: Round 6: Tuesday, August 29th

Intern Presentations: Round 6: Tuesday, August 29th Intern Presentations 5 presenters Time: 1:00PM - 2:15PM (PDT) - each presenter will start every 15 minutes 3 MTV, 1 PDX, 1 TOR

hacks.mozilla.orgLife After Flash: Multimedia for the Open Web

Flash delivered video, animation, interactive sites and, yes, ads to billions of users for more than a decade, but now it’s going away. Adobe will drop support for Flash by 2020. Firefox no longer supports Flash out of the box, and neither does Chrome. So what’s next? There are tons of open standards that can do what Flash does, and more.

Truly Open Multimedia

Flash promised to deliver one unifying platform for building and delivering interactive multimedia websites. And, for the most part, it delivered. But the technology was never truly open and accessible, and Flash Player was too resource-hungry for mobile devices. Now open-source alternatives can do everything Flash does—and more. These are the technologies you should learn if you’re serious about building tomorrow’s interactive web, whether you’re doing web animation, games, or video.

Web Animation

 

CSS
CSS animation is relatively new, but it’s the easiest way to get started with web animation. CSS is made to style websites with basic rules that dictate layout, typography, colors, and more. With the release of CSS3, animations are now baked into the standard, and as a developer, it’s up to you to tell the browser how to animate. CSS is human readable, which means it basically does what it says on the tin. For example, the property “animation-direction” does exactly that: it specifies the direction of your animation.

Right now you can create smooth, seamless animations with CSS. It’s simple to create keyframes, adjust timing, animate opacity, and more.  And all the animations work with anything you’d style normally with CSS: text, images, containers, and so on.

You can do animation with CSS, even if you’re unfamiliar with programming languages. Like many open-source projects, the code is out there on the web for you to play around with. Mozilla has also created (and maintains) exhaustive CSS animation documentation. Most developers recommend using CSS animation for simple projects and JavaScript for more complex sites.

JavaScript
Developers have been animating with JavaScript since the early days. Basic mouseover scripts have been around for more than two decades, and today JavaScript, along with HTML5 <canvas> elements, can do some pretty amazing things. Even simple scripts can yield great results. With JavaScript, you can draw shapes, change colors, move and change images, and animate transparency. JavaScript animation often works with the SVG (Scalable Vector Graphics) format, meaning artwork is drawn live from mathematical descriptions rather than loaded as pre-rendered images. That means it stays crisp at any scale (hence the name) and can be completely controlled. SVG offers anti-aliased rendering, pattern and gradient fills, sophisticated filter effects, clipping to arbitrary paths, text, and animations. And, of course, it’s an open standard and W3C recommendation rather than a closed binary. Using SVG, JavaScript, and CSS3, developers can create impressive interactive animations that don’t require any specialized formats or players.

JavaScript animation can be very refined: animations can bounce, stop, pause, rewind, or slow down. It’s also interactive and can be programmed to respond to mouse clicks and rollovers. The new Web Animations API, built with JavaScript, lets you fine-tune animations with more control over keyframes and elements, but it’s still in the early, experimental phases of development and some features may not be supported by all browsers.
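
As a quick hedged sketch (the element selector and keyframe values here are made up, not from the article), animating and then controlling an element with the Web Animations API looks roughly like this:

const box = document.querySelector('#box');   // placeholder element

// Describe the keyframes and timing, then let the browser drive the animation.
const animation = box.animate(
  [
    { transform: 'translateX(0)', opacity: 1 },
    { transform: 'translateX(300px)', opacity: 0.25 }
  ],
  { duration: 1000, iterations: Infinity, direction: 'alternate', easing: 'ease-in-out' }
);

// The returned Animation object can be paused, reversed, or sped up at runtime.
animation.playbackRate = 2;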

Additionally, JavaScript animations can be programmed to respond to input fields, form submissions, and keystrokes. And that makes it perfect for building web games.

Web Games

At one time, Flash ruled web games. It was easy to learn, use, and distribute. It was also robust, able to deliver massively multiplayer online games to millions. But today it’s possible to deliver the same—if not better—experience using JavaScript, HTML5, WebGL and WebAssembly. With modern browsers and open-source frameworks, it’s possible to build 3D action shooters, RPGs, adventure games, and more. In fact, you can now even create fully immersive virtual reality experiences for the web with technologies like WebVR and A-Frame.

Web games rely on an ecosystem of open-source frameworks and platforms to work. Each one plays an important role, from visuals to controls to audio to networking. The Mozilla Developer Network has a thorough list of technologies that are currently in use. Here are just a few of them and what they’re used for:

WebGL
Lets you create high-performance, hardware-accelerated 3D (and 2D) graphics from Web content. This is a Web-supported implementation of OpenGL ES 2.0. WebGL 2 goes even further, enabling OpenGL ES 3.0 level of support in browsers.

JavaScript
JavaScript, the programming language used on the Web, works well in browsers and is getting faster all the time. It’s already used to build thousands of games and new game frameworks are being developed constantly.

HTML audio
The <audio> element lets you easily play simple sound effects and music. If your needs are more involved, check out the Web Audio API for real audio processing power!

Web Audio API
This API for controlling the playback, synthesis, and manipulation of audio from JavaScript code lets you create awesome sound effects as well as play and manipulate music in real time.
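
As an illustrative sketch (the file name is a placeholder, not from the article), fetching, decoding, and playing a one-shot sound effect with the Web Audio API might look roughly like this:

// Hypothetical example: decode and play a sound effect.
const audioCtx = new (window.AudioContext || window.webkitAudioContext)();

function playEffect(url) {
  return fetch(url)
    .then(response => response.arrayBuffer())
    .then(data => audioCtx.decodeAudioData(data))   // promise-based in modern browsers
    .then(buffer => {
      const source = audioCtx.createBufferSource();
      source.buffer = buffer;
      source.connect(audioCtx.destination);
      source.start();
    });
}

playEffect('laser.wav');   // placeholder asset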

WebSockets
The WebSocket API lets you connect your app or site to a server to transmit data back and forth in real time. Perfect for multiplayer turn-based or event-based gaming, chat services, and more.
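
For example, here is a minimal sketch of a game client sending and receiving moves over a WebSocket (the endpoint and message shape are placeholders):

// Hypothetical sketch: real-time messaging for a turn-based game.
const socket = new WebSocket('wss://example.com/game');   // placeholder endpoint

socket.addEventListener('open', () => {
  socket.send(JSON.stringify({ type: 'join', room: 'lobby' }));
});

socket.addEventListener('message', event => {
  const msg = JSON.parse(event.data);   // the server pushes opponents' moves as they happen
  console.log('received:', msg);
});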

WebRTC
WebRTC is an ultra-fast API that can be used by video-chat, voice-calling, and P2P-file-sharing Web apps. It can be used for real-time multiplayer games that require low latency.

WebAssembly
HTML5/JavaScript game engines are better than ever, but they still can’t quite match the performance of native apps. WebAssembly promises to bring near-native performance to web apps. The technology lets browsers run compiled C/C++ code, including games made with engines like Unity and Unreal.

With WebAssembly, web games will be able to take advantage of multithreading. Developers will be able to produce staggering 3D games for the web that run close to the same speed as native code, but without compromising on security. It’s a tremendous breakthrough for gaming — and the open web. It means that developers will be able to build games for any computer or system that can access the web. And because they’ll be running in browsers, it’ll be easy to integrate online multiplayer modes.
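
As a hedged sketch (the module name and export are placeholders, not a real engine), loading a compiled WebAssembly module and calling into it from JavaScript looks roughly like this:

// Hypothetical example: instantiate a compiled module and drive it from JS.
WebAssembly.instantiateStreaming(fetch('physics.wasm'), { /* imports, if any */ })
  .then(({ instance }) => {
    function frame(time) {
      instance.exports.step_world(time);   // placeholder export from the compiled C/C++ code
      requestAnimationFrame(frame);
    }
    requestAnimationFrame(frame);
  });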

Additionally, there are many HTML5/JavaScript game engines out there. These engines take care of the basics like physics and controls, giving developers a framework/world to build on. They range from lightweight and fast, like atom and Quick 2D engines, to full-featured 3D engines like WhitestormJS and Gladius. There are dozens to choose from, each with their own unique advantages and disadvantages for developers. But in the end, they all produce games that can be played on modern web browsers without plug-ins. And most of those games can run on less-powerful hardware, meaning you can reach even more users. In fact, games written for the web can run on tablets, smartphones, and even smart TVs.

MDN has extensive documentation on building web games and several tutorials on building games using pure JavaScript and the Phaser game framework. It’s a great place to start for web game development.

Video

Most video services have already switched to HTML5-based streaming using web technologies and open codecs; others are still sticking with the Flash-based FLV or F4V formats. As stated earlier, Flash video relies on software rendering that can tax web browsers and mobile platforms. Modern video codecs can use hardware rendering for video playback, greatly increasing responsiveness and efficiency. Unfortunately, there’s only one way to switch from Flash to HTML5: re-encoding your video. That means converting your source material into HTML5-friendly formats via a free converter like FFmpeg or HandBrake.

Mozilla is actively helping to build and improve the HTML5-friendly and open-source video format WebM. It’s based on the Matroska container and uses the VP8 and VP9 video codecs with the Vorbis or Opus audio codecs.

Once your media has been converted to an HTML5-friendly format, you can repost your videos on your site. HTML5 has built-in media controls, so there’s no need to install any players. It’s as easy as pie. Just use a single line of HTML:

<video src="videofile.webm" controls></video>

Keep in mind that native controls are inconsistent between browsers. Because they’re made with HTML5, however, you can customize them with CSS and link them to your video with JavaScript. That means you can build for accessibility, add your own branding, and keep the look and feel consistent between browsers.
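
For instance, a minimal sketch of a custom play/pause control (the element IDs are placeholders) takes only a few lines of JavaScript:

// Hypothetical example: replace the native controls with a custom button.
const video = document.querySelector('#my-video');
const playBtn = document.querySelector('#play-btn');

video.removeAttribute('controls');   // hide the inconsistent native controls

playBtn.addEventListener('click', () => {
  if (video.paused) {
    video.play();
    playBtn.textContent = 'Pause';
  } else {
    video.pause();
    playBtn.textContent = 'Play';
  }
});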

HTML5 can also handle adaptive streaming with Media Source Extensions (MSE). Although MSE can be difficult to set up on your own, pre-packaged players like Shaka Player and JW Player can handle the details for you.
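
As a rough sketch of what that looks like with Shaka Player (the manifest URL is a placeholder; check the player's own documentation for the exact API):

// Hypothetical sketch: basic adaptive-streaming playback with Shaka Player.
shaka.polyfill.installAll();   // patch missing browser features where needed
const video = document.querySelector('video');
const player = new shaka.Player(video);

player.load('https://example.com/stream/manifest.mpd')   // placeholder DASH manifest
  .then(() => console.log('stream loaded'))
  .catch(err => console.error('error loading stream', err));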

The developers at MDN have created an in-depth guide for converting Flash video to HTML5 video with many more details on the process. Fortunately, it’s not as difficult as it seems.

Flash Forward

The future of the web is open (hopefully) and Flash, despite being a great tool for creatives, wasn’t open enough. Thankfully, many open source tools  can do what Flash does, and more. But we’re still in the early stages and creating animations, interactive websites, and web games takes some coding knowledge. Everything you need to know is out there, just waiting for you to learn it.

Open web technologies promise to be better than Flash ever was, and will be accessible to anyone with an Internet connection.

hacks.mozilla.orgFlash, In Memoriam

Adobe will drop Flash by 2020. Firefox no longer supports Flash out of the box, and neither does Chrome. The multimedia platform is being replaced with open internet technologies like HTML5, CSS3, and JavaScript. But at one time, Flash was cutting edge. It inspired a generation of animators and developers and gave us some fantastic websites, games, TV shows, and even movies.

Macromedia launched Flash 1.0 (originally FutureWave SmartSketch) in 1996 with a grand vision: A single multimedia platform that would work flawlessly in any browser or any computer. No pesky browser interoperability issues, no laborious cross-browser testing. Just experiences that looked and acted the same in every browser.

A slick GUI, novel drawing and animation tools, and a simple scripting language made Flash a smash hit. Many artists, developers, filmmakers, and storytellers (myself included) were smitten. The platform sparked a revolution of multimedia websites rife with elaborate mouseover effects, thumping electronic music, and motion-sickness-inducing transitions. Corporations and businesses of all shapes and sizes created Flash websites. Millions of Flash-based games hit the web via sites like Newgrounds and many popular games were developed with Flash, including Angry Birds, Clash of Clans, FarmVille, AdventureQuest and Machinarium.

Flash also became a popular animation tool. Hit kids’ shows like Pound Puppies and My Little Pony: Friendship is Magic and comedy series like Total Drama and Squidbillies were made exclusively in Flash. The 2009 Academy Award nominated animated movie The Secret of Kells was also made in Flash. Then, of course, there was the Internet phenomenon Homestar Runner—animated web series, interactive website, and games hub.

In 2005, Macromedia was purchased by Adobe. That same year, YouTube launched. The streaming video service used the Flash player to deliver video to millions. At one time, 75% of all video content on the web was delivered via the Flash player.

Over the years, Flash grew, but didn’t necessarily improve. Its codebase became bloated and processor-power hungry. Then Apple released the iPhone, famously without Flash support. Flash used software rendering for video, which hurt battery life and performance on mobile devices. Instead, Apple recommended the HTML5 <video> tag  for video delivery on the web, using formats which can be rendered in hardware much more efficiently. YouTube added support for HTML5-friendly video and in 2015 announced that it would drop all support for Flash.

Flash is also, at its core, a closed and proprietary platform. Its code is controlled exclusively by Adobe with little or no community support.

Finally, Adobe itself announced the end of Flash. The company will no longer support Flash after 2020. It will continue to support Adobe AIR, however, which packages Flash material and scripts into a runtime for desktop and mobile devices.

Flash undoubtedly made a huge contribution to the web, despite its drawbacks. It triggered a wave of creativity and inspired millions of people around the world to create digital media for the web.

In my next post, Life After Flash, I’ll walk you through some of the new open standards, tools, and technologies that make online multimedia more performant and interactive than ever.

Mozilla Gfx TeamWebRender newsletter #3

WebRender work is coming along nicely. I haven’t managed to properly track what landed this week so the summary below is somewhat short. This does little justice to the great stuff that is happening on the side. For example I won’t list the many bugs that Sotaro finds and fixes on a daily basis, or the continuous efforts Kats puts into keeping Gecko’s repository in sync with WebRender’s, or Ryan’s work on cbindgen (the tool we made to auto-generate C bindings for WebRender), or the unglamorous refactoring I got myself into in order to get some parts of Gecko to integrate with WebRender without breaking the web. Lee has been working on the dirty and gory details of fonts for a while but that won’t make it to the newsletter until it lands. Morris’s work on display items conversion hasn’t yet received due credit here, nor Jerry’s work on handling the many (way too many) texture formats that have to be supported by WebRender for video playback. Meanwhile Gankro is working on changes to the rust language itself that will make our life easier when dealing with fallible allocation and Kvark, after reviewing most of what lands in the WebRender repo and triaging all of the issues, manages to find the time to add tools to measure pixel coverage of render passes, and plenty of other things I don’t even know about because following everything closely would be a full-time job. You get the idea. I just wanted to give a little shout out to the people working on very important parts of the project that may not always appear in the highlights below, either because the work hasn’t landed yet, because I missed it, or because it was hidden behind Glenn’s usual round of epic optimization.

Notable WebRender changes

  • Glenn optimized the allocation of clip masks. Improvements with this fix on a test case generated from running cnn.com in Gecko:
    GPU time 10ms -> 1.7ms.
    Clip target allocations 54 -> 1.
    CPU compositor time 2.8ms -> 1.8ms.
    CPU backend time 1.8ms -> 1.6ms.

Notable Gecko changes

  • Jeff landed tiling support for blob images. Tiling is currently only used for very large images, but when used we get parallel rasterization across tiles for free.
  • Fallback blob images are no longer manually clipped. This means that we don’t have to redraw them while scrolling anymore. This gives a large performance improvement when scrolling mozilla.org.

QMOFirefox 56 Beta 4 Testday Results

As you may already know, last Friday – August 18th – we held a new Testday event, for Firefox 56 Beta 4.

Thank you Iryna Thompson for helping us make Mozilla a better place.

From India team: Fahima Zulfath A, Surentharan R.A, subash.M, Ponmurugesh.M, R.KRITHIKA SOWBARNIKA.

From Bangladesh team: Hossain Al Ikram, Maruf Rahman, Azmina Akter Papeya, Rahat Anwar, Saddam Hossain, Anika Alam, Iftekher Alam, Sajedul Islam, Tanjina Tonny, Kazi Nuzhat Tasnem, Tanvir Mazharul, Taseenul Hoque Bappi, Sontus Chandra Anik, Md. Rahimul Islam, Nafis Fuad, Saheda Reza Antora.

Results:
– several test cases executed for Media Block Autoplay, Preferences Search [Photon] and Photon Preference reorg V2 features;
– 3 bugs verified: 1374972, 1387273 and 1375883.

Thanks for another successful testday 🙂

We hope to see you all in our next events, all the details will be posted on QMO!

Open Policy & AdvocacyMozilla applauds India Supreme Court’s decision upholding privacy as a fundamental right

Mozilla is thrilled to see the Supreme Court of India’s decision declaring that the Right to Privacy is guaranteed by the Indian Constitution. Mozilla fights for privacy around the world as part of our mission, and so we’re pleased to see the Supreme Court unequivocally end the debate on whether this right even exists in India. Attention must move now to Aadhaar, which the government is increasingly making mandatory without meaningful privacy protections. To realize the right to privacy in practice, swift action is needed to enact a strong data protection law.

The post Mozilla applauds India Supreme Court’s decision upholding privacy as a fundamental right appeared first on Open Policy & Advocacy.

Air MozillaReps Weekly Meeting Aug. 24, 2017

Reps Weekly Meeting Aug. 24, 2017 This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

hacks.mozilla.orgIntroducing the Extension Compatibility Tester

With Firefox’s move to a modern web-style browser extension API, it’s now possible to maintain one codebase and ship an extension in multiple browsers. However, since different browsers can have different capabilities, some extensions may require modification to be truly portable. With this in mind, we’ve built the Extension Compatibility Tester to give developers a better sense of whether their existing extensions will work in Firefox.

The tool currently supports Chrome extension bundle (.crx) files, but we’re working on expanding the types of extensions you can check. The tool generates a report showing any potential uses of APIs or permissions incompatible with Firefox, along with next steps on how to distribute a compatible extension to Firefox users.

We will continue to participate in the Browser Extensions Community Group and support its goal of finding a common subset of extensible points in browsers and APIs that developers can use. We hope you give the tool a spin and let us know what you think!

Try it out! >>

“The tool says my extension may not be compatible”

Not to worry! Our analysis only shows API and permission usage, and doesn’t have the full context. If the incompatible functionality is non-essential to your extension you can use capability testing to only use the API when available:

// Causes an Error
browser.unavailableAPI(...);

// Capability Testing FTW!
if ('unavailableAPI' in browser) {
	browser.unavailableAPI(...);
}

Additionally, we’re constantly expanding the available extension APIs, so your missing functionality may be only a few weeks away!

“The tool says my extension is compatible!”

Hooray! That said, definitely try your extension out in Firefox before submitting to make sure things work as you expect. Common APIs may still have different effects in different browsers.

“I don’t want to upload my code to a 3rd party website.”

Understood! The compatibility testing is available as part of our extension development command-line tool or as a standalone module.

If you have any issues using the tool, please file an issue or leave a comment here. The hope is that this tool is a useful first step in helping developers port their extensions, and that we end up with a healthier, more interoperable extension ecosystem.

Happy porting!

Air MozillaThe Joy of Coding - Episode 110

The Joy of Coding - Episode 110 mconley livehacks on real Firefox bugs while thinking aloud.

The Mozilla BlogThe Battle to Save Net Neutrality: A Panel with Tom Wheeler, Ro Khanna, Mozilla, Leading TV Producers and Others

On September 18, net neutrality experts will gather at the Internet Archive to discuss dire threats to the open web. You’re invited.

 

Net neutrality — and the future of a healthy internet — is at stake.

In May, the FCC voted to move forward with plans to gut net neutrality. It was a decision met with furor: Since then, many millions of Americans have written, phoned and petitioned the FCC, demanding an internet that belongs to individual users, not broadband ISP gatekeepers. And scores of nonprofits and technology companies have organized to amplify Americans’ voices.

The first net neutrality public comment period ends on August 30, and the FCC is moving closer to a vote.

So on Monday, September 18, Mozilla is gathering leaders at the forefront of protecting net neutrality. We’ll discuss why it matters, what lies ahead, and what can be done to protect it.

RSVP: The Battle to Save Net Neutrality

Leaders like former FCC Chairman Tom Wheeler and Congressman Ro Khanna will discuss net neutrality’s importance to free speech, innovation, competition and social justice.

This free public event, titled “Battle to Save Net Neutrality,” will feature a panel discussion, reception and audience Q&A. It will be held at the Internet Archive (300 Funston Avenue, San Francisco) from 6 p.m. to 9 p.m. Participants include:

  • Panelist Tom Wheeler, former FCC Chairman who served under President Obama and architect of the 2015 net neutrality rules

 

  • Panelist and Congressman Ro Khanna (D-California), who represents California’s 17th congressional district in the heart of Silicon Valley. Khanna is a vocal supporter of net neutrality

 

  • Panelist Amy Aniobi, TV writer and producer for “Insecure” (HBO) and “Silicon Valley” (HBO), and member of the Writers Guild of America, West

 

  • Panelist Luisa Leschin, TV writer and producer for “From Dusk til Dawn” (Netflix) and “Just Add Magic” (Amazon), and a member of the Writers Guild of America, West

 

  • Panelist Denelle Dixon, Mozilla Chief Legal and Business Officer. Dixon spearheads Mozilla’s business, policy and legal activities in defense of a healthy internet. She is a vocal advocate for net neutrality, encryption and greater user choice and control online

 

  • Panelist Malkia Cyril, Executive Director of the Center for Media Justice. Cyril has spent the past 20 years building the capacity of racial and economic justice movements to win media rights, access and power in the digital age

 

  • Moderator Gigi Sohn, Mozilla Tech Policy Fellow and former counselor to FCC Chairman Tom Wheeler (2013-2016). One of the nation’s leading advocates for an open, fair, and fast internet, Sohn was named “one of the heroes that saved the Internet” by The Daily Dot for her leadership in the passage of the FCC’s strong net neutrality rules in 2015

Join us as we discuss the future of net neutrality, and what it means for the health of the internet. Register for this free event here.

The post The Battle to Save Net Neutrality: A Panel with Tom Wheeler, Ro Khanna, Mozilla, Leading TV Producers and Others appeared first on The Mozilla Blog.

hacks.mozilla.orgInside a super fast CSS engine: Quantum CSS (aka Stylo)

You may have heard of Project Quantum… it’s a major rewrite of Firefox’s internals to make Firefox fast. We’re swapping in parts from our experimental browser, Servo, and making massive improvements to other parts of the engine.

The project has been compared to replacing a jet engine while the jet is still in flight. We’re making the changes in place, component by component, so that you can see the effects in Firefox as soon as each component is ready.

And the first major component from Servo—a new CSS engine called Quantum CSS (previously known as Stylo)—is now available for testing in our Nightly version. You can make sure that it’s turned on for you by going to about:config and setting layout.css.servo.enabled to true.

This new engine brings together state-of-the-art innovations from four different browsers to create a new super CSS engine.

4 browser engines feeding in to Quantum CSS

It takes advantage of modern hardware, parallelizing the work across all of the cores in your machine. This means it can run up to 2 or 4 or even 18 times faster.

On top of that, it combines existing state-of-the-art optimizations from other browsers. So even if it weren’t running in parallel, it would still be one fast CSS engine.

Racing jets

But what does the CSS engine do? First let’s look at the CSS engine and how it fits into the rest of the browser. Then we can look at how Quantum CSS makes it all faster.

What does the CSS engine do?

The CSS engine is part of the browser’s rendering engine. The rendering engine takes the website’s HTML and CSS files and turns them into pixels on the screen.

Files to pixels

Each browser has a rendering engine. In Chrome, it’s called Blink. In Edge, it’s called EdgeHTML. In Safari, it’s called WebKit. And in Firefox, it’s called Gecko.

To get from files to pixels, all of these rendering engines basically do the same things:

  1. Parse the files into objects the browser can understand, including the DOM. At this point, the DOM knows about the structure of the page. It knows about parent/child relationships between elements. It doesn’t know what those elements should look like, though.
  2. Figure out what the elements should look like. For each DOM node, the CSS engine figures out which CSS rules apply. Then it figures out values for each CSS property for that DOM node.
  3. Figure out dimensions for each node and where it goes on the screen. Boxes are created for each thing that will show up on the screen. The boxes don’t just represent DOM nodes… you will also have boxes for things inside the DOM nodes, like lines of text.
  4. Paint the different boxes. This can happen on multiple layers. I think of this like old-time hand-drawn animation, with onionskin layers of paper. That makes it possible to just change one layer without having to repaint things on other layers.
  5. Take those different painted layers, apply any compositor-only properties like transforms, and turn them into one image. This is basically like taking a picture of the layers stacked together. This image will then be rendered on the screen.

This means when it starts calculating the styles, the CSS engine has two things:

  • a DOM tree
  • a list of style rules

It goes through each DOM node, one by one, and figures out the styles for that DOM node. As part of this, it gives the DOM node a value for each and every CSS property, even if the stylesheets don’t declare a value for that property.

I think of it kind of like somebody going through and filling out a form. They need to fill out one of these forms for each DOM node. And for each form field, they need to have an answer.

Blank form with CSS properties

To do this, the CSS engine needs to do two things:

  • figure out which rules apply to the node — aka selector matching
  • fill in any missing values with values from the parent or a default value—aka the cascade

Selector matching

For this step, we’ll add any rule that matches the DOM node to a list. Because multiple rules can match, there may be multiple declarations for the same property.

Person putting check marks next to matching CSS rules

Plus, the browser itself adds some default CSS (called user agent style sheets). How does the CSS engine know which value to pick?

This is where specificity rules come in. The CSS engine basically creates a spreadsheet. Then it sorts the declarations based on different columns.

Declarations in a spreadsheet

The rule that has the highest specificity wins. So based on this spreadsheet, the CSS engine fills out the values that it can.

Form with some CSS properties filled in

For the rest, we’ll use the cascade.

The cascade

The cascade makes CSS easier to write and maintain. Because of the cascade, you can set the color property on the body and know that text in p, and span, and li elements will all use that color (unless you have a more specific override).

To do this, the CSS engine looks at the blank boxes on its form. If the property inherits by default, then the CSS engine walks up the tree to see if one of the ancestors has a value. If none of the ancestors have a value, or if the property does not inherit, it will get a default value.

Form with all CSS properties filled in

So now all of the styles have been computed for this DOM node.
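
As a quick illustrative sketch (the selector is a placeholder, and this is script-level observation rather than engine internals), you can see that every element ends up with a computed value for every property, whether it was declared, inherited through the cascade, or left at its default:

// Hypothetical example: every DOM node gets a value for every CSS property.
const para = document.querySelector('p');   // placeholder element
const computed = getComputedStyle(para);

console.log(computed.color);              // inherited from an ancestor via the cascade
console.log(computed.animationDirection); // 'normal': the default value, never declared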

A sidenote: style struct sharing

The form that I’ve been showing you is a little misleading. CSS has hundreds of properties. If the CSS engine held on to a value for each property for each DOM node, it would soon run out of memory.

Instead, engines usually do something called style struct sharing. They store data that usually goes together (like font properties) in a different object called a style struct. Then, instead of having all of the properties in the same object, the computed styles object just has pointers. For each category, there’s a pointer to the style struct that has the right values for this DOM node.

Chunks of the form pulled out to separate objects

This ends up saving both memory and time. Nodes that have similar properties (like siblings) can just point to the same structs for the properties they share. And because many properties are inherited, an ancestor can share a struct with any descendants that don’t specify their own overrides.

Now, how do we make that fast?

So that is what style computation looks like when you haven’t optimized it.

Steps in CSS style computation: selector matching, sorting by specificity, and computing property values

There’s a lot of work happening here. And it doesn’t just need to happen on the first page load. It happens over and over again as users interact with the page, hovering over elements or making changes to the DOM, triggering a restyle.

Initial styling plus restyling for hover, DOM nodes added, etc

This means that CSS style computation is a great candidate for optimization… and browsers have been testing out different strategies to optimize it for the past 20 years. What Quantum CSS does is take the best of these strategies from different engines and combine them to create a superfast new engine.

So let’s look at the details of how these all work together.

Run it all in parallel

The Servo project (which Quantum CSS comes from) is an experimental browser that’s trying to parallelize all of the different parts of rendering a web page. What does that mean?

A computer is like a brain. There’s a part that does the thinking (the ALU). Near that, there’s some short term memory (the registers). These are grouped together on the CPU. Then there’s longer term memory, which is RAM.

CPU with ALU (the part that does the thinking) and registers (short term memory)

Early computers could only think one thing at a time using this CPU. But over the last decade, CPUs have shifted to having multiple ALUs and registers, grouped together in cores. This means that the CPU can think multiple things at once — in parallel.

CPU chip with multiple cores containing ALUs and registers

Quantum CSS makes use of this recent feature of computers by splitting up style computation for the different DOM nodes across the different cores.

This might seem like an easy thing to do… just split up the branches of the tree and do them on different cores. It’s actually much harder than that for a few reasons. One reason is that DOM trees are often uneven. That means that one core will have a lot more work to do than others.

Imbalanced DOM tree being split between multiple cores so one does all the work

To balance the work more evenly, Quantum CSS uses a technique called work stealing. When a DOM node is being processed, the code takes its direct children and splits them up into 1 or more “work units”. These work units get put into a queue.

Cores segmenting their work into work units

When one core is done with the work in its queue, it can look in the other queues to find more work to do. This means we can evenly divide the work without taking time up front to walk the tree and figure out how to balance it ahead of time.

Cores that have finished their work stealing from the core with more work

In most browsers, it would be hard to get this right. Parallelism is a known hard problem, and the CSS engine is very complex. It’s also sitting between the two other most complex parts of the rendering engine — the DOM and layout. So it would be easy to introduce a bug, and parallelism can result in bugs that are very hard to track down, called data races. I explain more about these kinds of bugs in another article.

If you’re accepting contributions from hundreds or thousands of engineers, how can you program in parallel without fear? That’s what we have Rust for.

Rust logo

With Rust, you can statically verify that you don’t have data races. This means you avoid tricky-to-debug bugs by just not letting them into your code in the first place. The compiler won’t let you do it. I’ll be writing more about this in a future article. In the meantime, you can watch this intro video about parallelism in Rust or this more in-depth talk about work stealing.

With this, CSS style computation becomes what’s called an embarrassingly parallel problem — there’s very little keeping you from running it efficiently in parallel. This means that we can get close to linear speed ups. If you have 4 cores on your machine, then it will run close to 4 times faster.

Speed up restyles with the Rule Tree

For each DOM node, the CSS engine needs to go through all of the rules to do selector matching. For most nodes, this matching likely won’t change very often. For example, if the user hovers over a parent, the rules that match it may change. We still need to recompute style for its descendants to handle property inheritance, but the rules that match those descendants probably won’t change.

It would be nice if we could just make a note of which rules match those descendants so we don’t have to do selector matching for them again… and that’s what the rule tree—borrowed from Firefox’s previous CSS engine—does.

The CSS engine will go through the process of figuring out the selectors that match, and then sorting them by specificity. From this, it creates a linked list of rules.

This list is going to be added to the tree.

A linked list of rules being added to the rule tree

The CSS engine tries to keep the number of branches in the tree to a minimum. To do this, it will try to reuse a branch wherever it can.

If most of the selectors in the list are the same as an existing branch, then it will follow the same path. But it might reach a point where the next rule in the list isn’t in this branch of the tree. Only at that point will it add a new branch.

The last item in the linked list being added to the tree

The DOM node will get a pointer to the rule that was inserted last (in this example, the div#warning rule). This is the most specific one.

On restyle, the engine does a quick check to see whether the change to the parent could potentially change the rules that match children. If not, then for any descendants, the engine can just follow the pointer on the descendant node to get to that rule. From there, it can follow the tree back up to the root to get the full list of matching rules, from most specific to least specific. This means it can skip selector matching and sorting completely.

Skipping selector matching and sorting by specificity

So this helps reduce the work needed during restyle. But it’s still a lot of work during initial styling. If you have 10,000 nodes, you still need to do selector matching 10,000 times. But there’s another way to speed that up.

Speed up initial render (and the cascade) with the style sharing cache

Think about a page with thousands of nodes. Many of those nodes will match the same rules. For example, think of a long Wikipedia page… the paragraphs in the main content area should all end up matching the exact same rules, and have the exact same computed styles.

If there’s no optimization, then the CSS engine has to match selectors and compute styles for each paragraph individually. But if there was a way to prove that the styles will be the same from paragraph to paragraph, then the engine could just do that work once and point each paragraph node to the same computed style.

That’s what the style sharing cache—inspired by Safari and Chrome—does. After it’s done processing a node, it puts the computed style into the cache. Then, before it starts computing styles on the next node, it runs a few checks to see whether it can use something from the cache.

Those checks are:

  • Do the 2 nodes have the same ids, classes, etc? If so, then they would match the same rules.
  • For anything that isn’t selector based—inline styles, for example—do the nodes have the same values? If so, then the rules from above either won’t be overridden, or will be overridden in the same way.
  • Do both parents point to the same computed style object? If so, then the inherited values will also be the same.

Computed styles being shared by all siblings, and then asking the question of whether a cousin can share. Answer: yes

Those checks have been in earlier style sharing caches since the beginning. But there are a lot of other little cases where styles might not match. For example, if a CSS rule uses the :first-child selector, then two paragraphs might not match, even though the checks above suggest that they should.

In WebKit and Blink, the style sharing cache would give up in these cases and not use the cache. As more sites use these modern selectors, the optimization was becoming less and less useful, so the Blink team recently removed it. But it turns out there is a way for the style sharing cache to keep up with these changes.

In Quantum CSS, we gather up all of those weird selectors and check whether they apply to the DOM node. Then we store the answers as ones and zeros. If the two elements have the same ones and zeros, we know they definitely match.

A scoreboard showing 0s and 1s, with the columns labeled with selectors like :first-child

If a DOM node can share styles that have already been computed, you can skip pretty much all of the work. Because pages often have many DOM nodes with the same styles, this style sharing cache can save on memory and also really speed things up.

Skipping all of the work

Conclusion

This is the first big technology transfer of Servo tech to Firefox. Along the way, we’ve learned a lot about how to bring modern, high-performance code written in Rust into the core of Firefox.

We’re very excited to have this big chunk of Project Quantum ready for users to experience first-hand. We’d be happy to have you try it out, and let us know if you find any issues.

Mozilla Gfx TeamWebRender newsletter #2

Here comes the second installment of the WebRender newsletter. Last week’s newsletter had some instructions about enabling WebRender but I apparently didn’t make it clear enough that the Gecko integration is still in a very rough shape and will be for a while. If Gecko+WebRender spectacularly crashes at startup or renders some things incorrectly on your computer, worry not, it is to be expected. We’ll fix these issues in due time and we will let you know through this newsletter when WebRender gets to a dogfoodable state, and later when it is ready for use by a wider audience.

Notable WebRender changes

  • This week saw mostly bug fixes in the WebRender repo.

Notable Gecko changes

  • The texture cache rewrite has landed in gecko.
  • All text (including shadows and decorations) is handled by WebRender now. This is probably the most noticeable performance improvement of the week. Our current blob image path for fonts is very slow (that will be fixed by bug 1380014).
  • APZ works in layers-free now.
  • Removed a bunch of malloc/free from blob image playback. This will speed up blob image playback especially when there are multiple blob images to be played back because we don’t end up running into the malloc/free lock.
  • Support for perspective transforms.

The Mozilla BlogWelcome Michael DeAngelo, Chief People Officer

Michael DeAngelo joins the Mozilla leadership team this week as Chief People Officer.

As Chief People Officer, Michael is responsible for all aspects of HR and Organizational Development at Mozilla Corporation with an overall focus on ensuring we’re building and growing a resilient, high impact global organization as a foundation for our next decade of growth and impact.

Michael brings two decades of experience leading people teams at Pinterest, Google and Pepsico. Earlier in his career Michael held a number of HR roles in Organization Development, Compensation, and People Operations at Microsoft, Merck and AlliedSignal.

At Pinterest, Michael built out the majority of the HR function. One of the most important teams was the Diversity and Inclusion function, which is recognized as one of the best in the industry. Two of his proudest moments there were Pinterest becoming the first company ever to set public diversity goals and growing new hires from under-represented backgrounds from 1% to 9% in one year.

Michael brings a global perspective from his tenure at Google, where for four years he led HR for the Europe/Middle East/Africa region based in Zurich, Switzerland. At Google he also led the HR team supporting 10,000 employees for Search, Android, Chrome, and Google+. Prior to Google, Michael was Vice President of HR at Pepsico leading all people functions for the Quaker Foods global P&L.

“Having spent so much of my career in technology, I have long been an admirer of Mozilla and the important contributions it has made to keeping the Internet open and accessible to all. This is an exciting time for Mozilla as we’re about to deliver a completely revitalized Firefox, we’re ramping investments in new emerging technologies and we’re making important strides in fighting for a healthier Internet platform for all. I am excited to come on board and support the continued development and growth of the organization’s talent and capabilities to help us reach our goals.”

Michael will be based in the Bay Area and will work primarily out of our San Francisco office.

Welcome Michael!

chris

The post Welcome Michael DeAngelo, Chief People Officer appeared first on The Mozilla Blog.

Air MozillaWebdev Beer and Tell: August 2017, 18 Aug 2017

Webdev Beer and Tell: August 2017 Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on in...

Air MozillaIntern Presentations: Round 5: Thursday, August 17th

Intern Presentations: Round 5: Thursday, August 17th Intern Presentations 7 presenters Time: 1:00PM - 2:45PM (PDT) - each presenter will start every 15 minutes 3 SF, 1 TOR, 1 PDX, 2 Paris

Mozilla VR BlogSamsung Gear VR support lands in Servo

We are happy to announce that Samsung Gear VR headset support is landing in Servo. The current implementation is WebVR 1.1 spec-compliant and supports both the remote and headset controllers available in the Samsung Gear VR 2017 model.

If you are eager to explore, you can download a project template compatible with Gear VR Android phones. Add your Oculus signature file, and run the project to launch the application on your mobile phone.
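
Content that targets the headset goes through the standard WebVR 1.1 API. Here is a minimal hedged sketch (the canvas and the drawScene() helper are placeholders for your own WebGL setup) of how a page typically finds a display, starts presenting, and renders each frame:

// Hypothetical sketch of a WebVR 1.1 render loop.
const frameData = new VRFrameData();

navigator.getVRDisplays().then(displays => {
  const display = displays[0];
  if (!display) return;

  // requestPresent must be triggered from a user gesture (e.g. a click).
  display.requestPresent([{ source: canvas }]).then(() => {
    function onFrame() {
      display.getFrameData(frameData);   // per-eye view/projection matrices and pose
      drawScene(frameData);              // placeholder: render left and right eye views
      display.submitFrame();             // hand the rendered frame to the compositor
      display.requestAnimationFrame(onFrame);
    }
    display.requestAnimationFrame(onFrame);
  });
});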

Alongside the Gear VR support, we worked on other Servo areas in order to provide A-Frame compatibility, WebGL extensions, optimized Android compilations and reduced Servo startup times.

A-Frame Compatibility

Servo now supports Mutation Observers, which enables us to polyfill Custom Elements. Together with a solid WebVR architecture and better texture loading, we can now run any A-Frame content across mobile (Google Daydream, Samsung Gear VR) and desktop (HTC Vive) platforms. All the pieces have fallen into place thanks to all the amazing work that the Servo team is doing.

WebGL Extensions

WebGL Extensions enable applications to get optimal performance by taking advantage of state-of-the-art GPU capabilities. This is even more important in VR because of the extra work required for stereo rendering. We designed the WebGL extension architecture and implemented some of the extensions used by A-Frame/Three.js such as float textures, instancing, compressed textures and VAOs.
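
As an illustrative sketch (not Servo code), this is how web content opts into a few of these WebGL 1 extensions through getExtension(), using the standard registry names:

// Hypothetical example: querying some of the extensions mentioned above.
const gl = document.createElement('canvas').getContext('webgl');

const floatTextures = gl.getExtension('OES_texture_float');      // float textures
const instancing = gl.getExtension('ANGLE_instanced_arrays');    // instanced rendering
const vaos = gl.getExtension('OES_vertex_array_object');         // vertex array objects
const s3tc = gl.getExtension('WEBGL_compressed_texture_s3tc');   // compressed textures

if (instancing) {
  // e.g. instancing.drawArraysInstancedANGLE(gl.TRIANGLES, 0, vertexCount, instanceCount);
}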

Compiling Servo for Android

Recently, the Rust team changed the default Android compilation targets. They added an armv7-linux-androideabi target corresponding to the armeabi-v7a official ABI and changed arm-linux-androideabi to correspond to the armeabi official ABI instead of armeabi-v7a.

This could have caused significant performance regressions in Servo, which was using the arm-linux-androideabi target by default. Using the new armv7 compilation target is easy for pure Rust crates. It’s not so trivial for cmake- or makefile-based dependencies because they infer the toolchain and compiler names from the target triple.

We adapted all the problematic dependencies. We took advantage of this work to add arm64 compilation support and provided a simple CLI API to select any Android compilation target in Servo.

Reduced startup times

The C-based libfontconfig library was causing long startup times in Servo for Android. We didn’t find a way to fix the library itself, so we opted to get rid of it and implement an alternative way to query Android system fonts. Unfortunately, Android doesn’t provide an API to query system fonts until Android O, so we were forced to parse the system configuration files and load the fonts manually.

Gear VR support on Rust-WebVR Library

We started working on ovr-mobile-sys, the Rust bindings crate for the Oculus Mobile SDK API. We used rust-bindgen to automatically generate the bindings from the C headers but had to manually transpile some of the inline SDK header code since inline functions don’t generate symbols and are not exported by rust-bindgen.

Then we added the SDK integration into the rust-webvr standalone library. The OculusVRService class offers the entry point to access Oculus SDK and handles life-cycle operations such as initialization, shutdown, and VR device discovery. The integration with the headset is implemented in OculusVRDisplay. Gear VR lacks positional tracking, but by using the neck model provided in the SDK, we expose a basic position vector simulating how the human head naturally rotates relative to the base of the neck.

In order to read Gear VR sensor input and submit frames to the headset, the Android activity must enter VR mode by calling the vrapi_EnterVrMode() function. The Oculus Mobile SDK requires precise life-cycle management and the handling of events that may interleave in complex ways. For a correct implementation, the Android activity must enter VR mode in a surfaceChanged() or onResume() event, whichever comes last, and it must leave VR mode in a surfaceDestroyed() or onPause() event, whichever comes first.

In a Glutin-based Android NativeActivity, life-cycle events are notified using Rust channels. This caused synchronization problems due to non-deterministic event handling across threads. We couldn’t guarantee that the vrapi_LeaveVrMode() function was called before the NativeActivity’s EGLSurface was destroyed and the app went to the background. Additionally, we needed to block the event notifier thread until Gear VR resources were freed, in a different renderer thread, to prevent collisions (e.g. Glutin dropping the EGLSurface at the same time that the VR renderer thread was leaving VR mode). We contributed a deterministic event handling implementation to the rust-android-glue.

The Oculus Mobile SDK allows sending a WebGL context texture directly to the headset. Despite that, we opted for the triple-buffered swap chain recommended in the SDK to avoid potential flickering and performance problems when using the same texture every frame. As we did with the Daydream implementation, we render the VR-ready texture to the current ovrTextureSwapChain using a BlitFramebuffer-based solution, instead of rendering a quad, to avoid implementing the required OpenGL state-change safeguards or context switching.

The Oculus Mobile SDK allowed us to attach the NativeActivity’s surface directly to the Gear VR time warp renderer. We were able to run the pure Rust room-scale demo without writing a line of Java. It’s nice that the SDK allows a Java-free integration, but our luck changed when we integrated all this work into a full browser architecture.

Gear VR integration into Servo

Our Daydream integration worked inside Servo almost on a first try after it landed on the rust-webvr standalone library. This was not the case with the Gear VR integration…

First, we had to research and fix up to four specific GPU driver issues with the Mali-T880 GPU used in the Samsung Galaxy S7 phone.

As a result, we were able to see WebGL stereo rendering on the screen, but entering VR mode crashed with a JNI assertion failure inside the Oculus VR SDK. This happened because, inside the browser, different threads are used for rendering and for VR device initialization/discovery, which requires using a different Oculus ovrJava instance for each thread.

The assertion failure was gone, but we couldn’t see anything on the screen after calling vrapi_EnterVrMode(). The logcat error messages triggered by the Oculus SDK helped us find the cause of the problem: the Gear VR time warp implementation hijacks the explicitly passed Android window surface pointer. We could use the NativeActivity’s window surface in the standalone room-scale demo. In a full browser architecture, however, there is a fight over ownership of the Android surface between the time warp thread and the browser compositor. We discarded the idea of directly using the NativeActivity’s window surface and decided to switch to a Java SurfaceView VR backend in order to make both the browser’s compositor and Gear VR’s time warp thread happy.

By this means, the VR mode life cycle fit nicely into the browser architecture. There was one final surprise, though. The activity entered VR mode correctly, there were no errors in logcat, the time warp thread was reporting correct render stats, and the headset pose data was being fetched correctly. Nevertheless, the VR scene with lens distortion was still not visible in the Android view hierarchy. This led to another few hours of debugging in order to change a single line of code: the Android SurfaceView was being rendered correctly, but it was composited below the NativeActivity’s browser window because setZOrderOnTop() is not enabled by default on Android.

After this change everything worked flawlessly and it was time to enjoy running some WebVR experiences on the Gear VR ;)

Conclusion

It's been a lot of fun seeing Gear VR support land in Servo and being able to run A-Frame demos in it. We continue to work hard on squeezing WebGL and WebVR performance and expect to land some nice optimizations soon. We are also working on implementing unique WebVR features that no other browser has yet. More news soon ;) Stay tuned!

Mozilla Gfx TeamWebRender newsletter #1

The Quantum Flow and Photon projects have exciting newsletters. The Quantum graphics project (integrating WebRender in Firefox) hasn’t provided a newsletter so far and people have asked for it, so let’s give it a try!

This newsletter will not capture everything that is happening in the project, only some highlights, and some of the terminology might be a bit hard to understand at first for someone not familiar with the internals of Gecko and WebRender. I will try to find the time to write a bit about WebRender’s internals and it will hopefully provide more keys to understanding what’s going on here.

The terms layer-full/layers-free used below refer to the way WebRender is integrated in Gecko. Our first plan was to talk to WebRender using the layers infrastructure in the short term, because it is the simplest approach. This is the “layers-full” integration. Unfortunately the cost of building many layers to transform into WebRender display items is high and we found out that we may not be able to ship WebRender using this strategy. The “layers-free” integration plan is to translate Gecko’s display items into WebRender display items directly without building layers. It is more work but we are getting some encouraging results so far.

Some notable (recent) changes in WebRender

  • Glyph Cache optimizations – Glenn profiled and optimized the glyph cache and made it a lot faster.
  • Texture cache rewrite (issue #1572) – The new cache uses pixel buffer objects to transfer images to the GPU (we previously used glTexSubImage2D), does not suffer from fragmentation issues the way the previous one did, and has a better eviction policy.
  • Other text related optimization in the display list serialization.
  • Sub-pixel positioning on Linux.

Some notable (recent) changes in Gecko

  • Clipping in layers free mode (Bug 1386483) – This reuses clips instead of having new ones for every display item. This will reduce the display list processing that happens on the Gecko side as well as the WebRender side. This was one of the big things missing from getting functional parity with current layers-full WebRender.
  • Rounded rectangle clipping in layers free mode (Bug 1370682) – This is a noticeable difference from what we do in layer-full mode. In layer-full mode we currently use mask layers for rounded clipping. Doing this directly with WebRender gives a noticeable performance improvement.

How to get the most exciting WebRender experience today:

Using Firefox nightly, go to about:config and change the following prefs:

  • turn off layers.async-pan-zoom.enabled
  • turn on gfx.webrender.enabled
  • turn on gfx.webrendest.enabled
  • turn on gfx.webrender.layers-free
  • add and turn on gfx.webrender.blob-images
  • if you are on Linux, turn on layers.acceleration.force-enabled

This will give you a peek at the future but beware there are lots of rough edges. Don’t expect the performance of WebRender in Gecko to be representative yet (Probably better to try Servo for that).

All of the integration work is now happening in mozilla-central and bugzilla, WebRender development happens on the servo/webrender github repository.