Mozilla L10N: Making a change with Pootle

tl;dr: As of 1 September 2017, Mozilla’s Pootle instance (mozilla.locamotion.org) will be turned off. Between now and then, l10n-drivers will be assisting l10n communities using Pootle in moving all projects over to Pontoon. Pootle’s positive impact on Mozilla’s continued l10n evolution is undeniable, and we thank its creators for all of their contributions throughout the years.

Mozilla’s localization story has evolved over time. While our mission to improve linguistic accessibility on the Web and in the browser space hasn’t changed, the process and tools that help us accomplish this have changed over the years. Some of us can remember when a Mozilla localizer needed to be skilled in version control systems, Unix commands, text editors, and Bugzilla in order to make an impactful contribution to l10n. Over time (and in many ways thanks to Pootle), it became clear that this technical barrier to entry was actually preventing us from achieving our mission. Beginning with Pootle (Verbatim) and Narro, we set out to lower that barrier through web-based, open source translation management systems. These removed many of the technical requirements on localizers, which in turn enabled us to ship Firefox in languages that other browsers either couldn’t or simply wouldn’t ship, making Firefox the most localized browser on the market! Thanks to Pootle, we’ve learned that optimizing l10n impact through these tools is critical to our ability to change and adapt to the new, faster development processes taking the Internet and software industries by storm. We created Pontoon to take things further and focus on in-context localization. The demand for that tool became so great that we ended up adding more and more projects to it.

Today I’m announcing the next step in our evolution: as of 1 September 2017, all Mozilla l10n communities using Pootle will be migrated to Pontoon and the Mozilla Pootle instance (mozilla.locamotion.org) will be turned off.

Why?

Over the years, we’ve developed a fond relationship with Translate House (the organization behind Pootle), as have many members of the Mozilla l10n community. Nearly five years ago, we entered into a contract agreement with the Translate House team to keep a Mozilla instance of Pootle running, to develop custom features for that instance, and to mentor l10n communities. As l10n has shifted within the Mozilla organization year after year, the l10n team recently found itself part of another internal reorganization, right at the moment contract renewal was up for discussion. With that reorganization came new priorities for l10n and a change in budget for the coming year. In the face of those changes, we were unable to renew our contract with Translate House.

What now?

Before 1 September, the l10n-drivers will be proactively contacting l10n communities using Pootle in order to perform project migrations into Pontoon. Moving project-to-project, we’ll start with the locales that we’re currently shipping for a project, then move to those which are in pre-release, and finally those that have seen activity in the last three months. In the process, we’ll look out for any technical unknown unknowns that Pontoon engineers can address to make the transition a positive and seamless one.

There are a few things you can do to make the transition run smoothly:

  1. Log into Pontoon with your Firefox Account. If you don’t already have a Firefox Account, please create one.
  2. Process all pending suggestions in your Pootle projects (i.e., bring your community’s suggestion queue down to 0).
  3. Flag issues with Pontoon to the l10n-drivers so that we can triage them and address them in a timely manner. To do this, please file a bug here, or reach out to the l10n-drivers if you’re not yet comfortable with Bugzilla.

We understand that this is a major change to those contributing to Mozilla through Pootle right now. We know that changing tools will make you less productive for a while. We’ll be holding a public community video call to address concerns, frustrations, and questions face-to-face on Thursday, 27 July at 19:00 UTC. You’re all invited to attend. If you can’t attend due to time zones, we’ll record it and publish it on air.mozilla.org. You can submit questions for the call beforehand on this etherpad doc and we’ll talk about them on the call. We’ve also created this FAQ to help answer any outstanding questions. We’ll be adding the questions and answers from the call to this document as well.

Finally, I would like to personally extend my thanks to Translate House. Their impact on open source localization is unmatched and I’ve truly enjoyed the relationships we’ve built with that team. We wish them all the best in their future direction and hope to have opportunities to collaborate and stand together in support of open localization in the future.

Air Mozilla: Mozilla Weekly Project Meeting, 24 Jul 2017

Mozilla Weekly Project Meeting: The Monday Project Meeting

hacks.mozilla.org: Optimizing Performance of A-Frame Scenes for Mobile Devices

A-Frame makes building 3D and VR web applications easy, so developers of all skill levels can create rich and interactive virtual worlds – and help make the web the best and largest deployment surface for VR content. For an Oregon State University capstone project focused on WebVR, our team investigated performance and optimizations for A-Frame on Android smartphones. We developed a means of benchmarking the level of 3D complexity a mobile phone is capable of rendering, and of determining which performance metrics are required for such a benchmark.

Team OVRAR!

From the left, Team OVRAR (Optimizing Virtual Reality and Augmented Reality):

Branden Berlin: JavaScript Compatibility and Model Lighting
Charles Siebert: Team Captain, Project Designer, and Modeling
Yipeng (Roger) Song: Animations and Texturing

Results and Recommendations

Texture size: The framework resizes textures to the nearest power of two, which heavily increases the loading and rendering workload in the scenes. We found that high-resolution textures that didn’t meet this criterion were resized up to 8192×8192, with one texture taking up to 20 MB! Using texture dimensions that are already a power of two helps avoid this overhead and ensures optimal memory use. Check the Web Console for warnings when textures are resized.
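
To make the resizing rule concrete, here is a sketch in C of how a WebGL-era framework rounds a texture dimension to a power of two (this variant rounds up; A-Frame does the equivalent in JavaScript via three.js, and may round to the nearest power rather than up):

```c
#include <stdint.h>

/* Round a texture dimension up to the next power of two.
 * A 300-pixel-wide source texture would be resampled to 512 wide. */
uint32_t next_pow2(uint32_t v) {
    if (v == 0)
        return 1;
    v--;            /* so that exact powers of two map to themselves */
    v |= v >> 1;    /* smear the highest set bit into every lower bit */
    v |= v >> 2;
    v |= v >> 4;
    v |= v >> 8;
    v |= v >> 16;
    return v + 1;   /* all-ones below the top bit, plus one */
}
```

This is why a source texture just over a boundary (say 4097 pixels wide) is so costly: it gets resampled all the way up to 8192.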

Asset Limit: We found that having more than 70 MB of assets loaded for one web page was unrealistic in a phone environment. It caused significant delays in loading the scene fully, and in some cases crashed the browser on our phones. Use the Allocations recorder in the Performance Tool in Firefox to check your scene’s memory usage, and the A-Frame Inspector to tune aspects of rendering for individual objects.

Tree map

Resolution cost: Higher-resolution trees caused delays in loading the models and significant slowdowns in rendering the scenes. Our high-resolution tree features 37,000 vertices, which increases the graphics rendering workload, including lighting from multiple light sources. This heavily limited the number of models we could load into our scene. We also found an upper limit for our devices while handling these trees: when our room reached about 1,000,000 vertices, our phone browsers would crash after spending a few minutes attempting to load and render. You can add the “stats” property to your <a-scene> tag to see the number of vertices in the scene.

Object count: Load times increased linearly with the number of models to be drawn to the scene. This adds up quickly if each object takes, for example, three milliseconds to load. Further inspecting the memory snapshot shows that our object models are read in and stored in object arrays for quicker access and rendering. Load time for larger object models also scaled linearly with the number of vertices and faces used to create the model, and their resulting normal vectors. Check the A-Frame stats monitor to keep an eye on your object count.

Measurement overhead: During testing, we used WebIDE to monitor on-device performance. We found that the overhead of USB debugging on our Android devices caused performance to drop by nearly half. Our testing also showed that CPU performance was not the leading bottleneck in rendering the scenes: CPU usage hovered at 10-25% during heavy performance drops. This indicates that rendering is mostly done on the GPU, consistent with how OpenGL ES 2.0 operates in this framework.

Testing Approach

Our approach was to:

  • render multiple scenes while measuring specific metrics
  • determine the best practices for those metrics on mobile
  • report any relevant bugs that appear

The purpose of creating a benchmark application for a mobile device is to give a baseline for what is possible to develop, so developers can use this information to plan their own projects.

We tested on the LG Nexus 5X and used the WebIDE feature in Firefox Nightly to pull performance statistics from the phone while it was rendering our scenes, tracking frames per second (FPS) and memory usage. Additionally, we tracked processor usage on the device through Android’s native developer settings.

To begin, we broke down the fundamental parts of what goes into rendering computer graphics, and created separate scenes to test each of these parts on the device. We tested object modeling, texturing, animation, and lighting, and then created standards of performance that the phone needed to meet for each. We aimed to first find a baseline performance of 30 FPS for each part and then find the upper bound – the point at which the feature breaks or causes visual drops in performance. We separated these features by creating a VR environment with four “rooms”, each testing one of them in A-Frame.

Room 1: Loading object models using obj-loader

Room 4 screenshot

In the first room, we implemented a high-resolution tree, loading a large number of low vertex-count objects and comparing them to a small number of high vertex-count objects. Having a comparable number of vertices rendered in either scene helped us determine the performance impact of loading multiple objects at once.

Room 2: Animations and textures

In this room, we implemented textures and animations to determine their impact on initial load times and on the cost of calculating animation methods. We used A-Frame’s built-in functions to attach assets to objects to texture them, and A-Frame’s animation methods to animate the objects in this room. This allowed us to easily test the scenario of animating textured objects and measure the differences between the two iterations. In the first iteration, we implemented low-resolution textures on objects to compare them with the high-resolution textures of the second iteration. These resolution sizes varied from 256×256 to 8192×8192. We also wanted to compare performance between the two rooms, and see whether texturing the objects would cause any unforeseen issues with animations beyond the initial load time when downloading the assets from the web page.

Room 3: User interaction and lighting

This room’s initial concept focused on the basis of gaming: user interaction. We utilized JavaScript within A-Frame to allow the user to interact with objects scattered about a field. Due to the limited mobility of mobile-VR interaction, we kept it to visual interaction. Once the user looked at an object, it would either shrink or enlarge. We wanted to see if any geometric change due to interaction would impact hardware demand. We manipulated the growth size of object interactions and found a few unreliable stutters. Generally, though, the hardware performance was stable.

For the second iteration, we ramped up the effects of user interactions. We saw that nothing changed when it came to physical effects on objects in the world, so we decided to include something that is more taxing on the hardware: lighting.

As the user interacted with an object, the object would then turn into a light source, producing an ambient light at maximum intensity. We scattered these objects around the room and had the user turn them on, one by one. We started with 10 ‘suns’ and noticed an initial lag when loading the room, as well as a 2-3 second FPS drop to 13, when turning on the first sphere. After that, the rest of the spheres turned on smoothly. We noticed a steady and consistent drop of about 10 FPS for every 10 max-intensity light sources. However, as the intensity was decreased, more and more lighting sources were allowed before a noticeable change in performance occurred.

Room 3 screenshots

Room 4: All previous features implemented together

Developers are unlikely to use just one of these specific features when creating their applications. We created this room to determine if the performance would drop at an exponential rate if all features were added together, as this would be a realistic scenario.

Further Information

You can find all the source code and documentation for our OVRAR project on GitHub.

If you have any questions, ask in the comments below. Thanks!

QMO: Firefox Developer Edition 55 Beta 11 Testday Results

Hello!

As you may already know, last Friday – July 21st – we held a new Testday event, for Firefox Developer Edition 55 Beta 11.

Thank you all for helping us make Mozilla a better place – Ilse Macias, Athira Appu, Iryna Thompson.

From India team:  Fahima Zulfath A, Nagarajan .R, AbiramiSD, Baranitharaan, Bharathvaj, Surentharan.R.A, R.Krithika Sowbarnika, M.ponmurugesh.

From Bangladesh team: Maruf Rahman, Sajib Hawee, Towkir Ahmed, Iftekher Alam, Tanvir Rahman, Md. Raihan Ali, Sazzad Ehan, Tanvir Mazharul, Md Maruf Hasan Hridoy, Saheda Reza Antora, Anika Alam Raha, Taseenul Hoque Bappi.

Results:

– several test cases executed for the Screenshots, Simplify Page and Shutdown Video Decoder features;

– 7 new logged bugs: 1383397, 1383403, 1383410, 1383102, 1383021, #3196, #3177

– 3 bugs verified: 1061823, 1357915, 1381692

Thanks for another successful testday! 🙂

We hope to see you all in our next events, all the details will be posted on QMO!

Air Mozilla: Webdev Beer and Tell: July 2017

Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on in...

Air Mozilla: Working Across Personality Types: The Introvert-Extrovert Survival Guide, with Jennifer Selby-Long

On July 20, Jennifer Selby Long, an expert in the ethical use of the Myers-Briggs Type Indicator® (MBTI®), will lead us in an interactive session...

Air Mozilla: Reps Weekly Meeting Jul. 20, 2017

This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

hacks.mozilla.org: The Next Generation of Web Gaming

Over the last few years, Mozilla has worked closely with other browsers and the industry to advance the state of games on the Web. Together, we have enabled developers to deploy native code on the web, first via asm.js, and then with its successor, WebAssembly. Now available in Firefox and Chrome, and soon in Edge and WebKit, WebAssembly enables near-native performance of code in the browser, which is great for game development, and has also shown benefits for WebVR applications. WebAssembly code delivers more predictable performance because it avoids just-in-time compilation heuristics and garbage-collection pauses. Its wide support across all major browser engines opens up paths to near-native speed, making it possible to build high-performing, plugin-free games on the web.

“In 2017 Kongregate saw a shift away from Flash with nearly 60% of new titles using HTML5,” said Emily Greer, co-founder and CEO of Kongregate.  “Developers were able to take advantage of improvements in HTML5 technologies and tools while consumers were able to enjoy games without the need for 3rd-party plugins.  As HTML5 continues to evolve it will enable developers to create even more advanced games that will benefit the millions of gamers on Kongregate.com and the greater, still thriving, web gaming industry.”

Kongregate’s data shows that on average, about 55% of uploaded games are HTML5 games.

And we can also see that these are high-quality games, with over 60% of HTML5 titles receiving a “great” score (better than a 4.0 out of 5 rating).

In spite of this positive trend, opportunities for improvement exist. The web is an ever-evolving platform, and developers are always looking for better performance. One major request we have often heard is for multithreading support on the web. SharedArrayBuffer is a required building block for multithreading, which enables concurrently sharing memory between multiple web workers. The specification is finished, and Firefox intends to ship SharedArrayBuffer support in Firefox 55.
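
For game engines, the practical payoff is that ordinary pthread code can target the web: Emscripten maps POSIX threads onto Web Workers that share a SharedArrayBuffer-backed heap. A minimal sketch in C (our own hypothetical example, not from the article; on the web it would be built with `emcc -pthread`, while natively it compiles as plain pthread code):

```c
#include <pthread.h>
#include <stdint.h>

/* Each thread sums half of the range 0..999 into its own slot.
 * Under Emscripten, each pthread becomes a Web Worker and `partial`
 * lives in the SharedArrayBuffer-backed linear memory. */
static int64_t partial[2];

static void *sum_range(void *arg) {
    int idx = (int)(intptr_t)arg;
    int64_t s = 0;
    for (int i = idx * 500; i < (idx + 1) * 500; i++)
        s += i;                          /* worker's half of the sum */
    partial[idx] = s;
    return NULL;
}

int64_t parallel_sum_0_999(void) {
    pthread_t threads[2];
    for (int i = 0; i < 2; i++)
        pthread_create(&threads[i], NULL, sum_range, (void *)(intptr_t)i);
    for (int i = 0; i < 2; i++)
        pthread_join(threads[i], NULL);  /* wait for both workers */
    return partial[0] + partial[1];
}
```

The point of the sketch is that no browser-specific threading API appears in the game code itself; the sharing of memory between workers is what SharedArrayBuffer makes possible.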

Another common request is for SIMD support. SIMD is short for Single Instruction, Multiple Data. It’s a way for a CPU to parallelize math instructions, offering significant performance improvements for math-heavy workloads such as 3D rendering and physics.

The WebAssembly Community Group is now focused on enabling hardware parallelism with SIMD and multithreading as the next major evolutionary steps for WebAssembly. Building on the momentum of shipping the first version of WebAssembly and continued collaboration, both of these new features should be stable and ready to ship in Firefox in early 2018.

Much work has gone into optimizing runtime performance over the last few years, and along the way we learned many lessons. We have collected many of these in a practical blog post about porting games from native to web, and look forward to your input on other areas for improvement. As multithreading support lands in 2018, expect to see opportunities to further invest in improving memory usage.

We again wish to extend our gratitude to the game developers, publishers, engine providers, and other browsers’ engine teams who have collaborated with us over the years. We could not have done it without your help — thank you!

hacks.mozilla.org: WebAssembly for Native Games on the Web

The biggest improvement this year to web performance has been the introduction of WebAssembly. Now available in Firefox and Chrome, and coming soon in Edge and WebKit, WebAssembly enables the execution of code at a low assembly-like level in the browser.

Mozilla has worked closely with the games industry for several years to reach this stage, with milestones including the release of games built with Emscripten in 2013, the preview of Unreal Engine 4 running in Firefox (2014), bringing the Unity game engine to WebGL also in 2014, exporting an indie Unity game to WebVR in 2016, and most recently, the March release of Firefox 52 with WebAssembly.

WebAssembly builds on Mozilla’s original asm.js specification, which was created to serve as a plugin-free compilation target for applications and games on the web. This work has accumulated a great deal of knowledge at Mozilla specific to the process of porting games and graphics technologies. If you are an engineer working on games and this sounds interesting, read on to learn more about developing games in WebAssembly.

Where Does WebAssembly Fit In?

By now web developers have probably heard about WebAssembly’s promise of performance, but for developers who have not actually used it, let’s set some context for how it works with existing technologies and what is feasible. Lin Clark has written an excellent introduction to WebAssembly. The main point is that unlike JavaScript, which is generally written by hand, WebAssembly is a compilation target, just like native assembly. Except perhaps for small snippets of code, WebAssembly is not designed to be written by humans. Typically, you’d develop the application in a source language (e.g. C/C++) and then use a compiler (e.g. Emscripten), which transforms the source code to WebAssembly in a compilation step.

This means that existing JavaScript code does not fit this model. If your application is written in JavaScript, it already runs natively in a web browser, and it is not possible to transform it to WebAssembly verbatim. What is possible in these types of applications, however, is to replace certain computationally intensive parts of your JavaScript with WebAssembly modules. For example, a web application might replace its JavaScript-implemented file decompression routine or string regex routine with a WebAssembly module that does the same job, but with better performance. As another example, web pages written in JavaScript can use the Bullet physics engine compiled to WebAssembly to provide physics simulation.
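
A sketch of what such a replacement routine looks like on the C side (the function names here are our own invention; after compiling with emcc, JavaScript would call the exported function through Emscripten's `Module.cwrap` or `Module.ccall`):

```c
#ifdef __EMSCRIPTEN__
#include <emscripten.h>
#define EXPORT EMSCRIPTEN_KEEPALIVE   /* keep the symbol visible to JS */
#else
#define EXPORT                        /* no-op for a native build */
#endif

/* Hypothetical hot routine moved out of JavaScript: sum a float buffer.
 * JS passes a pointer into the module's linear memory plus a length. */
EXPORT float sum_floats(const float *data, int n) {
    float total = 0.0f;
    for (int i = 0; i < n; i++)
        total += data[i];
    return total;
}

/* Small self-check so the sketch can also be exercised natively. */
float sum_floats_demo(void) {
    float a[4] = {1.0f, 2.0f, 3.0f, 4.0f};
    return sum_floats(a, 4);
}
```

Note that only primitives (a pointer and an integer) cross the language boundary here, which is exactly the cheap kind of marshalling discussed below.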

Another important property: Individual WebAssembly instructions do not interleave seamlessly in between existing lines of JavaScript code; WebAssembly applications come in modules. These modules deal with low-level memory, whereas JavaScript operates on high-level object representations. This difference in structure means that data needs to undergo a transformation step—sometimes called marshalling—to convert between the two language representations. For primitive types, such as integers and floats, this step is very fast, but for more complex data types such as dictionaries or images, this can be time consuming. Therefore, replacing parts of a JavaScript application works best when applied to subroutines with large enough granularity to warrant replacement by a full WebAssembly module, so that frequent transitions between the language barriers are avoided.

As an example, in a 3D game written in three.js, one would not want to implement a small Matrix*Matrix multiplication algorithm alone in WebAssembly. The cost of marshalling a matrix data type into a WebAssembly module and then back would negate the speed performance that is gained in doing the operation in WebAssembly. Instead, to reach performance gains, one should look at implementing larger collections of computation in WebAssembly, such as image or file decompression.

On the other end of the spectrum are applications that are implemented as fully in WebAssembly as possible. This minimizes the need to marshal large amounts of data across the language barrier, and most of the application is able to run inside the WebAssembly module. Native 3D game engines such as Unity and Unreal Engine implement this approach, where one can deploy a whole game to run in WebAssembly in the browser. This yields the best possible performance gain. However, WebAssembly is not a full replacement for JavaScript. Even if as much of the application as possible is implemented in WebAssembly, there are still parts that are implemented in JavaScript. WebAssembly code does not interact directly with the browser APIs that are familiar to web developers; your program calls out from WebAssembly to JavaScript to interact with the browser. It is possible that this behavior will change in the future as WebAssembly evolves.

Producing WebAssembly

The largest audience currently served by WebAssembly is native C/C++ developers, who are often positioned to write performance-sensitive code. An open source community project supported by Mozilla, Emscripten is a GCC/Clang-compatible compiler toolchain for building WebAssembly applications for the web. The main scope of Emscripten is support for the C/C++ language family, but because Emscripten is powered by LLVM, it has the potential to support other languages as well. If your game is developed in C/C++ and it targets OpenGL ES 2 or 3, an Emscripten-based port to the web can be a viable approach.

Mozilla has benefited from games industry feedback – this has been a driving force shaping the development of asm.js and WebAssembly. As a result of this collaboration, Unity3D, Unreal Engine 4 and other game engines are already able to deploy content to WebAssembly. This support takes place largely under the hood in the engine, and the aim has been to make this as transparent as possible to the application.

Considerations For Porting Your Native Game

For the game developer audience, WebAssembly represents an addition to an already long list of supported target platforms (Windows, Mac, Android, Xbox, PlayStation, …), rather than a brand-new platform for which projects are developed from scratch. Because of this, we’ve placed a great deal of focus on development and feature parity with respect to other existing platforms in the development of Emscripten, asm.js, and WebAssembly. This parity continues to improve, although on some occasions the offered features differ noticeably, most often due to web security concerns.

The remainder of this article focuses on the most important items that developers should be aware of when getting started with WebAssembly. Some of these are successfully hidden under an abstraction if you’re using an existing game engine, but native developers using Emscripten should most certainly be aware of the following topics.

Execution Model Considerations

Most fundamental are the differences where code execution and memory model are concerned.

  • Asm.js and WebAssembly use the concept of a typed array (a contiguous linear memory buffer) that represents the low level memory address space for the application. Developers specify an initial size for this heap, and the size of the heap can grow as the application needs more memory.
  • Virtually all web APIs operate using events and an event queue mechanism to provide notifications, e.g. for keyboard and mouse input, file IO and network events. These events are all asynchronous and delivered to event handler functions. There are no polling type APIs for synchronously asking the “browser OS” for events, such as those that native platforms often provide.
  • Web browsers execute web pages on the main thread of the browser. This property carries over to WebAssembly modules, which are also executed on the main thread, unless one explicitly creates a Web Worker and runs the code there. On the main thread it is not allowed to block execution for long periods of time, since that would also block the processing of the browser itself. For C/C++ code, this means that the main thread cannot synchronously run its own loop, but must tick simulation and animation forward based on an event callback, so that execution periodically yields control back to the browser. User-launched pthreads will not have this restriction, and they are allowed to run their own blocking main loops.
  • At the time of writing, WebAssembly does not yet have multithreading support – this capability is currently in development.
  • The web security model can be a bit more strict compared to other platforms. In particular, browser APIs constrain applications from gaining direct access to low-level information about the system hardware, to prevent sites from generating strong fingerprints that identify users. For example, it is not possible to query information such as the CPU model, the local IP address, or the amount of RAM or available hard disk space. Additionally, many web features operate on web domain boundaries, and information traveling across domains is governed by cross-origin access control rules.
  • A special programming technique that web security also prevents is the dynamic generation and mutation of code on the fly. It is possible to generate WebAssembly modules in the browser, but after loading, WebAssembly modules are immutable, and functions can no longer be added to them or changed.
  • When porting C/C++ code, standard-compliant code should compile easily, but native compilers on x86 are lenient about certain nonstandard constructs, such as unaligned memory accesses, overflowing float->int casts, and invoking function pointers via signatures that mismatch the actual type of the function. The ubiquity of x86 has made these kinds of nonstandard code patterns somewhat common in native code, but when compiling to asm.js or WebAssembly, such constructs can cause issues at runtime. Refer to the Emscripten documentation for more information about what kinds of code are portable.
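
The main-thread restriction in the list above is usually the biggest structural change in a port: a blocking game loop has to be inverted into a per-frame callback. A minimal sketch in C (the native branch and the function names are our own stand-ins for an original game loop):

```c
#ifdef __EMSCRIPTEN__
#include <emscripten.h>
#endif

static int frame = 0;

/* One simulation/render step. On the web, the browser invokes this once
 * per animation frame, and control returns to the browser in between. */
static void tick(void) {
    frame++;
}

/* Hypothetical entry point illustrating the restructuring. */
int run(int frames_wanted) {
#ifdef __EMSCRIPTEN__
    /* Hand the loop to the browser. With simulate_infinite_loop = 1,
     * this call never returns; cleanup must happen in callbacks. */
    emscripten_set_main_loop(tick, 0, 1);
#else
    while (frame < frames_wanted)   /* a native build may block like this */
        tick();
#endif
    return frame;
}
```

Engines such as Unity and Unreal perform this inversion internally, which is why the restriction is invisible when exporting from them.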

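The nonstandard x86 patterns from the last bullet also have well-known portable replacements. Two common fixes, sketched in C (helper names are our own):

```c
#include <string.h>
#include <stdint.h>

/* Portable replacement for a misaligned pointer dereference: memcpy
 * compiles to an unaligned-safe load on every target, wasm included. */
uint32_t load_u32(const unsigned char *p) {
    uint32_t v;
    memcpy(&v, p, sizeof v);
    return v;
}

/* Reads four bytes starting at an odd offset; the 0x07 fill makes the
 * expected value independent of byte order for the self-check. */
uint32_t load_u32_demo(void) {
    unsigned char buf[8] = {7, 7, 7, 7, 7, 0, 0, 0};
    return load_u32(buf + 1);
}

/* Portable float -> int conversion: clamp before casting, because an
 * overflowing cast is undefined behavior and can trap under WebAssembly. */
int32_t float_to_i32(float f) {
    if (f >= 2147483648.0f)      /* 2^31: too big for int32 */
        return INT32_MAX;
    if (f < -2147483648.0f)      /* below -2^31 */
        return INT32_MIN;
    return (int32_t)f;
}
```

Fixes like these keep the code correct on x86 as well, so they can be landed in the native codebase before the port even begins.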
Another source of differences comes from the fact that code on a web page cannot directly access a native filesystem on the host computer, and so the filesystem solution that is provided looks a bit different than native. Emscripten defines a virtual filesystem space inside the web page, which backs onto the IndexedDB API for persistence across page visits. Browsers also store downloaded data in navigation caches, which sometimes is desirable but other times less so.

Developers should be mindful in particular about content delivery. In native application stores the model of upfront downloading and installing a large application is an expected standard, but on the web, this type of monolithic deployment model can be an off-putting user experience. Applications can download and cache a large asset package at first run, but that can cause a sizable first-time download impact. Therefore, launching with minimal amount of downloading, and streaming additional asset data as needed can be critical for building a web-friendly user experience.

Toolchain Considerations

The first technical challenge for developers comes from adapting existing build systems to target the Emscripten compiler. To make this easier, the compiler (emcc & em++) is designed to operate as a close drop-in replacement for GCC or Clang. This eases migration of existing build systems that are already aware of GCC-like toolchains. Emscripten supports the popular CMake build system configuration generator, and emulates support for GNU Autotools configure scripts.

A point that sometimes causes confusion is that Emscripten is not an x86/ARM -> WebAssembly binary translation toolchain, but a cross-compiler. That is, Emscripten does not take existing native x86/ARM compiled code and transform it to run on the web; instead, it compiles C/C++ source code to WebAssembly. This means that you must have all the source available (or use libraries bundled with Emscripten or ported to it). Any code that depends on platform-specific (often closed source) native components, such as Win32 and Cocoa APIs, cannot be compiled, but will need to be ported to utilize other solutions.

Performance Considerations

One of the most frequently asked questions about asm.js/WebAssembly is whether it is fast enough for a particular purpose. Curiously, developers who have not yet tried out WebAssembly are the ones who most often doubt its performance; developers who have tried it rarely mention performance as a major issue. There are some performance caveats, however, which developers should be aware of.

  • As mentioned earlier, multithreading is not available just yet, so applications that heavily depend on threads will not have the same performance available.
  • Another feature that is not yet available in WebAssembly, but planned, is SIMD instruction set support.
  • Certain instructions can be relatively slower in WebAssembly compared to native. For example, calling virtual functions or function pointers has a higher performance footprint due to sandboxing compared to native code. Likewise, exception handling is observed to cause a bigger performance impact compared to native platforms. The performance landscape can look a bit different, so paying attention to this when profiling can be helpful.
  • Web security validation is known to impact WebGL noticeably. It is recommended that applications using WebGL are careful to optimize their WebGL API calls, especially by avoiding redundant API calls, which still pay the cost for driver security validation.
  • Lastly, application memory usage is a particularly critical aspect to measure, especially if targeting mobile support as well. Preloading big asset packages on first run and uncompressing large amounts of audio assets are two known sources of memory bloat that are easy to introduce by accident. Applications will likely need to optimize specifically for this when porting, and this is an active area of optimization in the WebAssembly and Emscripten runtimes as well.

Summary

WebAssembly provides support for executing low-level code on the web at high performance, similar to how web plugins used to, except that web security is enforced. For developers using some of the super-popular game engines, leveraging WebAssembly will be as easy as choosing a new export target in the project build menu, and this support is available today. For native C/C++ developers, the open source Emscripten toolchain offers a drop-in compatible way to target WebAssembly. There is a lively community of developers around Emscripten who contribute to its development, and a mailing list for discussion that can help you get started. Games that run on the web are accessible to everyone independent of which computation platform they are on, without compromising portability, performance, or security, or requiring up-front installation steps.

WebAssembly is only one part of a larger collection of APIs that power web-based games, so navigate on to the MDN games section to see the big picture. Hop right on in, and happy Emscriptening!

The Mozilla BlogFirefox Focus for Android Hits One Million Downloads! Today We’re Launching Three New User-Requested Features

Since the launch of Firefox Focus for Android less than a month ago, one million users have downloaded our fast, simple privacy browser app. Thank you for all your tremendous support for our Firefox Focus for Android app. This milestone signals a huge demand from users who want to be in the driver’s seat when it comes to their personal information and web browsing habits.

When we initially launched Firefox Focus for iOS last year, we did so based on our belief that everyone has a right to protect their privacy.  We created the Firefox Focus for Android app to support all our mobile users and give them the control to manage their online browsing habits across platforms.

Within a week of the Firefox Focus for Android launch, we’ve had more than 8,000 comments, and the app is rated 4.5 stars. We’re floored by the response!

Feedback from Firefox Focus Users

“Awesome, the iconic privacy focused Firefox browser now is even more privacy and security focused.” 

“Excellent! It is indeed extremely lightweight and fast.” 

“This is the best browser to set as your “default”, hands down. Super fast and lightweight.”

“Great for exactly what it’s built for, fast, secure, private and lightweight browsing.”

New Features

We’re always looking for ways to improve and your comments help shape our products. We huddled together to decide what features we can quickly add and we’re happy to announce the following new features less than a month since the initial launch:

  • Full Screen Videos: Your comments let us know that this was a top priority. We understand that if you’re going to watch videos on your phone, it’s only worth it if you can expand to the full size of your cellphone screen. We added support for most video sites with YouTube being the notable exception. YouTube support is dependent on a bug fix from Google and we will roll it out as soon as this is fixed.
  • Supports Downloads: We use our mobile phones for entertainment – whether it’s listening to music, playing games, reading an ebook, or doing work – and sometimes that requires downloading a file. We updated the Firefox Focus app to support files of all kinds.
  • Updated Notification Actions: No longer solely a reminder to erase your history, the notification now features a shortcut to open Firefox Focus. Finally, a quick and easy way to access private browsing.

We’re on a mission to make sure our products meet your needs. Responding to your feedback with quick, noticeable improvements is our way of saying thanks and letting you know, “Hey, we’re listening.”

You can download the latest version of Firefox Focus on Google Play and in the App Store. Stay tuned for additional feature updates over the coming months!


The post Firefox Focus for Android Hits One Million Downloads! Today We’re Launching Three New User-Requested Features appeared first on The Mozilla Blog.

The Mozilla BlogFirefox for iOS Offers New and Improved Browsing Experience with Tabs, Night Mode and QR Code Reader

Here at Firefox, we’re always looking for ways for users to get the most out of their web experience. Today, we’re rolling out some improvements that will set the stage for what’s to come in the Fall with Project Quantum. Together these new features help to enhance your mobile browsing experience and make a difference in how you use Firefox for iOS.

What’s new in Firefox for iOS:

New Tab Experience

We polished our new tab experience and will be gradually rolling it out so you’ll see recently visited sites as well as highlights from previous web visits.

Night Mode

For the times when you’re in a dark room and the last thing you want to do is turn on your cellphone to check the time – we added Night Mode which dims the brightness of the screen and eases the strain on your eyes. Now, it’ll be easier to read and you won’t get caught checking your email.


QR Code Reader

Trying to limit the number of apps on your phone? We’ve eliminated the need to download a separate QR app with a built-in QR code reader that lets you quickly scan QR codes.

Feature Recommendations

Everyone loves shortcuts, and our Feature Recommendations will offer hints and timesavers to improve your overall Firefox experience. To start, this will be available in the US and Germany.

To experience the newest features and use the latest version of Firefox for iOS, download the update and let us know what you think.

We hope you enjoy it!


The post Firefox for iOS Offers New and Improved Browsing Experience with Tabs, Night Mode and QR Code Reader appeared first on The Mozilla Blog.

CalendarThere is a lot to see — Convert XUL to HTML

This is a repost from medium, where Arshad originally wrote the blog post.


In the previous blog post, I talked mostly about the development environment setup, but this one will be about the React dialog development.

Since then I have been working on converting some more dialogs into React. So far I have converted three dialogs — the calendar properties dialog, the calendar alarm dialog, and the print dialog — into their React equivalents. The calendar alarm dialog and print dialog still need some work on state logic, but it is not something that will take much time. Here are some screenshots of these dialogs.

calendar-properties-dialog

print-dialog

calendar-alarm-dialog


While making the React equivalents, I found out that XUL depends heavily upon attributes and their values. HTML doesn’t work with attributes and their values in the same way XUL does: HTML allows attribute minimization, and with React there are some other difficulties related to attributes. React automatically ignores all non-default HTML attributes, so to add those attributes I have to set them explicitly using the setAttribute method on the element once it has mounted. Here is a short snippet of code which shows how I am adding custom HTML attributes and updating them in React.

class CalendarAlarmWidget extends React.Component {
  componentDidMount() {
    this.addAttributes(this.props);
  }

  componentWillReceiveProps(nextProps) {
    // need to call removeAttributes first
    // so that previous render attributes are removed

    this.removeAttributes();
    this.addAttributes(nextProps);
  }

  addAttributes(props) {
    // add attributes here
  }

  removeAttributes() {
    // remove attributes here
  }
}
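The addAttributes and removeAttributes bodies are left as placeholders above. A minimal sketch of what they might contain, assuming a mounted DOM node and an arbitrary map of attribute names to values (both hypothetical here):

```javascript
// Hypothetical fill-in for the placeholder methods above: sync a map of
// non-standard attributes onto a DOM node via setAttribute/removeAttribute.
function addAttributes(node, attrs) {
  for (const [name, value] of Object.entries(attrs)) {
    node.setAttribute(name, String(value));
  }
}

function removeAttributes(node, attrNames) {
  for (const name of attrNames) {
    node.removeAttribute(name);
  }
}
```

In the component, this.props would supply the attribute map and the mounted element would come from a ref.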

XUL also has a dialog element which is used instead of window for dialog boxes. I have made its React equivalent, which has nearly all the attributes and functionality of the XUL dialog element. Since XUL positions elements with a slightly different layout technique than HTML, I have dropped some of the layout-specific attributes. With the power of modern CSS it is quite easy to create the layout, so instead of controlling layout through attributes I am relying more on CSS. Some methods, like centerWindowOnScreen and moveToAlertPosition, depend on the parent XUL wrapper, so I have dropped them from the React equivalent as well.

There are some XUL elements whose HTML equivalents are not available, and for others the HTML equivalents don’t have the same structure, so their appearance differs considerably. A perfect example is menulist, whose HTML equivalent is select. Unlike menulist, whose direct child is a menupopup wrapping all the menuitems, the select element wraps its options directly, so the UI of select can’t be made exactly like menulist. option elements are also not customizable the way menuitems are, and they don’t support much styling. While it is helpful to have React components that behave similarly to their XUL counterparts, in the end only HTML will remain, so it is unavoidable that some features not useful for the new components will be dropped.

I have made some custom React elements to provide all the features that the existing dialogs provide, although in some places I am still using the HTML select element instead of the custom menulist component, because using JavaScript and extra CSS just to make the element look like its XUL equivalent is not worth it.

As each platform has its own specific look, there are naturally differences in CSS rules. I have organized the files in a way that it is easy to write rules common to all platforms, but also add per-OS differences. A lot of the UI differences are handled automatically through -moz-appearance rules, which instruct the Mozilla Platform to use OS styling to render the elements. The web app will automatically detect your OS so you can see how the dialog will look on different platforms.

I thought it would be great to get quick suggestions and feedback on UI of dialogs from the community so I have added a comment section on each dialog page. I will be adding more cool features to the web app that can possibly help in making progress in this project.

Thanks to BrowserStack for providing free OSS plans; now I can quickly check how my dialogs look on Windows and Mac.

Thanks to yulia [IRC nickname] for finding time to discuss the React implementation of the dialog. I hope to have more React discussions in the future :)

Feel free to check the dialogs on the web app and comment if you have any questions.


Air MozillaThe Joy of Coding - Episode 106

mconley livehacks on real Firefox bugs while thinking aloud.

hacks.mozilla.orgCreating a WebAssembly module instance with JavaScript

This is the 1st article in a 3-part series:

  1. Creating a WebAssembly module instance with JavaScript
  2. Memory in WebAssembly (and why it’s safer than you think)
  3. WebAssembly table imports… what are they?

WebAssembly is a new way of running code on the web. With it, you can write modules in languages like C or C++ and run them in the browser.

Currently modules can’t run on their own, though. This is expected to change as ES module support comes to browsers. Once that’s in place, WebAssembly modules will likely be loaded in the same way as other ES modules, e.g. using <script type="module">.

But for now, you need to use JavaScript to boot the WebAssembly module. This creates an instance of the module. Then your JavaScript code can call functions on that WebAssembly module instance.

For example, let’s look at how React would instantiate a WebAssembly module. (You can learn more in this video about how React could use WebAssembly.)

When the user loads the page, it would start the same way any web app does.

The browser would download the JS file. In addition, a .wasm file would be fetched. That contains the WebAssembly code, which is binary.

Browser downloading a .js file and a .wasm file

We’ll need to load the code in these files in order to run it. First comes the .js file, which loads the JavaScript part of React. That JavaScript will then create an instance of a WebAssembly module… the reconciler.

To do that, it will call WebAssembly.instantiate.

React.js robot calling WebAssembly.instantiate

Let’s take a closer look at this.

The first thing we pass into WebAssembly.instantiate is going to be the binary code that we got in that .wasm file. That’s the module code.

So we extract the binary into a buffer, and then pass it in.

Binary code being passed in as the source parameter to WebAssembly.instantiate

The engine will start compiling the module code down to something that is specific to the machine that it’s running on.

But we don’t want to do this on the main thread. I’ve talked before about how the main thread is like a full stack developer because it handles JavaScript, the DOM, and layout. We don’t want to block the main thread while we compile the module. So what WebAssembly.instantiate returns is a promise.

Promise being returned as module compiles

This lets the main thread get back to its other work. The main thread knows that once the compiler is finished compiling this module, it will be notified by the promise. That promise will give it the instance.
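A rough, self-contained sketch of this flow, using the smallest valid module (just the magic number and version) in place of a real fetched .wasm file:

```javascript
// The smallest valid WebAssembly module: '\0asm' magic number + version 1.
// In a real app these bytes would come from fetching a .wasm file.
const wasmBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, // '\0asm'
  0x01, 0x00, 0x00, 0x00  // binary format version 1
]);

// Compilation happens off the main thread; instantiate returns a promise.
WebAssembly.instantiate(wasmBytes).then(result => {
  console.log(result.module instanceof WebAssembly.Module);     // true
  console.log(result.instance instanceof WebAssembly.Instance); // true
});
```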

But the compiled module is not the only thing needed to create the instance. I think of the module as kind of like an instruction book.

The instance is like a person who’s trying to make something with the instruction book. In order to make that thing, they also need raw materials. They need things that they can work with.

Instruction book next to WebAssembly robot

This is where the second parameter to WebAssembly.instantiate comes in. That is the imports object.

Arrow pointing to importObject param of WebAssembly.instantiate

I think of the imports object as a box of those raw materials, like you would get from IKEA. The instance uses these raw materials—these imports—to build a thing, as directed by the instructions. Just as an instruction manual expects a certain set of raw materials, each module expects a specific set of imports.

Imports box next to WebAssembly robot

So when you are instantiating a module, you pass it an imports object that has those imports attached to it. Each import can be one of these four kinds of imports:

  • values
  • function closures
  • memory
  • tables

Values

It can have values, which are basically global variables. The only types that WebAssembly supports right now are integers and floats, so values have to be one of those two types. That will change as more types are added in the WebAssembly spec.

Function closures

It can also have function closures. This means you can pass in JavaScript functions, which WebAssembly can then call.

This is particularly useful because in the current version of WebAssembly, you can’t call DOM methods directly. Direct DOM access is on the WebAssembly roadmap, but not part of the spec yet.

What you can do in the meantime is pass in a JavaScript function that can interact with the DOM in the way you need. Then WebAssembly can just call that JS function.
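The shape of such an imports object might look like the following sketch. The env namespace and the names startTime and updateDOM are hypothetical; a real module's binary declares exactly which names it expects.

```javascript
// A hypothetical imports object. The module/field names (env, startTime,
// updateDOM) are illustrative only.
const importObject = {
  env: {
    // a value import: WebAssembly currently supports integers and floats
    startTime: 0,
    // a function closure: wasm can call this JS function, which in a real
    // page might interact with the DOM on WebAssembly's behalf
    updateDOM: function (text) {
      console.log('wasm asked JS to update the DOM with:', text);
    }
  }
};
```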

Memory

Another kind of import is the memory object. This object makes it possible for WebAssembly code to emulate manual memory management. The concept of the memory object confuses people, so I’ve gone into a little bit more depth in another article, the next post in this series.

Tables

The final type of import is also related to security. It’s called a table, and it makes it possible for you to use something called function pointers. Again, this is kind of complicated, so I explain it in the third part of this series.

Those are the different kinds of imports that you can equip your instance with.

Different kinds of imports going into the imports box

When the promise returned from WebAssembly.instantiate resolves, it contains two things: the instance and, separately, the compiled module.

The nice thing about having the compiled module is that you can spin up other instances of the same module quickly. All you do is pass the module in as the source parameter. The module itself doesn’t have any state (that’s all attached to the instance). That means that instances can share the compiled module code.
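A sketch of that reuse, again with the empty 8-byte module standing in for real code: passing a compiled WebAssembly.Module (rather than raw bytes) to WebAssembly.instantiate resolves directly to a fresh instance.

```javascript
const emptyModule = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00 // '\0asm' + version
]);

WebAssembly.instantiate(emptyModule)
  .then(({ module }) => {
    // the module holds no state, so spinning up another instance is cheap
    return WebAssembly.instantiate(module);
  })
  .then(secondInstance => {
    console.log(secondInstance instanceof WebAssembly.Instance); // true
  });
```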

Your instance is now fully equipped and ready to go. It has its instruction manual, which is the compiled code, and all of its imports. You can now call its methods.

WebAssembly robot is booted

In the next two articles, we’ll dig deeper into the memory import and the table import.

hacks.mozilla.orgMemory in WebAssembly (and why it’s safer than you think)

This is the 2nd article in a 3-part series:

  1. Creating a WebAssembly module instance with JavaScript
  2. Memory in WebAssembly (and why it’s safer than you think)
  3. WebAssembly table imports… what are they?

Memory in WebAssembly works a little differently than it does in JavaScript. With WebAssembly, you have direct access to the raw bytes… and that worries some people. But it’s actually safer than you might think.

What is the memory object?

When a WebAssembly module is instantiated, it needs a memory object. You can either create a new WebAssembly.Memory and pass that object in. Or, if you don’t, a memory object will be created and attached to the instance automatically.

All the JS engine will do internally is create an ArrayBuffer (which I explain in another article). The ArrayBuffer is a JavaScript object that JS has a reference to. JS allocates the memory for you. You tell it how much memory you are going to need, and it will create an ArrayBuffer of that size.

React.js requesting a new memory object and JS engine creating one

The indexes to the array can be treated as though they were memory addresses. And if you need more memory later, you can do something called growing to make the array larger.
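For example (a small sketch; the initial and maximum page counts here are just illustrative values):

```javascript
// WebAssembly memory is allocated in 64 KiB pages.
const memory = new WebAssembly.Memory({ initial: 1, maximum: 10 });
console.log(memory.buffer.byteLength); // 65536 (1 page)

// "Growing" adds pages. Note: growing detaches the old ArrayBuffer,
// so any views into it must be recreated from the new memory.buffer.
memory.grow(1);
console.log(memory.buffer.byteLength); // 131072 (2 pages)
```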

Handling WebAssembly’s memory as an ArrayBuffer — as an object in JavaScript — does two things:

  1. makes it easy to pass values between JS and WebAssembly
  2. helps make the memory management safe

Passing values between JS and WebAssembly

Because this is just a JavaScript object, that means that JavaScript can also dig around in the bytes of this memory. So in this way, WebAssembly and JavaScript can share memory and pass values back and forth.

Instead of using a memory address, they use an array index to access each box.

For example, the WebAssembly could put a string in memory. It would encode it into bytes…

WebAssembly robot putting string "Hello" through decoder ring

…and then put those bytes in the array.

WebAssembly robot putting bytes into memory

Then it would return the first index, which is an integer, to JavaScript. So JavaScript can pull the bytes out and use them.

WebAssembly robot returning index of first byte in string

Now, most JavaScript doesn’t know how to work directly with bytes. So you’ll need something on the JavaScript side, like you do on the WebAssembly side, that can convert from bytes into more useful values like strings.

In some browsers, you can use the TextDecoder and TextEncoder APIs. Or you can add helper functions into your .js file. For example, a tool like Emscripten can add encoding and decoding helpers.
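A small sketch of that round trip. Here JS plays both sides, writing "Hello" into a memory object and decoding it back; in a real app the WebAssembly side would do the encoding and hand back the index.

```javascript
const memory = new WebAssembly.Memory({ initial: 1 });

// pretend the WebAssembly side encoded "Hello" into bytes at index 0
const encoded = new TextEncoder().encode('Hello');
new Uint8Array(memory.buffer).set(encoded, 0);

// JS receives the index (0) and length, pulls the bytes out, and decodes them
const view = new Uint8Array(memory.buffer, 0, encoded.length);
console.log(new TextDecoder('utf-8').decode(view)); // "Hello"
```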

JS engine pulling out bytes, and React.js decoding them

So that’s the first benefit of WebAssembly memory just being a JS object. WebAssembly and JavaScript can pass values back and forth directly through memory.

Making memory access safer

There’s another benefit that comes from this WebAssembly memory just being a JavaScript object: safety. It makes things safer by helping to prevent browser-level memory leaks and providing memory isolation.

Memory leaks

As I mentioned in the article on memory management, when you manage your own memory you may forget to clear it out. This can cause the system to run out of memory.

If a WebAssembly module instance had direct access to memory, and if it forgot to clear out that memory before it went out of scope, then the browser could leak memory.

But because the memory object is just a JavaScript object, it itself is tracked by the garbage collector (even though its contents are not).

That means that when the WebAssembly instance that the memory object is attached to goes out of scope, this whole memory array can just be garbage collected.

Garbage collector cleaning up memory object

Memory isolation

When people hear that WebAssembly gives you direct access to memory, it can make them a little nervous. They think that a malicious WebAssembly module could go in and dig around in memory it shouldn’t be able to. But that isn’t the case.

The bounds of the ArrayBuffer provide a boundary. It’s a limit to what memory the WebAssembly module can touch directly.

Red arrows pointing to the boundaries of the memory object

It can directly touch the bytes that are inside of this array but it can’t see anything that’s outside the bounds of this array.

For example, any other JS objects that are in memory, like the window global, aren’t accessible to WebAssembly. That’s really important for security.

Whenever there’s a load or a store in WebAssembly, the engine does an array bounds check to make sure that the address is inside the WebAssembly instance’s memory.

If the code tries to access an out-of-bounds address, the engine will throw an exception. This protects the rest of the memory.

WebAssembly trying to store out of bounds and being rejected

So that’s the memory import. In the next article, we’ll look at another kind of import that makes things safer… the table import.

hacks.mozilla.orgWebAssembly table imports… what are they?

This is the 3rd article in a 3-part series:

  1. Creating a WebAssembly module instance with JavaScript
  2. Memory in WebAssembly (and why it’s safer than you think)
  3. WebAssembly table imports… what are they?

In the first article, I introduced the four different kinds of imports that a WebAssembly module instance can have:

  • values
  • function imports
  • memory
  • tables

That last one is probably a little unfamiliar. What is a table import and what is it used for?

Sometimes in a program you want to be able to have a variable that points to a function, like a callback. Then you can do things like pass it into another function.

Defining a callback and passing it into a function

In C, these are called function pointers. The function lives in memory. The variable, the function pointer, just points to that memory address.

Function pointer at memory address 4 points to the callback at memory address 1

And if you need to, later you could point the variable to a different function. This should be a familiar concept.

Function pointer at memory address 4 changes to point to callback2 at memory address 4
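The same idea in JavaScript terms (an analogy only; C function pointers hold raw memory addresses, while JS variables hold opaque references):

```javascript
function callback()  { return 'first';  }
function callback2() { return 'second'; }

// a variable that "points to" a function...
let fnPointer = callback;
console.log(fnPointer()); // 'first'

// ...and can later be repointed to a different one
fnPointer = callback2;
console.log(fnPointer()); // 'second'
```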

In web pages, all functions are just JavaScript objects. And because they’re JavaScript objects, they live in memory addresses that are outside of WebAssembly’s memory.

JS function living in JS managed memory

If we want to have a variable that points to one of these functions, we need to take its address and put it into our memory.

Function pointer in WebAssembly memory pointing to function

But part of keeping web pages secure is keeping those memory addresses hidden. You don’t want code on the page to be able to see or manipulate that memory address. If there’s malicious code on the page, it can use that knowledge of where things are laid out in memory to create an exploit.

For example, it could change the memory address that you have in there, to point to a different memory location.

Then when you try and call the function, instead you would load whatever is in the memory address the attacker gave you.

Malicious actor changing the address in WebAssembly memory to point to malicious code

That could be malicious code that was inserted into memory somehow, maybe embedded inside of a string.

Tables make it possible to have function pointers, but in a way that isn’t vulnerable to these kinds of attacks.

A table is an array that lives outside of WebAssembly’s memory. The values are references to functions.

Another region of memory is added, distinct from WebAssembly memory, which contains the function pointer

Internally, these references contain memory addresses, but because it’s not inside WebAssembly’s memory, WebAssembly can’t see those addresses.

It does have access to the array indexes, though.

All memory outside of the WebAssembly memory object is obfuscated

If the WebAssembly module wants to call one of these functions, it passes the index to an operation called call_indirect. That will call the function.

call_indirect points to the first element of the obfuscated array, which in turn points to the function

Right now the use case for tables is pretty limited. They were added to the spec specifically to support these function pointers, because C and C++ rely pretty heavily on these function pointers.

Because of this, the only kinds of references that you can currently put in a table are references to functions. But as the capabilities of WebAssembly expand—for example, when direct access to the DOM is added—you’ll likely see other kinds of references being stored in tables and other operations on tables in addition to call_indirect.
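From JavaScript, a table looks like this (a minimal sketch; normally the module itself declares and fills the table):

```javascript
// 'anyfunc' means the table holds references to WebAssembly functions.
const table = new WebAssembly.Table({ element: 'anyfunc', initial: 2 });

console.log(table.length); // 2
console.log(table.get(0)); // null: slots are empty until code fills them

// JS and wasm only ever see indexes; the underlying addresses stay hidden.
```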

Web Application SecurityA Security Audit of Firefox Accounts

To provide transparency into our ongoing efforts to protect your privacy and security on the Internet, we are releasing a security audit of Firefox Accounts (FxA) that Cure53 conducted last fall. At Mozilla, we sponsor security audits of core open source software underpinning the Web and Internet, recently relaunched our web bug bounty program, find and fix vulnerabilities ourselves, and open source our code for anyone to review. Despite being available to more reviewers, open source software is not necessarily reviewed more thoroughly or frequently than closed source software, and the extra attention from third party reviewers can find outstanding issues and vulnerabilities. To augment our other initiatives and improve the overall security of our web services, we engage third party organizations to audit the security and review the code of specific services.

As Firefox’s central authentication service, FxA is a natural first target. Its security is critical to the millions of users who rely on it to authenticate with our most sensitive services, such as addons.mozilla.org and Sync. Cure53 ran a comprehensive security audit that encompassed the web services powering FxA and the cryptographic protocol used to protect user accounts and data. They identified 15 issues, none of which were exploited or put user data at risk.

We thank Cure53 for reviewing FxA and increasing our trust in the backbone of Firefox’s identity system. The audit is a step toward providing higher quality and more secure services to our users, which we will continue to improve through our various security initiatives. In the rest of this blog post, we discuss the technical details of the four highest severity issues. The report is available here and you can sign up or log into Firefox Accounts on your desktop or mobile device at: https://accounts.firefox.com/signup


FXA-01-001 HTML injection via unsanitized FxA relier Name

The one issue Cure53 ranked as critical, FXA-01-001 HTML injection via unsanitized FxA relier Name, resulted from displaying the name of a relier without HTML escaping on the relier registration page. This issue was not exploitable from outside Mozilla, because the endpoint for registering new reliers is not open to the public. A strict Content Security Policy (CSP) blocked most Cross-Site-Scripting (XSS) on the page, but an attacker could still exfiltrate sensitive authentication data via scriptless attacks and deface or repurpose the page for phishing. To fix the vulnerability soon after Cure53 reported it to us, we updated the template language to escape all variables and use an explicit naming convention for unescaped variables. Third party relier names are now sanitized and escaped.
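As an illustration of the kind of escaping the fix applies (a hypothetical helper, not Mozilla's actual template code):

```javascript
// Hypothetical HTML-escaping helper in the spirit of the template fix:
// escape '&' first so the other entities are not double-escaped.
function escapeHTML(value) {
  return String(value)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

escapeHTML('<img src=x onerror=alert(1)>');
// '&lt;img src=x onerror=alert(1)&gt;'
```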

FXA-01-004 XSS via unsanitized Output on JSON Endpoints

The first of three issues ranked high, FXA-01-004 XSS via unsanitized Output on JSON Endpoints, affected legacy browsers handling JSON endpoints with user-controlled fields at the beginning of the response. For responses like the following:

    {
        "id": "81730c8682f1efa5",
        "name": "<img src=x onerror=alert(1)>",
        "trusted": false,
        "image_uri": "",
        "redirect_uri": "javascript:alert(1)"
    }

an attacker could set the name or redirect_uri such that legacy browsers sniff the initial bytes of a response, incorrectly guess the MIME type as HTML instead of JSON, and execute user-defined scripts. We added the HTTP header X-Content-Type-Options: nosniff (XCTO) to disable MIME type sniffing, and wrote middleware and patches for the web frameworks to unicode-escape <, >, and & characters in JSON responses.
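A sketch of the unicode-escaping idea (a hypothetical helper, not the actual middleware): the escaped output is still valid JSON and parses back to the same values, but contains no bytes a sniffer could mistake for HTML.

```javascript
// Hypothetical sketch: replace characters meaningful to HTML with their
// \uXXXX escapes, which JSON parsers treat identically.
function safeJSONStringify(obj) {
  return JSON.stringify(obj)
    .replace(/</g, '\\u003c')
    .replace(/>/g, '\\u003e')
    .replace(/&/g, '\\u0026');
}

const body = safeJSONStringify({ name: '<img src=x onerror=alert(1)>' });
console.log(body.includes('<'));    // false: no literal angle brackets
console.log(JSON.parse(body).name); // '<img src=x onerror=alert(1)>'
```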

FXA-01-014 Weak client-side Key Stretching

The second issue with a high severity ranking, FXA-01-014 Weak client-side Key Stretching, is “a tradeoff between security and efficiency”. The onepw protocol threat model includes an adversary capable of breaking or bypassing TLS. Consequently, we run 1,000 iterations of PBKDF2 on user devices to avoid sending passwords directly to the server, which runs a further 2^16 (65,536) scrypt iterations on the PBKDF2-stretched password before storing it. Cure53 recommended storing PBKDF2 passwords with a higher work factor of roughly 256,000 iterations, but concluded “an exact recommendation on the number of iterations cannot be supplied in this instance”. To keep performance acceptable on less powerful devices, we have not increased the work factor yet.

FXA-01-010 Possible RCE if Application is run in a malicious Path

The final high severity issue, FXA-01-010 Possible RCE if Application is run in a malicious Path, affected people running FxA web servers from insecure paths in development mode. The servers exposed an endpoint that executes shell commands to determine the release version and git commit they’re running in development mode. For example, the command below returns the current git commit:

var gitDir = path.resolve(__dirname, '..', '..', '.git')
var cmd = util.format('git --git-dir=%s rev-parse HEAD', gitDir)
exec(cmd, …)

Cure53 noted that malicious commands like rm -rf * in the directory path (the __dirname global) would be executed, and recommended filtering and quoting parameters. We modified the script to use the cwd option and avoid filtering the parameter entirely:

var cmd = 'git rev-parse HEAD'
exec(cmd, { env: { GIT_CONFIG: gitDir } } ...)

Mozilla does not run servers from insecure paths, but some users host their own FxA services and it is always good to consider malicious input from all sources.


We reviewed the higher ranked issues from the report, circumstances limiting their impact, and how we fixed and addressed them. We invite you to contribute to developing Firefox Accounts and report security issues through our bug bounty program as we continue to improve the security of Firefox Accounts and other core services.

The post A Security Audit of Firefox Accounts appeared first on Mozilla Security Blog.

Air MozillaIntern Presentations: Round 1: Tuesday, July 18th

Intern Presentations, 4 presenters. Time: 1:00PM - 2:00PM (PDT); each presenter will start every 15 minutes. 2 in MTV, 2 in TOR.

The Mozilla BlogMozilla Announces “Net Positive: Internet Health Shorts” – A Film Screening About Society’s Relationship With The Internet

Mozilla, the non-profit behind the Firefox browser, is excited to support Rooftop Films in bringing a memorable evening of film and discussion to The Courtyard of Industry City, in beautiful Brooklyn, New York on Saturday, July 29 starting at 8 PM ET. As a part of Rooftop Films Annual Summer Series, hitRECord will premiere a film produced by Joseph Gordon-Levitt about staying safe online.

Mozilla believes the Internet is the most fantastically fun, awe-inspiring place we’ve ever built together. It’s where we explore new territory, build innovative products and services, swap stories, get inspired, and find our way in the world. It was built with the intention that everyone is welcome.

Right now, however, we’re at a tipping point. Big corporations want to privatize our largest public resource. Fake news and filter bubbles are making it harder for us to find our way. Online bullies are silencing inspired voices. And our desire to explore is hampered by threats to our safety and privacy.

“The Internet is a vast, vibrant ecosystem,” said Jascha Kaykas-Wolff, Mozilla’s Chief Marketing Officer. “But like any ecosystem, it’s also fragile. If we want the Internet to thrive as a diverse, open and safe place where all voices are welcome, it’s going to take committed citizens standing tall to protect it. Mozilla is proud to support the artists and filmmakers who are raising awareness for Internet health through creativity and storytelling.”

Dan Nuxoll, Program Director at Rooftop Films said, “In such a pivotal year for the Internet, we are excited to be working with Mozilla in support of films that highlight with such great detail our relationship with the web. As a non-profit, we are thrilled to be collaborating with another non-profit in support of consumer education and awareness about issues that matter most.”

Joseph Gordon-Levitt, actor and filmmaker said, “Mozilla is really a great organization, it’s all about keeping the Internet free, open and neutral — ideas very near and dear to my heart. I was flattered when Mozilla knocked on hitRECord’s door and asked us to collaborate.”

Join us as we explore, through short films, what’s helping and what’s hurting the Web. We are calling the event, “Net Positive: Internet Health Shorts.” People can register now to secure a spot.

Featured Films:
Harvest – Kevin Byrnes
Hyper Reality – Keiichi Matsuda
I Know You From Somewhere – Andrew Fitzgerald
It Should Be Easy – Ben Meinhardt
Lovestreams – Sean Buckelew
Project X – Henrik Moltke and Laura Poitras
Too Much Information – Joseph Gordon-Levitt & hitRECord
Price of Certainty – Daniele Anastasion
Pizza Surveillance – Micah Laaker

Saturday, July 29
Venue: The Courtyard of Industry City
Address: 274 36th Street (Sunset Park, Brooklyn)
8:00 PM: Doors Open
8:30 PM: Live Music
9:00 PM: Films Begin
10:30 PM: Post-Screening Discussion with Filmmakers
11:00 PM: After-party sponsored by Corona Extra, Tanqueray, Freixenet, DeLeón Tequila, and Fever-Tree Tonic

In the past year, Mozilla has supported the movement to raise awareness for Internet Health by launching the IRL podcast, hosting events around the country, and collaborating with change-makers such as Joseph Gordon-Levitt to educate the public about a healthy and safe Internet environment.

About Mozilla

Mozilla has been a pioneer and advocate for the open web for more than 15 years. We promote open standards that enable innovation and advance the Web as a platform for all. Today, hundreds of millions of people worldwide use Mozilla Firefox to experience the Web on computers, tablets, and mobile devices. For more information, visit www.mozilla.org.

About Rooftop Films

Rooftop Films is a non-profit organization whose mission is to engage and inspire the diverse communities of New York City by showcasing the work of emerging filmmakers and musicians. In addition to their annual Summer Series – which takes place in unique outdoor venues every weekend throughout the summer – Rooftop provides grants to filmmakers, rents equipment at low-cost to artists and non-profits, and supports film screenings citywide with the Rooftop Films Community Fund. At Rooftop Films, we bring underground movies outdoors. For more information and updates please visit their website at www.rooftopfilms.com.

The post Mozilla Announces “Net Positive: Internet Health Shorts” – A Film Screening About Society’s Relationship With The Internet appeared first on The Mozilla Blog.

Mozilla Add-ons BlogAdd-on Compatibility for Firefox 56

Firefox 56 will be released on September 26th. Here’s the list of changes that went into this version that can affect add-on compatibility. There is more information available in Firefox 56 for Developers, so you should also give it a look. Also, if you haven’t yet, please read our roadmap to Firefox 57.

Compatibility changes

Let me know in the comments if there’s anything missing or incorrect on these lists. We’d like to know if your add-on breaks on Firefox 56.

The automatic compatibility validation and upgrade for add-ons on AMO will run in a few weeks, so keep an eye on your email if you have an add-on listed on our site with its compatibility set to Firefox 55.

Last stop!

LEGO end of train line

Firefox 56 will be the last version of Firefox to support legacy add-ons. It’s the last release cycle you’ll have to port your add-ons to WebExtensions. Many planned APIs won’t make the cut for 57, so make sure that you plan your development timeline accordingly.

This is also the last compatibility overview I’ll write. I started writing these 7 years ago, the first one covering Firefox 4. Looking ahead, backwards-incompatible changes in WebExtensions APIs should be rare. When and if they occur, we’ll post one-offs about them, so please keep following this blog for updates.

The post Add-on Compatibility for Firefox 56 appeared first on Mozilla Add-ons Blog.

hacks.mozilla.orgPicasso Tower 360º tour with A-Frame

A 360º tour refers to an experience that simulates an in-person visit through the surrounding space. This “walkthrough” visit is composed of scenes in which you can look around at any point, similar to how you can look around in Google Street View. In a 360º tour, different scenes are accessible via discrete hotspots that users can enable or jump into, transporting themselves to a new place in the tour.

The magenta octahedron represents the user’s point of view. The image covers the inner surface of the sphere.


With A-Frame, creating such an experience is a surprisingly simple task.

360º panoramas

In photography, panoramas are essentially wide-angle images. Wide-angle means wide field of view, so the region of the physical space captured by the camera is wider than in regular pictures. A 360º panorama captures the space all the way around the camera.

In the same way that wide-angle photography requires special lenses, 360º panoramas require special cameras. You can read Kevin Ngo’s guide to 360º photography for advice and recommendations when creating panoramas.

Trying to represent a sphere in a rectangular format results in what we call a projection. Projection introduces distortion: straight lines become curves. You will probably be able to recognize panoramic pictures thanks to the distortion effects that appear when panoramic views are represented in a two-dimensional space:

To undo the distortion, you have to project the rectangle back into a sphere. With A-Frame, that means using the panorama as the texture of a sphere facing the camera. The simplest approach is to use the a-sky primitive. The projection of the image must be equirectangular in order to work in this setup.

See the Pen 360º panorama viewer by Salvador de la Puente González (@lodr) on CodePen.

By adding some bits of JavaScript, you can modify the src attribute of the sky primitive to change the panorama texture and enable the user to teleport to a different place in your scene.
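A minimal sketch of that wiring, assuming a sky primitive and a clickable hotspot element whose destination is named in a hypothetical to attribute:

```javascript
// Sketch: when a hotspot is activated ("click" fires on tap or gaze fuse),
// swap the sky's src to the target panorama. The "to" attribute naming the
// destination is an assumption for illustration.
function wireHotspot(hotspot, sky) {
  hotspot.addEventListener('click', function () {
    sky.setAttribute('src', '#' + hotspot.getAttribute('to'));
  });
}

// In the page you would call something like:
//   wireHotspot(document.querySelector('a-hotspot'),
//               document.querySelector('a-sky'));
```

Swapping the src this way is all a "teleport" amounts to: the sphere stays put and only its texture changes.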

Getting equirectangular images actually depends on the capturing device. For instance, the Samsung Gear 360 camera requires the use of official Samsung stitching software to combine the native dual-fisheye output into the equirectangular version; while the Ricoh Theta S outputs both equirectangular and dual-fisheye images without further interaction.

A dual-fisheye image arranges two fisheye images side by side

A dual-fisheye image is the common output of 360º cameras. Stitching software can convert this into an equirectangular image.

A 360º tour template

To create such an experience, you can use the 360 tour template that comes with aframe-cli. The aframe-360-tour-template encapsulates the concepts mentioned above in reusable components and meaningful primitives, enabling a developer to write semantic 360º tours in just a few steps.

aframe-cli has not been released yet (this is bleeding edge A-Frame tooling) but you can install a pre-release version with npm by running the following command:

npm install -g aframevr-userland/aframe-cli

Now you can access aframe-cli using the aframe command. Go to your workspace directory and start a new project by specifying the name of the project folder and the template:

$ aframe new tour --template 360-tour
$ cd tour

Start the experience with the following command:

$ aframe serve

And visit http://127.0.0.1:3333 to experience the tour.

Adding panoramas

Visit my Picasso Tower 360 gallery on Flickr and download the complete gallery. (Images are public domain so don’t worry about licensing issues.)

Decompress the file and paste the images inside the app/assets/images/ folder. I will use just three images in this example. After you finish this article, you can experiment with the complete tour. Be sure to notice that the panorama order matches naming: 360_0071_stitched_injected_35936080846_o goes before 360_0072_stitched_injected_35936077976_o, which goes before 360_0073_stitched_injected_35137574104_o and so on…

Edit index.html to locate the panoramas section inside the a-tour primitive. Change current panoramas by modifying their src attribute or add new ones by writing new a-panorama primitives. Replace the current panoramas with the following ones:

<a-panorama id="garden" src="images/360_0071_stitched_injected_35936080846_o.jpg"></a-panorama>
<a-panorama id="corner" src="images/360_0074_stitched_injected_35936077166_o.jpg"></a-panorama>
<a-panorama id="facade" src="images/360_0077_stitched_injected_35137573294_o.jpg"></a-panorama>

Save and reload your browser tab to see the new results.

You may need to correct the rotation of the panorama so the user faces the direction you want. Change the rotation component of the panorama to do so (remember to save and reload to see your changes):

<a-panorama id="garden" src="images/360_0071_stitched_injected_35936080846_o.jpg" rotation="0 90 0"></a-panorama>

Now you need to connect the new sequence to the other panoramas with positioned hotspots. Replace current hotspots with the following one and look at the result by reloading the tab:

<a-hotspot id="garden-to-corner" for="garden" to="corner" mixin="hotspot-target" position="-3.86 -0.01 -3.18" rotation="-0.11 50.47 -0.00">
  <a-text value="CORNER" align="center" mixin="hotspot-text"></a-text>
</a-hotspot>

Remember that in order to activate a hotspot, while in desktop mode, you have to place the black circle over the magenta octahedron and click on the screen.

Placing hotspots

Positioning hotspots can be a frustrating endeavour. Fortunately, the template comes with a useful component to help with this task. Simply add the hotspot-helper component to your tour, referencing the hotspot you want to place as the value of the target property: <a-tour hotspot-helper="target: #corner-to-garden">. The component will move the hotspot as you look around and will display a widget in the top-left corner showing the world position and rotation of the hotspot, allowing you to copy these values to the clipboard.

Custom hotspots

You can customise the hotspot using mixins. Edit index.html and locate hotspot-text and hotspot-target mixin primitives inside the assets section.

For instance, to avoid the need to copy the world rotation values, we are going to use ngokevin’s look-at component, which is already included in the template.

Modify the entity with the hotspot-text id to look like this:

<a-mixin id="hotspot-text" look-at="[camera]" text="font: exo2bold; width: 5" geometry="primitive: plane; width: 1.6; height: 0.4" material="color: black;" position="0 -0.6 0"></a-mixin>

Cursor feedback

If you enter VR mode, you will realise that teleporting to a new place requires you to fix your gaze on the hotspot you want to reach for an interval of time. We can change the duration of this interval by modifying the cursor component. Try increasing the timeout to two seconds:

<a-entity cursor="fuse: true; fuse-timeout: 2000" position="0 0 -1"
          geometry="primitive: ring; radiusInner: 0.02; radiusOuter: 0.03"
          material="color: black; shader: flat">
</a-entity>

Once you add fuse: true to your cursor component, you won’t need to click on the screen, even out of VR mode. A click event will trigger after fuse-timeout milliseconds.

Following the suggestion in the article about the cursor component, you can create the perception that something is about to happen by attaching an a-animation primitive inside the cursor entity:

<a-entity cursor="fuse: true; fuse-timeout: 2000" position="0 0 -1"
          geometry="primitive: ring; radiusInner: 0.02; radiusOuter: 0.03"
          material="color: black; shader: flat">
      <a-animation begin="fusing" end="mouseleave" easing="ease-out" attribute="scale"
                   fill="backwards" from="1 1 1" to="0.2 0.2 0.2"
                   dur="2000"></a-animation>
</a-entity>
Fix the gaze on a hotspot for 2 seconds to activate the hotspot and teleport.

Click on the picture above to see fuse and the animation feedback in action.

Ambient audio

Sound is a powerful tool for increasing the illusion of presence. You can find several places on the Internet offering royalty-free sounds, such as soundbible.com. Once you decide on the perfect ambient noise for the experience you’re creating, grab the file URL, or download the file and serve it locally if a direct URL isn’t available. Create a new sounds folder under app/assets and put the audio file inside.

Add an audio tag that points to the sound file URL inside the <a-assets> element in order for the file to load:

<a-assets>
   ...
   <audio id="ambient-sound" src="sounds/environment.mp3"></audio>
</a-assets>

And use the sound component referencing the audio element id to start playing the audio:

<a-tour sound="src: #ambient-sound; loop: true; autoplay: true; volume: 0.4"></a-tour>

Adjust the volume by modifying the volume property which ranges from 0 to 1.

Conclusion

360º tours offer first-time WebVR creators a perfect starting project that does not require exotic or expensive gear to begin VR development. Panoramic 360º scenes naturally fall back to regular 2D visualization on a desktop or mobile screen, and with a cardboard headset or VR head-mounted display, users will enjoy an improved sense of immersion.

With aframe-cli and the 360º tour template you can now quickly set up the basics to customise and publish your 360º VR tour. Create a new project to show us your favourite places (real or imaginary!) by adding panoramic views, or start hacking on the template to extend its basic functionality. Either way, don’t forget to share your project with the A-Frame community in Slack and Twitter.

The Mozilla Blog60,000,000 Clicks for Copyright Reform

More than 100,000 people—and counting—are demanding Internet-friendly copyright laws in the EU

 

60,000,000 digital flyers.

117,000 activists.

12,000 tweets to Members of the European Parliament (MEPs).

Europe has been Paperstormed.

Earlier this year, Mozilla and our friends at Moniker launched Paperstorm.it, a digital advocacy tool that urges EU policymakers to update copyright laws for the Internet age.

Paperstorm.it users drop digital flyers onto maps of European landmarks, like the Eiffel Tower and the Reichstag Building in Berlin. When users drop a certain amount, they trigger impassioned tweets to European lawmakers:

“We built Paperstorm as a fun (and mildly addictive) way for Internet users to learn about and engage with a serious issue: the EU’s outdated copyright laws,” says Mozilla’s Brett Gaylor, one of Paperstorm’s creators.

“The Parliament has a unique opportunity to reform copyright,” says Raegan MacDonald, Mozilla’s Senior EU Policy Manager. “We hope this campaign served as a reminder that EU citizens want a modern framework that will promote — not hinder — innovation and creativity online. The success of this reform hinges on whether the interests of these citizens — whether creators, innovators, teachers, librarians, or anyone who uses the internet — are truly taken into account in the negotiations.”

Currently, lawmakers are crafting amendments to the proposal for a new copyright law, a process that will end this year. Now is the time to make an impact. And we are.

Over the last two months, more than 100,000 Internet users visited Paperstorm.it. They sent 12,000 tweets to key MEPs, like France’s Jean-Marie Cavada, Germany’s Angelika Niebler, and Lithuania’s Antanas Guoga. In total, Paperstormers contacted 13 MEPs in 10 countries: Austria, France, Germany, Italy, Lithuania, Malta, Poland, Romania, Sweden and the UK.

Then, we created custom MEP figurines inside Paperstorm snowglobes. A Mozilla community member from Italy hand-delivered these snowglobes right to MEPs’ offices in Brussels, alongside a letter urging a balanced copyright reform for the digital age. Here’s the proof:

Angelika Niebler, Member, ITRE (left) and Jean-Marie Cavada, Vice-Chair, JURI

JURI Committee Vice-Chair, MEP Laura Ferrara, Italy (center) with Mozilla’s Raegan MacDonald and Edoardo Viola

Thanks for clicking. We’re looking forward to what’s ahead: 100,000,000 clicks—and common-sense copyright laws for the Internet age.

The post 60,000,000 Clicks for Copyright Reform appeared first on The Mozilla Blog.

Open Policy & AdvocacyMozilla statement on Supreme Court hearings on Aadhaar

The Supreme Court of India is setting up a nine judge bench to consider whether the right to privacy is a fundamental right under the Indian Constitution. This move is a result of multiple legal challenges to Aadhaar, the Indian national biometric identity database, which the Government of India is currently operating without any meaningful privacy protections.

We’re pleased to see the Indian Supreme Court take this important step forward in considering the privacy implications of Aadhaar. At a time when the Government of India is increasingly making Aadhaar mandatory for everything from getting food rations, to accessing healthcare, to logging into a wifi hotspot, a strong framework protecting privacy is critical. Indians have been waiting for years for a Constitutional Bench of the Supreme Court to take up these Aadhaar cases, and we hope the Right to Privacy will not be in question for much longer.

The post Mozilla statement on Supreme Court hearings on Aadhaar appeared first on Open Policy & Advocacy.

Mozilla Add-ons BlogAdd-ons at Mozilla All Hands San Francisco

Firefox add-on staff and contributors gathered at Mozilla’s recent All Hands meeting in San Francisco to spend time as a group focusing on our biggest priority this year: the Firefox 57 release in November.

During the course of the week, Mozillians could be found huddled together in various conference spaces discussing blocker issues, making plans, and hacking on code. Here’s a recap of the week and a glance at what we have in store for the second half of 2017.

Add-on Engineering

Add-on engineers Luca Greco and Kumar McMillan take a break to model new add-on jackets.

For most of the engineering team, the week was a chance to catch up on the backlog of bugs. (The full list of bugs closed during the week can be found here.)

We also had good conversations about altering HTTP Response in the webRequest API, performance problems with the blocklist on Firefox startup, and sketching out a roadmap for web-ext, the command line tool for extension development. We also had a chance to make progress on the browser.proxy API.

Improving addons.mozilla.org (AMO)

Having recently completed the redesign of AMO for Android, we’ve now turned our attention to refreshing the desktop version. Goals for the next few months include modernizing the homepage and making it easier to find great add-ons. Here’s a preview of the new look:

 

Another area of focus was migrating to Django 1.11. Most of the work on the Django upgrade involved replacing and removing incompatible libraries and customizations, and a lot of progress was made during the week.

Add-on Reviews

Former intern Elvina Valieva helped make improvements to the web-ext command line tool, in addition to doing some impressive marine-themed photoshopping.

Review queue wait times have dramatically improved in the past few weeks, and we’re on track to deliver even more improvements in the next few months. During our week together, we also discussed ideas for improving the volunteer reviewer program and evolving it to stay relevant to the new WebExtensions model. We’ll be reaching out to the review team for feedback in the coming weeks.

Get Involved

Interested in contributing to the add-ons community? Check out our wiki to see a list of current opportunities.

 

The post Add-ons at Mozilla All Hands San Francisco appeared first on Mozilla Add-ons Blog.

Open Policy & AdvocacyMozilla files comments to save the internet… again

Today, we filed Mozilla’s comments to the FCC. Just want to take a look at them? They’re right here – or read on for more.

Net neutrality is critical to the internet’s creators, innovators, and everyday users. We’ve talked a lot about the importance of net neutrality over the years, both in the US and globally — and there have been many positive developments. But today there’s a looming threat: FCC Chairman Pai’s plan to roll back enforceable net neutrality protections in his so-called “Restoring Internet Freedom” proceeding.

Net neutrality — enforceable and with clear rules for providers — is critical to the future of the internet. Our economy and society depend on the internet being open. For net neutrality to work, it must be enforceable. In the past, when internet service providers (ISPs) were not subject to enforceable rules, they violated net neutrality. ISPs prevented users from chatting on FaceTime and streaming videos, among other questionable business practices. The 2015 rules fixed this: the Title II classification of broadband protected access to the open internet and made all voices free to be heard. The 2015 rules preserved — and made enforceable — the fundamental principles and assumptions in which the internet has always been rooted. To abandon these core assumptions about how the internet works and is regulated has the potential to wreak havoc. It would hurt users and stymie innovation. It could very well see the US fall behind the other 47 countries around the world that have enforceable net neutrality rules.

We’ve asked you to comment, and we’ve been thrilled with your response. Thank you! Keep it coming! Now it’s our turn. Today, we are filing Mozilla’s comments on the proceeding, arguing against this rollback of net neutrality protections. Net neutrality is a critical part of why the internet is great, and we need to protect it:

  • Net neutrality is fundamental to free speech. Without it, big companies could censor anyone’s voice and make it harder to speak up online.
  • Net neutrality is fundamental to competition. Without it, ISPs can prioritize their businesses over newcomer companies trying to reach users with the next big thing.
  • Net neutrality is fundamental to innovation. Without it, funding for startups could dry up, as established companies that can afford to “pay to play” become the only safe tech investments.
  • And, ultimately, net neutrality is fundamental to user choice. Without it, ISPs can choose what you access — or how fast it may load — online.

The best way to protect net neutrality is with what we have today: clear, lightweight rules that are enforceable by the FCC. There is no basis to change net neutrality rules, as there is no clear evidence of a negative impact on anything, including ISPs’ long-term infrastructure investments. We’re concerned that user rights and online innovation have become a political football, when really most people and companies agree that net neutrality is important.

There’s more to come in this process — many will write “reply comments” over the next month. After that, the Commission should consider these comments (and we hope they reconsider the plan entirely) and potentially vote on the proposal later this year. We fully expect the courts to weigh in here if the new rule is enacted, and we’ll engage there too. Stay tuned!

The post Mozilla files comments to save the internet… again appeared first on Open Policy & Advocacy.

Mozilla Add-ons BlogAdd-ons Update – 2017/07

Here’s the monthly update of the state of the add-ons world.

The Road to Firefox 57 explains what developers should look forward to in regards to add-on compatibility for the rest of the year. So please give it a read if you haven’t already.

The Review Queues

In the past month, our team reviewed 1,597 listed add-on submissions:

  • 1,294 in fewer than 5 days (81%).
  • 110 between 5 and 10 days (7%).
  • 193 after more than 10 days (12%).

301 listed add-ons are awaiting review.

If you’re an add-on developer and are looking for contribution opportunities, please consider joining us. Visit our wiki page for more information.

Compatibility Update

We published the blog post for 55 and the bulk validation has been run. Additionally, the compatibility post for 56 is coming up.

Make sure you’ve tested your add-ons and either use WebExtensions or set the multiprocess compatible flag in your manifest. As always, we recommend that you test your add-ons on Beta.

If you’re an add-ons user, you can install the Add-on Compatibility Reporter. It helps you identify and report any add-ons that aren’t working anymore.

Recognition

We would like to thank the following people for their recent contributions to the add-ons world:

  • Aayush Sanghavi
  • Santiago Paez
  • Markus Strange
  • umaarabdullah
  • Ahmed Hasan
  • Fiona E Jannat
  • saintsebastian
  • Atique Ahmed
  • Apoorva Pandey
  • Cesar Carruitero
  • J.P. Rivera
  • Trishul Goel
  • Santosh
  • Christophe Villeneuve

You can read more about their work in our recognition page.

The post Add-ons Update – 2017/07 appeared first on Mozilla Add-ons Blog.

hacks.mozilla.orgIntroducing sphinx-js, a better way to document large JavaScript projects

Until now, there has been no good tool for documenting large JavaScript projects. JSDoc, long the sole contender, has some nice properties:

  • A well-defined set of tags for describing common structures
  • Tooling like the Closure Compiler which hooks into those tags

But the output is always a mere alphabetical list of everything in your project. JSDoc scrambles up and flattens out your functions, leaving new users to infer their relationships and mentally sort them into comprehensible groups. While you can get away with this for tiny libraries, it fails badly for large ones like Fathom, which has complex new concepts to explain. What I wanted for Fathom’s manual was the ability to organize it logically, intersperse explanatory prose with extracted docs, and add entire sections which are nothing but conceptual overview and yet link into the rest of the work.[1]

The Python world has long favored Sphinx, a mature documentation tool with support for many languages and output formats, along with top-notch indexing, glossary generation, search, and cross-referencing. People have written entire books in it. Via plugins, it supports everything from Graphviz diagrams to YouTube videos. However, its JavaScript support has always lacked the ability to extract docs from code.

Now sphinx-js adds that ability, giving JavaScript developers the best of both worlds.

sphinx-js consumes standard JSDoc comments and tags—you don’t have to do anything weird to your source code. (In fact, it delegates the parsing and extraction to JSDoc itself, letting it weather future changes smoothly.) You just have Sphinx initialize a docs folder in the root of your project, activate sphinx-js as a plugin, and then write docs to your heart’s content using simple reStructuredText. When it comes time to call in some extracted documentation, you use one of sphinx-js’s special directives, modeled after the Python-centric autodoc’s mature example. The simplest looks like this:

.. autofunction:: linkDensity

That would go and find this function…

/**
 * Return the ratio of the inline text length of the links in an element to
 * the inline text length of the entire element.
 *
 * @param {Node} node - The node whose density to measure
 * @throws {EldritchHorrorError|BoredomError} If the expected laws of the
 *     universe change, raise EldritchHorrorError. If we're getting bored of
 *     said laws, raise BoredomError.
 * @returns {Number} A ratio of link length to overall text length: 0..1
 */
function linkDensity(node) {
  ...
}

…and spit out a nicely formatted block like this:

(the previous comment block, formatted nicely)

Sphinx begins to show its flexibility when you want to do something like adding a series of long examples. Rather than cluttering the source code around linkDensity, the additional material can live in the reStructuredText files that comprise your manual:

.. autofunction:: linkDensity
   
   Anything you type here will be appended to the function's description right
   after its return value. It's a great place for lengthy examples!

There is also a sphinx-js directive for classes, either the ECMAScript 2015 sugared variety or the classic functions-as-constructors kind decorated with @class. It can optionally iterate over class members, documenting as it goes. You can control ordering, turn private members on or off, or even include or exclude specific ones by name—all the well-thought-out corner cases Sphinx supports for Python code. Here’s a real-world example that shows a few truly public methods while hiding some framework-only “friend” ones:

.. autoclass:: Ruleset(rule[, rule, ...])
   :members: against, rules

(Ruleset class with extracted documentation, including member functions)

Going beyond the well-established Python conventions, sphinx-js supports references to same-named JS entities that would otherwise collide: for example, one foo that is a static method on an object and another foo that is an instance method on the same. It does this using a variant of JSDoc’s namepaths. For example…

  • someObject#foo is the instance method.
  • someObject.foo is the static method.
  • And someObject~foo is an inner member, the third possible kind of overlapping thing.

Because JSDoc is still doing the analysis behind the scenes, we get to take advantage of its understanding of these JS intricacies.

Of course, JS is a language of heavy nesting, so things can get deep and dark in a hurry. Who wants to type this full path in order to document innerMember?

some/file.SomeClass#someInstanceMethod.staticMethod~innerMember

Yuck! Fortunately, sphinx-js indexes all such object paths using a suffix tree, so you can use any suffix that unambiguously refers to an object. You could likely say just innerMember. Or, if there were 2 objects called “innerMember” in your codebase, you could disambiguate by saying staticMethod~innerMember and so on, moving to the left until you have a unique hit. This delivers brevity and, as a bonus, saves you having to touch your docs as things move around your codebase.
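The idea can be sketched in a few lines of JavaScript (a simplified stand-in for the real suffix tree; the splitting rules here are only an approximation of JSDoc namepaths):

```javascript
// Sketch of suffix-based namepath lookup: index every suffix of every
// object path, then resolve a query iff exactly one path matches.
function buildIndex(paths) {
  const index = new Map();
  const add = (key, path) => {
    if (!index.has(key)) index.set(key, new Set());
    index.get(key).add(path);
  };
  for (const path of paths) {
    // Split while keeping JSDoc separators (. # ~ /) attached to segments.
    const segments = path.split(/(?=[.#~\/])/);
    for (let i = 0; i < segments.length; i++) {
      const suffix = segments.slice(i).join('');
      add(suffix, path);
      // Also index without a leading '.', '~' or '/' so bare names work
      // (e.g. "innerMember"); '#' stays significant so instance and
      // static members remain distinguishable.
      if (/^[.~\/]/.test(suffix)) add(suffix.slice(1), path);
    }
  }
  return index;
}

function resolve(index, query) {
  const hits = [...(index.get(query) || [])];
  if (hits.length === 1) return hits[0];
  throw new Error((hits.length ? 'Ambiguous: ' : 'Not found: ') + query);
}
```

With the full path indexed alongside every one of its tails, the shortest unambiguous suffix is enough to pin down an object, exactly as in the innerMember example above.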

With the maturity and power of Sphinx, backed by the ubiquitous syntactical conventions and proven analytic machinery of JSDoc, sphinx-js is an excellent way to document any large JS project. To get started, see the readme. Or, for a large-scale example, see the Fathom documentation. A particularly juicy page is the Rule and Ruleset Reference, which intersperses tutorial paragraphs with extracted class and function docs; its source code is available behind a link in its upper right, as for all such pages.

I look forward to your success stories and bug reports—and to the coming growth of rich, comprehensibly organized JS documentation!


1JSDoc has tutorials, but they are little more than single HTML pages. They have no particular ability to cross-link with the rest of the documentation nor to call in extracted comments.

The Mozilla BlogDefending Net Neutrality: Millions Rally to Save the Internet, Again

We’re fighting for net neutrality, again, because it is crucial to the future of the internet. Net neutrality serves to enable free speech, competition, innovation and user choice online.

On July 12, it was great to see such a diversity of voices speak up and join together to support a neutral internet. We need to protect the internet as a shared global public resource for us all. This Day of Action makes it clear, yet again, that net neutrality is a mainstream issue, which the majority of Americans (76% in our recent survey) care about and support.

We were happy to see a lot of engagement with our Day of Action activities:

  • Mozilla collected more than 30,000 public comments on July 12 alone — bringing our total number of public comments to more than 75,000. We’ll be sharing these with the FCC
  • Our nine-hour Soothing Sounds of Activism: Net Neutrality video, along with interviews from Senators Al Franken and Ron Wyden, received tens of thousands of views
  • The net neutrality public comments displayed on the U.S. Firefox snippet received 6.8 million impressions
  • 30,000 listeners tuned in for the net neutrality episode of our IRL podcast

The Day of Action was timed a few days before the first deadline for comments to the FCC on the proposed rollback of existing net neutrality protections. This is just the first step though. Mozilla takes action to protect net neutrality every day, because it’s obviously not a one day battle.

Net neutrality is not the sole responsibility of any one company, individual or political party. We need to join together because the fight for net neutrality impacts the future of the internet and everyone who uses it.

What’s Next?

Right now, we’re finishing our FCC comments to submit on July 17. Next, we’ll continue to advocate for enforceable net neutrality through all FCC deadlines and we’ll defend the open internet, just like we did with our comments and efforts to protect net neutrality in 2010 and 2014.

The post Defending Net Neutrality: Millions Rally to Save the Internet, Again appeared first on The Mozilla Blog.

QMOFirefox Developer Edition 55 Beta 11 Testday, July 21st

Hello Mozillians,

We are happy to let you know that Friday, July 21st, we are organizing Firefox Developer Edition 55 Beta 11 Testday. We’ll be focusing our testing on the following features: Screenshots, Shutdown Video Decoder and Customization.

Check out the detailed instructions via this etherpad.

No previous testing experience is required, so feel free to join us on the #qa IRC channel, where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better!

See you on Friday!

Air MozillaReps Weekly Meeting Jul. 13, 2017

Reps Weekly Meeting Jul. 13, 2017 This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Firefox UXFirefox Workflow User Research in Germany

Munich Public Transit (Photo: Gemma Petrie)

Last year, the Firefox User Research team conducted a series of formative research projects studying multi-device task continuity. While these previous studies broadly investigated types of task flows and strategies for continuity across devices, they did not focus on the functionality, usability, or user goals behind these specific workflows.

For most users, interaction with browsers can be viewed as a series of specific, repeatable workflows. Within the idea of a “workflow” is the theory of “flow.” Flow has been defined as:

a state of mind experienced by people who are deeply involved in an activity. For example, sometimes while surfing the Net, people become so focused on their pursuit that they lose track of time and temporarily forget about their surroundings and usual concerns…Flow has been described as an intrinsically enjoyable experience.¹

As new features and service integrations are introduced to existing products, there is a risk that unarticulated assumptions about usage context and user mental models could create obstacles for our users. Our goal for this research was to identify these obstacles and gain a detailed understanding of the behaviors, motivations, and strategies behind current browser-based user workflows and related device or app-based workflows. These insights will help us develop products, services, and features for our users.

Primary Research Questions

  • How can we understand users’ current behaviors to develop new workflows within the browser?
  • How do workflows & “flow” states differ between and among different devices?
  • In which current browser workflows do users encounter obstacles? What are these obstacles?
  • Are there types of workflows for specific types of users and their goals? What are they?
  • How are users’ unmet workflow needs being met outside of the browser? And how might we meet those needs in the browser?

Methodology

In order to understand users’ workflows, we employed a three-part, mixed-methods approach.

Survey

The first phase of our study was a twenty-question survey deployed to 1,000 respondents in Germany provided by SSI’s standard international general population panel. We asked participants to select the Internet activities they had engaged in during the previous week. Participants were also asked questions about their browser usage on multiple devices as well as their perceptions of privacy. We modeled this survey on Pew Research Center’s “The Internet and Daily Life” study.

Experience Sampling

In the second phase, a separate group of 26 German participants was recruited from four major German cities: Cologne, Hamburg, Munich, and Leipzig. These participants represented a diverse range of demographic groups, and half of them used Firefox as their primary browser on at least one of their devices. Participants were asked to download a mobile app called Paco, which cued them up to seven times daily with questions about their current Internet activity, its context, and their mental state while completing it.

In-Person Interviews

In the final phase of the study, we selected 11 of the participants from the Experience Sampling segment in Hamburg, Munich, and Leipzig. Over the course of 3 weeks, we visited these participants in their homes and conducted 90-minute interview and observation sessions. Based on the survey results and experience sampling observations, we explored a small set of participants’ workflows in detail.

Product Managers participating in affinity diagramming in the Mozilla Toronto office. (Photo: Gemma Petrie)

Field Team Participation

The Firefox User Research team believes it is important to involve a wide variety of staff members in the experience of in-context research and analysis activities. Members of the Firefox product management and UX design teams accompanied the research team for these in-home interviews in Germany. After the interviews, the whole team met in Toronto for a week to absorb and analyze the data collected from the three segments. The results presented here are based on the analysis provided by the team.

Workflows

Based on our research, we define a workflow as a habitual, frequently employed set of discrete steps that users build into a larger activity. Users employ the tools they have at hand (e.g., tabs, bookmarks, screenshots) to achieve a goal. Workflows can also span across multiple devices, range from simple to technically sophisticated, exist across noncontinuous durations of time, and contain multiple decisions within them.

Example Workflow from Hamburg Participant #2

We observed that workflows appeared to participants as naturally constructed actions. Their workflows were so unconscious or self-evident that participants often found it challenging to articulate and reconstruct them. Examples of workflows include: comparison shopping, checking email, checking news updates, and sharing an image with someone else.

Workflows Model

Based on our study, we have developed a general two-part model to illustrate a workflow.

Part 1: Workflows are constructed from discrete steps. These steps are atomic and include actions like typing in a URL, pressing a button, taking a screenshot, sending a text message, saving a bookmark, etc. We mean “atomic” in the sense that the steps are simple, irreducible actions in the browser or other software tools. When employed alone, these actions can achieve a simple result (e.g. creating a bookmark). Users build up the atomic actions into larger actions that constitute a workflow.

Part 2: Outside factors can influence the choices users make for both a whole workflow and steps within a workflow. These factors include software components, physical components, and psycho/social/cultural factors.

Trying to find the Mozilla Berlin office. (Photo: Gemma Petrie)

Factors Influencing Workflows

While workflows are composed from atomic building blocks of tools, there is a great deal more that influences their construction and adoption among users.

Software Components

Software components are features of the operating system, the browser, and the specs of web technology that allow users to complete small atomic tasks. Some software components also constrain users into limited tasks or are obstacles to some workflows.

The basic building blocks of the browser are the features, tools, and preferences that allow users to complete tasks with the browser. Some examples include: Tabs, bookmarks, screenshots, authentication, and notifications.

Physical Components

Physical components are the devices and technology infrastructure that inform how users interact with software and the Internet. These components employ software but it is users’ physical interaction with them that makes these factors distinct. Some examples include: Access to the internet, network availability, and device form factors.

Psycho/Social/Cultural Factors

Psycho/Social/Cultural influences are contextual, social, and cognitive factors that affect users’ approaches to and decisions about their workflows.

Memory
Participants use memory to fill in gaps in their workflows where technology does not support persistence. For example, when comparison shopping, a user with multiple tabs open relies on memory to keep in mind the prices for the same item in the other tabs.

Control
Participants exercised control over the role of technology in their lives either actively or passively. For example, some participants believed that they received too many notifications from apps and services, and often did not understand how to change these settings. This experience eroded their sense of control over their technology and forced these participants to develop alternate strategies for regaining control over these interruptions. For others, notifications were seen as a benefit. For example, one of our Leipzig participants used home automation tools and their associated notifications on his mobile devices to give him more control over his home environment.

Other examples of psycho/social/cultural factors we observed included: Work/personal divides, identity management, fashion trends in technology adoption, and privacy concerns.

Using the Workflows Model

When analyzing current user workflows, the parts of the model should be cues to examine how the workflow is constructed and what factors influence its construction. When building new features, it can be helpful to ask the following questions to determine viability:

  • Are the steps we are creating truly atomic and usable in multiple workflows?
  • Are we supplying software components that give flexibility to a workflow?
  • What effect will physical factors have on the atomic components in the workflow?
  • How do psycho/social/cultural factors influence users’ choices about the components they are using in the workflow?

Hamburg Train Station (Photo: Gemma Petrie)

Design Principles & Recommendations

  • New features should be atomic elements, not complete user workflows.
  • Don’t be prescriptive, but facilitate efficiency.
  • Give users the tools to build their own workflows.
  • While software and physical components are important, psycho/social/cultural factors are equally important and influential in users’ workflow decisions.
  • Make it easy for users to actively control notifications and other flow disruptors.
  • Leverage online content to support and improve offline experiences.
  • Help users bridge the gap between primary-device workflows and secondary devices.
  • Make it easy for users to manage a variety of identities across various devices and services.
  • Help users manage memory gaps related to revisiting and curating saved content.

Future Research Phases

The Firefox User Research team conducted additional phases of this research in Canada, the United States, Japan, and Vietnam. Check back for updates on our work.

References:

¹ Pace, S. (2004). A grounded theory of the flow experiences of Web users. International journal of human-computer studies, 60(3), 327–363.


Firefox Workflow User Research in Germany was originally published in Firefox User Experience on Medium, where people are continuing the conversation by highlighting and responding to this story.

Air MozillaThe Joy of Coding - Episode 105

The Joy of Coding - Episode 105 mconley livehacks on real Firefox bugs while thinking aloud.

The Mozilla BlogDefending Net Neutrality: A Day of Action

Mozilla is participating in the Day of Action with a new podcast, video interviews with U.S. Senators, a special Firefox bulletin, and more

As always, Mozilla is standing up for net neutrality.

And today, we’re not alone. Hundreds of organizations — from the ACLU and GitHub to Amazon and Fight for the Future — are participating in a Day of Action, voicing loud support for net neutrality and a healthy internet.

“Mozilla is supporting the majority of Americans who believe the web belongs to individual users, without interference from ISP gatekeepers,” says Ashley Boyd, Mozilla’s VP of Advocacy. “On this Day of Action, we’re amplifying what millions of Americans have been saying for years: Net neutrality is crucial to a free, open internet.”

“We are fighting to protect net neutrality, again, because it’s crucial to the future of the internet,” says Denelle Dixon, Mozilla Chief Legal and Business Officer. “Net neutrality prohibits ISPs from engaging in prioritization, blocking or throttling of content and services online. As a result, net neutrality serves to enable free speech, competition, innovation and user choice online.”

The Day of Action is a response to FCC Commissioner Ajit Pai’s proposal to repeal net neutrality protections enacted in 2015. The FCC voted to move forward with Pai’s proposal in May; we’re currently in the public comment phase. You can read more about the process here.

Here’s how Mozilla is participating in the Day of Action — and how you can get involved, too:

Nine hours of public comments. Over the past few months, Mozilla has collected more than 60,000 comments from Americans in defense of net neutrality.

“The internet should be open for all and not given over to big business,” wrote one commenter. “Net neutrality protects small businesses and innovators who are just getting started,” penned another.

We’ll share all 60,000 comments with the FCC. But first, we’re reading a portion of them aloud in a nine-hour, net neutrality-themed spoken-word marathon.

And we’re showcasing the comments on Firefox, to inspire more Americans to stand up for net neutrality. When Firefox users open a new window today, a different message in support of net neutrality will appear in the “snippet,” the bulletin above and beneath the search bar.

It’s not too late to submit your own comment. Visit mzl.la/savetheinternet to add your voice.

A word from Senators Franken and Wyden. Senator Al Franken (D-Minnesota) and Senator Ron Wyden (D-Oregon) are two of the Senate’s leading voices for net neutrality. Mozilla spoke with both about net neutrality’s connection to free speech, competition, and innovation. Here’s what they had to say:

Stay tuned for more interviews with Congress members about the importance of net neutrality.

Comments for the FCC. Mozilla’s Public Policy team is finishing up comments to the FCC on the importance of enforceable net neutrality to ensure that voices are free to be heard. They will speak to how net neutrality fundamentally enables free speech, online competition and innovation, and user choice. Like our comments from 2010 and 2014, we will defend all users’ ability to create and consume online, and will defend the vitality of the internet. User rights should not be used in a political play.

Net neutrality podcast. We just released the second episode of Mozilla’s original podcast, IRL, which focuses on who wins — and who loses — if net neutrality is repealed. Listen to host Veronica Belmont explore the issue in depth with a roster of guests holding different viewpoints, from Patrick Pittaluga of Grubbly Farms (a maggot farming business in Georgia), to Jessica González of Free Press, to Dr. Roslyn Layton of the American Enterprise Institute.

Subscribe wherever you get your podcasts, or listen on our website.

Today, we’re amplifying the voices of millions of Americans. And we need your help: Visit mzl.la/savetheinternet to join the movement. The future of net neutrality — and the very health of the internet — depends on it.


Note: This blog was updated on July 12 at 2:30 p.m. ET to reflect the most recent number of public comments collected.

The post Defending Net Neutrality: A Day of Action appeared first on The Mozilla Blog.

Air MozillaWebdev Extravaganza: July 2017

Webdev Extravaganza: July 2017 Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on. This...

Air MozillaRain of Rust -4th online meeting

Rain of Rust -4th online meeting An online event - part of the RainofRust campaign

QMOFirefox 55 Beta 7 Testday Results

Hello Mozillians!

As you may already know, last Friday – July 7th – we held a new Testday event, for Firefox 55.0b7.

Thank you all for helping us make Mozilla a better place – Gabriela, Athira Appu, Surentharan.R.A.

From India team:  Surentharan.R.A, Fahima Zulfath A, Kaviya D, Baranitharaan,  Nagarajan .R, Terry John Paul.P, Vinothini.K , ponmurugesh.M, Haritha K Sankari.

From Bangladesh team: Nazir Ahmed Sabbir, Maruf Rahman, Md.Tarikul Islam Oashi, Sajal Ahmed, Tanvir Rahman, Sajedul Islam, Jakaria Muhammed Jakir, Md. Harun-Or-Rashid sajjad, Iftekher Alam, Md. Almas Hossain, Md.Rahatul Islam, Jobayer Ahmed Mickey, Humayra Khanum, Anika Alam, Md.Majedul islam, Md. Mafidul Islam Porag, Farhadur Raja Fahim, Azad Mohammad, Mahadi Hasan Munna, Sayed Ibn Masud.

– several test cases executed for the Shutdown Video Decoder and Customization features;

– 4 bugs verified: 1359681, 1367338, 1367627, 1370645.

Thanks for another successful testday 🙂

We hope to see you all in our next events, all the details will be posted on QMO!

Mozilla L10NThe Ten Hands

Gaul is entirely occupied by the Romans. Well, not entirely… One small village of indomitable Gauls still holds out against the invaders.

While most of Mozilla gathered in San Francisco, a small group of ten hands gathered in a small village in Slovenia.

Matjaz hosted me, Stas, Adrian and Jarek to work on Pontoon and other aspects of localization infrastructure at Mozilla. Jarek is a volunteer contributor extraordinaire to Pontoon, and we were finally able to have him join us for his first Mozilla gathering. Adrian is taking a break from his work on Socorro and will take on work on Pontoon, at least for this quarter.

Adrian, Stas and I hadn’t really looked at the Pontoon code base, so this was a great opportunity to get us onboarded. We also had the chance to talk about some of the pros and cons of the basic data models powering Pontoon.

Jarek and Matjaz made great progress on getting errors and warnings from compare-locales hooked up to Pontoon. The PR already has 43 commits and is shaping up nicely. It’s been good to see that we were able to use compare-locales as is, though we might want to optimize one API. I was able to help a bit here in person. It’s interesting how efficient five such minutes can be, compared to our usual day-long review round trips across continents and the boundary between work and free time.

Adrian spent quite some time working on a setup of Pontoon on docker-compose. Having done that myself for the l10n automation, I was his tester here. The PR is now ready for review, which is also on me. Promise.

Stas started to experiment with graphene-django to expose a GraphQL API for Pontoon. That was surprisingly easy to get started. It was also surprisingly bad in performance. He’s written down his notes on the wiki, and we’ll reconvene soon on what the next steps should be. And yes, we abused the word “REST” in a lot of different ways during that week.

Stas and I made a lot of progress on support for Fluent in our core infrastructure, adding support for that in compare-locales and elmo. Stas finalized the support for Fluent in compare-locales. I added support for the diff view in elmo, which required a few updates to compare-locales, too. With the work on compare-locales 2.0, I also updated elmo to support both the legacy JSON output as well as the new JSON output from 2.0.

The days were just packed, as they say. We did go out and explore the area, mostly to get food. In a place where the cab driver has a day job, you have to. In a place where you can see three different countries from your porch, it also means you might go through passport control to go to dinner. Hello Croatia and Croatian kunas, where dinner prices are not in euros. Last but not least, a big Thank You to Eva and Robert from the Cuk Wine House for their hospitality.

The images are by Adrian Gaudebert and licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.

The Mozilla BlogMozilla Fully Paid Parental Leave Program Officially Rolls Out Worldwide

For most countries around the world, school is out, and parents are reconnecting with their kids to enjoy road trips and long days. Many of our Mozilla employees have benefited from the expanded parental leave program we introduced last year to spend quality time with their families. The program offers childbearing parents up to 26 weeks of fully paid leave, and non-childbearing parents (foster and adoptive parents, and partners of childbearing parents) up to 12 weeks of fully paid leave.

This July, we completed the global roll-out of the program, making Mozilla a leader in the tech industry and among organizations with a worldwide employee base.

What makes Mozilla’s parental leave program unique

Here’s what sets us apart from other tech companies and other organizations:

  • 2016 Lookback: A benefit for employees who welcomed a child in the calendar year prior to the expanded benefit being rolled out.
  • Global Benefit: As a US-based company with employees all over the world, we chose to offer it to employees around the world — US, Canada, Belgium, Finland, France, Germany, the Netherlands, Spain, Sweden, UK, Australia, New Zealand, Taiwan.
  • Fully Paid Leave: All parents receive their full salary during their leave.

What our Mozilla employees have to say:

“Our second son was born in January 2017. When I heard about the new policy that Mozilla will launch globally one month before, I first was not sure how that will work out with the statutory parental leave rules in Germany. But I have to say that I first enjoyed working with Rachel to work out all the details — and now I get enjoy a summer with my family. The second child has changed my life completely, it was hard to match work and family needs. I am grateful that I will have time to give back to my son and my family and grow even more closer together.”  Dominik Strohmeier, based in Berlin, Germany.  Two children, with second child born in 2017.

Chelsea Novak with baby

“Our daughter was born in 2016,” says Chelsea Novak, Firefox Editorial Lead. “When Mozilla announced this new parental leave policy we were excited for parents that were expecting in 2017, but a little sad that we missed out. Having Mozilla extend these new parental leave benefits to us was very generous and gave us some precious time with our family that we weren’t expecting.”  Chelsea and Matej Novak, both longtime Canadian Mozilla employees, based in Toronto. Two children, ages 1 and 3.

“I started with Mozilla in the beginning of 2016, and delivered my child that same year. When I first heard of the policy, I didn’t think the new parental leave would apply to me. Then, Rachel told me the good news. I was amazed that they would extend the parental leave policy to me so that I can take additional time off in 2017.  Mozilla is so generous to parents like myself to enjoy special moments like watching my daughter take her first steps or saying her first words.”   Jen Boscacci, based in Mountain View, California.  Two children, with second child born in 2016.

Maura Tuohy with baby

“Being able to take advantage of the 26 weeks of leave — and have the flexibility of when to take it — was an incredible gift for our family. Knowing that the company was so supportive made the experience as stress free as having a newborn can be! I’m so grateful to work for such a progressive and kind company — not just in policies but in culture and practice.”  Maura Tuohy, based in San Francisco.  Her first child was born in 2017.

This program helps us embrace and celebrate families of all kinds, whether formed through birth, adoption, or foster care. We expanded our support for both childbearing and non-childbearing parents, independent of gender or situation. We value our Mozilla employees, because juggling work and family responsibilities is no easy feat.

The post Mozilla Fully Paid Parental Leave Program Officially Rolls Out Worldwide appeared first on The Mozilla Blog.

Mozilla VR BlogEasily customized environments using the Aframe-Environment-Component


Get a fresh and new environment for your A-Frame demos and experiments with the aframe-environment component!

Just include the aframe-environment-component.min.js component in your html file, add an <a-entity environment></a-entity> to your <a-scene>, and voila!


<html>  
<head>  
 <script src="path/to/aframe.js"></script>
 <script src="path/to/aframe-environment-component.js"></script>
</head>  
<body>  
    <a-scene>
        <a-entity environment></a-entity>
    </a-scene>
</body>  
</html>  

The component generates a new environment with presets for lights and geometry. These presets can be easily customized by using the inspector (ctrl + alt + i) and tweaking the individual values until you find the look you like. Presets are combinations of property values that define a particular style; they are a starting point that you can later customize:

<a-entity environment="preset: goldmine; sunPosition: 1 5 -2; groundColor: #742"></a-entity>

You can view and try all the presets from the aframe-environment-component example page.

And of course, the component is fully customizable without a preset:

<a-entity environment="skyType: gradient; skyColor: #1d7444; horizonColor: #7ae0e0; groundTexture: checkerboard; groundColor: #523c60; groundColor2: #544264; dressing: cubes; dressingAmount: 15; dressingColor: #7c5c45"></a-entity>

TIP: If you are using the inspector and are happy with the look of your environment, open your browser's dev tools (F12) and copy the latest parameters from the console.

Customizing your environment

The environment component defines four different aspects of the scene: lighting, sky, ground terrain and dressing objects.

Lighting and mood

The lighting in your scene is easily adjusted by changing the sunPosition property. Scene objects will subtly receive a bounce light from the ground, and the color of the fog will also change to match the sky color at the horizon.


To fully control the lighting of the scene, you can disable the environment lights with lighting: none, and you can set lighting: point if you want a point light instead of a distant light for the sun.

Add realism to your scene by toggling on the shadow parameter and adding the shadow component to objects that should cast shadows onto the ground. Learn more about A-Frame shadows.
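
A minimal sketch of that setup (the preset name, box position, and colors here are illustrative; shadow is assumed to be a boolean property, as described above):

```html
<a-scene>
  <!-- shadow: true lets the environment's ground receive shadows -->
  <a-entity environment="preset: forest; shadow: true"></a-entity>
  <!-- A-Frame's shadow component marks this box as a shadow caster -->
  <a-box position="0 1 -4" shadow="cast: true"></a-box>
</a-scene>
```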

Sky and atmosphere

The 200m radius sky dome can have a basic color, a top-down gradient, or a realistic atmospheric appearance by using skyType: atmosphere. Lowering the sun near or below the horizon will give you a starry night sky.

Ground terrain

The ground is a flat subdivided plane that can be deformed into various terrain patterns like hills, canyons, or spikes. Its appearance can also be customized via its texture and colors.

The center play area where the player is initially positioned is always flat, so nobody will get buried ;)

The grid property will add a grid texture to the ground and can be adjusted to different colors and patterns.
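
Putting the terrain options together in one hypothetical entity (the ground and grid values below are assumptions based on the component’s README, so check the repository for the exact value lists):

```html
<a-entity environment="ground: canyon;
                       groundTexture: checkerboard;
                       groundColor: #553e35;
                       grid: dots;
                       gridColor: #ffce00"></a-entity>
```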

Dressing objects

A sky and ground with nothing more could be a little too simple sometimes. The environment component includes many families of objects that can be used to spice up your scene, including cubes, pyramids, towers, mushrooms and more. Among other parameters, you can adjust their variation using dressingVariance, or the ratio of objects that will be inside or outside the play area with dressingOnPlayArea.

All dressing objects share the same material and are all merged in one single geometry for better performance.


Further customization

To see the full list of parameters of the component, check out GitHub's aframe-environment-component repository.

Help make this component better

We could use your help!

  • File GitHub issues
  • Create a new preset
  • Share your presets, so anyone can copy/paste and even try them live
  • Create new dressing geometries
  • Create new procedural textures
  • Create new ground types
  • Create new grid styles

Feel free to send a pull request to the repository!

Performance considerations

The main idea of this component is to provide a complete and visually interesting environment by including a single JavaScript file, with no extra includes or dependencies. This requires that assets be included in the JavaScript or (in most cases) generated procedurally. Despite the computing time and increased file size, both options are normally faster than requesting and waiting for additional textures or model files.

Apart from dressingAmount, the choice of parameters makes little difference to performance.

Mozilla L10NNew L10n Report: July Edition

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet.

Welcome!

New localizers:

Are you a locale leader and want us to include new members in our upcoming reports? Contact us!

New community and locales added

  • azz (Sierra Puebla Nahuatl): it was onboarded in recent months and already made a lot of progress.
  • be (Belarusian): when we sadly had to drop Belarusian from our desktop and mobile builds due to inactivity, new contributors immediately contacted us to revive localization work. A big shout-out to this newly formed community!
  • tg (Tajik): successfully localized their first project, Thimble!

New content and projects

What’s new or coming up in Firefox desktop

Big changes are coming to Firefox 57, with some of them sneaking in as early as 56 (the version currently localized in Nightly, until August 7). The new feature with the most significant impact on localization is the new Onboarding experience: it consists of both an Onboarding overlay (a tour displayed by clicking the fox icon in the New Tab page) and Onboarding notifications, displayed at the bottom of the New Tab page.

If you haven't seen them yet (we always make sure to tweet the link), we strongly suggest reading the latest news about Photon in the Photon Engineering Newsletter (here's the latest, #8).

On a side note, you should be using Nightly for your daily browsing: it's exciting (changes every day!) and fundamental to ensuring the quality of your localization.

There is a bug on file to stop localizing about:networking, given how small the target audience is (users debugging network issues) and how obscure some of these strings are.

A final reminder: The deadline to update Firefox Beta is July 25. Remember that Firefox Beta should mainly be used to fix small issues, since new translations added directly to Beta need to be manually added to the Nightly project in your localization tools.

What’s new or coming up in Test Pilot

The new set of experiments, originally planned for July 5, has been moved to July 25. Also make sure to read this mail on dev-l10n if you have issues testing the website on the dev server.

What’s new or coming up in mobile
  • Mobile (both Android and iOS projects) is going to align with the visual changes coming up on desktop by getting a revamped UI thanks to Photon. Check it out!
    • Firefox for Android Photon meta-bug.
    • Please note that due to the current focus on desktop with Firefox 57, Firefox for Android work is slower than usual. Expect to see more and more Photon updates in Nightly builds as time passes, though.
  • We recently launched Focus for Android v1 with 52 languages! Have you tried it out yet? Reviews speak for themselves. Expect a new release soon, and with that, more locales (and of course, more features and improvements to the app)!
  • Mobile experiments are on the rise. The success of Focus is paving the way to many other great mobile projects. Stay tuned on our mailing list because there just may be cool stuff arriving very soon!
What’s new or coming up in web projects
  • With the new look and feel, a new set of Firefox pages and a unified footer were released for l10n. Make sure to localize firefox/shared.lang before localizing the new pages. Try to complete these new pages before the deadline, or they will redirect to English on August 15.
  • Monthly snippets have expanded to more locales. Last month, we launched the first set in RTL locales: ar, fa, he, and ur. The team is considering creating region-specific snippets.
  • A set of Internet Health pages were launched. Some recent updates were made to fit the new look and layout. Many communities have participated in localizing some or all pages.
  • The newly updated Community Participation Guidelines are now localized in six languages: de, es, fr, hi-IN, pt-BR, and zh-TW. Thanks to the impacted communities for reviewing the document before publication.
  • Expect more updates of existing pages in the coming months so the look and feel are consistent between pages.
What’s new or coming up in Foundation projects
  • Thimble, the educational code editor, got a makeover and useful new features, all of which were localized in more than 20 locales.
  • The fundraising campaign will start ramping up earlier than November this year, so it’s a great idea to make sure the project is up-to-date for your locale, if it isn’t already.
  • The EU Copyright campaign is in slow mode over the summer while MEPs are on holiday, but we will be back at full speed in September before the committees vote.
  • We will launch an Internet of Things survey over the summer to get a better understanding of what people know about IoT.
Newly published localizer facing documentation
Events
  • Next l10n workshop will be in Asuncion, Paraguay (August)
  • Berlin l10n workshop is coming up in September!
  • Want to showcase an event coming up that your community is participating in? Reach out to any l10n-driver and we’ll include that (see links to emails at the bottom of this report)
Opportunities

Accomplishments

Some numbers
Friends of the Lion

Image by Elio Qoshi

  • Shout-out to all Mozilla RTL communities who have been doing a great job at filing and fixing bugs on mobile – as well as providing much needed feedback and insights during these past few months! Tomer, Reza, ItielMaN, Umer, Amir, Manel, Yaron, Abdelrahman, Yaseen – just to name a few. Thank you!
  • Thanks to Jean-Bernard and Adam for jumping in to translate the new Firefox pages in French.
  • Thanks to Nihad of the Bosnian community for leading the effort in localizing the mozilla.org site, which is now in production.
  • A big thank you to Duy Thanh for his effort in rebuilding the Vietnamese community and his ongoing localization contribution.
  • The kab (Kabyle) community started a while back; their engagement across all products and projects is impressive.

Know someone in your l10n community who’s been doing a great job and should appear here? Contact one of the l10n-drivers and we’ll make sure they get a shout-out (see list at the bottom)!

Useful Links

Questions? Want to get involved?

Did you enjoy reading this report? Let us know how we can improve by reaching out to any one of the l10n-drivers listed above.

SeaMonkey2.48 is spinning… (and so is my head)…

Greetings All,

Just an FYI: we are spinning 2.48…  yes.  It’s been a long time; but, as much as I would like, Murphy isn’t taking his long-deserved holiday..  Really, Murphy…  TAKE YOUR HOLIDAY!

Anyway, I expect a choppy ride (re: Murphy), so I can’t say when 2.48 will be released.  (First bump already hit: we had 2 OS X systems running and both decided to go AWOL.  They came back; now one has decided to go AWOL again.  So we’re down to 1 OS X builder.  (*oh yay*) ;/ )

However, what I do know is that updates aren’t working, so when 2.48 is released, the usual “download and install” will have to suffice.  This is something I’m sorry about, as I had hoped to get 2.48 on the updates train…  but it’s just so darn complicated.

Will keep you guys/gals posted.

:ewong


Mozilla VR BlogLink Traversal and Portals in A-Frame


Demo for the impatient (it requires controllers: Oculus, HTC Vive, Daydream, or GearVR)

A-Frame 0.6.0 and Firefox Nightly now support navigating across pages while in VR mode. WebVR has finally earned the Web badge: the Web gets its name from the structure of interconnected content that uses the link as the glue. Until now, the VR Web was fragmented and had to be consumed in bite-size pieces, since the VR session was not preserved on page navigation. In the first iteration of the WebVR API we focused on displaying pixels and meeting the performance requirements on a single page. Thanks to the efforts of Doshheng Mu and Kip, Firefox Nightly now also ships the mechanism that enables a seamless transition between VR sites.


Link traversal in VR relies on the vrdisplayactivate event. It is fired on the window object on page load if the preceding site was presenting content in the headset.

To enter VR mode for the first time, the user is expected to explicitly trigger VR mode with an action like a mouse click or a keyboard shortcut, to prevent sites from taking control of the headset inadvertently. Once VR is engaged, subsequent page transitions can present content in the headset without further user intervention. A page can be granted permission to enter VR automatically by simply attaching an event handler:

window.addEventListener('vrdisplayactivate', function (evt) {
  /* A page can now start presenting in the headset */
  evt.display.requestPresent([{ source: myCanvas }]).then(function () { ... });
});

Links in A-Frame

A-Frame 0.6.0 ships with a link component and an a-link primitive. The link component can be configured in several ways:

  <a-entity link="href: index.html; title: My Home; image: #homeThumb"></a-entity>

  • href: URL the link points to
  • title: text displayed on the link (href is used if not defined)
  • on: event that triggers link traversal
  • image: 360 panorama used as scene preview in the portal
  • color: background color of the portal
  • highlighted: true if the link is highlighted
  • highlightedColor: color used to highlight the link
  • visualAspectEnabled: enable/disable the visual aspect if you want to implement your own

The a-link primitive provides a compact interface that feels like the traditional <a> tag that we are all used to.

  <a-link href="index.html" image="#thumbHome" title="my home"></a-link>

The image property points to the <img> element that will be used as the background of the portal, and title is the text displayed on the link.

The UX of VR links

Using the wonderful art of arturitu, both the A-Frame component and primitive come with a first interpretation of how links could be visually represented in VR. It is a starting point for an exciting conversation that will develop over the next few years.

Our first approach addresses several problems we identified:

Links’ visual appearance should be consistent.

So users can quickly identify the interconnected experiences at a glance. Thanks to Kip's shader wisdom, we chose a portal representation that gives each link a distinct look representative of the referenced content. A-Frame provides a built-in screenshot component to easily generate the 360 panoramas necessary for the portal preview.
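The screenshot component is configured on the scene element; a minimal sketch, where the width and height values are illustrative assumptions:

```html
<!-- Sketch: capture an equirectangular panorama for use as a portal
     preview image. The property values here are assumptions, not
     documented defaults. -->
<a-scene screenshot="width: 4096; height: 2048">
  <!-- scene content -->
</a-scene>
```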


Links should be useful at any distance.

Portals help discriminate between nearby links, but that information becomes less useful at a distance. From far away, portals alone can be difficult to spot, because they might blend with the scene background or become hard to see at wide angles. To solve the problem, we made links fade into a solid fuchsia circle with a white border that grows thicker with distance, so all links have a consistent look (colors are configurable). Portals will also face the camera to avoid wide angles that reduce the visible surface.
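The configurable colors are exposed through the link component's color and highlightedColor properties listed earlier; a small sketch with illustrative values:

```html
<!-- Sketch: overriding the default portal colors. The hex values
     are illustrative, not the component defaults. -->
<a-entity link="href: index.html; color: #e0218a; highlightedColor: #ffffff"></a-entity>
```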


Links should provide information about the referenced experience.

One can use the surrounding space to contextualize a link and give a hint about where it will lead. In addition, links themselves display either the title or the URL they refer to. This provides additional information to the user about the linked content.


There should be a convenient way to explore the visible links.

While portals are a good way to preview an experience, it can be hard to explore the available options if the user has to move around the scene to inspect the links one by one. We developed a peek feature that allows the user to point at any link and quickly zoom into the preview without having to move.


Next steps

One of the limitations of the current API is that a web developer needs to manually point to the thumbnails that the links use to render the portals. We want to explore ways, via meta tags, the web manifest, or other conventions, for a site to provide the thumbnail for third-party links to consume. This way a web developer has more control over how their website will be represented in other pages.

Another rough edge is what happens when, after navigation, a page takes a long time to load or ends up in an error. There's no way at the moment to inform the user of those scenarios while keeping the VR headset on. We want to explore ways for the browser itself to intervene in VR mode and keep the user properly informed at each step when leaving, loading, and finally navigating to a new piece of VR content.

Conclusion

With in-VR page navigation we're now one step closer to materializing the Open Metaverse on top of the existing Web infrastructure. We hope you find our link proposal inspiring and that it sparks a good conversation. We don't really know what links will look like in the future: doors, inter-dimensional portals, or exit burritos... We cannot wait to see what you come up with. All the code and demos are already available as part of the 0.6.0 version of A-Frame. Happy hacking!


Air MozillaReps Weekly Meeting Jul. 06, 2017

Reps Weekly Meeting Jul. 06, 2017 This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

The Mozilla BlogNew Research: Is an Ad-Supported Internet Feasible in Emerging Markets?

Fresh research conducted by Caribou Digital and funded by Mozilla explores digital advertising models in the Global South — whether they can succeed, and what that means for users, businesses, and the health of the Internet


Since the Internet’s earliest days, advertising has been the linchpin of the digital economy, supporting businesses from online journalism to social networking. Indeed, two of the five largest companies in the world — Facebook and Google — earn almost all of their revenue through digital advertising.

As the Internet reaches new users in India, Kenya, and elsewhere across the Global South, this model is following close behind. But is the digital advertising model that has evolved in developed economies sustainable in emerging economies? And if it’s not: What does it mean for the billions of users who are counting on the Internet to unlock new pathways to education, economic growth, and innovation?

Publishers see drastically less revenue per user in these regions, partly because low-income populations are less valuable to advertisers, and partly because constraints on the user experience — low-quality hardware, unreliable network coverage, and a dearth of local content — fundamentally limit how people engage with digital content and services.

As a result, users in emerging markets will have fewer choices, as local content providers and digital businesses will struggle to earn enough from their home markets to compete with the global platforms.

Today, we’re publishing “Paying Attention to the Poor: Digital Advertising in Emerging Markets.”

It’s fresh research conducted by Caribou Digital and funded by Mozilla that explores the barriers traditional digital advertising models face in emerging economies; the consequent impact on users, businesses, and the health of the Internet; and what new models are emerging.  

In summary:

Ad revenue-wise, there is an order-of-magnitude difference between users in developed economies and users in the Global South.

Facebook earns a quarterly ARPU (average revenue per user) of $1.41 in Africa and Latin America, and $2.07 in Asia-Pacific — an order of magnitude less than  the $19.81 it earns in the U.S. and Canada

As a result, just over half of Facebook’s total global revenue comes from only 12% of its users


The high cost of data in emerging markets is one driver of ad blocking

Due to prohibitive data costs and slower network speeds, many Internet users in emerging markets use proxy browsers, such as UC Browser or Opera Mini, which reduce data consumption and also block ads

One report by PageFair claims over 309 million users around the world used mobile ad blockers in 2016 — with 89 million hailing from India and 28 million hailing from Indonesia


A dearth of user data — or, the “personal data gap” — presents another challenge to advertisers.

In developed economies, data profiling and ad targeting have been a boon to advertisers. But in the Global South, people have much smaller digital footprints

Limited online shopping, a glut of open-source Android devices, and a tendency toward multiple, fragmented social media accounts dilute the value of personal data to advertisers


Limited advertising revenue in emerging markets challenges local innovation and competition.

Publishers and developers follow the money. As a result, content is targeted to, and localized for, developed markets like the U.S. or Japan — even producers in emerging markets will ignore their domestic market in favor of more lucrative ones

Large companies like Facebook have the resources to subsidize forays into unprofitable markets; smaller companies do not. As a result, the reigning giants become further entrenched


A lack of local content can have deeply negative implications.

Availability of local content is a key demand-side driver for increasing Internet access for marginalized populations, and localized media can foster inclusion and support democratic institutions

But without viable economic models for supporting this content, opportunity is squandered. Presently, the majority of digital content — including user-generated content such as Wikipedia — is in English


The outlook for digital advertising-supported businesses in emerging markets is bleak.

Low monetization rates will continue to limit the types of Internet businesses that can flourish in the Global South

To succeed, businesses in the Global South have to build more strategically, working toward profitability (and not user growth) from the very beginning


These constraints demand new business model innovations for an Internet ecosystem that is evolving differently in the Global South

“Sponsored data” or “incentivized action” models which offer free data in return for engagement with an advertiser’s content are one approach to mitigating the access and affordability constraint

Transactional revenue models, such as those seen in digital financial services, will play an increasingly important role as payments infrastructure matures


You can read the full report here.

In the coming weeks and months, Mozilla and Caribou Digital will share our findings with allies across the Internet health space — the network of NGOs, institutions, and individuals who are working toward a more healthy web. We hope our learnings will help unlock innovative solutions that balance commercial success with openness and freedom online.

The post New Research: Is an Ad-Supported Internet Feasible in Emerging Markets? appeared first on The Mozilla Blog.

Open Policy & AdvocacyG20 Nations Must Set Clear Priorities For Digital Agenda

Home to two-thirds of the world’s population and 90 percent of its economic output, the G20 countries are a powerhouse that have yet to take on a coordinated digital agenda.

This could be about to change. Under the German presidency of the G20, digital concerns – from getting people connected to protecting people’s data once they are – have been made a priority through a new ‘Roadmap for Digitalisation’. Now the question is: will other G20 members like Brazil, China and Russia be willing to translate this initial support into firm G20 commitments that Argentina will continue to drive during the next G20 presidency?

As three leading organisations from the Internet community, we are looking to the world leaders that will gather in Hamburg, Germany for the summit on 7-8 July, to set clear priorities for the G20 digital agenda.

They have good reason to. Digital industries have become central to all G20 economies and must form an integral part of their agenda. If the G20 are looking at their future prosperity and security, they must ensure the digital economy brings connectivity, opportunities and benefits for everyone, while guarding against the risk that digital technologies could drive inequality and exclusion. It is more important now than ever to lay the foundations for an effective, principle-based security framework that respects fundamental rights and ensures user trust. Unilateral or short-sighted solutions, such as on encryption, will fall short of addressing these challenges.

We believe that Germany’s presidency has set a good precedent for other G20 countries – and especially Argentina – to follow. The proposed G20 “Roadmap for Digitalisation” provides the right framing to address many of the digital community’s current concerns: from strengthening trust in the digital economy and consumer online protections, to bridging digital divides.

Now, to turn intention into action, the resulting agreement from the July summit (the communique) must acknowledge and elevate these issues as part of the overall G20 agenda and form an integral part of the official strategies and policies for G20 leaders. Priorities should include digital access and bridging the current divides, not just in terms of connectivity but also in terms of enabling meaningful access and empowering people. Infrastructure, skills-building and inclusion must be the drivers to shape an open, free and transformative internet – bringing sustainable development and opportunities to all.

Today, heads of state have a historical opportunity to lay the right foundations for a global digital agenda. We hope that they use it.

Cathleen Berger, Global Engagement Lead, Mozilla

Constance Bommelaer, Senior Director, Global Internet Policy, Internet Society

Craig Fagan, Policy Director, Web Foundation

This is a joint blog post by the Internet Society, Mozilla and the World Wide Web Foundation.

The post G20 Nations Must Set Clear Priorities For Digital Agenda appeared first on Open Policy & Advocacy.

Air MozillaThe Joy of Coding - Episode 104

The Joy of Coding - Episode 104 mconley livehacks on real Firefox bugs while thinking aloud.

Mozilla Add-ons BlogJuly’s Featured Extensions


Pick of the Month: Privacy Badger

by EFF Technologists
Protects you from spying ads and invisible trackers.

“Works without any problems, causes no site loading issues, and is more trustworthy than other, similar programs.”

Featured: AdBlock for Firefox

by AdBlock
Robust ad blocker that takes aim against all forms of ads—pop-ups, banners, pre-rolls, and more.

“Best ad blocker out there.”

Featured: Disconnect

by Disconnect
Another great privacy protecting extension, Disconnect blocks invisible trackers and helps speed up your Firefox experience.

“One of the most important browser add-ons out there. Thanks!”

Featured: Easy YouTube Video Downloader Express

by Dishita
A very simple-to-use YouTube downloader, and one of the few to offer 1080p full HD and 256kbps MP3 download capability.

“Brilliant for downloading MP3’s and MP4’s.”

Nominate your favorite add-ons

Featured add-ons are selected by a community board made up of add-on developers, users, and fans. Board members change every six months. Here’s further information on AMO’s featured content policies.

If you’d like to nominate an add-on for featuring, please send it to amo-featured [at] mozilla [dot] org for the board’s consideration. We welcome you to submit your own add-on!

The post July’s Featured Extensions appeared first on Mozilla Add-ons Blog.

Mozilla L10NLocalization at Mozilla SF All Hands

Hello from sunny Northern California!

This week was Mozilla’s biannual All Hands, a gathering that brings Mozilla employees and community together for a week to hack on key Mozilla objectives. This All Hands, the Firefox team (which l10n is a part of) was in “all hands on deck” mode to make significant progress on the upcoming Firefox 57 launch. That being the case, it was a bit different and more unstructured than previous All Hands.

Fun concept art by Sean Martell depicting Mozilla building new Firefox in comic book style.

L10n-drivers this week focused on a number of improvements to the localizer experience. Pontoon saw more work dedicated to QA checks, incorporating elements of compare-locales within the platform itself. We made significant progress toward landing l20n in Firefox desktop, enabling multi-locale Firefox desktop builds, and we created l10n documentation for Pontoon and other elements of the l10n process. We also performed an initial terminology extraction from mozilla.org in order to create a Mozilla-specific termbase. Finally, we made the next version of the “Promote Firefox in your language” community marketing guide (which will be available on GitHub soon for the final feedback round) and the next version of the monthly Mozilla l10n report.

We’re also happy to announce two new communication channels for the global localization community: Facebook and Telegram. Over the years we’ve learned that different communities around the world need different ways to connect online than the more traditional means: mailing lists and IRC. Our Facebook group and Telegram channel will not replace mailing lists and IRC, but will supplement those in the hopes of increasing our reach to all l10n communities world wide and making it easier to promote the community’s contributions to localization. To avoid spammers, we moderate both of these channels, so if you’d like to join either, please reach out to your l10n community leaders (most of them are in one or both of these channels).

We’re all very excited for new Firefox to reach users on localized builds with Firefox 57. If you’re not already using Nightly in your language, please download and help us improve localization coverage of Firefox in your language. Firefox 57 will have some very exciting new features that non-English speakers will absolutely want in their language. We’ll increase messaging about exciting things in Firefox 57 throughout the next couple of months to keep you informed and allow you to start sharing them with your friends and family.

QMOFirefox 55 Beta 7 Testday, July 7th

Hello Mozillians,

We are happy to let you know that Friday, July 7th, we are organizing Firefox 55 Beta 7 Testday. We’ll be focusing our testing on the following new features: Shutdown Video Decoder and Customization.

Check out the detailed instructions via this etherpad.

No previous testing experience is required, so feel free to join us on #qa IRC channel where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better!

See you on Friday! 🙂

hacks.mozilla.orgIntroducing HumbleNet: a cross-platform networking library that works in the browser

HumbleNet started out as a project at Humble Bundle in 2015 to support an initiative to port peer-to-peer multiplayer games at first to asm.js and now to WebAssembly. In 2016, Mozilla’s web games program identified the need to enable UDP (User Datagram Protocol) networking support for web games, and asked if they could work with Humble Bundle to release the project as open source. Humble Bundle graciously agreed, and Mozilla worked with OutOfOrder.cc to polish and document HumbleNet. Today we are releasing the 1.0 version of this library to the world!

Why another networking library?

When the idea of HumbleNet first emerged we knew we could use WebSockets to enable multiplayer gaming on the web. This approach would require us to either replace the entire protocol with WebSockets (the approach taken by the asm.js port of Quake 3), or to tunnel UDP traffic through a WebSocket connection to talk to a UDP-based server at a central location.

In order to work, both approaches require a middleman to handle all network traffic between all clients. WebSockets is good for games that require a reliable ordered communication channel, but real-time games require a lower latency solution. And most real-time games care more about receiving the most recent data than getting ALL of the data in order. WebRTC’s UDP-based data channel fills this need perfectly. HumbleNet provides an easy-to-use API wrapper around WebRTC that enables real-time UDP connections between clients using the WebRTC data channel.

What exactly is HumbleNet?

HumbleNet is a simple C API that wraps WebRTC and WebSockets and hides away all the platform differences between browser and non-browser platforms. The current version of the library exposes a simple peer-to-peer API that allows for basic peer discovery and the ability to easily send data (via WebRTC) to other peers. In this manner, you can build a game that runs on Linux, macOS, and Windows, while using any web browser — and they can all communicate in real-time via WebRTC.  This means no central server (except for peer discovery) is needed to handle network traffic for the game. The peers can talk directly to each other.

HumbleNet itself uses a single WebSocket connection to manage peer discovery. This connection only handles requests such as “let me authenticate with you,” “what is the peer ID for the server named ‘bobs-game-server’?”, and “connect me to peer #2345.” After the peer connection is established, the games communicate directly over WebRTC.

HumbleNet demos

We have integrated HumbleNet into asm.js ports of Quake 2 and Quake 3, and we provide a simple Unity3D demo as well.

Here is a simple video of me playing Quake 3 against myself. One game running in Firefox 54 (general release), the other in Firefox Developer Edition.

Getting started

You can find pre-built redistributables at https://humblenet.github.io/. These include binaries for Linux, macOS, Windows, a C# wrapper, Unity3D plugin, and emscripten (for targeting asm.js or WebAssembly).

Starting your peer server

Read the documentation about the peer server on the website. In general, for local development, simply starting the peer server is good enough. By default it will run in non-SSL mode on port 8080.

Using the HumbleNet API

Initializing the library

To initialize HumbleNet, call humblenet_init() and then humblenet_p2p_init(). The second call will initiate the connection to the peer server with the specified credentials.

humblenet_init();

// this initializes the P2P portion of the library connecting to the given peer server with the game token/secret (used by the peer server to validate the client).
// the 4th parameter is for future use to authenticate the user with the peer server

humblenet_p2p_init("ws://localhost:8080/ws", "game token", "game secret", NULL);
Getting your local peer id

Before you can send any data to other peers, you need to know what your own peer ID is. This can be done by periodically polling the humblenet_p2p_get_my_peer_id() function.

// initialization loop (getting a peer)
static PeerId myPeer = 0;

while (myPeer == 0) {
  // allow the polling to run
  humblenet_p2p_wait(50);

  // fetch a peer
  myPeer = humblenet_p2p_get_my_peer_id();
}
Sending data

To send data, we call humblenet_p2p_sendto. The 3rd parameter is the send mode type. Currently HumbleNet implements two modes: SEND_RELIABLE and SEND_RELIABLE_BUFFERED. The buffered version will attempt to buffer several small messages locally and send one larger message to the other peer; the messages are broken apart again transparently on the receiving end.

void send_message(PeerId peer, MessageType type, const char* text, int size)
{
  if (size > 255) {
    return;
  }

  uint8_t buff[MAX_MESSAGE_SIZE];

  buff[0] = (uint8_t)type;
  buff[1] = (uint8_t)size;

  if (size > 0) {
    memcpy(buff + 2, text, size);
  }

  humblenet_p2p_sendto(buff, size + 2, peer, SEND_RELIABLE, CHANNEL);
}
Initial connections to peers

When connecting to a peer for the first time, you will have to send an initial message several times while the connection is established. The basic approach is to send a hello message once a second and wait for an acknowledge response before assuming the peer is connected. Thus, minimally, any application will need 3 message types: HELLO, ACK, and some kind of DATA message type.

if (newPeer.status == PeerStatus::CONNECTING) {
  time_t now = time(NULL);

  if (now > newPeer.lastHello) {
    // retry once a second until the peer acknowledges
    send_message(newPeer.id, MessageType::HELLO, "", 0);
    newPeer.lastHello = now;
  }
}
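On the receiving side, the handshake is a small state update: reply to every HELLO with an ACK, and treat an incoming ACK as confirmation that the peer is connected. A minimal sketch, using the same MessageType and PeerStatus names as above (these are application-defined types, not part of the HumbleNet API):

```cpp
// Application-defined handshake types, matching the names used above;
// these are not part of the HumbleNet API.
enum class MessageType { HELLO, ACK, DATA };
enum class PeerStatus { CONNECTING, CONNECTED };

// Update the handshake state for one incoming control message.
// Returns true if the caller should send an ACK back to the peer.
bool on_handshake_message(PeerStatus& status, MessageType type)
{
    if (type == MessageType::HELLO) {
        // Answer every HELLO; the sender keeps retrying until it sees our ACK.
        status = PeerStatus::CONNECTED;
        return true;
    }
    if (type == MessageType::ACK && status == PeerStatus::CONNECTING) {
        status = PeerStatus::CONNECTED;
    }
    return false;
}
```

Answering every HELLO (rather than only the first) keeps the handshake robust if the first ACK is lost while the connection is still settling.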
Retrieving data

To actually retrieve data that has been sent to your peer, use humblenet_p2p_peek and humblenet_p2p_recvfrom. If you assume that all messages are smaller than a maximum size, a simple loop like the following can process any pending messages. Note: messages larger than your buffer will be truncated; humblenet_p2p_peek lets you check the size of the next message on the specified channel.

uint8_t buff[MAX_MESSAGE_SIZE];
bool done = false;

while (!done) {
  PeerId remotePeer = 0;

  int ret = humblenet_p2p_recvfrom(buff, sizeof(buff), &remotePeer, CHANNEL);

  if (ret < 0) {
    if (remotePeer != 0) {
      // disconnected client
    } else {
      // error
      done = true;
    }
  } else if (ret > 0) {
    // we received data; process it
    process_message(remotePeer, buff, sizeof(buff), ret);
  } else {
    // 0 return value means no more data to read
    done = true;
  }
}
Shutting down the library

To disconnect from the peer server and other clients and shut down the library, simply call humblenet_shutdown().

humblenet_shutdown();
Finding other peers

HumbleNet currently provides a simple DNS-like method of locating other peers: register a name for one client, then create a virtual peer for that name on the other clients. Taking the client-server approach of Quake 3 as an example, have your server register its name as “awesome42”:

humblenet_p2p_register_alias("awesome42");

Then, on your other peers, create a virtual peer for awesome42.

PeerID serverPeer = humblenet_p2p_virtual_peer_for_alias("awesome42");

Now the client can send data to serverPeer and HumbleNet will take care of translating the virtual peer to the actual peer once it resolves the name.

We have two systems on the roadmap that will improve the peer discovery system.  One is an event system that allows you to request a peer to be resolved, and then notifies you when it’s resolved. The second is a proper lobby system that allows you to create, search, and join lobbies as a more generic means of finding open games without needing to know any name up front.

Development Roadmap

We have a roadmap of what we plan on adding now that the project is released. Keep an eye on the HumbleNet site for the latest development.

Future work items include:

  1. Event API
    1. Allows a simple SDL2-style polling event system so that game code can easily check for various events from the peer server in a cleaner way, such as connects, disconnects, etc.
  2. Lobby API
    1. Uses the Event API to build a means of creating lobbies on the peer server in order to locate game sessions (instead of having to register aliases).
  3. WebSocket API
    1. Adds in support to easily connect to any websocket server with a clean simple API.

How can I contribute?

If you want to help out and contribute to the project, HumbleNet is being developed on GitHub: https://github.com/HumbleNet/humblenet/. Use the issue tracker and pull requests to contribute code. Be sure to read the CONTRIBUTING.md guide on how to create a pull request.

hacks.mozilla.orgBuilding the Web of Things

Mozilla is working to create a Web of Things framework of software and services that can bridge the communication gap between connected devices. By providing these devices with web URLs and a standardized data model and API, we are moving towards a more decentralized Internet of Things that is safe, open and interoperable.

The Internet and the World Wide Web are built on open standards which are decentralized by design, with anyone free to implement those standards and connect to the network without the need for a central point of control. This has resulted in the explosive growth of hundreds of millions of personal computers and billions of smartphones which can all talk to each other over a single global network.

As technology advances from personal computers and smartphones to a world where everything around us is connected to the Internet, new types of devices in our homes, cities, cars, clothes and even our bodies are going online every day.

The Internet of Things

The “Internet of Things” (IoT) is a term to describe how physical objects are being connected to the Internet so that they can be discovered, monitored, controlled or interacted with. Like any advancement in technology, these innovations bring with them enormous new opportunities, but also new risks.

At Mozilla our mission is “to ensure the Internet is a global public resource, open and accessible to all. An Internet that truly puts people first, where individuals can shape their own experience and are empowered, safe and independent.”

This mission has never been more important than today, a time when everything around us is being designed to connect to the Internet. As new types of devices come online, they bring with them significant new challenges around security, privacy and interoperability.

Many of the new devices connecting to the Internet are insecure, do not receive software updates to fix vulnerabilities, and raise new privacy questions around the collection, storage, and use of large quantities of extremely personal data.

Additionally, most IoT devices today use proprietary vertical technology stacks which are built around a central point of control and which don’t always talk to each other. When they do talk to each other it requires per-vendor integrations to connect those systems together. There are efforts to create standards, but the landscape is extremely complex and there’s still not yet a single dominant model or market leader.

A chart of leading proprietary IoT stacks

The Web of Things

Using the Internet of Things today is a lot like sharing information on the Internet before the World Wide Web existed. There were competing hypertext systems and proprietary GUIs, but the Internet lacked a unifying application layer protocol for sharing and linking information.

The “Web of Things” (WoT) is an effort to take the lessons learned from the World Wide Web and apply them to IoT. It’s about creating a decentralized Internet of Things by giving Things URLs on the web to make them linkable and discoverable, and defining a standard data model and APIs to make them interoperable.

A table showing Web of Things standards

The Web of Things is not just another vertical IoT technology stack to compete with existing platforms. It is intended as a unifying horizontal application layer to bridge together multiple underlying IoT protocols.

Rather than start from scratch, the Web of Things is built on existing, proven web standards like REST, HTTP, JSON, WebSockets and TLS (Transport Layer Security). The Web of Things will also require new web standards. In particular, we think there is a need for a Web Thing Description format to describe things, a REST style Web Thing API to interact with them, and possibly a new generation of HTTP better optimised for IoT use cases and use by resource-constrained devices.

The Web of Things is not just a Mozilla Initiative, there is already a well established Web of Things community and related standardization efforts at the IETF, W3C, OCF and OGC. Mozilla plans to be a participant in this community to help define new web standards and promote best practices around privacy, security and interoperability.

From this existing work three key integration patterns have emerged for connecting things to the web, defined by the point at which a Web of Things API is exposed to the Internet.

Diagram comparing Direct, Gateway, and Cloud Integration Patterns

Direct Integration Pattern

The simplest pattern is the direct integration pattern where a device exposes a Web of Things API directly to the Internet. This is useful for relatively high powered devices which can support TCP/IP and HTTP and can be directly connected to the Internet (e.g. a WiFi camera). This pattern can be tricky for devices on a home network which may need to use NAT or TCP tunneling in order to traverse a firewall. It also more directly exposes the device to security threats from the Internet.

Gateway Integration Pattern

The gateway integration pattern is useful for resource-constrained devices which can’t run an HTTP server themselves and so use a gateway to bridge them to the web. This pattern is particularly useful for devices which have limited power or which use PAN network technologies like Bluetooth or ZigBee that don’t directly connect to the Internet (e.g. a battery powered door sensor). A gateway can also be used to bridge all kinds of existing IoT devices to the web.

Cloud Integration Pattern

In the cloud integration pattern the Web of Things API is exposed by a cloud server which acts as a gateway remotely and the device uses some other protocol to communicate with the server on the back end. This pattern is particularly useful for a large number of devices over a wide geographic area which need to be centrally co-ordinated (e.g. air pollution sensors).

Project Things by Mozilla

In the Emerging Technologies team at Mozilla we’re working on an experimental framework of software and services to help developers connect “things” to the web in a safe, secure and interoperable way.

Things Framework diagram

Project Things will initially focus on developing three components:

  • Things Gateway — An open source implementation of a Web of Things gateway which helps bridge existing IoT devices to the web
  • Things Cloud — A collection of Mozilla-hosted cloud services to help manage a large number of IoT devices over a wide geographic area
  • Things Framework — Reusable software components to help create IoT devices which directly connect to the Web of Things

Things Gateway

Today we’re announcing the availability of a prototype of the first component of this system, the Things Gateway. We’ve made available a software image you can use to build your own Web of Things gateway using a Raspberry Pi.

Things Gateway diagram

So far this early prototype has the following features:

  • Easily discover the gateway on your local network
  • Choose a web address which connects your home to the Internet via a secure TLS tunnel requiring zero configuration on your home network
  • Create a username and password to authorize access to your gateway
  • Discover and connect commercially available ZigBee and Z-Wave smart plugs to the gateway
  • Turn those smart plugs on and off from a web app hosted on the gateway itself

We’re releasing this prototype very early on in its development so that hackers and makers can get their hands on the source code to build their own Web of Things gateway and contribute to the project from an early stage.

This initial prototype is implemented in JavaScript with a NodeJS web server, but we are exploring an adapter add-on system to allow developers to build their own Web of Things adapters using other programming languages like Rust in the future.

Web Thing API

Our goal in building this IoT framework is to lead by example in creating a Web of Things implementation which embodies Mozilla’s values and helps drive IoT standards around security, privacy and interoperability. The intention is not just to create a Mozilla IoT platform but an open source implementation of a Web of Things API which anyone is free to implement themselves using the programming language and operating system of their choice.

To this end, we have started working on a draft Web Thing API specification to eventually propose for standardization. This includes a simple but extensible Web Thing Description format with a default JSON encoding, and a REST + WebSockets Web Thing API. We hope this pragmatic approach will appeal to web developers and help turn them into WoT developers who can help realize our vision of a decentralized Internet of Things.

We encourage developers to experiment with using this draft API in real life use cases and provide feedback on how well it works so that we can improve it.

Web Thing API spec - Member Submission

Get Involved

There are many ways you can contribute to this effort, some of which are:

  • Build a Web Thing — build your own IoT device which uses the Web Thing API
  • Create an adapter — Create an adapter to bridge an existing IoT protocol or device to the web
  • Hack on Project Things — Help us develop Mozilla’s Web of Things implementation

You can find out more at iot.mozilla.org and all of our source code is on GitHub. You can find us in #iot on irc.mozilla.org or on our public mailing list.

Web Application SecurityAnalysis of the Alexa Top 1M sites

Prior to the release of the Mozilla Observatory a year ago, I ran a scan of the Alexa Top 1M websites. Despite being available for years, the usage rates of modern defensive security technologies were frustratingly low. A lack of tooling combined with poor and scattered documentation had led to there being little awareness around countermeasures such as Content Security Policy (CSP), HTTP Strict Transport Security (HSTS), and Subresource Integrity (SRI).

A few months after the Observatory’s release — and 1.5M Observatory scans later — I reassessed the Top 1M websites. The situation appeared as if it was beginning to improve, with the use of HSTS and CSP up by approximately 50%. But were those improvements simply low-hanging fruit, or has the situation continued to improve over the following months?

Technology | April 2016 | October 2016 | June 2017 | % Change
Content Security Policy (CSP) | .005% [1] / .012% [2] | .008% [1] / .021% [2] | .018% [1] / .043% [2] | +125%
Cookies (Secure/HttpOnly) [3] | 3.76% | 4.88% | 6.50% | +33%
Cross-origin Resource Sharing (CORS) [4] | 93.78% | 96.21% | 96.55% | +.4%
HTTPS | 29.64% | 33.57% | 45.80% | +36%
HTTP → HTTPS Redirection | 5.06% [5] / 8.91% [6] | 7.94% [5] / 13.29% [6] | 14.38% [5] / 22.88% [6] | +57%
Public Key Pinning (HPKP) | 0.43% | 0.50% | 0.71% | +42%
— HPKP Preloaded [7] | 0.41% | 0.47% | 0.43% | -9%
Strict Transport Security (HSTS) [8] | 1.75% | 2.59% | 4.37% | +69%
— HSTS Preloaded [7] | .158% | .231% | .337% | +46%
Subresource Integrity (SRI) | 0.015% [9] | 0.052% [10] | 0.113% [10] | +117%
X-Content-Type-Options (XCTO) | 6.19% | 7.22% | 9.41% | +30%
X-Frame-Options (XFO) [11] | 6.83% | 8.78% | 10.98% | +25%
X-XSS-Protection (XXSSP) [12] | 5.03% | 6.33% | 8.12% | +28%

The pace of improvement across the web appears to be continuing at an astounding rate. Although a 36% increase in the number of sites that support HTTPS might seem small, the absolute numbers are quite large — it represents over 119,000 websites.

Not only that, but 93,000 of those websites have chosen to be HTTPS by default, with 18,000 of them forbidding any HTTP access at all through the use of HTTP Strict Transport Security.

The sharp jump in the rate of Content Security Policy (CSP) usage is similarly surprising. It can be difficult to implement for a new website, and often requires extensive rearchitecting to retrofit to an existing site, as most of the Alexa Top 1M sites are. Between increasingly improving documentation, advances in CSP3 such as ‘strict-dynamic’, and CSP policy generators such as the Mozilla Laboratory, it appears that we might be turning a corner on CSP usage around the web.

Observatory Grading

Despite this progress, the vast majority of large websites around the web continue to not use Content Security Policy and Subresource Integrity. As these technologies — when properly used — can nearly eliminate huge classes of attacks against sites and their users, they are given a significant amount of weight in Observatory scans.

As a result of their low usage rates amongst established websites, they typically receive failing grades from the Observatory. Nevertheless, I continue to see improvements across the board:

Grade | April 2016 | October 2016 | June 2017 | % Change
A+ | .003% | .008% | .013% | +62%
A | .006% | .012% | .029% | +142%
B | .202% | .347% | .622% | +79%
C | .321% | .727% | 1.38% | +90%
D | 1.87% | 2.82% | 4.51% | +60%
F | 97.60% | 96.09% | 93.45% | -2.8%

As 969,924 scans were successfully completed in the last survey, a decrease in failing grades by 2.8% implies that over 27,000 of the largest sites in the world have improved from a failing grade in the last eight months alone.

In fact, my research indicates that over 50,000 websites around the web have directly used the Mozilla Observatory to improve their grades, indicated by scanning their website, making an improvement, and then scanning their website again. Of these 50,000 websites, over 2,500 have improved all the way from a failing grade to an A or A+ grade.

When I first built the Observatory a year ago at Mozilla, I had never imagined that it would see such widespread use. 3.8M scans across 1.55M unique domains later, it seems to have made a significant difference across the internet. I feel incredibly lucky to work at a company like Mozilla that has provided me with a unique opportunity to work on a tool designed solely to make the internet a better place.

Please share the Mozilla Observatory and the Web Security Guidelines so that the web can continue to see improvements over the years to come!

 

Footnotes:

  1. Allows 'unsafe-inline' in neither script-src nor style-src
  2. Allows 'unsafe-inline' in style-src only
  3. Amongst sites that set cookies
  4. Disallows foreign origins from reading the domain’s contents within user’s context
  5. Redirects from HTTP to HTTPS on the same domain, which allows HSTS to be set
  6. Redirects from HTTP to HTTPS, regardless of the final domain
  7. As listed in the Chromium preload list
  8. max-age set to at least six months
  9. Percentage is of sites that load scripts from a foreign origin
  10. Percentage is of sites that load scripts
  11. CSP frame-ancestors directive is allowed in lieu of an XFO header
  12. Strong CSP policy forbidding 'unsafe-inline' is allowed in lieu of an XXSSP header

The post Analysis of the Alexa Top 1M sites appeared first on Mozilla Security Blog.

SUMO BlogImportant Platform Update

Hello, SUMO Mozillians!

We have an important update regarding our site to share with you, so grab something cold/hot to drink (depending on your climate), sit down, and give us your attention for the next few minutes.

As you know, we have been hard at work for quite some time now migrating the site over to a new platform. You were a part of the process from day one (since we knew we needed to find a replacement for Kitsune) and we would like to once more thank you for your participation throughout that challenging and demanding period. Many of you have given us feedback or lent a hand with testing, checking, cleaning up, and generally supporting our small team before, during, and after the migration.

Over time and due to technical difficulties beyond our team’s direct control, we decided to ‘roll back’ to Kitsune to better support the upcoming releases of Firefox and related Mozilla products.

The date of ‘rolling forward’ to Lithium was to be decided based on the outcome of leadership negotiations of contract terms and the solving of technical issues (such as redirects, content display, and localization flows, for example) by teams from both sides working together.

In the meantime, we have been using Kitsune to serve content to users and provide forum support.

We would like to inform you that a decision has been made on Mozilla’s side to keep using Kitsune for the foreseeable future. Our team will investigate alternative options to improve and update Mozilla’s support for our users and ways to empower your contributions in that area.

What are the reasons behind this decision?

  1. Technical challenges in shaping Lithium’s platform to meet all of Mozilla’s user support needs.
  2. The contributor community’s feedback and requirements for contributing comfortably.
  3. The upcoming major releases for Firefox (and related products) requiring a smooth and uninterrupted user experience while accessing support resources.

What are the immediate implications of this decision?

  1. Mozilla will not be proceeding with a full ‘roll forward’ of SUMO to Lithium at this time. All open Lithium-related Bugzilla requests will be re-evaluated and may be closed as part of our next sprint (after the San Francisco All Hands).
  2. SUMO is going to remain on Kitsune for both support forum and knowledge base needs for now. Social support will continue on Respond.
  3. The SUMO team is going to kick off a reevaluation process for Kitsune’s technical status and requirements with the help of Mozilla’s IT team. This will include evaluating options of using Kitsune in combination with other tools/platforms to provide support for our users and contribution opportunities for Mozillians.

If you have questions about this update or want to discuss it, please use our community forums.

We are, as always, relying on your time and effort in successfully supporting millions of Mozilla’s software users and fans around the world. Thank you for your ongoing participation in making the open web better!

Sincerely yours,

The SUMO team

P.S. Watch the video from the first day of the SFO All Hands if you want to see us discuss the above (and not only).

 

QMOFirefox 55 Beta 4 Testday Results

Hello Mozillians!

As you may already know, last Friday – June 23rd – we held a new Testday event, for Firefox 55.0b4.

Thank you all for helping us make Mozilla a better place – Tiziana Sellitto, Gabriela (gaby2300) and Avinash Sharma.

From India team: Surentharan.R.A, Fahima Zulfath, Vinothini.K, Rohit R, Sriram B, Baranitharan, terryjohn, P Avinash Sharma, AbiramiSD.

Results:

– several test cases executed for the Screenshots and Simplify page features.

– 6 bugs verified: 1357964, 1370746, 1367767, 1355324, 1365638, 1361986
– 1 new bug filed: 1376184

Thanks for another successful testday 🙂

We hope to see you all in our next events, all the details will be posted on QMO!

The Mozilla BlogThoughts on the Latest Development in the U.S. Administration Travel Ban case

This morning, the U.S. Supreme Court decided to hear the lawfulness of the U.S. Administration’s revised Travel Ban. We’ve opposed this Executive Order from the beginning as it undermines immigration law and impedes the travel necessary for people who build, maintain, and protect the Internet to come together.

Today’s new development means that until the legal case is resolved the travel ban cannot be enforced against people from the six predominantly Muslim countries who have legitimate ties or relationships to family or business in the U.S. This includes company employees and those visiting close family members.

However, the Supreme Court departed from lower court opinions by allowing the ban to be enforced against visa applicants with no connection to the U.S.  We hope that the Government will apply this standard in a manner so that qualified visa applicants who demonstrate valid reasons for travel to the U.S. are not discriminated against, and that these decisions are reliably made to avoid the chaos that travelers, families, and business experienced earlier this year.

Ultimately, we would like the Court to hold that blanket bans targeted at people of particular religions or nationalities are unlawful under the U.S. Constitution and harmfully impact families, businesses, and the global community.  We will continue to follow this case and advocate for the free flow of information and ideas across borders, of which travel is a key part.

The post Thoughts on the Latest Development in the U.S. Administration Travel Ban case appeared first on The Mozilla Blog.

hacks.mozilla.orgOpus audio codec version 1.2 released

The Opus audio codec just got another major upgrade with the release of version 1.2 (see demo). Opus is a totally open, royalty-free, audio codec that can be used for all audio applications, from music streaming and storage to high-quality video-conferencing and VoIP. Its standardization by the Internet Engineering Task Force (IETF) in 2012 (RFC 6716) was a major victory for open standards. Opus is the default codec for WebRTC and is now included in all major web browsers.

This new release brings many speech and music quality improvements, especially at low bitrates. The result is that Opus can now push stereo music bitrates down to 32 kb/s and encode full-band speech down to 14 kb/s. All that is achieved while remaining fully compatible with RFC 6716. The new release also includes optimizations, new options, as well as many bug fixes. This demo shows a few of the upgrades that users and implementers will care about the most, including audio samples. For those who haven’t used Opus yet, now’s a good time to give it a try.

hacks.mozilla.orgAn inside look at Quantum DOM Scheduling

Use of multi-tab browsing is becoming heavier than ever as people spend more time on services like Facebook, Twitter, YouTube, Netflix, and Google Docs, making them a part of their daily life and work on the Internet.

Quantum DOM: Scheduling is a significant piece of Project Quantum, which focuses on making Firefox more responsive, especially when lots of tabs are open. In this article, we’ll describe problems we identified in multi-tab browsing, the solutions we figured out, the current status of Quantum DOM, and opportunities for contribution to the project.

Problem 1: Task prioritization in different categories

Since multiprocess Firefox (e10s) was first enabled in Firefox 48, web content tabs now run in separate content processes in order to reduce overcrowding of OS resources in a given process. However, after further research, we found that the task queue of the main thread in the content process was still crowded with tasks in multiple categories. The tasks in the content process can come from a number of possible sources: through IPC (interprocess communication) from the main process (e.g. for input events, network data, and vsync), directly from web pages (e.g. from setTimeout, requestIdleCallback, or postMessage), or internally in the content process (e.g. for garbage collection or telemetry tasks). For better responsiveness, we’ve learned to prioritize tasks for user inputs and vsync above tasks for requestIdleCallback and garbage collection.

Problem 2: Lack of task prioritization between tabs

Inside Firefox, tasks running in foreground and background tabs are executed in first-come-first-served order, in a single task queue. It is quite reasonable to prioritize foreground tasks over background ones, in order to improve the responsiveness of the user experience for Firefox users.

Goals & solutions

Let’s take a look at how we approached these two scheduling challenges, breaking them into a series of actions leading to achievable goals:

  • Classify and prioritize tasks on the main thread of the content processes in 2 dimensions (categories & tab groups), to provide better responsiveness.
  • Preempt tasks running in background tabs when the preemption is not noticeable to the user.
  • Provide an alternative to multiple content processes (e10s multi) when fewer content processes are available due to limited resources.

Task categorization

To resolve our first problem, we divide the task queue of the main thread in the content processes into 3 prioritized queues: High (User Input and Refresh Driver), Normal (DOM Event, Networking, TimerCallback, WorkerMessage), and Low (Garbage Collection, IdleCallback). Note: The order of tasks of the same priority is kept unchanged.
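The three-queue split can be sketched as follows (an illustrative model, not actual Gecko code; the class and method names are invented for this sketch): tasks are drained highest priority first, and FIFO order is preserved within each queue.

```cpp
#include <array>
#include <deque>
#include <functional>

// Illustrative model of the prioritized main-thread task queue;
// not actual Gecko code.
enum class Priority { High = 0, Normal = 1, Low = 2 };

class PrioritizedQueue {
 public:
  void Dispatch(Priority p, std::function<void()> task) {
    mQueues[static_cast<size_t>(p)].push_back(std::move(task));
  }

  // Runs the next pending task, preferring High over Normal over Low.
  // Tasks of equal priority run in the order they were dispatched.
  bool RunNextTask() {
    for (auto& q : mQueues) {
      if (!q.empty()) {
        auto task = std::move(q.front());
        q.pop_front();
        task();
        return true;
      }
    }
    return false;  // nothing left to run
  }

 private:
  std::array<std::deque<std::function<void()>>, 3> mQueues;
};
```

For example, a vsync task dispatched at High priority runs before an already-queued garbage-collection task at Low priority, while two Normal-priority DOM events still run in their original order.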

Task grouping

Before describing the solution to our second problem, let’s define a TabGroup as a set of open tabs that are associated via window.opener and window.parent. In the HTML standard, this is called a unit of related browsing contexts. Tasks are isolated and cannot affect each other if they belong to different TabGroups. Task grouping ensures that tasks from the same TabGroup are run in order while allowing us to interrupt tasks from background TabGroups in order to run tasks from a foreground TabGroup.

In Firefox internals, each window/document contains a reference to the TabGroup object it belongs to, which provides a set of useful dispatch APIs. These APIs make it easier for Firefox developers to associate a task with a particular TabGroup.

How tasks are grouped inside Firefox

Here are several examples to show how we group tasks in various categories inside Firefox:

  1. Inside the implementation of window.postMessage(), an asynchronous task called PostMessageEvent will be dispatched to the task queue of the main thread:
void nsGlobalWindow::PostMessageMozOuter(...) {
  ...
  RefPtr<PostMessageEvent> event = new PostMessageEvent(...);
  NS_DispatchToCurrentThread(event);
}

With the new association of DOM windows to their TabGroups and the new dispatching API provided in TabGroup, we can now associate this task with the appropriate TabGroup and specify the TaskCategory:

void nsGlobalWindow::PostMessageMozOuter(...) {
  ...
  RefPtr<PostMessageEvent> event = new PostMessageEvent(...);
  // nsGlobalWindow::Dispatch() helps to find the TabGroup of this window for dispatching.
  Dispatch("PostMessageEvent", TaskCategory::Other, event);
}
  2. In addition to the tasks that can be associated with a TabGroup, there are several kinds of tasks inside the content process, such as telemetry data collection and resource management via garbage collection, which have no relationship to any web content. Here is how garbage collection starts:
void GCTimerFired() {
  // A timer callback to start the process of Garbage Collection.
}

void nsJSContext::PokeGC(...) {
  ...
  // The callback of GCTimerFired will be invoked asynchronously by enqueuing a task
  // into the task queue of the main thread to run GCTimerFired() after timeout.
  sGCTimer->InitWithFuncCallback(GCTimerFired, ...);
}

To group tasks that have no TabGroup dependencies, a special group called SystemGroup is introduced. Then, the PokeGC() method can be revised as shown here:

void nsJSContext::PokeGC(...) {
  ...
  sGCTimer->SetEventTarget(SystemGroup::EventTargetFor(TaskCategory::GC));
  sGCTimer->InitWithFuncCallback(GCTimerFired, ...);
}

We have now grouped this GCTimerFired task to the SystemGroup with TaskCategory::GC specified. This allows the scheduler to interrupt the task to run tasks for any foreground tab.

  3. In some cases, the same task can be requested either by specific web content or by an internal Firefox script with system privileges in the content process. We have to decide whether the SystemGroup makes sense for a request that is not tied to any window/document. For example, in the implementation of DNSService in the content process, an optional TabGroup-versioned event target can be provided for dispatching the result callback after the DNS query is resolved. If the optional event target is not provided, the SystemGroup event target in TaskCategory::Network is chosen, on the assumption that the request was fired from an internal script or service with no relationship to any window/document.
nsresult ChildDNSService::AsyncResolveExtendedNative(
  const nsACString &hostname,
  nsIDNSListener *listener,
  nsIEventTarget *target_,
  nsICancelable **result)
{
  ...
  nsCOMPtr<nsIEventTarget> target = target_;
  if (!target) {
    target = SystemGroup::EventTargetFor(TaskCategory::Network);
  }

  RefPtr<DNSRequestChild> childReq =
    new DNSRequestChild(hostname, listener, target);
  ...
  childReq->StartRequest();
  childReq.forget(result);

  return NS_OK;
}

TabGroup categories

Once the task grouping is done inside the scheduler, we assign a cooperative thread per tab group from a pool to consume the tasks inside a TabGroup. Each cooperative thread is pre-emptable by the scheduler via JS interrupt at any safe point. The main thread is then virtualized via these cooperative threads.

In this new cooperative-thread approach, we ensure that only one thread at a time can run a task. This allocates more CPU time to the foreground TabGroup and also ensures internal data correctness in Firefox, which includes many services, managers, and data designed intentionally as singleton objects.

Obstacles to task grouping and scheduling

It’s clear that the performance of Quantum-DOM scheduling depends heavily on the quality of task grouping. Ideally, each task would be associated with exactly one TabGroup. In reality, however, some tasks are designed to serve multiple TabGroups, which requires refactoring in advance to support grouping, and not all tasks can be grouped before the scheduler is ready to be enabled. Hence, to enable the scheduler aggressively before all tasks are grouped, the following design is adopted: preemption is temporarily disabled whenever an ungrouped task arrives, because we can never know which TabGroup that task belongs to.
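To illustrate that fallback, here is a minimal hypothetical sketch in JavaScript (not actual Gecko scheduler code; the task shape and names are invented): tasks labeled with a TabGroup can run preemptively, while an unlabeled task forces the scheduler back to non-preemptive execution.

```javascript
// Sketch: the scheduler may preempt labeled tasks at safe points, but
// an ungrouped (tabGroup === null) task must run without preemption,
// because we cannot know which TabGroup's data it may touch.
function runQueue(tasks) {
  const log = [];
  for (const task of tasks) {
    const preemptible = task.tabGroup !== null;
    // Record how the (hypothetical) scheduler would treat each task.
    log.push({ name: task.name, preemptible });
    task.run();
  }
  return log;
}

const trace = runQueue([
  { name: "gc-timer", tabGroup: "system", run() {} },
  { name: "legacy-task", tabGroup: null, run() {} }, // ungrouped
]);
console.log(trace);
```

Running the trace shows the first task marked preemptible and the second not, which is exactly the cost an ungrouped task imposes on the whole scheduler.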

Current status of task grouping

We’d like to thank the many engineers from various sub-modules, including DOM, Graphics, ImageLib, Media, Layout, Network, and Security, who’ve helped clear these ungrouped (unlabeled) tasks, prioritized by the frequency shown in telemetry results.

The table below shows telemetry records of tasks running in the content process, providing a better picture of what Firefox is actually doing:

The good news is that over 80% of tasks (weighted by frequency) have been cleared recently. However, there is still a fair number of anonymous tasks to be cleared. Additional telemetry will help us track the mean time between two ungrouped tasks arriving on the main thread: the larger the mean time, the more performance gain we’ll see from the Quantum-DOM scheduler.
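That metric is simple to state: given the arrival timestamps of ungrouped tasks on the main thread, take the mean of the consecutive gaps. A small sketch (timestamps invented for illustration):

```javascript
// Mean time between consecutive ungrouped-task arrivals, in ms.
// The larger this value, the longer the scheduler can stay preemptive.
function meanGap(arrivalsMs) {
  if (arrivalsMs.length < 2) return Infinity; // zero or one arrival: no gap
  let total = 0;
  for (let i = 1; i < arrivalsMs.length; i++) {
    total += arrivalsMs[i] - arrivalsMs[i - 1];
  }
  return total / (arrivalsMs.length - 1);
}

// Gaps of 50 ms and 70 ms give a mean of 60 ms.
console.log(meanGap([0, 50, 120])); // → 60
```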

Contribute to Quantum DOM development

As mentioned above, the more tasks are grouped (labeled), the more benefit we gain from the scheduler. If you are interested in contributing to Quantum-DOM, here are some ways you can help:

  • Pick any unassigned bug from the labeling meta-bug and follow this guideline for labeling.
  • If you are not familiar with these unlabeled bugs but want to help name tasks, reducing the number of anonymous tasks in the telemetry results and improving future analysis, this guideline will be helpful to you. (Update: naming anonymous tasks is going to be addressed by an automation tool in this bug.)

If you get started fixing bugs and run into issues or questions, you can usually find the Quantum DOM team in Mozilla’s #content IRC channel.

Firefox UXLet’s tackle the same challenge again, and again.

Actually, let’s not!

Our products get more design attention now that the Firefox UX team has grown from about 15 to 45 people. Designers can continue to focus on their product after the initial design is finished, instead of having to move on to the next project. This is great, as it helps us improve our products step by step. But it also takes increasing effort to keep this growing team in sync and able to answer all the questions posed to us in a timely manner.

Scaling communication from small to big teams leads to massive effort for a few.

For engineers and new designers especially, it is often difficult to get timely answers to simple questions. Those answers are often in the original spec, which is too often hard to locate. Or worse, they may live only in the mind of the designer, who may have left, or who receives too many questions to respond to promptly.

In a survey we ran in early 2017, developers reported feeling that they

  • spend too much time identifying the right specs to build from,
  • spend too much time waiting for feedback from designers, and
  • spend too much time mapping new designs to existing UI elements.

In the same survey, designers reported feeling that they

  • spend too much time identifying current UI to re-use in their designs, and
  • spend too much time re-building current UI to use in their designs.

All those repetitive tasks that people feel they spend too much time on ultimately keep us from tackling newer and bigger challenges. So, actually, let’s not spend our time on those.

Let’s help people spend time on what they love to do.

Shifting some communication to a central tool can reduce load on people and lower the barrier for entry.

Let’s build tools that help developers know what a given UI should look like, without them needing to wait for feedback from designers. And let’s use that system for designers to identify UI we already built, and to learn how they can re-use it.

We call this the Photon Design System,
and its first beta version is ready to be used:
design.firefox.com/photon

We are happy to receive feedback and contributions on the current content of the system, as well as on what content to add next.

Photon Design System

Based on what we learned from people, we are building our design system to help people:

  • find what they are looking for easily,
  • understand the context of that quickly, and
  • more deeply understand Firefox Design.

Currently, the Photon Design System covers fundamental design elements like icons, colors, typography, and copywriting, as well as our design principles and guidelines on how to design for scale. Defining those has already helped designers align better across products and features, and developers have a definitive source to fall back on when a design does not specify a color, icon, or other element.

Growth

With all the design fundamentals in place we are starting to combine them into defined components that can easily be reused to create consistent Firefox UI across all platforms, from mobile to desktop, and from web-based to native. This will add value for people working on Firefox products, as well as help people working on extensions for Firefox.

If you are working on Firefox UI

We would love to learn from you which principles, patterns, and components your team’s work touches, and what you feel is worth documenting for others to learn from and use in their UI.

Share your principle/pattern/component with us!

And if you haven’t yet, ask yourself where you could use what’s already documented in the Photon Design System, and help us find more synergies to utilize across our products.

If you are working on a Firefox extension

We would love to learn about where you would have wanted design support when building your extension, and where you had to spend more time on design than you intended to.

Share with us!


Let’s tackle the same challenge again, and again. was originally published in Firefox User Experience on Medium, where people are continuing the conversation by highlighting and responding to this story.

Air MozillaMozilla Gigabit Eugene Open House

Mozilla Gigabit Eugene Open House Hello Eugene, Oregon! Come meet with local innovators, educators, entrepreneurs, students, and community advocates and learn about what it means to be a “Mozilla Gigabit...

Air MozillaGigabit Community Fund June 2017 RFP Webinar

Gigabit Community Fund June 2017 RFP Webinar This summer, we're launching a new round of the Mozilla Gigabit Community Fund. We're funding projects that explore how high-speed networks can be leveraged for...

hacks.mozilla.orgPowerful New Additions to the CSS Grid Inspector in Firefox Nightly

CSS Grid is revolutionizing web design. It’s a flexible, simple design standard that can be used across all browsers and devices. Designers and developers are rapidly falling in love with it and so are we. That’s why we’ve been working hard on the Firefox Developer Tools Layout panel, adding powerful upgrades to the CSS Grid Inspector and Box Model. The latest improvements are now available in Firefox Nightly.

Layout Panel Improvements

The new Layout Panel lists all the available CSS Grid containers on the page and includes an overlay to help you visualize the grid itself. Now you can customize the information displayed on the overlay, including grid line numbers and dimensions.

This is especially useful if you’re still getting to know CSS Grid and how it all works.

There’s also a new interactive grid outline in the sidebar. Mouse over the outline to highlight parts of the grid on the page and display size, area, and position information.

The new “Display grid areas” setting shows the bounding areas and the associated area name in every cell. This feature was inspired by CSS Grid Template Builder, which was created by Anthony Dugois.

Finally, the Grid Inspector is capable of visualizing transformations applied to the grid container. This lets developers accurately see where their grid lines are on the page for any grids that are translated, skewed, rotated or scaled.

Improved Box Model Panel

We also added a Box Model Properties component that lists properties that affect the position, size and geometry of the selected element. In addition, you’ll be able to see and edit the top/left/bottom/right position and height/width properties—making live layout tweaks quick and easy.

Finally, you’ll also be able to see the offset parent for any positioned element, which is useful for quickly finding nested elements.

As always, we want to hear what you like or don’t like and how we can improve Firefox Dev Tools. Find us on Discourse or @firefoxdevtools on Twitter.

Thanks to the Community

Many people were influential in shipping the CSS Layout panel in Nightly, especially the Firefox Developer Tools and Developer Relations teams. We thank them for all their contributions to making Firefox awesome.

We also got a ton of help from the amazing people in the community, and participants in programs like Undergraduate Capstone Open Source Projects (UCOSP) and Google Summer of Code (GSoC). Many thanks to all the contributors who helped land features in this release including:

Micah Tigley – Computer science student at the University of Lethbridge, Winter 2017 UCOSP student, Summer 2017 GSoC student. Micah implemented the interactive grid outline and grid area display.

Alex Lockhart – Dalhousie University student, Winter 2017 UCOSP student. Alex contributed to the Box Model panel with the box model properties and position information.

Sheldon Roddick – Student at Thompson Rivers University, Winter 2017 UCOSP student. Sheldon made a quick contribution to add the ability to edit the width and height in the box model.

If you’d like to become a contributor to Firefox Dev Tools, hit us up on GitHub, Slack, or #devtools on irc.mozilla.org. There you will find all the resources you need to get started.

Air MozillaReps Weekly Meeting Jun. 22, 2017

Reps Weekly Meeting Jun. 22, 2017 This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Air MozillaCommunity Participation Guidelines Revision Brownbag (APAC)

Community Participation Guidelines Revision Brownbag (APAC) A revised version of Mozilla's Community Participation Guidelines was released in May 2017. Please join Larissa Shapiro (Head of D&I) and Lizz Noonan (D&I Coordinator)...

Air MozillaThe Joy of Coding - Episode 103

The Joy of Coding - Episode 103 mconley livehacks on real Firefox bugs while thinking aloud.

Mozilla Add-ons BlogUpcoming changes for add-on usage statistics

We’re changing the way we calculate add-on usage statistics on AMO so that they better reflect real-world usage. The change will go live on the site later this week.

The user count is a very important part of AMO. We show it prominently on listing and search pages, and it’s a key factor in determining add-on popularity and search ranking.

Most popular add-ons on AMO

However, there are a couple of problems with it:

  • We count both enabled and disabled installs. This means some add-ons with high disable rates have a higher ranking than they should.
  • It’s an average over a period of several weeks. Add-ons that are rapidly growing in users have user numbers that are lagging behind.

We’ll be calculating the new average based on enabled installs for the past two weeks of activity. We believe this will reflect add-on usage more accurately.
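As a rough illustration of the new calculation, here is a sketch in JavaScript (the field names and numbers are invented and do not reflect AMO's actual data schema): only enabled installs count, averaged over the past 14 days of activity.

```javascript
// Average daily enabled installs over the last two weeks of activity.
// Disabled installs are excluded entirely under the new calculation.
function averageDailyUsers(days) {
  const window = days.slice(-14); // past two weeks only
  const total = window.reduce((sum, d) => sum + d.enabled, 0);
  return Math.round(total / window.length);
}

// A growing add-on: enabled installs rise by 10 per day.
const days = Array.from({ length: 20 }, (_, i) => ({
  enabled: 1000 + i * 10, // counted
  disabled: 300,          // ignored under the new calculation
}));
console.log(averageDailyUsers(days)); // → 1125
```

Because only the most recent 14 days enter the average, a rapidly growing add-on's user number lags far less than under the old multi-week average.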

What it means for add-on developers

We expect most add-ons to experience a small drop in their user numbers, due to the removal of disabled installs. Most add-on rankings on AMO won’t change significantly. This change also doesn’t affect the detailed statistics dashboard developers have access to. Only the number displayed on user-facing sections of the site will change.

If you notice any problems with the statistics or anything else on AMO, please let us know by creating an issue.

The post Upcoming changes for add-on usage statistics appeared first on Mozilla Add-ons Blog.

hacks.mozilla.orgDesigning for performance: A data-informed approach for Quantum development

When we announced Project Quantum last October, we talked about how users would benefit from our focus on “performance gains…that will be so noticeable that your entire web experience will feel different.”

We shipped the first significant part of this in Firefox 53, and continue to work on the engineering side. Now let’s dive into the performance side and the work we’re doing to ensure that our users will enjoy a faster Web experience.

What makes work on performance so challenging and why is it so important to include the user from the very beginning?


Performance — a contested subject, to say the least!

Awareness of performance as a UX issue often begins with a negative experience – when things get slow or don’t work as expected. In fact, good performance is already table stakes, something that everyone expects from an online product or service. Outstanding performance will very soon become the new baseline point of reference.

The other issue is that there are different perspectives on performance. For users, performance is about their experience and is very often unspecific. For them, perception of good performance can range from “this is amazingly fast” to “SLOW!”, from “WOW!” to “NO!”. For engineers, performance is about numbers and processes. The probes that collect data in the code often measure one specific task in the pipeline. Measuring and tracking capabilities like Garbage Collection (GC) enables engineers to react to regressions in the data quickly, and work on fixing the root causes.

This is why there can be a disconnect between user experience and engineering efforts at mitigation. We measure garbage collection, but it’s often measured without context, such as whether it runs during page load, while the user interacts with a website, or during event queue idle time. Often, GC is within budget, which means that users will hardly perceive it. More generally, specific aspects of what we measure with our probes can be hard to map to the unspecific experience of performance that users have.

Defining technical and perceived performance

To describe an approach for optimizing performance for users, let us start by defining what performance means. For us, there are two sides to performance: technical performance and perceived performance.

Under technical performance, we include the things that we can measure in the browser: how long page elements take to render, how fast we can parse JavaScript or — and that is often more important to understand — how slow certain things are. Technical performance can be measured and the resulting data can be used to investigate performance issues. Technical performance represents the engineer’s viewpoint.

On the other hand, there is the topic of how users experience performance. When users talk about their browser’s performance, they talk about perceived performance or “Quality of Experience” (QoE). Users express QoE in terms of any perceivable, recognized, and nameable characteristic of the product. In the QoE theory, these are called QoE features. We may assume that these characteristics are related to factors in the product that impact technical performance, the QoE factors, but this is not necessarily given.

A promising approach to user-perceived optimization of performance is to identify those factors that have the biggest impact on QoE features and focus on optimizing their technical performance.

Understanding perception

The first step towards optimizing Quantum for perceived performance is to understand how human perception works. We won’t go into detail here, but it’s important to know that there are perceptual thresholds of duration that we can leverage. The most prominent ones for Web interactions were defined by Jakob Nielsen back in the 1990s, and even today they inform user-centric performance models like RAIL. Nielsen’s thresholds give a good first estimate of the budget available for certain tasks performed by the browser engine.
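Nielsen's classic thresholds (roughly 0.1 s for a response to feel instantaneous, 1 s to keep the user's flow of thought, 10 s to keep their attention) can be expressed as a simple budget check. A sketch, with the labels chosen here for illustration:

```javascript
// Jakob Nielsen's response-time thresholds as a perception classifier.
const THRESHOLDS_MS = [
  { limit: 100, feels: "instantaneous" },
  { limit: 1000, feels: "noticeable delay, flow preserved" },
  { limit: 10000, feels: "flow broken, feedback needed" },
];

function perceivedAs(durationMs) {
  for (const t of THRESHOLDS_MS) {
    if (durationMs <= t.limit) return t.feels;
  }
  return "attention lost"; // beyond 10 s users switch tasks
}

console.log(perceivedAs(80));    // → "instantaneous"
console.log(perceivedAs(700));   // → "noticeable delay, flow preserved"
console.log(perceivedAs(12000)); // → "attention lost"
```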

With our user research team, we are validating and investigating these perceptual thresholds for modern web content. We are running experiments with users, both in the lab and remotely. Of course, this will only happen with users’ consent and everybody will be able to opt in and opt out of these studies at any time. With tools like Shield, we run a set of experiments that allow us to learn about performance and how to improve it for users.

However, knowing the perceptual thresholds and the respective budgets is just an important first step. Next, we will go into a bit more detail about how we use a data-informed approach for benchmarking and optimizing performance during the development of our new browser engine.

Three pillars of perceived Web performance

The challenge with optimizing perceived performance of a browser engine is that there are many components involved in bringing data from the network to our screens. All these components may have an impact on the perceived performance and on the underlying perceptual thresholds. However, users don’t know about this structure and the engine. From their point of view, we can define three main pillars for how users perceive performance on the Web: page load, smoothness and responsiveness.

  • Page load: This is what people notice each time when loading a new page. Users care about fast page loads, and we have seen in user research that this is often the way users determine good or bad performance in their browser. Key events defining the perceptual budget during page load are: an immediate response to the user request for a new page, also known as “First Render” or “First non-blank Paint“, and the moment when all important elements are displayed, currently discussed as Hero Element Timing.
  • Smoothness: Scrolling and panning have become challenging activities on modern websites, with infinite scrolling, parallax effects, and dynamic sticky elements. Animations create a better user experience when interacting with the page. Our users want to enjoy a smooth experience for scrolling the web and web animations, be it on social media pages or when shopping for the latest gadget. Often, people nowadays also refer to smoothness as “always 60 fps”.
  • Responsiveness: Beyond scrolling and panning, the other big group of user interactions on websites are mouse, touch, and keyboard inputs. As modern web services create a native-like experience, user expectations for web services are more demanding, based on what they have come to expect for native apps on their laptops and desktop computers. Users have become sensitive to input latency, so we are currently looking at an ideal maximum delay of 100ms.

Targeted optimization for the whole Web

But how do we optimize these three pillars for the whole of the Web? It’s a bigger job than optimizing the performance of a single web service. In building Firefox, we face the challenge of optimizing our browser engine without knowing which pages our users visit or what they do on the Web, due to our commitment to user privacy. This also limits us in collecting data for specific websites or specific user tasks. However, we want to create the best Quality of Experience for as many users and sites as possible.

To start, we decided to focus on the types of content that are currently most popular with Web users. These categories are:

  • Search (e.g. Yahoo Search, Google, Bing)
  • Productivity (e.g. Yahoo Mail, Gmail, Outlook, GSuite)
  • Social (e.g. Facebook, LinkedIn, Twitter, Reddit)
  • Media (e.g. YouTube, Netflix, SoundCloud, Amazon Video)
  • E-commerce (e.g. eBay or Amazon)
  • News & Reference (e.g. NYTimes, BBC, Wikipedia)

Our goal is to learn from this initial set of categories and the most used sites within them and extend our work on improvements to other categories over time. But how do we now match technical to perceived performance and fix technical performance issues to improve the perceived ones?

A data-informed approach to optimizing a browser engine

The goal of our approach here is to take what matters to users and apply that knowledge to achieve technical impact in the engine. With the basics defined above, our iterative approach for optimizing the engine is as follows:

  1. Identification: Based on the set of categories in focus, we specify scenarios for page load, smoothness, and responsiveness that exceed the performance budget and negatively impact perceived performance.
  2. Benchmarks: We define test cases for the identified scenarios so that they become reproducible and quantifiable in our benchmarking testbeds.
  3. Performance profiles: We record and analyze performance profiles to create a detailed view into what’s happening in the browser engine and guide engineers to identify and fix technical root causes.

Identification of scenarios exceeding performance budget

Input for identifying those scenarios comes from different sources: it is either informed by results from user research or reported through bugs and user feedback. Here are two examples of such scenarios:

  • Scenario: browser startup
  • Category: a special case of page load
  • Performance budget: 1000ms for First Paint and 1500ms for Hero Element
  • Description: Open the browser by clicking the icon > wait for the browser to be fully loaded as a maximized window
  • What to measure: First Paint: browser window appears on the desktop; Hero Element: “Search” placeholder in the search box of the content window

  • Scenario: Open chat window on Facebook
  • Category: Responsiveness
  • Performance budget: 150ms
  • Description: Log in to Facebook > Wait for the homepage to be fully loaded > click on a name in the chat panel to open the chat window
  • What to measure: time from the mouse-click input event to showing the chat window on screen
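A benchmark run can then be compared against these perceptual budgets. A minimal sketch (the measured numbers are invented for illustration, not real benchmark results):

```javascript
// Compare measured timings against each scenario's perceptual budget
// and report the scenarios that exceed it.
const scenarios = [
  { name: "startup: first paint", budgetMs: 1000, measuredMs: 1250 },
  { name: "startup: hero element", budgetMs: 1500, measuredMs: 1400 },
  { name: "open Facebook chat window", budgetMs: 150, measuredMs: 210 },
];

const overBudget = scenarios
  .filter((s) => s.measuredMs > s.budgetMs)
  .map((s) => s.name);

console.log(overBudget);
// → ["startup: first paint", "open Facebook chat window"]
```

Scenarios that land in `overBudget` are the ones worth profiling first, since they are the ones users will actually perceive as slow.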

Benchmarks

We have built different testbeds that allow us to obtain valid and reproducible results, in order to create a baseline for each scenario and to track improvements over time. Talos is a Python-driven performance testing framework that, among many other tests, has a defined set of tests for browser startup and page load. It has recently been updated to match the new requirements and to measure events closer to user perception, like First Paint.

Hasal, on the other hand, focuses on benchmarks around responsiveness and smoothness. It runs a defined set of scripts that perform the specified scenarios (like the “open chat window” scenario above) and extracts the required timing data by analyzing videos captured during the interaction.

Additionally, there is still a lot of non-automated, manual testing involved, especially for the first rounds of baselining new scenarios before scripting them for automated testing. For this, we use an HDMI capture card and manually analyze the recorded videos frame by frame.

All these testbeds give us data about how critical the identified scenarios are in terms of exceeding their respective perceptual budgets. Running benchmarks regularly (once a week or even more often) for critical scenarios like browser startup also lets us track improvements over time and shows clearly when an improvement has moved a scenario back within its perceptual budget.

Performance profiles

Now that we have defined our scenarios and understand how much improvement is required to create good Quality of Experience, the last step is to enable engineers to achieve these improvements. The way that engineers look at performance problems in the browser engine is through performance profiles. Performance profiles are a snapshot of what happens in the browser engine during a specific user task such as one of our defined scenarios.

A performance profile using the Gecko Profiler. The profile shows Gecko’s main thread, four content threads, and the compositor main thread. Below is the call stack.

A profile consists of a timeline with tracing markers, different thread timelines and the call tree. The timeline consists of several rows that indicate interesting events in terms of tracing markers (colored segments). With the timeline, you can also zoom in to get more details for marked areas. The thread timelines show a list of profiled threads, like Gecko’s Main Thread, four content process threads (thanks to multi-process), and the main thread of the compositor process, as seen in the profile above. The x-axis is synced to the timeline above, and the y-axis shows the stack depth at a given point in time. Finally, the call tree shows the collected samples within a given timeframe organized by ‘Running Time’.

It requires some experience to be able to read these performance profiles and translate them into actions. However, because they map critical user scenarios directly to technical performance, performance profiles serve as a good tool to improve the browser engine according to what users care about. The challenge here is to identify root causes to improve performance broadly, rather than focus on specific sites and individual bugs. This is also the reason why we focus on categories of pages and not an individual set of initial websites.

For in-depth information about performance profiles, here are an article and a talk from Ehsan Akhgari. We are continuously working on improving the profiler add-on, which is now written in React/Redux.

Iterative testing and profiling performance

The initial round of baselining and profiling performance for the scenarios above can help us go from identifying user performance issues to fixing those issues in the browser engine. However, only iterative testing and profiling of performance can ensure that patches that land in the code will also lead to the expected benefits in terms of performance budget.

Additionally, iterative benchmarking will also help identify the impact that a patch has on other critical scenarios. Looking across different performance profiles and capturing comparable interactions or page load scenarios actually leads to fixing root causes. By fixing root causes rather than focusing on one-off cases, we anticipate that we will be able to improve QoE and benefit entire categories of websites and activities.

Continuous performance monitoring with Telemetry

Ultimately, we want to go beyond a specific set of web categories and look at the Web as a whole. We also want to go beyond manual testing, as this is expensive and time-consuming. And we want to apply knowledge that we have obtained from our initial data-driven approach and extend it to monitoring performance across our user base through Telemetry.

We recently added probes to our Telemetry system that will help us track events that matter to the user, in the wild and across all websites, such as first non-blank paint during page load. Over time, we will meaningfully extend the set of probes. A good first attempt to define and include probes closer to what users perceive has been made by the Google Chrome team with their Progressive Web Metrics.
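On the web-facing side, the closest analogue to a "first non-blank paint" probe is the Paint Timing API's first-contentful-paint entry (in Gecko the actual probe lives inside the engine). A rough sketch, using mocked entries since the real ones come from the browser at runtime:

```javascript
// Extract the first-contentful-paint time from a list of paint
// entries. In a real page the entries would come from
// performance.getEntriesByType("paint"); here they are mocked.
function firstPaintTime(paintEntries) {
  const entry = paintEntries.find((e) => e.name === "first-contentful-paint");
  return entry ? entry.startTime : null;
}

const mockEntries = [
  { name: "first-paint", startTime: 320.5 },
  { name: "first-contentful-paint", startTime: 410.2 },
];
console.log(firstPaintTime(mockEntries)); // → 410.2
```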

A visualization of Progressive Web Metrics during page load and page interaction. The upper field shows the user interaction level and critical interactions related to the technical measures.

As mentioned in the beginning, for users performance is table stakes, something that they simply expect. In this article, we have explored how we capture issues in perceived performance, how we use benchmarks to measure the severity of performance issues, and how we fix those issues by looking at performance profiles.

Beyond the scope of the current approach to performance, there’s an even more interesting question: Will improved performance lead to more usage of the browser or changes to how users use their browser? Can performance improvements increase user engagement?

But these are topics that still need more research — and, at some point in time, will be the subject for another blog post.

Meanwhile, if you are interested in following along with performance improvements and experiencing the enhanced performance of the Firefox browser, download and install the latest Firefox Nightly build and see what you think of its QoE.

Air MozillaCommunity Participation Guidelines Revision Brownbag (EMEA)

Community Participation Guidelines Revision Brownbag (EMEA) A revised version of Mozilla's Community Participation Guidelines was released in May 2017. Please join Larissa Shapiro (Head of D&I) and Lizz Noonan (D&I Coordinator)...