Air Mozilla: Webdev Beer and Tell: August 2017, 18 Aug 2017

Webdev Beer and Tell: August 2017 Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on in...

Air Mozilla: Intern Presentations: Round 5: Thursday, August 17th

Intern Presentations: Round 5: Thursday, August 17th. 7 presenters. Time: 1:00PM - 2:45PM (PDT); each presenter will start every 15 minutes. Locations: 3 SF, 1 TOR, 1 PDX, 2 Paris.

Mozilla VR Blog: Samsung Gear VR support lands in Servo

We are happy to announce that Samsung Gear VR headset support is landing in Servo. The current implementation is WebVR 1.1 spec-compliant and supports both the remote and headset controllers available in the Samsung Gear VR 2017 model.

If you are eager to explore, you can download a project template compatible with Gear VR Android phones. Add your Oculus signature file, and run the project to launch the application on your mobile phone.

Alongside Gear VR support, we worked on other areas of Servo to provide A-Frame compatibility, WebGL extensions, optimized Android compilation, and reduced startup times.

A-Frame Compatibility

Servo now supports Mutation Observers, which enable us to polyfill Custom Elements. Together with a solid WebVR architecture and better texture loading, we can now run any A-Frame content across mobile (Google Daydream, Samsung Gear VR) and desktop (HTC Vive) platforms. All the pieces have fallen into place thanks to the amazing work the Servo team is doing.
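To make the connection concrete, here is a sketch of how a Custom Elements polyfill can be built on top of Mutation Observers: watch for added nodes and "upgrade" any whose tag name is registered. The registry and helper names below are our own illustration, not Servo or A-Frame internals.

```javascript
// Pure upgrade logic, kept separate so it is easy to test: given a list of
// MutationRecord-like objects and a tag -> constructor registry, return the
// nodes that should be upgraded.
function collectUpgradeable(records, registry) {
  const upgradeable = [];
  for (const record of records) {
    for (const node of record.addedNodes) {
      const name = (node.tagName || "").toLowerCase(); // text nodes have no tagName
      if (registry.has(name)) upgradeable.push(node);
    }
  }
  return upgradeable;
}

// Browser wiring (guarded: requires a DOM):
if (typeof MutationObserver !== "undefined" && typeof document !== "undefined") {
  const registry = new Map(); // e.g. registry.set("a-scene", ASceneElement)
  const observer = new MutationObserver((records) => {
    for (const node of collectUpgradeable(records, registry)) {
      // "Upgrade" the element by running its connected callback.
      registry.get(node.tagName.toLowerCase()).prototype.connectedCallback.call(node);
    }
  });
  observer.observe(document.documentElement, { childList: true, subtree: true });
}
```

Real polyfills also handle removed nodes and attribute changes, but the shape is the same: Mutation Observers provide the hook that makes element upgrades possible without engine support.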

WebGL Extensions

WebGL Extensions enable applications to get optimal performance by taking advantage of state-of-the-art GPU capabilities. This is even more important in VR because of the extra work required for stereo rendering. We designed the WebGL extension architecture and implemented some of the extensions used by A-Frame/Three.js such as float textures, instancing, compressed textures and VAOs.
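A page feature-detects these extensions at startup, the way engines like Three.js do, and falls back when one is missing. The helper below is our sketch; the extension strings are the standard WebGL registry names for the features mentioned above.

```javascript
// Extensions relevant to the VR workloads described above.
const VR_RELEVANT_EXTENSIONS = [
  "OES_texture_float",             // float textures
  "ANGLE_instanced_arrays",        // instancing
  "WEBGL_compressed_texture_etc1", // one of several compressed-texture formats
  "OES_vertex_array_object",       // VAOs
];

// Takes any object with a WebGL-style getExtension() and reports which of
// the listed extensions are available.
function detectExtensions(gl, names = VR_RELEVANT_EXTENSIONS) {
  const support = {};
  for (const name of names) {
    support[name] = gl.getExtension(name) !== null;
  }
  return support;
}

// In a browser you would call it with a real context:
//   const gl = canvas.getContext("webgl");
//   const support = detectExtensions(gl);
//   if (!support["ANGLE_instanced_arrays"]) { /* fall back to per-object draws */ }
```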

Compiling Servo for Android

Recently, the Rust team changed the default Android compilation targets. They added an armv7-linux-androideabi target corresponding to the official armeabi-v7a ABI, and changed arm-linux-androideabi to correspond to the official armeabi ABI instead of armeabi-v7a.

This could have caused significant performance regressions in Servo, which was using the arm-linux-androideabi target by default. Using the new armv7 compilation target is easy for pure Rust crates. It's not so trivial for CMake- or Makefile-based dependencies, because they infer toolchain and compiler names from the target triple.
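The renaming boils down to a small mapping between Rust target triples and official Android ABIs, sketched here as a lookup table (the aarch64 row reflects the arm64 support mentioned below):

```javascript
// Which Android ABI each Rust target triple now produces code for.
const RUST_TARGET_TO_ANDROID_ABI = {
  "arm-linux-androideabi":   "armeabi",     // previously produced armeabi-v7a code
  "armv7-linux-androideabi": "armeabi-v7a", // the new target Servo should use
  "aarch64-linux-android":   "arm64-v8a",   // arm64 support added alongside
};

function abiForTarget(target) {
  const abi = RUST_TARGET_TO_ANDROID_ABI[target];
  if (!abi) throw new Error(`unknown Android target: ${target}`);
  return abi;
}
```

Sticking with the old default target after the change would have meant shipping armeabi (pre-v7) code, hence the performance concern.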

We adapted all the problematic dependencies. We took advantage of this work to add arm64 compilation support and provided a simple CLI API to select any Android compilation target in Servo.

Reduced startup times

The C-based libfontconfig library was causing long startup times in Servo for Android. We didn't find a way to fix the library itself, so we opted to get rid of it and implement an alternative way to query Android system fonts. Unfortunately, Android didn't provide an API to query system fonts until Android O, so we were forced to parse the system configuration files and load fonts manually.
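The fallback approach looks roughly like this: read the system font configuration file (/system/etc/fonts.xml on modern Android) and collect the font files to load manually. This is a simplified regex-based illustration, not Servo's actual parser.

```javascript
// Extract the font file names from an Android fonts.xml-style document.
// Each <font ...>Name.ttf</font> entry names a file under /system/fonts/.
function parseFontFiles(xml) {
  const files = [];
  const fontTag = /<font[^>]*>\s*([^<\s]+)\s*<\/font>/g;
  let match;
  while ((match = fontTag.exec(xml)) !== null) {
    files.push(match[1]);
  }
  return files;
}

// Example fragment in the fonts.xml format:
const sample = `
  <family name="sans-serif">
    <font weight="400" style="normal">Roboto-Regular.ttf</font>
    <font weight="700" style="normal">Roboto-Bold.ttf</font>
  </family>`;
// parseFontFiles(sample) -> ["Roboto-Regular.ttf", "Roboto-Bold.ttf"]
```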

Gear VR support on Rust-WebVR Library

We started working on ovr-mobile-sys, the Rust bindings crate for the Oculus Mobile SDK API. We used rust-bindgen to generate the bindings automatically from the C headers, but had to manually transpile some of the inline SDK header code, since inline functions don't generate symbols and are not exported by rust-bindgen.

Then we added the SDK integration into the rust-webvr standalone library. The OculusVRService class offers the entry point to the Oculus SDK and handles life-cycle operations such as initialization, shutdown, and VR device discovery. The integration with the headset is implemented in OculusVRDisplay. Gear VR lacks positional tracking, but by using the neck model provided in the SDK we expose a basic position vector that simulates how the human head naturally rotates relative to the base of the neck.
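A neck model of this kind can be sketched as rotating a fixed neck-to-head offset by the headset's orientation quaternion; the rotated offset becomes the simulated position. The quaternion math below is the standard rotation formula, but the offset constants are illustrative, not the Oculus SDK's exact values.

```javascript
// Illustrative neck-to-head offset, in metres: up from the neck base and
// slightly forward. (Not the SDK's exact constants.)
const NECK_TO_HEAD = [0.0, 0.075, 0.08];

// Rotate vector v by unit quaternion q = [x, y, z, w].
function rotateByQuaternion(q, v) {
  const [qx, qy, qz, qw] = q;
  // t = 2 * (q.xyz × v)
  const tx = 2 * (qy * v[2] - qz * v[1]);
  const ty = 2 * (qz * v[0] - qx * v[2]);
  const tz = 2 * (qx * v[1] - qy * v[0]);
  // v' = v + w*t + (q.xyz × t)
  return [
    v[0] + qw * tx + (qy * tz - qz * ty),
    v[1] + qw * ty + (qz * tx - qx * tz),
    v[2] + qw * tz + (qx * ty - qy * tx),
  ];
}

// Simulated head position exposed when there is no positional tracking.
function neckModelPosition(orientation) {
  return rotateByQuaternion(orientation, NECK_TO_HEAD);
}
```

Looking straight ahead (identity orientation) yields the offset itself; turning the head swings the offset around the neck base, which is exactly the small positional motion a real head makes.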

In order to read Gear VR sensor inputs and submit frames to the headset, the Android activity must enter VR mode by calling the vrapi_EnterVrMode() function. The Oculus Mobile SDK requires precise life-cycle management and handling of events that may interleave in complex ways. For a correct implementation, the Android Activity must enter VR mode in a surfaceChanged() or onResume() event, whichever comes last, and it must leave VR mode in a surfaceDestroyed() or onPause() event, whichever comes first.
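The "enter on whichever comes last, leave on whichever comes first" rule is easy to get wrong, so here it is as a tiny state machine. The enterVrMode/leaveVrMode callbacks stand in for vrapi_EnterVrMode()/vrapi_LeaveVrMode(); the class itself is our illustration, not SDK code.

```javascript
class VrModeLifecycle {
  constructor(enterVrMode, leaveVrMode) {
    this.enterVrMode = enterVrMode;
    this.leaveVrMode = leaveVrMode;
    this.resumed = false;      // onResume()/onPause() state
    this.surfaceReady = false; // surfaceChanged()/surfaceDestroyed() state
    this.inVrMode = false;
  }
  _update() {
    // In VR mode only while BOTH conditions hold: this makes entry happen on
    // whichever event comes last, and exit on whichever event comes first.
    const shouldBeInVr = this.resumed && this.surfaceReady;
    if (shouldBeInVr && !this.inVrMode) { this.inVrMode = true; this.enterVrMode(); }
    if (!shouldBeInVr && this.inVrMode) { this.inVrMode = false; this.leaveVrMode(); }
  }
  onResume()         { this.resumed = true;       this._update(); }
  onPause()          { this.resumed = false;      this._update(); }
  surfaceChanged()   { this.surfaceReady = true;  this._update(); }
  surfaceDestroyed() { this.surfaceReady = false; this._update(); }
}
```

Tracking the two flags independently also guards against double-enter and double-leave when events arrive in an unexpected order.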

In a Glutin-based Android NativeActivity, life-cycle events are delivered over Rust channels. This caused synchronization problems due to non-deterministic event handling across threads. We couldn't guarantee that vrapi_LeaveVrMode() was called before the NativeActivity's EGLSurface was destroyed and the app went to the background. Additionally, we needed to block the event-notifier thread until Gear VR resources were freed on a different renderer thread, to prevent collisions (e.g. Glutin dropping the EGLSurface at the same time the VR renderer thread was leaving VR mode). We contributed a deterministic event-handling implementation to rust-android-glue.

The Oculus Mobile SDK allows sending a WebGL context texture directly to the headset. Despite that, we opted for the triple-buffered swap chain recommended in the SDK, to avoid potential flickering and performance problems when using the same texture every frame. As we did with the Daydream implementation, we render the VR-ready texture to the current ovrTextureSwapChain using a BlitFramebuffer-based solution instead of rendering a quad, to avoid implementing the required OpenGL state-change safeguards or context switching.
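In WebGL2 terms, the BlitFramebuffer approach amounts to copying the VR-ready texture's framebuffer into the framebuffer wrapping the current swap-chain texture, rather than drawing a textured quad. The function and parameter names below are ours; the GL calls are the standard blit path.

```javascript
// Copy srcFramebuffer's color buffer into dstFramebuffer (the framebuffer
// wrapping the current swap-chain texture). No shaders, no vertex state:
// only framebuffer bindings change, which is why the quad-drawing
// safeguards aren't needed.
function blitToSwapChain(gl, srcFramebuffer, dstFramebuffer, width, height) {
  gl.bindFramebuffer(gl.READ_FRAMEBUFFER, srcFramebuffer);
  gl.bindFramebuffer(gl.DRAW_FRAMEBUFFER, dstFramebuffer);
  gl.blitFramebuffer(0, 0, width, height,  // source rect
                     0, 0, width, height,  // destination rect
                     gl.COLOR_BUFFER_BIT, gl.NEAREST);
}

// With a triple-buffered swap chain, the target index simply advances each
// frame: index = (index + 1) % 3.
```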

The Oculus Mobile SDK also allowed us to directly attach the NativeActivity's surface to the Gear VR time warp renderer. We were able to run the pure Rust room-scale demo without writing a line of Java. It's nice that the SDK allows a Java-free integration, but our luck changed when we integrated all this work into a full browser architecture.

Gear VR integration into Servo

Our Daydream integration worked inside Servo almost on the first try after it landed in the rust-webvr standalone library. This was not the case with the Gear VR integration…

First, we had to research and fix four specific GPU driver issues with the Mali-T880 GPU used in the Samsung Galaxy S7 phone.

As a result, we were able to see WebGL stereo rendering on the screen, but entering VR mode crashed with a JNI assertion failure inside the Oculus VR SDK. This happened because, inside the browser, different threads are used for rendering and for VR device initialization/discovery, which requires a separate Oculus ovrJava instance for each thread.

The assertion failure was gone, but we couldn't see anything on the screen after calling vrapi_EnterVrMode(). The logcat error messages from the Oculus SDK helped us find the cause: the Gear VR time warp implementation hijacks the explicitly passed Android window surface pointer. We could use the NativeActivity's window surface in the standalone room-scale demo. In a full browser architecture, however, the time warp thread and the browser compositor fight for ownership of the Android surface. We discarded the idea of directly using the NativeActivity's window surface and switched to a Java SurfaceView VR backend, to keep both the browser's compositor and Gear VR's time warp thread happy.

With this change, the VR mode life cycle fit nicely into the browser architecture. There was one final surprise, though. The activity entered VR mode correctly, there were no errors in logcat, the time warp thread reported correct render stats, and the headset pose data was fetched correctly. Nevertheless, the VR scene with lens distortion was still not visible in the Android view hierarchy. This led to another few hours of debugging that ended in changing a single line of code: the Android SurfaceView was being rendered correctly, but it was composited below the NativeActivity's browser window, because setZOrderOnTop() is not enabled by default on Android.

After this change everything worked flawlessly and it was time to enjoy running some WebVR experiences on the Gear VR ;)

Conclusion

It's been a lot of fun seeing Gear VR support land in Servo and being able to run A-Frame demos in it. We continue to work hard on squeezing WebGL and WebVR performance and expect to land some nice optimizations soon. We are also working on implementing unique WebVR features that no other browser has yet. More news soon ;) Stay tuned!

Mozilla Gfx Team: WebRender newsletter #1

The Quantum Flow and Photon projects have exciting newsletters. The Quantum graphics project (integrating WebRender into Firefox) hasn't had a newsletter so far, and people have asked for one, so let's give it a try!

This newsletter will not capture everything that is happening in the project, only some highlights, and some of the terminology might be a bit hard to understand at first for someone not familiar with the internals of Gecko and WebRender. I will try to find the time to write a bit about WebRender's internals, which should provide more context for understanding what's going on here.

The terms layers-full/layers-free used below refer to the way WebRender is integrated into Gecko. Our first plan was to talk to WebRender through the layers infrastructure in the short term, because that is the simplest approach; this is the “layers-full” integration. Unfortunately, the cost of building many layers only to transform them into WebRender display items is high, and we found that we may not be able to ship WebRender using this strategy. The “layers-free” plan is to translate Gecko’s display items directly into WebRender display items without building layers. It is more work, but we are getting some encouraging results so far.

Some notable (recent) changes in WebRender

  • Glyph Cache optimizations – Glenn profiled and optimized the glyph cache and made it a lot faster.
  • Texture cache rewrite (issue #1572) – The new cache uses pixel buffer objects to transfer images to the GPU (the previous one used glTexSubImage2D), does not suffer from the fragmentation issues the previous one did, and has a better eviction policy.
  • Other text-related optimizations in display list serialization.
  • Sub-pixel positioning on Linux.

Some notable (recent) changes in Gecko

  • Clipping in layers-free mode (Bug 1386483) – This reuses clips instead of creating new ones for every display item, which will reduce the display list processing on both the Gecko side and the WebRender side. This was one of the big pieces missing for functional parity with the current layers-full WebRender.
  • Rounded rectangle clipping in layers-free mode (Bug 1370682) – This is a noticeable difference from layers-full mode, where we currently use mask layers for rounded clipping. Doing this directly in WebRender gives a noticeable performance improvement.
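The clip-reuse idea above can be sketched as interning: cache clips keyed by their geometry so display items sharing a clip get the same clip id instead of a fresh one each time. The class and names below are illustrative, not Gecko's actual data structures.

```javascript
// Intern clips by geometry so identical clips share one id.
class ClipCache {
  constructor() {
    this.ids = new Map();
    this.nextId = 0;
  }
  // Returns a stable id for a clip rect, allocating one only on first use.
  clipIdFor(rect) {
    const key = `${rect.x},${rect.y},${rect.width},${rect.height}`;
    if (!this.ids.has(key)) this.ids.set(key, this.nextId++);
    return this.ids.get(key);
  }
}
```

With thousands of display items often sharing a handful of clips, deduplicating like this shrinks the work done on both the producer and consumer sides of the display list.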

How to get the most exciting WebRender experience today:

Using Firefox Nightly, go to about:config and change the following prefs:

  • turn off layers.async-pan-zoom.enabled
  • turn on gfx.webrender.enabled
  • turn on gfx.webrender.layers-free
  • add and turn on gfx.webrender.blob-images
  • if you are on Linux, turn on layers.acceleration.force-enabled
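For convenience, the same changes can be captured in a user.js file in your Nightly profile directory; this fragment mirrors the list above (as noted, gfx.webrender.blob-images may need to be created first):

```javascript
user_pref("layers.async-pan-zoom.enabled", false);
user_pref("gfx.webrender.enabled", true);
user_pref("gfx.webrender.layers-free", true);
user_pref("gfx.webrender.blob-images", true);          // may need to be added
user_pref("layers.acceleration.force-enabled", true);  // Linux only
```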

This will give you a peek at the future, but beware: there are lots of rough edges. Don’t expect the performance of WebRender in Gecko to be representative yet (it’s probably better to try Servo for that).

All of the integration work is now happening in mozilla-central and Bugzilla, while WebRender development happens in the servo/webrender GitHub repository.


Air Mozilla: Weekly SUMO Community Meeting August 16, 2017

Weekly SUMO Community Meeting August 16, 2017. This is the weekly SUMO community call.

Air Mozilla: Intern Presentations: Round 4: Tuesday, August 15th

Intern Presentations: Round 4: Tuesday, August 15th. 6 presenters. Time: 1:00PM - 2:30PM (PDT); each presenter will start every 15 minutes. Locations: 5 MTV, 1 Berlin.

hacks.mozilla.org: Essential WebVR resources

The general release of Firefox 55 brought a number of cool new features to the Gecko platform, one of which is the WebVR API v1.1. This allows developers to create immersive VR experiences inside web apps, compatible with popular hardware such as HTC VIVE, Oculus Rift, and Google Daydream. This article looks at the resources we’ve made available to facilitate getting into WebVR development.
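The WebVR 1.1 entry flow looks roughly like this: enumerate displays, pick one that can present, then drive rendering from the display's own animation loop. The pickPresentableDisplay helper and the canvas selector are our own; navigator.getVRDisplays(), requestPresent(), getFrameData() and submitFrame() are the standard WebVR 1.1 surface.

```javascript
// Prefer a display that can actually present (i.e. a real headset).
function pickPresentableDisplay(displays) {
  return displays.find((d) => d.capabilities && d.capabilities.canPresent) || null;
}

// Browser wiring (guarded: requires a WebVR-capable browser):
if (typeof navigator !== "undefined" && navigator.getVRDisplays) {
  navigator.getVRDisplays().then((displays) => {
    const display = pickPresentableDisplay(displays);
    if (!display) return;
    const canvas = document.querySelector("canvas"); // your WebGL canvas
    // requestPresent must be triggered by a user gesture.
    canvas.addEventListener("click", () => {
      display.requestPresent([{ source: canvas }]).then(() => {
        const frameData = new VRFrameData();
        display.requestAnimationFrame(function onFrame() {
          display.getFrameData(frameData); // pose + view/projection matrices
          // ... draw each eye using frameData ...
          display.submitFrame();
          display.requestAnimationFrame(onFrame);
        });
      });
    });
  });
}
```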

Support notes

Version 1.1 of the WebVR API is very new, with varying support available across modern browsers:

  • Firefox 55 sees full support on Windows, with more experimental Mac support available in the Beta/Nightly release channels only, until testing and final work is completed. Supported VR hardware includes HTC VIVE, Oculus Rift, and Google Daydream.
  • Chrome support is still experimental — you can currently only see support out in the wild on Chrome for Android with Google Daydream.
  • Edge fully supports WebVR 1.1, through the Windows Mixed Reality headset.
  • Support is also available in Samsung Internet, via their Gear VR hardware.

Note that the 1.0 version of the API can be considered obsolete, and has been (or will be) removed from all major browsers.

Controlling WebVR apps using the full features of VR controllers relies on the Gamepad Extensions API. This adds features to the Gamepad API that provide access to controller features like haptic actuators (e.g. vibration hardware) and position/orientation data (i.e., pose). This currently has even more limited support than the WebVR API; Firefox 55+ has it available in Beta/Nightly channels.
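Using those Gamepad Extensions features looks like this: filter for VR controllers (they expose pose data) and pulse a haptic actuator if one is present. The findVRControllers helper is ours; pose, hapticActuators and pulse() come from the Gamepad Extensions API.

```javascript
// VR controllers are the gamepads that report pose data.
// (navigator.getGamepads() pads its array with nulls, hence the g && check.)
function findVRControllers(gamepads) {
  return Array.from(gamepads).filter((g) => g && g.pose);
}

// Browser wiring (guarded: requires Gamepad Extensions support):
if (typeof navigator !== "undefined" && navigator.getGamepads) {
  for (const pad of findVRControllers(navigator.getGamepads())) {
    console.log(pad.id, "orientation:", pad.pose.orientation);
    if (pad.hapticActuators && pad.hapticActuators.length > 0) {
      pad.hapticActuators[0].pulse(0.8, 100); // intensity 0..1, duration in ms
    }
  }
}
```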

In other browsers, you’ll have to make do for now with basic Gamepad API functionality, like reporting button presses.

vr.mozilla.org

vr.mozilla.org — Mozilla’s new landing pad for WebVR — features demos, utilities, news and updates, and all the other information you’ll need to get up and running with WebVR.

MDN documentation

MDN has full documentation available for both the APIs mentioned above. See:

In addition, we’ve written some useful guides to get you familiar with the basics of using these APIs:

A-Frame and other libraries

WebVR experiences can be fairly complex to develop. The API itself is easy to use, but you need WebGL to create the 3D scenes you want to feature in your apps, and this can prove difficult for those not well-versed in low-level graphics programming. However, there are a number of libraries that can help with this.

The hero of the WebVR world is Mozilla’s A-Frame library, which allows you to create nice looking 3D scenes using custom HTML elements, handling all the WebGL for you behind the scenes. A-Frame apps are also WebVR-compatible by default. It is perfect for putting together apps and experiences quickly.

There are a number of other well-written 3D libraries available too, which abstract away the difficulty of working with raw WebGL. Good examples include:

These don’t include VR capabilities out of the box, but it is not too difficult to write your own WebVR rendering code around them.

If you are worried about supporting older browsers that only include WebVR 1.0 (or no VR) as well as newer browsers with 1.1, you’ll be pleased to know that there is a WebVR polyfill available.

Demos and examples

See also

Open Policy & Advocacy: Bringing the 4th Amendment into the Digital Age

Today, Mozilla has joined other major technology companies in filing an amicus brief urging the Supreme Court of the United States to reexamine how the 4th Amendment and search warrant requirements should apply in our digital era. We are joining this brief because we believe our laws need to keep up with what we already know to be true: that the Internet is an integral part of modern life, and that user privacy must not be treated as optional.

At the heart of this case is the government’s attempt to obtain “cell site location information” to aid in a criminal investigation. This information is generated continuously when your phone is on. Your phone communicates with nearby cell sites to connect with the cellular network and those sites create a record of your phone’s location as you go about your business. In the case at hand, the government did not obtain a warrant, which would have required probable cause, before obtaining this location information. Instead, the government sought a court order under the Stored Communications Act of 1986, which requires a lesser showing.

Looking at how the courts have dealt with the cell phone location records in this case demonstrates why our laws must be revisited to account for modern technological reality. The district court decided that the government didn’t have to obtain a warrant because people do not have a reasonable expectation of privacy in their cell phone location information. On appeal, the Sixth Circuit acknowledged that similar information, such as GPS monitoring in government investigations, would require a warrant. But it too found no warrant was needed because the location information was a “business record” from a “third party” (i.e., the service providers).

We believe users should not be forced to surrender their expectations of privacy when using their phones and we hope the Court will reconsider the law in this area.

*Brief link updated on August 16

The post Bringing the 4th Amendment into the Digital Age appeared first on Open Policy & Advocacy.

Mozilla Add-ons Blog: Add-ons Update – 2017/08

Here’s the monthly update of the state of the add-ons world.

The Review Queues

In the past month, our team reviewed 1,803 listed add-on submissions:

  • 1,368 in fewer than 5 days (76%).
  • 147 between 5 and 10 days (8%).
  • 288 after more than 10 days (16%).

274 listed add-ons are awaiting review.

If you’re an add-on developer and are looking for contribution opportunities, please consider joining us. Visit our wiki page for more information.

Compatibility Update

We published the blog post for Firefox 56 and the bulk validation has been run. This is the last one of these we’ll do, since compatibility is a much smaller problem with the WebExtensions API.

Firefox 57 is now on the Nightly channel, and only accepting WebExtension add-ons by default. Here are some changes we’re implementing on AMO to ease the transition to 57.

We recommend that you test your add-ons on Beta. If you’re an add-ons user, you can install the Add-on Compatibility Reporter. It helps you identify and report any add-ons that aren’t working anymore.

Recognition

We would like to thank the following people for their recent contributions to the add-ons world:

  • Apoorva Pandey
  • Neha Tekriwal
  • Swapnesh Kumar Sahoo
  • rctgamer3
  • Tushar Saini
  • vishal-chitnis
  • Cameron Kaiser
  • zombie
  • Trishul Goel
  • Krzysztof Modras
  • Tushar Saini
  • Tim Nguyen
  • Richard Marti
  • Christophe Villeneuve
  • Jan Henning
  • Leni Mutungi
  • dw-dev
  • Dino Herbert

You can read more about their work in our recognition page.

The post Add-ons Update – 2017/08 appeared first on Mozilla Add-ons Blog.

Air Mozilla: Mozilla Weekly Project Meeting, 14 Aug 2017

Mozilla Weekly Project Meeting. The Monday project meeting.

hacks.mozilla.org: A-Frame comes to js13kGames: build a game in WebVR

It’s that time of the year again: the latest edition of the js13kGames competition opened yesterday, Sunday, August 13th, just as it has every year since I started the competition in 2012. Every year the contest has a new theme, but this time there’s another twist: a brand new A-Frame VR category, just in time for the arrival of WebVR in Firefox 55 and a desktop browser near you.

js13kGames is an online competition for HTML5 game developers, where the fun part is that the size limit is set to 13 kilobytes. Unlike a 48-hour game jam, you have a whole month to come up with your best idea, create it, polish it as much as you can, and submit it; the deadline is September 13th.

A brief history of js13kGames

It started five years ago from the pure need of having a competition for JavaScript game developers like me – I couldn’t find anything interesting, so I created one myself. Somehow it was cool enough for people to participate, and from what I heard they really enjoyed it, so I kept it going over the years even though managing everything on my own is exhausting and time-consuming.

There have been many great games created since the beginning – you can check GitHub’s recent blog post for a quick recap of some of my personal favourites. Two of the best entries from 2016 ended up on Steam in their post-competition versions: Evil Glitch and Glitch Buster, and keys for both of them are available as prizes in the competition this year.

A-Frame category

The big news this year, which I’m really proud of: Virtual Reality has arrived with the new A-Frame category. Be sure to check out the A-Frame landing page for the rules and details. You can reference the minified version of the A-Frame library, and you are not required to count its size as part of the 13 kilobyte limit that defines this contest.

Ever since the A-Frame library was announced, I have been really excited about trying it out. I believe it’s a real game changer (pun intended) for the WebVR world. With just a few lines of HTML markup you can set up a simple scene with VR mode, controls, and lights. Prototyping is extremely easy, and you can build really cool experiments within minutes. There are many useful components in the Registry that can help you out too, so you don’t have to write everything yourself. A-Frame is very powerful, yet so easy to use; I really can’t wait to see what you’ll come up with this year.

Resources

If WebVR is all brand new to you and you have no idea where to start, read Chris Mills’ recent article “WebVR Essentials”. Then be sure to check out the A-Frame website for useful docs and demos, and a lively community of WebVR creators:

I realize the 13K size limit is very constraining, but these limitations spawn creativity. There have been many cool and inspiring games created over the years, and all their source code is available on GitHub in a readable form for everyone to learn from. There are plenty of A-Frame tutorials out there, so feel free to look for the specific solutions to your ideas. I’m sure you’ll find something useful.

Feedback

Many developers who’ve participated in this competition in previous years have mentioned expert feedback as a key benefit of the competition. This year’s judges for the A-Frame category will focus their full attention on WebVR games only, in order to be able to offer constructive feedback on your entry.

The A-Frame judges include: Fernando Serrano Garcia (WebVR and WebGL developer), Diego Marcos (A-Frame co-creator, API designer and maintainer), Ada Rose Edwards (Senior Engineer and WebVR advocate at Samsung) and Matthew ‘Potch’ Claypotch (Developer Advocate at Mozilla).

Prizes

This year, we’ll be offering custom-made VR cardboards to all participants in the js13kGames competition. These will be shipped for every complete submission, along with the traditional annual t-shirt, and a bunch of cool stickers.

In addition to the physical package that’s shipped for free to your doorstep, there’s a whole bunch of digital prizes you can win – software licenses, engines, editors and other tools, as well as subscription plans for various services and online courses, games and game assets, ebooks, and vouchers.

Prizes for the A-Frame category include PlayCanvas licenses, WebVR video courses, and WebStorm licenses. There are other ways to win more prizes too: Community Awards and Social Specials. You can find all the details and rules about how to enter on the competition website.

A look back

I’m happy to see this competition become more and more popular. I’ve started many projects, and many have failed, yet this one is still alive and kicking, even though HTML5 game development itself is a niche and the size constraint in this contest means you have to mind the size of every resource you use. It is indeed a tough competition and not every developer makes it to the finish, but the feeling of submitting an entry minutes before the deadline is priceless.

I’m a programmer, and my wife Ewa is a graphic designer on all our projects, including js13kGames. I guess that makes Enclave Games a family business! With our little baby daughter Kasia born last year, it’s an ongoing challenge to balance work, family and game development. It’s not easy, but if you believe in something you have to try and make it work.

Start your engines

Anyway, the new category in the competition is a great opportunity to learn A-Frame if you haven’t tried it yet, or improve your skills. After all you have a full month, and there’s guaranteed swag for every entry. The theme this year is “lost” – I hope it will help you find a good idea for the game.

Visit js13kGames website for all the details, see the A-Frame category landing page, and follow @js13kgames on Twitter or on Facebook for announcements. The friendly js13kGames community can help you with any problems or issues you’ll face; they can be found on our js13kgames Slack channel. Good luck and have fun!

Mozilla L10N: L10n Style Guides on GitHub

When we began talking about style guides with localization communities at l10n hackathons, we suggested the Mozilla Wiki as a temporary home for them until we could define a more centralized and accessible place, most likely GitHub. After a lot of research, we’ve created a GitHub repository to host all of the Mozilla translation style guides, including community-specific ones. Any style guide referenced on a team’s contact page has been copied as a Markdown file into this repository. The repository has been built with GitBook, and the style guides can be accessed with greater readability and improved search capabilities.

You may be wondering: “If the community style guides are already available and linked on team contact pages, why do we need this GitHub repository?” We understand the confusion and want to explain why the repository exists.

Recently, MDN underwent a major style and content change. This meant that the General Localization Style Guide hosted on MDN needed to be assessed to determine what changes were needed, or whether MDN was even a good home for it. After considering alternatives and associated questions, such as “what about community-specific style guides?”, we decided we needed to build an easy-to-find home for all style guides. Having this central repository makes it easier to locate all of the style guides each community has created. We don’t want that hard work to go to waste, which is why we want to make these style guides accessible and link to them from each team’s page in Pontoon. A centralized repository helps us make sure we don’t miss any style guides.

Currently, community style guides are hosted in a variety of places and in a mix of formats. While this is not a problem in itself, the varied formats and sources can make the style guides difficult to locate. Additionally, some of these sources stop hosting a style guide, or the style guide becomes obsolete for whatever reason. This is not exclusive to style guides hosted on non-Mozilla sources: the wiki at mozilla.org isn’t a good home for this data either, which is why we have moved the General Localization Style Guide as well. Rather than lose the style guides currently hosted on the Mozilla Wiki, we made copies of them in the centralized GitHub repository.

These considerations aren’t new, as you probably know from the past year’s workshops, but they present an opportunity for us to make a change that will facilitate quality assurance and accessibility for our translation efforts.

This brings up a few tasks for language communities that have a style guide or would like to add one to the repository:

  1. Please check that your current community style guide is in the repository and that it is correct. It is possible that the style guide that was migrated to the repository is the wrong version or contains some errors from migration. If there are any errors in the style guide, please see number 2.
  2. If you need to update, correct, or add a style guide to the repository, please do so in the GitHub repository. GitHub has instructions on how to update a repository; please follow them to create a pull request. The pull request will be reviewed before being merged into the official style guides repository, to maintain quality. In addition, each pull request should be reviewed by another member of the community, as some of the repository administrators may not speak the language of the style guide.

If there are any questions regarding the new repository or community style guides, please direct them to Kekoa (kriggin@mozilla.com) or flod (flodolo@mozilla.com).

QMO: Firefox 56 Beta 4 Testday, August 18th

Hello dear Mozillians!

We are happy to let you know that on Friday, August 18th, we are organizing the Firefox 56 Beta 4 Testday. We’ll be focusing our testing on the following new features: Media Block Autoplay, Preferences Search [Photon], and Photon Preferences reorg V2.

Check out the detailed instructions via this etherpad.

No previous testing experience is required, so feel free to join us on #qa IRC channel where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better!

See you on Friday!

The Mozilla Blog: Honoring Our Friend Bassel: Announcing the Bassel Khartabil Free Culture Fellowship

To honor Bassel Khartabil’s legacy and his lasting impact on the open web, a slate of nonprofits are launching a new fellowship in his name

By Katherine Maher (executive director, Wikimedia Foundation), Ryan Merkley (CEO, Creative Commons) and Mark Surman (executive director, Mozilla)

On August 1, 2017, we received the heartbreaking news that our friend Bassel (Safadi) Khartabil, detained since 2012, was executed by the Syrian government shortly after his 2015 disappearance. Khartabil was a Palestinian Syrian open internet activist, a free culture hero, and an important member of our community. Our thoughts are with Bassel’s family, now and always.

Today we’re announcing the Bassel Khartabil Free Culture Fellowship to honor his legacy and lasting impact on the open web.

Bassel Khartabil

Bassel was a relentless advocate for free speech, free culture, and democracy. He was the cofounder of Syria’s first hackerspace, Aiki Lab, Creative Commons’ Syrian project lead, and a prolific open source contributor, from Firefox to Wikipedia. Bassel’s final project, relaunched as #NEWPALMYRA, entailed building free and open 3D models of the ancient Syrian city of Palmyra. In his work as a computer engineer, educator, artist, musician, cultural heritage researcher, and thought leader, Bassel modeled a more open world, impacting lives globally.

To honor that legacy, the Bassel Khartabil Free Culture Fellowship will support outstanding individuals developing the culture of their communities under adverse circumstances. The Fellowship — organized by Creative Commons, Mozilla, the Wikimedia Foundation, the Jimmy Wales Foundation, #NEWPALMYRA, and others — will launch with a three-year commitment to promote values like open culture, radical sharing, free knowledge, remix, collaboration, courage, optimism, and humanity.

As part of this new initiative, fellows can work in a range of mediums, from art and music to software and community building. All projects will catalyze free culture, particularly in societies vulnerable to attacks on freedom of expression and free access to knowledge. Special consideration will be given to applicants operating within closed societies and in developing economies where other forms of support are scarce. Applications from the Levant and wider MENA region are greatly encouraged.

Throughout their fellowship term, chosen fellows will receive a stipend, mentorship from affiliate organizations, skill development, project promotion, and fundraising support from the partner network. Fellows will be chosen by a selection committee composed of representatives of the partner organizations.

Says Mitchell Baker, Mozilla executive chairwoman: “Bassel introduced me to Damascus communities who were hungry to learn, collaborate and share. He introduced me to the Creative Commons community which he helped found. He introduced me to the open source hacker space he founded, where Linux and Mozilla and JavaScript libraries were debated, and the ideas of open collaboration blossomed. Bassel taught us all. The cost was execution. As a colleague, Bassel is gone. As a leader and as a source of inspiration, Bassel remains strong. I am honored to join with others and echo Bassel’s spirit through this Fellowship.”

Fellowship details

Organizational Partners include Creative Commons, #FREEBASSEL, Wikimedia Foundation, GlobalVoices, Mozilla, #NEWPALMYRA, YallaStartup, the Jimmy Wales Foundation, and SMEX.

Amazon Web Services is a supporting partner.

The Fellowships are based on one-year terms, which are eligible for renewal.

The benefits are designed to allow for flexibility and stability both for Fellows and their families. The standard fellowship offers a stipend of $50,000 USD, paid in 10 monthly installments. Fellows are responsible for remitting all applicable taxes as required.

To help offset cost of living, the fellowship also provides supplements for childcare and health insurance, and may provide support for project funding on a case-by-case basis. The fellowship also covers the cost of required travel for fellowship activities.

Fellows will receive:

  • A stipend of $50,000 USD, paid in 10 monthly installments
  • A one-time health insurance supplement for Fellows and their families, ranging from $3,500 for single Fellows to $7,000 for a couple with two or more children
  • A one-time childcare allotment of up to $6,000 for families with children
  • An allowance of up to $3,000 towards the purchase of a laptop computer, digital cameras, recorders, and computer software; fees for continuing studies or other courses; and research fees or payments, to the extent such purchases and fees are related to the fellowship
  • Coverage in full for all approved fellowship trips, both domestic and international

The first fellowship will be awarded in April 2018. Applications will be accepted beginning February 2018.

Eligibility Requirements. The Bassel Khartabil Free Culture Fellowship is open to individuals and small teams worldwide, who:

  • Propose a viable new initiative to advance free culture values as outlined in the call for applicants
  • Demonstrate a history of activism in the Open Source, Open Access, Free Culture or Sharing communities
  • Are prepared to focus on the fellowship as their primary work

Special consideration will be given to applicants operating under oppressive conditions, within closed societies, in developing economies where other forms of support are scarce, and in the Levant and wider MENA regions.

Eligible Projects. Proposed projects should advance the free culture values of Bassel Khartabil through the use of art, technology, and culture. Successful projects will aim to:

  • Meaningfully increase free public access to human knowledge, art or culture
  • Further the cause of social justice/social change
  • Strive to develop both a local and global community to support its cause

Any code, content or other materials produced must be published and released as free, openly licensed and/or open-source.

Application Process. Project proposals are expected to include the following:

  • Vision statement
  • Bio and CV
  • Budget and resource requirements for the next year of project development

Applicants whose projects are chosen to advance to the next stage in the evaluation process may be asked to provide additional information, including personal references and documentation verifying income.

About Bassel

Bassel Khartabil, a Palestinian-Syrian computer engineer, educator, artist, musician, cultural heritage researcher and thought leader, was a central figure in the global free culture movement, connecting and promoting Syria’s emerging tech community as it existed before the country was ransacked by civil war. Bassel co-founded Syria’s first hackerspace, Aiki Lab, in Damascus in 2010. He was the Syrian lead for Creative Commons as well as a contributor to Mozilla’s Firefox browser and the Red Hat Fedora Linux operating system. His research into preserving Syrian archeology with computer 3D modeling was a seminal precursor to current practices in digital cultural heritage preservation — this work was relaunched as the #NEWPALMYRA project in 2015.

Bassel’s influence went beyond Syria. He was a key attendee at the Middle East’s bloggers conferences and played a vital role in the negotiations in Doha in 2010 that led to a common language for discussing fair use and copyright across the Arab-speaking world. Software platforms he developed, such as the open-source Aiki Framework for collaborative web development, still power high-traffic web sites today, including Open Clip Art and the Open Font Library. His passion and efforts inspired a new community of coders and artists to take up his cause and further his legacy, and resulted in the offer of a research position in MIT Media Lab’s Center for Civic Media; his listing in Foreign Policy’s 2012 list of Top Global Thinkers; and the award of Index on Censorship’s 2013 Digital Freedom Award.

Bassel was taken from the streets in March of 2012 in a military arrest and interrogated and tortured in secret in a facility controlled by Syria’s General Intelligence Directorate. After a worldwide campaign by international human rights groups, together with Bassel’s many colleagues in the open internet and free culture communities, he was moved to Adra’s civilian prison, where he was able to communicate with his family and friends. His detention was ruled unlawful by the United Nations Working Group on Arbitrary Detention, and condemned by international organizations such as Creative Commons, Amnesty International, Human Rights Watch, the Electronic Frontier Foundation, and the Jimmy Wales Foundation.

Despite the international outrage at his treatment and calls for his release, in October of 2015 he was moved to an undisclosed location and executed shortly thereafter — a fact that was kept secret by the Syrian regime for nearly two years.

The post Honoring Our Friend Bassel: Announcing the Bassel Khartabil Free Culture Fellowship appeared first on The Mozilla Blog.

Mozilla Add-ons BlogWebExtensions in Firefox 56

Firefox 56 landed in Beta this week, so it’s time for another update on the WebExtensions transition. Because the development period for this latest release was about twice as long as normal, we have many more updates. Documentation for the APIs discussed here can be found on MDN Web Docs.

API changes

The browsingData API can now remove cookies by host. The initial implementation of browsingData has landed for Android with support for the settings and removeCookies APIs.

The contextMenus API also has a few improvements. The text of the link is now included in the onClickData event, and text selection is no longer limited to 150 characters. Optional permission requests can now also be triggered from context menus.

An alternative, more general namespace was added, called browser.menus. It supports the same API and all existing menu contexts, plus a new one that allows you to add items to the Tools menu. You can also provide different icons for your menu items. For example:

browser.menus.create({
  id: "sort-tabs",
  title: "A-Z",
  contexts: ["tools_menu"],
  icons: {
    16: "icon-16-context-menu.png",
  },
});



The windows API can now read and set a prefix for the window title by passing a titlePreface property to the window object. This allows extensions to label different windows so they’re easier to distinguish.
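As a hypothetical sketch of the call shape (the mock browser object below stands in for the real WebExtensions runtime, which only exists inside Firefox):

```javascript
// Hypothetical sketch: `browser` is normally provided by Firefox itself.
// This minimal mock only records the call so the shape can be shown here.
const calls = [];
const browser = {
  windows: {
    update(windowId, updateInfo) {
      calls.push({ windowId, ...updateInfo });
      // The real API returns a Promise resolving to the updated window.
      return Promise.resolve(calls[calls.length - 1]);
    },
  },
};

// Label this window so it is easy to tell apart from other windows.
browser.windows.update(1, { titlePreface: "[Work] " });
```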

The downloads.open API now requires user interaction before it can be called, mirroring the Chrome API, which has the same requirement. You can now also download a blob created in a background page.

The tabs API has new printing APIs. The tabs.print, tabs.printPreview and tabs.saveAsPDF (not on Mac OS X) methods will bring up the respective print dialogs for the page. The tabs.Tab object now includes the time the tab was lastAccessed.

The webRequest API can now monitor WebSocket connections (but not their messages) by specifying ws:// or wss:// in the match pattern. Similarly, match patterns now support moz-extension URLs, although these only match pages from the same extension. Importantly, an HTTP 302 redirection to a moz-extension page will now work; this is a common use case for extensions that integrate with OAuth.
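A hypothetical sketch of registering such a listener (the mock browser.webRequest object stands in for the real WebExtensions runtime, which only exists inside Firefox; the interesting part is the ws:// and wss:// match patterns in the filter):

```javascript
// Hypothetical sketch: `browser` is normally provided by Firefox itself.
// This minimal mock only records the filter passed to addListener.
const registeredFilters = [];
const browser = {
  webRequest: {
    onBeforeRequest: {
      addListener(listener, filter) {
        registeredFilters.push(filter);
      },
    },
  },
};

browser.webRequest.onBeforeRequest.addListener(
  (details) => console.log("WebSocket connection:", details.url),
  { urls: ["ws://*/*", "wss://*/*"] } // ws/wss patterns are new here
);
```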

The pageAction API can now be shown on a per-tab basis on Android.

The privacy API gained two new APIs. The privacy.services.passwordSavingEnabled API allows an extension to toggle the preferences that control password saving. The privacy.websites.referrersEnabled API allows an extension to toggle the preferences that control the sending of HTTP Referrer headers.

A new browserSettings API has been added, starting with a setting that disables the browser’s cache. We’ll use this API for similar settings in the future.

In WebExtensions, we manage preference changes for extensions, including reverting them when an extension is uninstalled. This management has now been applied to chrome_url_overrides, and it also prevents extensions from overriding preferences that the user has changed.

The theming API gained a reset method which can be called after an update to reset Firefox to the default theme.

The proxy API now has the ability to clear out a previously registered proxy.

If you’d like to store a large amount of data in indexedDB (something we recommend over storage.local), you can do so by requesting the unlimitedStorage permission. Requesting it stops indexedDB from prompting the user for permission to store a large amount of data.
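For illustration, a minimal sketch of how the permission might be declared in an extension’s manifest.json (the name and other fields are placeholders):

```json
{
  "manifest_version": 2,
  "name": "example-extension",
  "version": "1.0",
  "permissions": ["unlimitedStorage"]
}
```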

The management API has added get and getAll commands. This allows extensions to query existing add-ons to spot any potential conflicts with other content.

Finally, the devtools.panels.elements.onSelectionChanged API landed and extensions that use the developer tools will find that their panels open faster.

Out of process extensions

We first mentioned out of process extensions back in the WebExtensions in Firefox 52 blog post. They’ve been a project that started back in 2016, but they have now been turned on for Windows users in Firefox 56. This is a huge milestone and a lot of work from the team.

This means that all WebExtensions will run in their own process (one process shared by all extensions). This has many advantages, but chief among them are performance, security, and crash handling. For example, a crash in a WebExtension will no longer bring down Firefox. Content scripts from WebExtensions are still handled by the content process.

With the new WebExtensions architecture, this change required zero changes from extension developers, a significant improvement over the legacy extension environment.

There are some remaining bugs on Linux and OS X that prevent us from enabling it there, but we hope to do so in the coming releases.

Along with measuring the performance of out-of-process extensions, we’ve added multiple telemetry signals to measure the performance of WebExtensions. For example, it was recently found that storage.local.set was slow. With some improvements, we’ve seen a significant performance boost, from a median of over 200ms down to around 25ms.

These telemetry measures conform to the standard Mozilla telemetry guidelines.

about:debugging

The about:debugging page got some more improvements:

The add-on ID has been added to the page. If there’s a warning about processing the add-on, it will now be shown next to the extension. And, perhaps most useful to those working on their first add-on: if an add-on fails to load because of a problem, there’s now an easy “retry” button for you to press.

Contributors

Thank you once again to our many contributors for this release, especially our volunteers including: Cameron Kaiser, dw-dev, Giorgio Maone, Swapnesh Kumar Sahoo, Timothy Johnson, Tushar Saini and Tomislav Jovanovic.

Update: improved the quality of the image for context menus.

The post WebExtensions in Firefox 56 appeared first on Mozilla Add-ons Blog.

Mozilla Add-ons BlogUpcoming Changes in Compatibility Features

Firefox 57 is now on the Nightly channel (along with a shiny new logo!). And while it isn’t disabling legacy add-ons just yet, it will soon. There should be no expectation of legacy add-on support on this or later versions. In preparation for Firefox 57, a number of compatibility changes are being implemented on addons.mozilla.org (AMO) to support this transition.

Upcoming Compatibility Changes

  • All legacy add-ons will have strict compatibility set, with a maximum version of 56.*. This is the end of the line for legacy add-on compatibility. They can still be installed on Nightly with some preference changes, but may break due to other changes happening in Firefox.
  • Related to this, you won’t be able to upload legacy add-ons that have a maximum version set higher than 56.*.
  • It will be easier to find older versions of add-ons when the latest one isn’t compatible. Some developers will be submitting ports to the WebExtensions API that depend on very recent API developments, so they may need to set a minimum version of 56.0 or 57.0. That can make it difficult for users of older versions of Firefox to find a compatible version. To address this, compatibility filters on search will be off by default. Also, we will give more prominence to the All Versions page, where older versions of the add-on are available.
  • Add-ons built with WebExtensions APIs will eventually show up higher on search rankings. This is meant to reduce instances of users installing add-ons that will break within a few weeks.

We will be rolling out these changes in the coming weeks.

Add-on compatibility is one of the most complex AMO features, so it’s possible that some things won’t work exactly right at first. If you run into any compatibility issues, please file them here.

The post Upcoming Changes in Compatibility Features appeared first on Mozilla Add-ons Blog.

Air MozillaReps Weekly Meeting Aug. 10, 2017

Reps Weekly Meeting Aug. 10, 2017 This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Mozilla L10NMaking unreviewed suggestions easier to find

Translation review is a vital part of localization. The first step of each review process is to find suggestions that haven’t been reviewed yet. Since this fundamental task can be quite tedious in Pontoon, we’re making it better!

Problem

You can easily filter the so-called Suggested strings, which have at least one suggestion but none of them approved yet. Those are definitely suggestions you should review.

But there might be others, which are harder to find.

If a string has an approved translation and a few suggestions, when a new suggestion comes, it’s almost impossible to discover it. You can use the Has Suggestions filter, which lists all strings with at least one suggestion, but you will not be able to distinguish the new suggestion from the already reviewed ones.

To overcome this drawback, you can delete all suggestions that you decide not to approve, after which the Has Suggestions filter will only show strings with unreviewed suggestions. But deleting translations is not a good practice if we want to keep translation history and user statistics accurate.

Solution

Today, we’re removing the ability to delete translations and adding the ability to reject them. A translation can be rejected explicitly with a click on the reject icon or as part of a mass action. When a suggestion is approved or an approved translation is submitted, all remaining suggestions automatically become rejected.

The three review states: approved, unreviewed, rejected.

That means we’re effectively splitting translations into 3 groups – approved, unreviewed and rejected, which allows us to introduce the Unreviewed Suggestions filter. This filter finally makes it easy to list all suggestions needing a review.

Important note: To make the filter usable out of the box, all existing suggestions have been automatically rejected if an approved translation was available and approved after the suggestion has been submitted. Without this change, many locales would end up with thousands of unreviewed suggestions.

The final step in making unreviewed suggestions truly discoverable is to show them in dashboards. We’ll fix that as part of bug 1377969. Also, we’ll be updating the documentation soon to reflect these changes.

Adrian

A huge shout-out to Adrian Gaudebert who contributed the patch. Adrian joined the Pontoon team in July and is a long time web developer at Mozilla. He helped with Elmo in the past and most recently worked on Mozilla Crash Reports. We can’t wait to see what inventions he comes up with next!

Air MozillaThe Joy of Coding - Episode 109

The Joy of Coding - Episode 109 mconley livehacks on real Firefox bugs while thinking aloud.

Mozilla Add-ons BlogFriend of Add-ons: Santosh Viswanatham

Our newest Friend of Add-ons is Santosh Viswanatham! Santosh attended a regional event hosted by Mozilla Rep Srikar Ananthula in 2012 and has been an active leader in the community ever since. Having previously served as a Firefox Student Ambassador and Regional Ambassador Lead, he is currently a Tech Speaker and a member of the Mozilla Campus Clubs Advisory Committee, where he is helping develop an activity for building extensions for Firefox.

Santosh has brought his considerable enthusiasm for open source software to the add-ons community. Earlier this year, he served a six-month term as a member of the Featured Add-ons Advisory Board, where he helped nominate and select extensions to be featured on addons.mozilla.org each month. Additionally, Santosh hosted a hackathon in Hyderabad, India, where 100 developers spent the night creating more than 20 extensions.

When asked to describe his experience contributing to Mozilla, Santosh says:

“It has been a wonderful opportunity to work with like-minded incredible people. Contributing to Mozilla gave me an opportunity to explore myself and stretched my limits working around super cool technologies. I learned tons of things about technology and communities, improved my skill set, received global exposure, and made friends for a lifetime by contributing to Mozilla.”

In his free time, Santosh enjoys dining out at roadside eateries, spending time with friends, and watching TV shows and movies.

Congratulations, Santosh, and thank you for all of your contributions!

Are you a contributor to the add-ons community or know of someone who should be recognized? Please be sure to add them to our Recognition Wiki!

The post Friend of Add-ons: Santosh Viswanatham appeared first on Mozilla Add-ons Blog.

The Mozilla BlogThe Mozilla Information Trust Initiative: Building a movement to fight misinformation online

Today, we are announcing the Mozilla Information Trust Initiative (MITI)—a comprehensive effort to keep the Internet credible and healthy. Mozilla is developing products, research, and communities to battle information pollution and so-called ‘fake news’ online. And we’re seeking partners and allies to help us do so.

Here’s why.

Imagine this: Two news articles are shared simultaneously online.

The first is a deeply reported and thoroughly fact checked story from a credible news-gathering organization. Perhaps Le Monde, the Wall Street Journal, or Süddeutsche Zeitung.

The second is a false or misleading story. But the article is designed to mimic content from a credible newsroom, from its headline to its dissemination.

How do the two articles fare?

The first article—designed to inform—receives limited attention. The second article—designed for virality—accumulates shares. It exploits cognitive bias, belief echoes, and algorithmic filter bubbles. It percolates across the Internet, spreading misinformation.

This isn’t a hypothetical scenario—it’s happening now in the U.S., in the U.K., in France, in Germany, and beyond. The Pope did not endorse a U.S. presidential candidate, nor does India’s 2000-rupee note contain a tracking device. But fabricated content, misleading headlines, and false context convinced millions of Internet users otherwise.

The impact of misinformation on our society is one of the most divisive, fraught, and important topics of our day. Misinformation depletes transparency and sows discord, erodes participation and trust, and saps the web’s public benefit. In short: it makes the Internet less healthy. As a result, the Internet’s ability to power democratic society suffers greatly.

This is why we’re launching MITI. We’re investing in people, programs, and projects that disrupt misinformation online.

Why Mozilla? The spread of misinformation violates nearly every tenet of the Mozilla Manifesto, our guiding doctrine. Mozilla has a long history of putting community and principles first, and devoting resources to urgent issues—our Firefox browser is just one example. Mozilla is committed to building tolerance rather than hate, and building technology that can protect individuals and the web.

So we’re drawing on the unique depth and breadth of the Mozilla Network—from journalists and technologists to policymakers and scientists—to build functional products, research, and community-based solutions.

Misinformation is a complex problem with roots in technology, cognitive science, economics, and literacy. And so the Mozilla Information Trust Initiative will focus on four areas:

Product

Mozilla’s Open Innovation team will work with like-minded technologists and artists to develop technology that combats misinformation.

Mozilla will partner with global media organizations to do this, and also double down on our existing product work in the space, like Pocket, Focus, and Coral. Coral is a Mozilla project that builds open-source tools to make digital journalism more inclusive and more engaging.

Literacy

We can’t solve misinformation with technology alone—we also need to educate and empower Internet users, as well as those leading innovative literacy initiatives.

Mozilla will develop a web literacy curriculum that addresses misinformation, and will continue investing in existing projects like the Mission: Information teaching kit.

Research

Misinformation in the digital age is a relatively new phenomenon. To solve such a thorny problem, we first need to fully understand it.

Later this year, Mozilla will be releasing original research on how misinformation impacts users’ experiences online. We will be drawing on a dataset of user-level browsing data gathered during the 2016 U.S. elections.

Creative interventions

Mozilla will field and fund pitches from technologists who are combatting misinformation using various mediums, including virtual reality and augmented reality. It’s an opportunity to apply emerging technology to one of today’s most pressing issues.

Imagine: an augmented reality web app that uses data visualization to investigate misinformation’s impact on Internet health. Or, a virtual reality experience that takes users through the history of misinformation online.

Mozilla will also support key events in this space, like Media Party Argentina, the Computation+Journalism Symposium, the Online News Association, the 22×20 summit, and a MisinfoCon in London as part of MozFest. (To learn more about MozFest — Mozilla’s annual, flagship event devoted to Internet health — visit mozillafestival.org.)

We’re hoping to hear from and work with partners who share our vision. Please reach out to Phillip Smith, Mozilla’s Senior Fellow on Media, Misinformation & Trust, at miti@mozilla.com to get involved.

More than ever, we need a network of people and organizations devoted to understanding, and combatting, misinformation online. The health of the Internet — and our societies — depends on it.

The post The Mozilla Information Trust Initiative: Building a movement to fight misinformation online appeared first on The Mozilla Blog.

hacks.mozilla.orgFirefox 55: first desktop browser to support WebVR

WebVR Support on Desktop

Firefox on Windows is the first desktop browser to support the new WebVR standard (and macOS support is in Nightly!). As the originators of WebVR, Mozilla wanted it to embody the same principles of standardization, openness, and interoperability that are hallmarks of the Web, which is why WebVR works on any device: Vive, Rift, and beyond.

To learn more, check out vr.mozilla.org, or dive into A-Frame, an open source framework for building immersive VR experiences on the Web.

New Features for Developers

Firefox 55 supports several new ES2017/2018 features, including async generators and the rest/spread (“...”) operator for objects:

let a = { foo: 1, bar: 2 };
let b = { bar: 'two' };
let c = { ...a, ...b }; // { foo: 1, bar: 'two' };

MDN has great documentation on using ... with object literals or for destructuring assignment, and the TC39 proposal also provides a concise overview of this feature.
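Rest syntax also works in the other direction: when destructuring, it collects the remaining properties into a new object. For example (names here are illustrative):

```javascript
let user = { name: "pat", admin: true, theme: "dark" };

// Pull out `admin`; gather everything else into `prefs`.
let { admin, ...prefs } = user;

console.log(admin); // true
console.log(prefs); // { name: "pat", theme: "dark" }
```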

Over in DevTools, the Network panel now supports filtering results with queries like “status-code:200”.

Screenshot showing the Firefox DevTools' Network panel with a filter on status-code:304, and a pop-up showing the new columns that are available.

There are also new, optional columns for cookies, protocol, scheme, and more that can be hidden or shown inside the Network panel, as seen in the screenshot above.

Making Firefox Faster

We’ve implemented several new features to keep Firefox itself running quickly:

  • New installations of Firefox on Windows will now default to the more stable and secure 64-bit version. Existing installations will upgrade to 64-bit with our next release, Firefox 56.
  • Restoring a session or restarting Firefox with many tabs open is now an order of magnitude faster. For reasons unknown, Dietrich Ayala has a Firefox profile with 1,691 open tabs. With Firefox 54, starting up his instance of Firefox took 300 seconds and 2 GB of memory. Today, with Firefox 55, it takes just 15 seconds and 0.5 GB of memory. This improvement is primarily thanks to the tireless work of an external contributor, Kevin Jones, who virtually eliminated the fixed costs associated with restoring tabs.
  • Users can now adjust Firefox’s number of content processes from within Preferences. Multiple content processes debuted in Firefox 54, and allow Firefox to take better advantage of modern, multi-core CPUs, while still being respectful of RAM utilization.
  • Firefox now uses its built-in Tracking Protection lists to identify and throttle tracking scripts running in background pages. After a short grace period, Firefox will increase the minimum setInterval or setTimeout for callbacks scheduled by tracking scripts to 10 seconds while the tab is in the background. This is in addition to our usual 1 second throttling for background tabs, and helps ensure that unused tabs can’t invisibly ruin performance or battery life. Of course, tabs that are playing audio or video are not throttled, so music in a background tab won’t stutter.
  • With the announcement of Flash’s end of life, and in coordination with Microsoft and Google, Firefox 55 now requires users to explicitly click to activate Flash on web pages as we work together toward completely removing Flash from the Web platform in 2020.

Making the Web Faster

Firefox 55 introduces several new low-level capabilities that help improve the performance of demanding web applications:

See the Pen Hello IntersectionObserver by Dan Callahan (@callahad) on CodePen.

  • SharedArrayBuffer and Atomics objects are new JavaScript primitives that allow workers to share and simultaneously access the same memory. This finally makes efficient multi-threading a reality on the Web. The only downside? Developers have to care about thread safety, mutexes, etc. when sharing memory, just like in any other multi-threaded language. You can learn more about SharedArrayBuffer in this code cartoon introduction and this explainer article from last year.
  • The requestIdleCallback() API offers a new way to schedule callbacks whenever the browser has a few extra, unused milliseconds between frames, or whenever a maximum timeout has elapsed. This makes it possible to squeeze work into the margins where the browser would otherwise be idle, and to defer lower priority work while the browser is busy. Using this API requires a bit of finesse, but MDN has great documentation on how to use requestIdleCallback() effectively.
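As a single-threaded sketch of the Atomics primitives (in real code the SharedArrayBuffer would be handed to a Worker via postMessage so two threads share it; the worker setup is omitted here):

```javascript
// One shared 32-bit integer slot backed by a SharedArrayBuffer.
const sab = new SharedArrayBuffer(4);
const counter = new Int32Array(sab);

Atomics.store(counter, 0, 0); // initialize slot 0
Atomics.add(counter, 0, 5);   // atomically add 5
Atomics.add(counter, 0, 5);   // and again

console.log(Atomics.load(counter, 0)); // 10
```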

Making the Web More Secure

Geolocation and Storage join the ranks of powerful APIs like Service Workers that are only allowed on secure, https:// origins. If your site needs a TLS certificate, consider Let’s Encrypt: a completely free, automated, and non-profit Certificate Authority.

Additionally, Firefox 55 will not allow plug-ins to load from or on non-HTTP/S schemes, such as file:.

New WebExtension APIs

WebExtensions can now:

And more…

There are many more changes in the works as we get ready for the next era of Firefox in November. Some users of Firefox 55 will begin seeing our new Firefox Screenshots feature, the Bookmarks / History sidebar can now be docked on either side of the browser, and we just announced three new Test Pilot experiments.

For a complete overview of what’s new, refer to the official Release Notes, MDN’s Firefox 55 for Developers, and the Mozilla Blog announcement.

The Mozilla BlogFirefox Is Better, For You. WebVR and new speedy features launching today in Firefox

Perhaps you’re starting to see a pattern – we’re working furiously to make Firefox faster and better than ever. And today we’re shipping a new release that’s our best yet, one that introduces exciting, empowering new technologies for creators as well as improves the everyday experience for all Firefox users.

Here’s what’s new today:

WebVR opens up a whole new world for the WWW

On top of Firefox’s new super-fast multi-process foundation, today we’re launching a breakthrough feature that expands the web to an entirely new experience. Firefox for Windows is the first desktop browser to support WebVR for all users, letting you experience next-generation entertainment in virtual reality.

WebVR enables developers and artists to create web-based VR experiences you can browse to with Firefox. So whether you’re a current Oculus Rift or HTC Vive owner – or still deciding when you’re going to take the VR leap – Firefox can get you to your VR fix faster. Once you find a web game or app that supports VR, you can experience it with your headset just by clicking the VR goggles icon visible on the web page. You can navigate and control VR experiences with handset controllers and your movements in physical space.

For a look at what WebVR can do, check out this sizzle reel (retro intro intended!).

If you’re ready to try out VR with Firefox, a growing community of creators has already been building content with WebVR. Visit vr.mozilla.org to find some experiences we recommend, many made with A-Frame, an easy-to-use WebVR content creation framework made by Mozilla. One of our favorites is A Painter, a VR painting experience. None of this would have been possible without the hard work of the Mozilla VR team, who collaborated with industry partners, fellow browser makers and the developer community to create and adopt the WebVR specification. If you’d like to learn more about the history and capabilities of WebVR, check out this Medium post by Sean White.

Performance Panel – fine-tune browser performance

Our new multi-process architecture allows Firefox to easily handle complex websites, particularly when you have many of them loaded in tabs. We believe we’ve struck a good balance for most computers, but for those of you who are tinkerers, you can now adjust the number of processes up or down in this version of Firefox. This setting is at the bottom of the General section in Options.

Tip: if your computer has lots of RAM (e.g., more than 8GB), you might want to try bumping up the number of content processes that Firefox uses from its default four. This can make Firefox even faster, although it will use more memory than it does with four processes. But, in our tests on Windows 10, Firefox uses less memory than Chrome, even with eight content processes running.

Faster startup when restoring lots of tabs

Are you a tab hoarder? As part of our Quantum Flow project to improve performance, we’ve significantly reduced the time it takes to start Firefox when restoring tabs from a previous session. Just how much faster are things now? Mozillian Dietrich Ayala ran an interesting experiment, comparing how long it takes to start various versions of Firefox with a whopping 1,691 tabs open. The end result? What used to take nearly eight minutes now takes just 15 seconds.

A faster and more stable Firefox for 64-bit Windows

If you’re running the 64-bit version of Windows (here’s how to check), you might want to download and reinstall Firefox today. That’s because new downloads on 64-bit Windows will install the 64-bit version of Firefox, which is much less prone to running out of memory and crashing. In our tests so far, the 64-bit version of Firefox reduces crashes by 39% on machines with 4GB of RAM.

If you don’t manually upgrade, no worries. We intend to automatically migrate 64-bit Windows users to 64-bit Firefox in our next release.

A faster way to search

We’re all searching for something. Sometimes that thing is a bit of information – like a fact you can glean from Wikipedia. Or, maybe it’s a product you hope to find on Amazon, or a video on YouTube.

With today’s Firefox release, you can quickly search using many websites’ search engines, right from the address bar. Just type your query, and then click which search engine you’d like to use.

Out of the box, you can easily search with Yahoo, Google, Bing, Amazon, DuckDuckGo, Twitter, and Wikipedia. You can customize this list of search engines in settings.

Even more

Here are a few more interesting improvements shipping today:

  • Parts of a web page that use Flash must now be clicked and given permission to run. This improves battery life, security, and stability, and is a step towards Flash end-of-life.
  • You can now move the sidebar to the right side of the window.
  • Firefox for Android is now translated in Greek and Lao.
  • Simplify print jobs from within print preview.

As usual, you can see everything new in the release notes, and developers can read about new APIs on the Mozilla Hacks Blog.

We’ll keep cranking away – much more to come!


The post Firefox Is Better, For You. WebVR and new speedy features launching today in Firefox appeared first on The Mozilla Blog.

hacks.mozilla.orgWebVR for All Windows Users

With the release of Firefox 55 on August 8, Mozilla is pleased to make WebVR 1.1 available for all 64-bit Windows users with an Oculus Rift or HTC VIVE headset. Since we first announced this feature two months ago, we’ve seen tremendous growth in the tooling, art content, and applications being produced for WebVR – check out some highlights in this showcase video:

Sketchfab also just announced support for exporting their 3D models into the glTF format and has over 100,000 models available for free download under Creative Commons licensing, so it’s easier to bring high-quality art assets into your WebVR scenes with libraries such as three.js and Babylon.js and know that they will just work.

Sketchfab is also one of the first sites to take advantage of WebVR for an animated short, highlighting how the openness of URLs and link traversal can be used to build awesome in-VR experiences within web content.

The growth in numbers of new users having their first experiences with WebVR content has been phenomenal as well. In the last month, we have seen over 13 million uses of the A-Frame library, started here at Mozilla to make it easier for web developers, designers and people of all backgrounds to create WebVR content.

We can’t wait to see what you will build with WebVR. Please show off what you’re doing by tweeting to @MozillaVR or saying hi in the WebVR Slack.

Stay tuned for an upcoming A-Frame contest announcement with even more opportunities to learn, experiment, and get feedback!

Mozilla L10NCreate a localized build locally

Yesterday we changed the way that you create localized builds on mozilla-central.

This works for developers doing regular builds, as well as developers or localizers without a compile environment. Sadly, users of artifact builds are not supported.

For language packs, a mere

./mach build langpack-de

will work. If you’d rather build a localized package, you’ll want to get the package first. If you’re building yourself, that’s

./mach package

and if you want to get a Nightly build from archive.mozilla.org, just

./mach build wget-en-US

If you want to do that for Firefox for Android, you’ll need to specify which platform you want. Set EN_US_BINARY_URL to the latest-mozilla-central-* path for the binary you want to test.

And then you just

./mach build installers-fr

That’ll take care of getting the French l10n repository and do all the necessary things to get you a nice little installer/package in dist. Pick your favorite language from our repositories. Care for an RTL build? ./mach build installers-fa will get you a Persian one 😉.
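For Firefox for Android, the steps above can be sketched as follows. Note this is a hedged example: the directory name in the URL is a placeholder, since the actual latest-mozilla-central-* path depends on which Android binary you want to test.

```shell
# Placeholder path: substitute the real latest-mozilla-central-* directory
# for the Android binary you want to test on archive.mozilla.org.
export EN_US_BINARY_URL="https://archive.mozilla.org/pub/mobile/nightly/latest-mozilla-central-PLATFORM"

# Then build the localized installer as usual.
./mach build installers-fr
```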

As with other repositories we clone into ~/.mozbuild, you’ll want to update those every now and then. They’re in l10n-central/*, a repository for each language you tried.

Documentation is on gecko.rtd; bugs go here. This works for Firefox, Firefox for Android, and Thunderbird.

And now you can safely forget all the things you never wanted to know about localized builds.

Air MozillaIntern Presentations: Round 3: Thursday, August 3rd

Intern Presentations: Round 3: Thursday, August 3rd Intern Presentations 10 presenters Time: 1:00PM - 3:30PM (PDT) - each presenter will start every 15 minutes 8 SF, 2 PDX

Air MozillaReps Weekly Meeting Aug. 03, 2017

Reps Weekly Meeting Aug. 03, 2017 This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Mozilla Add-ons BlogExtension Examples: See the APIs in Action

In the past year, we’ve added a tremendous amount of add-on documentation to MDN Web Docs. One resource we’ve spent time building out is the Extension Examples repository on GitHub, where you can see sample extension code using various APIs. This is helpful for seeing how WebExtensions APIs are used in practice, and it is especially helpful for people just getting started building extensions.

To make the example extensions easier to understand, there is a short README page for each example. There is also a page on MDN Web Docs that lists the JavaScript APIs used in each example.

With the work the Firefox Developer Tools team has completed for add-on developers, it is easier to temporarily install extensions in Firefox for debugging purposes. Feel free to try it out with the example extensions.

As we ramp up our efforts for Firefox 57, expect more documentation and examples to be available on MDN Web Docs and our GitHub repository. There are currently 47 example extensions, and you can help grow it by following these instructions.

Let us know in the comments if you find these examples useful, or contact us using these methods. We encourage you to contribute your own examples as well!

Thank you to all who have contributed to growing the repository.

The post Extension Examples: See the APIs in Action appeared first on Mozilla Add-ons Blog.

Open Policy & AdvocacyFighting Crime Shouldn’t Kill the Internet

The internet has long been a vehicle for creators and commerce. Yesterday, the Senate introduced a bill that would impose significant limitations on protections that have created vibrant online communities and content platforms, and allow users to create and consume uncurated material. While well intentioned, the liabilities placed on intermediaries in the bill would chill online speech and commerce. This is a counterproductive way to address sex trafficking, the ostensible purpose of the bill.

The internet, from its inception, started as a place to foster platforms and creators. In 1996 a law was passed that was intended to limit illegal content online – the Communications Decency Act (CDA). However, section 230 of the CDA provided protections for intermediaries: if you don’t know about particular illegal content, you aren’t held responsible for it. Intermediaries include platforms, websites, ISPs, and hosting providers, who as a result of CDA 230 are not held responsible for the actions of users. Section 230 is one of the reasons that YouTube, Facebook, Medium and online commenting systems can function without the technical burden or legal risk of screening every piece of user-generated content. Online platforms – love ‘em or hate ‘em – have enabled millions of less technical creators to share their work and opinions.

A fundamental part of the CDA is that it only punishes “knowing conduct” by intermediaries. This protection is missing from the changes this new bill proposes to CDA 230. The authors of the bill appear to be trying to preserve this core balance – but they don’t add the “knowing conduct” language back into the CDA. Because they put it in the sex trafficking criminal statute instead, only Federal criminal cases would need to show that the site knew about the problematic content. The bill would introduce gaps in liability protections into CDA 230 that are not so easily covered. State laws can target intermediary behavior too, and without a “knowing conduct” standard in the CDA directly, platforms of all types could be held liable for conduct of others that they know nothing about. This is also true of the (new) Federal civil right of action that this bill introduces. That means a small drafting choice strikes at the heart of the safe harbor provisions that make CDA 230 a powerful driver of the internet.

This bill is not well scoped to solve the problem, and does not impact the actual perpetrators of sex trafficking. Counterintuitively, it disincentivizes content moderation by removing the safe harbor around moderation (including automated moderation) that companies develop, including to detect illegal content like trafficking. And why would a company want to help law enforcement find the criminal content on their service when someone is going to turn around and sue them for having had it in the first place? Small and startup companies who are relying on the safe harbor to be innovative would face a greater risk environment for any user activity they facilitate. And users would have a much harder time finding places to do business, create, and speak.

The bill claims that CDA was never intended to protect websites that promote trafficking – but it was carefully tailored to ensure that intermediaries are not responsible for the conduct of their users. It has to work this way in order for the internet we know and love to exist. That doesn’t mean law enforcement can’t do its job – the CDA was built to provide ways to go after the bad guys (and to incentivize intermediaries to help). The proposed bill doesn’t do that.

The post Fighting Crime Shouldn’t Kill the Internet appeared first on Open Policy & Advocacy.

about:communityFirefox 55 new contributors

With the release of Firefox 55, we are pleased to welcome the 108 developers who contributed their first code change to Firefox in this release, 89 of whom were brand new volunteers! Please join us in thanking each of these diligent and enthusiastic individuals, and take a look at their contributions:

hacks.mozilla.orgIntersection Observer comes to Firefox

What do infinite scrolling, lazy loading, and online advertisements all have in common?

They need to know about—and react to—the visibility of elements on a page!

Unfortunately, knowing whether or not an element is visible has traditionally been difficult on the Web. Most solutions listen for scroll and resize events, then use DOM APIs like getBoundingClientRect() to manually calculate where elements are relative to the viewport. This usually works, but it’s inefficient and doesn’t take into account other ways in which an element’s visibility can change, such as a large image finally loading higher up on the page, which pushes everything else downward.

Things get worse for advertisements, since real money is involved. As Malte Ubl explained in his presentation at JSConf Iceland, advertisers don’t want to pay for ads that never get displayed. To make sure they know when ads are visible, they cover them in dozens of tiny, single-pixel Flash movies whose visibility can be inferred from their framerate. On platforms without Flash, like smartphones, advertisers set up timers to force browsers to recalculate the position of each ad every few milliseconds.

These techniques kill performance, drain batteries, and would be completely unnecessary if the browser could just notify us whenever an element’s visibility changed.

That’s what IntersectionObserver does.

Hello, new IntersectionObserver()

At its most basic, the IntersectionObserver API looks something like:

let observer = new IntersectionObserver(handler);
observer.observe(target); // <-- Element to watch

The demo below shows a simple handler in action.

See the Pen Hello IntersectionObserver by Dan Callahan (@callahad) on CodePen.

A single observer can watch many target elements simultaneously; just repeat the call to observer.observe() for each target.
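To make that concrete, here is a minimal sketch of a multi-target observer. The class names and the pure helper function are hypothetical, not part of the API; the helper is factored out so the reporting logic is easy to follow separately from the DOM wiring.

```javascript
// Pure helper: the handler receives only the entries whose intersection
// changed; report each target and whether any part of it is intersecting.
function visibilityChanges(entries) {
  return entries.map(entry => [entry.target, entry.isIntersecting]);
}

// In a browser (hypothetical class names):
// const observer = new IntersectionObserver(entries => {
//   for (const [el, visible] of visibilityChanges(entries)) {
//     el.classList.toggle("on-screen", visible);
//   }
// });
// document.querySelectorAll(".watched").forEach(el => observer.observe(el));
```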

Intersection? I thought this was about visibility?

By default, IntersectionObservers calculate how much of a target element overlaps (or “intersects with”) the visible portion of the page, also known as the browser’s “viewport:”

Illustration of a target element partially intersecting with a browser's viewport

However, observers can also monitor how much of an element intersects with an arbitrary parent element, regardless of actual on-screen visibility. This can be useful for widgets that load content on demand, like an infinitely scrolling list inside a container div. In those cases, the widget could use IntersectionObservers to help load just enough content to fill its container.

For simplicity, the rest of this article will discuss things in terms of “visibility,” but remember that IntersectionObservers aren’t necessarily limited to literal visibility.

Handler basics

Observer handlers are callbacks that receive two arguments:

  1. A list of IntersectionObserverEntry objects, each containing metadata about how a target’s intersection has changed since the last invocation of the handler.
  2. A reference to the observer itself.

Observers default to monitoring the browser’s viewport, which means the demo above just needs to look at the isIntersecting property to determine if any part of a target element is visible.

By default, handlers only run at the moment when target elements transition from being completely off-screen to being partially visible, or vice versa, but what if you want to distinguish between partially-visible and fully-visible elements?

Thresholds to the rescue!

Working with Thresholds

In addition to a handler callback, the IntersectionObserver constructor can take an object with several configuration options for the observer. One of these options is threshold, which defines breakpoints for invoking the handler.

let observer = new IntersectionObserver(handler, {
    threshold: 0 // <-- This is the default
});

The default threshold is 0, which invokes the handler whenever a target becomes partially visible or completely invisible. Setting threshold to 1 would fire the handler whenever the target flips between fully visible and partially visible, and setting it to 0.5 would fire when the target passes the point of 50% visibility, in either direction.

You can also supply an array of thresholds, as shown by threshold: [0, 1] in the demo below:

See the Pen IntersectionObserver Thresholds by Dan Callahan (@callahad) on CodePen.

Slowly scroll the target in and out of the viewport and observe its behavior.

The target starts fully visible—its intersectionRatio is 1—and changes twice as it scrolls off the screen: once to something like 0.87, and then to 0. As the target scrolls back into view, its intersectionRatio changes to 0.05, then 1. The 0 and 1 make sense, but where did the additional values come from, and what about all of the other numbers between 0 and 1?

Thresholds are defined in terms of transitions: the handler fires whenever the browser notices that a target’s intersectionRatio has grown or shrunk past one of the thresholds. Setting the thresholds to [0, 1] tells the browser “notify me whenever a target crosses the lines of no visibility (0) and full visibility (1),” which effectively defines three states: fully visible, partially visible, and not visible.

The observed value of intersectionRatio varies from test to test because the browser must wait for an idle moment before checking and reporting on intersections; those sorts of calculations happen in the background at a lower priority than things like scrolling or user input.

Try editing the codepen to add or remove thresholds. Watch how it changes when and where the handler runs.
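The transition rule above can be modeled with a small pure function. This is an illustrative sketch of the threshold semantics, not a browser API: given the previous and current intersectionRatio, it returns which configured thresholds were crossed (the browser fires the handler when at least one is).

```javascript
// Illustrative only: which thresholds lie between the old and new ratio?
function crossedThresholds(prevRatio, currRatio, thresholds) {
  if (prevRatio === currRatio) return []; // no change, no crossing
  const lo = Math.min(prevRatio, currRatio);
  const hi = Math.max(prevRatio, currRatio);
  return thresholds.filter(t => t >= lo && t <= hi);
}

crossedThresholds(1, 0.87, [0, 1]);       // → [1]   (starts scrolling off)
crossedThresholds(0.87, 0, [0, 1]);       // → [0]   (fully off-screen)
crossedThresholds(0.3, 0.6, [0, 0.5, 1]); // → [0.5] (passes the 50% line)
```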

Other options

The IntersectionObserver constructor can take two other options:

  • root: The area to observe (default: the browser viewport).
  • rootMargin: How much to shrink or expand the root’s logical size when calculating intersections (default: "0px 0px 0px 0px").

Changing the root allows an observer to check for intersection with respect to a parent container element, instead of just the browser’s viewport.

Growing the observer’s rootMargin makes it possible to detect when a target nears a given region. For example, an observer could wait to load off-screen images until just before they become visible.
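Here is a hedged sketch of that lazy-image pattern. The 200px margin, the data-src convention, and the selector are assumptions for illustration, not part of the API; the handler is factored out so the loading logic can be followed without the DOM wiring.

```javascript
// Assumed config: grow the viewport's bottom edge so images load 200px early.
const lazyOptions = {
  root: null,                       // null means "use the browser viewport"
  rootMargin: "0px 0px 200px 0px",  // expand only the bottom edge
  threshold: 0,
};

// Build a handler that loads each target once, then stops watching it.
function makeLazyHandler(load) {
  return (entries, observer) => {
    for (const entry of entries) {
      if (entry.isIntersecting) {
        load(entry.target);               // e.g. copy data-src into src
        observer.unobserve(entry.target); // each image only loads once
      }
    }
  };
}

// In a browser (hypothetical markup using data-src):
// const io = new IntersectionObserver(
//   makeLazyHandler(img => { img.src = img.dataset.src; }),
//   lazyOptions
// );
// document.querySelectorAll("img[data-src]").forEach(img => io.observe(img));
```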

Browser support

IntersectionObserver is available by default in Edge 15, Chrome 51, and Firefox 55, which is due for release next week.

A polyfill is available which works effectively everywhere, albeit without the performance benefits of native implementations.

Additional Resources:

Mozilla L10NL10n Report: August Edition

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet.

Welcome!

New localizers

Are you a locale leader and want us to include new members in our upcoming reports? Contact us!

Important updates

Mozilla’s Pootle instance is closing down on September 1st, we’ll move existing active localizations to Pontoon. Read this blog post if you’re interested in more details.

New content and projects

What’s new or coming up in Firefox desktop

New content to localize for Firefox desktop is mostly focusing around two areas:

  • Onboarding experience (tour and in-product notifications).
  • Preferences reorganization.

While the Onboarding experience will be an ongoing effort, with content updates between versions of Firefox, the reorganization of preferences should be mostly completed (and it was a complex problem to solve). Unfortunately, one more consistency change landed right before merge day for preferences, though it finally brings some terminology consistency (website vs. site).

In the meantime, still a lot of visible changes in the UI are happening in Firefox, as part of the ongoing Photon project.

Activity Stream (the redesigned about:newtab) is currently scheduled to ship in Firefox 57, with some locales tested as an experiment in 56 (A/B study), while Firefox Screenshots is still scheduled to ship with Firefox 55 for all locales (staged rollout during August, with an increasing number of users receiving the feature over time).

What’s new or coming up in Test Pilot

Test Pilot launched 3 new experiments, two of them are localizable in Pontoon.

Notes adds a simple one-page notepad in Firefox sidebar, to store notes while browsing the web.

Firefox Send, on the other hand, is the first stand-alone web service distributed as part of Test Pilot: it’s a website where you can upload a file, encrypt it and obtain a link to share it. Once the file has been downloaded (or within 24 hours), it gets removed from the server. Basically, a one-time, secure file sharing website that will work with any browser, not just Firefox.

The third project, currently available only for English, is called Voice Fill, and lets you talk with search engines (Google, Yahoo, DuckDuckGo), using AI for speech recognition.

What’s new or coming up in mobile

  • Greek (el) and Lao (lo) are reaching release version of Firefox for Android with Firefox 55 release (today is merge day, next week is the launch). Congratulations! You can download Firefox for Android here.
  • Belarusian (be) and Zapoteco (zam) are now shipping in the Play Store! They’ll reach release with Firefox for Android 56. Congratulations! Try them out on Beta now.
  • More than 1,000,000 Focus for Android downloads! Impressive! If you still haven’t tried it out, come get it here!
  • We have 12 new locales ready to go for Focus iOS v3.4! Afrikaans (af), Danish (da), Greek (el), Spanish from Mexico (es-MX), Hindi (hi-IN), Malay (ms), Romanian (ro), Tamil (ta), Telugu (te), Tagalog (tl), Urdu (ur), and Uzbek (uz). Congratulations! You can try out Focus for iOS here.
  • We are now moving to a bi-weekly cadence for both Focus projects. Check out what that means for l10n by looking at the Focus iOS schedule here and the Focus Android schedule here. In fact, our releases will be schedule-driven rather than dictated by feature development progress. Both features and fixes will be allocated to the next available release upon completion. This will give us the ability to respond much more quickly to bug reports and user feedback.

What’s new or coming up in web projects

  • Mozilla.org continues its makeover to position for the new Firefox launch in autumn with new content and new templates. Since our last report, nine web pages appeared on your web project dashboard. We have a few more in the works, so stay tuned. Keep an eye on the replacement pages and the web dashboard for pending projects.
  • Germany and Taiwan are two of the focused markets for the Firefox campaigns. The communities have more content than others to localize. Additionally, they are adjusting their localization process in order to include multiple parties in this collaborative effort.
  • Snippets: The August campaign focuses on Test Pilot, in time for the rollout of new features.
  • Community Participation Guide is localized in 6 locales: de, es, fr, hi-IN, pt-BR and zh-TW. We are working on an amendment. These communities will need to review the update once the updates are localized.
  • The Firefox Privacy Notice has been revised continuously in the last few months. The document is localized in select locales. If your community has the bandwidth and/or expert knowledge in legal language, please review the document.

What’s new or coming up in Foundation projects

  • Changecopyright.org will get a content update over the summer! The website will get a clear timeline of events for the Copyright reform and it should be easier to take action. We are also investigating the addition of a call tool so that people can directly call their MEPs to be real copyfighters, so stay tuned!
  • The IoT survey has been localized in de, es, fr, it, pt-BR and we’re launching it very soon! We’re supporting a few more locales than with the previous survey, and are expecting to get even more people taking it! 💪
  • Fundraising update: we’re looking into supporting SEPA transfers! It’s a very long process due to bank regulations, and we can’t guarantee anything yet, but we’re trying hard to get it set up this year. This means your contributions will help us raise even more money to create an Open Internet movement, as wire transfers are the #1 request from European people to our donor support team. Oh, and we’re supporting a few more currencies, check it out!

What’s new or coming up in Pontoon

  • Terminology management (WIP) is coming quickly to Pontoon, starting with terminology suggestions in translate view. Check-out this mock-up!

Newly published localizer facing documentation

Kekoa – our tireless intern – is working on documenting how to use the translation interface in Pontoon.

Events

  • The Telugu L10n Meetup happened last weekend in Hyderabad; it was a joint event by Mozilla, Swecha, and Telugu Wikipedia. We can’t wait to hear how it went!

Want to showcase an event coming up that your community is participating in? Reach out to any l10n-driver and we’ll include that (see links to emails at the bottom of this report)

Localization impact in numbers

  • Snippets currently support 8 locales and have recently been tested in 4 RTL languages. Thanks to the communities who support this time-sensitive and high-priority project request on a monthly basis. We’d like to share some high-level Q2 snippet metrics with you:
    • Impressions: Localized snippets received approximately 1,471,993,100 impressions in Q2. 35% of world-wide snippet impressions were non-en.
    • Clicks: approximately 2,869,700 (44% of snippet clicks).
    • Average CTR (click-through rate): .21% (.03% higher than our en audience).
    • Average block rate: .21% (only .01% higher than the en block rate).
  • The fundraising campaign hasn’t even started ramping up, and we already have some positive numbers to share with you 🙂 Here’s how much the top fundraising locales have helped us raise for Mozilla since January, so you can expect these numbers to get much higher very soon!
    • de: $18,837
    • fr: $16,669
    • es: $5,454
    • it: $3,044
    • ru: $1,769
    • ar: $1,572
    • nl: $1,129

Friends of the Lion

Image by Elio Qoshi

  • Rodrigo single-handedly worked on Zapoteco (zam) on Firefox for Android – which is going to ship with Firefox 56. A warm thank you for all this effort!
  • Elio joined the Italian l10n team volunteering to localize Thimble, he’s been keeping up with this task until the present day with passion and perseverance. More recently he joined the Mozilla Italia L10n Guide project, part of the Open Leadership Training initiative, and contributed to the translation of the Internet Health website.
  • Congratulations to the Greek team for reaching the goal of zero missing strings in Firefox. It’s been a long and adventurous journey.
    • Special shout-out to Jim, whose massive contribution made it possible to catch up on most of the Mozilla projects (Firefox, etc.).
    • Another special shout-out to Mike who has joined the team recently and is making great suggestions! Welcome 🙂
  • Thanks to Georgianizator for doing a great job in localizing several projects in Georgian (ka).
  • Othman Wagiman leads the Malay community making significant progress in all products and projects, turning the project dashboard on Pontoon from gray to green. Impressive!!!
  • Ton of the Dutch team identified quite a few inconsistencies in the usage of the most common phrases in our recent web project copy. His feedback is greatly appreciated by all the people and teams responsible for putting the content on the web.

Know someone in your l10n community who’s been doing a great job and should appear here? Contact one of the l10n-drivers and we’ll make sure they get a shout-out (see list at the bottom)!

Useful Links

Questions? Want to get involved?

Did you enjoy reading this report? Let us know how we can improve by reaching out to any one of the l10n-drivers listed above.

Air MozillaThe Joy of Coding - Episode 108

The Joy of Coding - Episode 108 mconley livehacks on real Firefox bugs while thinking aloud.

SeaMonkeyContributed Win64 builds…

Hi All,

I’ve finally managed to generate Win64 contributed builds for both the installer and the zip file.

Since this is the very *first* Win64 contributed build, please run it with a new profile.

Disclaimer: While I have tried it myself, I’m not able to test it out thoroughly since there is other stuff I need to do (enabling official Linux64 builds, for one, and getting the updates properly done). Please do report here or in the newsgroups how it fares on your system.


Mozilla Add-ons BlogAugust’s Featured Extensions

Firefox Logo on blue background

Pick of the Month: Grammarly

by Grammarly
It’s like having an expert proofreader with you at all times. Grammarly offers contextual spell checks (it understands the distinction between “there” and “their,” unlike 98% of English-speaking humans) as well as grammar edits.

“As a student at a university, this is the perfect tool to correct all of my writing mistakes.”

Featured: FoxyTab

by erosman
Enjoy a suite of tab related actions, like copy all URLs, close multiple tabs, tab duplication, and more.

“This literally saved my day. Especially someone like me who works as an administrator and moderator on multiple forums, this is a great tool.”

Featured: Zoom Page WE

by DW-dev
Use full-page zoom, text-only zoom, fit-to-width feature, and other ways to focus in on Web pages.

“This is the best text zooming add-on you can find.”

Featured: Save All Images

by belav
Detects all images on any given page and presents a simple way to instantly download them.

“Very useful.”

Featured: YouTube Dark Mode

by HTCom
Turn YouTube completely dark to enhance your viewing experience.

“Simple, effective, no unintentional side effects.”

Featured: EPUBReader

by EPUBReader
Read ebook files right in your browser.

“Works flawlessly.”

Featured: Tab Auto Refresh

by Alex
Automatically refresh tabs based on custom time intervals.

“This is the first auto reload/refresh Firefox extension that I’ve found that can be customized for each tab that I have.”

Featured: TinEye Reverse Image Search

by TinEye
A new kind of reverse image searcher that uses image identification technology rather than keywords, metadata, or watermarks.

“If the image in question can’t be found, this finds it.”

Featured: Awesome Screenshot Plus

by Diigo Inc.
Take full page or partial screen grabs. Annotate with text and graphics. Store and share files. This is a full-service screenshot tool.

“It gives you a lot of options to edit, email, print. You will not be sorry!”

Nominate your favorite add-ons

Featured add-ons are selected by a community board made up of add-on developers, users, and fans. Board members change every six months. Here’s further information on AMO’s featured content policies.

If you’d like to nominate an add-on for featuring, please send it to amo-featured [at] mozilla [dot] org for the board’s consideration. We welcome you to submit your own add-on!

The post August’s Featured Extensions appeared first on Mozilla Add-ons Blog.

Air Mozilla: Intern Presentations: Round 2: Tuesday, August 1st

Intern Presentations: Round 2: Tuesday, August 1st Intern Presentations 11 presenters Time: 1:00PM - 3:45PM (PDT) - each presenter will start every 15 minutes 7 MTV, 2 TOR, 1 Paris, 1 London

Air Mozilla: Webdev Extravaganza: August 2017

Webdev Extravaganza: August 2017 Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on. This...

Mozilla Add-ons Blog: NoScript’s Migration to WebExtensions APIs

We asked Giorgio Maone, developer of the popular security extension NoScript, to share his experience about migrating to WebExtension APIs. Originally released in 2005, NoScript was developed to address security vulnerabilities in browsers by pre-emptively blocking scripts from untrusted websites. Over time it grew into a security suite including many additional and often unique countermeasures against various web-based threats, such as Cross-Site Scripting (XSS), Cross-Site Request Forgery (CSRF) and Clickjacking.

Why did you decide to transition NoScript to WebExtension APIs?

The so-called “legacy” add-on technology which NoScript has been built with is going to be banned very soon; therefore, like too often in real life, it’s either migrate or die. Many people rely on NoScript for being safer on the Web and in some cases for their physical security too, making this transition, although quite painful, an ethical obligation not to leave them in the cold.

For a long time, I strove to maintain as much backwards compatibility as possible, in order to offer some protection to those users stuck for various reasons with older, inherently less safe, versions of Firefox. For this reason, the legacy version of NoScript contains a lot of code for working around bugs that Firefox has since fixed: this cruft can safely go away during the migration. The plan is to have a lean and mean version of NoScript available as soon as Firefox 57 is released. Some of the APIs required for full parity with the legacy version won’t land until Firefox 57. Until then, I can selectively delegate and prioritize some features to WebExtension APIs that already work by packaging NoScript as an Embedded WebExtension, which is also the best way to migrate user preferences.

On the other hand, people who need NoScript most are those who use the anonymity- and security-focused Tor Browser, which is based on Firefox ESR (Gecko 54 until June 2018), where NoScript is not yet viable as a WebExtension. Therefore I’m forced to maintain two very different code bases for almost one year after the release of Firefox 57, in order to support the vast majority of my users.

Can you tell us about where you are with the migration?

NoScript already ships as a hybrid add-on, and I am in the process of moving all the code to WebExtensions APIs. Some features are even more performant on the new platform: it’s the case of the XSS filter, which takes advantage of the more asynchronous architecture of WebExtensions. The legacy XSS filter may stall the browser for a few seconds while checking very large (fortunately unusual) payloads; the WebExtensions-based version allows the browser to stay responsive no matter the load.

Have you had to use WebExtension APIs in creative ways?

NoScript 10 as a WebExtension is built mainly around the WebRequest API, which in its Firefox incarnation features some tweaks differentiating it from its Chromium counterpart. Last year I worked together with the WebExtensions team to develop this enhanced API: a very pleasant experience and a welcome chance for me to contribute code on Mozilla Central again, after quite a while. Dynamic permissions for embedded JavaScript are not natively supported by WebExtensions. Rather than requesting a new API, I am using Content Security Policies (CSP), a Web Application Security standard, to control scripting execution and other security properties of the webpage. Likewise, I’m leveraging other Web Platform (HTML5) features, which were not available yet in the early NoScript days, to rebuild functionality originally based on “legacy” XUL and XPCOM technology.
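
The CSP approach Giorgio describes might be sketched roughly like this (a hypothetical illustration, not NoScript’s actual code): a blocking `webRequest.onHeadersReceived` listener rewrites a response’s headers to carry a restrictive Content-Security-Policy before the page loads.

```javascript
// Sketch only: inject a script-blocking CSP into a response's header list.
// In a real WebExtension this pure function would be called from a blocking
// browser.webRequest.onHeadersReceived listener.
const NO_SCRIPT_CSP = "script-src 'none'";

function injectCSP(responseHeaders, policy = NO_SCRIPT_CSP) {
  // Drop any existing CSP header (case-insensitive), then append ours.
  const headers = responseHeaders.filter(
    h => h.name.toLowerCase() !== "content-security-policy"
  );
  headers.push({ name: "Content-Security-Policy", value: policy });
  return headers;
}

// Hooking it up (browser API call shown for context only):
// browser.webRequest.onHeadersReceived.addListener(
//   details => ({ responseHeaders: injectCSP(details.responseHeaders) }),
//   { urls: ["<all_urls>"], types: ["main_frame"] },
//   ["blocking", "responseHeaders"]
// );
```

A real implementation would merge with, rather than replace, any policy the site already serves; this sketch only shows the header-rewriting mechanism.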

What advice would you give to other legacy add-on developers?

Try to find workarounds for any missing pieces and be creative when using available APIs, not limited to just within the WebExtensions APIs boundaries but also exploring the whole Web Platform. If workarounds are impossible, ask the add-ons team for additions or enhancements.

Also, try to join and lobby the Browser Extensions Community Group hosted by the W3C. I feel that Mozilla has the most flexible and dynamically growing browser extensions platform, but it would be nice to make sure that the good ideas landing in Firefox are also available in Chrome and other browsers.

Thank you, Giorgio! Best of luck with the migration.

The post NoScript’s Migration to WebExtensions APIs appeared first on Mozilla Add-ons Blog.

Air Mozilla: Martes Mozilleros, 01 Aug 2017

Martes Mozilleros: Bi-weekly meeting to talk (in Spanish) about the state of Mozilla, the community and its projects.

The Mozilla Blog: New Test Pilot Experiments Available Today

It’s been a busy summer for Firefox! Last month, we delivered the first in a series of groundbreaking updates to the browser. This week, the Test Pilot team is continuing to evolve Firefox features with three new experiences that will make for a simpler, faster and safer experience.

Send

Sending files over the internet is something many of us do every day. Mozilla makes it easy to keep your files safe. With Send, your files self-destruct after download, so they can’t be accessed by anyone else. Your files are encrypted during transmission. Plus, Send encrypts files on the client side, so that not even Mozilla can read them.

Voice Fill

Mozilla is a champion of making the web open and accessible to everyone. With Voice Fill, we’re experimenting with support for Speech to Text (STT) functionality in Firefox, making it possible for users to input text in Firefox by voice. Your contributions to this experiment will help us optimize speech to text input so that we can expand support throughout Firefox.

Notes

Whether it’s a sticky note, an app or the back of an envelope, so many of us rely on jotting down quick notes to keep track of our busy lives. Notes is a simple, convenient place to take, store and retrieve notes – all within Firefox. We’re also working to build in support for Firefox Accounts, so you can sync your notes wherever you use Firefox.

The Test Pilot program is open to all Firefox users and helps us test and evaluate a variety of potential Firefox features. To activate Test Pilot and help us build the future of Firefox, visit testpilot.firefox.com.

If you’ve experimented with Test Pilot features before, you know that you might run into some bugs or lose some of the polish in Firefox. That’s why you can easily enable or disable features at any time.

We want your feedback! Try out these and other Test Pilot experiments and help us decide which new features to build into future versions of Firefox.

The post New Test Pilot Experiments Available Today appeared first on The Mozilla Blog.

The Mozilla Blog: Mozilla releases research results: Zero rating is not serving as an on-ramp to the internet

Can digital literacy and Equal Rating solutions help connect the unconnected?

Today, 4 billion people live without the internet. There’s a global debate about how to connect the unconnected, but it’s often dominated by assumptions and not a lot of data or talking to actual users on the ground.

To better inform this issue, Mozilla recently supported a series of focus groups to investigate how and why people use subsidized services in India, Myanmar, Peru, Kenya, Nigeria, Rwanda and South Africa. Today, we’re releasing the results of this research carried out by Research ICT Africa, LIRNEasia and IEP.

Credit: Peter Cihon

Why do we care?
Many companies and organizations are working to connect the unconnected. For us at Mozilla, it is our mission to ensure the internet is a global public resource that’s open and accessible to all.

We’ve focused our work in this space on a concept we call Equal Rating. Building on Mozilla’s strong commitment to net neutrality, Equal Rating models are free of discrimination, gatekeepers, and pay-to-play schemes. Equal Rating stands in contrast to zero rating business models, which reduce the cost to zero only for some sites and services. We’ve pursued this through policy engagement with governments, an innovation challenge to catalyze new thinking in providing affordable access, and this research.

What did we ask?

  • What barriers are keeping people offline?
  • Is zero rating serving as an on-ramp to the internet?
  • Why and how do people use subsidized services?
  • Do people move beyond subsidized services, or do they just stay in the subsidized walled garden?
  • How does use of subsidized services affect future internet usage?

What did we find?

Zero rating is not serving as an on-ramp to the internet
In all countries surveyed — excluding India where zero rating has been banned by the regulator — focus groups revealed that users are not coming online through zero rated services. While more research is needed, if zero rating is not actually serving as an on-ramp to bring people online, the benefits seem low, while the resulting risk of these offerings creating an anti-competitive environment is extremely high.

People use zero rating as one of many cost saving strategies
This research revealed that people who use zero rated services usually also have full access to the internet, and make use of zero rated and subsidized data services as one of many money-saving strategies, including:

  • Use of multiple SIM cards to take advantage of promotions, better reception quality, or better prices for a given service.
  • Use of public Wi-Fi. For example, many buses in Kenya now provide Wi-Fi access, and participants reported being willing to wait for a bus that was Wi-Fi-enabled.
  • Tethering to mobile hotspots. In South Africa and India, users not only share data but also promotions and subsidized offers from one phone to another.
  • Earned reward applications (where users download, use, or share a promoted application in return for mobile data/credit). The research indicates that most users tend to play the system to get the most credit possible and then abandon the earned reward application.
  • While users, especially in the African studies, report skepticism about whether zero rated promotions are truly free, partially subsidized bundles are popular. Notably, many of these offerings are Equal Rating compliant.

Some, particularly rural and low income users, are trapped in walled gardens
While zero rated services tend to be only part of internet usage for most users studied, some users are getting trapped in the walled gardens of these subsidized offerings.

  • In particular, low income respondents in Peru and Rwanda use zero rated content for much of their browsing activity, as do rural respondents in Myanmar.
  • Awareness matters: in Myanmar, respondents who know they are in a zero rated walled garden (e.g., due to lack of photos and video) are more likely to access the full internet beyond the walled garden.
  • But, when Facebook is subsidized without impacting user experience, users tend to concentrate their usage on that single site, demonstrating concerns around the anti-competitive effects of zero rating.

Digital illiteracy limits access for connected and unconnected alike
Infrastructure and affordability are commonly cited barriers to internet access around the world; yet, this research also points to a third important barrier: digital literacy.

  • Users and non-users alike do not understand all that the internet can offer.
  • Users generally restrict their internet use to a few large websites and services.
  • A lack of understanding about the internet and internet-connected devices exacerbates misconceptions and spreads fear around privacy, security, and health, which in turn undermines use of the internet. One Kenyan respondent said of non-users: “there are some assumptions that they can get diseases transmitted to them like skin cancer through the use of the internet.”

Many companies and NGOs are already doing great work to advance digital literacy, but we need to scale up these efforts.

Competition, literacy, language, and gender are also barriers to internet access

This research highlighted a series of consistent and persistent barriers to access.

  • While 95% of the world has access to an internet signal, far too often users have access to only one low-quality provider, usually the most expensive option in their country.
  • Without basic literacy, some respondents cannot access the internet. As one respondent in rural South Africa said, “if you cannot read or write you cannot use internet, many people in this community are not educated and I believe most of them want to be able to use internet because it makes life easier.”
  • Others in Myanmar, Peru, and Rwanda cite the lack of local language content and tools as keeping them from coming online.
  • Evidence of a gendered digital divide is seen throughout all of the countries studied, with some women afraid of “breaking the machine” while others say social stigma, domestic abuse, negative impressions, and housework obligations limit their use of the internet.

These are just some of the highlights and interesting findings. We have results from nearly 80 focus groups in these seven countries. For more detailed information, the country summaries and full reports are available here.

Next steps to bring the next 4 billion online

Mozilla supported this research to help better inform what we believe is a global imperative to bring the world’s 4 billion unconnected people online to access the full and open internet.

Based on these findings, we believe the internet needs:

  • The development of more Equal Rating compliant models, many of which seemed to be quite popular with research respondents and provide access to the full diversity of the open internet, not just some parts of it.
  • Further investment in digital literacy training, especially in schools, on devices, and in retail outlets. For more information about Mozilla’s digital literacy efforts, see our recent Digital Skills Observatory study.
  • Work on all barriers to access to address infrastructure investment especially in rural areas, affordability, local content and local language tools, and gender equality.

Bringing the full internet to all people is one of the great challenges of our time. While we know there is more research needed, this research better informs the global debate on how to connect the unconnected, and makes clear the challenges ahead. We are committed to tackling these challenges but we know it will take all of us — tech companies, telecom companies, governments, civil society groups and philanthropists — working together to get everyone online.


We’d like to thank the researchers at Research ICT Africa, LIRNEasia, and IEP, as well as Jochai Ben-Avie (who manages all our Equal Rating work) and Peter Cihon (our awesome summer intern) who helped analyze this research.

The post Mozilla releases research results: Zero rating is not serving as an on-ramp to the internet appeared first on The Mozilla Blog.

hacks.mozilla.org: Tour the latest features of the CSS Grid Inspector, July 2017

We began work on a developer tool to help with understanding and using CSS Grid over a year ago. In March, we shipped the first version of a Grid Inspector in the Firefox DevTools along with CSS Grid. Now significant new features are landing in Firefox Nightly. Here’s a tour of what’s arrived in July 2017.

Download Firefox Nightly (if you don’t have it already) to get access to the latest and greatest, and to keep up with the continuing improvements.

SeaMonkey: 2.48 is out!

Dear all,

While not as long in the tooth or as hard a release as 2.46, 2.48 still took long enough, and finally I can say that it is out!

Please try it out and see.

Do note that updates are still not working so if you need to update, please install the new one manually.

I know… I know.  I need to get the updates done yesterday….  but hey.. we are getting closer to being up to date with the trees (yes, irrelevant to the updates issue…  just trying to redirect your attention elsewhere.. ;P )

Next up, 2.49 beta …

:ewong

 

The Mozilla Blog: How Could You Use a Speech Interface?

Last month in San Francisco, my colleagues at Mozilla took to the streets to collect samples of spoken English from passers-by. It was the kickoff of our Common Voice Project, an effort to build an open database of audio files that developers can use to train new speech-to-text (STT) applications.

What’s the big deal about speech recognition?

Speech is fast becoming a preferred way to interact with personal electronics like phones, computers, tablets and televisions. Anyone who’s ever had to type in a movie title using their TV’s remote control can attest to the convenience of a speech interface. According to one study, it’s three times faster to talk to your phone or computer than to type a search query into a screen interface.

Plus, the number of speech-enabled devices is increasing daily, as Google Home, Amazon Echo and Apple HomePod gain traction in the market. Speech is also finding its way into multi-modal interfaces, in-car assistants, smart watches, lightbulbs, bicycles and thermostats. So speech interfaces are handy — and fast becoming ubiquitous.

The good news is that a lot of technical advancements have happened in recent years, so it’s simpler than ever to create production-quality STT and text-to-speech (TTS) engines. Powerful tools like artificial intelligence and machine learning, combined with today’s more advanced speech algorithms, have changed our traditional approach to development. Programmers no longer need to build phoneme dictionaries or hand-design processing pipelines or custom components. Instead, speech engines can use deep learning techniques to handle varied speech patterns, accents and background noise – and deliver better-than-ever accuracy.

The Innovation Penalty

There are barriers to open innovation, however. Today’s speech recognition technologies are largely tied up in a few companies that have invested heavily in them. Developers who want to implement STT on the web are working against a fractured set of APIs and support. Google Chrome supports an STT API that is different from the one Apple supports in Safari, which is different from Microsoft’s.

So if you want to create a speech interface for a web application that works across all browsers, you would need to write code that would work with each of the various browser APIs. Writing and then rewriting code to work with every browser isn’t feasible for many projects, especially if the code base is large or complex.
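
A sketch of the kind of feature-detection shim this fragmentation forces on developers (the helper name is hypothetical; the constructor names are the standard and WebKit-prefixed variants browsers have shipped):

```javascript
// Pick whichever speech-recognition constructor the browser exposes.
function getSpeechRecognition(globals) {
  return (
    globals.SpeechRecognition ||        // standard name
    globals.webkitSpeechRecognition ||  // WebKit/Blink prefixed name
    null                                // unsupported: fall back elsewhere
  );
}

// In a page you would call it with the real global object:
// const Recognition = getSpeechRecognition(window);
// if (Recognition) new Recognition().start();
```

And this only covers the browser-side APIs; the non-browser services mentioned below each have their own, entirely different, interfaces.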

There is a second option: You can purchase access to a non-browser-based API from Google, IBM or Nuance. Fees for this run roughly one cent per invocation. If you go this route, then you get one stable API to write to. But at one cent per utterance, those fees can add up quickly, especially if your app is wildly popular and millions of people want to use it. This option has a success penalty built into it, so it’s not a solid foundation for any business that wants to grow and scale.

Opening Up Speech on the Web

We think now is a good time to try to open up the still-young field of speech technology, so more people can get involved, innovate, and compete with the larger players. To help with that, the Machine Learning team in Mozilla Research is working on an open source STT engine. That engine will give Mozilla the ability to support STT in our Firefox browser, and we plan to make it freely available to the speech developer community, with no access or usage fees.

Secondly, we want to rally other browser companies to support the Web Speech API, a W3C community group specification that can allow developers to write speech-driven interfaces that utilize any STT service they choose, rather than having to select a proprietary or commercial service. That could open up a competitive market for smart home hubs: devices like the Amazon Echo that could be configured to communicate with one another, and other systems, for truly integrated speech-responsive home environments.

Where Could Speech Take Us?

Voice-activated computing could do a lot of good. Home hubs could be used to provide safety and health monitoring for ill or elderly folks who want to stay in their homes. Adding Siri-like functionality to cars could make our roads safer, giving drivers hands-free access to a wide variety of services, like direction requests and chat, so eyes stay on the road ahead. Speech interfaces for the web could enhance browsing experiences for people with visual and physical limitations, giving them the option to talk to applications instead of having to type, read or move a mouse.

It’s fun to think about where this work might lead. For instance, how might we use silent speech interfaces to keep conversations private? If your phone could read your lips, you could share personal information without the person sitting next to you at a café or on the bus overhearing. Now that’s a perk for speakers and listeners alike.

Speech recognition using lip-reading

Want to participate? We’re looking for more folks to participate in both open source projects: STT engine development and the Common Voice application repository.

If programming is not your bag, you can always donate a few sentences to the Common Voice Project. You might read: “It made his heart rise into his throat” or “I have the diet of a kid who won $20.” Either way, it’s quick and fun. And it helps us offer developers an open source option that’s robust and affordable.

The post How Could You Use a Speech Interface? appeared first on The Mozilla Blog.

Air Mozilla: Reps Weekly Meeting Jul. 27, 2017

Reps Weekly Meeting Jul. 27, 2017 This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Air Mozilla: Egencia Training: Canada site - Pacific Time

Egencia Training: Canada site- Pacific Time Training and demo of Egencia Canada site (For all residents in Canada)

Air Mozilla: Egencia Training: UK site

Egencia Training: UK site Training and demo of Egencia UK site (For residents in the UK)

Air Mozilla: Egencia Training: Singapore site

Egencia Training: Singapore site Training and demo of Egencia Singapore site (For residents in Taipei and APAC, excludes Australia and New Zealand) Here is the training video for Egencia...

Air Mozilla: Egencia Training: New Zealand site

Egencia Training: New Zealand site Training and demo of Egencia New Zealand site (For residents in New Zealand and Australia)

Air Mozilla: Egencia Training: France site

Egencia Training: France site Training and demo of Egencia France site. (For all residents in France)

Air Mozilla: Egencia Training: Germany site

Egencia Training: Germany site Training and demo of Egencia Germany site. (For residents in Germany and EMEA)

Firefox UX: Toward Making User Research More Open

Firefox Test Pilot is an approach to experimenting with new browser features in the open. What this means is that we plan, design, build, and evaluate experimental features with as much transparency as possible, in line with Mozilla’s mission and values. In this second year of Test Pilot, we’re making a concerted effort to make our work even more public by sharing all of the user research we conduct on our experiments.

What we already do in the open

From the beginning, Test Pilot has strived to work in the open. For example, the majority of our meetings are open to the public. We publish our documentation in places like the Mozilla Wiki and Github. The latter has also been a place where people trying our experiments can suggest and vote on enhancements.

The Test Pilot team prioritizes enhancements to our Containers experiment based on up-votes on Github.

We also receive feedback on experiments via Twitter and the Discourse forum.

People using our Tab Center experiment reached out to us on Twitter when they heard that the experiment would no longer be supported starting with Firefox 56.

Whenever an experiment graduates, we publish — for anyone to see — what we learned from that experiment, including the metrics that informed those learnings.

Detail from our graduation report on the No More 404s experiment

Transparency beyond code

Even with these efforts, we realize that, historically, transparency in open source development has largely focused on code and the conversations that happen around that code. What that focus has neglected are other important parts of our process for creating the products we ship, including user research.

Test Pilot user research has not been shared widely in the past because of the extensive personally identifiable information (PII) contained in our qualitative research. In most cases, we have not removed PII from our research findings because, for the Test Pilot team, those details are often critical for helping us build greater empathy for people using our experiments and for internalizing insights from the research we conduct. For example, a grimace and silence captured on video from a research participant trying to use one of our experiments is generally much more affecting and memorable than a written, anonymized quote from a research participant about how an experiment was confusing.

The benefits of making user research more open

We will continue to leverage user research containing PII internally when the Test Pilot team can learn from it. However, we realize that there are benefits that we can reap by making our user research more shareable beyond our team. With a new commitment to sharing our user research in the form of blog posts here on Medium, links to our research reports, and embedding insights from user research in public conversations happening in other channels like Github, we aim to:

  • Possibly expand who can contribute to our work to include people who will share ideas and feedback on our research approaches
  • Increase accountability around how we leverage user research findings to inform the design and development of our experiments

Share ideas about open research

We welcome additional ideas about how we can make our user research more open while maintaining its rigor and integrity. If there are open research initiatives that you think we can learn from, let us know. We look forward to hearing your thoughts on the research we share moving forward.

Originally published at medium.com on July 26, 2017.



hacks.mozilla.org: Inspect, Modify, and Debug React and Redux in Firefox with Add-ons

React, along with Redux, is one of the fastest and most flexible UI frameworks on the web. It’s easy to write, easy to use and is great for teams. In fact, the Mozilla community uses React to build a lot of the Firefox DevTools UI and, famously, the Facebook UI is built with React. Despite its popularity, it’s still not easy to debug React in the browser. React Developer Tools by Facebook and Redux DevTools by Zalmoxisus however, let you inspect, modify, and debug your code right in the browser. And now they’re available for Firefox. These add-ons, and others like the Vue add-on, will make debugging popular JavaScript frameworks easier. When combined with Mozilla’s Debugger.html tool, all these stand-alone tools will turn your browser into a full-featured debugger.

React

Get the latest version of the React DevTool add-on here. Once it’s installed, you’ll be able to examine React code on any site that uses it. When you visit a React-powered site, the add-on icon will appear to the right of the Firefox address bar:

Open your DevTools by hitting command-option-i (control-shift-i for Windows), clicking the button in the toolbar, or right-clicking on the page and selecting “inspect.” You’ll see the React panel alongside the basic DevTools panels. The main panel will now show you the React tree view:

The React tool works pretty much like every other DevTool. Use the arrow keys or hjkl keys to navigate the code, right-click components to examine them in the elements pane, show source, and so on. Expand or collapse items by clicking the arrows.

The side pane is a great place to store variables and see updates to the code.

There’s also an awesome search bar.

Inspect a React element on a page using the regular inspector, then switch to the React tab. The element will automatically be selected in the React tree.

You can also right-click an element and choose “Find the DOM node” to, well, find the DOM node of any element.

Redux

React and Redux go together like avocado and toast. Redux creates a predictable state container for your React library that lets it run reliably on virtually any system. It also lets you “time travel” to previous versions of your states. The Redux devtool for Firefox lets you inspect Redux actions and payloads, cancel actions, log action reducer errors, and automatically re-evaluate staged actions when you change reducer code.
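
The “predictable state container” and “time travel” ideas can be sketched in a few lines (a toy illustration of the concepts, not the Redux library itself):

```javascript
// Toy sketch of a Redux-style store: a pure reducer, a log of every state,
// and "time travel" by rewinding that log.
function createTimeTravelStore(reducer, initialState) {
  let history = [initialState]; // every state the store has ever been in
  return {
    getState: () => history[history.length - 1],
    dispatch(action) {
      // The reducer is pure, so replaying the same actions is deterministic.
      history.push(reducer(history[history.length - 1], action));
    },
    // Rewind to the state that existed after the nth action.
    timeTravel(n) {
      history = history.slice(0, n + 1);
    },
  };
}

// A reducer: (state, action) -> new state, with no side effects.
const counter = (state, action) =>
  action.type === "INCREMENT" ? state + 1 : state;

const store = createTimeTravelStore(counter, 0);
store.dispatch({ type: "INCREMENT" });
store.dispatch({ type: "INCREMENT" });
// store.getState() is now 2; store.timeTravel(1) rewinds it to 1
```

The real Redux DevTools extension works against an actual Redux store, but this is essentially what its action log and time-travel slider are manipulating.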

The Redux devtool has extensive docs on its GitHub repository, including arguments, methods, and even a tutorial on how to create a Redux store for debugging. Check them out.

With Firefox Add-ons, you can have a complete React/Redux debugging toolset right in your browser.
Download Firefox Developer Edition and then check out all the add-ons available at addons.mozilla.org.

Air Mozilla: The Joy of Coding - Episode 107

The Joy of Coding - Episode 107 mconley livehacks on real Firefox bugs while thinking aloud.

Air Mozilla: NSF WINS Bay Area Meet-Up

NSF WINS Bay Area Meet-Up Mozilla and the National Science Foundation are partnering to give away $2M in prizes for wireless solutions that help connect the unconnected, keep communities connected...

Air Mozilla: Egencia Training: Canada site - Eastern Time

Egencia Training: Canada site- Eastern Time Training and demo of Egencia Canada site [EST time] with Katt Taylor, Mozilla's Travel Coordinator with Workplace Resources.

hacks.mozilla.org: The MDN Redesign “Behind the Scenes”

Kuma, the code that produces the MDN site, is a weird mix of the old and the new. MDN turned ten in 2015 and there’s still code and content around from those very first days. When I sat down to start coding the current redesign, the first thing I did was remove the last few traces of the last redesign. In contrast, we have a cutting-edge audience: 92% of our users have a browser with CSS grid support! We enabled HTTP/2, and 98% of our users have seen benefits from that.

One of the ways we deal with old code in kuma is with the campsite rule: Always leave your campsite better than you found it. A redesign touches a lot of files, and this was a great opportunity to clean up and refactor — at least until the deadline started getting close.

A redesign is also a great time to change stuff you’re afraid of breaking. People are more understanding of you working the bugs out of something new than breaking something that’s worked for years. I removed 640 lines of stale code during the redesign. (And if I broke a six-year-old XPCOM tutorial you use daily by removing the custom list-style-type, please file a bug!)

One website with two looks

Rather than working on the CSS for the redesign in a separate “redesign” folder, we duplicated all the files and added “-old” to the file name of the old files, which means that all of our redesign work is happening in the “official” files. This preserves the git history and means we don’t have to move anything around after launch. Once we’re happy with the code, we can delete the “-old” files with confidence.

To serve the new styles to our beta testers and the “-old” ones to everyone else, we use Django Waffle. Waffle can also be used to serve different content but because there’s a strong separation of presentation and content on MDN, we’ve made very few changes to the HTML.

Bugs our beta testers found

MDN is huge, and we can’t test every page in every locale. We’re really lucky to have active beta testers. :) Some of the biggest things they turned up for us were:

Highlighting

Problems with Zilla Highlight in Vietnamese and when there are inline code examples.

We started out by replicating Mozilla’s brand “highlight” effect by using the Zilla Slab Highlight font, but we abandoned that pretty quickly when problems turned up in many of our locales and when used in combination with inline code.

The current approach puts a full-width black background on h3 and h4 headings by default, and then some JavaScript runs to add a <span> tag so that the black background hugs the heading text. Progressive enhancement at work.

We went back and forth about this for a while, wondering if the JavaScript and extra <span> was worth the effort. But we stuck with it because it makes the h3 and h4 headings much easier to spot when scanning the page.
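That step is small enough to sketch. The following is not MDN's actual script, just a minimal illustration of the approach described above (the `wrapHeadingText` helper name is ours): without JavaScript the full-width CSS background still applies, and the script only moves each heading's children into a `<span>` so the background can hug the text.

```javascript
// Illustrative sketch (not MDN's actual code): wrap a heading's
// contents in a <span> so the highlight background hugs the text.
function wrapHeadingText(heading) {
  var span = heading.ownerDocument.createElement('span');
  // Move every child node (text, links, inline code) into the span,
  // then put the span back into the now-empty heading.
  while (heading.firstChild) {
    span.appendChild(heading.firstChild);
  }
  heading.appendChild(span);
  return span;
}

// Progressive enhancement: only runs in a browser. Without JavaScript,
// the CSS default (a full-width background) still applies.
if (typeof document !== 'undefined') {
  document.querySelectorAll('h3, h4').forEach(wrapHeadingText);
}
```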

What’s Taiwanese for Slab Serif?

Showing the difference between Zilla's thick strokes and the thin strokes of Taiwanese.

Previously we used Open Sans as our heading text. With the redesign, we switched to Zilla Slab. Open Sans is a relatively thin typeface of average height, and it didn’t look out of place next to the system fallbacks for Chinese or Japanese characters.

Zilla is big and thick, and we started getting feedback about the contrast with these system fallbacks. Additionally, the character set for Zilla covers only European Latin at the moment, so Vietnamese, which uses the Latin alphabet plus a couple of Latin characters not used in Europe, displayed font fallbacks in the middle of words.

To address both these problems, we implemented a system that allowed us to override the site fonts on a per-locale basis. (Comment if you’d like a separate blog post about this).
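A sketch of the idea (the locale list and font stacks here are illustrative assumptions, not MDN's actual configuration): keep a per-locale override table and fall back to the site-wide heading font.

```javascript
// Hypothetical per-locale heading-font override table; the real MDN
// implementation and its locale list live in Kuma, not here.
var HEADING_FONT_OVERRIDES = {
  // Zilla Slab's character set is European Latin only, so Vietnamese
  // would otherwise show fallback glyphs in the middle of words.
  'vi': '"Open Sans", sans-serif',
  // Avoid the heavy contrast between Zilla and CJK system fallbacks.
  'ja': 'sans-serif',
  'zh-TW': 'sans-serif'
};

function headingFontFor(locale) {
  return HEADING_FONT_OVERRIDES[locale] || '"Zilla Slab", serif';
}
```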

Contrast

We received many complaints about the old design’s low-contrast display. We went a bit too far the other way with this design and received complaints about the high contrast. We’ve toned it down to the ever-popular #333 now.

What’s next

We’re moving on from this to make specific improvements to the article pages:

  • Putting code samples high on the page; our hard-working writers and volunteers are doing this, one page at a time.
  • A better approach to readable line-lengths and wide code examples, inspired by the Hacks Blog theme.
  • Compatibility tables that display desktop and mobile data side by side.
  • Code samples you can experiment with in the page.

See this early by signing up to be a beta tester.

Enjoyed beta testing MDN? You can also beta-test Firefox by downloading Nightly.

Who is “we”?

The MDN dev team is:

  • Stephanie Hobson, me, CSS-Pre-Pre-Processor
  • Schalk Neethling, who reviewed each of my 30+ pull requests in ALL THE BROWSERS, sometimes multiple times
  • John Whitlock, who did the awesome locale fallbacks
  • Ryan Johnson, who always asks “Why not?” when John and I say things can’t be done.

We blog sporadically on the Mozilla Marketing Engineering & Operations blog.

You should also read this blog post by our Product Manager, Kadir Topal, about The Future of MDN.

Mozilla L10NMaking a change with Pootle

tl;dr: As of 1 September 2017, Mozilla’s Pootle instance (mozilla.locamotion.org) will be turned off. Between now and then, l10n-drivers will be assisting l10n communities using Pootle in moving all projects over to Pontoon. Pootle’s positive impact in Mozilla’s continued l10n evolution is undeniable and we thank them for all of their contributions throughout the years.

Mozilla’s localization story has evolved over time. While our mission to improve linguistic accessibility on the Web and in the browser space hasn’t changed, the process and tools that help us to accomplish this have changed over the years. Some of us can remember when a Mozilla localizer needed to be skilled in version control systems, Unix commands, text editors, and Bugzilla in order to make an impactful contribution to l10n. Over time (and in many ways thanks to Pootle), it became clear that the technical barrier to entry was actually preventing us from achieving our mission. Beginning with Pootle (Verbatim) and Narro, we set out to lower that barrier through web-based, open source translation management systems. These removed many of the technical requirements on localizers, which in turn led to us being able to ship Firefox in languages that other browsers either couldn’t or simply wouldn’t ship, making Firefox the most localized browser on the market! Thanks to Pootle, we’ve learned that optimizing l10n impact through these tools is critical to our ability to change and adapt to the new, faster development processes taking the Internet and software industries by storm. We created Pontoon to take things further and focus on in-context localization. The demand for that tool became so great that we ended up adding more and more projects to it. Today I’m announcing the next step in our evolution: as of 1 September 2017, all Mozilla l10n communities using Pootle will be migrated to Pontoon and the Mozilla Pootle instance (mozilla.locamotion.org) will be turned off.

Why?

Over the years, we’ve developed a fond relationship with Translate House (the organization behind Pootle), as have many members of the Mozilla l10n community. Nearly five years ago, we entered into a contract agreement with the Translate House team to keep a Mozilla instance of Pootle running, to develop custom features for that instance, and to mentor l10n communities. As l10n has shifted through the Mozilla organization year after year, the l10n team recently found themselves as part of another internal reorganization, right at the moment in which contract renewal was up for discussion. With that reorganization, came new priorities for l10n and a change in budget for the coming year. In the face of those changes, we were unable to renew our contract with Translate House.

What now?

Before 1 September, the l10n-drivers will be proactively contacting l10n communities using Pootle in order to perform project migrations into Pontoon. Moving project-to-project, we’ll start with the locales that we’re currently shipping for a project, then move to those which are in pre-release, and finally those that have seen activity in the last three months. In the process, we’ll look out for any technical unknown unknowns that Pontoon engineers can address to make the transition a positive and seamless one.

There are a few things you can do to make the transition run smoothly:

  1. Log into Pontoon with your Firefox Account. If you don’t already have a Firefox account, please create one.
  2. Process all pending suggestions in your Pootle projects (i.e., bring your community’s suggestion queue down to 0).
  3. Flag issues with Pontoon to the l10n-drivers so that we can triage them and address them in a timely manner. To do this, please file a bug here, or reach out to the l10n-drivers if you’re not yet comfortable with Bugzilla.

We understand that this is a major change to those contributing to Mozilla through Pootle right now. We know that changing tools will make you less productive for a while. We’ll be holding a public community video call to address concerns, frustrations, and questions face-to-face on Thursday, 27 July at 19:00 UTC. You’re all invited to attend. If you can’t attend due to time zones, we’ll record it and publish it on air.mozilla.org. You can submit questions for the call beforehand on this etherpad doc and we’ll talk about them on the call. We’ve also created this FAQ to help answer any outstanding questions. We’ll be adding the questions and answers from the call to this document as well.

Finally, I would like to personally extend my thanks to Translate House. Their impact on open source localization is unmatched and I’ve truly enjoyed the relationships we’ve built with that team. We wish them all the best in their future direction and hope to have opportunities to collaborate and stand together in support of open localization in the future.

hacks.mozilla.orgOptimizing Performance of A-Frame Scenes for Mobile Devices

A-Frame makes building 3D and VR web applications easy, so developers of all skill levels can create rich and interactive virtual worlds – and help make the web the best and largest deployment surface for VR content. For an Oregon State University capstone project focused on WebVR, our team investigated performance and optimizations for A-Frame on Android smartphones. We developed a means of benchmarking the level of 3D complexity a mobile phone is capable of, and determining which performance metrics are required for such a benchmark.

Team OVRAR!

From the left, Team OVRAR (Optimizing Virtual Reality and Augmented Reality):

Branden Berlin: JavaScript Compatibility and Model Lighting
Charles Siebert: Team Captain, Project Designer, and Modeling
Yipeng (Roger) Song: Animations and Texturing

Results and Recommendations

Texture size: The framework resizes textures to the nearest power of two, which heavily increases the loading and rendering workload in the scenes. We found that high-resolution textures that didn’t match the criteria reached sizes of 8192×8192, with one texture taking up to 20 MB! Using texture dimensions that are a power of two helps ensure optimal memory use. Check the Web Console for warnings when textures are resized.
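The power-of-two rule is easy to check up front. A small utility of our own (not part of A-Frame; it assumes the framework rounds dimensions up):

```javascript
// Check whether a texture dimension is already a power of two.
function isPowerOfTwo(n) {
  return n > 0 && (n & (n - 1)) === 0;
}

// The size a non-power-of-two dimension would be resized toward,
// assuming a round-up rule; e.g. a 5000px-wide texture becomes 8192px.
function nextPowerOfTwo(n) {
  var p = 1;
  while (p < n) p *= 2;
  return p;
}
```

Running this over your asset list before publishing avoids paying the resize cost at load time.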

Asset Limit: We found that having more than 70 MB of assets loaded for one web page was unrealistic in a phone environment. It caused significant delays in loading the scene fully, and in some cases crashed the browser on our phones. Use the Allocations recorder in the Performance Tool in Firefox to check your scene’s memory usage, and the A-Frame Inspector to tune aspects of rendering for individual objects.

Tree map

Resolution cost: Higher-resolution trees caused delays in loading the models and significant slowdowns in rendering the scenes. Our high-resolution tree features 37,000 vertices, which increases the graphics rendering workload, including lighting from multiple light sources. This heavily limited the number of models we could load into our scene. We also found an upper limit for our devices while handling these trees: when our room reached about 1,000,000 vertices, our phone browsers would crash after spending a few minutes attempting to load and render. You can add the “stats” property to your <a-scene> tag to see the number of vertices in the scene.

Object count: Load times increased linearly with the number of models to be drawn to the scene. Even if each object takes only, say, three milliseconds to load, the total adds up quickly across a large scene. Further inspection of the memory snapshot shows that object models are read in and stored in object arrays for quicker access and rendering. The cost of larger object models also grows linearly with the number of vertices and faces used to create the model, and their resulting normal vectors. Check the A-Frame stats monitor to keep an eye on your object count.

Measurement overhead: During the testing, we used WebIDE to monitor on-device performance. We found that the overhead of USB debugging on our Android devices caused performance to drop by nearly half. Our testing showed that CPU performance was not the leading bottleneck in rendering the scenes. CPU usage hovered at 10-25% during heavy performance drops. This shows that the rendering is mostly done on the GPU, which follows how OpenGL ES 2.0 operates in this framework.

Testing Approach

Our approach was to:

  • render multiple scenes while measuring specific metrics
  • determine the best practices for those metrics on mobile
  • report any relevant bugs that appear.

The purpose of creating a benchmark application for a mobile device is to give a baseline for what is possible to develop, so developers can use this information to plan their own projects.

We tested on the LG Nexus 5X and used the WebIDE feature in Firefox Nightly to pull performance statistics from the phone while it was rendering our scenes, tracking frames per second (FPS) and memory usage. Additionally, we tracked processor usage on the device through Android’s native developer settings.

To begin, we broke down the fundamental parts of what goes into rendering computer graphics, and created separate scenes to test each of these parts on the device. We tested object modeling, texturing, animation, and lighting, and then created standards of performance that the phone needed to meet for each. We aimed to first find a baseline performance of 30 FPS for each part and then find the upper bound – the point at which the feature breaks or causes visual drops in performance. We separated these features by creating a VR environment with four “rooms” that tested each in A-Frame.

Room 1: Loading object models using obj-loader

Room 1 screenshot

In the first room, we implemented a high-resolution tree, loading a large number of low vertex-count objects and comparing that to a small number of high vertex-count objects. Having a comparable number of vertices rendered in either scene helped us determine the performance impact of loading multiple objects at once.

Room 2: Animations and textures

In this room, we implemented textures and animations to determine the impact on initial load times and the cost of computing animations. We used A-Frame’s built-in functions to attach assets to objects to texture them, and we used A-Frame’s animation methods to animate the objects in this room. This allowed us to easily test this scenario of animating the textured objects and measure the differences between the two iterations. In the first iteration, we implemented low-resolution textures on objects to compare them with high-resolution textures in the second iteration. These resolution sizes varied from 256×256 to 8192×8192. We also wanted to compare the performance between the two rooms, and see if texturing the objects would cause any unforeseen issues with animations other than the initial load time when downloading the assets from the web page.

Room 3: User interaction and lighting

This room’s initial concept focused on the basis of gaming: user interaction. We utilized JavaScript within A-Frame to allow the user to interact with objects scattered about a field. Due to the limited mobility of mobile-VR interaction, we kept it to visual interaction. Once the user looked at an object, it would either shrink or enlarge. We wanted to see if any geometric change due to interaction would impact hardware demand. We manipulated the growth size of object interactions and found a few unreliable stutters. Generally, though, the hardware performance was stable.

For the second iteration, we ramped up the effects of user interactions. We saw that nothing changed when it came to physical effects on objects in the world, so we decided to include something that is more taxing on the hardware: lighting.

As the user interacted with an object, the object would then turn into a light source, producing an ambient light at maximum intensity. We scattered these objects around the room and had the user turn them on, one by one. We started with 10 ‘suns’ and noticed an initial lag when loading the room, as well as a 2-3 second FPS drop to 13, when turning on the first sphere. After that, the rest of the spheres turned on smoothly. We noticed a steady and consistent drop of about 10 FPS for every 10 max-intensity light sources. However, as the intensity was decreased, more and more lighting sources were allowed before a noticeable change in performance occurred.

Room 3 screenshots

Room 4: All previous features implemented together.

Developers are unlikely to use just one of these specific features when creating their applications. We created this room to determine if the performance would drop at an exponential rate if all features were added together, as this would be a realistic scenario.

Further Information

You can find all the source code and documentation for our OVRAR project on GitHub.

If you have any questions, ask in the comments below. Thanks!

QMOFirefox Developer Edition 55 Beta 11 Testday Results

Hello!

As you may already know, last Friday – July 21st – we held a new Testday event, for Firefox Developer Edition 55 Beta 11.

Thank you all for helping us make Mozilla a better place – Ilse Macias, Athira Appu, Iryna Thompson.

From India team:  Fahima Zulfath A, Nagarajan .R, AbiramiSD, Baranitharaan, Bharathvaj, Surentharan.R.A, R.Krithika Sowbarnika, M.ponmurugesh.

From Bangladesh team: Maruf Rahman, Sajib Hawee, Towkir Ahmed, Iftekher Alam, Tanvir Rahman, Md. Raihan Ali, Sazzad Ehan, Tanvir Mazharul, Md Maruf Hasan Hridoy, Saheda Reza Antora, Anika Alam Raha, Taseenul Hoque Bappi.

Results:

– several test cases executed for the Screenshots, Simplify Page and Shutdown Video Decoder features;

– 7 new logged bugs: 1383397, 1383403, 1383410, 1383102, 1383021, #3196, #3177

– 3 bugs verified: 1061823, 1357915, 1381692

Thanks for another successful testday! 🙂

We hope to see you all in our next events, all the details will be posted on QMO!

Air MozillaWebdev Beer and Tell: July 2017

Webdev Beer and Tell: July 2017 Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on in...

Air MozillaWorking Across Personality Types: The Introvert-Extrovert Survival Guide, with Jennifer Selby-Long

Working Across Personality Types: The Introvert-Extrovert Survival Guide, with Jennifer Selby-Long On July 20, Jennifer Selby Long, an expert in the ethical use of the Myers-Briggs Type Indicator® (MBTI®), will lead us in an interactive session...

Air MozillaReps Weekly Meeting Jul. 20, 2017

Reps Weekly Meeting Jul. 20, 2017 This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

hacks.mozilla.orgThe Next Generation of Web Gaming

Over the last few years, Mozilla has worked closely with other browsers and the industry to advance the state of games on the Web. Together, we have enabled developers to deploy native code on the web, first via asm.js, and then with its successor WebAssembly. Now available in Firefox and Chrome, and also soon in Edge and WebKit, WebAssembly enables near-native performance of code in the browser, which is great for game development, and has also shown benefits for WebVR applications. WebAssembly code is also able to deliver more predictable performance than JavaScript, because it avoids the JavaScript engine’s re-optimization and garbage-collection pauses. Its wide support across all major browser engines opens up paths to near-native speed, making it possible to build high-performing plugin-free games on the web.

“In 2017 Kongregate saw a shift away from Flash with nearly 60% of new titles using HTML5,” said Emily Greer, co-founder and CEO of Kongregate.  “Developers were able to take advantage of improvements in HTML5 technologies and tools while consumers were able to enjoy games without the need for 3rd-party plugins.  As HTML5 continues to evolve it will enable developers to create even more advanced games that will benefit the millions of gamers on Kongregate.com and the greater, still thriving, web gaming industry.”

Kongregate’s data shows that on average, about 55% of uploaded games are HTML5 games.

And we can also see that these are high-quality games, with over 60% of HTML5 titles receiving a “great” score (better than a 4.0 out of 5 rating).

In spite of this positive trend, opportunities for improvement exist. The web is an ever-evolving platform, and developers are always looking for better performance. One major request we have often heard is for multithreading support on the web. SharedArrayBuffer is a required building block for multithreading, which enables concurrently sharing memory between multiple web workers. The specification is finished, and Firefox intends to ship SharedArrayBuffer support in Firefox 55.
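A minimal sketch of what SharedArrayBuffer provides (both "sides" are simulated in one thread here; in a real page the buffer would be posted to a Web Worker with postMessage):

```javascript
// Allocate 4 int32 slots of memory that workers could share without
// copying; Atomics provides race-free reads and writes to it.
const shared = new SharedArrayBuffer(4 * Int32Array.BYTES_PER_ELEMENT);
const view = new Int32Array(shared);

Atomics.store(view, 0, 42);           // the "worker" side writes...
const result = Atomics.load(view, 0); // ...and the main thread reads 42
```

Unlike a regular ArrayBuffer sent with postMessage, a SharedArrayBuffer is not copied in transfer: both sides observe the same memory, which is what makes it a building block for multithreaded WebAssembly.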

Another common request is for SIMD support. SIMD is short for Single Instruction, Multiple Data. It’s a way for a CPU to parallelize math instructions, offering significant performance improvements for math-heavy workloads such as 3D rendering and physics.

The WebAssembly Community Group is now focused on enabling hardware parallelism with SIMD and multithreading as the next major evolutionary steps for WebAssembly. Building on the momentum of shipping the first version of WebAssembly and continued collaboration, both of these new features should be stable and ready to ship in Firefox in early 2018.

Much work has gone into optimizing runtime performance over the last few years, and with that we learned many lessons. We have collected many of these learnings in a practical blog post about porting games from native to web, and look forward to your input on other areas for improvement. As multithreading support lands in 2018, expect to see opportunities to further invest in improving memory usage.

We again wish to extend our gratitude to the game developers, publishers, engine providers, and other browsers’ engine teams who have collaborated with us over the years. We could not have done it without your help — thank you!

hacks.mozilla.orgWebAssembly for Native Games on the Web

The biggest improvement this year to web performance has been the introduction of WebAssembly. Now available in Firefox and Chrome, and coming soon in Edge and WebKit, WebAssembly enables the execution of code at a low assembly-like level in the browser.

Mozilla has worked closely with the games industry for several years to reach this stage: including milestones like the release of games built with Emscripten in 2013, the preview of Unreal Engine 4 running in Firefox (2014), bringing the Unity game engine to WebGL also in 2014, exporting an indie Unity game to WebVR in 2016, and most recently, the March release of Firefox 52 with WebAssembly.

WebAssembly builds on Mozilla’s original asm.js specification, which was created to serve as a plugin-free compilation target approach for applications and games on the web. This work has accumulated a great deal of knowledge at Mozilla specific to the process of porting games and graphics technologies. If you are an engineer working on games and this sounds interesting, read on to learn more about developing games in WebAssembly.

Where Does WebAssembly Fit In?

By now web developers have probably heard about WebAssembly’s promise of performance, but for developers who have not actually used it, let’s set some context for how it works with existing technologies and what is feasible. Lin Clark has written an excellent introduction to WebAssembly. The main point is that unlike JavaScript, which is generally written by hand, WebAssembly is a compilation target, just like native assembly. Except perhaps for small snippets of code, WebAssembly is not designed to be written by humans. Typically, you’d develop the application in a source language (e.g. C/C++) and then use a compiler (e.g. Emscripten), which transforms the source code to WebAssembly in a compilation step.

This means that existing JavaScript code is not what this model targets. If your application is written in JavaScript, then it already runs natively in a web browser, and it cannot simply be transformed to WebAssembly verbatim. What is possible in these types of applications, however, is to replace certain computationally intensive parts of your JavaScript with WebAssembly modules. For example, a web application might replace its JavaScript-implemented file decompression routine or a string regex routine with a WebAssembly module that does the same job, but with better performance. As another example, web pages written in JavaScript can use the Bullet physics engine compiled to WebAssembly to provide physics simulation.
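To make the module idea concrete, here is a complete, hand-assembled WebAssembly module with a single exported `add` function, instantiated from JavaScript. (A real application would compile its module from C/C++ with Emscripten and load it with `WebAssembly.instantiateStreaming` rather than inlining bytes.)

```javascript
// A tiny but complete WebAssembly module: exports add(a, b) -> i32.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // "\0asm", version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function section
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b                    // local.get 0; local.get 1; i32.add; end
]);

const instance = new WebAssembly.Instance(new WebAssembly.Module(bytes));
const sum = instance.exports.add(2, 3); // 5
```

Primitive arguments like these integers cross the JavaScript/WebAssembly boundary cheaply; anything more complex has to be marshalled through the module's memory.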

Another important property: Individual WebAssembly instructions do not interleave seamlessly in between existing lines of JavaScript code; WebAssembly applications come in modules. These modules deal with low-level memory, whereas JavaScript operates on high-level object representations. This difference in structure means that data needs to undergo a transformation step—sometimes called marshalling—to convert between the two language representations. For primitive types, such as integers and floats, this step is very fast, but for more complex data types such as dictionaries or images, this can be time consuming. Therefore, replacing parts of a JavaScript application works best when applied to subroutines with large enough granularity to warrant replacement by a full WebAssembly module, so that frequent transitions between the language barriers are avoided.

As an example, in a 3D game written in three.js, one would not want to implement a small Matrix*Matrix multiplication algorithm alone in WebAssembly. The cost of marshalling a matrix data type into a WebAssembly module and then back would negate the speed performance that is gained in doing the operation in WebAssembly. Instead, to reach performance gains, one should look at implementing larger collections of computation in WebAssembly, such as image or file decompression.

On the other end of the spectrum are applications that are implemented as fully in WebAssembly as possible. This minimizes the need to marshal large amounts of data across the language barrier, and most of the application is able to run inside the WebAssembly module. Native 3D game engines such as Unity and Unreal Engine implement this approach, where one can deploy a whole game to run in WebAssembly in the browser. This will yield the best possible performance gain. However, WebAssembly is not a full replacement for JavaScript. Even if as much of the application as possible is implemented in WebAssembly, there are still parts that are implemented in JavaScript. WebAssembly code does not interact directly with existing browser APIs that are familiar to web developers; instead, your program calls out from WebAssembly to JavaScript to interact with the browser. It is possible that this behavior will change in the future as WebAssembly evolves.

Producing WebAssembly

The largest audience currently served by WebAssembly are native C/C++ developers, who are often positioned to write performance sensitive code. An open source community project supported by Mozilla, Emscripten is a GCC/Clang-compatible compiler toolchain that allows building WebAssembly applications on the web. The main scope of Emscripten is support for the C/C++ language family, but because Emscripten is powered by LLVM, it has potential to allow other languages to compile as well. If your game is developed in C/C++ and it targets OpenGL ES 2 or 3, an Emscripten-based port to the web can be a viable approach.

Mozilla has benefited from games industry feedback – this has been a driving force shaping the development of asm.js and WebAssembly. As a result of this collaboration, Unity3D, Unreal Engine 4 and other game engines are already able to deploy content to WebAssembly. This support takes place largely under the hood in the engine, and the aim has been to make this as transparent as possible to the application.

Considerations For Porting Your Native Game

For the game developer audience, WebAssembly represents an addition to an already long list of supported target platforms (Windows, Mac, Android, Xbox, PlayStation, …), rather than a brand-new platform for which projects are developed from scratch. Because of this, we’ve placed a great deal of focus on development and feature parity with respect to other existing platforms in the development of Emscripten, asm.js, and WebAssembly. This parity continues to improve, although on some occasions the offered features differ noticeably, most often due to web security concerns.

The remainder of this article focuses on the most important items that developers should be aware of when getting started with WebAssembly. Some of these are successfully hidden under an abstraction if you’re using an existing game engine, but native developers using Emscripten should most certainly be aware of the following topics.

Execution Model Considerations

Most fundamental are the differences where code execution and memory model are concerned.

  • Asm.js and WebAssembly use the concept of a typed array (a contiguous linear memory buffer) that represents the low level memory address space for the application. Developers specify an initial size for this heap, and the size of the heap can grow as the application needs more memory.
  • Virtually all web APIs operate using events and an event queue mechanism to provide notifications, e.g. for keyboard and mouse input, file IO and network events. These events are all asynchronous and delivered to event handler functions. There are no polling type APIs for synchronously asking the “browser OS” for events, such as those that native platforms often provide.
  • Web browsers execute web pages on the main thread of the browser. This property carries over to WebAssembly modules, which are also executed on the main thread, unless one explicitly creates a Web Worker and runs the code there. On the main thread it is not allowed to block execution for long periods of time, since that would also block the processing of the browser itself. For C/C++ code, this means that the main thread cannot synchronously run its own loop, but must tick simulation and animation forward based on an event callback, so that execution periodically yields control back to the browser. User-launched pthreads will not have this restriction, and they are allowed to run their own blocking main loops.
  • At the time of writing, WebAssembly does not yet have multithreading support – this capability is currently in development.
  • The web security model can be a bit more strict compared to other platforms. In particular, browser APIs constrain applications from gaining direct access to low-level information about the system hardware, to mitigate being able to generate strong fingerprints to identify users. For example, it is not possible to query information such as the CPU model, the local IP address, amount of RAM or amount of available hard disk space. Additionally, many web features operate on web domain boundaries, and information traveling across domains is configured by cross-origin access control rules.
  • A special programming technique that web security also prevents is the dynamic generation and mutation of code on the fly. It is possible to generate WebAssembly modules in the browser, but after loading, WebAssembly modules are immutable and functions can no longer be added to it or changed.
  • When porting C/C++ code, standards-compliant code should compile easily, but native compilers relax certain features on x86, such as unaligned memory accesses, overflowing float->int casts and invoking function pointers via signatures that mismatch the actual type of the function. The ubiquity of x86 has made these kinds of nonstandard code patterns somewhat common in native code, but when compiling to asm.js or WebAssembly, these types of constructs can cause issues at runtime. Refer to the Emscripten documentation for more information about what kind of code is portable.
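The "no blocking main loop" rule above can be sketched in JavaScript terms (Emscripten exposes the same pattern to C/C++ via its emscripten_set_main_loop API; the simulation state here is a stand-in):

```javascript
// A tick advances the simulation by dt milliseconds and RETURNS,
// instead of looping; a `while (running)` loop here would freeze
// the browser tab, since the code runs on the main thread.
function makeSimulation() {
  return { frames: 0, time: 0 };
}

function tick(sim, dt) {
  sim.frames += 1;
  sim.time += dt;
  return sim;
}

// Browser driver (guarded so the sketch is also loadable elsewhere):
// each frame does one tick, then yields control back to the browser.
if (typeof requestAnimationFrame !== 'undefined') {
  const sim = makeSimulation();
  let last = performance.now();
  requestAnimationFrame(function frame(now) {
    tick(sim, now - last);
    last = now;
    requestAnimationFrame(frame); // schedule the next tick and yield
  });
}
```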

Another source of differences comes from the fact that code on a web page cannot directly access a native filesystem on the host computer, and so the filesystem solution that is provided looks a bit different than native. Emscripten defines a virtual filesystem space inside the web page, which backs onto the IndexedDB API for persistence across page visits. Browsers also store downloaded data in navigation caches, which sometimes is desirable but other times less so.

Developers should be particularly mindful about content delivery. In native application stores, up-front downloading and installing of a large application is the expected norm, but on the web this kind of monolithic deployment model can be an off-putting user experience. Applications can download and cache a large asset package on first run, but that imposes a sizable first-time download. Therefore, launching with a minimal initial download and streaming additional asset data as needed can be critical for building a web-friendly user experience.

Toolchain Considerations

The first technical challenge for developers comes from adapting existing build systems to target the Emscripten compiler. To make this easier, the compiler (emcc & em++) is designed to act as a close drop-in replacement for GCC or Clang, which eases migration of build systems that are already aware of GCC-like toolchains. Emscripten supports the popular CMake build system configuration generator, and emulates support for GNU Autotools configure scripts.
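As a rough sketch of what this looks like in practice (assuming the Emscripten SDK is installed and activated), the wrapper commands mirror the familiar native workflows:

```shell
# Direct compilation: emcc accepts the familiar GCC/Clang-style flags.
emcc main.c -O2 -o index.html

# CMake projects: emcmake injects the Emscripten toolchain file.
emcmake cmake .. && cmake --build .

# GNU Autotools projects: emconfigure/emmake wrap configure and make.
emconfigure ./configure && emmake make
```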

A point that sometimes causes confusion is that Emscripten is not an x86/ARM -> WebAssembly code transformation toolchain, but a cross-compiler. That is, Emscripten does not take existing compiled x86/ARM native code and transform it to run on the web; instead, it compiles C/C++ source code to WebAssembly. This means that you must have all the source available (or use libraries bundled with Emscripten or ported to it). Any code that depends on platform-specific (often closed-source) native components, such as the Win32 and Cocoa APIs, cannot be compiled and will need to be ported to use other solutions.

Performance Considerations

One of the most frequently asked questions about asm.js/WebAssembly is whether it is fast enough for a particular purpose. Curiously, the developers who most often doubt its performance are the ones who have not yet tried it out; developers who have tried it rarely mention performance as a major issue. There are some performance caveats, however, which developers should be aware of.

  • As mentioned earlier, multithreading is not available just yet, so applications that depend heavily on threads will not see the same performance.
  • Another feature that is not yet available in WebAssembly, but planned, is SIMD instruction set support.
  • Certain instructions can be relatively slower in WebAssembly than in native code. For example, calling virtual functions or function pointers has a higher performance cost due to sandboxing, and exception handling has been observed to cause a bigger performance impact than on native platforms. The performance landscape can look a bit different, so paying attention to this when profiling can be helpful.
  • Web security validation is known to impact WebGL noticeably. Applications using WebGL should take care to optimize their WebGL API calls, especially by avoiding redundant calls, each of which still pays the cost of driver security validation.
  • Last, application memory usage is a particularly critical aspect to measure, especially when also targeting mobile. Preloading big asset packages on first run and uncompressing large amounts of audio assets are two known sources of memory bloat that are easy to introduce by accident. Applications will likely need to optimize specifically for this when porting, and this is an active area of optimization in the WebAssembly and Emscripten runtimes as well.

Summary

WebAssembly provides support for executing low-level code on the web at high performance, much as web plugins once did, except with web security enforced. For developers using one of the super-popular game engines, leveraging WebAssembly will be as easy as choosing a new export target in the project build menu, and this support is available today. For native C/C++ developers, the open source Emscripten toolchain offers a drop-in compatible way to target WebAssembly. A lively community of developers contributes to Emscripten's development, and a mailing list is available for discussion and for help getting started. Games that run on the web are accessible to everyone, independent of platform, without compromising portability, performance, or security, or requiring up-front installation steps.

WebAssembly is only one part of a larger collection of APIs that power web-based games, so navigate on to the MDN games section to see the big picture. Hop right on in, and happy Emscriptening!