Air Mozilla: March Privacy Lab: Cryptographic Engineering for Everyone 3.22.17

March Privacy Lab: Cryptographic Engineering for Everyone 3.22.17 Our March speaker is Justin Troutman, creator of PocketBlock - a visual, gamified curriculum that makes cryptographic engineering fun. It's suitable for everyone from an...

Air Mozilla: Bugzilla Project Meeting, 22 Mar 2017

Bugzilla Project Meeting The Bugzilla Project developers meeting.

Air Mozilla: Weekly SUMO Community Meeting Mar. 22, 2017

Weekly SUMO Community Meeting Mar. 22, 2017 This is the SUMO weekly call.

hacks.mozilla.org: A Saturday Night: Track and record movement in WebVR

Mozilla’s WebVR team has released a fun new virtual reality demo called A Saturday Night. Put your VR headset on, perform a dance, and share it with the world!


A Saturday Night has been developed with A-Frame, an open source JavaScript framework created at Mozilla that makes building VR experiences much more accessible. If you have some knowledge of HTML you can create basic scenes with animations, and the A-Frame API allows you to use JavaScript to provide richer interactive experiences. There is also a registry of components, so you can easily include community-contributed code in your own projects.
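To give you a flavor of how approachable this is, here is a minimal A-Frame scene (an illustrative sketch, assuming the aframe library is loaded via a script tag; it is not taken from the demo itself):

<!-- A minimal A-Frame scene: a rotated box under a plain sky. -->
<a-scene>
  <a-box color="tomato" position="0 1 -3" rotation="0 45 0"></a-box>
  <a-sky color="#ECECEC"></a-sky>
</a-scene>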

Not only can you dance along with the demo; we also encourage you to peek at the A Saturday Night source code on GitHub. The most interesting part is that it shows how to track the user’s movement and position (both headset and controllers). And you can easily reuse that code in your own A-Frame projects too!

The tracking code has been released as a standalone A-Frame component, which you can grab from this GitHub repository or via npm:


npm install aframe-motion-capture

There are a few components in that repository. The highest-level ones, avatar-recorder and avatar-replayer, allow you to record and replay the avatar’s movement (head and hands). This is very useful for QA or automated tests, where recording and replaying what a user has done has tremendous value. It also opens up new use cases: game mechanics or other interactive activities that could benefit from recorded motion, such as controlling character movement, casting spells by gesturing with your hands, etc.
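As a rough sketch of how the highest-level components are wired up (based on the repository’s documented usage; the exact attribute names are worth double-checking against the README):

<!-- Record the headset and hand controllers, then replay the captured avatar. -->
<!-- Assumes the aframe and aframe-motion-capture scripts are loaded. -->
<a-scene avatar-recorder avatar-replayer>
  <a-entity camera look-controls></a-entity>
  <a-entity hand-controls="left"></a-entity>
  <a-entity hand-controls="right"></a-entity>
</a-scene>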

If you want to learn more about A Saturday Night, or the reusable tracking components, take a look at the A-Frame blog post, where Diego Marcos from Mozilla’s WebVR team shares more technical detail.

Air Mozilla: Rust Libs Meeting 2017-03-21

Rust Libs Meeting 2017-03-21 Rust Libs Meeting 2017-03-21

SUMO Blog: Guest post: “That Bug about Mobile Bookmarks”

Hi, SUMO Nation!

Time for a guest blog post by Seburo – one of our “regulars”, who wanted to share a very personal story about Firefox with all of you. He originally posted it on Mozilla’s Discourse, but the more people it reaches, the better. Thank you for sharing, Seburo! (As always, if you want to post something to our blog about your Mozilla and/or SUMO adventures and experiences, let us know.)

Here we go…

 

As a Mozillian I like to set myself goals and targets. It helps me to plan what I would like to do and to ensure that I am constantly focusing on activities that help Mozilla as well as maintaining a level of contribution. But beneath these “public” goals are a number of longer-term aims: things that are possible and have been done by many Mozillians, but that for me just seem a little out of reach. If you were to see the list, it may seem a little odd, possibly a little egotistical, even laughable, but however impossible some of them are, they serve as a reminder of what I may be able to achieve.

This blog entry is about me achieving one of them…

In the time leading up to the London All-Hands, I had been invited by a fellow SUMO contributor to attend a breakfast meeting to learn more about the plans around Nightly. This clashed with another breakfast meeting between SUMO and Sync to continue to work to improve our support for this great and useful feature of Firefox. Not wanting to upset anyone, I went with the first invite, but hoped to catch up with members of the Sync team during the week.

Having spent the morning better understanding how SUMO fits into the larger corporate structure, I made use of the open time in the schedule to visit the Firefox Homeroom, which was based in a basement meeting room, home for the week to all the alchemists and magicians that bring Mozilla software to life. It was on the way back up the stairs that I bumped into Mark from the Firefox Desktop team. I had expected to arrange some time for later in the week, but Mark was free to have a chat there and then.

Sync is straightforward when used to connect desktop and mobile versions of Firefox, but I wanted to better understand how it would work if a third device was included. At the end of the conversation, one of us mentioned how bookmarks synced from mobile Firefox ought to appear in a Mobile Bookmarks folder in desktop Firefox’s bookmark drop-down menus. But no such folder existed, which can make it look like your bookmarks have disappeared. Sure, you can open the bookmark library, but that means extra mouse clicks to open a separate tool. Mark suggested that this could be easy to fix and that I should file a bug, a task that duly went on the list of things to do on returning from the week.

A key goal for contributors at an All-Hands is to come back with a number of ways to build upon your ability to contribute in the future, and I came back with a long list that took time to work through. Filing the bug was also delayed by my natural pessimism about its chances of success. But then I realised: what if we all thought like that? Everything we have achieved started with someone putting an idea forward, knowing that other ideas had failed, and going ahead regardless.

So I wrote a bug and submitted it, and nothing much happened. But after a while there was a spark of activity. Thom from the Sync team had decided to resolve it and seemed to fully understand how this could work. The bug was assigned various flags and it soon became clear to me that work was being done on it. Not having any coding ability, I was not able to provide much real help to Thom aside from positive feedback on an early mock-up of the user experience. But to be honest, I was too nervous to say much more. A number of projects I had come back from MozLondon with had fallen through, and I did not want to say anything that could “jinx it” and stop it from proceeding.

A few months passed, after which I started getting copied in on bugmail about code needing review, with links to systems I barely knew existed. And there, partway down a page, were two words:

Ship It.

I know that these words are not unusual for many people at Mozilla; indeed, their very existence is one of the reasons that many staff turn on their computers (the other is probably cat gifs). But for me it was the culmination of something that I never thought would happen. The significance of this moment increased with the release of Nightly 54 – I could actually see and use what Thom and Mark had spent time and effort crafting. If you use version 54 (which is currently Firefox Developer Edition) with Firefox Sync, you should now see a “Mobile Bookmarks” folder in the drop down from the menu bar and from the toolbar. This folder is an easier way for you to access the bookmarks that you have saved on the bus, in the pub, on the train or during that really boring meeting you thought would never end.

I never thought that I would be able to influence the Firefox end product, and yet I had, in a very small way. Whilst full credit should go to Thom, Mark and the Sync team for building this, and to those who herded and QA’d the bug (never forget these people, their work is vital), credit should also go to the SUMO team for enabling me to be in a position to understand the user perspective and help make Sync work for more users. Sync is a great feature of Firefox and one that I hope can be improved and enhanced further.

I sincerely hope that you have enjoyed reading this little story, but more than that, I hope that you have learned from it and that those lessons will help you as a contributor. In particular:

  • Have goals, however impossible.
  • Contribute your ideas. Nobody else in the world has the same idea as you and imagines it in the same way.
  • Work outside of your own team, build bridges to other areas.
  • Use Nightly and (if you also use a mobile version of Firefox) use it with Firefox Sync.
  • Be respectful of Mozilla staff as they are at work and they are busy people, but also be prepared to be in awe of their awesomeness.

Whilst this was (I have been told) a simple piece of code, the result for me was to see a feature in Firefox that I helped make happen. Along the way, I have broadened my understanding of the effort that goes into Firefox, and I can now see that some of the bigger goals I have are achievable.

There is still so much I want to do.

QMO: Firefox 53 Beta 3 Testday Results

Hello Mozillians!

As you may already know, last Friday – March 17th – we held a new Testday event, for Firefox 53 Beta 3.

Thank you all for helping us make Mozilla a better place – Iryna Thompsn, Surentharan and Suren, Jeremy Lam and jaustinlam.

From Bangladesh team: Nazir Ahmed Sabbir | NaSb, Rezaul Huque Nayeem, Md.Majedul islam, Rezwana Islam Ria, Maruf Rahman, Aminul Islam Alvi | AiAlvi, Sayed Mahmud, Mohammad Mosfiqur Rahman, Ridwan, Tanvir Rahman, Anmona Mamun Monisha, Jaber Rahman, Amir Hossain Rhidoy, Ahmed Safa, Humayra Khanum, Sajal Ahmed, Roman Syed, Md Rakibul Islam, Kazi Nuzhat Tasnem, Md. Almas Hossain, Md. Asif Mahmud Apon, Syeda Tanjina Hasan, Saima Sharleen, Nusrat jahan, Sajedul Islam, আল-যুনায়েদ ইসলাম ব্রোহী, Forhad Hossain and Toki Yasir.

From India team: Guna / Skrillex, Subhrajyoti Sen / subhrajyotisen, Pavithra R, Nagaraj.V, karthimdav7, AbiramiSD/@Teens27075637, subash M, Monesh B, Kavipriya.A, Vibhanshu Chaudhary | vibhanshuchaudhary, R.KRITHIKA SOWBARNIKA, HARITHA KAMARAJ and VIGNESH B S.

Results:

– several test cases executed for the WebM Alpha, Compact Themes and Estimated Reading Time features.

– 2 bugs verified: 1324171, 1321472.

– 2 new bugs filed: 1348347, 1348483.

Again thanks for another successful testday! 🙂

We hope to see you all in our next events, all the details will be posted on QMO!

hacks.mozilla.org: A new CSS Grid demo on mozilla.org

With CSS Grid shipping across browsers this spring (already in Firefox 52 and Chrome 57, with Safari and hopefully Edge soon to follow), the team here at Mozilla wanted to show off some of the key features and also let our in-house designers and developers experiment with the technology on mozilla.org. The result is a live demo site that shows CSS Grid features and provides links to our favorite resources. As a bonus, the mozilla.org web developers got hands-on, real-world experience by working up a site from scratch with Grid.

Screen capture of CSS Grid demo

It turned out that the resources available on the web, including tons of examples and instruction, do such a great job of clearly explaining the basics that our developers were able to dive in and build the site without any handholding. Jen Simmons and Rachel Andrew both provide excellent examples and tutorials on how to start working with grids. In addition, MDN contains several detailed guides on using CSS Grids.

When designing this project we had the following goals in mind:

  • Demonstrate the potential of CSS Grid to developers and designers.
  • Introduce Firefox Developer Tools Grid Inspector, which is currently the only in-browser developer tool for Grid.
  • Build a page on mozilla.org that uses Grid Layout.
  • Prove CSS Grid makes it easy for anyone who knows CSS to grasp the fundamentals and create a functional page. (Bonus: the Mozilla webdev team had fun doing it!)

Grid provides powerful layout capabilities, and on the demo site we wanted to illustrate some of the key features. This list is not exhaustive, but does show some really interesting capabilities that are now available.

  • Fixed or Flexible Grids: You can create a grid either with fixed track sizes or with flexible sizes using percentages or the new fr fractional unit.
  • Place & Align Items: You can place items at precise locations on the grid using standard grid properties or by using grid template areas. Items can be placed independent of their HTML source order. Alignment features control how items align when placed into a grid area, and also how the whole grid is aligned.
  • Control Overlap: Grid cells can contain more than one item, and can span multiple rows and columns. You can also control layering with z-index.
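To make those features concrete, here is a small illustrative snippet (not taken from the demo site itself) combining flexible fr tracks, named template areas, and source-order-independent placement:

.page {
  display: grid;
  grid-template-columns: 1fr 3fr;   /* flexible tracks using the fr unit */
  grid-template-rows: auto 1fr auto;
  grid-template-areas:
    "header  header"
    "sidebar content"
    "footer  footer";
}

/* Items are placed by area name, independent of their HTML source order. */
.page > header { grid-area: header; }
.page > nav    { grid-area: sidebar; }
.page > main   { grid-area: content; }
.page > footer { grid-area: footer; }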

Additionally, we wanted to show off Firefox’s Grid Inspector Tool, which lets you see the grid lines in the browser while you’re creating a layout or studying other examples of CSS Grid in action.

Please check out the demo site and let us know what you think. We hope it will help you to learn and inspire you to start using CSS Grid. And stay tuned for more coverage of CSS Grid and how to use it, here and on MDN.

 

Blog of Data: Hello world!

Welcome to blog.mozilla.org. This is your first post. Edit or delete it, then start blogging!

The Mozilla Blog: How Do We Connect First-Time Internet Users to a Healthy Web?

Fresh research from Mozilla, supported by the Bill & Melinda Gates Foundation, explores how low-income, first-time smartphone users in Kenya experience the web — and what digital skills can make a difference

 

Three billion of us now share the Internet. But our online experiences differ greatly, depending on geography, gender and income.

For a software engineer in San Francisco, the Internet can be open and secure. But for a low-income, first-time smartphone user in Nairobi, the Internet is most often a small collection of apps in an unfamiliar language, limited further by high data costs.

This undercuts the Internet’s potential as a global public resource — a resource everyone should be able to use to improve their lives and societies.

Twelve months ago, Mozilla set out to study this divide. We wanted to understand the barriers that low-income, first-time smartphone users in Kenya face when adapting to online life. And we wanted to identify the skills and education methods necessary to overcome them.

To do this, Mozilla created the Digital Skills Observatory: a participatory research project exploring the complex relationship between devices, digital skills, social life, economic life and digital life. The work — funded by the Bill & Melinda Gates Foundation — was developed and led by Mozilla alongside Digital Divide Data and A Bit of Data Inc.

Today, we’re sharing our findings.

 

For one year, Mozilla researchers and local Mozilla community members worked with about 200 participants across seven Kenyan regions. All participants identified as low income and were coming online for the first time through smartphones. To hone our focus, we paid special attention to the impact of digital skills on digital financial services (DFS) adoption. Why? A strong grasp of digital financial services can open doors for people to access the formal financial environment and unlock economic opportunity.

In conducting the study, one group of participants was interviewed regularly and shared smartphone browsing and app usage data. A second group did the same, but also received digital skills training on topics like app stores and cybersecurity.

Our findings were significant. Among them:

  • Without proper digital skills training, smartphone adoption can worsen — not improve — existing financial and social problems.
    • Without media literacy and knowledge of online scams, users fall prey to fraudulent apps and news. The impact of these scams can be devastating on people who are already financially precarious
    • Users employ risky methods to circumvent the high price of data, like sharing apps via Bluetooth. As a result, out-of-date apps with security vulnerabilities proliferate
  • A set of 53 teachable skills can reduce barriers and unlock opportunity.
    • These skills — identified by both participants and researchers — range from managing data usage and recognizing scams to resetting passwords, managing browser settings and understanding business models behind app stores
    • Our treatment group learned these skills, and the end-of-study evaluation showed increased agency and understanding of what is possible online
    • Without these fundamental skills, users are blocked in their discoveries and adoption of digital products
  • Gender and internet usage are deeply entwined.
    • Men often have an effect on the way women use apps and services — for example, telling them to stop, or controlling their usage
    • Women were almost three times as likely to be influenced by their partner when purchasing a smartphone, usually in the form of financial support
  • Language and Internet usage are deeply entwined.
    • The web is largely in English — a challenge for participants who primarily speak Swahili or Sheng (a Swahili-English hybrid)
    • Colloquial language (like Sheng) increases comfort with technology and accommodates learning
  • Like most of us, first-time users found an Internet that is highly centralized.
    • Participants encountered an Internet dominated by just a few entities. Companies like Google, Facebook and Safaricom control access to apps, communication channels and more. This leads to little understanding of what is possible online and little agency to leverage the web
  • Digital skills are best imparted through in-person group workshops or social media channels.
    • Community-based learning was the most impactful — workshops provide wider exposure to what’s possible online and build confidence
    • Mobile apps geared toward teaching digital skills are less effective. Many phones cannot support them, and they are unlikely to “stick”
    • Social networks can be highly effective for teaching digital skills. Our chatbot experiment on WhatsApp showed positive results
  • Local talent is important when teaching digital skills.
    • Without a community of local practitioners and teachers, teaching digital skills becomes far more difficult
    • Research and teaching capacity can be grown and developed within a community
  • Digital skills are critical, but not a panacea.
    • Web literacy is one part of a larger equation. To become empowered digital citizens, individuals also must have access (like hardware and affordable data) and need (a perceived use and value for technology).

Mozilla’s commitment to digital literacy doesn’t end with this research. We’re holding roundtables and events in Kenya — and beyond — to share findings with allies like NGOs and technologists. We’re asking others to contribute to the conversation.

We’re also rolling our learnings into our ongoing Internet Health work, and building on the concept that access alone isn’t enough — we need solutions that account for the nuances of social and economic life, too.

Read the full report here.

The post How Do We Connect First-Time Internet Users to a Healthy Web? appeared first on The Mozilla Blog.

Air Mozilla: Mozilla Weekly Project Meeting, 20 Mar 2017

Mozilla Weekly Project Meeting The Monday Project Meeting

The Mozilla Blog: WebVR and AFrame Bringing VR to Web at the Virtuleap Hackathon

Imagine an online application that lets city planners walk through three-dimensional virtual versions of proposed projects, or a math program that helps students understand complex concepts by visualizing them in three dimensions. Both CityViewR and MathworldVR are amazing experiences that bring to life the possibilities of virtual reality (VR).
Both are concept virtual reality applications for the web that were created for the Virtuleap WebVR Hackathon. Amazingly, nine of the ten winning projects used AFrame, an open source project sponsored by Mozilla that makes it much easier to create VR experiences. CityViewR really illustrates the capability of WebVR to deliver real-life benefits that improve the quality of people’s daily lives beyond the browser.

A top-notch batch of leading VR companies, including Mozilla, funded and supported this global event with the goal of building the grassroots community for WebVR. For non-techies, WebVR is the experimental JavaScript API that allows anyone with a web browser to experience immersive virtual reality on almost any device. WebVR is designed to be completely platform and device agnostic and so it is a scalable and democratic path to stoking a mainstream VR industry that can take advantage of the most valuable thing the web has to offer: built-in traffic and hundreds of millions of users.

Over the three-month-long contest, teams from a dozen countries submitted 34 VR concepts. Seventeen judges and audience panels voted on the entries. Below is a list of the top 10 projects. I want to congratulate @ThePascalRascal and @Geczy, whose work won the €30,000 prize and spots in VR accelerator programs in Amsterdam, respectively.

Here’s the really excellent part. With luck and solid code, virtual reality should start appearing in standard general availability web browsers in 2017. That’s a big deal. To date, VR has been accessible primarily on proprietary platforms. To put that in real world terms, the world of VR has been like a maze with many doors opening into rooms. Each room held something cool. But there was no way to walk easily and search through the rooms, browse the rooms, or link one room to another. This ability to link, browse, collaborate and share is what makes the web powerful and it’s what will help WebVR take off.

To get an idea of how we envision this might work, consider the APainter app built by Mozilla’s team. It is designed to let artists create virtual art installations online. Each APainter work has a unique URL and other artists can come in and add to or build on top of the creation of the first artist, because the system is open source. At the same time, anyone with a browser can walk through an APainter work. And artists using APainter can link to other works within their virtual works, be it a button on a wall, a traditional text block, or any other format.

Mozilla participated in this hackathon, and is supporting WebVR, because we believe keeping the web open and ensuring it is built on open standards that work across all devices and browsers is a key to keeping the internet vibrant and healthy. To that same end, we are sponsoring the AFrame Project. The goal of AFrame is to make coding VR apps for the web even easier than coding web apps with standard HTML and JavaScript. Our vision at Mozilla is that, in the very near future, any web developer who wants to build VR apps can learn to do so, quickly and easily. We want to give them the power of creative self-expression.

It’s gratifying to see something we have worked so hard on enjoy such strong community adoption. And we’re also super grateful to Amir and the folks that put in the time and effort to organize and staff the Virtuleap Global Hackathon. If you are interested in learning more about AFrame, you can do so here.

The post WebVR and AFrame Bringing VR to Web at the Virtuleap Hackathon appeared first on The Mozilla Blog.

Air Mozilla: Webdev Beer and Tell: March 2017

Webdev Beer and Tell: March 2017 Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on in...

Mozilla Add-ons Blog: Migrating to WebExtensions? Don’t Forget Your Users

Stranded users feel sadness.

If you’re the developer of a legacy add-on with an aim to migrate to WebExtensions, here are a few tips to consider so your users aren’t left behind.

Port your user data

If your legacy add-on stores user data, we encourage you to take advantage of Embedded WebExtensions to transfer the data to a format that can be used by WebExtensions. This is critical if you want to seamlessly migrate your users—without putting any actionable burden on them—to the new WebExtensions version when it’s ready. (Embedded WebExtensions is a framework that contains your WebExtension inside of a bootstrapped or SDK extension.)
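In outline, the handoff can look something like this (a minimal sketch following the documented Embedded WebExtensions messaging pattern; the message name and stored key are made up for the example). On the legacy (SDK) side:

// Legacy SDK side: start the embedded WebExtension and answer its
// one-time request with the data stored by the legacy add-on.
const webExtension = require("sdk/webextension");
const simplePrefs = require("sdk/simple-prefs");

webExtension.startup().then(({ browser }) => {
  browser.runtime.onMessage.addListener((msg, sender, sendReply) => {
    if (msg === "export-legacy-data") {
      sendReply({ settings: simplePrefs.prefs["settings"] }); // hypothetical pref
    }
  });
});

And on the WebExtension side:

// WebExtension background script: request the legacy data once,
// then persist it with the WebExtensions storage API.
browser.runtime.sendMessage("export-legacy-data").then(reply => {
  if (reply) {
    browser.storage.local.set({ settings: reply.settings });
  }
});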

Testing beta versions

If you want to test a WebExtensions version of your add-on with a smaller group of users, you can make use of the beta versions feature on addons.mozilla.org (AMO). This lets you test pre-release beta versions that are signed and available to Firefox users who want to give them a spin. You’ll benefit from real users interacting with your new version and providing valuable feedback—without sacrificing the good reputation and rating of your listed version. We don’t recommend creating a separate listing on release because this will fragment your user base and leave a large number of them behind when Firefox 57 is released.

Don’t leave your users behind

Updating your listing on AMO when your WebExtension is ready is the only way to ensure all of your users move over without any noticeable interruption.

Need further assistance with your migration journey? You can find real-time help during office hours, or by emailing webextensions-support [@] mozilla [dot] org.

The post Migrating to WebExtensions? Don’t Forget Your Users appeared first on Mozilla Add-ons Blog.

Air Mozilla: Reps Weekly Meeting Mar. 16, 2017

Reps Weekly Meeting Mar. 16, 2017 This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

hacks.mozilla.org: Internationalize your keyboard controls

Recently I came across two lovely new graphical demos, and in both cases, the controls would not work on my French AZERTY keyboard.

There was the wonderful WebGL 2 technological demo After The Flood, and the very cute Alpaca Peck. Shaw was nice enough to fix the latter when I told him about the issue. It turns out the web browser actually exposes a useful API for this.

Let’s investigate further.

One keyboard, many layouts

People around the world use different keyboard layouts. You can read a lot on Wikipedia’s keyboard layout page, but I’ll try to summarise the important bits here.

The best-known and most widely used layout is QWERTY, used in most of the world:

A QWERTY layout. This layout is called QWERTY for the first six letters in the keyboard.

You may also know AZERTY, used in some French-speaking countries:

An AZERTY layout. AZERTY are the first six letters on the keyboard. Many, but not all, of the letters are in the same place as in QWERTY.

In addition, QWERTZ keyboards are in use in Germany and other European countries, and DVORAK is another alternative to QWERTY:

A DVORAK layout. This layout is completely different from AZERTY and QWERTY.

Each layout also has variants, especially in the symbols in the topmost row, as well as in the right-hand keys. Two keyboards of the same layout family might not be exactly the same. For example, Spanish QWERTY keyboards have a special key for ñ, and German QWERTZ keyboards have special keys for ä and ö.

A QWERTZ layout is really close to QWERTY yet has subtle differences.

You will notice that the keyboards have essentially the same structure for all layouts. For the most part, the keys are in the same location, although they can be slightly rearranged or adjusted. This is called the mechanical layout.

So a regional layout is made up of:

  • The visual layout is physically printed on the physical keys.
  • The functional layout refers to the software (driver) that maps hardware keys to characters.

This means we can actually change the layout used in the operating system without changing the physical keyboard. They are two different things! Some users will install improved layout drivers to be able to type faster or to type specific characters more easily. This is very helpful when useful characters are not normally available in the layout. For example, to type in French, I can very easily reach É, È, Ç or the French quotes « and » thanks to the driver I’m using.

But it also comes in handy when you need to write text in several languages: I don’t have the ø character anywhere on my keyboard, but my driver allows me to type it easily.

What happens on the Web?

Well, it used to be a complete mess. Then we converged to a cross-browser behavior quite appropriate for QWERTY keyboards.

The API we’ve grown used to revolves around the three events: keydown, keypress, and keyup. keydown and keyup are called key events because they are fired each time a user presses any key, while keypress is called a character event because it’s supposed to be fired when a character is sent as a result of the key press. All modern browsers seem to agree on this, even if it wasn’t always the case.

For this legacy API, we use the three properties of KeyboardEvent: keyCode, charCode and which. I won’t go into the details here; please believe me when I tell you this is a nightmare to work with:

  • Properties don’t have the same meaning when handling a key event (keydown or keyup) versus a character event (keypress).
  • For some keys and events, the values are not consistent cross-browser, even for the latest browser versions.
  • keyCode on key events tries to be international-friendly — no, really — but it fails miserably, because of the lack of a common specification.

So, let’s see what improvements the new API brings us!

The new API, part of UI Events

UI Events, formerly known as DOM Level 3 Events, is a W3C specification in discussion since 2000. It’s still being discussed as a Working Draft, but because most browsers seem to agree today, we can hope that the specification will move forward to a recommendation. The latest keyboard events working draft is available online now.

The new API brings two new very useful properties to a KeyboardEvent event: key and code. They replace the previously existing (and still existing) charCode, keyCode, and which.

Let’s see why these changes are so useful, especially for building cross-keyboard websites (if you will allow me this neologism).

KeyboardEvent.key gives you a printable character or a descriptive string

The property key is almost a direct replacement for the previously used which, except it’s a lot more predictable.

When the pressed key is a printable character, you get the character in string form (instead of its ASCII/Windows-1252 code for which and keyCode, or Unicode code for charCode).

When the pressed key is not a printable character (for example Backspace or Control, but also Enter or Tab, which do actually produce characters), you get a multi-character descriptive string, like 'Backspace', 'Control', 'Enter', 'Tab'.

Among major, modern desktop browsers, only Safari doesn’t support the property yet, but will in the next version.

KeyboardEvent.key Browser Usage, February 2017. All desktop browsers except Safari support the property. Next version of Safari will support it though.

KeyboardEvent.code gives you the physical key

The code property is completely new with this specification. It is what keyCode should have been.

It gives you, in a string form, the physical key that was pressed. This means it’s totally independent of the keyboard layout that is being used.

So let’s say the user presses the Q key on a QWERTY keyboard. Then event.code gives you 'KeyQ' while event.key gives you 'q'.

But when an AZERTY keyboard user presses the A key, they also get 'KeyQ' as event.code, yet event.key contains 'a'. This happens because the A key on an AZERTY keyboard is at the same location as the Q key on a QWERTY keyboard.

As for numbers, the top digit bar yields values like 'Digit1', while the numeric pad yields values like 'Numpad1'.
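A quick way to see the difference is to log both properties (a tiny sketch you can drop into any test page):

// Log the layout-dependent key and the layout-independent code.
// On AZERTY, pressing the key where Q sits on QWERTY prints:
// key = "a", code = "KeyQ".
window.addEventListener('keydown', function(e) {
  console.log('key = "' + e.key + '", code = "' + e.code + '"');
});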

Unfortunately this feature is currently implemented only in Blink and Firefox, but Safari support is coming soon.

KeyboardEvent.code Browser Usage, February 2017. Firefox and Blink-based browsers like Chrome and Opera support it. Safari will support it in the next version. Internet Explorer, Edge and most mobile browsers do not.

The reference keyboard

If each key triggers a specific code… then I can hear your next question: which code is triggered for which key? And what is the reference keyboard?

This is more complicated than it seems. There’s no existing keyboard with all the possible keys.

That’s why the W3C published a specification just for this. You can read about the existing mechanical layouts around the world, as well as their reference keyboard. For instance, here is their reference keyboard for the alphanumerical part:

Keyboard Codes, alphanumeric section.

I encourage you to take a look and get at least an overview of this specification.

Note that the W3C has also published a sibling specification describing the values for the key property.

The relationship between keys and codes

I highly recommend reading through the examples given in the specification. They show very clearly what happens when the user presses various types of keys, for both code and key.

Cross-browser controls

The wonderful Mozilla Developer Network offers a good example of how to control a game using WASD or arrow keys. But the example doesn’t run cross-browser; in particular, it doesn’t work on Safari or Internet Explorer, because they haven’t implemented the specification yet. So let’s look at how we can write some cross-browser code.

Of course, where the specification isn’t implemented, the code won’t work properly on a non-QWERTY keyboard. For this reason, it’s a good idea to support the arrow keys as well, because they’re always in the same place everywhere. In this example, I also use the numeric pad and the IJKL keys, as they’re less likely to be at different locations.

Here’s an example of how JavaScript code can support both the new API and the older API.


window.addEventListener('keydown', function(e) {
  if (e.defaultPrevented) {
    return;
  }

  // We don't want to mess with the browser's shortcuts
  if (e.ctrlKey || e.altKey || e.metaKey || e.shiftKey) {
    return;
  }

  // We try to use `code` first because that's the layout-independent property.
  // Then we use `key` because some browsers, notably Internet Explorer and
  // Edge, support it but not `code`. Then we use `keyCode` to support older
  // browsers like Safari, older Internet Explorer and older Chrome.
  switch (e.code || e.key || e.keyCode) {
    case 'KeyW': // This is 'W' on QWERTY keyboards, but 'Z' on AZERTY keyboards
    case 'KeyI':
    case 'ArrowUp':
    case 'Numpad8':
    case 38: // keyCode for arrow up
      changeDirectionUp();
      break;
   
    // ... Other letters: ASD, JKL, arrows, numpad

    default:
      return;
  }

  e.preventDefault();
  doSomethingUseful();
});

// touch handling
// A real implementation would want to use touchstart and touchend as well.
window.addEventListener('touchmove', function(e) {
  // don't forget to throttle the event
});

Try the full version!

What’s missing?

The API itself is quite well done, not much is missing.

Yet I miss something: there is no way to know what the current keyboard layout is. This would be really useful for writing the instructions to control the game: press WASD/ZQSD/... depending on the layout.

An API to know which letter is behind a specific key would also be useful. Yet I don’t know for sure if the underlying operating systems offer the necessary low-level calls to provide that information.

Other useful things

Without going into too much detail, let’s take a quick look at some other significant functionality in the API:

  • The keypress event is deprecated. Now you should always use keydown instead. The event beforeinput is also planned but isn’t supported to date by any stable version of a browser (Chrome Canary has an implementation). The event input is a higher-level event supported by all browsers that is also useful in some situations.
  • With the location property on KeyboardEvent, if a pressed key exists in several locations (e.g. the Shift or Ctrl keys, or the digits), you can tell which one was actually used. For example, you can know whether the pressed key is in the numeric pad or on the top digit bar.
    Note: This information is also contained in the code property, as every physical key gets its own code.
  • The repeat property is set to true if the user keeps a key depressed and an event is sent repeatedly as a result.
  • If you want to know if a modifier key is depressed while handling another key’s KeyboardEvent, you don’t need to keep track of the state yourself. The boolean properties altKey, ctrlKey, metaKey, shiftKey, as well as the method getModifierState, can give you the state of various modifier keys when the event was triggered.
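For instance, a handler can check modifiers straight from the event (a small sketch using the standard API):

// No need to track Shift with your own keydown/keyup bookkeeping.
window.addEventListener('keydown', function(e) {
  if (e.shiftKey && e.key === 'Enter') {
    console.log('Shift+Enter pressed');
  }
  // getModifierState also covers locking modifiers like CapsLock.
  if (e.getModifierState('CapsLock')) {
    console.log('Caps Lock is on');
  }
});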

Oddly enough, keyboard events don’t seem to work properly on mobile platforms (iPhone untested). So be sure to provide a touch interface as well!

You can use it now

This is my conclusion: You can use this now! It’s possible to progressively enhance your game controller code by taking advantage of the newer API for modern browsers while supporting older browsers at the same time.

Your international users will thank you for this… by using your product :-)

QMO: Extra Testday event held by Mozilla Tamilnadu community

Hello Mozillians,

This week, the Mozilla community from Tamilnadu organized and held a Testday event in various campus clubs across their region.

I just want to thank you all for taking part in this. With the community’s help, Mozilla is improving every day.

Several test cases were executed for the WebM Alpha, Reader Mode Displays Estimate Reading Time and Quantum – Compositor Process features.

Many thanks to Prasanth P, Surentharan R A, Monesh, Subash, Rohit R, @varun1102, Akksaya, Roshini, Swathika, Suvetha Sri, Bhava, aiswarya.M, Aishvarya, Divya, Arpana, Nivetha, Vallikannu, Pavithra Roselin, Suryakala, prakathi, Bhargavi.G, Vignesh.R, Meganisha.B, Aishwarya.k, harshini.k, Rajesh, Krithika Sowbarnika, harini shilpa, Dhinesh kumar, KAVIPRIYA.S, HARITHA K SANKARI, Nagaraj V, abarna, Sankararaman, Harismitaa R K, Kavya, Monesh, Harini, Vignesh, Anushri, Vishnu Priya, Subash.M, Vinothini K, Pavithra R.

Keep up the good work!
Mihai Boldan, QA Community Mentor
Firefox for Desktop, Release QA Team

The Mozilla Blog: Five issues that will determine the future of Internet Health

In January, we published our first Internet Health Report on the current state and future of the Internet. In the report, we broke down the concept of Internet health into five issues. Today, we are publishing issue briefs about each of them: online privacy and security, decentralization, openness, web literacy and digital inclusion. These issues are the building blocks to a healthy and vibrant Internet. We hope they will be a guide and resource to you.

We live in a complex, fast moving, political environment. As policies and laws around the world change, we all need to help protect our shared global resource, the Internet. Internet health shouldn’t be a partisan issue, but rather, a cause we can all get behind. And our choices and actions will affect the future health of the Internet, for better or for worse.

We work on many other policies and projects to advance our mission, but we believe that these issue briefs help explain our views and actions in the context of Internet health:


 

1. Online Privacy & Security:

Security and privacy on the Internet are fundamental and must not be treated as optional.

In our brief, we highlight the following subtopics:

  • Meaningful user control – People care about privacy. But effective understanding and control are often difficult, or even impossible, in practice.
  • Data collection and use – The tech industry, too often, reflects a culture of ‘collect and hoard all the data’. To preserve trust online, we need to see a change.
  • Government surveillance – Public distrust of government is high because of broad surveillance practices. We need more transparency, accountability and oversight.
  • Cybersecurity – Cybersecurity is user security. It’s about our Internet, our data, and our lives online. Making it a reality requires a shared sense of responsibility.

Protecting your privacy and security doesn’t mean you have something to hide. It means you have the ability to choose who knows where you go and what you do.


2. Openness:

A healthy Internet is open, so that together, we can innovate.

To make that a reality, we focus on these three areas:

  • Open source – Being open can be hard. It exposes every wrinkle and detail to public scrutiny. But it also offers tremendous advantages.
  • Copyright – Offline copyright law built for an analog world doesn’t fit the current digital and mobile reality.
  • Patents – In technology, overbroad and vague patents create fear, uncertainty and doubt for innovators.

Copyright and patent laws should better foster collaboration and economic opportunity. Open source, open standards, and pro-innovation policies must continue to be at the heart of the Internet.


3. Decentralization:

There shouldn’t be online monopolies or oligopolies; a decentralized Internet is a healthy Internet.

To accomplish that goal, we are focusing on the following policy areas.

  • Net neutrality – Network operators must not be allowed to block or skew connectivity or the choices of Internet users.
  • Interoperability – If short-term economic gains limit long-term industry innovation, then the entire technology industry and economy will suffer the consequences.
  • Competition and choice – We need the Internet to be an engine for competition and user choice, not an enabler of gatekeepers.
  • Local contribution – Local relevance is about more than just language; it’s also tailored to the cultural context and the local community.

When there are just a few organizations and governments who control the majority of online content, the vital flow of ideas and knowledge is blocked. We will continue to look for public policy levers to advance our vision of a decentralized Internet.


4. Digital Inclusion:

People, regardless of race, income, nationality, or gender, should have unfettered access to the Internet.

To help promote an open and inclusive Internet, we are focusing on these issues:

  • Advancing universal access to the whole Internet – Everyone should have access to the full diversity of the open Internet.
  • Advancing diversity online – Access to and use of the Internet are far from evenly distributed. This represents a connectivity problem and a diversity problem.
  • Advancing respect online – We must focus on changing and building systems that rely on both technology and humans, to increase and protect diverse voices on the Internet.

Numerous and diverse obstacles stand in the way of digital inclusion, and they won’t be overcome by default. Our aim is to collaborate with, create space for, and elevate everyone’s contributions.


5. Web Literacy:

Everyone should have the skills to read, write and participate in the digital world.

To help people around the globe participate in the digital world, we are focusing on these areas:

  • Moving beyond coding – Universal web literacy doesn’t mean everyone needs to learn to code; other kinds of technical awareness and empowerment can be very meaningful.
  • Integrating web literacy into education – Incorporating web literacy into education requires examining the opportunities and challenges faced by both educators and youth.
  • Cultivating digital citizenship – Everyday Internet users should be able to shape their own Internet experience, through the choices that they make online and through the policies and organizations they choose to support.

Web literacy should be foundational in education, like reading and math. Empowering people to shape the web enables people to shape society itself. We want people to go beyond consuming and contribute to the future of the Internet.


Promoting, protecting, and preserving a healthy Internet is challenging, and takes a broad movement working on many different fronts. We hope that you will read these and take action alongside us, because in doing so you will be protecting the integrity of the Internet. For our part, we commit to advancing our mission and continuing our fight for a vibrant and healthy Internet.

The post Five issues that will determine the future of Internet Health appeared first on The Mozilla Blog.

Air Mozilla: The Joy of Coding - Episode 95

The Joy of Coding - Episode 95 mconley livehacks on real Firefox bugs while thinking aloud.

Air Mozilla: Building Habit-Forming Products with Nir Eyal

Building Habit-Forming Products with Nir Eyal Nir Eyal has built and invested in products reaching hundreds of millions of users including AdNectar, Product Hunt and EventBrite. He'll draw on core psychological...

hacks.mozilla.org: Why WebAssembly is Faster Than asm.js

WebAssembly, a new binary execution format for the Web, is starting to arrive in stable versions of browsers. A major goal of WebAssembly is to be fast. This post gives some technical details about how it achieves that.

Of course, “fast” is relative. Compared to JavaScript and other dynamic languages, WebAssembly is fast because it is statically typed and simple to optimize. But WebAssembly is also intended to be as fast as native code. asm.js has already come quite close to that, and WebAssembly narrows the gap further. This post focuses therefore on why WebAssembly is faster than asm.js.

Before we start, the usual caveats: Performance is tricky to measure, and has many aspects. Also, in a new technology there are always going to be not-yet-optimized cases. So not every single benchmark will be fast on WebAssembly today. This post describes why WebAssembly should be fast; where it isn’t yet, those are bugs we need to fix.

With that out of the way, here is why WebAssembly is fast:

1. Startup

WebAssembly is designed to be small to download and fast to parse, so that even large applications start up quickly.

It’s actually not that easy to improve on the download size of gzipped minified JavaScript, as it’s already fairly compact when compared with native code. Still, WebAssembly’s binary format can improve on that, by being carefully designed with size in mind (indexes are LEB128s, etc.). It is often around 10–20% smaller (comparing gzipped sizes).

WebAssembly improves on parsing in a much bigger way: It can be parsed an order of magnitude faster than JavaScript. This mostly comes down to binary formats being faster to parse, especially ones designed for that. WebAssembly also makes it easy to parse (and optimize) functions in parallel, which helps a lot on multicore machines.

Total startup time can include factors other than downloading and parsing, such as the VM fully optimizing the code, or downloading additional data files that are necessary before execution, etc. But downloading and parsing are unavoidable and therefore important to improve upon as much as possible. All the rest can be optimized or mitigated, either in the browser or in the app (for example, fully optimizing the code can be avoided by using a baseline compiler or interpreter for WebAssembly, for the first few frames).

2. CPU features

One trick that’s made asm.js so fast is that while all JavaScript numbers are doubles, in asm.js an addition will have a bitwise-and operation right after it, which makes it logically equivalent to the CPU doing a simple integer addition, which CPUs are very good at. So asm.js made it easy for VMs to use a lot of the full power of CPUs.
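As a hand-written illustration of that pattern (real asm.js is normally emitted by compilers such as Emscripten, not written by hand):

// The |0 coercions tell the VM these values are 32-bit integers,
// so it can emit a plain integer addition instead of a double one.
function AsmModule(stdlib) {
  "use asm";
  function add(x, y) {
    x = x | 0;          // parameter annotation: int
    y = y | 0;
    return (x + y) | 0; // result annotation: int
  }
  return { add: add };
}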

But asm.js was limited to things that are expressible in JavaScript. WebAssembly isn’t limited in that way, and lets us use even more CPU features, such as:

  • 64-bit integers. Operations on them can be up to 4x faster. This can speed up hashing and encryption algorithms, for example.
  • Load and store offsets. This helps very broadly, basically anything that uses memory objects with fields at fixed offsets (C structs, etc.).
  • Unaligned loads and stores, avoiding asm.js’s need to mask (which asm.js did for Typed Array compatibility purposes). This helps with practically every load and store.
  • Various CPU instructions like popcount, copysign, etc. Each of these can help in specific circumstances (e.g. popcount can help in cryptanalysis).

How much a specific benchmark benefits will depend on whether it uses the features mentioned above. We often see a 5% speedup on average compared to asm.js. Further speedups are expected in the future from CPU features like SIMD.

3. Toolchain Improvements

WebAssembly is primarily a compiler target, and therefore has two parts: Compilers that generate it (the toolchain side), and VMs that run it (the browser side). Good performance depends on both.

This was already the case with asm.js, and Emscripten did a bunch of toolchain optimizations, running LLVM’s optimizer and also Emscripten’s asm.js optimizer. For WebAssembly, we built on top of that, but have also added some significant improvements along the way. Both asm.js and WebAssembly are not typical compiler targets, and in similar ways, so lessons learned during the asm.js days helped us do things better for WebAssembly.

Overall, these toolchain improvements help about as much as moving from asm.js to WebAssembly helps us (7% and 5% on Box2D, respectively).

4. Predictably Good Performance

asm.js could run at basically native speed, but it never actually did so in all browsers consistently. The reason is that some tried to optimize it one way, some another, with differing results. Over time things started to converge, but the basic problem was that asm.js was not an actual standard: It was an informal spec of a subset of JavaScript, written by one vendor, that only gradually saw interest and adoption from the others.

WebAssembly, on the other hand, has been designed jointly by all major browsers. Unlike JavaScript, which could be made fast only using very creative methods, or asm.js, which could be made fast using simple methods but not all browsers did so, WebAssembly has more agreement upon how to optimize it. There is still plenty of room for differentiation in VMs (different ways to tier compilation, AOT vs. JIT, etc.), but a good baseline of predictable performance can be expected across the entire Web.

Mozilla Add-ons Blog: Add-ons Update – 2017/03

Here’s the state of the add-ons world this month.

The Road to Firefox 57 explains what developers should look forward to in regards to add-on compatibility for the rest of the year. Please give it a read if you haven’t already.

The Review Queues

In the past month, 1,414 listed add-on submissions were reviewed:

  • 1,132 (80%) were reviewed in fewer than 5 days.
  • 31 (2%) were reviewed between 5 and 10 days.
  • 251 (18%) were reviewed after more than 10 days.

There are 594 listed add-ons awaiting review.

We met last week to discuss the state of the queues and our plans to reduce waiting times. There are already some changes coming in the next month or so that should help significantly, but we have larger plans that we will share soon that should address this recurring problem permanently.

If you’re an add-on developer and are looking for contribution opportunities, please consider joining us. Add-on reviewers are critical for our success, and can earn cool gear for their work. Visit our wiki page for more information.

Compatibility

The blog post for 53 is up and the bulk validation will be run soon. Firefox 54 is coming up.

Multiprocess Firefox is enabled for some users, and will be deployed for most users very soon. Make sure you’ve tested your add-on and either use WebExtensions or set the multiprocess compatible flag in your add-on manifest.

As always, we recommend that you test your add-ons on Beta and Firefox Developer Edition to make sure that they continue to work correctly. End users can install the Add-on Compatibility Reporter to identify and report any add-ons that aren’t working anymore.

Recognition

We would like to thank the following people for their recent contributions to the add-ons world:

  • Piotr Drąg
  • Niharika Khanna
  • saintsebastian
  • Atique Ahmed Ziad
  • gilbertginsberg
  • felixgirault
  • StandB
  • lavish205
  • numrut
  • fitojb
  • totaki
  • ingoe

You can read more about their work in our recognition page.

The post Add-ons Update – 2017/03 appeared first on Mozilla Add-ons Blog.

Air Mozilla: Martes Mozilleros, 14 Mar 2017

Martes Mozilleros Bi-weekly meeting to talk (in Spanish) about the state of Mozilla, the community and its projects.

The Mozilla Blog: A Public-Private Partnership for Gigabit Innovation and Internet Health

Mozilla, the National Science Foundation and U.S. Ignite announce $300,000 in grants for gigabit internet projects in Eugene, OR and Lafayette, LA

 

By Chris Lawrence, VP, Leadership Network

At Mozilla, we believe in a networked approach — leveraging the power of diverse people, pooled expertise and shared values.

This was the approach we took nearly 15 years ago when we first launched Firefox. Our open-source browser was — and is — built by a global network of engineers, designers and open web advocates.

This is also the approach Mozilla takes when working toward its greater mission: keeping the internet healthy. We can’t build a healthy internet — one that cherishes freedom, openness and inclusion — alone. To keep the internet a global public resource, we need a network of individuals and organizations and institutions.

One such partnership is Mozilla’s ongoing collaboration with the National Science Foundation (NSF) and U.S. Ignite. We’re currently offering a $2 million prize for projects that decentralize the web. And together in 2014, we launched the Gigabit Community Fund. We committed to supporting promising projects in gigabit-enabled U.S. cities — projects that use connectivity 250-times normal speeds to make learning more engaging, equitable and impactful.

Today, we’re adding two new cities to the Gigabit Community Fund: Eugene, OR and Lafayette, LA.

 

Beginning in May 2017, we’re providing a total of $300,000 in grants to projects in both new cities. Applications for grants will open in early summer 2017; applicants can be individuals, nonprofits and for-profits.

We’ll support educators, technologists and community activists in Eugene and Lafayette who are building and beta-testing the emerging technologies that are shaping the web. We’ll fuel projects that leverage gigabit networks to make learning more inclusive and engaging through VR field trips, ultra-high definition classroom collaboration, and real-time cross-city robot battles. (These are all real examples from the existing Mozilla gigabit cities of Austin, Chattanooga and Kansas City.)

We’re also investing in the local communities on the ground in Eugene and Lafayette — and in the makers, technologists, and educators who are passionate about local innovation. Mozilla will bring its Mozilla Network approach to both cities, hosting local events and strengthening connections between individuals, schools, nonprofits, museums, and other organizations.

Video: Learn how the Mozilla Gigabit Community Fund supports innovative local projects across the U.S.

Why Eugene and Lafayette? Mozilla Gigabit Community Fund cities are selected based on a range of criteria, including a widely deployed high-speed fiber network; a developing conversation about digital literacy, access, and innovation; a critical mass of community anchor organizations, including arts and educational organizations; an evolving entrepreneurial community; and opportunities to engage K-12 school systems.

We’re excited to fuel innovation in the communities of Eugene and Lafayette — and to continue our networked approach with NSF, U.S. Ignite and others, in service of a healthier internet.

 

The post A Public-Private Partnership for Gigabit Innovation and Internet Health appeared first on The Mozilla Blog.

Air Mozilla: Mozilla Weekly Project Meeting, 13 Mar 2017

Mozilla Weekly Project Meeting The Monday Project Meeting

Mozilla L10N: Hack on Pontoon with the Google Summer of Code

Mozilla has been kindly invited to participate in the Google Summer of Code (GSoC) 2017. For the first time, Pontoon will be part of this great program, which introduces students to open source software development. Read on if you’re interested in applying.

You will be paired with a mentor (hi!) and spend 3 months hacking on a free and open source translation tool from Mozilla. While gaining exposure to real-world software development techniques, you will also earn a stipend and have a great time!

Pontoon in Esperanto

As part of the Pontoon GSoC project, we’d like to explore the feasibility of a screenshot-based localization process. The idea is this:

Localizers often lack context when translating strings. Let’s say you need to translate “Bookmark”. Is it a noun or a verb? In many languages translation for the former would be different than for the latter.

Sure, we can provide context using string comments, but a screenshot showing where in the application the string is used is much more revealing. Besides, application screenshots can be generated automatically, which is not (yet!) true for comments.
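
For instance, a localization comment in a .properties file might look like this (the string ID below is hypothetical):

# LOCALIZATION NOTE (bookmark.button.label): "Bookmark" is a verb here;
# it labels the button that bookmarks the current page.
bookmark.button.label=Bookmark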

Your task will be to redesign the Pontoon translation interface to support:

  • string navigation using screenshots
  • displaying original strings in screenshots
  • previewing translations in localized screenshots

JavaScript, HTML, CSS and design skills are required.

Student applications open on March 20th at 16:00 UTC, so now is a perfect time to prepare. Let us know if you have any questions. And then go spend your summer break writing code and learning about open source development while earning a stipend!

Mozilla Add-ons BlogWebExtensions in Firefox 54

Firefox 54 landed in Developer Edition this week, so we have another update on WebExtensions for you. In addition to new APIs to help more developers port over to WebExtensions, we also announced a new Office Hours Support schedule where developers can get more personalized help with the transition.

New APIs

A new API for creating sidebars was implemented. This allows you to place a local HTML file inside the sidebar. The API is similar to the one in Opera. If you specify the sidebar_action manifest key, Firefox will create a sidebar.
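
For illustration, a minimal sidebar_action entry in manifest.json might look like this (the title and file names are hypothetical):

  "sidebar_action": {
    "default_title": "My Sidebar",
    "default_panel": "sidebar.html",
    "default_icon": "sidebar_icon.png"
  }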

To allow keyboard commands to be sent to the sidebar, a new _execute_sidebar_action command was added to the commands API, which allows you to trigger the showing of the sidebar.
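
For example, the corresponding commands entry might look like this (the key combination is illustrative):

  "commands": {
    "_execute_sidebar_action": {
      "suggested_key": {
        "default": "Ctrl+Shift+Y"
      }
    }
  }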

The ability to override about:newtab with pages inside your extension was added to the chrome_url_overrides field in the manifest. Check out the example that uses the topSites API to show the top sites you visit.
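
A minimal sketch of the manifest entry (the page name is hypothetical):

  "chrome_url_overrides": {
    "newtab": "my-new-tab.html"
  }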

The privacy API gives you the ability to flip certain Firefox preferences related to privacy. Although the preferences in Chrome and Firefox aren’t a direct mapping, we’ve mapped the Firefox preferences that make sense to the APIs. Currently implemented are: networkPredictionEnabled, webRTCIPHandlingPolicy and hyperlinkAuditingEnabled.
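
For example, an extension with the "privacy" permission might disable link prefetching like this (a minimal sketch, not taken from a shipping extension):

// Flip the preference, then read it back along with the level of control.
browser.privacy.network.networkPredictionEnabled.set({ value: false })
  .then(() => browser.privacy.network.networkPredictionEnabled.get({}))
  .then(details => console.log(details.value, details.levelOfControl));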

The protocol_handlers manifest key lets you easily map protocols to actions in your extension. For example: we use irccloud at Mozilla, so we can map ircs:// links to irccloud by adding this into an extension:

  "protocol_handlers": [
    {
      "protocol": "ircs",
      "name": "IRC Mozilla Extension",
      "uriTemplate": "https://irccloud.mozilla.com/#!/%s"
    }
  ]

When a user clicks on an IRC link, it shows the application selector with the IRC Mozilla Extension visible.

This release also marks the landing of the first set of devtools APIs. Quite a few APIs landed, including: inspectedWindow.reload(), inspectedWindow.eval(), inspectedWindow.tabId, network.onNavigated, and panels.create().
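
As a minimal sketch, an extension with a devtools page (declared via the devtools_page manifest key) might use these APIs like so; the panel title, icon, and page names are hypothetical:

// Create a new panel in the developer tools.
browser.devtools.panels.create(
  "My Panel",         // panel title
  "/icons/panel.png", // panel icon
  "/panel.html"       // panel page
).then(panel => {
  console.log("DevTools panel created");
});

// Evaluate an expression in the context of the inspected page.
// The promise resolves with an array of [result, errorInfo].
browser.devtools.inspectedWindow.eval("document.title")
  .then(([result, error]) => console.log(result));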

Here’s an example of the Redux DevTools extension running on Firefox.

Backwards incompatible changes

The webRequest API will now require that you’ve requested the appropriate hosts’ permissions before allowing you to perform webRequest operations on a URL. This will be a backwards-incompatible change for any extension which used webRequest but did not request the host permission.
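
For example, an extension that intercepts requests to example.com would now need a matching host permission in its manifest (a hypothetical snippet):

  "permissions": [
    "webRequest",
    "webRequestBlocking",
    "*://*.example.com/*"
  ]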

Deletes in storage.sync are now encrypted. This would be a breaking change for any extensions using storage.sync on Developer Edition.

API Changes

Some key events were completed in some popular APIs:

  • webRequest.onBeforeRequest is initiated before a server-side redirect is about to occur, and webRequest.onAuthRequired is fired when an authentication failure occurs. These allow you to catch authentication requests from servers, such as proxy authentication.
  • webNavigation.onCreatedNavigationTarget event has been completed. This is fired when a new window or tab is created to be navigated to.
  • runtime.onMessageExternal event has been implemented. This is fired when a message is sent from another extension.

A number of other notable bugs were also fixed in this release.

Android

Notably in Firefox 54, basic tabs API support was landed for Android. The API support focuses on the parts of the API that make sense for Android, so some tab methods and events are deliberately not implemented.

This is an important API in its own right, but other key APIs did not have good support without it. By landing it, Android WebExtensions got much better webNavigation and webRequest support. This gives us a clear path to getting ad blockers, the most common extension type on Android.

Contributors

A big thank you to our contributors Rob Wu, Tomislav Jovanovic and Tushar Saini who helped out with this release.

The post WebExtensions in Firefox 54 appeared first on Mozilla Add-ons Blog.

Mozilla Add-ons BlogImprovements to add-on review communications

We recently made some improvements to our tools and processes to better support communication between add-on developers and reviewers.

Previously, when you submitted an add-on to addons.mozilla.org (AMO) and a reviewer emailed you, your replies went to a mailing list (amo-editors AT mozilla DOT org) where a few reviewers (mostly admins) handled every response. This approach had some flaws—it put the burden of replying on very few people, who first had to get familiar with the add-on code and previous review actions. Further replies from either party went to the mailing list only, rather than being fed back into the review tools on AMO. These flaws slowed things down unnecessarily and contributed to information clutter.

Now, add-on developers can choose to reply to a review by email—like they’re used to—or from the Manage Status & Versions page of the add-on in the developer hub. Replies are picked up by AMO and displayed in the review history for reviewers and developers. In addition, everyone involved in the review of the particular version will be notified by email. Admin reviewers will make sure all inquiries are followed up on.

This long-anticipated feature will not only make follow-ups for reviews more efficient for both developers and reviewers, it also makes upcoming reviews easier by having all information in the same place.

The mailing list (amo-editors AT mozilla DOT org) will be discontinued shortly, so we ask all developers to use this system instead. For other questions not related to a particular review, please send a message to amo-admins AT mozilla DOT org.

The Add-on Review team would like to thank Andrew Williamson for implementing this new feature and our QA team for testing it!

The post Improvements to add-on review communications appeared first on Mozilla Add-ons Blog.

Mozilla Add-ons BlogOffice Hours Support for Transitioning and Porting to WebExtensions

To help facilitate more mutual support among developers migrating and porting to WebExtensions, we asked the add-on developer community to sign up for blocks of time when they can be available to assist each other. This week, we published the schedule, which shows you the days and hours (in your time zone) when people are available to answer questions in IRC and the add-on forum. Each volunteer helper has indicated their specialties, so you can find the people who are most likely able to help you.

If you’d like to get help asynchronously, you can join and email the webextensions-support [at] mozilla [dot] org mailing list, where more people are on hand to answer questions.

If you have any knowledge in or expertise with add-ons, please sign up to help! Just go to the etherpad and add your IRC handle, times you’re available, and your specialties, and we’ll add you to the schedule. Or, join the mailing list to help out at any time.

The post Office Hours Support for Transitioning and Porting to WebExtensions appeared first on Mozilla Add-ons Blog.

Air MozillaEqual Ratings Conference Demo Day Presentations 3.09.17

Equal Ratings Conference Demo Day Presentations 3.09.17 We Believe in Equal Rating Mozilla seeks to make the full range of the Internet's extraordinary power and innovative potential available to all. We advocate...

Air MozillaDenise Graveline on Graceful ways with Q & A

Denise Graveline on Graceful ways with Q & A Some speakers love Q & A. Others dread it. No matter which group you are in, this session will share tips for how to plan...

Air MozillaEqual Ratings Conference Judges' Panel Discussion 3.09.17

Equal Ratings Conference Judges' Panel Discussion 3.09.17 We Believe in Equal Rating Mozilla seeks to make the full range of the Internet's extraordinary power and innovative potential available to all. We advocate...

Air MozillaReps Weekly Meeting Mar. 09, 2017

Reps Weekly Meeting Mar. 09, 2017 This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Air MozillaEqual Ratings Conference AM Session 3.09.17

Equal Ratings Conference AM Session 3.09.17 We Believe in Equal Rating Mozilla seeks to make the full range of the Internet's extraordinary power and innovative potential available to all. We advocate...

Air MozillaThe Joy of Coding - Episode 94

The Joy of Coding - Episode 94 mconley livehacks on real Firefox bugs while thinking aloud.

Air MozillaWeekly SUMO Community Meeting Mar. 08, 2017

Weekly SUMO Community Meeting Mar. 08, 2017 This is the sumo weekly call

Open Policy & AdvocacyMozilla statement on CIA / WikiLeaks

Today, the organization WikiLeaks published a compendium of information alleged to be documents from the U.S. Central Intelligence Agency (CIA) pertaining to tools and techniques to compromise the security of mobile phones, computers, and internet-connected devices. We released the following statement on these reports:

If the information released in today’s reports is accurate, then it proves the CIA is undermining the security of the internet – and so is Wikileaks. We’ve said before that cybersecurity is a shared responsibility, and this is true in this example, regarding the disclosure of security vulnerabilities. It appears that neither the CIA nor Wikileaks are living up to that standard – the CIA seems to be stockpiling vulnerabilities, and Wikileaks seems to be using that trove for shock value rather than coordinating disclosure to the affected companies to give them a chance to fix the flaws and protect users.

The government may have legitimate intelligence or law enforcement reasons for delaying disclosure of vulnerabilities (for example, to enable lawful hacking), but these same vulnerabilities can endanger the security of billions of people. These two interests must be balanced, and recent incidents demonstrate just how easily stockpiling vulnerabilities can go awry without proper policies and procedures in place.

Once governments become aware of a security vulnerability, they have a responsibility to consider how and when (not whether) to disclose the vulnerability to the affected company so they can fix the problem and protect users.

We have been advocating for broader, open conversations about disclosure of security vulnerabilities and although today’s disclosures are jarring, we hope this raises awareness of the severity of these issues and the urgency of collaborating on reforms.

The post Mozilla statement on CIA / WikiLeaks appeared first on Open Policy & Advocacy.

Air MozillaWebdev Extravaganza: March 2017

Webdev Extravaganza: March 2017 Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on. This...

hacks.mozilla.orgFirefox 52: Introducing Web Assembly, CSS Grid and the Grid Inspector

Introduction

It is definitely an exciting time in the evolution of the web with the adoption of new standards, performance gains, better features for designers, and new tooling. Firefox 52 represents the fruition of a number of features that have been in progress for several years. While many of these will continue to evolve and improve, there’s plenty to celebrate in today’s release of Firefox.

In this article, the Developer Relations team covers some of the most innovative features to land, including WebAssembly, CSS Grid, the CSS Grid Inspector Tool, an improved Responsive Design Mode, and Async and Await support for JavaScript.

WebAssembly breaks barriers between web and native

Firefox 52 supports WebAssembly, a new format for safe, portable, and efficient binary programs on the Web. As an emerging open standard developed by Mozilla, Google, Microsoft, and Apple, WebAssembly will eventually run everywhere that JavaScript does: in every major browser, and in browser-derived runtimes like Node.js and Electron. WebAssembly is designed to be ubiquitous.

Compilers like Mozilla’s Emscripten can target the WebAssembly virtual architecture, making it possible to run portable C/C++ on the web at near-native speeds. In addition to C/C++, the Rust programming language has preliminary support for WebAssembly, and LLVM itself includes an experimental WebAssembly backend. We expect many other languages to add support for WebAssembly in the coming years.

Via Emscripten, WebAssembly makes it straightforward to port entire games and native applications to the Web, but it can also do much more. Thanks to its speed and easy interoperability with JavaScript, tasks which were previously too demanding or impractical for the web are now within reach.

(You can click here to play the full Zen Garden demo. Firefox 52 required, desktop only at this time.)

JavaScript functions can call WebAssembly functions, and vice versa. This makes it possible to mix-and-match between high-level JavaScript and low-level C/C++/Rust within a single web application. Developers can reuse WebAssembly modules without needing to understand their internals, much as they do today with minified JavaScript libraries.
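
As a minimal sketch of that interoperability, JavaScript can fetch, instantiate, and call into a WebAssembly module (add.wasm and its exported add() function are hypothetical):

fetch('add.wasm')
  .then(response => response.arrayBuffer())
  .then(bytes => WebAssembly.instantiate(bytes, {}))
  .then(({ instance }) => {
    console.log(instance.exports.add(2, 3)); // logs 5
  });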

In areas where consistent performance is most important—games, audio/video manipulation, data analysis, raw computation, codecs, etc.—WebAssembly offers clear advantages in size and speed. For this reason, we expect that many popular libraries and frameworks will eventually come to rely on WebAssembly, either directly or indirectly.

In terms of code reuse and software architecture, the wall between “native” and the web is falling, and this is just the beginning. Tooling and debugging will continue to improve, as will interoperability, performance, and raw capabilities. For example, multi-threading and SIMD are already on the WebAssembly Roadmap.

Get started with WebAssembly on MDN, and find the latest information direct from the creators of WebAssembly at WebAssembly.org.

CSS Grid and the Grid Inspector

Firefox 52 includes support for CSS Grid Layout Module Level 1, a CSS specification that defines 18 new CSS properties. CSS Grid is a two-dimensional layout system for the web that makes it much easier to code, natively in the browser, many of the layout patterns we’ve been solving for years with grid frameworks. And it opens up a world of new possibilities for graphic design. Whether you are focusing on user interfaces for app experiences or the editorial design of content, there’s a lot of power for you in this new toolset.

CSS Grid works by defining rows and columns, and placing items into areas on the grid. The rows and columns can be given a specific size (fixed, fluid, or a mix), or they can be defined to resize themselves depending on the size of content. Items on the grid can be explicitly placed in CSS, or they might be placed by the browser according to the Grid auto-placement algorithm. These sizing and placement options give CSS Grid more power and flexibility than any of the existing layout frameworks. In addition, the ability to define and place things in rows is completely new.
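
As a minimal sketch (the class names are illustrative), a grid with fixed side columns, a fluid middle column, and an explicitly placed header might look like this:

.container {
  display: grid;
  grid-template-columns: 200px 1fr 200px; /* fixed sides, fluid middle */
  grid-template-rows: 100px auto;
  grid-gap: 10px;
}

.header {
  grid-column: 1 / 4; /* span all three columns of the first row */
  grid-row: 1;
}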

"variations on a grid" screenshot

We are also proud to announce our new Grid Inspector Tool, which allows you to see the grid lines directly on the page, making it easier to understand what’s happening.

See the examples from this video at labs.jensimmons.com/2017/01-003.html. And find a playground of more examples at labs.jensimmons.com.

Interested in learning Grid? We have in-depth guides on MDN.

Here are answers to the two most frequently asked questions about CSS Grid:

Should I use Grid or Flexbox? Which is better?

You’ll use both, mixing CSS Grid with Flexbox and the other CSS Properties that affect layout (floats, margins, multicolumn, writing modes, inline block, etc.). It’s not a choose-only-one situation. Grid is the right tool when you want to control sizing and alignment in two dimensions. Flexbox is the right tool when you are only concerned with controlling one dimension. Most projects will use both, each on a different little piece of the page. Once you understand the differences between the two, it’ll be clear how they work together brilliantly.

Why should I get excited about CSS Grid now? Won’t it take years before Grid is supported in enough browsers to be able to use it?

Because of changes to the way browser companies work together to create new CSS, wide support for CSS Grid will arrive at an unprecedented speed. Mozilla is shipping support first, in Firefox 52 on March 7th. Chrome 57 will support Grid a week later, on March 14. Safari 10.1 will ship support for Grid; it’s currently in beta. Internet Explorer 10 and 11 already have support for a much earlier version of the specification, behind an -ms prefix. (You may want to utilize it, or you may not. Learn about the details before deciding.) MS Edge also has current support for the original draft of the spec, with an update to the current spec coming sometime in the future.

You can ship websites that use CSS Grid today, before 100% of your users have a browser with CSS Grid, by thinking through the structure of your code and planning for what happens in all browsers. Feature Queries are a key tool for making sure all users have a good experience on your site.
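
A minimal sketch of that approach: wrap your grid styles in a Feature Query so that browsers without Grid support keep the fallback layout.

/* Only applied in browsers that support CSS Grid. */
@supports (display: grid) {
  .container {
    display: grid;
    grid-template-columns: repeat(3, 1fr);
  }
}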

Async functions and the await keyword

Firefox 52 also includes a brand new JavaScript feature from ES2017: asynchronous functions and their companion, the await operator. Async functions build on top of ES2015 Promises, allowing authors to write asynchronous code in a similar way to how they would write their synchronous equivalents.

Take the following example, which takes the result of one asynchronous request, and uses part of it as the argument to a second asynchronous function. Here’s how it would look with a traditional callback approach:

function loadProfile() {
  getUser(function (err, user) {
    if (err) {
      handleError(err);
    } else {
      getProfile(user.id, function (err, profile) {
        if (err) {
          handleError(err);
        } else {
          displayProfile(profile);
        }
      });
    }
  });
}

Relatively straightforward, but if we needed to do additional processing and more asynchronous requests, the levels of nesting or series of callback functions could become difficult to manage. Also, with more complex callback sequences, it can become hard to determine the flow of the code, which complicates debugging.

Promises, introduced in ES2015, allow for a more compact representation of the same flow:

function loadProfile() {
  getUser()
    .then(function (user) {
      return getProfile(user.id);
    })
    .then(displayProfile)
    .catch(handleError);
}

Promises excel at simplifying these sequential asynchronous calls. In this example, instead of passing a callback to getUser and getProfile, the functions now return a Promise, which is resolved when the function’s result is available. However, when additional processing or conditional calls are required, the nesting can still become quite deep and control flow can again be hard to follow.

Async functions allow us to rewrite the example to resemble the way we would write a synchronous equivalent, without blocking the thread the way the synchronous code would:

async function loadProfile() {
  try {
    let user = await getUser();
    displayProfile(await getProfile(user.id));    
  } catch (err) {
    handleError(err);
  }
}

The async keyword in front of the function tells the JS engine that the following function can be paused by asynchronous requests, and that the result of the function will be a Promise. Each time we need to wait for an asynchronous result, we use the await keyword. This pauses execution of the function without stopping other functions from running. Also, getUser and getProfile don’t need to be changed from how they were written in the Promise example.

Async functions aren’t a cure-all for complex control flow, but in many cases they can simplify the authoring and maintenance of async code without importing costly libraries. To learn more, see the async and await MDN documentation.

Responsive Design Mode

In addition to the Grid Inspector described above, Firefox now includes an improved Responsive Design Mode (RDM). The improved RDM tool can do network throttling, simulating various connection speeds primarily experienced by mobile users. In addition, various screen size and pixel density simulations are available for common devices. Many of the features were described in an earlier post introducing RDM. Currently this feature is only enabled if e10s is also enabled in the browser. Be sure to read over the complete documentation for RDM on MDN.

More Firefox 52 goodness

These are some highlights of the game-changing features we’ve brought to the browser in Firefox 52. To see a detailed list of all release changes, including a feature for identifying auto-generated whitespace and the ability to detect insecure password fields, see the Firefox 52 release notes.

The Mozilla BlogLots new in Firefox, including “game-changing” support for WebAssembly

Today’s release of Firefox introduces great new features, making the browser more powerful, convenient, and secure across all your devices.

WebAssembly enables near-native performance for games and apps

Firefox has a rich history of giving the web new and amazing capabilities. Along these lines, I’m proud to announce that Firefox is the first browser to support WebAssembly, an emerging standard inspired by a Mozilla research project. WebAssembly allows complex apps, like games, to run faster than ever before in a web browser. We expect that WebAssembly will enable applications that have historically been too complex to run fast in browsers like immersive 3D video games, computer-aided design, video and image editing, and scientific visualization. We also expect that developers will use WebAssembly to speed up many existing web apps.

To learn more about WebAssembly, see David Bryant’s post, and watch this video.

Easier connections to Wi-Fi hotspots with captive portal detection

If you’ve ever had trouble connecting to hotel wi-fi, it’s likely because you had to sign in to a “captive portal”. These captive portals are often problematic because the login page itself is hard to discover if the operating system doesn’t detect it. Very often, you try to navigate to a website and end up with an error.

With today’s release, Firefox now automatically detects captive portals and notifies you about the need to log in. Additionally, after Firefox detects a captive portal, it replaces certificate error pages with a message encouraging you to log in.

Firefox warns you about insecure logins

To help keep you safer on the internet, we’re building upon Firefox’s new warning in the address bar. Firefox now shows an in-context alert if you click into a username or password field on a page that isn’t encrypted with HTTPS.

There’s quite a bit more in this release. Web designers and developers may be particularly interested in CSS Grid, and today we’re shipping the only Grid Inspector developer tool on any major browser. You can learn more about CSS Grid and developer tools you’ll only get from Firefox on the Hacks blog.

Also with this release, Firefox has improved security and performance by disabling all plugins that use the Netscape Plugin API (NPAPI) besides Flash. Later this year we’ll further improve Firefox so that Flash content is only activated with user consent.

We hope you enjoy the new release, and would love your feedback.

The post Lots new in Firefox, including “game-changing” support for WebAssembly appeared first on The Mozilla Blog.

QMOFirefox 53.0 Aurora Testday Results

Hello Mozillians!

As you may already know, last Friday – March 3rd – we held a new Testday event, for Firefox Aurora 53.0a2.

Thank you all for helping us make Mozilla a better place – Iryna Thompson.

From Bangladesh team: Tanvir Rahman, Kazi Nuzhat Tasnem, Saheda Reza Antora, Sabrina Joedder Silva, Maruf Rahman, Md.Majedul Islam, Anmona Mamun Monisha, Nazir Ahmed Sabbir, Sajedul Islam, Rezwana Islam Ria, Humayra Khanum, Forhad Hossain, আল-যুনায়েদ ইসলাম ব্রোহী, Abid Rahman, Roman Syed, Niaz Bhuiyan Asif, Asif Mahmud Rony, Touhidul islam Chayan.

From India team: Monesh, Subash, Rajesh, Rohit R, Pavithra.R, varun1102.

Results:

  • Several test cases executed for the WebM Alpha, Reader Mode Displays Estimate Reading Time, and Quantum – Compositor Process features.
  • 6 bugs verified: 1323713, 1316225, 1196153, 1332595, 1326837, 1327731
  • 6 new bugs filed: 1344500, 1344271, 1344494, 1344495, 1344311, 1344325

Thanks for another successful testday 🙂

We hope to see you all in our next events, all the details will be posted on QMO!

The Mozilla BlogA $2 Million Prize for Building a More Accessible Internet

The National Science Foundation is sponsoring two challenges powered by Mozilla. Our goal: support big ideas that keep the web accessible, decentralized and resilient

 

The Internet can help a young girl in Chicago’s South Side learn how to write JavaScript. It can also keep citizens connected during a time of crisis or disaster.

But only if the Internet works as intended.

The Internet should be a public resource open and accessible to all. And to many, it is. But many people still lack reliable, affordable Internet access. And the underlying network itself is increasingly centralized, relying on infrastructure provided by a tiny handful of companies. We don’t have a failsafe if the infrastructure these companies offer is blocked or goes down.

These are significant issues. Mozilla and the National Science Foundation are committed to finding solutions by supporting bright people and big ideas across the U.S.

Today, Mozilla is announcing the National Science Foundation-sponsored Wireless Innovation for a Networked Society (WINS) Challenges: two U.S.-based competitions with $1 million in prize money each.

The goal: support creative, open-source ideas for making the Internet more accessible, decentralized and resilient. The challenges seek prototypes and designs that either a) provide connectivity during disasters or b) connect the unconnected.

Mozilla believes in the power of collaborative solutions to tackle big issues. Running open challenges has proven to be an effective instrument — not only to identify a broader set of solutions, but also to broaden the dialogue around these issues, to build new communities of problem-solvers and to strengthen the global network of people working toward a healthier Internet.

The program will begin accepting submissions in June 2017 through our soon-to-launch website, and will culminate in fall 2018. You can sign up for related challenge events below, or email wirelesschallenge@mozillafoundation.org for more information.

The challenges

Off-the-Grid Internet Challenge. When disasters like earthquakes and hurricanes strike, communications networks are among the first pieces of critical infrastructure to overload or fail. How can we leverage both the Internet’s decentralized design and current wireless technology to keep people connected to each other — and vital messaging and map services — in the aftermath of a disaster?

Challenge applicants will be expected to design both the means to access the wireless network (i.e. hardware) and the applications provided on top of that network (i.e. software). Projects should be portable, easy to power and simple to access.

Here’s an example: A backpack containing a hard drive is wired to a computer, battery and Wi-Fi router. The router provides access, via a Wi-Fi network, to resources on the hard drive like maps and messaging applications.

Smart Community Networks Challenge. Many communities across the U.S. lack reliable Internet access. Sometimes commercial providers don’t supply affordable access; sometimes a particular community is too isolated; sometimes the speed and quality of access is too slow. How can we leverage existing infrastructure — physical or network — to provide high-quality wireless connectivity to communities in need?

Challenge applicants should account for a high density of users, far-reaching range and robust bandwidth. Projects should also aim to have a minimal physical footprint and uphold users’ privacy and security.

Here’s an example: A neighborhood wireless network where the nodes are housed in, and draw power from, disused phone booths or similarly underutilized infrastructure.

The details

These challenges are open to a range of participants: individuals, teams, nonprofits and for-profits. Applicants might be academics researching wireless networking; technology activists catalyzing local infrastructure projects; entrepreneurs and innovators developing practical solutions for people who need (better) access; makers aiming to have an impact locally; or students and educators exploring networks and community activism.

The challenges consist of two stages. First is the Design Concept Stage, for ideas that have been thoroughly researched and designed. Second is the Working Prototype Stage, for projects ready to prototype and demo (applicants must complete the Design Concept Stage in order to advance to the Working Prototype Stage). Applicants have opportunities to win prize money during both stages.

Judges selected by Mozilla with input from the NSF will select winners. Judges — experts from academia, NGOs and the business world with expertise in technology, research and community activism — will assess projects based on a range of criteria, like creativity, affordability, social impact and adaptability.

This spring, Mozilla will host a series of events — from Raleigh to Oakland to Boulder to NYC — to foster collaboration and attract applications. Launch events will be free and open to the public.

The events (click on the links below to register)

March 25th // Raleigh, NC

April 9th // Oakland, CA

April 15th // Boulder, CO

April 22nd // New York, NY

 

The timeline

October 2017: Design Concept submission deadline

Fall 2017: Design Concept winners announced

May 2018: Working Prototype submission deadline

Summer 2018: Working Prototype winners announced

 

The prizes

[Design Concept Stage]

  • $60,000 – First Place
  • $40,000 – Second Place
  • $30,000 – Third Place
  • $10,000 – 7 Honorable Mention Awards

[Working Prototype Stage]

  • $400,000 – First Place
  • $250,000 – Second Place
  • $100,000 – Third Place
  • $50,000 – Fourth Place

The post A $2 Million Prize for Building a More Accessible Internet appeared first on The Mozilla Blog.

Mozilla L10NFirefox L10n Report – Aurora 54

Here’s an outline of what is currently in Aurora this cycle for Firefox 54.

Current Aurora Cycle – Firefox 54

Key dates for this cycle:

  • Beta (53): localization updates for already shipping locales must be completed before 5 April.
  • Aurora (54): localization updates must be completed before 17 April. That’s the Monday, also known as merge day, before the next release of Firefox.

String breakdown:

  • Firefox Aurora desktop has 179 added strings (102 obsolete). About 35% of the new strings are for Developer Tools.
  • Fennec Aurora has 44 new strings (28 obsolete). 4 new strings are Fennec-only (in /mobile).

There are currently no pending requests to uplift patches with strings to Aurora.

For further details on the new features you can check the release notes (they’re usually published a few days after release):

Noteworthy Changes Available in Aurora

These are some of the interesting changes introduced in the last cycle.

Browser

Several strings were updated to use Title Case. The string IDs weren’t changed in this case, so you won’t notice the change in Pontoon, but you will need to confirm the strings in Pootle.

Changeset: https://hg.mozilla.org/releases/mozilla-aurora/rev/ab540f0d551b


Several strings related to the legacy Sync code were removed in bug 1296767. Six completely obsolete files were automatically removed as part of merge day.


In the last couple of cycles, some strings landed in preferences for managing Site Data. To see this section in Preferences (at the bottom of Advanced -> Network), you need to enable (set to “true”) both of these keys in about:config:

  • browser.storageManager.enabled
  • dom.storageManager.enabled

The functionality is still hard to test, since no websites using this feature are available for testing.

Devtools

There’s currently no support for plural strings in Debugger. A bug is already on file; in the meantime, the only solution available is to reorder the string to avoid associating the number with a noun.

New Languages

Urdu (ur) is riding the train to release with Firefox 53. It’s great to have another RTL language available for our desktop users.

We currently have 4 other locales working on Firefox desktop, and we really look forward to releasing them in the next versions of Firefox:

  • Latgalian (ltg)
  • Burmese (my)
  • Nepali (ne-NP)
  • Tagalog (tl)

Speaking of RTL languages, all four of them (Arabic, Persian, Hebrew, Urdu) are now enabled on beta for Firefox for Android, and will be officially released with Firefox for Android 53.

If you want to know more about the process of releasing new locales, or if you speak one of these languages and want to know how to help the localization teams, please get in touch with us.

To all localizers: Thanks again for all the time and effort you put in localizing and promoting Firefox in your language.

The Mozilla BlogMozilla Statement on Immigration Executive Order

Although today’s order was presented as a new Executive Order on immigration, the few changes in it – including allowing exceptions for current visa holders and permanent residents – fundamentally fail to address the issues we had with the previous order. A month may have passed, but it seems clear that little (if any) progress was made on the thinking behind this action.

We are against this Executive Order for the same reasons we opposed its (largely identical) predecessor. As a tech company, and to fulfill our mission to protect and advance the internet as a global public resource, we believe ideas and innovations must flow freely across borders. This order fails to meet that standard for many reasons:

  • It damages Mozilla, the United States, and the global technology industry.
  • It undermines trust in U.S. immigration law.
  • It sets a dangerous precedent that poses risks to international cooperation, including those required to sustain the health of the internet.
  • It is fundamentally misplaced and misguided as a reaction to its ostensible target of protecting national security.

These restrictions are significant and have had a negative impact on Mozilla and our operations, especially as a mission-based organization and global community with international scope and influence over the health of the internet.

The ability for individuals, and the ideas and expertise they carry with them, to travel across borders is central to the creation of the technologies and standards that power the open internet. We will continue to fight for more trust and transparency across organizations and borders to help protect the health of the internet and to nurture the innovation needed to advance the internet.

The post Mozilla Statement on Immigration Executive Order appeared first on The Mozilla Blog.

Air MozillaMozilla Weekly Project Meeting, 06 Mar 2017

Mozilla Weekly Project Meeting The Monday Project Meeting

Mozilla Add-ons BlogContribution Opportunity: Mobile Redesign Testing

Calling all testers!

On March 9, a new look will be coming to addons.mozilla.org (AMO) for Android. This redesign will feature a cleaner, more user-friendly add-ons store on Android devices and tablets. We would love your help to track down any remaining bugs.

If you have access to an Android phone and a passion for bug-hunting, we encourage you to look at the instructions on this etherpad and the Contributors Guide and start testing. No prior testing experience is required to contribute. Please be sure to record your name and any bugs you found using the etherpad. After the release on March 9, we still welcome you to file any bugs you see!

If you have any questions or would like to talk to your fellow bug hunters during redesign testing, join the #amo channel at irc.mozilla.org.

Happy testing!

The post Contribution Opportunity: Mobile Redesign Testing appeared first on Mozilla Add-ons Blog.

hacks.mozilla.orgPreviewing the WebAssembly Explorer

WebAssembly is a new, cross-browser format for programs on the Web. You can read all about it in Lin Clark’s six-part series, A cartoon intro to WebAssembly. Unlike JavaScript, WebAssembly is a binary format, which means developers need new tools to help understand and experiment with WebAssembly. One such tool is Mozilla’s WebAssembly Explorer.

The video below demonstrates the basic functions of the WebAssembly Explorer, which lets developers type in simple C or C++ programs and compile them to WebAssembly.

One advantage of WebAssembly—and of the WebAssembly Explorer—is that developers can see exactly what optimizations are being applied to their code. For example, the WebAssembly compiler in the video is able to use C’s type information to automatically select between traditional division and a more efficient bit-shifting shortcut. With JavaScript, a browser’s JIT compiler may eventually arrive at the same optimization, but there are no guarantees. Ahead-of-time compilation also avoids the profiling and observational overhead associated with opportunistic JIT compilers.
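
For instance, here is the kind of snippet you might paste into the Explorer to see that optimization in action (a hypothetical example): because the operand is unsigned, the compiler can emit a right shift instead of a full division instruction.

unsigned int divide_by_16(unsigned int x) {
  return x / 16; /* may compile to a shift: x >> 4 */
}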

While the WebAssembly Explorer is a great learning tool, it’s still in early development and not yet suitable for complex programs. Developers who need a production-grade compiler suite should look to Emscripten, which was originally written to output asm.js but has now been extended to produce WebAssembly as well.

You can find the WebAssembly Explorer’s source code on GitHub, and you can begin experimenting with WebAssembly when it lands in Firefox 52 later this week.

The Mozilla BlogMozilla and BrowserStack Partner to Drive Mobile Testing on Real Devices

At Mozilla a fundamental part of our beliefs is that all websites should work equally well across all browsers and all devices. The Internet should just work everywhere, flawlessly, with no questions asked. We’re therefore really happy that, as of this week, the BrowserStack team is launching a mobile test capability for Firefox browser products and a unique offering – one year of free testing on Firefox mobile browsers on BrowserStack’s Real Device Cloud. In addition, developers can test Firefox browsers on different desktop operating systems for free for 30 days.

We know that today the majority of web content consumption and activity is on mobile. That’s what makes BrowserStack’s new Firefox test capability so important for web developers trying to build web compatible mobile sites. And helping developers be more successful with their sites is great for users too, and for Mozilla.

All of us have experienced badly broken websites. Pull-down menus that don’t work, overlapping text or borders, submit buttons that don’t submit, forms that are invisible: all are common symptoms of incompatibility. What do you do when you hit a broken website? You usually leave it for another website, and if the problem is frustrating enough you might even try a different browser.

On mobile these problems of web compatibility are even more challenging because of the multitude of devices we all use. Developers building sites for the mobile web must make sure their code works across hundreds of types of devices with different screen sizes, display densities, and many more variables.

We know there are many reasons why the web breaks. There are numerous standards, implemented differently by the browser makers. For users, telling whether a site is broken or is simply a function of bad user experience design on the limited screen real estate of a phone is nearly impossible.

Today, even ensuring mobile website compatibility and equal functionality across the major desktop browsers – Firefox, Chrome, and Edge – and the major mobile operating systems – iOS and Android – requires many hours of extra effort. We recognize that to build mobile sites that just work on all browsers, developers need a solid mobile test environment.

By partnering with BrowserStack, we at Mozilla are aiming to provide developers with an easy, free way to test their content everywhere, especially on Firefox, so that they can deliver quality experiences on any device and browser combination.

Developers that use BrowserStack to test their site code for web compatibility include many well-known web properties such as Microsoft, AirBnB, and MasterCard. Mozilla will be using BrowserStack as part of our ongoing efforts to grow usage of Firefox mobile, both by using it to test our own properties and as part of our evangelism efforts to identify and help fix incompatible sites out on the web. BrowserStack’s compatibility testing tools will make it much easier for our internal engineering teams to identify compatibility problems in the wild and more quickly fix them.

This is only the first step in our partnership. In Q2 we will announce testing on Firefox for iOS. We will continue to work closely with BrowserStack to get the word out to the developer community. Our teams at Mozilla Developer Network (MDN) will provide additional documentation on how to use BrowserStack to test sites for compatibility on Firefox. And on our Mozilla Developer Roadshow (hopefully coming to a city near you soon), we will have our compatibility experts explaining how you can benefit from testing Firefox on Mobile with BrowserStack. The best part about this is truly everyone benefits. Developers can more quickly find and fix compatibility issues for Firefox on mobile devices. Users on Firefox mobile will get a better web experience as more and more developers comprehensively test their site code.

The post Mozilla and BrowserStack Partner to Drive Mobile Testing on Real Devices appeared first on The Mozilla Blog.

about:communityFirefox 52 new contributors

With the release of Firefox 52, we are pleased to welcome the 50 developers who contributed their first code change to Firefox in this release, 45 of whom were brand new volunteers! Please join us in thanking each of these diligent and enthusiastic individuals, and take a look at their contributions:

  • 166291: 1310835
  • adamtomasv: 1308600, 1312173, 1313565
  • asppsa: 1304019
  • cody_tran95: 1301212, 1301214, 1301223
  • p23keshav: 1314158
  • patrickkerry.pei: 1264929
  • psnyde2: 1315438
  • u579587: 1287622
  • vinayakagarwal6996: 1304097, 1304167
  • Aaron: 1304310
  • Ajay: 1303708, 1304735
  • Alin Selagea: 1315690
  • Amal Santhosh: 1303356
  • Brian Stack: 1275774, 1304180
  • Chirag: 1296490
  • Dave House: 1307904
  • David Malaschonok: 926579
  • Dhanesh Sabane (UTC+5:30): 1308137
  • Dzmitry Malyshau: 1322169
  • Eden Chuang: 1287664
  • Enes Göktaş: 1302855, 1303227, 1303236
  • Francesco Pischedda: 1280573, 1285555, 1291687
  • Fuqiao Xue: 1111599, 1288577
  • Haard Panchal: 1307771
  • Heikki Toivonen: 209637
  • Horatiu Lazu: 1292299
  • Julia Friesel: 1256932
  • Kaffeekaethe : 1256887, 1307676, 1308931
  • Kanika Narang: 1302950
  • Kirti Singla: 1301627
  • Laszlo Ersek: 1304962
  • Leandro Manica: 1306296
  • Manuel Grießmayr: 1311783
  • Mark Golbeck: 1091592
  • Mark Smith: 1308275
  • Matthew Spencer: 1293704
  • Max Liu: 1312719
  • MikeLing: 1287018
  • Nevin Chen: 1310621
  • Petr Gazarov: 1300944
  • Petr Sumbera: 1309157, 1309246, 1315956
  • Robin Templeton: 1316230
  • Samriddhi: 1303682
  • Saurabh Singhal: 1278275
  • Sourav Garg: 1311343, 1311349
  • Umesh Panchaksharaiah: 1301629
  • Vincent Lequertier: 1299723, 1301351, 1304426
  • Will Wang: 1255977
  • William CS: 1295000
  • Yen Chi-Hsuan (UTC+8): 1143421
  • katecastellano: 1256941
Air MozillaRust Meetup March 2017

Rust Meetup March 2017 Bay Area Rust Meetup March 2017

Air MozillaCatt Small on The Full Story: Presenting Complete Ideas

Catt Small on The Full Story: Presenting Complete Ideas Telling a cohesive story is one of the hardest parts of public speaking. Many fledgling speakers find it challenging to string concepts together in an...

Air MozillaEU-Urheberrechtsreform erklärt: 2 March 2017

EU-Urheberrechtsreform erklärt (EU copyright reform explained): We are recording an event at the Wikimedia Germany office in Berlin on March 2nd, 2017 and will upload the recording using this...

hacks.mozilla.orgContainers Come to Test Pilot

Containers Experiment UI

The Containers feature in Firefox Nightly gives users the ability to place barriers on the flow of data across sites by isolating cookies, indexedDB, localStorage, and caches within discrete browsing contexts. If you’re interested in the history and technology behind Containers, please read this blog post outlining the rationale for the Nightly implementation.

While the feature has garnered positive notice among our Nightly audience, there remain outstanding questions about the user experience that suggest the need for further exploration.

After running the Containers UI through successive rounds of user research and UX iteration, we are happy to announce that we’ve launched a Containers experiment in Firefox Test Pilot in order to widen the audience exposed to the feature, iterate on the UI, and reason about the future of the feature.

Containers UI on Test Pilot

The road to Test Pilot

Tanvi’s above-mentioned post introducing Containers explores the complexity of contextual identity on the web. She points out that people may wish to represent themselves differently in different browsing contexts: for example, while browsing social media versus doing research about a medical condition.

Today, browsers don’t do a great job of respecting contextual boundaries. We know from user research that Firefox users make do with a variety of ad hoc tools such as private browsing, multiple profiles, or multiple browsers to manage and protect their online contexts. The Containers experiment provides a tool that’s specifically designed to address context on the web.

The difficulty with Containers is that the UI and UX proposed by the feature are more-or-less unique among browsers. This presents some challenge for shipping to a general audience. Will users get it? Will the UI be sensible, and will the security and privacy story behind the Containers feature match users’ mental models?

We’ve conducted user research on Nightly Containers using a think aloud protocol, and our provisional answer to these questions has been a resounding kinda. We found, for example, that many users are more concerned with local threats (a snooping roommate or boss, for example) than your average security engineer. We also found that some research participants who totally missed the privacy features saw a lot of upside in containers as a strictly organizational tool. With these perspectives in mind, we decided that Test Pilot would be a great platform to expose Containers to a broader audience while continuing to learn more about user perceptions of the feature.

Firefox Test Pilot is a platform that lets us test potential new Firefox features while getting quantitative and qualitative feedback from participants. If you’re interested in the overall process and goals of Test Pilot, you can read more about it here. With the Containers experiment, we hope to answer the following:

  • Is the security model intelligible to Test Pilot users? How do they understand the feature?
  • Is the feature useful? If so, how much do people use it, and are there specific use cases that are particularly appealing?
  • Which container types do people use? Do people create custom containers?
  • Do containers keep people from opening a different browser to perform specific tasks?

How does the Test Pilot experiment differ from Containers in Nightly?

As with all experiments in Test Pilot, we’ve built an onboarding flow to give the uninitiated an introduction to the experiment. In addition to the normal Test Pilot onboarding that’s standard across all experiments, we’ve added a few extra steps to the Containers experiment itself to introduce the unfamiliar UI.

Test Pilot onboarding for Containers

In response to user feedback about task management, Test Pilot Containers also introduces some organizational and visibility improvements over the Nightly version. Container management is moved to a toolbar button from which users can sort, hide, rename, create, and delete Containers. To aid in discoverability, users can now create new Container tabs by hovering over the new tab button.

Behind the scenes, the Containers experiment sends Telemetry data back to Test Pilot, so that we can learn more about users’ experiences with Containers. As with all Test Pilot experiments, users will be able to submit qualitative feedback in the form of ratings and survey responses about their experiences.

Most of the above covers the product rationale for Containers, but since this is Hacks, we should talk implementation as well. Like all Test Pilot experiments, Containers is shipped as an add-on signed and served from Test Pilot.

Containers require a special Firefox preference, so we started with an Embedded WebExtension to use the SDK preferences service and the WebExtension pageAction in tandem. During the development process, we learned that the contextualIdentities API that affords the underlying technology would not land in Firefox release in time for our experiment to ship.
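
For illustration, here is a minimal sketch of the contextualIdentities API that underpins the feature (it requires the "contextualIdentities" permission in manifest.json; the name, color, and icon values are hypothetical):

// Create a container, then open a tab inside it.
browser.contextualIdentities.create({
  name: "Work",
  color: "blue",
  icon: "briefcase"
}).then(identity => {
  return browser.tabs.create({ cookieStoreId: identity.cookieStoreId });
});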

To resolve this gap, we explored bundling the lower-level service as a WebExtension Experiment. However, WebExtension Experiments are only currently allowed in Nightly and Aurora. Since Test Pilot targets users across all channels, we needed a different solution. Thus, the experiment you see today in Test Pilot wound up as a mix of platform, SDK, and WebExtension code.

What is the security model provided by Containers?

The security enhancements of Containers are common to both the Nightly and Test Pilot versions, and are based on a modification to the browser’s Same Origin Policy (SOP).

The Same Origin Policy ensures that documents and data from distinct origins are isolated from each other. It is a critical browser security mechanism that prevents content from one site from being read or altered by another, potentially malicious site.

Containers work by adding an extra bit – a userContextId integer – to the normal (scheme, host, port) tuple that defines an origin. So, an origin is now defined as (userContextId, scheme, host, port). For example, when a user visits Gmail in a Work container tab, the browser performs the SOP check against (2, https, mail.google.com, 443). When the same user visits Gmail in a Personal container tab, the browser performs the SOP check against (1, https, mail.google.com, 443).
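
As a rough illustration (not Gecko’s actual code), the extended check can be pictured like this:

// The same-origin check extended with a userContextId, as described above.
function sameOrigin(a, b) {
  return a.userContextId === b.userContextId &&
         a.scheme === b.scheme &&
         a.host === b.host &&
         a.port === b.port;
}

const work     = { userContextId: 2, scheme: "https", host: "mail.google.com", port: 443 };
const personal = { userContextId: 1, scheme: "https", host: "mail.google.com", port: 443 };
console.log(sameOrigin(work, personal)); // false: different containers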

Containers separate cookies, localStorage, indexedDB, and cache data from each other and from the Default container in Firefox. So, when a user visits their email site in a Work container tab, the browser sets its cookies only in the Work container. If they then visit their email site in a Personal container, the origin that has their cookies doesn’t match and the user is therefore “signed out”.

Because cookies are not shared across containers, cookie-based attacks in one container are unsuccessful against cookies stored in another container. Similarly, cookie-based tracking only tracks a single container – it does not track the user’s entire browsing.

Many privacy and security mechanisms can be realized by including more keys in the origin check. Because of this, Gecko has added attributes to the origin called OriginAttributes. In addition to Containers, this allows us to implement features like Private Browsing Mode, First Party Isolation, and potentially the proposed Suborigins standard.

So what happens now?

Well, we wait and see. As users come into new Test Pilot experiments, they inevitably uncover bugs and request features. Our immediate task will be to resolve bugs and prioritize new feature concepts. We’ll continue to push releases to the Containers experiment while it’s in Test Pilot. In the meantime we’ll monitor both qualitative feedback from surveys and quantitative feedback from Telemetry to help us reason about the viability of the experiment and the prioritization of new features.

There is also ongoing work at the platform level to further separate History, Bookmarks, and TLS Certificate Security Exceptions data between Containers. Each of these presents its own UX, UI, and platform-level challenges.

In the long run, we will have to decide whether Containers makes it to release Firefox. Maybe the feature as we’ve built it for Test Pilot will prove to be a hit, or maybe we will need to go back to the drawing board. Maybe exposing the underlying APIs to WebExtensions kickstarts further add-on development around OriginAttributes. Shipping Containers in Test Pilot is the next step to help us make informed decisions about the future of Containers. If you’re interested in helping to shape that future, please check out the experiment today!

Air MozillaReps Weekly Meeting Mar. 02, 2017

Reps Weekly Meeting Mar. 02, 2017 This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Mozilla Add-ons BlogMarch’s Featured Add-ons

Pick of the Month: Dark YouTube Theme

by NiCU
Watch YouTube clips shrouded in darkness! Try YouTube with a dark periphery instead of the standard bright white.

“Better than I imagined.”

Featured: Turbo Download Manager

by InBasic
A robust multi-threading download manager; includes the option of closing the manager window without interrupting download flow.

“Okay so I’ve been using this downloader about two weeks and it really rocks. The speed is way better than the default downloader and I love how simple it is.”

Featured: Web Clipper: Easy Screenshot

by Jeremy Schomery
Super simple but effective screenshot extension. One click and the context menu offers multiple options, like capturing the entire page, just the visible area, or a selected portion.

“So light, simple, and perfect without useless frills.”

Featured: Clean Uninstall

by rNeomy
Automatically purge obsolete preferences (in pref.js) for add-ons you’ve uninstalled.

“This makes add-on management super clean!”

Featured: To Google Translate

by Juan Escobar
With a couple of clicks you can translate any text via Google Translate.

“A good program, but I hope program and add-on developers will write program details in Arabic, as they do for other languages.”

Nominate your favorite add-ons

Featured add-ons are selected by a community board made up of add-on developers, users, and fans. Board members change every six months. Here’s further information on AMO’s featured content policies.

If you’d like to nominate an add-on for featuring, please send it to amo-featured [at] mozilla [dot] org for the board’s consideration. We welcome you to submit your own add-on!

The post March’s Featured Add-ons appeared first on Mozilla Add-ons Blog.

    Mozilla L10NPontoon dashboard facelift

    At the end of last year we ran a user survey and transformed results into Pontoon roadmap for 2017. Since the top-voted feature (in-app notifications) was blocked by the runner-up (project priorities and deadlines), we started working on the latter. It’s now ready for you to consume.

Adding two columns for project priority and deadline to our dashboards shouldn’t be a big deal, but we also had other related requests to fulfil. Additionally, the dashboard code was in desperate need of a rewrite. So we ended up with the biggest changeset ever to land in Pontoon! A big thank you to jotes for his patience during the review process!

Now let’s have a closer look at some of the changes we’ve made. We’ll use the team page as an example and explain differences from other views along the way.

Greek Team Page

    Main Menu
Starting at the top, you’ll notice a simplified header with the Pontoon logo, links to the most popular views, and less frequent actions moved to the menu on the right. Note that Machinery was previously referred to as Terminology, but it’s the same old metasearch engine for translations.

    Heading
    The following section presents details of the current view, in our case team dashboard. On the left side you’ll find some CLDR locale data – plural forms, script, writing direction and the number of literate speakers. On the right side you’ll see overall team statistics.

    Subpage Navigation
    Team, Project and Localization (i.e. localization of a project by a team) dashboards consist of various subpages and you switch between them using tabs. As you’ll notice by the YouTube-like progress bar on top of the page, the navigation is now AJAX-based, which should make it faster.

    Project Listing
    Finally, in the project list below the tabs you’ll find the deadline and priority columns. If the deadline is overdue, it’s painted red. If it’s orange, you have less than a week to complete your translations. Projects are ranked in 5 priority levels, marked with stars.

The Team dashboard now allows you to jump straight to the translate view with a translation status filter applied. Hover over any project to reveal its stats and select one of the translation statuses or “All strings”. A tooltip also appears when hovering over the latest activity column, revealing the latest translation, its author and date.

Jump straight to translate view with translation status filter applied

    Bugzilla integration
    Mozilla uses Bugzilla to track progress of projects and localizations. Open bugs specific to the team can now be accessed via the Bugs tab on the team page. Thanks to Axel, who wrote the code to support this functionality in Elmo, it’s now part of Pontoon too. Which means we’re now officially merging Pontoon and our standalone dashboard codebase!

Open bugs for the Greek team

    Other dashboards
Project and Localization dashboards share their layouts with the Team dashboard. You’ll notice some information not previously available, such as the repository URL on the Project page, and a list of contributors, project info and team info on the Localization page.

    A look ahead
    With these changes, our dashboards should not only become more powerful, easier to use and more pleasant to the eye, but also more flexible to adapt to future requests. There are plenty of things we could improve:

    • Team dashboards could entirely replace our team wiki pages, reflecting team hierarchy and providing links to l10n resources like style guides.
    • Project dashboards could contain links to l10n preview environments and contact information (l10n drivers, developers).
    • Localization dashboards could contain deadline and priority information provided by the web dashboard.

    Let us know how you feel about the new dashboards. And don’t forget, you can always file a bug or submit an idea for improvement! 😉

    Air MozillaThe Joy of Coding - Episode 93

    The Joy of Coding - Episode 93 mconley livehacks on real Firefox bugs while thinking aloud.

    Air MozillaWeekly SUMO Community Meeting Mar. 01, 2017

    Weekly SUMO Community Meeting Mar. 01, 2017 This is the sumo weekly call

    hacks.mozilla.orgWeb Games Platform: Newest Developments

In July of 2015 we announced our Games Technology Roadmap, and we have been working since then to address the pain points shared by developers.

Games are an important part of the web experience. Mozilla and other browser vendors have been working hard to find alternative paths that developers can migrate to. As we come to the end of plugins (Firefox 52), and with many browsers planning to make Flash click-to-play during 2017–2018, we are working hard to complete the alternatives and ensure they are viable. Many new features that will help improve the platform are arriving in the next few versions of Firefox, and we are seeing other browsers on a similar course.

    We’ve been working closely with other browsers, tool makers, and game developers to test at scale, and promote universal availability of key technologies across all the major browsers. We have seen success with top Facebook titles such as Bubble Witch 3 Saga and Candy Crush Jelly Saga from King, and Top Eleven from Nordeus. There is still work to do to get full potential out of the platform, but today we wanted to provide a status update on what we’ve been working on and what you’ll see shipping in browsers in the near future. Whether you are using compiled code bases and/or JavaScript there’s a little bit here for everybody!

    What We Heard

    As we reached out to game developers, publishers, and browser makers we heard a common set of concerns and requests:

    • Developers wanted to improve their user experience. They have let us know they would like to see reduced code size, faster compile and load times, reduced memory usage, and improved performance to make it easier for users to engage.
      • WebAssembly is a significant leap forward in addressing all of these issues.
      • We recently announced that the four major browsers have reached a consensus on the stable initial version of the standard, enabling all browsers to start shipping WebAssembly.
      • Mozilla intends to release WebAssembly, the successor to asm.js, in Firefox 52 in March 2017.
    • Developers wanted to reach as many users as possible and would like to see more users be able to run WebGL content.
      • Targeted efforts have improved desktop WebGL success rates on non-Windows XP machines from 80% to 99%. We are also seeing a similar trend across other browsers. In addition, telemetry shows that desktop WebGL availability matches that of Flash on Firefox.
      • Developers wanted OpenGL ES3 features in WebGL 2, including new texture formats, and support for floating point texture filtering, rendering to multiple render targets, and MSAA multi-sampling.
      • WebGL 2 supports OpenGL ES3 features. WebGL 2 was released in Firefox 51 in January 2017, and other browsers are following suit.
    • Developers have asked for greater flexibility in allocating larger amounts of address space on 32-Bit systems to run bigger and more complex applications.
      • 32-Bit Out Of Memory (OOM) issues are often caused by a browser process being unable to allocate large blocks of memory due to address space fragmentation.
      • Firefox intends to ship a 32-Bit OOM solution in Firefox 53 in April 2017.
        We do not have similar challenges in 64-Bit versions of Firefox.
    • Developers would like greater information about the hardware Firefox users browse the web on to inform their development decisions.

    Details

    Standardizing and Shipping WebAssembly:

    WebAssembly is an emerging standard whose goal is to define a safe, portable, size- and load-time efficient binary compiler target which offers near-native performance — a virtual CPU for the Web. WebAssembly is being developed in a W3C Community Group (CG) whose members include Mozilla, Microsoft, Google, and Apple.

WebAssembly can be considered the successor to asm.js, a Mozilla-pioneered project to push the limits of performance within the constraints of the existing JavaScript language. Although asm.js now offers impressive performance in all browsers, WebAssembly, as a new standard, removes incidental constraints, allowing engines to get ever-closer to native performance (while maintaining the same safety and security model as JavaScript). Preliminary measurements in Firefox show that, on average, WebAssembly brings realistic C/C++ workloads to run within 1.25× native speed, down from 1.38× with asm.js — a 9% improvement! Further speedups are anticipated as work continues on the whole pipeline. Dramatic 8× speedups have been observed for synthetic workloads that utilize new WebAssembly features like 64-bit integer arithmetic.

    We recently announced that the WebAssembly Community Group had reached a consensus on the initial version of the standard. Interoperable implementations have landed in pre-release Firefox and Chrome channels, and are under development in Chakra and JavaScriptCore. Mozilla intends to release WebAssembly, the successor to asm.js, in Firefox 52 in March 2017.

    Improving WebGL Success Rates:

    We have been able to identify and address issues that impacted the availability of WebGL on Firefox. In particular, Firefox telemetry shows that we have reduced WebGL availability failures from over 20% in Firefox 47 to less than 8% in Firefox 50 across all machines. And targeted efforts have improved WebGL success rates on non-Windows XP machines from 80% to 99%. This is consistent with the level of improvement we have seen with other browsers.

    Standardizing and Shipping WebGL/WebGL 2:

    WebGL 2 is based on the OpenGL ES 3.0 specification, and offers new features, including 3D textures and 2D texture arrays, ESSL 3.0 (an advanced shading language), integer texture formats and vertex attributes, transform feedback, and uniform blocks for more efficient uploads. It also adds primitive restart, framebuffer blitting and invalidation, separable sampler objects, occlusion queries and pixel buffer objects. In addition, some optional WebGL 1 extensions are now part of the guaranteed core of WebGL 2, including multiple render targets, instanced drawing, depth and floating-point textures, and sRGB support. Also notable is support for the new ETC2 texture format which provides alpha support on a compressed texture and is supported on both desktop and mobile devices. Finally, improved garbage collection offers a smoother experience overall. WebGL 2 shipped in Firefox 51 in January 2017.

    Addressing 32-Bit Out of Memory Issues (OOMs):

A consistent pain point for web developers using compiled code bases and asm.js is hitting out-of-memory conditions on 32-Bit browsers. By default, on Windows, Firefox is a 32-Bit application. This limits Firefox to using (at most) 4 gigabytes of address space, which tends to become fragmented over time and can prevent a game from requesting a large enough allocation to run successfully.

    To address this, we have proposed a new Large-Allocation header. This header tells the browser to make a best-effort attempt to load the document in an unfragmented content process, which should greatly decrease the OOM failure rate for top-level browsing contexts, even on bigger allocations. We aim to ensure that if the conditions for a cross-process navigation are met, web apps are able to reliably allocate a gigabyte of contiguous address space. It is our intention to ship this solution in Firefox 53.
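
For illustration, here is a minimal sketch of how a site could opt in, assuming a plain Node server (the Large-Allocation header name comes from the proposal; the server setup and file name are hypothetical, and the header value states the expected allocation size in megabytes, with 0 meaning unknown):

const http = require('http');

http.createServer((req, res) => {
  res.writeHead(200, {
    'Content-Type': 'text/html',
    // Ask the browser, best-effort, for a fresh unfragmented content process.
    'Large-Allocation': '0',
  });
  res.end('<script src="game.js"></script>');
}).listen(8000);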

An additional opportunity is to encourage users toward using a 64-Bit browser, which gives applications access to a vastly larger address space. This means that this kind of address space exhaustion (and the resulting OOMs) is basically impossible.

    Today:

    • 72% of our Windows users are running 32-bit Firefox on 64-bit Windows. These users could switch to a 64-bit Firefox.
    • 25% are running 32-bit Firefox on 32-bit Windows. These users cannot switch to a 64-bit Firefox.
    • 3% are running 64-bit Firefox already.

    As nearly three-quarters of Firefox users are running 32-bit Firefox on 64-bit Windows, there is a huge opportunity to improve the ability of those users to run large web apps by accelerating the shift to 64-bit. As such, per our 64-Bit plan of record, we are targeting August 2017 (Firefox 55) to change the Firefox installer to default to 64-bit for new installs on 64-bit Windows. Upgrading existing 32-bit Firefox users on 64-bit Windows to 64-bit Firefox will probably happen in October 2017 (Firefox 56).

    Sharing information about the Firefox Hardware Audience:

Suppose you’re developing a sophisticated web game or application. You may wonder what capabilities web users have access to on their systems, or how you can target the widest possible audience. To help answer these questions and inform your development decisions, we recently released the Firefox Hardware Report.

On the site you’ll find a variety of data points showing what hardware and OSes Firefox users browse the web with, and trends over time. This includes CPU vendors, cores, and speeds; system memory; GPU vendors, models, and display resolutions; operating system architecture and market share; browser architecture share; and finally, Flash plugin availability.

    In Conclusion

    Our focus is now on landing all of the above improvements in the coming months. We wish to extend our gratitude to the game developers, engine providers, and other browsers’ engine teams who have worked so long on this technology. It’s been a massive effort, and we all collectively could not have done it without your help and feedback. Thank you!

    QMOSpecial bug verification event for Firefox 52 – a success with the help of the QA Community!

    Hello Mozillians!

Last week, the Release QA Team (Firefox for Desktop) reached out to a few people from the QA Community and asked for help with a very specific list of bug fixes that, if successfully verified, would make the team more confident about the quality of Firefox 52.0.

The following contributors were hand-picked based on their consistent and reliable performance during Bug Verification Days: Maruf Rahman, Md. Majedul Islam, Kazi Nuzhat Tasnem, Azmina, Saheda Reza, Nazir Ahmed Sabbir, Sajedul Islam, Tanvir Rahman and Hossain Al Ikram.

    It gives me great pleasure to extend my warmest congratulations to each and every one of them, on behalf of the entire Release QA Team. Thank you and we all hope that you’ll be willing to repeat this exercise again, soon.

    Keep up the good work guys!
    Mihai Boldan, QA Community Mentor
    Firefox for Desktop, Release QA Team

    QMOFirefox 53.0 Aurora Testday, March 3rd

    Hello Mozillians,

We are happy to let you know that on Friday, March 3rd, we are organizing the Firefox 53.0 Aurora Testday. We’ll be focusing our testing on the following features: Implement support for WebM Alpha, Reader Mode Displays Estimated Reading Time, and Quantum – Compositor Process for Windows. Check out the detailed instructions via this etherpad.

No previous testing experience is required, so feel free to join us on the #qa IRC channel, where our moderators will offer you guidance and answer your questions.

    Join us and help us make Firefox better!

    See you on Friday!

    hacks.mozilla.orgDoubling Down on Cross-Browser Testing

    As the trends for web access via desktop and mobile converge, it is more important than ever for developers to test their code wherever users choose to access content on the web. This is why Mozilla has partnered with BrowserStack to offer free testing on mobile Firefox for Android (iOS upcoming). We understand that not every developer owns a device bank or has the time to test on every OS. Mozilla is committed to ensuring a healthy and robust web and we believe that cross-browser compatibility is a key component of that commitment.

    The convenience of access everywhere

    People often ask me to recommend a good camera. My answer has always been the same: the best camera, no matter its form factor or its features, is the one you carry around, the one that is always with you. Taking photos, like writing your thoughts, is something you want to be able to do anywhere.

    The need to have access to information, to record your thoughts, to communicate with others through the Web doesn’t stop when you step out of your office or home. The convenience of access everywhere is more important than a large screen and a good keyboard.

    It no longer makes sense to speak about the mobile Web. Interactions are mediated through many form factors and capabilities, including small screen devices with touch UIs. We need to cater for those and at the same time respect user choices for their devices and software.

    The growth (and struggles) of the Web on mobile

    There’s been steady growth in web access via mobile devices since the time that mobile phones first had browsers. And growth of mobile traffic is projected to increase sevenfold before 2021. Increasingly, you can expect users and customers to visit on their mobile devices. Design your sites and apps so that they are flexible enough to adjust to many differing situations. Key design considerations include: layout, connectivity quality, performance, input interactions, and content accessibility. Design these to work responsively across a variety of form factors and OSes.
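
As a small runtime illustration of adapting to form factor (a sketch; the 600px breakpoint and the compact class name are arbitrary choices), the standard matchMedia API lets you react to viewport changes from JavaScript:

const smallScreen = window.matchMedia('(max-width: 600px)');

function applyLayout() {
  // Toggle a class so CSS can adapt the layout for small viewports.
  document.body.classList.toggle('compact', smallScreen.matches);
}

smallScreen.addListener(applyLayout); // runs whenever the match state changes
applyLayout();                        // apply once on load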

Historically, testing on mobile has been difficult. The cost of devices makes for an expensive testing pool. There were no great ways to debug web sites within the constraints of mobile browsers – it felt a bit like the pre-Firebug days on desktop. In the last couple of years, new developer tooling and third-party services offering remote testing have given developers better options.

    Testing for speed, connectivity issues, screen density and size, touch interactions and different mobile user agents gives you the insights you need to optimize your mobile user experience. This kind of testing has become a reliable feature in desktop developer tools.

If you prefer to access the devices directly through USB or Wi-Fi, developer tools can give you an overview of the device content that you can directly manipulate and interact with. Testing on a real mobile device from the desktop is essential.

    In this example, we access the DOM of Chrome for Android on Firefox Desktop developer tools and modify the stylesheet values for testing.

    The cost of not testing

It’s hard to understand all the ways a user may struggle when interacting with a website if you don’t put yourself in the user’s context. If you aren’t testing on a variety of browsers and devices, you run the risk of alienating and losing frustrated visitors, and you miss the opportunity to build your brand and connect with people.

    Be careful with the metrics you are collecting. Statistics are a useful tool, but they can only reflect the reality you’ve created for them to measure. Basically, we measure only what we let in. Does your site work reliably on all modern browsers?

    Some questions to ask yourself before you ship:

• Did you check that the menu works with touch interactions?
• Did you check that the layout adjusts well to different screen sizes, including content uploaded by users?
• What happens if the connection breaks?
• Or if it is merely slow, because the person is on a business trip with different network capabilities?
• Have you tested in more than one browser?
• Is the content still readable on a small screen at arm’s length?

    These issues are known and are documented for you to be aware of; new issues will come up with future generation devices.

    Best Practices

Mobile development best practices:

    • Learn and use web standards.
    • Make cross-browser testing part of your tool-chain.
    • Learn how to remote-debug mobile devices (Firefox, Chrome).
    • Learn how to use device emulators (Firefox, Chrome).
    • Start testing on real mobile devices, on a variety of devices and network speeds, if possible (or simulate these speeds with developer tools).
• Try testing on mobile and desktop with our partner BrowserStack.

    All of your experience developing sites and apps on desktop should apply to mobile as well: just because something works in one browser, that’s no guarantee it will work for all your users — perhaps not even users of the same browser on different devices or platforms.

    Firebug BlogFirebug 2.0.19

The Firebug team released Firebug 2.0.19. This is a maintenance release ensuring compatibility with the latest Firefox releases.

The beta channel on AMO is also updated.

Firebug 2.0.19 is compatible with Firefox 30 – 54 and fixes issue 8077.

You might also want to read about Unifying Firebug & Firefox DevTools.

Please post feedback in the newsgroup, thanks.

Jan ‘Honza’ Odvarko

    Mozilla Add-ons BlogAdd-on Compatibility for Firefox 53

    If you haven’t yet, please read our roadmap to Firefox 57. Firefox 53 is an important milestone, when we will stop accepting new legacy add-ons on AMO, will turn Multiprocess Firefox on by default, and will be restricting binary access from add-ons outside of the WebExtensions API.

    Firefox 53 will be released on April 18th. Here’s the list of changes that went into this version that can affect add-on compatibility. There is more information available in Firefox 53 for Developers, so you should also give it a look.

    General

    Password Manager

The following three changes are related; the main impact is that add-ons can no longer call findSlotByName("") to figure out whether the master password is set. You can find an example of how to make this change here.
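
As a rough sketch of the kind of replacement involved (based on the nsIPK11TokenDB interface and assuming a chrome-privileged legacy add-on context; see the linked example for the canonical fix), the check can go through the internal key token instead:

// Sketch: detect whether the master password is set without findSlotByName("").
const tokenDB = Cc["@mozilla.org/security/pk11tokendb;1"]
                  .getService(Ci.nsIPK11TokenDB);
const token = tokenDB.getInternalKeyToken();
const masterPasswordSet = token.hasPassword;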

    XPCOM and Modules

    WebExtensions

    • Encrypt record deletes. The storage.sync API hasn’t shipped yet, but it’s probably already in use by some pre-release users. This change causes old synced data to be lost.

    Let me know in the comments if there’s anything missing or incorrect on these lists. If your add-on breaks on Firefox 53, I’d like to know.

    The automatic compatibility validation and upgrade for add-ons on AMO will happen in a few weeks, so keep an eye on your email if you have an add-on listed on our site with its compatibility set to Firefox 52.

    The post Add-on Compatibility for Firefox 53 appeared first on Mozilla Add-ons Blog.

    hacks.mozilla.orgA cartoon intro to WebAssembly

    WebAssembly is fast. You’ve probably heard this. But what is it that makes WebAssembly fast?

    In this series, I want to explain to you why WebAssembly is fast.

    Wait, so what is WebAssembly?

    WebAssembly is a way of taking code written in programming languages other than JavaScript and running that code in the browser. So when people say that WebAssembly is fast, what they are comparing it to is JavaScript.

    Now, I don’t want to imply that it’s an either/or situation — that you’re either using WebAssembly or using JavaScript. In fact, we expect that developers will use both WebAssembly and JavaScript in the same application.

    But it is useful to compare the two, so you can understand the potential impact that WebAssembly will have.

    A little performance history

    JavaScript was created in 1995. It wasn’t designed to be fast, and for the first decade, it wasn’t fast.

    Then the browsers started getting more competitive.

    In 2008, a period that people call the performance wars began. Multiple browsers added just-in-time compilers, also called JITs. As JavaScript was running, the JIT could see patterns and make the code run faster based on those patterns.

    The introduction of these JITs led to an inflection point in the performance of JavaScript. Execution of JS was 10x faster.

    A graph showing JS execution performance increasing sharply in 2008

    With this improved performance, JavaScript started being used for things no one ever expected it to be used for, like server-side programming with Node.js. The performance improvement made it feasible to use JavaScript on a whole new class of problems.

    We may be at another one of those inflection points now, with WebAssembly.

    A graph showing another performance spike in 2017 with a question mark next to it

    So, let’s dive into the details to understand what makes WebAssembly fast.

    Background:

    WebAssembly, the present:

    WebAssembly, the future:

    hacks.mozilla.orgA crash course in just-in-time (JIT) compilers

    This is the second part in a series on WebAssembly and what makes it fast. If you haven’t read the others, we recommend starting from the beginning.

    JavaScript started out slow, but then got faster thanks to something called the JIT. But how does the JIT work?

    How JavaScript is run in the browser

    When you as a developer add JavaScript to the page, you have a goal and a problem.

    Goal: you want to tell the computer what to do.

    Problem: you and the computer speak different languages.

    You speak a human language, and the computer speaks a machine language. Even if you don’t think about JavaScript or other high-level programming languages as human languages, they really are. They’ve been designed for human cognition, not for machine cognition.

    So the job of the JavaScript engine is to take your human language and turn it into something the machine understands.

    I think of this like the movie Arrival, where you have humans and aliens who are trying to talk to each other.

    A person holding a sign with source code on it, and an alien responding in binary

    In that movie, the humans and aliens don’t just do word-for-word translations. The two groups have different ways of thinking about the world. And that’s true of humans and machines too (I’ll explain this more in the next post).

    So how does the translation happen?

    In programming, there are generally two ways of translating to machine language. You can use an interpreter or a compiler.

    With an interpreter, this translation happens pretty much line-by-line, on the fly.

    A person standing in front of a whiteboard, translating source code to binary as they go

    A compiler on the other hand doesn’t translate on the fly. It works ahead of time to create that translation and write it down.

    A person holding up a page of translated binary

    There are pros and cons to each of these ways of handling the translation.

    Interpreter pros and cons

    Interpreters are quick to get up and running. You don’t have to go through that whole compilation step before you can start running your code. You just start translating that first line and running it.

    Because of this, an interpreter seems like a natural fit for something like JavaScript. It’s important for a web developer to be able to get going and run their code quickly.

    And that’s why browsers used JavaScript interpreters in the beginning.

    But the con of using an interpreter comes when you’re running the same code more than once. For example, if you’re in a loop. Then you have to do the same translation over and over and over again.

    Compiler pros and cons

    The compiler has the opposite trade-offs.

    It takes a little bit more time to start up because it has to go through that compilation step at the beginning. But then code in loops runs faster, because it doesn’t need to repeat the translation for each pass through that loop.

    Another difference is that the compiler has more time to look at the code and make edits to it so that it will run faster. These edits are called optimizations.

    The interpreter is doing its work during runtime, so it can’t take much time during the translation phase to figure out these optimizations.

    Just-in-time compilers: the best of both worlds

As a way of getting rid of the interpreter’s inefficiency—where the interpreter has to keep retranslating the code every time it goes through the loop—browsers started mixing compilers in.

    Different browsers do this in slightly different ways, but the basic idea is the same. They added a new part to the JavaScript engine, called a monitor (aka a profiler). That monitor watches the code as it runs, and makes a note of how many times it is run and what types are used.

    At first, the monitor just runs everything through the interpreter.

    Monitor watching code execution and signaling that code should be interpreted

    If the same lines of code are run a few times, that segment of code is called warm. If it’s run a lot, then it’s called hot.

    Baseline compiler

    When a function starts getting warm, the JIT will send it off to be compiled. Then it will store that compilation.

    Monitor sees function is called multiple times, signals that it should go to the baseline compiler to have a stub created

    Each line of the function is compiled to a “stub”. The stubs are indexed by line number and variable type (I’ll explain why that’s important later). If the monitor sees that execution is hitting the same code again with the same variable types, it will just pull out its compiled version.

    That helps speed things up. But like I said, there’s more a compiler can do. It can take some time to figure out the most efficient way to do things… to make optimizations.

    The baseline compiler will make some of these optimizations (I give an example of one below). It doesn’t want to take too much time, though, because it doesn’t want to hold up execution too long.

    However, if the code is really hot—if it’s being run a whole bunch of times—then it’s worth taking the extra time to make more optimizations.

    Optimizing compiler

    When a part of the code is very hot, the monitor will send it off to the optimizing compiler. This will create another, even faster, version of the function that will also be stored.

    Monitor sees function is called even more times, signals that it should be fully optimized

    In order to make a faster version of the code, the optimizing compiler has to make some assumptions.

For example, if it can assume that all objects created by a particular constructor have the same shape—that is, that they always have the same property names, and that those properties were added in the same order—then it can cut some corners based on that.

    The optimizing compiler uses the information the monitor has gathered by watching code execution to make these judgments. If something has been true for all previous passes through a loop, it assumes it will continue to be true.

    But of course with JavaScript, there are never any guarantees. You could have 99 objects that all have the same shape, but then the 100th might be missing a property.

    So the compiled code needs to check before it runs to see whether the assumptions are valid. If they are, then the compiled code runs. But if not, the JIT assumes that it made the wrong assumptions and trashes the optimized code.

    Monitor sees that types don't match expectations, and signals to go back to interpreter. Optimizer throws out optimized code

    Then execution goes back to the interpreter or baseline compiled version. This process is called deoptimization (or bailing out).

    Usually optimizing compilers make code faster, but sometimes they can cause unexpected performance problems. If you have code that keeps getting optimized and then deoptimized, it ends up being slower than just executing the baseline compiled version.

    Most browsers have added limits to break out of these optimization/deoptimization cycles when they happen. If the JIT has made more than, say, 10 attempts at optimizing and keeps having to throw it out, it will just stop trying.

    An example optimization: Type specialization

    There are a lot of different kinds of optimizations, but I want to take a look at one type so you can get a feel for how optimization happens. One of the biggest wins in optimizing compilers comes from something called type specialization.

    The dynamic type system that JavaScript uses requires a little bit of extra work at runtime. For example, consider this code:

    
function arraySum(arr) {
  var sum = 0;
  for (var i = 0; i < arr.length; i++) {
    sum += arr[i];
  }
  return sum;
}
    

The += step in the loop may look like something you can compute in one step, but because of dynamic typing, it takes more steps than you would expect.

    Let’s assume that arr is an array of 100 integers. Once the code warms up, the baseline compiler will create a stub for each operation in the function. So there will be a stub for sum += arr[i], which will handle the += operation as integer addition.

However, sum and arr[i] aren’t guaranteed to be integers. Because types are dynamic in JavaScript, there’s a chance that in a later iteration of the loop, arr[i] will be a string. Integer addition and string concatenation are two very different operations, so they would compile to very different machine code.

    The way the JIT handles this is by compiling multiple baseline stubs. If a piece of code is monomorphic (that is, always called with the same types) it will get one stub. If it is polymorphic (called with different types from one pass through the code to another), then it will get a stub for each combination of types that has come through that operation.
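
For example (an illustrative snippet; the stubs themselves live inside the engine and aren’t visible from JavaScript):

function combine(a, b) {
  return a + b;
}

combine(1, 2);     // monomorphic so far: one stub for (integer, integer)
combine(3, 4);     // same types, so the existing stub is reused

combine('a', 'b'); // now polymorphic: a second stub for (string, string)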

    This means that the JIT has to ask a lot of questions before it chooses a stub.

    Decision tree showing 4 type checks

    Because each line of code has its own set of stubs in the baseline compiler, the JIT needs to keep checking the types each time the line of code is executed. So for each iteration through the loop, it will have to ask the same questions.

    Code looping with JIT asking what types are being used in each loop

    The code would execute a lot faster if the JIT didn’t need to repeat those checks. And that’s one of the things the optimizing compiler does.

    In the optimizing compiler, the whole function is compiled together. The type checks are moved so that they happen before the loop.

    Code looping with questions being asked ahead of time

    Some JITs optimize this even further. For example, in Firefox there’s a special classification for arrays that only contain integers. If arr is one of these arrays, then the JIT doesn’t need to check if arr[i] is an integer. This means that the JIT can do all of the type checks before it enters the loop.

    Conclusion

    That is the JIT in a nutshell. It makes JavaScript run faster by monitoring the code as it’s running it and sending hot code paths to be optimized. This has resulted in many-fold performance improvements for most JavaScript applications.

    Even with these improvements, though, the performance of JavaScript can be unpredictable. And to make things faster, the JIT has added some overhead during runtime, including:

    • optimization and deoptimization
    • memory used for the monitor’s bookkeeping and recovery information for when bailouts happen
    • memory used to store baseline and optimized versions of a function

    There’s room for improvement here: that overhead could be removed, making performance more predictable. And that’s one of the things that WebAssembly does.

    In the next article, I’ll explain more about assembly and how compilers work with it.

    hacks.mozilla.orgA crash course in assembly

    This is the third part in a series on WebAssembly and what makes it fast. If you haven’t read the others, we recommend starting from the beginning.

    To understand how WebAssembly works, it helps to understand what assembly is and how compilers produce it.

    In the article on the JIT, I talked about how communicating with the machine is like communicating with an alien.

    A person holding a sign with source code on it, and an alien responding in binary

    I want to take a look now at how that alien brain works—how the machine’s brain parses and understands the communication coming in to it.

    There’s a part of this brain that’s dedicated to the thinking—things like adding and subtracting, or logical operations. There’s also a part of the brain near that which provides short-term memory, and another part that provides longer-term memory.

    These different parts have names.

    • The part that does the thinking is the Arithmetic-logic Unit (ALU).
    • The short term memory is provided by registers.
    • The longer term memory is the Random Access Memory (or RAM).

    A diagram showing the CPU, including ALU and Registers, and RAM

    The sentences in machine code are called instructions.

    What happens when one of these instructions comes into the brain? It gets split up into different parts that mean different things.

    The way that this instruction is split up is specific to the wiring of this brain.

    For example, a brain that is wired like this might always take the first six bits and pipe that in to the ALU. The ALU will figure out, based on the location of ones and zeros, that it needs to add two things together.

    This chunk is called the “opcode”, or operation code, because it tells the ALU what operation to perform.

    6-bits being taken from a 16-bit instruction and being piped into the ALU

    Then this brain would take the next two chunks of three bits each to determine which two numbers it should add. These would be addresses of the registers.

    Two 3-bit chunks being decoded to determine source registers
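
As a toy JavaScript illustration of that decoding (the field widths match the hypothetical 16-bit machine described above; real instruction sets differ):

// Split a 16-bit instruction into an opcode and two register addresses.
function decode(instruction) {
  return {
    opcode: (instruction >> 10) & 0b111111, // top 6 bits: tell the ALU what to do
    srcA:   (instruction >> 7)  & 0b111,    // next 3 bits: first register address
    srcB:   (instruction >> 4)  & 0b111,    // next 3 bits: second register address
  };                                        // remaining 4 bits unused in this sketch
}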

Note the annotations above the machine code in the diagrams, which make it easier for us humans to understand what’s going on. This is what assembly is. It’s called symbolic machine code. It’s a way for humans to make sense of the machine code.

    You can see here there is a pretty direct relationship between the assembly and the machine code for this machine. Because of this, there are different kinds of assembly for the different kinds of machine architectures that you can have. When you have a different architecture inside of a machine, it is likely to require its own dialect of assembly.

    So we don’t just have one target for our translation. It’s not just one language called machine code. It’s many different kinds of machine code. Just as we speak different languages as people, machines speak different languages.

    With human to alien translation, you may be going from English, or Russian, or Mandarin to Alien Language A or Alien language B. In programming terms, this is like going from C, or C++, or Rust to x86 or to ARM.

    You want to be able to translate any one of these high-level programming languages down to any one of these assembly languages (which corresponds to the different architectures). One way to do this would be to create a whole bunch of different translators that can go from each language to each assembly.

    Diagram showing programming languages C, C++, and Rust on the left and assembly languages x86 and ARM on the right, with arrows between every combination

    That’s going to be pretty inefficient. To solve this, most compilers put at least one layer in between. The compiler will take this high-level programming language and translate it into something that’s not quite as high level, but also isn’t working at the level of machine code. And that’s called an intermediate representation (IR).

    Diagram showing an intermediate representation between high level languages and assembly languages, with arrows going from high level programming languages to intermediate representation, and then from intermediate representation to assembly language

    This means the compiler can take any one of these higher-level languages and translate it to the one IR language. From there, another part of the compiler can take that IR and compile it down to something specific to the target architecture.

    The compiler’s front-end translates the higher-level programming language to the IR. The compiler’s backend goes from IR to the target architecture’s assembly code.

    Same diagram as above, with labels for front-end and back-end

    Conclusion

    That’s what assembly is and how compilers translate higher-level programming languages to assembly. In the next article, we’ll see how WebAssembly fits in to this.

    hacks.mozilla.orgCreating and working with WebAssembly modules

    This is the fourth part in a series on WebAssembly and what makes it fast. If you haven’t read the others, we recommend starting from the beginning.

    WebAssembly is a way to run programming languages other than JavaScript on web pages. In the past when you wanted to run code in the browser to interact with the different parts of the web page, your only option was JavaScript.

    So when people talk about WebAssembly being fast, the apples to apples comparison is to JavaScript. But that doesn’t mean that it’s an either/or situation—that you are either using WebAssembly, or you’re using JavaScript.

    In fact, we expect that developers are going to use both WebAssembly and JavaScript in the same application. Even if you don’t write WebAssembly yourself, you can take advantage of it.

    WebAssembly modules define functions that can be used from JavaScript. So just like you download a module like lodash from npm today and call functions that are part of its API, you will be able to download WebAssembly modules in the future.

    So let’s see how we can create WebAssembly modules, and then how we can use them from JavaScript.

    Where does WebAssembly fit?

    In the article about assembly, I talked about how compilers take high-level programming languages and translate them to machine code.

    Diagram showing an intermediate representation between high level languages and assembly languages, with arrows going from high level programming languages to intermediate representation, and then from intermediate representation to assembly language

    Where does WebAssembly fit into this picture?

You might think it is just another one of the target assembly languages. That is kind of true, except that each one of those languages (x86, ARM) corresponds to a particular machine architecture.

When you’re delivering code to be executed on the user’s machine across the web, you don’t know what architecture the code will be running on.

    So WebAssembly is a little bit different than other kinds of assembly. It’s a machine language for a conceptual machine, not an actual, physical machine.

    Because of this, WebAssembly instructions are sometimes called virtual instructions. They have a much more direct mapping to machine code than JavaScript source code. They represent a sort of intersection of what can be done efficiently across common popular hardware. But they aren’t direct mappings to the particular machine code of one specific hardware.

    Same diagram as above with WebAssembly inserted between the intermediate representation and assembly

    The browser downloads the WebAssembly. Then, it can make the short hop from WebAssembly to that target machine’s assembly code.

    Compiling to .wasm

    The compiler tool chain that currently has the most support for WebAssembly is called LLVM. There are a number of different front-ends and back-ends that can be plugged into LLVM.

    Note: Most WebAssembly module developers will code in languages like C and Rust and then compile to WebAssembly, but there are other ways to create a WebAssembly module. For example, there is an experimental tool that helps you build a WebAssembly module using TypeScript, or you can code in the text representation of WebAssembly directly.

    Let’s say that we wanted to go from C to WebAssembly. We could use the clang front-end to go from C to the LLVM intermediate representation. Once it’s in LLVM’s IR, LLVM understands it, so LLVM can perform some optimizations.

    To go from LLVM’s IR (intermediate representation) to WebAssembly, we need a back-end. There is one that’s currently in progress in the LLVM project. That back-end is most of the way there and should be finalized soon. However, it can be tricky to get it working today.

    There’s another tool called Emscripten which is a bit easier to use at the moment. It has its own back-end that can produce WebAssembly by compiling to another target (called asm.js) and then converting that to WebAssembly. It uses LLVM under the hood, though, so you can switch between the two back-ends from Emscripten.

    Diagram of the compiler toolchain

    Emscripten includes many additional tools and libraries to allow porting whole C/C++ codebases, so it’s more of a software developer kit (SDK) than a compiler. For example, systems developers are used to having a filesystem that they can read from and write to, so Emscripten can simulate a file system using IndexedDB.

    Regardless of the toolchain you’ve used, the end result is a file that ends in .wasm. I’ll explain more about the structure of the .wasm file below. First, let’s look at how you can use it in JS.

    Loading a .wasm module in JavaScript

    The .wasm file is the WebAssembly module, and it can be loaded in JavaScript. As of this moment, the loading process is a little bit complicated.

    
    function fetchAndInstantiate(url, importObject) {
      return fetch(url).then(response =>
        response.arrayBuffer()
      ).then(bytes =>
        WebAssembly.instantiate(bytes, importObject)
      ).then(results =>
        results.instance
      );
    }
    

    You can see this in more depth in our docs.
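
Once instantiated, the module’s exports can be called like ordinary JavaScript functions. For instance (the module URL and its exported add function are hypothetical):

fetchAndInstantiate('program.wasm').then(instance => {
  // Anything listed in the module's Export section shows up on `exports`.
  console.log(instance.exports.add(1, 2)); // 3, assuming the module exports `add`
});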

We’re working on making this process easier. We expect to make improvements to the toolchain and integrate with existing module bundlers like webpack or loaders like SystemJS. We believe that loading WebAssembly modules can be as easy as loading JavaScript ones.

    There is a major difference between WebAssembly modules and JS modules, though. Currently, functions in WebAssembly can only use numbers (integers or floating point numbers) as parameters or return values.

    Diagram showing a JS function calling a C function and passing in an integer, which returns an integer in response

    For any data types that are more complex, like strings, you have to use the WebAssembly module’s memory.

If you’ve mostly worked with JavaScript, having direct access to memory isn’t so familiar. More performant languages, like C, C++, and Rust, tend to have manual memory management. The WebAssembly module’s memory simulates the heap that you would find in those languages.

    To do this, it uses something in JavaScript called an ArrayBuffer. The array buffer is an array of bytes. So the indexes of the array serve as memory addresses.

    If you want to pass a string between the JavaScript and the WebAssembly, you convert the characters to their character code equivalent. Then you write that into the memory array. Since indexes are integers, an index can be passed in to the WebAssembly function. Thus, the index of the first character of the string can be used as a pointer.

    Diagram showing a JS function calling a C function with an integer that represents a pointer into memory, and then the C function writing into memory
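
Here is a minimal sketch of that pattern, assuming the module exports its memory as memory and a hypothetical function takesString that expects a pointer to a null-terminated string:

function passString(instance, str) {
  const heap = new Uint8Array(instance.exports.memory.buffer);
  for (let i = 0; i < str.length; i++) {
    heap[i] = str.charCodeAt(i); // write character codes into the module's memory
  }
  heap[str.length] = 0;          // C-style null terminator
  // A real wrapper would allocate space rather than writing at offset 0.
  return instance.exports.takesString(0); // pass the index of the first character
}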

    It’s likely that anybody who’s developing a WebAssembly module to be used by web developers is going to create a wrapper around that module. That way, you as a consumer of the module don’t need to know about memory management.

    If you want to learn more, check out our docs on working with WebAssembly’s memory.

    The structure of a .wasm file

    If you are writing code in a higher level language and then compiling it to WebAssembly, you don’t need to know how the WebAssembly module is structured. But it can help to understand the basics.

    If you haven’t already, we suggest reading the article on assembly (part 3 of the series).

    Here’s a C function that we’ll turn into WebAssembly:

    
    int add42(int num) {
      return num + 42;
    }
    

    You can try using the WASM Explorer to compile this function.

    If you open up the .wasm file (and if your editor supports displaying it), you’ll see something like this.

    
    00 61 73 6D 0D 00 00 00 01 86 80 80 80 00 01 60
    01 7F 01 7F 03 82 80 80 80 00 01 00 04 84 80 80
    80 00 01 70 00 00 05 83 80 80 80 00 01 00 01 06
    81 80 80 80 00 00 07 96 80 80 80 00 02 06 6D 65
    6D 6F 72 79 02 00 09 5F 5A 35 61 64 64 34 32 69
    00 00 0A 8D 80 80 80 00 01 87 80 80 80 00 00 20
    00 41 2A 6A 0B
    

    That is the module in its “binary” representation. I put quotes around binary because it’s usually displayed in hexadecimal notation, but that can be easily converted to binary notation, or to a human readable format.

    For example, here’s what num + 42 looks like.

    Table showing hexadecimal representation of 3 instructions (20 00 41 2A 6A), their binary representation, and then the text representation (get_local 0, i32.const 42, i32.add)

    How the code works: a stack machine

    In case you’re wondering, here’s what those instructions would do.

    Diagram showing that get_local 0 gets value of first param and pushes it on the stack, i32.const 42 pushes a constant value on the stack, and i32.add adds the top two values from the stack and pushes the result

    You might have noticed that the add operation didn’t say where its values should come from. This is because WebAssembly is an example of something called a stack machine. This means that all of the values an operation needs are queued up on the stack before the operation is performed.

    Operations like add know how many values they need. Since add needs two, it will take two values from the top of the stack. This means that the add instruction can be short (a single byte), because the instruction doesn’t need to specify source or destination registers. This reduces the size of the .wasm file, which means it takes less time to download.
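
To make the stack discipline concrete, here is a toy evaluator in JavaScript for the three instructions above (purely illustrative; this is not how engines execute WebAssembly):

function run(instructions, params) {
  const stack = [];
  for (const [op, arg] of instructions) {
    if (op === 'get_local') stack.push(params[arg]); // push a parameter's value
    if (op === 'i32.const') stack.push(arg);         // push a constant
    if (op === 'i32.add')   stack.push(stack.pop() + stack.pop()); // pop two, push sum
  }
  return stack.pop(); // the result is whatever is left on top
}

run([['get_local', 0], ['i32.const', 42], ['i32.add']], [1]); // 43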

    Even though WebAssembly is specified in terms of a stack machine, that’s not how it works on the physical machine. When the browser translates WebAssembly to the machine code for the machine the browser is running on, it will use registers. Since the WebAssembly code doesn’t specify registers, it gives the browser more flexibility to use the best register allocation for that machine.

    Sections of the module

    Besides the add42 function itself, there are other parts in the .wasm file. These are called sections. Some of the sections are required for any module, and some are optional.

    Required:

    1. Type. Contains the function signatures for functions defined in this module and any imported functions.
    2. Function. Gives an index to each function defined in this module.
    3. Code. The actual function bodies for each function in this module.

    Optional:

    1. Export. Makes functions, memories, tables, and globals available to other WebAssembly modules and JavaScript. This allows separately-compiled modules to be dynamically linked together. This is WebAssembly’s version of a .dll.
    2. Import. Specifies functions, memories, tables, and globals to import from other WebAssembly modules or JavaScript.
    3. Start. A function that will automatically run when the WebAssembly module is loaded (basically like a main function).
    4. Global. Declares global variables for the module.
    5. Memory. Defines the memory this module will use.
    6. Table. Makes it possible to map to values outside of the WebAssembly module, such as JavaScript objects. This is especially useful for allowing indirect function calls.
    7. Data. Initializes imported or local memory.
    8. Element. Initializes an imported or local table.

    For more on sections, here’s a great in-depth explanation of how these sections work.

    Coming up next

    Now that you know how to work with WebAssembly modules, let’s look at why WebAssembly is fast.

    Air MozillaMartes Mozilleros, 28 Feb 2017

    Martes Mozilleros Reunión bi-semanal para hablar sobre el estado de Mozilla, la comunidad y sus proyectos. Bi-weekly meeting to talk (in Spanish) about Mozilla status, community and...

    hacks.mozilla.orgWhat makes WebAssembly fast?

    This is the fifth part in a series on WebAssembly and what makes it fast. If you haven’t read the others, we recommend starting from the beginning.

    In the last article, I explained that programming with WebAssembly or JavaScript is not an either/or choice. We don’t expect that too many developers will be writing full WebAssembly code bases.

    So developers don’t need to choose between WebAssembly and JavaScript for their applications. However, we do expect that developers will swap out parts of their JavaScript code for WebAssembly.

    For example, the team working on React could replace their reconciler code (aka the virtual DOM) with a WebAssembly version. People who use React wouldn’t have to do anything… their apps would work exactly as before, except they’d get the benefits of WebAssembly.

    The reason developers like those on the React team would make this swap is because WebAssembly is faster. But what makes it faster?

    What does JavaScript performance look like today?

    Before we can understand the differences in performance between JavaScript and WebAssembly, we need to understand the work that the JS engine does.

    This diagram gives a rough picture of what the start-up performance of an application might look like today.

    The time that the JS engine spends doing any one of these tasks depends on the JavaScript the page uses. This diagram isn’t meant to represent precise performance numbers. Instead, it’s meant to provide a high-level model of how performance for the same functionality would be different in JS vs WebAssembly.

    Diagram showing 5 categories of work in current JS engines

    Each bar shows the time spent doing a particular task.

    • Parsing—the time it takes to process the source code into something that the interpreter can run.
    • Compiling + optimizing—the time that is spent in the baseline compiler and optimizing compiler. Some of the optimizing compiler’s work is not on the main thread, so it is not included here.
    • Re-optimizing—the time the JIT spends readjusting when its assumptions have failed, both re-optimizing code and bailing out of optimized code back to the baseline code.
    • Execution—the time it takes to run the code.
    • Garbage collection—the time spent cleaning up memory.

    One important thing to note: these tasks don’t happen in discrete chunks or in a particular sequence. Instead, they will be interleaved. A little bit of parsing will happen, then some execution, then some compiling, then some more parsing, then some more execution, etc.

This breakdown of work represents a big improvement over the early days of JavaScript, which would have looked more like this:

    Diagram showing 3 categories of work in past JS engines (parse, execute, and garbage collection) with times being much longer than previous diagram

    In the beginning, when it was just an interpreter running the JavaScript, execution was pretty slow. When JITs were introduced, it drastically sped up execution time.

    The tradeoff is the overhead of monitoring and compiling the code. If JavaScript developers kept writing JavaScript in the same way that they did then, the parse and compile times would be tiny. But the improved performance led developers to create larger JavaScript applications.

    This means there’s still room for improvement.

    How does WebAssembly compare?

    Here’s an approximation of how WebAssembly would compare for a typical web application.

    Diagram showing 3 categories of work in WebAssembly (decode, compile + optimize, and execute) with times being much shorter than either of the previous diagrams

    There are slight variations between browsers in how they handle all of these phases. I’m using SpiderMonkey as my model here.

    Fetching

    This isn’t shown in the diagram, but one thing that takes up time is simply fetching the file from the server.

Because WebAssembly is more compact than JavaScript, fetching it is faster. Even though compression can significantly reduce the size of a JavaScript bundle, the compressed binary representation of WebAssembly is still smaller.

    This means it takes less time to transfer it between the server and the client. This is especially true over slow networks.

    Parsing

    Once it reaches the browser, JavaScript source gets parsed into an Abstract Syntax Tree.

    Browsers often do this lazily, only parsing what they really need to at first and just creating stubs for functions which haven’t been called yet.

    From there, the AST is converted to an intermediate representation (called bytecode) that is specific to that JS engine.

    In contrast, WebAssembly doesn’t need to go through this transformation because it is already an intermediate representation. It just needs to be decoded and validated to make sure there aren’t any errors in it.

    Diagram comparing parsing in current JS engine with decoding in WebAssembly, which is shorter
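    To make the difference concrete, here's a minimal sketch of loading a module with the MVP JavaScript API (the file name is hypothetical, and it assumes a module that needs no imports). Note that there is no source-text parsing step: the bytes are decoded and validated directly.

        fetch('module.wasm')                          // hypothetical URL
          .then(response => response.arrayBuffer())
          .then(bytes => {
            // validate() decodes the binary and checks that it is well-formed.
            console.log('valid module?', WebAssembly.validate(bytes));
            // compile() decodes the bytes into a Module; no AST is built.
            return WebAssembly.compile(bytes);
          })
          .then(module => WebAssembly.instantiate(module))
          .then(instance => console.log('exports:', Object.keys(instance.exports)));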

    Compiling + optimizing

    As I explained in the article about the JIT, JavaScript is compiled during the execution of the code. Depending on what types are used at runtime, multiple versions of the same code may need to be compiled.

    Different browsers handle compiling WebAssembly differently. Some browsers do a baseline compilation of WebAssembly before starting to execute it, and others use a JIT.

    Either way, the WebAssembly starts off much closer to machine code. For example, the types are part of the program. This is faster for a few reasons:

    1. The compiler doesn’t have to spend time running the code to observe what types are being used before it starts compiling optimized code.
    2. The compiler doesn’t have to compile different versions of the same code based on those different types it observes.
    3. More optimizations have already been done ahead of time in LLVM. So less work is needed to compile and optimize it.
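    As a hedged illustration of reasons 1 and 2, here's the kind of JavaScript that makes a JIT observe types and compile more than one version of the same function. A WebAssembly add function declares its parameter and result types in the binary, so none of this bookkeeping is needed:

        function add(a, b) {
          return a + b;
        }

        // The JIT watches the running code and, after enough calls, compiles a
        // version specialized for numbers. (Exact thresholds vary by engine.)
        for (let i = 0; i < 10000; i++) {
          add(i, i + 1);
        }

        // Now strings show up on the same call path, so the engine also needs a
        // version that handles string concatenation.
        add('type', ' specialization');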

    Diagram comparing compiling + optimizing, with WebAssembly being shorter

    Reoptimizing

    Sometimes the JIT has to throw out an optimized version of the code and retry it.

    This happens when assumptions that the JIT makes based on running code turn out to be incorrect. For example, deoptimization happens when the variables coming into a loop are different than they were in previous iterations, or when a new function is inserted in the prototype chain.

    There are two costs to deoptimization. First, it takes some time to bail out of the optimized code and go back to the baseline version. Second, if that function is still being called a lot, the JIT may decide to send it through the optimizing compiler again, so there’s the cost of compiling it a second time.
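    As a sketch of the first trigger mentioned above (warm-up counts and heuristics vary by engine), here's a loop whose incoming values change type partway through, invalidating the number-specialized code:

        function sum(values) {
          let total = 0;
          for (const v of values) {
            total += v;
          }
          return total;
        }

        sum([1, 2, 3, 4]);      // repeated numeric calls let the JIT optimize for numbers
        sum([1, 2, 3, 4]);
        sum([1, 2, '3', 4]);    // a string sneaks in: the optimized code's assumption fails,
                                // so the engine bails out to baseline and may re-optimize later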

    In WebAssembly, things like types are explicit, so the JIT doesn’t need to make assumptions about types based on data it gathers during runtime. This means it doesn’t have to go through reoptimization cycles.

    Diagram showing that reoptimization happens in JS, but is not required for WebAssembly

    Executing

    It is possible to write JavaScript that executes performantly. To do it, you need to know about the optimizations that the JIT makes. For example, you need to know how to write code so that the compiler can type specialize it, as explained in the article on the JIT.

    However, most developers don’t know about JIT internals. Even for those developers who do know about JIT internals, it can be hard to hit the sweet spot. Many coding patterns that people use to make their code more readable (such as abstracting common tasks into functions that work across types) get in the way of the compiler when it’s trying to optimize the code.
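    For instance (a sketch; what each engine can specialize differs), a helper that works across many object shapes keeps its call site polymorphic, which blocks the specialization a monomorphic call site would get:

        // Readable and generic, but the JIT sees a different object shape on each call,
        // so it can't specialize the property access down to a single memory offset.
        function getId(thing) {
          return thing.id;
        }

        getId({ id: 1 });                 // shape A
        getId({ id: 2, name: 'b' });      // shape B
        getId({ name: 'c', id: 3 });      // shape C: even property order changes the shape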

    Plus, the optimizations a JIT uses are different between browsers, so coding to the internals of one browser can make your code less performant in another.

    Because of this, executing code in WebAssembly is generally faster. Many of the optimizations that JITs make to JavaScript (such as type specialization) just aren’t necessary with WebAssembly.

    In addition, WebAssembly was designed as a compiler target. This means it was designed for compilers to generate, and not for human programmers to write.

    Since human programmers don’t need to program it directly, WebAssembly can provide a set of instructions that are more ideal for machines. Depending on what kind of work your code is doing, these instructions run anywhere from 10% to 800% faster.

    Diagram comparing execution, with WebAssembly being shorter

    Garbage collection

    In JavaScript, the developer doesn’t have to worry about clearing out old variables from memory when they aren’t needed anymore. Instead, the JS engine does that automatically using something called a garbage collector.

    This can be a problem if you want predictable performance, though. You don’t control when the garbage collector does its work, so it may come at an inconvenient time. Most browsers have gotten pretty good at scheduling it, but it’s still overhead that can get in the way of your code’s execution.

    At least for now, WebAssembly does not support garbage collection at all. Memory is managed manually (as it is in languages like C and C++). While this can make programming more difficult for the developer, it does also make performance more consistent.
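    With an Emscripten-compiled module, for example, that manual management is visible even from JavaScript. This sketch assumes Emscripten's usual conventions, with _malloc and _free exported and Module as the generated loader object:

        const ptr = Module._malloc(16);          // reserve 16 bytes of the module's linear memory
        Module.HEAPU8.fill(0, ptr, ptr + 16);    // work with it through the typed-array views
        // ... pass ptr to the module's exported functions ...
        Module._free(ptr);                       // release it yourself; no garbage collector will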

    Diagram showing that garbage collection happens in JS, but is not required for WebAssembly

    Conclusion

    WebAssembly is faster than JavaScript in many cases because:

    • fetching WebAssembly takes less time because it is more compact than JavaScript, even when compressed.
    • decoding WebAssembly takes less time than parsing JavaScript.
    • compiling and optimizing takes less time because WebAssembly is closer to machine code than JavaScript and has already gone through optimization on the server side.
    • reoptimizing doesn’t need to happen because WebAssembly has types and other information built in, so the JS engine doesn’t need to speculate when it optimizes the way it does with JavaScript.
    • executing often takes less time because there are fewer compiler tricks and gotchas that the developer needs to know to write consistently performant code, and because WebAssembly's set of instructions is better suited to machines.
    • garbage collection is not required since the memory is managed manually.

    This is why, in many cases, WebAssembly will outperform JavaScript when doing the same task.

    There are some cases where WebAssembly doesn’t perform as well as expected, and there are also some changes on the horizon that will make it faster. I’ll cover those in the next article.

    hacks.mozilla.orgWhere is WebAssembly now and what’s next?

    This is the sixth part in a series on WebAssembly and what makes it fast. If you haven’t read the others, we recommend starting from the beginning.

    On February 28, the four major browsers announced their consensus that the MVP of WebAssembly is complete. This provides a stable initial version that browsers can start shipping.

    Personified logos of 4 major browsers high-fiving

    This core doesn't contain all of the features that the community group is planning, but it does provide enough to make WebAssembly fast and usable.

    With this, developers can start shipping WebAssembly code. For earlier versions of browsers, developers can send down an asm.js version of the code. Because asm.js is a subset of JavaScript, any JS engine can run it. With Emscripten, you can compile the same app to both WebAssembly and asm.js.
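    Feature detection keeps this fallback simple. A minimal sketch (the build file names are hypothetical):

        function loadScript(src) {
          const script = document.createElement('script');
          script.src = src;
          document.body.appendChild(script);
        }

        if (typeof WebAssembly === 'object') {
          loadScript('app.wasm.js');   // loader that fetches and instantiates the .wasm build
        } else {
          loadScript('app.asm.js');    // asm.js build: plain JavaScript any engine can run
        }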

    Even in the initial release, WebAssembly will be fast. But it should get even faster in the future, through a combination of fixes and new features.

    Improving WebAssembly performance in browsers

    Some speed improvements will come as browsers improve WebAssembly support in their engines. The browser vendors are working on these issues independently.

    Faster function calls between JS and WebAssembly

    Currently, calling a WebAssembly function in JS code is slower than it needs to be. That’s because it has to do something called “trampolining”. The JIT doesn’t know how to deal directly with WebAssembly, so it has to route the WebAssembly to something that does. This is a slow piece of code in the engine itself, which does setup to run the optimized WebAssembly code.

    Person jumping from JS on to a trampoline setup function to get to WebAssembly

    This can be up to 100x slower than it would be if the JIT knew how to handle it directly.

    You won’t notice this overhead if you’re passing a single large task off to the WebAssembly module. But if you have lots of back-and-forth between WebAssembly and JS (as you do with smaller tasks), then this overhead is noticeable.

    Faster load time

    JITs have to manage the tradeoff between faster load times and faster execution times. If you spend more time compiling and optimizing ahead of time, that speeds up execution, but it slows down start-up.

    There’s a lot of ongoing work to balance up-front compilation (which ensures there is no jank once the code starts running) and the basic fact that most parts of the code won’t be run enough times to make optimization worth it.

    Since WebAssembly doesn’t need to speculate what types will be used, the engines don’t have to worry about monitoring the types at runtime. This gives them more options, for example parallelizing compilation work with execution.

    Plus, recent additions to the JavaScript API will allow streaming compilation of WebAssembly. This means that the engine can start compiling while bytes are still being downloaded.
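    In code, streaming compilation looks like this sketch, using the WebAssembly.instantiateStreaming addition (the URL and the main export are hypothetical):

        const importObject = {};   // whatever imports the module declares, if any

        WebAssembly.instantiateStreaming(fetch('module.wasm'), importObject)
          .then(({ instance }) => {
            // Compilation overlapped with the download rather than waiting for all bytes.
            instance.exports.main();
          });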

    In Firefox we’re working on a two-compiler system. One compiler will run ahead of time and do a pretty good job at optimizing the code. While that’s running code, another compiler will do a full optimization in the background. The fully-optimized version of the code will be swapped in when it’s ready.

    Adding post-MVP features to the spec

    One of the goals of WebAssembly is to specify in small chunks and test along the way, rather than designing everything up front.

    This means there are lots of features that are expected, but haven’t been 100% thought-through yet. They will have to go through the specification process, which all of the browser vendors are active in.

    These features are called future features. Here are just a few.

    Working directly with the DOM

    Currently, there’s no way to interact with the DOM. This means you can’t do something like element.innerHTML to update a node from WebAssembly.

    Instead, you have to go through JS to set the value. This can mean passing a value back to the JavaScript caller. On the other hand, it can mean calling a JavaScript function from within WebAssembly—both JavaScript and WebAssembly functions can be used as imports in a WebAssembly module.

    Person reaching around from WebAssembly through JS to get to the DOM

    Either way, it is likely that going through JavaScript is slower than direct access would be. Some applications of WebAssembly may be held up until this is resolved.
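    Concretely, the JavaScript detour works by handing functions to the module as imports when you instantiate it. In this sketch the env/setText import names, the run export, and the wasmBytes buffer are all hypothetical and must match what the module itself declares:

        const importObject = {
          env: {
            // The WebAssembly code calls this import whenever it needs to touch the DOM.
            setText: value => {
              document.querySelector('#output').textContent = String(value);
            }
          }
        };

        WebAssembly.instantiate(wasmBytes, importObject)
          .then(({ instance }) => instance.exports.run());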

    Shared memory concurrency

    One way to speed up code is to make it possible for different parts of the code to run at the same time, in parallel. This can sometimes backfire, though, since the overhead of communication between threads can take up more time than the task would have in the first place.

    But if you can share memory between threads, it reduces this overhead. To do this, WebAssembly will use JavaScript's new SharedArrayBuffer. Once SharedArrayBuffers are in place in the browsers, the working group can start specifying how WebAssembly should work with them.
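    On the JavaScript side, the building block looks like this sketch (worker.js is hypothetical). The key point is that the buffer is shared with the worker rather than copied, and Atomics operations coordinate access to it:

        const shared = new SharedArrayBuffer(1024);  // one block of memory
        const view = new Int32Array(shared);

        const worker = new Worker('worker.js');
        worker.postMessage(shared);                  // shared with the worker, not copied

        Atomics.store(view, 0, 42);                  // safely publish a value both threads can see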

    SIMD

    If you read other posts or watch talks about WebAssembly, you may hear about SIMD support. The acronym stands for single instruction, multiple data. It’s another way of running things in parallel.

    SIMD makes it possible to take a large data structure, like a vector of different numbers, and apply the same instruction to different parts at the same time. In this way, it can drastically speed up the kinds of complex computations you need for games or VR.

    This is not too important for the average web app developer. But it is very important to developers working with multimedia, such as game developers.
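    Plain JavaScript can't express SIMD instructions directly, so the scalar loop below is only an illustration of the workload: with SIMD, an engine could perform these additions in lanes of four (or more) per instruction.

        function addVectors(a, b, out) {
          // Scalar: one addition per iteration. A SIMD version would process
          // several array elements with each instruction.
          for (let i = 0; i < a.length; i++) {
            out[i] = a[i] + b[i];
          }
        }

        const out = new Float32Array(4);
        addVectors(new Float32Array([1, 2, 3, 4]), new Float32Array([5, 6, 7, 8]), out);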

    Exception handling

    Many code bases in languages like C++ use exceptions. However, exceptions aren’t yet specified as part of WebAssembly.

    If you are compiling your code with Emscripten, it will emulate exception handling for some compiler optimization levels. This is pretty slow, though, so you may want to use the DISABLE_EXCEPTION_CATCHING flag to turn it off.
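    With Emscripten's -s settings syntax, that looks something like the line below (exact defaults and flags depend on your Emscripten version, so treat this as a sketch):

        emcc app.cpp -O2 -s DISABLE_EXCEPTION_CATCHING=1 -o app.js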

    Once exceptions are handled natively in WebAssembly, this emulation won’t be necessary.

    Other improvements—making things easier for developers

    Some future features don’t affect performance, but will make it easier for developers to work with WebAssembly.

    • First-class source-level developer tools. Currently, debugging WebAssembly in the browser is like debugging raw assembly. Very few developers can mentally map their source code to assembly, though. We’re looking at how we can improve tooling support so that developers can debug their source code.
    • Garbage collection. If you can define your types ahead of time, you should be able to turn your code into WebAssembly. So code using something like TypeScript should be compilable to WebAssembly. The only hitch currently, though, is that WebAssembly doesn't know how to interact with existing garbage collectors, like the one built into the JS engine. The idea of this future feature is to give WebAssembly first-class access to the builtin GC with a set of low-level GC primitive types and operations.
    • ES6 Module integration. Browsers are currently adding support for loading JavaScript modules using the script tag. Once this feature is added, a tag like <script src=url type="module"> could work even if url points to a WebAssembly module.

    Conclusion

    WebAssembly is fast today, and with new features and improvements to the implementation in browsers, it should get even faster.

    Air MozillaIDGA Diversity Mentor Cafe Evening Event

    IDGA Diversity Mentor Cafe Evening Event An evening event from 4pm-7pm for 70 attendees to do a short presentation and network.

    The Mozilla BlogMozilla Acquires Pocket

    We are excited to announce that the Mozilla Corporation has completed the acquisition of Read It Later, Inc., the developers of Pocket.

    Mozilla is growing, experimenting more, and doubling down on our mission to keep the internet healthy, as a global public resource that’s open and accessible to all. As our first strategic acquisition, Pocket contributes to our strategy by growing our mobile presence and providing people everywhere with powerful tools to discover and access high quality web content, on their terms, independent of platform or content silo.

    Pocket will join Mozilla’s product portfolio as a new product line alongside the Firefox web browsers with a focus on promoting the discovery and accessibility of high quality web content. (Here’s a link to their blog post on the acquisition).  Pocket’s core team and technology will also accelerate Mozilla’s broader Context Graph initiative.

    Pocket Application on Android, Desktop and iPhone

    “We believe that the discovery and accessibility of high quality web content is key to keeping the internet healthy by fighting against the rising tide of centralization and walled gardens. Pocket provides people with the tools they need to engage with and share content on their own terms, independent of hardware platform or content silo, for a safer, more empowered and independent online experience.” – Chris Beard, Mozilla CEO

    Pocket brings to Mozilla a successful human-powered content recommendation system with 10 million unique monthly active users on iOS, Android and the Web, and with more than 3 billion pieces of content saved to date.

    In working closely with Pocket over the last year around the integration within Firefox, we developed a shared vision and belief in the opportunity to do more together that has led to Pocket joining Mozilla today.

    “We’ve really enjoyed partnering with Mozilla over the past year. We look forward to working more closely together to support the ongoing growth of Pocket and to create great new products that people love in support of our shared mission.” – Nate Weiner, Pocket CEO

    As a result of this strategic acquisition, Pocket will become a wholly owned subsidiary of Mozilla Corporation and will become part of the Mozilla open source project.

    About Mozilla: Mozilla has been a pioneer and advocate for the open web for more than 15 years. We promote open standards that enable innovation and advance the Web as a platform for all. Today, hundreds of millions of people worldwide use Mozilla Firefox to experience the Web on computers, tablets and mobile devices. For more information, visit www.mozilla.org.

    About Pocket: Pocket, made by Read It Later, Inc., is the world’s leading save-for-later service. It currently has more than 10 million active monthly registered users and is integrated into hundreds of leading apps including Flipboard and Twitter. Pocket helps people save interesting articles, videos and more from the web for later enjoyment. Once saved to Pocket, content is visible on any device — phone, tablet or computer, online or off. Pocket is available for major devices and platforms including Firefox, Google Chrome, Safari, iOS, Android and Windows. For more information, visit www.getpocket.com/about.


    The post Mozilla Acquires Pocket appeared first on The Mozilla Blog.

    Air MozillaMozilla Weekly Project Meeting, 27 Feb 2017

    Mozilla Weekly Project Meeting The Monday Project Meeting

    Mozilla Add-ons BlogBecome a Featured Themes Collection Curator

    The Featured Themes collection is a great place to start if you’re looking for nice lightweight themes to personalize your Firefox. From kittens to foxes, winter snowscapes to sunny beaches, it is a continually rotating collection of high-quality themes in a variety of colors, images, and moods.

    Currently, volunteer theme reviewers are invited to help curate this collection, but we'd like to open it to more community participation. We invite theme creators with a keen eye for design to apply to become featured themes curators. Over a six-month period, volunteer curators will join a small group of theme reviewers and staff members to add 3–5 themes each week to the collection and to remove any themes that have been featured for longer than two weeks.

    To learn more about becoming a featured themes curator and what it entails, please take a look at the wiki. If you would like to apply to become a curator, please email cneiman [at] mozilla [dot] com with a link to your AMO profile, a brief statement about why you would make a strong curator, and a link to a collection of at least five themes that you feel are feature-worthy.

    The post Become a Featured Themes Collection Curator appeared first on Mozilla Add-ons Blog.