hacks.mozilla.org: Moving Firefox to a faster 4-week release cycle

Overview

We typically ship a major Firefox browser (Desktop and Android) release every 6 to 8 weeks. Building and releasing a browser is complicated and involves many players. To optimize the process, and make it more reliable for all users, over the years we’ve developed a phased release strategy that includes ‘pre-release’ channels: Firefox Nightly, Beta, and Developer Edition. With this approach, we can test and stabilize new features before delivering them to the majority of Firefox users via general release.

Today’s announcement

Today we’re excited to announce that we’re moving to a four-week release cycle! We’re adjusting our cadence to increase our agility and bring you new features more quickly. In recent quarters, we’ve had many requests to take features to market sooner, and feature teams are increasingly working in sprints that align better with shorter release cycles. Considering these factors, it is time we changed our release cadence.

Starting in Q1 2020, we plan to ship a major Firefox release every 4 weeks. The Firefox ESR (Extended Support Release, for enterprise users) cadence will remain the same. In the years to come, we anticipate a major ESR release every 12 months, with 3 months of support overlap between a new ESR and the end of life of the previous one. The next two major ESR releases will be ~June 2020 and ~June 2021.

Shorter release cycles provide greater flexibility to support product planning and priority changes due to business or market requirements. With four-week cycles, we can be more agile and ship features faster, while applying the same rigor and due diligence needed for a high-quality and stable release. Also, we put new features and implementation of new Web APIs into the hands of developers more quickly. (This is what we’ve been doing recently with CSS spec implementations and updates, for instance.)

In order to maintain quality and minimize risk in a shortened cycle, we must:

  • Ensure Firefox engineering productivity is not negatively impacted.
  • Speed up the regression feedback loop from rollout to detection to resolution.
  • Be able to control feature rollout based on release readiness.
  • Ensure adequate testing of larger features that span multiple release cycles.
  • Have clear, consistent mitigation and decision processes.

Firefox rollouts and feature experiments

Given a shorter Beta cycle, support for our pre-release channel users is essential, including developers using Firefox Beta or Developer Edition. We intend to roll out fixes to them as quickly as possible. Today, we produce two Beta builds per week. Going forward, we will move to more frequent Beta builds, similar to what we have today in Firefox Nightly.

Staged rollouts of features will be a continued best practice. This approach helps minimize unexpected (quality, stability or performance) disruptions to our release end-users. For instance, if a feature is deemed high-risk, we will plan for slow rollout to end-users and turn the feature off dynamically if needed.

We will continue to foster a culture of feature experimentation and A/B testing before rollout to release. Currently, the duration of experiments is not tied to a release cycle length and therefore not impacted by this change. In fact, experiment length is predominantly a factor of time needed for user enrollment, time to trigger the study or experiment and collect the necessary data, followed by data analysis needed to make a go/no-go decision.

Despite the shorter release cycles, we will do our best to localize all new strings in all locales supported by Firefox. We value our end-users from all across the globe. And we will continue to delight you with localized versions of Firefox.

Firefox release schedule 2019 – 2020

Firefox engineering will deploy this change gradually, starting with Firefox 71. We aim to achieve 4-week release cadence by Q1 2020. The table below lists Firefox versions and planned launch dates. Note: These are subject to change due to business reasons.

[Table: release dates for Firefox GA and pre-release channels, 2019–2020]

Process and product quality metrics

As we gradually reduce our release cycle length, from 7 weeks down to 6, 5, and finally 4 weeks, we will closely monitor aspects like release scope change; developer productivity impact (tree closures, build failures); Beta churn (uplifts, new regressions); and overall release stabilization and quality (stability, performance, carryover regressions). Our main goal is to identify bottlenecks that prevent us from being more agile in our release cadence. Should our metrics highlight an unexpected trend, we will put appropriate mitigations in place.

Finally, projects that consume Firefox mainline or ESR releases, such as SpiderMonkey and Tor, now have more frequent releases to choose from. These 4-week releases will be the most stable, fastest, and best quality Firefox builds.

In closing, we hope you’ll enjoy the new faster cadence of Firefox releases. You can always refer to https://wiki.mozilla.org/Release_Management/Calendar for the latest release dates and other information. Got questions? Please send email to release-mgmt@mozilla.com.


The Mozilla Blog: Examining AI’s Effect on Media and Truth

Mozilla is announcing its eight latest Creative Media Awards. These art and advocacy projects highlight how AI intersects with online media and truth, and how it impacts our everyday lives.


Today, one of the biggest issues facing the internet — and society — is misinformation.

It’s a complicated issue, but this much is certain: the artificial intelligence (AI) powering the internet is complicit. Platforms like YouTube and Facebook recommend and amplify content that will keep us clicking, even if it’s radical or flat-out wrong.

Earlier this year, Mozilla called for art and advocacy projects that illuminate the role AI plays in spreading misinformation. And today, we’re announcing the winners: Eight projects that highlight how AI like machine learning impacts our understanding of the truth.

These eight projects will receive Mozilla Creative Media Awards totalling $200,000, and will launch to the public by May 2020. They include a Turing Test app; a YouTube recommendation simulator; educational deepfakes; and more. Awardees hail from Japan, the Netherlands, Uganda, and the U.S. Learn more about each awardee below.

Mozilla’s Creative Media Awards fuel the people and projects on the front lines of the internet health movement. Past Creative Media Award winners have built mock dating apps that highlight algorithmic discrimination; they’ve created games that simulate the inherent bias of automated hiring; and they’ve published clever tutorials that mix cosmetic advice with cybersecurity best practices.

These eight awards align with Mozilla’s focus on fostering more trustworthy AI.


The winners


[1] Truth-or-Dare Turing Test | by Foreign Objects in the U.S.

This project explores deceptive AI that mimics real humans. Users play truth-or-dare with another entity, and at the conclusion of the game must guess whether they were playing with a fellow human or an AI. (“Truths” are played out using text, and “dares” are played out using an online sketchpad.) The project also includes a website outlining the state of mimicry technology, its uses, and its dangers.


[2] Swap the Curators in the Tube | by Tomo Kihara in Japan

This project explores how recommendation engines present different realities to different people. Users will peruse the YouTube recommendations of five wildly different personas — including a conspiracist and a racist persona — to experience how their recommendations differ.


[3] An Interview with ALEX | by Carrie Wang in the U.S.

The project is a browser-based experience that simulates a job interview with an AI in a future of gamified work and total surveillance. As the interview progresses, users learn that this automated HR manager is covering up the truth of this job, and using facial and speech recognition to make assumptions and decisions about them.


[4] The Future of Memory | by Xiaowei Wang, Jasmine Wang, and Yang Yuting in the U.S.

This project explores algorithmic censorship, and the ways language can be made illegible to such algorithms. It reverse-engineers how automated censors work, to provide a toolkit of tactics using a new “machine resistant” language, composed of emoji, memes, steganography and homophones. The project will also archive censored materials on a distributed, physical network of offline modules.


[5] Choose Your Own Fake News | by Pollicy in Uganda

This project uses comics and audio to explore how misinformation spreads across the African continent. Users engage in a choose-your-own-adventure game that simulates how retweets, comments, and other digital actions can sow misinformation, and how that misinformation intersects with gender, religion, and ethnicity.


[6] Deep Reckonings | by Stephanie Lepp in the U.S.

This project uses deepfakes to address the issue of deepfakes. Three false videos will show public figures — like tech executives — reckoning with the dangers of synthetic media. Each video will be clearly watermarked and labeled as a deepfake to prevent misinformation.


[7] In Event of Moon Disaster | by Halsey Burgund, Francesca Panetta, Magnus Bjerg Mortensen, Jeff DelViscio and the MIT Center for Advanced Virtuality

This project uses the 1969 moon landing to explore the topic of modern misinformation. Real coverage of the landing will be presented on a website alongside deepfakes and other false content, to highlight the difficulty of telling the two apart. And by tracking viewers’ attention, the project will reveal which content captivated viewers more.


[8] Most FACE Ever | by Kyle McDonald in the U.S.

This project teaches users about computer vision and facial recognition technology through playful challenges. Users will enable their webcam, engage with a facial recognition AI, and try to “look” a certain way — say, “criminal,” or “white.” The game reveals how inaccurate and biased facial recognition can often be.


These eight awardees were selected based on quantitative scoring of their applications by a review committee, and a qualitative discussion at a review committee meeting. Committee members included Mozilla staff, current and alumni Mozilla Fellows and Awardees, and outside experts. The selection criteria are designed to evaluate the merits of the proposed approach; diversity in applicant background, past work, and medium was also considered.

These awards are part of the NetGain Partnership, a collaboration between Mozilla, Ford Foundation, Knight Foundation, MacArthur Foundation, and the Open Society Foundation. The goal of this philanthropic collaboration is to advance the public interest in the digital age.

Also see (May 2019): Seeking Art that Explores AI, Media, and Truth


Open Policy & Advocacy: Governments should work to strengthen online security, not undermine it

On Friday, Mozilla filed comments in a case brought by Privacy International in the European Court of Human Rights involving government “computer network exploitation” (“CNE”)—or, as it is more colloquially known, government hacking.

While the case focuses on the direct privacy and freedom of expression implications of UK government hacking, Mozilla intervened in order to showcase the further, downstream risks to users and internet security inherent in state CNE. Our submission highlights the security and related privacy threats from government stockpiling and use of technology vulnerabilities and exploits.

Government CNE relies on the secret discovery or introduction of vulnerabilities—i.e., bugs in software, computers, networks, or other systems that create security weaknesses. “Exploits” are then built on top of the vulnerabilities. These exploits are essentially tools that take advantage of vulnerabilities in order to overcome the security of the software, hardware, or system for purposes of information gathering or disruption.

When such vulnerabilities are kept secret, they can’t be patched by companies, and the products containing the vulnerabilities continue to be distributed, leaving people at risk. The problem arises because no one—including government—can perfectly secure information about a vulnerability. Vulnerabilities can be and are independently discovered by third parties and inadvertently leaked or stolen from government. In these cases where companies haven’t had an opportunity to patch them before they get loose, vulnerabilities are ripe for exploitation by cybercriminals, other bad actors, and even other governments,1 putting users at immediate risk.

This isn’t a theoretical concern. For example, the findings of one study suggest that within a year, vulnerabilities undisclosed by a state intelligence agency may be rediscovered up to 15% of the time.2 Also, one of the worst cyber attacks in history was caused by a vulnerability and exploit stolen from the NSA in 2017 that affected computers running Microsoft Windows.3 The devastation wreaked through use of that tool continues apace today.4

This example also shows how damaging it can be when vulnerabilities impact products that are in use by tens or hundreds of millions of people, even if the actual government exploit was only intended for use against one or a handful of targets.

As more and more of our lives are connected, governments and companies alike must commit to ensuring strong security. Yet state CNE significantly contributes to the prevalence of vulnerabilities that are ripe for exploitation by cybercriminals and other bad actors and can result in serious privacy and security risks and damage to citizens, enterprises, public services, and governments. Mozilla believes that governments can and should contribute to greater security and privacy for their citizens by minimizing their use of CNE and disclosing vulnerabilities to vendors as they find them.

————————
1. https://www.wired.com/story/notpetya-cyberattack-ukraine-russia-code-crashed-the-world/
2. https://www.belfercenter.org/sites/default/files/files/publication/Vulnerability Rediscovery (belfer-revision).pdf
3. https://en.wikipedia.org/wiki/WannaCry_ransomware_attack
4. https://www.nytimes.com/2019/05/25/us/nsa-hacking-tool-baltimore.html


QMO: Firefox 70 Beta 6 Testday Results

Hello Mozillians!

As you may already know, on Friday, September 13th, we held a new Testday event for Firefox 70 Beta 6.

Thank you all for helping us make Mozilla a better place: Gabriela (gaby2300), Dan Caseley (Fishbowler) and Aishwarya Narasimhan!

Result: Several test cases were executed for Protection Report and Privacy Panel UI Updates.

Thanks for another awesome testday, we appreciate your contribution! 🙂

We hope to see you all in our next events, keep an eye on QMO.

We will make announcements as soon as something shows up!


Mozilla VR Blog: Creating privacy-centric virtual spaces


We now live in a world with instantaneous communication unrestrained by geography. While a generation ago, we would be limited by the speed of the post, now we’re limited by the speed of information on the Internet. This has changed how we connect with other people.

As immersive devices become more affordable, social spaces in virtual reality (VR) will become more integrated into our daily lives and interactions with friends, family, and strangers. Social media has enabled rapid pseudonymous communication, which can be directed at both a single person and large groups. If social VR is the next evolution of this, what approaches will result in spaces that respect user identities, autonomy, and safety?

We need spaces that reflect how we interact with others on a daily basis.

Social spaces: IRL and IVR

Often, when people think about social VR, what tends to come to mind are visions from the worlds of science fiction stories: Snow Crash, Ready Player One, The Matrix - huge worlds that involve thousands of strangers interacting virtually on a day-to-day basis. In today’s social VR ecosystem, many applications take a similarly public approach: new users are often encouraged (or forced) by the system to interact with new people in the name of developing relationships with strangers who are also participating in the shared world. This can result in more dynamic and populated spaces, but in a way that doesn’t follow naturally from our regular interactions.

This approach doesn’t mirror our usual day-to-day experiences—instead of spending time with strangers, we mostly interact with people we know. Whether we’re in a private, semi-public, or public space, we tend to stick to familiarity. We can define the privacy of space by thinking about who has access to a location, and the degree to which there is established trust among other people you encounter there.

  • Private: a controlled space where all individuals are known to each other. In the physical world, your home is an example of a private space—you know anyone invited into your home, whether they’re a close associate or a passing acquaintance (like a plumber).
  • Semi-public: a semi-controlled space where all individuals are associated with each other. For example, you might not know everyone in your workplace, but you’re all connected via your employer.
  • Public: a space made up of many different, separate groups of people who might not have established relationships or connections. In a restaurant, while you know the group you’re dining with, you likely don’t know anyone else.

Creating privacy-centric virtual spaces

While we might encounter strangers in public or semi-public spaces, most of our interactions are still with people we know. This should extend to the virtual world. However, VR devices haven’t been widely available until recently, so most companies building virtual worlds have designed their spaces in a way that prioritizes getting people in the same space, regardless of whether or not those users already know each other.

For many social VR systems, the platform hosting spaces often networks different environments and worlds together and provides a centralized directory of user-created content to go explore. While this type of discovery has benefits and values, in the physical world, we largely spend time with the same people from day to day. Why don’t we design a social platform around this?

Mozilla Hubs is a social VR platform created to provide spaces that more accurately emulate our IRL interactions. Instead of hosting a connected, open ecosystem, users create their own independent, private-by-default rooms. This creates a world where instead of wandering into others’ spaces, you intentionally invite people you know into your space.

Private by default

Communities and societies often establish their own cultural norms, signals, inside jokes, and unspoken (or written) rules — these carry over to online spaces. It can be difficult for people to be thrown into brand-new groups of users without this understanding, and there are often no guarantees that the people you’ll be interacting with in these public spaces will be receptive to other users who are joining. In contrast to these public-first platforms, we’ve designed our social VR platform, Hubs, to be private by default. This means that instead of being in an environment with strangers from the outset, Hubs rooms are designed to be private to the room owner, who can then choose who they invite into the space with the room access link.

Protecting public spaces

When we’re in public spaces, we have different sets of implied rules than the social norms that we might abide by when we’re in private. In virtual spaces, these rules aren’t always as clear and different people will behave differently in the absence of established rules or expectations. Hubs allows communities to set up their own public spaces, so that they can bring their own social norms into the spaces. When people are meeting virtually, it’s important to consider the types of interactions that you’re encouraging.

Because access to a Hubs room is predicated on having the invitation URL, the degree to which that link is shared by the room owner or visitors to the room will dictate how public or private a space is. If you know that the only people in a Hubs room are you and two of your closest friends, you probably have a pretty good sense of how the three of you interact together. If you’re hosting a meetup and expecting a crowd, those behaviors can be less clear. Without intentional community management practices, virtual spaces can turn ugly. Here are some things that you could consider to keep semi-public or public Hubs rooms safe and welcoming:

  • Keep the distribution of the Hubs link limited to known groups of trusted individuals and consider using a form or registration to handle larger events.
  • Use an integration like the Hubs Discord Bot to tie users to a known identity. Removing users from a linked Discord channel also removes their ability to enter an authenticated Hubs room.
  • Lock down room permissions to limit who can create new objects in the room or draw with the pen tool.
  • Create a code of conduct or set of community guidelines that are posted in the Hubs room near the room entry point.
  • Assign trusted users to act as moderators, so that someone is available in the space to help welcome visitors and enforce positive conduct.

We need social spaces that respect and empower participants. Here at Mozilla, we’re creating a platform that more closely reflects how we interact with others IRL. Our spaces are private by default, and Hubs allows users to control who enters their space and how visitors can behave.

Mozilla Hubs is an open source social VR platform—come try it out at hubs.mozilla.com or contribute here.

Mozilla VR Blog: Multiview on WebXR


The WebGL multiview extension is already available in several browsers and 3D web engines, and it can easily help improve the performance of your WebXR application.

What is multiview?

When VR first arrived, many engines supported stereo rendering by running all the render stages twice, once for each camera/eye. While this works, it is highly inefficient.

for (eye in eyes)
	renderScene(eye)

Here renderScene sets up the viewport, shaders, and states every time it is called, doubling the cost of rendering every frame.

Later on, some optimizations started to appear in order to improve the performance and minimize the state changes.

for (object in scene) 
	for (eye in eyes)
		renderObject(object, eye)

Even if we reduce the number of state changes by switching programs and grouping objects, the number of draw calls remains the same: two times the number of objects.

In order to minimize this bottleneck, the multiview extension was created. The TL;DR of this extension is: using just one draw call you can draw to multiple targets, reducing the overhead per view.

[Image: Multiview on WebXR]

This is done by filling your shader uniforms with the information for each view and indexing them with gl_ViewID_OVR, similar to how the Instancing API works.

#version 300 es
#extension GL_OVR_multiview2 : require
layout(num_views = 2) in;

in vec4 inPos;
uniform mat4 u_viewMatrices[2];
void main() {
    gl_Position = u_viewMatrices[gl_ViewID_OVR] * inPos;
}

The resulting render loop with the multiview extension will look like:

for (object in scene)
    setUniformsForBothEyes() // Left/Right camera matrices
    renderObject(object)

This extension can be used to speed up multiple tasks, such as cascaded shadow maps, rendering cubemaps, or rendering multiple viewports as in CAD software, although the most common use case is stereo rendering.

Stereo rendering is also our main target, as this will improve the VR rendering path performance with just a few modifications in a 3D engine. Currently, most headsets have two views, but there are prototypes of headsets with ultra-wide FOV using 4 views, which is currently the maximum number of views supported by multiview.

Multiview in WebGL

Once the OpenGL OVR_multiview2 specification was created, the WebGL working group started to make a WebGL version of this API.

It’s been a while since our first experiment supporting multiview on Servo and three.js. Back then it was quite a challenge to support WEBGL_multiview: it was based on opaque framebuffers, and while it was possible to use it with WebGL1, the shaders needed to be compiled with GLSL 3.00 support, which was only available in WebGL2, so some hacks on the Servo side were needed in order to get it running.
At that time the WebVR spec had a proposal to support multiview, but it was never approved.

Thanks to the work of the WebGL WG, the multiview situation has improved a lot in the last few months. The specification is already in the Community Approved status, which means that browsers can ship it enabled by default (as we do in Firefox desktop 70 and Firefox Reality 1.4).

Some important restrictions of the final specification to notice:

  • It only supports WebGL2 contexts, as it needs GLSL 3.00 and texture arrays.
  • Currently there is no way to use multiview to render to a multisampled backbuffer, so you should create contexts with antialias: false. (The WebGL WG is working on a solution for this)

Web engines with multiview support

We have been working for a while on adding multiview support to three.js (PR). Currently it is possible to get the benefits of multiview automatically as long as the extension is available and you define a WebGL2 context without antialias:

var context = canvas.getContext( 'webgl2', { antialias: false } );
renderer = new THREE.WebGLRenderer( { canvas: canvas, context: context } );

You can see a three.js example using multiview here (source code).

A-Frame is based on three.js, so it should get multiview support as soon as it updates to the latest three.js release.

Babylon.js has had support for OVR_multiview2 already for a while (more info).

For details on how to use multiview directly, without any third-party engine, you can take a look at the three.js implementation, see the specification’s sample code, or read this tutorial by Oculus.
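To give a flavor of the raw API, here is a minimal sketch of setting up a multiview framebuffer with OVR_multiview2 in plain WebGL2. It assumes canvas, width, and height already exist, and omits the depth attachment and error handling:

const gl = canvas.getContext('webgl2', { antialias: false });
const ext = gl.getExtension('OVR_multiview2');

if (ext) {
    const fb = gl.createFramebuffer();
    gl.bindFramebuffer(gl.DRAW_FRAMEBUFFER, fb);

    // The color attachment is a texture array with one layer per view (eye).
    const color = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D_ARRAY, color);
    gl.texStorage3D(gl.TEXTURE_2D_ARRAY, 1, gl.RGBA8, width, height, 2);

    // Attach both layers so a single draw call renders both views; the
    // vertex shader selects the right matrix via gl_ViewID_OVR (see above).
    ext.framebufferTextureMultiviewOVR(
        gl.DRAW_FRAMEBUFFER, gl.COLOR_ATTACHMENT0, color, 0, 0, 2);
}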

Browsers with multiview support

The extension was approved by the community only recently, so we expect to see all the major browsers adding support for it by default soon:

  • Firefox Desktop: Firefox 71 will support multiview enabled by default. In the meantime, you can test it on Firefox Nightly by enabling draft extensions.
  • Firefox Reality: It’s already enabled by default since version 1.3.
  • Oculus Browser: It’s implemented but disabled by default, you must enable Draft WebGL Extension preference in order to use it.
  • Chrome: You can use it on Chrome Canary for Windows by running it with the following command line parameters: --use-cmd-decoder=passthrough --enable-webgl-draft-extensions

Performance improvements

Most WebGL or WebXR applications are CPU bound: the more objects you have in the scene, the more draw calls you will submit to the GPU. In our benchmarks for stereo rendering with two views, we got a consistent improvement of ~40% compared to traditional rendering.
As you can see in the following chart, the more cubes (draw calls) you have to render, the bigger the performance gain.
[Chart: performance of multiview vs. traditional rendering as the number of cubes increases]

What’s next?

The main drawback of the current multiview extension is that there is no way to render to a multisampled backbuffer, so in order to use it with WebXR you must set antialias: false when creating the context. However, this is something the WebGL WG is working on.

As soon as they come up with a proposal and it is implemented by browsers, 3D engines should support it automatically. Hopefully, we will see new extensions arriving in the WebGL and WebXR ecosystem to improve the performance and quality of rendering, such as the ones exposed by Nvidia VRWorks (e.g. Variable Rate Shading and Lens Matched Shading).

References

https://developer.nvidia.com/vrworks/graphics/multiview
https://developer.oculus.com/documentation/mobilesdk/latest/concepts/mobile-multiview/
https://www.khronos.org/registry/OpenGL/extensions/OVR/OVR_multiview2.txt
https://community.arm.com/developer/tools-software/graphics/b/blog/posts/optimizing-virtual-reality-understanding-multiview
https://arm-software.github.io/opengl-es-sdk-for-android/multiview.html
https://github.com/KhronosGroup/WebGL/issues/2912
https://developer.oculus.com/documentation/oculus-browser/latest/concepts/browser-multiview/

* Header image by Nvidia VRWorks

Mozilla VR Blog: Firefox Reality 1.4


Firefox Reality 1.4 is now available for users in the Viveport and Oculus stores.

With this release, we’re excited to announce that users can enjoy browsing in multiple windows side-by-side. Each window can be set to the size and position of your choice, for a super customizable experience.

[Image: multiple browser windows side-by-side in Firefox Reality]

And, by popular demand, we’ve enabled local browsing history, so you can get back to sites you've visited before without typing. Sites in your history will also appear as you type in the search bar, so you can complete the address quickly and easily. You can clear your history or turn it off anytime from within Settings.

The Content Feed also has a new and improved menu of hand-curated “Best of WebVR” content for you to explore. You can look forward to monthly updates featuring a selection of new content across different categories including Animation, Extreme (sports/adrenaline/adventure), Music, Art & Experimental and our personal favorite way to wind down a day, 360 Chill.

Additional highlights

  • Movable keyboard, so you can place it where it’s most comfortable to type.
  • Tooltips on buttons and actions throughout the app.
  • Updated look and feel for the Bookmarks and History views, so you can see and interact with them better at all window sizes.
  • An easy way to request the desktop version of a site that doesn’t display well in VR, right from the search bar.
  • Updated and reorganized settings to be easier to find and understand.
  • Added the ability to set a preferred website language order.

Full release notes can be found in our GitHub repo here.

Stay tuned as we keep improving Firefox Reality! We’re currently working on integrating your Firefox Account so you’ll be able to easily send tabs to and from VR from other devices. New languages and copy/paste are also coming soon, in addition to continued improvements in performance and stability.

Firefox Reality is available right now. Go and get it!
Download for Oculus Go
Download for Oculus Quest
Download for Viveport (Search for Firefox Reality in Viveport store)

Mozilla VR Blog: WebXR emulator extension


We are happy to announce the release of our WebXR emulator browser extension which helps WebXR content creation.

We understand that developing and debugging WebXR experiences is hard for many reasons:

  • You must own a physical XR device
  • Lack of support for XR devices on some platforms, such as macOS
  • Putting on and taking off the headset all the time is an uncomfortable task
  • In order to make your app responsive across form factors, you must own tons of devices: mobile, tethered, 3dof, 6dof, and so on

With this extension, we aim to soften most of these issues.

The WebXR emulator extension emulates XR devices so that you can enter immersive (VR) mode directly from your desktop browser and test your WebXR application without the need for any XR device. It emulates multiple XR devices, so you can select which one you want to test.
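For example, with the extension active, a standard WebXR capability check in your page script should report that an immersive session is available, no hardware required (a quick sketch using the current WebXR API):

// Ask the browser whether an immersive VR session can be created.
// With the emulator extension installed, this succeeds without a headset.
if (navigator.xr) {
    navigator.xr.isSessionSupported('immersive-vr').then((supported) => {
        console.log('immersive-vr supported?', supported);
    });
}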

The extension is built on top of the WebExtensions API, so it works on Firefox, Chrome, and other browsers supporting the API.

[Image: WebXR emulator extension]

How can I use it?

  1. Install the extension from the extension stores (Firefox, Chrome)
  2. Launch a WebXR application, for example the three.js examples. You will notice that the application detects that you have a VR device (emulated) and it will let you enter the immersive (VR) mode.
  3. Open the “WebXR” tab in the browser’s developer tool (Firefox, Chrome) to control the emulated device. You can move the headset and controllers and trigger the controller buttons. You will see their transforms reflected in the WebXR application.
    [Screenshot: the WebXR tab in the browser’s developer tools]

What’s next?

The development of this extension is still at an early stage. We have many awesome features planned, including:

  • Recording and replaying of actions and movements of your XR devices so you don’t have to replicate them every time you want to test your app and can share them with others.
  • Incorporate new XR devices
  • Control the headset and controllers using a standard gamepad like the Xbox or PS4 controllers, or use your mobile phone as a 3dof device
  • Something else?

We would love your feedback! What new features do you want next? Any problems with the extension on your WebXR application? Please join us on GitHub to discuss them.

Lastly, we would like to give a shout out to the WebVR API emulation Extension by Jaume Sanchez as it was a true inspiration for us when building this one.

Open Policy & Advocacy: CASE Act Threatens User Rights in the United States

This week, the House Judiciary Committee is expected to mark up the Copyright Alternative in Small Claims Enforcement (CASE) Act of 2019 (H.R. 2426). While the bill is designed to streamline the litigation process, it will impose severe costs upon users and the broader internet ecosystem. More specifically, the legislation would create a new administrative tribunal for claims with limited legal recourse for users, incentivizing copyright trolling and violating constitutional principles. Mozilla has always worked for copyright reform that supports businesses and internet users, and we believe that the CASE Act will stunt innovation and chill free expression online. With this in mind, we urge members to oppose passage of H.R. 2426.

First, the tribunal created by the legislation conflicts with well-established separation of powers principles and limits due process for potential defendants. Under the CASE Act, a new administrative board would be created within the Copyright Office to review claims of infringement. However, as Professor Pamela Samuelson and Kathryn Hashimoto of Berkeley Law point out, it is not clear that Congress has the authority under Article I of the Constitution to create this tribunal. Although Congress can create tribunals that adjudicate “public rights” matters between the government and others, the creation of a board to decide infringement disputes between two private parties would represent an overextension of its authority into an area traditionally governed by independent Article III courts.

Moreover, defendants subject to claims under the CASE Act will be funneled into this process with strictly limited avenues for appeal. The legislation establishes the tribunal as a default legal process for infringement claims–defendants will be forced into the process unless they explicitly opt-out. This implicitly places the burden on the user, and creates a more coercive model that will disadvantage defendants who are unfamiliar with the nuances of this new legal system. And if users have objections to the decision issued by the tribunal, the legislation severely restricts access to justice by limiting substantive court appeals to cases in which the board exceeded its authority; failed to render a final determination; or issued a determination as a result of fraud, corruption, or other misconduct.

While the board is supposed to be reserved for small claims, the tribunal is authorized to award damages of up to $30,000 per proceeding. For many people, this supposedly “small” amount would be enough to completely wipe out their household savings. Since the forum allows for statutory damages to be imposed, the plaintiff does not even have to show any actual harm before imposing potentially ruinous costs on the defendant.

These damages awards are completely out of place in what is being touted as a small claims tribunal. As Stan Adams of the Center for Democracy and Technology notes, awards as high as $30,000 exceed the maximum awards for small claims courts in 49 out of 50 states. In some cases, they would be ten times higher than the damages available in small claims court.

The bill also authorizes the Register of Copyrights to unilaterally establish a forum for claims of up to $5,000 to be decided by a single Copyright Claims Officer, without any pre-established explicit due process protections for users. These amounts may seem negligible in the context of a copyright suit, where damages can reach up to $150,000, but nearly 40 percent of Americans cannot cover a $400 emergency today.

Finally, the CASE Act will give copyright trolls a favorable forum. In recent years, some unscrupulous actors made a business of threatening thousands of Internet users with copyright infringement suits. These suits are often based on flimsy, but potentially embarrassing, allegations of infringement of pornographic works. Courts have helped limit the worst impact of these campaigns by making sure the copyright owner presented evidence of a viable case before issuing subpoenas to identify Internet users. But the CASE Act will allow the Copyright Office to issue subpoenas with little to no process, potentially creating a cheap and easy way for copyright trolls to identify targets.

Ultimately, the CASE Act will create new problems for internet users and exacerbate existing challenges in the legal system. For these reasons, we ask members to oppose H.R. 2426.


The Mozilla Blog: Firefox’s Test Pilot Program Returns with Firefox Private Network Beta

Like a cat, the Test Pilot program has had many lives. It originally started as an Add-on before we relaunched it three years ago. Then in January, we announced that we were evolving our culture of experimentation, and as a result we closed the Test Pilot program to give us time to further explore what was next.

We learned a lot from the Test Pilot program. First, we had a loyal group of users who provided us feedback on projects that weren’t polished or ready for general consumption. Based on that input we refined and revamped various features and services, and in some cases shelved projects altogether because they didn’t meet the needs of our users. The feedback we received helped us evaluate a variety of potential Firefox features, some of which are in the Firefox browser today.

If you haven’t heard, third time’s the charm. We’re turning to our loyal and faithful users, specifically the ones who signed up for a Firefox account and opted in to hear about new product testing, and giving them first crack at test-driving new, privacy-centric products as part of the relaunched Test Pilot program. The difference with the newly relaunched Test Pilot program is that these products and services may live outside the Firefox browser, and will be far more polished, just one step shy of general public release.

We’ve already earmarked a couple of new products that we plan to fine-tune before their official release as part of the relaunched Test Pilot program. Because of how much we learned from our users through the Test Pilot program, and our ongoing commitment to build our products and services to meet people’s online needs, we’re kicking off our relaunch of the Test Pilot program by beta testing our project code named Firefox Private Network.

Try our first beta – Firefox Private Network

One of the key learnings from recent events is that there is growing demand for privacy features. The Firefox Private Network is an extension which provides a secure, encrypted path to the web to protect your connection and your personal information anywhere and everywhere you use your Firefox browser.

There are many ways that your personal information and data are exposed: online threats are everywhere, whether it’s through phishing emails or data breaches. You may often find yourself taking advantage of the free WiFi at the doctor’s office, airport or a cafe. There can be dozens of people using the same network — casually checking the web and getting social media updates. This leaves your personal information vulnerable to those who may be lurking, waiting to take advantage of this situation to gain access to your personal info. Using the Firefox Private Network helps protect you from hackers lurking in plain sight on public connections.

Start testing the Firefox Private Network today; it’s currently available in the US on the Firefox desktop browser. A Firefox account allows you to be one of the first to test potential new products and services, and you can sign up directly from the extension.


Key features of Firefox Private Network are:

  • Protection when in public WiFi access points – Whether you are waiting at your doctor’s office, the airport or working from your favorite coffee shop, your connection to the internet is protected when you use the Firefox browser thanks to a secure tunnel to the web, protecting all your sensitive information like the web addresses you visit, personal and financial information.
  • Internet Protocol (IP) addresses are hidden so it’s harder to track you – Your IP address is like a home address for your computer. One of the reasons why you may want to keep it hidden is to keep advertising networks from tracking your browsing history. Firefox Private Network will mask your IP address providing protection from third party trackers around the web.
  • Toggle the switch on at any time – By clicking the browser extension icon, you will find an on/off toggle that shows you whether you are currently protected. You can turn it on at any time if you’d like additional privacy protection, or off if it’s not needed at that moment.

Your feedback on Firefox Private Network beta is important

Over the next several months you will see a number of variations on our testing of the Firefox Private Network. This iterative process will give us much-needed feedback to explore technical and possible pricing options for the different online needs that the Firefox Private Network meets.

Your feedback will be essential in making sure that we offer a full complement of services that address the problems you face online with the right-priced service solutions. We depend on your feedback and we will send a survey to follow up. We hope you can spend a few minutes to complete it and let us know what you think. Please note this first Firefox Private Network Beta test will only be available to start in the United States for Firefox account holders using desktop devices. We’ll keep you updated on our eventual beta test roll-outs in other locales and platforms.

Sign up now for a Firefox account and join the fight to keep the internet open and accessible to all.


hacks.mozilla.org: Caniuse and MDN compatibility data collaboration

Web developers spend a good amount of time making web compatibility decisions. Deciding whether or not to use a web platform feature often depends on its availability in web browsers.

A brief history of compatibility data

More than 10 years ago, @fyrd created the caniuse project, to help developers check feature availability across browsers. Over time, caniuse has evolved into the go-to resource to answer the question that comes up day to day: “Can I use this?”

About 2 years ago, the MDN team started redoing its browser compatibility tables. The team was on a mission to take the guesswork out of web compatibility. Since then, the browser-compat-data (BCD) project has become a large dataset with more than 10,000 data points. It stays up to date with the help of over 500 contributors on GitHub.

MDN compatibility data is available as open data on npm and has been integrated in a variety of projects including VS Code and webhint.io auditing.
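As a quick illustration, here is how a Node script might query that package (published as mdn-browser-compat-data at the time of writing); the lookup path follows the dataset’s documented __compat structure:

// npm install mdn-browser-compat-data
const bcd = require('mdn-browser-compat-data');

// Look up browser support for the CSS `grid` property.
const grid = bcd.css.properties.grid.__compat;
console.log(grid.support.firefox); // e.g. { version_added: "52" }
console.log(grid.mdn_url);         // link back to the MDN reference page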

Two great data sources come together

Today we’re announcing the integration of MDN’s compat data into the caniuse website. Together, we’re bringing even more web compatibility information into the hands of web developers.

[Image: Caniuse table for Intl.RelativeTimeFormat, with data imported from mdn-compat-data]

Before we began our collaboration, the caniuse website only displayed results for features available in the caniuse database. Now all search results can include support tables for MDN compat data. This includes data types already found on caniuse, specifically the HTML, CSS, JavaScript, Web API, SVG, and HTTP categories. By adding MDN data, the caniuse support table count expands from roughly 500 to 10,500 tables! Developers’ caniuse queries on what’s supported where will now have significantly more results.

The new feature tables will look a little different. Because the MDN compat data project and caniuse have compatible yet somewhat different goals, the implementation is a little different too. While the new MDN-based tables don’t have matching fields for all the available metadata (such as links to resources and a full feature description), support notes and details such as bug information, prefixes, feature flags, etc. will be included.

The MDN compatibility data itself is converted under the hood to the same format used in caniuse compat tables. Thus, users can filter and arrange MDN-based data tables in the same way as any other caniuse table. This includes access to browser usage information, either by region or imported through Google Analytics to help you decide when a feature has enough support for your users. And the different view modes available via both datasets help visualize support information.
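Roughly speaking, converted entries end up in the same shape as native caniuse features. Here is a simplified, illustrative sketch of that support-table format (field names follow the public caniuse-db schema; the version numbers are examples only):

const feature = {
    title: 'Intl.RelativeTimeFormat',
    // Per-browser stats: 'y' = supported, 'n' = not supported, 'a' = partial.
    stats: {
        firefox: { '64': 'n', '65': 'y' },
        chrome: { '70': 'n', '71': 'y' },
    },
};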

Differences in the datasets

We’ve been asked why the datasets are treated differently. Why didn’t we merge them in the first place? We discussed and considered this option. However, due to the intrinsic differences between our two projects, we decided not to. Here’s why:

MDN’s support data is very broad and covers feature support at a very granular level. This allows MDN to provide as much detailed information as possible across all web technologies, supplementing the reference information provided by MDN Web Docs.

Caniuse, on the other hand, often looks at larger features as a whole (e.g. CSS Grid, WebGL, specific file format support). The caniuse approach provides developers with higher level at-a-glance information on whether the feature’s supported. Sometimes detail is missing. Each individual feature is added manually to caniuse, with a primary focus on browser support coverage rather than on feature coverage overall.

Because of these and other differences in implementation, we don’t plan on merging the source data repositories or matching the data schema at this time. Instead, the integration works by matching the search query to the feature’s description on caniuse.com. Then, caniuse generates an appropriate feature table, and converts MDN support data to the caniuse format on the fly.

What’s next

We encourage community members of both repos, caniuse and mdn-compat-data, to work together to improve the underlying data. By sharing information and collaborating wherever possible, we can help web developers find answers to compatibility questions.


Mozilla VR Blog: Semantic Placement in Augmented Reality using MrEd


In this article we’re going to take a brief look at how we may want to think about placement of objects in Augmented Reality. We're going to use our recently released lightweight AR editing tool MrEd to make this easy to demonstrate.

Designers often express ideas in a domain appropriate language. For example a designer may say “place that chair on the floor” or “hang that photo at eye level on the wall”.

However when we finalize a virtual scene in 3d we often keep only the literal or absolute XYZ position of elements and throw out the original intent - the deeper reason why an object ended up in a certain position.

It turns out that it’s worth keeping the intention - so that when AR scenes are re-created for new participants or in new physical locations that the scenes still “work” - that they still are satisfying experiences - even if some aspects change.

In a sense this recognizes the Japanese term 'Wabi-Sabi': aesthetic placement is always imperfect and contends with fickle forces. Describing placement in terms of semantic intent is also similar to responsive design on the web or the idea of design patterns as described by Christopher Alexander.

Let’s look at two simple examples of semantic placement in practice.

1. Relative to the Ground

When you’re placing objects in augmented reality you often want to specify that those objects should be relationally placed in a position relative to other objects. A typical, in fact ubiquitous, example of placement is that often you want an object to be positioned relative to “the ground”.

Sometimes the designer's intent is to select the highest surface underneath the object in question (such as placing a lamp on a table), and at other times the lowest surface underneath it (such as placing a kitten on the floor under a table). Often we may also want to express a placement in the air - such as a mailbox, or a bird.

In this very small example I’ve attached a ground detection script to a duck, and then sprinkled a few other passive objects around the scene. As the ground is detected the duck will pop down from a default position to be offset relative to the ground (although still in the air). See the GIF above for an example of the effect.

To try this scene out yourself you will need WebXR for iOS, a preview of emerging WebXR standards that uses iOS ARKit to expose augmented reality features in a browser environment. This is the URL for the scene above in play mode (on a WebXR-capable device):

https://painted-traffic.glitch.me/.mred/build/?mode=play&doc=doc_103575453

Here is what it should look like in edit mode:

[Screenshot: the scene in MrEd edit mode]

You can also clone the glitch and edit the scene yourself (you’ll want to remember to set a password in the .env file and then login from inside MrEd). See:

https://glitch.com/edit/#!/painted-traffic

Here’s my script itself:

/// #title grounded
/// #description Stick to Floor/Ground - dynamically and constantly searching for low areas nearby
({
    start: function(evt) {
        this.sgp.startWorldInfo()
    },
    tick: function(e) {
        let floor = this.sgp.getFloorNear({point:e.target.position})
        if(floor) {
            e.target.position.y = floor.y
        }
    }
})

This is relying on code baked into MrEd (specifically inside of findFloorNear() in XRWorldInfo.js if you really want to get detailed).

In the above example I begin by calling startWorldInfo() to start painting the ground planes, so that I can see them (it’s nice to have visual feedback). Then, every tick, I call a floor-finder subroutine which simply returns the best guess as to the floor in that area. The floor-finding logic in this case is pre-defined, but one could easily imagine other kinds of floor-finding strategies that were more flexible.
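For instance, a “highest surface below the object” strategy (a lamp resting on a table rather than on the floor) might look like the sketch below, written in the same script style. Note that getPlanesNear() is a hypothetical helper used only for illustration; it is not part of MrEd’s actual API:

/// #title highest-surface
/// #description Rest on the highest detected surface below the object
({
    tick: function(e) {
        // Hypothetical helper: returns all detected planes near a point.
        let planes = this.sgp.getPlanesNear({point: e.target.position})
        if (!planes || planes.length === 0) return
        // Keep only surfaces at or below the object, then take the highest,
        // e.g. a table top rather than the floor underneath it.
        let below = planes.filter(p => p.y <= e.target.position.y)
        if (below.length > 0) {
            e.target.position.y = Math.max(...below.map(p => p.y))
        }
    }
})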

2. Follow the player

Another common designer intent is to make sure that some content is always visible to the player. For designers in virtual or augmented reality, it can be challenging to direct a user’s attention to virtual objects. These are immersive 3D worlds; the player can be looking in any direction. Some kind of mechanic is needed to help make sure that the player sees what they need to see.

One common, simple solution is to build an object that stays in front of the user. This can itself be a combination of multiple simpler behaviors: an object can be ordered to seek a position in front of the user, stay at a certain height, and ideally be billboarded so that any signage or message is always legible.
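In plain three.js terms, such a combined “follow and billboard” behavior might look something like this sketch; the names sign, camera, and distance are illustrative:

import * as THREE from 'three';

// Each frame: keep `sign` a fixed distance in front of the camera,
// then rotate it to face the camera so its text stays legible.
function followAndBillboard(sign, camera, distance = 2) {
    const forward = new THREE.Vector3();
    camera.getWorldDirection(forward);
    sign.position.copy(camera.position).addScaledVector(forward, distance);
    sign.lookAt(camera.position);
}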

In this example a sign is decorated with two separate scripts, one to keep the sign in front of the player, and another to billboard the sign to face the player.

https://painted-traffic.glitch.me/.mred/build/?mode=edit&doc=doc_875751741&doctype=vr

Closing thoughts

We’ve only scratched the surface of the kinds of intent that could be expressed or combined together. If you want to dive deeper, there is a longer list in a separate article, Laundry List of UX Patterns. I also invite you to help extend the industry: think both about what high-level intentions you mean when you place objects and about how you'd communicate those intentions.

The key insight here is that preserving semantic intent means thinking of objects as intelligent, able to respond to simple high-level goals. Virtual objects are more than just statues or art at a fixed position; they can be entities that do your bidding and follow high-level rules.

Ultimately, future 3D tools will almost certainly provide these kinds of services, much in the way CSS provides layout directives. We should also expect to see conventions emerge as more designers begin to work in this space. As a call to action, it's worth noticing the high-level intentions you have when you place objects, and encouraging the developers of the tools you use to incorporate those intentions as primitives.

hacks.mozilla.org: Debugging TypeScript in Firefox DevTools

Firefox Debugger has evolved into a fast and reliable tool chain over the past several months and it’s now supporting many cool features. Though primarily used to debug JavaScript, did you know that you can also use Firefox to debug your TypeScript applications?

Before we jump into real-world examples, note that today’s browsers can’t run TypeScript code directly. It’s important to understand that TypeScript needs to be compiled into JavaScript before it’s included in an HTML page.

Also, debugging TypeScript is done through a source-map, and so we need to instruct the compiler to produce a source-map for us as well.

You’ll learn the following in this post:

  • Compiling TypeScript to JavaScript
  • Generating source-map
  • Debugging TypeScript

Let’s get started with a simple TypeScript example.

TypeScript Example

The following code snippet shows a simple TypeScript hello world page.

// hello.ts
 
interface Person {
  firstName: string;
  lastName: string;
}
 
function hello(person: Person) {
  return "Hello, " + person.firstName + " " + person.lastName;
}
 
function sayHello() {
  let user = { firstName: "John", lastName: "Doe" };
  document.getElementById("output").innerText = hello(user);
}

TypeScript (TS) is very similar to JavaScript and the example should be understandable even for JS developers unfamiliar with TypeScript.

The corresponding HTML page looks like this:

// hello.html
 
<!DOCTYPE html>
<html>
<head>
  <script src="hello.js"></script>
</head>
<body">
  <button onclick="sayHello()">Say Hello!</button>
  <div id="output"></div>
</body>
</html>

Note that we are including hello.js, not the hello.ts file, in the HTML file. Today’s browsers can’t run TS directly, so we need to compile our hello.ts file into regular JavaScript.

The rest of the HTML file should be clear. There is one button that executes the sayHello() function and <div id="output"> that is used to show the output (hello message).

Next step is to compile our TypeScript into JavaScript.

Compiling TypeScript To JavaScript

To compile TypeScript into JavaScript you need to have a TypeScript compiler installed. This can be done through NPM (Node Package Manager).

npm install -g typescript

Using the following command, we can compile our hello.ts file. It should produce a JavaScript version of the file with the *.js extension.

tsc hello.ts

In order to produce a source-map that describes the relationship between the original code (TypeScript) and the generated code (JavaScript), you need to use an additional --sourceMap argument. It generates a corresponding *.map file.

tsc hello.ts --sourceMap

Yes, it’s that simple.

You can read more about other compiler options if you are interested.
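If your project uses a tsconfig.json instead of command-line flags, the equivalent configuration looks like this:

{
  "compilerOptions": {
    "sourceMap": true
  }
}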

The generated JS file should look like this:

function greeter(person) {
  return "Hello, " + person.firstName + " " + person.lastName;
}
var user = {
  firstName: "John",
  lastName: "Doe"
};
function sayHello() {
  document.getElementById("output").innerText = greeter(user);
}
//# sourceMappingURL=hello.js.map

The most interesting thing is probably the comment at the end of the generated file. The syntax comes from old Firebug times and refers to a source map file containing all information about the original source.

Are you curious what the source map file looks like? Here it is.

{
   "version":3,
   "file":"hello.js",
   "sourceRoot":"",
   "sources":["hello.ts"],
   "names":[],
   "mappings":
"AAKA,SAAS,OAAO,CAAC,MAAc;IAC7B,OAAO,SAAS,GAAG,MAAM,CAAC,SAAS,GAAG,GAAG,GAAG,MAAM,CAAC,QAAQ,CAAC;AAC9D,CAAC;AAED,IAAI,IAAI,GAAG;IACT,SAAS,EAAE,MAAM;IACjB,QAAQ,EAAE,KAAK;CAChB,CAAC;AAEF,SAAS,QAAQ;IACf,QAAQ,CAAC,cAAc,CAAC,QAAQ,CAAC,CAAC,SAAS,GAAG,OAAO,CAAC,IAAI,CAAC,CAAC;AAC9D,CAAC"
}

It contains information (including location) about the generated file (hello.js), the original file (hello.ts), and, most importantly, mappings between those two. With this information, the debugger knows how to interpret the TypeScript code even if it doesn’t know anything about TypeScript.

The original language could be anything (Rust, C++, etc.) and with a proper source map, the debugger knows what to do. Isn’t that magic?

We are all set now. The next step is loading our little app into the Debugger.

Debugging TypeScript

The debugging experience is no different from how you’d go about debugging standard JS. You’re actually debugging the generated JavaScript, but since a source map is available, the debugger knows how to show you the original TypeScript instead.

This example is available online, so if you are running Firefox you can try it right now.

Let’s start with creating a breakpoint on line 9 in our original TypeScript file. To hit the breakpoint you just need to click on the Say Hello! button introduced earlier.


See, it’s TypeScript there!

Note the Call stack panel on the right side; it properly shows frames coming from the hello.ts file.

One more thing: If you are interested in seeing the generated JavaScript code you can use the context menu and jump right into it.

This action should navigate you to the hello.js file and you can continue debugging from the same location.

You can see that the Sources tree (on the left side) shows both these files at the same time.

Map Scopes

Let’s take a look at another neat feature that allows inspection of variables in both original and generated scopes.

Here is a more complex glitch example.

  1. Load https://firefox-devtools-example-babel-typescript.glitch.me/
  2. Open DevTools Toolbox and select the Debugger panel
  3. Create a breakpoint in Webpack/src/index.tsx file on line 45
  4. The breakpoint should pause JS execution immediately

Screenshot of the DevTools debugger panel allowing inspection of variables in both original and generated scopes

Notice the Scopes panel on the right side. It shows variables coming from the generated (and minified) code, which don’t correspond to the original TSX (TypeScript with JSX) code you see in the Debugger panel.

There is a weird e variable instead of localeTime, which is actually used in the source code.

This is where the Map scopes feature comes in handy. In order to see the original variables (used in the original TypeScript code), just click the Map checkbox.

Debugger panel in Firefox DevTools,using the Map checkbox to see original TypeScript variables

See, the Scopes panel shows the localeTime variable now (and yes, the magic comes from the source map).

Finally, if you are interested in where the e variable comes from, jump into the generated location using the context menu (like we just did in the previous example).

DevTools showing Debugger panel using the context menu to locate the e variable

Stay tuned for more upcoming Debugger features!

Jan ‘Honza’ Odvarko

The post Debugging TypeScript in Firefox DevTools appeared first on Mozilla Hacks - the Web developer blog.

hacks.mozilla.orgDebugging WebAssembly Outside of the Browser

WebAssembly has begun to establish itself outside of the browser via dedicated runtimes like Mozilla’s Wasmtime and Fastly’s Lucet. While the promise of a new, universal format for programs is appealing, it also comes with new challenges. For instance, how do you debug .wasm binaries?

At Mozilla, we’ve been prototyping ways to enable source-level debugging of .wasm files using traditional tools like GDB and LLDB.

The screencast below shows an example debugging session. Specifically, it demonstrates using Wasmtime and LLDB to inspect a program originally written in Rust, but compiled to WebAssembly.
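
If you want to try something similar yourself, a session looks roughly like the sketch below. The file names are placeholders, and the exact flag for enabling debugger support has varied between Wasmtime versions, so treat this as an outline rather than exact commands:

# Build a Rust program to WebAssembly with debug info (hello.rs is a placeholder).
rustup target add wasm32-wasi
rustc --target wasm32-wasi -g hello.rs -o hello.wasm

# Run Wasmtime under LLDB; the -g flag asks Wasmtime to expose debug info.
lldb -- wasmtime -g hello.wasm
(lldb) breakpoint set --name main
(lldb) run

From there, breakpoints, stepping, and variable inspection behave much like they would for a native binary.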

This type of source-level debugging was previously impossible. And while the implementation details are subject to change, the developer experience—attaching a normal debugger to Wasmtime—will remain the same.

By allowing developers to examine programs in the same execution environment as a production WebAssembly program, Wasmtime’s debugging support makes it easier to catch and diagnose bugs that may not arise in a native build of the same code. For example, the WebAssembly System Interface (WASI) treats filesystem access more strictly than traditional Unix-style permissions. This could create issues that only manifest in WebAssembly runtimes.

Mozilla is proactively working to ensure that WebAssembly’s development tools are capable, complete, and ready to go as WebAssembly expands beyond the browser.

Please try it out and let us know what you think.

Note: Debugging using Wasmtime and LLDB should work out of the box on Linux with Rust programs, or with C/C++ projects built via the WASI SDK.

Debugging on macOS currently requires building and signing a more recent version of LLDB.

Unfortunately, LLDB for Windows does not yet support JIT debugging.

Thanks to Lin Clark, Till Schneidereit, and Yury Delendik for their assistance on this post, and for their work on WebAssembly debugging.

The post Debugging WebAssembly Outside of the Browser appeared first on Mozilla Hacks - the Web developer blog.

SeaMonkeyPost-release post…

Hi,

SeaMonkey is currently quite far behind the release schedule (rapid release or otherwise). Mozilla isn’t making it easier for us to release things; but hopefully we’ll soon be on our own release schedule (if we’re not already), so that Mozilla’s changes have less effect on us.

That said, I can’t help but feel that I’ve dropped the proverbial ball on this release. I’d like to personally offer my own apologies for the delay. It’s entirely and wholly my fault and not the fault of the other devs. I have a lot of ideas to get things working again, and I’m hoping I’ll get the chance to do what I’ve set out to do for SeaMonkey.

Yours truly,

:ewong

SeaMonkeySeaMonkey 2.49.5 has been released!

Hi Everyone.

2.49.5 was just ‘released’ (aka pushed to the releases/2.49.5 portion of archive.mozilla.org).

It has been an intensely frustrating experience, and dare I say worse than 2.46’s release.  I can’t imagine how much everyone is wanting to kick me off the team…

I’m still working on making the release process ‘smoother’, but it’s a very steep uphill battle (dare I even quote Richie from “Bottoms”..  “Steep?  It’s F’ing vertical!”)

That said…  I’ve got some time to fix this whole smegging smegfest..

Yours honestly,

:ewong

Mozilla Add-ons BlogMozilla’s Manifest v3 FAQ

What is Manifest v3?

Chrome versions the APIs they provide to extensions, and the current format is version 2. The Firefox WebExtensions API is nearly 100% compatible with version 2, allowing extension developers to easily target both browsers.

In November 2018, Google proposed an update to their API, which they called Manifest v3. This update includes a number of changes that are not backwards-compatible and will require extension developers to take action to remain compatible.

A number of extension developers have reached out to ask how Mozilla plans to respond to the changes proposed in v3. Following are answers to some of the frequently asked questions.

Why do these changes negatively affect content blocking extensions?

One of the proposed changes in v3 is to deprecate a very powerful API called blocking webRequest. This API gives extensions the ability to intercept all inbound and outbound traffic from the browser, and then block, redirect or modify that traffic.

In its place, Google has proposed an API called declarativeNetRequest. This API impacts the capabilities of content blocking extensions by limiting the number of rules, as well as available filters and actions. These limitations negatively impact content blockers because modern content blockers are very sophisticated and employ layers of algorithms to not only detect and block ads, but to hide from the ad networks themselves. Extensions would still be able to use webRequest but only to observe requests, not to modify or block them.
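
To make the difference concrete, here is a minimal sketch of the kind of blocking webRequest listener at stake. The tracker domain is made up, and a real extension would also need the webRequest and webRequestBlocking permissions in its manifest:

browser.webRequest.onBeforeRequest.addListener(
  (details) => {
    // Cancel the request if it matches a (hypothetical) tracker domain.
    return { cancel: details.url.includes("tracker.example") };
  },
  { urls: ["<all_urls>"] },
  ["blocking"]
);

Under declarativeNetRequest, the extension would instead declare a fixed set of rules up front, and the browser, not the extension’s code, would decide which requests to block.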

As a result, some content blocking extension developers have stated they can no longer maintain their add-on if Google decides to follow through with their plans. Those who do continue development may not be able to provide the same level of capability for their users.

Will Mozilla follow Google with these changes?

In the absence of a true standard for browser extensions, maintaining compatibility with Chrome is important for Firefox developers and users. Firefox is not, however, obligated to implement every part of v3, and our WebExtensions API already departs from Chrome’s in several areas under v2 where we think it makes sense.

Content blocking: We have no immediate plans to remove blocking webRequest and are working with add-on developers to gain a better understanding of how they use the APIs in question to help determine how to best support them.

Background service workers: Manifest v3 proposes the implementation of service workers for background processes to improve performance. We are currently investigating the impact of this change, what it would mean for developers, and whether there is a benefit in continuing to maintain background pages.

Runtime host permissions: We are evaluating the proposal in Manifest v3 to give users more granular control over the sites they give permissions to, and investigating ways to do so without too much interruption and confusion.

Cross-origin communication: In Manifest v3, content scripts will have the same permissions as the page they are injected in. We are planning to implement this change.

Remotely hosted code: Firefox already does not allow remote code as a policy. Manifest v3 includes a proposal for additional technical enforcement measures, which we are currently evaluating and intend to also enforce.

Will my extensions continue to work in Manifest v3?

Google’s proposed changes, such as the use of service workers in the background process, are not backwards-compatible. Developers will have to adapt their add-ons to accommodate these changes.

That said, the changes Google has proposed are not yet stabilized. Therefore, it is too early to provide specific guidance on what to change and how to do so. Mozilla is waiting for more clarity and has begun investigating the effort needed to adapt.

We will provide ongoing updates about changes necessary on the add-ons blog.

What is the timeline for these changes?

Given Manifest v3 is still in the draft and design phase, it is too early to provide a specific timeline. We are currently investigating what level of effort is required to make the changes Google is proposing, and identifying where we may depart from their plans.

Later this year we will begin experimenting with the changes we feel have a high chance of being part of the final version of Manifest v3, and that we think make sense for our users. Early adopters will have a chance to test our changes in the Firefox Nightly and Beta channels.

Once Google has finalized their v3 changes and Firefox has implemented the parts that make sense for our developers and users, we will provide ample time and documentation for extension developers to adapt. We do not intend to deprecate the v2 API before we are certain that developers have a viable path forward to migrate to v3.

Keep your eyes on the add-ons blog for updates regarding Manifest v3 and some of the other work our team is up to. We welcome your feedback on our community forum.

 

hacks.mozilla.orgFirefox 69 — a tale of Resize Observer, microtasks, CSS, and DevTools

For our latest excellent adventure, we’ve gone and cooked up a new Firefox release. Version 69 features a number of nice new additions including JavaScript public instance fields, the Resize Observer and Microtask APIs, CSS logical overflow properties (e.g. overflow-block), and @supports for selectors.

We will also look at highlights from the raft of new debugging features in the Firefox 69 DevTools, including console message grouping, event listener breakpoints, and text label checks.

This blog post provides merely a set of highlights; for all the details, check out the Firefox 69 release notes for developers.

The new CSS

Firefox 69 supports a number of new CSS features; the most interesting are as follows.

New logical properties for overflow

Firefox 69 sees support for some new logical properties: overflow-block and overflow-inline, which control the overflow of an element’s content in the block and inline dimensions respectively.

These properties map to overflow-x or overflow-y, depending on the content’s writing-mode. Using these new logical properties instead of overflow-x and overflow-y makes your content easier to localize, especially when adapting it to languages using a different writing direction. They can also take the same values — visible, hidden, scroll, auto, etc.

Note: Look at Handling different text directions if you want to read up on these concepts.
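
As a quick sketch (the class name is invented for illustration), the logical properties drop in where you would otherwise have to reason about x and y per writing mode:

/* Scroll along the inline axis, clip along the block axis,
   whatever the writing mode happens to be. */
.excerpt {
  overflow-inline: auto;
  overflow-block: hidden;
}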

@supports for selectors

The @supports at-rule has long been very useful for selectively applying CSS only when a browser supports a particular property, or doesn’t support it.

Recently this functionality has been extended so that you can apply CSS only if a particular selector is or isn’t supported. The syntax looks like this:

@supports selector(selector-to-test) {
  /* insert rules here */
}

We support this functionality by default from Firefox 69 onwards. Find some usage examples here.
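
As one illustrative use (the selector and styles here are just an example), you could scope ::selection styling to browsers that actually understand that selector:

@supports selector(::selection) {
  ::selection {
    background-color: rebeccapurple;
    color: white;
  }
}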

JavaScript gets public instance fields

The most interesting addition we’ve had to the JavaScript language in Firefox 69 is support for public instance fields in JavaScript classes. This allows you to specify properties you want the class to have up front, making the code more logical and self-documenting, and the constructor cleaner. For example:

class Product {
  name;
  tax = 0.2;
  basePrice = 0;
  price;

  constructor(name, basePrice) {
    this.name = name;
    this.basePrice = basePrice;
    this.price = (basePrice * (1 + this.tax)).toFixed(2);
  }
}

Notice that you can include default values if you wish. The class can then be used as you’d expect:

let bakedBeans = new Product('Baked Beans', 0.59);
console.log(`${bakedBeans.name} cost $${bakedBeans.price}.`);

Private instance fields (which can’t be set or referenced outside the class definition) are very close to being supported in Firefox, and also look to be very useful. For example, we might want to hide the details of the tax and base price. Private fields are indicated by a hash symbol in front of the name:

#tax = 0.2;
#basePrice = 0;
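
Putting that together, a version of the Product class using private fields might look like the following sketch (based on the proposal as it stood, so details could still shift before shipping):

class Product {
  // Only code inside the class body can access these fields.
  #tax = 0.2;
  #basePrice = 0;
  name;
  price;

  constructor(name, basePrice) {
    this.name = name;
    this.#basePrice = basePrice;
    this.price = (basePrice * (1 + this.#tax)).toFixed(2);
  }
}

let beans = new Product('Baked Beans', 0.59);
console.log(beans.price);  // "0.71"
// console.log(beans.#tax); // SyntaxError: can't access a private field outside the class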

The wonder of WebAPIs

There are a couple of new WebAPIs enabled by default in Firefox 69. Let’s take a look.

Resize Observer

Put simply, the Resize Observer API allows you to easily observe and respond to changes in the size of an element’s content or border box. It provides a JavaScript solution to the often-discussed lack of “element queries” in the web platform.

A trivial example might be something like the following (resize-observer-border-radius.html, see the source also), which adjusts the border-radius of a <div> as it gets smaller or bigger:

const resizeObserver = new ResizeObserver(entries => {
  for (let entry of entries) {
    if(entry.contentBoxSize) {
      entry.target.style.borderRadius = Math.min(100, (entry.contentBoxSize.inlineSize/10) +
                                                      (entry.contentBoxSize.blockSize/10)) + 'px';
    } else {
      entry.target.style.borderRadius = Math.min(100, (entry.contentRect.width/10) +
                                                      (entry.contentRect.height/10)) + 'px';
    }
  }
});

resizeObserver.observe(document.querySelector('div'));

“But you can just use border-radius with a percentage”, I hear you cry. Well, sort of. But that quickly leads to ugly-looking elliptical corners, whereas the above solution gives you nice square corners that scale with the box size.

Another, slightly less trivial example is the following (resize-observer-text.html, see the source also):

if(window.ResizeObserver) {
  const h1Elem = document.querySelector('h1');
  const pElem = document.querySelector('p');
  const divElem = document.querySelector('body > div');
  const slider = document.querySelector('input');

  divElem.style.width = '600px';

  slider.addEventListener('input', () => {
    divElem.style.width = slider.value + 'px';
  })

  const resizeObserver = new ResizeObserver(entries => {
    for (let entry of entries) {
        if(entry.contentBoxSize) {
            h1Elem.style.fontSize = Math.max(1.5, entry.contentBoxSize.inlineSize/200) + 'rem';
            pElem.style.fontSize = Math.max(1, entry.contentBoxSize.inlineSize/600) + 'rem';
        } else {
            h1Elem.style.fontSize = Math.max(1.5, entry.contentRect.width/200) + 'rem';
            pElem.style.fontSize = Math.max(1, entry.contentRect.width/600) + 'rem';
        }
    }
  });

  resizeObserver.observe(divElem);
}

Here we use the resize observer to change the font-size of a header and paragraph as a slider’s value is changed, causing the containing <div> to change its width. This shows that you can respond to changes in an element’s size, even if they have nothing to do with the viewport size changing.

So to summarise, Resize Observer opens up a wealth of new responsive design work that was difficult with CSS features alone. We’re even using it to implement a new responsive version of our DevTools JavaScript console!

Microtasks

The Microtasks API provides a single method — queueMicrotask(). This is a low-level method that enables us to directly schedule a callback on the microtask queue. This schedules code to be run immediately before control returns to the event loop, so you are assured a reliable running order (using setTimeout(() => {}, 0), for example, can give unreliable results).

The syntax is as simple to use as other timing functions:

self.queueMicrotask(() => {
  // function contents here
})

The use cases are subtle, but make sense when you read the explainer section in the spec. The biggest benefactors here are framework vendors, who like lower-level access to scheduling. Using this will reduce hacks and make frameworks more predictable cross-browser.
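
A quick way to see the ordering guarantee is a sketch like this (the log labels are arbitrary):

console.log('script');
setTimeout(() => console.log('timeout'), 0);
queueMicrotask(() => console.log('microtask'));
// Logs: script, microtask, timeout. The microtask runs before control
// returns to the event loop, so it always beats the zero-delay timeout.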

Developer tools updates in 69

There are various interesting additions to the DevTools in 69, so be sure to go and check them out!

Event breakpoints and async functions in the JS debugger

The JavaScript debugger has some cool new features for stepping through and examining code, including event listener breakpoints and better handling of async functions.

New remote debugging

In the new shiny about:debugging page, you’ll find a grouping of options for remotely debugging devices, with more to follow in the future. In 69, we’ve enabled a new mechanism for allowing you to remotely debug other versions of Firefox, on the same machine or other machines on the same network (see Network Location).

Console message grouping

In the console, we now group together similar error messages, with the aim of making the console tidier, spamming developers less, and making them more likely to pay attention to the messages. In turn, this can have a positive effect on security/privacy.

The new console message grouping looks like this, when in its initial closed state:

When you click the arrow to open the message list, it shows you all the individual messages that are grouped:

Initially the grouping occurs on CSP, CORS, and tracking protection errors, with more categories to follow in the future.

Flex information in the picker infobar

Next up, we’ll have a look at the Page inspector. When using the picker, or hovering over an element in the HTML pane, the infobar for the element now shows when it is a flex container, item, or both.

website nav menu with infobar pointing out that it is a flex item

See this page for more details.

Text Label Checks in the Accessibility Inspector

A final great feature to mention is the new text label checks feature of the Accessibility Inspector.

When you choose Check for issues > Text Labels from the dropdown box at the top of the Accessibility Inspector, it marks all the nodes in the accessibility tree with a warning sign if they are missing a descriptive text label. The Checks pane on the right-hand side then gives a description of the problem, along with a Learn more link that takes you to more detailed information available on MDN.

WebExtensions updates

Last but not least, let’s give a mention to WebExtensions! The main feature to make it into Firefox 69 is User Scripts — these are a special kind of extension content script that, when registered, instructs the browser to insert the given scripts into pages that match the given URL patterns.
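
Registration happens from the extension’s background script. Here’s a minimal sketch; the match pattern and file name are placeholders, and the extension’s manifest also needs a user_scripts key:

// background.js: register a user script for a (hypothetical) site.
browser.userScripts.register({
  js: [{ file: "my-user-script.js" }],
  matches: ["*://*.example.com/*"]
});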

See also

In this post we’ve reviewed the main web platform features added in Firefox 69. You can also read up on the main new features of the Firefox browser — see the Firefox 69 Release Notes.

The post Firefox 69 — a tale of Resize Observer, microtasks, CSS, and DevTools appeared first on Mozilla Hacks - the Web developer blog.

The Mozilla BlogToday’s Firefox Blocks Third-Party Tracking Cookies and Cryptomining by Default

Today, Firefox on desktop and Android will — by default — empower and protect all our users by blocking third-party tracking cookies and cryptominers. This milestone marks a major step in our multi-year effort to bring stronger, usable privacy protections to everyone using Firefox.

Firefox’s Enhanced Tracking Protection gives users more control

For today’s release, Enhanced Tracking Protection will automatically be turned on by default for all users worldwide as part of the ‘Standard’ setting in the Firefox browser and will block known “third-party tracking cookies” according to the Disconnect list. We first enabled this default feature for new users in June 2019. As part of this journey we rigorously tested, refined, and ultimately landed on a new approach to anti-tracking that is core to delivering on our promise of privacy and security as central aspects of your Firefox experience.

Currently over 20% of Firefox users have Enhanced Tracking Protection on. With today’s release, we expect to provide protection for 100% of our users by default. Enhanced Tracking Protection works behind the scenes to keep a company from forming a profile of you based on their tracking of your browsing behavior across websites — often without your knowledge or consent. Those profiles and the information they contain may then be sold and used for purposes you never knew of or intended. Enhanced Tracking Protection helps to mitigate this threat and puts you back in control of your online experience.

You’ll know when Enhanced Tracking Protection is working when you visit a site and see a shield icon in the address bar:

 

When you see the shield icon, you can feel safe that Firefox is blocking thousands of companies from tracking your online activity.

For those who want to see which companies we block, you can click on the shield icon, go to the Content Blocking section, then Cookies. It should read Blocking Tracking Cookies. Then, click on the arrow on the right hand side, and you’ll see the companies listed as third party cookies that Firefox has blocked:

If you want to turn off blocking for a specific site, click on the Turn off Blocking for this Site button.

Protecting users’ privacy beyond tracking cookies

Cookies are not the only entities that follow you around on the web, trying to use what’s yours without your knowledge or consent. Cryptominers, for example, access your computer’s CPU, ultimately slowing it down and draining your battery, in order to generate cryptocurrency — not for your benefit but for someone else’s. We introduced the option to block cryptominers in previous versions of Firefox Nightly and Beta and are including it in the ‘Standard Mode’ of your Content Blocking preferences as of today.

Another type of script that you may not want to run in your browser is the fingerprinting script. Fingerprinting scripts harvest a snapshot of your computer’s configuration when you visit a website. The snapshot can then be used to track you across the web, an issue that has been present for years. To get protection from fingerprinting scripts, Firefox users can turn on ‘Strict Mode.’ In a future release, we plan to turn fingerprinting protections on by default.

Also in today’s Firefox release

To see what else is new or what we’ve changed in today’s release, you can check out our release notes.

Check out and download the latest version of Firefox available here.

The post Today’s Firefox Blocks Third-Party Tracking Cookies and Cryptomining by Default appeared first on The Mozilla Blog.

Mozilla ServicesA New Policy for Mozilla Location Service

Several years ago we started a geolocation experiment called the Mozilla Location Service (MLS) to create a location service built on open-source software and powered through crowdsourced location data. MLS provides geolocation lookups based on publicly observable cell tower and WiFi access point information. MLS has served the public interest by providing location information to open-source operating systems, research projects, and developers.

Today Mozilla is announcing a policy change regarding MLS. Our new policy will impose limits on commercial use of MLS. Mozilla has not made this change by choice. Skyhook Holdings, Inc. contacted Mozilla some time ago and alleged that MLS infringed a number of its patents. We subsequently reached an agreement with Skyhook that avoids litigation. While the terms of the agreement are confidential, we can tell you that the agreement exists and that our MLS policy change relates to it. We can also confirm that this agreement does not change the privacy properties of our service: Skyhook does not receive location data from Mozilla or our users.

Our new policy preserves the public interest heart of the MLS project. Mozilla has never offered any commercial plans for MLS and had no intention to do so. Only a handful of entities have made use of MLS API Query keys for commercial ventures. Nevertheless, we regret having to impose new limits on MLS. Sometimes companies have to make difficult choices that balance the massive cost and uncertainty of patent litigation against other priorities.

Mozilla has long argued that patents can work to inhibit, rather than promote, innovation. We continue to believe that software development, and especially open-source software, is ill-served by the patent system. Mozilla endeavors to be a good citizen with respect to patents. We offer a free license to our own patents under the Mozilla Open Software Patent License Agreement. We will also continue our advocacy for a better patent system.

Under our new policy, all users of MLS API Query keys must apply. Non-commercial users (such as academic, public interest, research, or open-source projects) can request an MLS API Query key capped at a daily usage limit of 100,000 requests. This limit may be increased on request. Commercial users can request an MLS API Query key capped at the same daily limit of 100,000 requests. The daily limit cannot be increased for commercial uses, and those keys will expire after 3 months. In effect, commercial use of MLS will now be of limited duration and restricted in volume.

Existing keys will expire on March 1, 2020. We encourage non-commercial users to re-apply for continued use of the service. Keys for a small number of commercial users that have been exceeding request limits will expire sooner. We will reach out to those users directly.

Location data and services are incredibly valuable in today’s connected world. We will continue to provide an open-source and privacy respecting location service in the public interest. You can help us crowdsource data by opting-in to the contribution option in our Android mobile browser.

about:communityFirefox 69 new contributors

With the release of Firefox 69, we are pleased to welcome the 50 developers who contributed their first code change to Firefox in this release, 39 of whom were brand new volunteers! Please join us in thanking each of these diligent and enthusiastic individuals, and take a look at their contributions:

hacks.mozilla.orgThe Baseline Interpreter: a faster JS interpreter in Firefox 70

Introduction

Modern web applications load and execute a lot more JavaScript code than they did just a few years ago. While JIT (just-in-time) compilers have been very successful in making JavaScript performant, we needed a better solution to deal with these new workloads.

To address this, we’ve added a new, generated JavaScript bytecode interpreter to the JavaScript engine in Firefox 70. The interpreter is available now in the Firefox Nightly channel, and will go to general release in October. Instead of writing or generating a new interpreter from scratch, we found a way to do this by sharing most code with our existing Baseline JIT.

The new Baseline Interpreter has resulted in performance improvements, memory usage reductions and code simplifications. Here’s how we got there:

Execution tiers

In modern JavaScript engines, each function is initially executed in a bytecode interpreter. Functions that are called a lot (or perform many loop iterations) are compiled to native machine code. (This is called JIT compilation.)

Firefox has an interpreter written in C++ and multiple JIT tiers:

  • The Baseline JIT. Each bytecode instruction is compiled directly to a small piece of machine code. It uses Inline Caches (ICs) both as performance optimization and to collect type information for Ion.
  • IonMonkey (or just Ion), the optimizing JIT. It uses advanced compiler optimizations to generate fast code for hot functions (at the expense of slower compile times).

Ion JIT code for a function can be ‘deoptimized’ and thrown away for various reasons, for example when the function is called with a new argument type. This is called a bailout. When a bailout happens, execution continues in the Baseline code until the next Ion compilation.

Until Firefox 70, the execution pipeline for a very hot function looked like this:

Timeline showing C++ Interpreter, Baseline Compilation, Baseline JIT Code, Prepare for Ion, Ion JIT Code with an arrow (called bailout) from Ion JIT Code back to Baseline JIT Code

Problems

Although this works pretty well, we ran into the following problems with the first part of the pipeline (C++ Interpreter and Baseline JIT):

  1. Baseline JIT compilation is fast, but modern web applications like Google Docs or Gmail execute so much JavaScript code that we could spend quite some time in the Baseline compiler, compiling thousands of functions.
  2. Because the C++ interpreter is so slow and doesn’t collect type information, delaying Baseline compilation or moving it off-thread would have been a performance risk.
  3. As you can see in the diagram above, optimized Ion JIT code was only able to bail out to the Baseline JIT. To make this work, Baseline JIT code required extra metadata (the machine code offset corresponding to each bytecode instruction).
  4. The Baseline JIT had some complicated code for bailouts, debugger support, and exception handling. This was especially true where these features intersect!

Solution: generate a faster interpreter

We needed type information from the Baseline JIT to enable the more optimized tiers, and we wanted to use JIT compilation for runtime speed. However, the modern web has such large codebases that even the relatively fast Baseline JIT Compiler spent a lot of time compiling. To address this, Firefox 70 adds a new tier called the Baseline Interpreter to the pipeline:

Same timeline of execution tiers as before but now has the 'Baseline Interpreter' between C++ interpreter and Baseline compilation. The bailout arrow points to Baseline Interpreter instead of Baseline JIT Code.

The Baseline Interpreter sits between the C++ interpreter and the Baseline JIT and has elements from both. It executes all bytecode instructions with a fixed interpreter loop (like the C++ interpreter). In addition, it uses Inline Caches to improve performance and collect type information (like the Baseline JIT).

Generating an interpreter isn’t a new idea. However, we found a nice new way to do it by reusing most of the Baseline JIT Compiler code. The Baseline JIT is a template JIT, meaning each bytecode instruction is compiled to a mostly fixed sequence of machine instructions. We generate those sequences into an interpreter loop instead.

Sharing Inline Caches and profiling data

As mentioned above, the Baseline JIT uses Inline Caches (ICs) both to make it fast and to help Ion compilation. To get type information, the Ion JIT compiler can inspect the Baseline ICs.

Because we wanted the Baseline Interpreter to use exactly the same Inline Caches and type information as the Baseline JIT, we added a new data structure called JitScript. JitScript contains all type information and IC data structures used by both the Baseline Interpreter and JIT.

The diagram below shows what this looks like in memory. Each arrow is a pointer in C++. Initially, the function just has a JSScript with the bytecode that can be interpreted by the C++ interpreter. After a few calls/iterations we create the JitScript, attach it to the JSScript and can now run the script in the Baseline Interpreter.

As the code gets warmer we may also create the BaselineScript (Baseline JIT code) and then the IonScript (Ion JIT code).
JSScript (bytecode) points to JitScript (IC and profiling data). JitScript points to BaselineScript (Baseline JIT Code) and IonScript (Ion JIT code).

Note that the Baseline JIT data for a function is now just the machine code. We’ve moved all the inline caches and profiling data into JitScript.

Sharing the frame layout

The Baseline Interpreter uses the same frame layout as the Baseline JIT, but we’ve added some interpreter-specific fields to the frame. For example, the bytecode PC (program counter), a pointer to the bytecode instruction we are currently executing, is not updated explicitly in Baseline JIT code. It can be determined from the return address if needed, but the Baseline Interpreter has to store it in the frame.

Sharing the frame layout like this has a lot of advantages. We’ve made almost no changes to C++ and IC code to support Baseline Interpreter frames—they’re just like Baseline JIT frames. Furthermore, when the script is warm enough for Baseline JIT compilation, switching from Baseline Interpreter code to Baseline JIT code is a matter of jumping from the interpreter code into JIT code.

Sharing code generation

Because the Baseline Interpreter and JIT are so similar, a lot of the code generation code can be shared too. To do this, we added a templated BaselineCodeGen base class with two derived classes:

  • BaselineCompiler: used by the Baseline JIT to compile a script’s bytecode to machine code.
  • BaselineInterpreterGenerator: used to generate the Baseline Interpreter code.

The base class has a Handler C++ template argument that can be used to specialize behavior for either the Baseline Interpreter or JIT. A lot of Baseline JIT code can be shared this way. For example, the implementation of the JSOP_GETPROP bytecode instruction (for a property access like obj.foo in JavaScript code) is shared code. It calls the emitNextIC helper method that’s specialized for either Interpreter or JIT mode.

Generating the Interpreter

With all these pieces in place, we were able to implement the BaselineInterpreterGenerator class to generate the Baseline Interpreter! It generates a threaded interpreter loop: the code for each bytecode instruction is followed by an indirect jump to the next bytecode instruction.

For example, on x64 we currently generate the following machine code to interpret JSOP_ZERO (bytecode instruction to push a zero value on the stack):

// Push Int32Value(0).
movabsq $-0x7800000000000, %r11
pushq  %r11
// Increment bytecode pc register.
addq   $0x1, %r14
// Patchable NOP for debugger support.
nopl   (%rax,%rax)
// Load the next opcode.
movzbl (%r14), %ecx
// Jump to interpreter code for the next instruction.
leaq   0x432e(%rip), %rbx
jmpq   *(%rbx,%rcx,8)

When we enabled the Baseline Interpreter in Firefox Nightly (version 70) back in July, we increased the Baseline JIT warm-up threshold from 10 to 100. The warm-up count is determined by counting the number of calls to the function + the number of loop iterations so far. The Baseline Interpreter has a threshold of 10, same as the old Baseline JIT threshold. This means that the Baseline JIT has a lot less code to compile.

Results

Performance and memory usage

After this landed in Firefox Nightly our performance testing infrastructure detected several improvements:

  • Various 2-8% page load improvements. A lot happens during page load in addition to JS execution (parsing, style, layout, graphics). Improvements like this are quite significant.
  • Many devtools performance tests improved by 2-10%.
  • Some small memory usage wins.

Note that we’ve landed more performance improvements since this first landed.

To measure how the Baseline Interpreter’s performance compares to the C++ Interpreter and the Baseline JIT, I ran Speedometer and Google Docs on Windows 10 64-bit on Mozilla’s Try server and enabled the tiers one by one (the following numbers reflect the best of 7 runs). For Google Docs page load:

  • C++ Interpreter: 901 ms
  • + Baseline Interpreter: 676 ms
  • + Baseline JIT: 633 ms
On Google Docs we see that the Baseline Interpreter is much faster than just the C++ Interpreter. Enabling the Baseline JIT too makes the page load only a little bit faster.

On the Speedometer benchmark we get noticeably better results when we enable the Baseline JIT tier. The Baseline Interpreter again does much better than just the C++ Interpreter:
  • C++ Interpreter: 31 points
  • + Baseline Interpreter: 52 points
  • + Baseline JIT: 69 points
We think these numbers are great: the Baseline Interpreter is much faster than the C++ Interpreter and its start-up time (JitScript allocation) is much faster than Baseline JIT compilation (at least 10 times faster).

Simplifications

After this all landed and stuck, we were able to simplify the Baseline JIT and Ion code by taking advantage of the Baseline Interpreter.

For example, deoptimization bailouts from Ion now resume in the Baseline Interpreter instead of in the Baseline JIT. The interpreter can re-enter Baseline JIT code at the next loop iteration in the JS code. Resuming in the interpreter is much easier than resuming in the middle of Baseline JIT code. We now have to record less metadata for Baseline JIT code, so Baseline JIT compilation got faster too. Similarly, we were able to remove a lot of complicated code for debugger support and exception handling.

What’s next?

With the Baseline Interpreter in place, it should now be possible to move Baseline JIT compilation off-thread. We will be working on that in the coming months, and we anticipate more performance improvements in this area.

Acknowledgements

Although I did most of the Baseline Interpreter work, many others contributed to this project. In particular Ted Campbell and Kannan Vijayan reviewed most of the code changes and had great design feedback.

Also thanks to Steven DeTar, Chris Fallin, Havi Hoffman, Yulia Startsev, and Luke Wagner for their feedback on this blog post.

The post The Baseline Interpreter: a faster JS interpreter in Firefox 70 appeared first on Mozilla Hacks - the Web developer blog.

The Mozilla BlogThank you, Chris

Thank you, Chris.

Chris Beard has been Mozilla Corporation’s CEO for 5 and a half years. Chris has announced 2019 will be his last year in this role. I want to thank Chris from the bottom of my heart for everything he has done for Mozilla. He has brought Mozilla enormous benefits — new ideas, new capabilities, new organizational approaches. As CEO Chris has put us on a new and better path. Chris’ tenure has seen the development of important organization capabilities and given us a much stronger foundation on which to build. This includes reinvigorating our flagship web browser Firefox to be once again a best-in-class product. It includes recharging our focus on meeting the online security and privacy needs facing people today. And it includes expanding our product offerings beyond the browser to include a suite of privacy and security-focused products and services from Facebook Container and Enhanced Tracking Protection to Firefox Monitor.

Chris will remain an advisor to the board. We recognize some people may think these words are a formula and have no deep meaning. We think differently. Chris is a true “Mozillian.” He has been devoted to Mozilla for the last 15 years, and has brought this dedication to many different roles at Mozilla. When Chris left Mozilla to join Greylock as an “executive-in-residence” in 2013, he remained an advisor to Mozilla Corporation. That was an important relationship, and Chris and I were in contact when it started to become clear that Chris could be the right CEO for MoCo. So over the coming years I expect to work with Chris on mission-related topics. And I’ll consider myself lucky to do so.

One of the accomplishments of Chris’ tenure is the strength and depth of Mozilla Corporation today. The team is strong. Our organization is strong, and our future full of opportunities. It is precisely the challenges of today’s world, and Mozilla’s opportunities to improve online life, that bring so many of us to Mozilla. I personally remain deeply focused on Mozilla. I’ll be here during Chris’ tenure, and I’ll be here after his tenure ends. I’m committed to Mozilla, and to making serious contributions to improving online life and developing new technical capabilities that are good for people.

Chris will remain as CEO during the transition. We will continue to develop the work on our new engagement model and our focus on privacy and user agency. To ensure continuity, I will increase my involvement in Mozilla Corporation’s internal activities. I will be ready to step in as interim CEO should the need arise.

The Board has retained Tuck Rickards of the recruiting firm Russell Reynolds for this search. We are confident that Tuck and team understand that Mozilla products and technology bring our mission to life, and that we are deeply different than other technology companies. We’ll say more about the search as we progress.

The internet stands at an inflection point today. Mozilla has the opportunity to make significant contributions to a better internet. This is why we exist, and it’s a key time to keep doing more. We offer heartfelt thanks to Chris for leading us to this spot, and for leading the rejuvenation of Mozilla Corporation so that we are fit for this purpose, and determined to address big issues.

Mozilla’s greatest strength is the people who respond to our mission and step forward to take on the challenge of building a better internet and online life. Chris is a shining example of this. I wish Chris the absolute best in all things.

I’ll close by stating a renewed determination to find ways for everyone who seeks safe, humane and exciting online experiences to help create this better world.

The post Thank you, Chris appeared first on The Mozilla Blog.

The Mozilla BlogMy Next Chapter

Earlier this morning I shared the news internally that – while I’ve been a Mozillian for 15 years so far, and plan to be for many more years – this will be my last year as CEO.

When I returned to Mozilla just over five years ago, it was during a particularly tumultuous time in our history. Looking back it’s amazing to reflect on how far we’ve come, and I am so incredibly proud of all that our teams have accomplished over the years.

Today our products, technology and policy efforts are stronger and more resonant in the market than ever, and we have built significant new organizational capabilities and financial strength to fuel our work. From our new privacy-forward product strategy to initiatives like the State of the Internet we’re ready to seize the tremendous opportunity and challenges ahead to ensure we’re doing even more to put people in control of their connected lives and shape the future of the internet for the public good.

In short, Mozilla is an exceptionally better place today, and we have all the fundamentals in place for continued positive momentum for years to come.

It’s with that backdrop that I made the decision that it’s time for me to take a step back and start my own next chapter. This is a good place to recruit our next CEO and for me to take a meaningful break and recharge before considering what’s next for me. It may be a cliché — but I’ll embrace it — as I’m also looking forward to spending more time with my family after a particularly intense but gratifying tour of duty.

However, I’m not wrapping up today or tomorrow, but at year’s end. I’m absolutely committed to ensuring that we sustain the positive momentum we have built. Mitchell Baker and I are working closely together with our Board of Directors to ensure leadership continuity and a smooth transition. We are conducting a search for my successor and I will continue to serve as CEO through that transition. If the search goes beyond the end of the year, Mitchell will step in as interim CEO if necessary, to ensure we don’t miss a beat. And I will stay engaged for the long-term as advisor to both Mitchell and the Board, as I’ve done before.

I am grateful to have had this opportunity to serve Mozilla again, and to Mitchell for her trust and partnership in building this foundation for the future.

Over the coming months I’m excited to share with you the new products, technology and policy work that’s in development now. I am more confident than ever that Mozilla’s brightest days are yet to come.

The post My Next Chapter appeared first on The Mozilla Blog.

The Mozilla Thunderbird BlogWhat’s New in Thunderbird 68

Our newest release, Thunderbird version 68, is now available! Users on version 60, the last major release, will not be immediately updated, but will receive the update in the coming weeks. In this blog post, we’ll take a look at the features that are most noteworthy in the newest version. If you’d like to see all the changes in version 68, you can check out the release notes.

Thunderbird 68 focuses on polish and setting the stage for future releases. There was a lot of work that we had to do below the surface that has made Thunderbird more future-proof and has made it a solid base to continue to build upon. But we also managed to create some great features you can touch today.

New App Menu

Thunderbird 68 features a big update to the App Menu. The new menu is single pane with icons and separators that make it easier to navigate and reduce clutter. Animation when cycling through menu items produces a more engaging experience and results in the menu feeling more responsive and modern.

New Thunderbird Menu

Thunderbird’s New App Menu

Options/Preferences in a Tab

Thunderbird’s Options/Preferences have been converted from a dialog window to their own dedicated tab. The new Preferences tab provides more space, which allows for better-organized content and is more consistent with the look and feel of the rest of Thunderbird. The new Preferences tab also makes it easier to multitask without the problem of losing track of where your preferences are when switching between windows.

Preferences in a Tab

Preferences in a Tab

Full Color Support

Thunderbird now features full color support across the app. This means changing the color of the text of your email to any color you want or setting tags to any shade your heart desires.

New Full Color Picker

Full Color Support

Better Dark Theme

The dark theme available in Thunderbird has been enhanced with a dark message thread pane as well as many other small improvements.

Thunderbird Dark Theme

Thunderbird Dark Theme

Attachment Management

There are now more options available for managing attachments. You can “detach” an attachment to store it in a different folder while maintaining a link from the email to the new location. You can also open the folder containing a detached file via the “Open Containing Folder” option.

Attachment options for detached files.

Attachment options for detached files.

Filelink Improved

Filelink attachments that have already been uploaded can now be linked to again instead of having to re-upload them. Also, an account is no longer required to use the default Filelink provider – WeTransfer.

Other Filelink providers like Box and Dropbox are not included by default but can be added by grabbing the Dropbox and Box add-ons.

Other Notable Changes

There are many other smaller changes that make Thunderbird 68 feel polished and powerful, including an updated To/CC/BCC selector in the compose window, filters that can now be set to run periodically, and feed articles that now show external attachments as links.

There are many other updates in this release; you can see a list of all of them in the Thunderbird 68 release notes. If you would like to try the newest Thunderbird, head to our website and download it today!

QMOFirefox 69 Beta 14 Testday Results

Hello Mozillians!

As you may already know, on Friday, August 16th we held a new Testday event for Firefox 69 Beta 14.

Thank you all for helping us make Mozilla a better place: Fernando, noelonassis!

Results: Several test cases were executed for Anti-tracking.

Thanks for another awesome testday, we appreciate your contribution! 🙂

We hope to see you all in our next events, keep an eye on QMO.
We will make announcements as soon as something shows up!

Cheers!

Mozilla VR BlogNew Avatar Features in Hubs


It is now easier than ever to customize avatars for Hubs! Choosing the way that you represent yourself in a 3D space is an important part of interacting in a virtual world, and we want to make it possible for anyone to have creative control over how they choose to show up in their communities. With the new avatar remixing update, members of the Hubs community can publish avatars that they create under a remixable, Creative Commons license, and grant others the ability to derive new works from those avatars. We’ve also added more options for creating custom avatars.

When you change your avatar in Hubs, you will now have the opportunity to browse through ‘Featured’ avatars and ‘Newest’ avatars. Avatars that are remixable will have an icon on them that allows you to save a version of that avatar to your own ‘My Avatars’ library, where you can customize the textures on the avatar to create your own spin on the original work. The ‘Red Panda’ avatar below is a remix of the original Panda Bot.


In addition to remixing avatars, you can create avatars by uploading a binary glTF file (or selecting one of the base bots that Hubs provides) and uploading up to four texture maps to the model. We have a number of resources available on GitHub for creating custom textures, as well as sets to get you started. You can also make your own designs with a 2D image editor or a 3D texture painting program.

The Hubs base avatar is a glTF model that has four texture maps and supports physically-based rendering (PBR) materials. This allows a great deal of flexibility in what kind of avatars can be created while still providing a quick way to create custom base color maps. For users who are familiar with 3D modeling, you can also create your own new avatar style from scratch, or by using the provided .blend files in the avatar-pipelines GitHub repo.


We’ve also made it easier to share avatars with one another inside a Hubs room. A tab for ‘Avatars’ now appears in the Create menu, and you can place thumbnails for avatars in the room you’re in to quickly swap between them. This will also allow others in the room to easily change to a specific avatar, which is a fun way to share avatars with a group.


These improvements to our avatar tools are just the start of what we’re working on to increase opportunities that Hubs users have available to express themselves on the platform. Making it easy to change your avatar - over and over again - allows users to have flexibility over how much personal information they want their avatar to reveal, and easily change from one digital body to another depending on how they’re using Hubs at a given time. While, at Mozilla, we find Panda Robots to be perfectly suited to company meetings, other communities and groups will have their own established social norms for professional activities. We want to support a rich, creative ecosystem for our users, and we can't wait to see what you create!

Open Policy & AdvocacyMozilla Mornings on the future of EU content regulation

On 10 September, Mozilla will host the next installment of our EU Mozilla Mornings series – regular breakfast meetings where we bring together policy experts, policymakers and practitioners for insight and discussion on the latest EU digital policy developments.

The next installment will focus on the future of EU content regulation. We’re bringing together a high-level panel to discuss how the European Commission should approach the mooted Digital Services Act, and to lay out a vision for a sustainable and rights-protective content regulation framework in Europe.

Featuring

Werner Stengg
Head of Unit, E-Commerce & Platforms, European Commission DG CNECT
Alan Davidson
Vice President of Global Policy, Trust & Security, Mozilla
Liz Carolan
Executive Director, Digital Action
Eliska Pirkova
Europe Policy Analyst, Access Now

Moderated by Brian Maguire, EURACTIV

Logistical information

10 September 2019
08:30-10:30
L42 Business Centre, rue de la Loi 42, Brussels 1040

 

Register your attendance here

The post Mozilla Mornings on the future of EU content regulation appeared first on Open Policy & Advocacy.

SUMO BlogIntroducing Bryce and Brady

Hello SUMO Community,

I’m thrilled to share this update with you today. Bryce and Brady joined us last week and will be able to help out on Support for some of the new efforts Mozilla is working on towards creating a connected and integrated Firefox experience.

They are going to be involved with new products, but they also won’t forget to put extra effort into providing support on the forums, as well as serving as an escalation point for hard-to-solve issues.

Here is a short introduction to Brady and Bryce:

Hi! My name is Brady, and I am one of the new members of the SUMO team. I am originally from Boise, Idaho and am currently going to school for a Computer Science degree at Boise State. In my free time, I’m normally playing video games, writing, drawing, or enjoying the Sawtooths. I will be providing support for Mozilla products and for the SUMO team.

Hello! My name is Bryce. I was born and raised in San Diego and I reside in Boise, Idaho. Growing up I spent a good portion of my life trying to be the best sponger (boogie boarder) and longboarder in North County San Diego. While out in the ocean I had all sorts of run-ins with sea creatures, but nothing too scary. I am also an In-N-Out fan, as you may find me sporting their merchandise with boardshorts and the such. I am truly excited to be part of this amazing group of fun-loving folks and I am looking forward to getting to know everyone.

Please welcome them warmly!

hacks.mozilla.orgWebAssembly Interface Types: Interoperate with All the Things!

People are excited about running WebAssembly outside the browser.

That excitement isn’t just about WebAssembly running in its own standalone runtime. People are also excited about running WebAssembly from languages like Python, Ruby, and Rust.

Why would you want to do that? A few reasons:

  • Make “native” modules less complicated
    Runtimes like Node or Python’s CPython often allow you to write modules in low-level languages like C++, too. That’s because these low-level languages are often much faster. So you can use native modules in Node, or extension modules in Python. But these modules are often hard to use because they need to be compiled on the user’s device. With a WebAssembly “native” module, you can get most of the speed without the complication.
  • Make it easier to sandbox native code
    On the other hand, low-level languages like Rust wouldn’t use WebAssembly for speed. But they could use it for security. As we talked about in the WASI announcement, WebAssembly gives you lightweight sandboxing by default. So a language like Rust could use WebAssembly to sandbox native code modules.
  • Share native code across platforms
    Developers can save time and reduce maintenance costs if they can share the same codebase across different platforms (e.g. between the web and a desktop app). This is true for both scripting and low-level languages. And WebAssembly gives you a way to do that without making things slower on these platforms.

Scripting languages like Python and Ruby saying 'We like WebAssembly's speed', low-level languages like Rust and C++ saying, 'and we like the security it could give us' and all of them saying 'and we all want to make developers more effective'

So WebAssembly could really help other languages with important problems.

But with today’s WebAssembly, you wouldn’t want to use it in this way. You can run WebAssembly in all of these places, but that’s not enough.

Right now, WebAssembly only talks in numbers. This means the two languages can call each other’s functions.

But if a function takes or returns anything besides numbers, things get complicated. You can either:

  • Ship one module that has a really hard-to-use API that only speaks in numbers… making life hard for the module’s user.
  • Add glue code for every single environment you want this module to run in… making life hard for the module’s developer.

But this doesn’t have to be the case.

It should be possible to ship a single WebAssembly module and have it run anywhere… without making life hard for either the module’s user or developer.

user saying 'what even is this API?' vs developer saying 'ugh, so much glue code to worry about' vs both saying 'wait, it just works?'

So the same WebAssembly module could use rich APIs, using complex types, to talk to:

  • Modules running in their own native runtime (e.g. Python modules running in a Python runtime)
  • Other WebAssembly modules written in different source languages (e.g. a Rust module and a Go module running together in the browser)
  • The host system itself (e.g. a WASI module providing the system interface to an operating system or the browser’s APIs)

A wasm file with arrows pointing to and from: logos for different runtimes (Ruby, php, and Python), other wasm files compiled from Rust and Go, and host systems like the OS or browser

And with a new, early-stage proposal, we’re seeing how we can make this Just Work™, as you can see in this demo.

So let’s take a look at how this will work. But first, let’s look at where we are today and the problems that we’re trying to solve.

WebAssembly talking to JS

WebAssembly isn’t limited to the web. But up to now, most of WebAssembly’s development has focused on the Web.

That’s because you can make better designs when you focus on solving concrete use cases. The language was definitely going to have to run on the Web, so that was a good use case to start with.

This gave the MVP a nicely contained scope. WebAssembly only needed to be able to talk to one language—JavaScript.

And this was relatively easy to do. In the browser, WebAssembly and JS both run in the same engine, so that engine can help them efficiently talk to each other.

A js file asking the engine to call a WebAssembly function

The engine asking the WebAssembly file to run the function
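
Here’s a minimal sketch of what that looks like from the JS side, assuming a compiled module add.wasm that exports an add function taking two integers:

// Inside an async function (or a module with top-level await).
const { instance } = await WebAssembly.instantiateStreaming(fetch("add.wasm"));
// Both arguments and the result are plain numbers, so the engine can
// marshal the call directly.
console.log(instance.exports.add(5, 7)); // 12, as an ordinary JS Number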

But there is one problem when JS and WebAssembly try to talk to each other… they use different types.

Currently, WebAssembly can only talk in numbers. JavaScript has numbers, but also quite a few more types.

And even the numbers aren’t the same. WebAssembly has 4 different kinds of numbers: int32, int64, float32, and float64. JavaScript currently only has Number (though it will soon have another number type, BigInt).

The difference isn’t just in the names for these types. The values are also stored differently in memory.

First off, in JavaScript any value, no matter the type, is put in something called a box (and I explained boxing more in another article).

WebAssembly, in contrast, has static types for its numbers. Because of this, it doesn’t need (or understand) JS boxes.

This difference makes it hard for the two to communicate with each other.

JS asking wasm to add 5 and 7, and Wasm responding with 9.2368828e+18

But if you want to convert a value from one number type to the other, there are pretty straightforward rules.

Because it’s so simple, it’s easy to write down. And you can find this written down in WebAssembly’s JS API spec.

A large book that has mappings between the wasm number types and the JS number types

This mapping is hardcoded in the engines.

It’s kind of like the engine has a reference book. Whenever the engine has to pass parameters or return values between JS and WebAssembly, it pulls this reference book off the shelf to see how to convert these values.

JS asking the engine to call wasm's add function with 5 and 7, and the engine looking up how to do conversions in the book

Having such a limited set of types (just numbers) made this mapping pretty easy. That was great for an MVP. It limited how many tough design decisions needed to be made.

But it made things more complicated for the developers using WebAssembly. To pass strings between JS and WebAssembly, you had to find a way to turn the strings into an array of numbers, and then turn an array of numbers back into a string. I explained this in a previous post.

JS putting numbers into WebAssembly's memory
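
As a rough sketch, here’s what that hand-rolled conversion looks like, assuming the module exports its memory and a hypothetical alloc function that reserves space in linear memory:

// Sketch only: `alloc` is a made-up export, not part of any standard.
function passStringToWasm(instance, str) {
  const bytes = new TextEncoder().encode(str);       // string -> UTF-8 bytes
  const ptr = instance.exports.alloc(bytes.length);  // reserve space in linear memory
  new Uint8Array(instance.exports.memory.buffer).set(bytes, ptr);
  return [ptr, bytes.length];                        // the numbers wasm actually sees
}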

This isn’t difficult, but it is tedious. So tools were built to abstract this away.

For example, tools like Rust’s wasm-bindgen and Emscripten’s Embind automatically wrap the WebAssembly module with JS glue code that does this translation from strings to numbers.

JS file complaining about having to pass a string to Wasm, and the JS glue code offering to do all the work

And these tools can do these kinds of transformations for other high-level types, too, such as complex objects with properties.

This works, but there are some pretty obvious use cases where it doesn’t work very well.

For example, sometimes you just want to pass a string through WebAssembly. You want a JavaScript function to pass a string to a WebAssembly function, and then have WebAssembly pass it to another JavaScript function.

Here’s what needs to happen for that to work:

  1. the first JavaScript function passes the string to the JS glue code

  2. the JS glue code turns that string object into numbers and then puts those numbers into linear memory

  3. then passes a number (a pointer to the start of the string) to WebAssembly

  4. the WebAssembly function passes that number over to the JS glue code on the other side

  5. the JS glue code on the other side pulls all of those numbers out of linear memory and then decodes them back into a string object

  6. which it gives to the second JS function

JS file passing string 'Hello' to JS glue code
JS glue code turning string into numbers and putting that in linear memory
JS glue code telling engine to pass 2 to wasm
Wasm telling engine to pass 2 to JS glue code
JS glue code taking bytes from linear memory and turning them back into a string
JS glue code passing string to JS file

So the JS glue code on one side is just reversing the work it did on the other side. That’s a lot of work to recreate what’s basically the same object.
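
The decoding half is just the mirror image of the encoding sketch from earlier, where ptr and len are the two numbers that came out of WebAssembly:

// Sketch only: assumes the module exports its memory.
function readStringFromWasm(instance, ptr, len) {
  const bytes = new Uint8Array(instance.exports.memory.buffer, ptr, len);
  return new TextDecoder("utf-8").decode(bytes); // a brand-new JS string object
}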

If the string could just pass straight through WebAssembly without any transformations, that would be way easier.

WebAssembly wouldn’t be able to do anything with this string—it doesn’t understand that type. We wouldn’t be solving that problem.

But it could just pass the string object back and forth between the two JS functions, since they do understand the type.

So this is one of the reasons for the WebAssembly reference types proposal. That proposal adds a new basic WebAssembly type called anyref.

With an anyref, JavaScript just gives WebAssembly a reference object (basically a pointer that doesn’t disclose the memory address). This reference points to the object on the JS heap. Then WebAssembly can pass it to other JS functions, which know exactly how to use it.

JS passing a string to Wasm and the engine turning it into a pointer
Wasm passing the string to a different JS file, and the engine just passes the pointer on
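
From the JS side, the pass-through would look something like this sketch, assuming a module compiled with reference types whose exported pass function takes an anyref and returns it untouched:

const s = "hello";
const result = instance.exports.pass(s); // wasm never inspects the string
console.log(result === s); // true: the very same JS object came back, no copies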

So that solves one of the most annoying interoperability problems with JavaScript. But that’s not the only interoperability problem to solve in the browser.

There’s another, much larger, set of types in the browser. WebAssembly needs to be able to interoperate with these types if we’re going to have good performance.

WebAssembly talking directly to the browser

JS is only one part of the browser. The browser also has a lot of other functions, called Web APIs, that you can use.

Behind the scenes, these Web API functions are usually written in C++ or Rust. And they have their own way of storing objects in memory.

Web APIs’ parameters and return values can be lots of different types. It would be hard to manually create mappings for each of these types. So to simplify things, there’s a standard way to talk about the structure of these types—Web IDL.

When you’re using these functions, you’re usually using them from JavaScript. This means you are passing in values that use JS types. How does a JS type get converted to a Web IDL type?

Just as there is a mapping from WebAssembly types to JavaScript types, there is a mapping from JavaScript types to Web IDL types.

So it’s like the engine has another reference book, showing how to get from JS to Web IDL. And this mapping is also hardcoded in the engine.

A book that has mappings between the JS types and Web IDL types

For many types, this mapping between JavaScript and Web IDL is pretty straightforward. For example, types like DOMString and JS’s String are compatible and can be mapped directly to each other.

Now, what happens when you’re trying to call a Web API from WebAssembly? Here’s where we get to the problem.

Currently, there is no mapping between WebAssembly types and Web IDL types. This means that, even for simple types like numbers, your call has to go through JavaScript.

Here’s how this works:

  1. WebAssembly passes the value to JS.
  2. In the process, the engine converts this value into a JavaScript type, and puts it in the JS heap in memory.
  3. Then, that JS value is passed to the Web API function. In the process, the engine converts the JS value into a Web IDL type, and puts it in a different part of memory, the renderer’s heap.

Wasm passing number to JS
Engine converting the int32 to a Number and putting it in the JS heap
Engine converting the Number to a double, and putting that in the renderer heap

This takes more work than it needs to, and also uses up more memory.
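
Concretely, today’s detour is wired up through the module’s imports. In this sketch the import name now is made up for illustration, and wasmBytes holds the module’s compiled bytes:

const imports = {
  env: {
    // wasm's f64 becomes a JS Number here, and the engine converts
    // between JS and Web IDL types again when the Web API actually runs.
    now: () => performance.now(),
  },
};
const { instance } = await WebAssembly.instantiate(wasmBytes, imports);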

There’s an obvious solution to this—create a mapping from WebAssembly directly to Web IDL. But that’s not as straightforward as it might seem.

For simple Web IDL types like boolean and unsigned long (which is a number), there are clear mappings from WebAssembly to Web IDL.

But for the most part, Web API parameters are more complex types. For example, an API might take a dictionary, which is basically an object with properties, or a sequence, which is like an array.

To have a straightforward mapping between WebAssembly types and Web IDL types, we’d need to add some higher-level types. And we are doing that—with the GC proposal. With that, WebAssembly modules will be able to create GC objects—things like structs and arrays—that could be mapped to complicated Web IDL types.

But if the only way to interoperate with Web APIs is through GC objects, that makes life harder for languages like C++ and Rust that wouldn’t use GC objects otherwise. Whenever the code interacts with a Web API, it would have to create a new GC object and copy values from its linear memory into that object.

That’s only slightly better than what we have today with JS glue code.

We don’t want JS glue code to have to build up GC objects—that’s a waste of time and space. And we don’t want the WebAssembly module to do that either, for the same reasons.

We want it to be just as easy for languages that use linear memory (like Rust and C++) to call Web APIs as it is for languages that use the engine’s built-in GC. So we need a way to create a mapping between objects in linear memory and Web IDL types, too.

There’s a problem here, though. Each of these languages represents things in linear memory in different ways. And we can’t just pick one language’s representation. That would make all the other languages less efficient.

someone standing between the names of linear memory languages like C, C++, and Rust, pointing to Rust and saying 'I pick... that one!'. A red arrow points to the person saying 'bad idea'

But even though the exact layout in memory for these things is often different, there are some abstract concepts that they usually share in common.

For example, for strings the language often has a pointer to the start of the string in memory, and the length of the string. And even if the string has a more complicated internal representation, it usually needs to convert strings into this format when calling external APIs anyway.

This means we can reduce this string down to a type that WebAssembly understands… two i32s.

The string Hello in linear memory, with an offset of 2 and length of 5. Red arrows point to offset and length and say 'types that WebAssembly understands!'

We could hardcode a mapping like this in the engine. So the engine would have yet another reference book, this time for WebAssembly to Web IDL mappings.

But there’s a problem here. WebAssembly is a type-checked language. To keep things secure, the engine has to check that the calling code passes in types that match what the callee asks for.

This is because there are ways for attackers to exploit type mismatches and make the engine do things it’s not supposed to do.

If you’re calling something that takes a string, but you try to pass the function an integer, the engine will yell at you. And it should yell at you.

So we need a way for the module to explicitly tell the engine something like: “I know Document.createElement() takes a string. But when I call it, I’m going to pass you two integers. Use these to create a DOMString from data in my linear memory. Use the first integer as the starting address of the string and the second as the length.”

This is what the Web IDL bindings proposal does. It gives a WebAssembly module a way to map between the types that it uses and Web IDL’s types.

These mappings aren’t hardcoded in the engine. Instead, a module comes with its own little booklet of mappings.

Wasm file handing a booklet to the engine and saying `Here's a little guidebook. It will tell you how to translate my types to interface types`

So this gives the engine a way to say “For this function, do the type checking as if these two integers are a string.”

The fact that this booklet comes with the module is useful for another reason, though.

Sometimes a module that would usually store its strings in linear memory will want to use an anyref or a GC type in a particular case… for example, if the module is just passing an object that it got from a JS function, like a DOM node, to a Web API.

So modules need to be able to choose on a function-by-function (or even argument-by-argument) basis how different types should be handled. And since the mapping is provided by the module, it can be custom-tailored for that module.

Wasm telling engine 'Read carefully... For some functions that take DOMStrings, I'll give you two numbers. For others, I'll just give you the DOMString that JS gave to me.'

How do you generate this booklet?

The compiler takes care of this information for you. It adds a custom section to the WebAssembly module. So for many language toolchains, the programmer doesn’t have to do much work.

For example, let’s look at how the Rust toolchain handles this for one of the simplest cases: passing a string into the alert function.

#[wasm_bindgen]
extern "C" {
    // Declare the browser's `alert` so Rust can call it. The annotation
    // tells the compiler to record how the &str should be passed across.
    fn alert(s: &str);
}

The programmer just has to tell the compiler to include this function in the booklet using the annotation #[wasm_bindgen]. By default, the compiler will treat this as a linear memory string and add the right mapping for us. If we needed it to be handled differently (for example, as an anyref) we’d have to tell the compiler using a second annotation.
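
To get a sense of what’s being cut out, here’s roughly the job the generated JS glue does today. This is a simplified sketch, not wasm-bindgen’s actual internals, and wasmMemory stands in for the module’s memory:

// The wasm side hands over two numbers; the glue rebuilds the string.
function alertShim(ptr, len) {
  const bytes = new Uint8Array(wasmMemory.buffer, ptr, len);
  alert(new TextDecoder("utf-8").decode(bytes)); // then calls the Web API
}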

So with that, we can cut out the JS in the middle. That makes passing values between WebAssembly and Web APIs faster. Plus, it means we don’t need to ship down as much JS.

And we didn’t have to make any compromises on what kinds of languages we support. It’s possible to have all different kinds of languages that compile to WebAssembly. And these languages can all map their types to Web IDL types—whether the language uses linear memory, or GC objects, or both.

Once we stepped back and looked at this solution, we realized it solved a much bigger problem.

WebAssembly talking to All The Things

Here’s where we get back to the promise in the intro.

Is there a feasible way for WebAssembly to talk to all of these different things, using all these different type systems?

A wasm file with arrows pointing to and from: logos for different runtimes (Ruby, php, and Python), other wasm files compiled from Rust and Go, and host systems like the OS or browser

Let’s look at the options.

You could try to create mappings that are hardcoded in the engine, like WebAssembly to JS and JS to Web IDL are.

But to do that, for each language you’d have to create a specific mapping. And the engine would have to explicitly support each of these mappings, and update them as the language on either side changes. This creates a real mess.

This is kind of how early compilers were designed. There was a pipeline for each source language to each machine code language. I talked about this more in my first posts on WebAssembly.

We don’t want something this complicated. We want it to be possible for all these different languages and platforms to talk to each other. But we need it to be scalable, too.

So we need a different way to do this… more like modern day compiler architectures. These have a split between front-end and back-end. The front-end goes from the source language to an abstract intermediate representation (IR). The back-end goes from that IR to the target machine code.

This is where the insight from Web IDL comes in. When you squint at it, Web IDL kind of looks like an IR.

Now, Web IDL is pretty specific to the Web. And there are lots of use cases for WebAssembly outside the web. So Web IDL itself isn’t a great IR to use.

But what if you just use Web IDL as inspiration and create a new set of abstract types?

This is how we got to the WebAssembly interface types proposal.

Diagram showing WebAssembly interface types in the middle. On the left is a wasm module, which could be compiled from Rust, Go, C, etc. Arrows point from these options to the types in the middle. On the right are host languages like JS, Python, and Ruby; host platforms like .NET, Node, and operating systems, and more wasm modules. Arrows point from these options to the types in the middle.

These types aren’t concrete types. They aren’t like the int32 or float64 types in WebAssembly today. There are no operations on them in WebAssembly.

For example, there won’t be any string concatenation operations added to WebAssembly. Instead, all operations are performed on the concrete types on either end.

There’s one key point that makes this possible: with interface types, the two sides aren’t trying to share a representation. Instead, the default is to copy values between one side and the other.

saying 'since this is a string in linear memory, I know how to manipulate it' and browser saying 'since this is a DOMString, I know how to manipulate it'

There is one case that would seem like an exception to this rule: the new reference values (like anyref) that I mentioned before. In this case, what is copied between the two sides is the pointer to the object. So both pointers point to the same thing. In theory, this could mean they need to share a representation.

In cases where the reference is just passing through the WebAssembly module (like the anyref example I gave above), the two sides still don’t need to share a representation. The module isn’t expected to understand that type anyway… just pass it along to other functions.

But there are times where the two sides will want to share a representation. For example, the GC proposal adds a way to create type definitions so that the two sides can share representations. In these cases, the choice of how much of the representation to share is up to the developers designing the APIs.

This makes it a lot easier for a single module to talk to many different languages.

In some cases, like the browser, the mapping from the interface types to the host’s concrete types will be baked into the engine.

So one set of mappings is baked in at compile time and the other is handed to the engine at load time.

Engine holding Wasm's mapping booklet and its own mapping reference book for Wasm Interface Types to Web IDL, saying 'So this maps to a string? Ok, I can take it from here to the DOMString that the function is asking for using my hardcoded bindings'

But in other cases, like when two WebAssembly modules are talking to each other, they both send down their own little booklet. They each map their functions’ types to the abstract types.

Engine reaching for mapping booklets from two wasm files, saying 'Ok, let's see how these map to each other'

This isn’t the only thing you need to enable modules written in different source languages to talk to each other (and we’ll write more about this in the future) but it is a big step in that direction.

So now that you understand why, let’s look at how.

What do these interface types actually look like?

Before we look at the details, I should say again: this proposal is still under development. So the final proposal may look very different.

Two construction workers with a sign that says 'Use caution'

Also, this is all handled by the compiler. So even when the proposal is finalized, you’ll only need to know what annotations your toolchain expects you to put in your code (like in the wasm-bindgen example above). You won’t really need to know how this all works under the covers.

But the details of the proposal are pretty neat, so let’s dig into the current thinking.

The problem to solve

The problem we need to solve is translating values between different types when a module is talking to another module (or directly to a host, like the browser).

There are four places where we may need to do a translation:

For exported functions

  • accepting parameters from the caller
  • returning values to the caller

For imported functions

  • passing parameters to the function
  • accepting return values from the function

And you can think about each of these as going in one of two directions:

  • Lifting, for values leaving the module. These go from a concrete type to an interface type.
  • Lowering, for values coming into the module. These go from an interface type to a concrete type.

Telling the engine how to transform between concrete types and interface types

So we need a way to tell the engine which transformations to apply to a function’s parameters and return values. How do we do this?

By defining an interface adapter.

For example, let’s say we have a Rust module compiled to WebAssembly. It exports a greeting_ function that can be called without any parameters and returns a greeting.

Here’s what it would look like (in WebAssembly text format) today.

a Wasm module that exports a function that returns two numbers. See proposal linked above for details.

So right now, this function returns two integers.

But we want it to return the string interface type. So we add something called an interface adapter.

If an engine understands interface types, then when it sees this interface adapter, it will wrap the original module with this interface.

an interface adapter that returns a string. See proposal linked above for details.

It won’t export the greeting_ function anymore… just the greeting function that wraps the original. This new greeting function returns a string, not two numbers.

This provides backwards compatibility because engines that don’t understand interface types will just export the original greeting_ function (the one that returns two integers).

How does the interface adapter tell the engine to turn the two integers into a string?

It uses a sequence of adapter instructions.

Two adapter instructions inside of the adapter function. See proposal linked above for details.

The adapter instructions above are two from a small set of new instructions that the proposal specifies.

Here’s what the instructions above do:

  1. Use the call-export adapter instruction to call the original greeting_ function. This is the one that the original module exported, which returned two numbers. These numbers get put on the stack.
  2. Use the memory-to-string adapter instruction to convert the numbers into the sequence of bytes that make up the string. We have to specify “mem” here because a WebAssembly module could one day have multiple memories. This tells the engine which memory to look in. Then the engine takes the two integers from the top of the stack (which are the pointer and the length) and uses those to figure out which bytes to use.

This might look like a full-fledged programming language. But there is no control flow here—you don’t have loops or branches. So it’s still declarative even though we’re giving the engine instructions.

What would it look like if our function also took a string as a parameter (for example, the name of the person to greet)?

Very similar. We just change the interface of the adapter function to add the parameter. Then we add two new adapter instructions.

Here’s what these new instructions do:

  1. Use the arg.get instruction to take a reference to the string object and put it on the stack.
  2. Use the string-to-memory instruction to take the bytes from that object and put them in linear memory. Once again, we have to tell it which memory to put the bytes into. We also have to tell it how to allocate the bytes. We do this by giving it an allocator function (which would be an export provided by the original module).

One nice thing about using instructions like this: we can extend them in the future… just as we can extend the instructions in WebAssembly core. We think the instructions we’re defining are a good set, but we aren’t committing to these being the only instructions for all time.

If you’re interested in understanding more about how this all works, the explainer goes into much more detail.

Sending these instructions to the engine

Now how do we send this to the engine?

These annotations get added to the binary file in a custom section.

A file split in two. The top part is labeled 'known sections, e.g. code, data'. The bottom part is labeled 'custom sections, e.g. interface adapter'

If an engine knows about interface types, it can use the custom section. If not, the engine can just ignore it, and you can use a polyfill which will read the custom section and create glue code.
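
For instance, a polyfill could look for the booklet with the existing WebAssembly.Module.customSections API; the section name used here is purely illustrative:

const module = await WebAssembly.compileStreaming(fetch("demo.wasm"));
const booklets = WebAssembly.Module.customSections(module, "interface-types");
if (booklets.length === 0) {
  // No adapter section found: fall back to the plain numeric exports.
}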

How is this different than CORBA, Protocol Buffers, etc?

There are other standards that seem like they solve the same problem—for example CORBA, Protocol Buffers, and Cap’n Proto.

How are those different? They are solving a much harder problem.

They are all designed so that you can interact with a system that you don’t share memory with—either because it’s running in a different process or because it’s on a totally different machine across the network.

This means that you have to be able to send the thing in the middle—the “intermediate representation” of the objects—across that boundary.

So these standards need to define a serialization format that can efficiently go across the boundary. That’s a big part of what they are standardizing.

Two computers with wasm files on them and multiple lines flowing into a single line connecting them. The single line represents serialization and is labelled 'IR'

Even though this looks like a similar problem, it’s actually almost the exact inverse.

With interface types, this “IR” never needs to leave the engine. It’s not even visible to the modules themselves.

The modules only see what the engine spits out for them at the end of the process—what’s been copied to their linear memory or given to them as a reference. So we don’t have to tell the engine what layout to give these types—that doesn’t need to be specified.

What is specified is the way that you talk to the engine. It’s the declarative language for this booklet that you’re sending to the engine.

Two wasm files with arrows pointing to the word 'IR' with no line between, because there is no serialization happening.

This has a nice side effect: because this is all declarative, the engine can see when a translation is unnecessary—like when the two modules on either side are using the same type—and skip the translation work altogether.

The engine looking at the booklets for a Rust module and a Go module and saying 'Ooh, you’re both using linear memory for this string... I’ll just do a quick copy between your memories, then'

How can you play with this today?

As I mentioned above, this is an early stage proposal. That means things will be changing rapidly, and you don’t want to depend on this in production.

But if you want to start playing with it, we’ve implemented this across the toolchain, from production to consumption:

  • the Rust toolchain
  • wasm-bindgen
  • the Wasmtime WebAssembly runtime

And since we maintain all these tools, and since we’re working on the standard itself, we can keep up with the standard as it develops.

Even though all these parts will continue changing, we’re making sure to synchronize our changes to them. So as long as you use up-to-date versions of all of these, things shouldn’t break too much.

Construction worker saying 'Just be careful and stay on the path'

So here are the many ways you can play with this today. For the most up-to-date version, check out this repo of demos.

Thank you

  • Thank you to the team who brought all of the pieces together across all of these languages and runtimes: Alex Crichton, Yury Delendik, Nick Fitzgerald, Dan Gohman, and Till Schneidereit
  • Thank you to the proposal co-champions and their colleagues for their work on the proposal: Luke Wagner, Francis McCabe, Jacob Gravelle, Alex Crichton, and Nick Fitzgerald
  • Thank you to my fantastic collaborators, Luke Wagner and Till Schneidereit, for their invaluable input and feedback on this article

The post WebAssembly Interface Types: Interoperate with All the Things! appeared first on Mozilla Hacks - the Web developer blog.

The Mozilla BlogMozilla takes action to protect users in Kazakhstan

Today, Mozilla and Google took action to protect the online security and privacy of individuals in Kazakhstan. Together the companies deployed technical solutions within Firefox and Chrome to block the Kazakhstan government’s ability to intercept internet traffic within the country.

The response comes after credible reports that internet service providers in Kazakhstan have required people in the country to download and install a government-issued certificate on all devices and in every browser in order to access the internet. This certificate is not trusted by either of the companies, and once installed, allowed the government to decrypt and read anything a user typed or posted, including intercepting their account information and passwords. This targeted people visiting popular sites such as Facebook, Twitter and Google, among others.

“People around the world trust Firefox to protect them as they navigate the internet, especially when it comes to keeping them safe from attacks like this that undermine their security. We don’t take actions like this lightly, but protecting our users and the integrity of the web is the reason Firefox exists.” — Marshall Erwin, Senior Director of Trust and Security, Mozilla

“We will never tolerate any attempt, by any organization—government or otherwise—to compromise Chrome users’ data. We have implemented protections from this specific issue, and will always take action to secure our users around the world.” — Parisa Tabriz, Senior Engineering Director, Chrome

This is not the first attempt by the Kazakhstan government to intercept the internet traffic of everyone in the country. In 2015, the Kazakhstan government attempted to have a root certificate included in Mozilla’s trusted root store program. After it was discovered that they were intending to use the certificate to intercept user data, Mozilla denied the request. Shortly after, the government forced citizens to manually install its certificate but that attempt failed after organizations took legal action.

Each company will deploy a technical solution unique to its browser. For additional information on those solutions please see the below links.

Mozilla
Google

Russian: Если вы хотите ознакомиться с этим текстом на русском языке, нажмите здесь.

Kazakh: Бұл постыны қазақ тілінде мына жерден оқыңыз.

The post Mozilla takes action to protect users in Kazakhstan appeared first on The Mozilla Blog.

Web Application SecurityProtecting our Users in Kazakhstan

Russian translation: Если вы хотите ознакомиться с этим текстом на русском языке, нажмите здесь.

Kazakh translation: Бұл постыны қазақ тілінде мына жерден оқыңыз.

In July, a Firefox user informed Mozilla of a security issue impacting Firefox users in Kazakhstan: They stated that Internet Service Providers (ISPs) in Kazakhstan had begun telling their customers that they must install a government-issued root certificate on their devices. What the ISPs didn’t tell their customers was that the certificate was being used to intercept network communications. Other users and researchers confirmed these claims, and listed 3 dozen popular social media and communications sites that were affected.

The security and privacy of HTTPS encrypted communications in Firefox and other browsers relies on trusted Certificate Authorities (CAs) to issue website certificates only to someone that controls the domain name or website. For example, you and I can’t obtain a trusted certificate for www.facebook.com because Mozilla has strict policies for all CAs trusted by Firefox which only allow an authorized person to get a certificate for that domain. However, when a user in Kazakhstan installs the root certificate provided by their ISP, they are choosing to trust a CA that doesn’t have to follow any rules and can issue a certificate for any website to anyone. This enables the interception and decryption of network communications between Firefox and the website, sometimes referred to as a Monster-in-the-Middle (MITM) attack.

We believe this act undermines the security of our users and the web, and it directly contradicts Principle 4 of the Mozilla Manifesto that states, “Individuals’ security and privacy on the internet are fundamental and must not be treated as optional.”

To protect our users, Firefox, together with Chrome, will block the use of the Kazakhstan root CA certificate. This means that it will not be trusted by Firefox even if the user has installed it. We believe this is the appropriate response because users in Kazakhstan are not being given a meaningful choice over whether to install the certificate and because this attack undermines the integrity of a critical network security mechanism.  When attempting to access a website that responds with this certificate, Firefox users will see an error message stating that the certificate should not be trusted.

We encourage users in Kazakhstan affected by this change to research the use of virtual private network (VPN) software, or the Tor Browser, to access the Web. We also strongly encourage anyone who followed the steps to install the Kazakhstan government root certificate to remove it from their devices and to immediately change their passwords, using a strong, unique password for each of their online accounts.

The post Protecting our Users in Kazakhstan appeared first on Mozilla Security Blog.

Mozilla Gfx Teammoz://gfx newsletter #47

Hi there! Time for another Mozilla graphics newsletter. In the comments section of the previous newsletter, Michael asked about the relation between WebRender and WebGL; I’ll try to give a short answer here.

Both WebRender and WebGL need access to the GPU to do their work. At the moment both of them use the OpenGL API, either directly or through ANGLE, which emulates OpenGL on top of D3D11. They each work with their own OpenGL context, however. Frames produced with WebGL are sent to WebRender as texture handles. WebRender, at the API level, has a single entry point for images, video frames and canvases, in short for every grid of pixels in some flavor of RGB format, be they CPU-side buffers or already in GPU memory, as is normally the case for WebGL. In order to share textures between separate OpenGL contexts we rely on platform-specific APIs such as EGLImage and DXGI.

Beyond that there isn’t any fancy interaction between WebGL and WebRender. The latter sees the former as an image producer, just like 2D canvases, video decoders and plain static images.

What’s new in gfx

Wayland and hidpi improvements on Linux

  • Martin Stransky made a proof-of-concept implementation of DMABuf textures in Gecko’s IPC mechanism. This dmabuf EGL texture backend on Wayland is similar to what we have on Android/Mac. Dmabuf buffers can be shared with the main/compositor process, can be bound as a render target or texture, and can be located in GPU memory. The same dmabuf buffer can also be used as a hardware overlay when it’s attached to a wl_surface/wl_subsurface as a wl_buffer.
  • Jan Horak fixed a bug that prevented tabs from rendering after restoring a minimized window.
  • Jan Horak fixed the window parenting hierarchy with Wayland.
  • Jan Horak fixed a bug with hidpi that was causing select popups to render incorrectly after scrolling.

WebGL multiview rendering

WebGL’s multiview rendering extension has been approved by the working group and its implementation by Jeff Gilbert will be shipping in Firefox 70.
This extension allows more efficient rendering into multiple viewports, which is most commonly used by VR/AR for rendering both eyes at the same time.

Better high dynamic range support

Jean Yves landed the first part of his HDR work (a set of 14 patches). While we can’t yet output HDR content to an HDR screen, this work greatly improved the correctness of the conversion from various HDR formats to low dynamic range sRGB.

You can follow progress on the color space meta bug.

What’s new in WebRender

WebRender is a GPU-based 2D rendering engine for the web, written in Rust, currently powering Firefox‘s rendering engine as well as the research web browser Servo.

If you are curious about the state of WebRender on a particular platform, up-to-date information is available at http://arewewebrenderyet.com

Speaking of which, darkspirit enabled WebRender in Firefox Nightly for Linux users on Nvidia hardware with Nouveau drivers.

More filters in WebRender

When we run into a primitive that isn’t supported by WebRender, we make it go through a software fallback implementation, which can be slow for some things. SVG filters are a good example of primitives that perform much better when implemented on the GPU in WebRender.
Connor Brewster has been working on implementing a number of SVG filters in WebRender:

See the SVG filters in WebRender meta bug.

Texture swizzling and allocation

WebRender previously only worked with BGRA for color textures. Unfortunately this format is optimal on some platforms but sub-optimal (or even unsupported) on others, so a conversion sometimes has to happen, and this conversion, if done by the driver, can be very costly.

Kvark reworked the texture caching logic to support using and swizzling between different formats (for example RGBA and BGRA).
A document that landed with the implementation provides more details about the approach and context.

Kvark also improved the texture cache allocation behavior.

Kvark also landed various refactorings (1), (2), (3), (4).

Android improvements

Jamie fixed emoji rendering with WebRender on Android and continues investigating driver issues on Adreno 3xx devices.

Displaylist serialization

Dan replaced bincode in our DL IPC code with a new bespoke and private serialization library (peek-poke), ending the terrible reign of the secret serde deserialize_in_place hacks and our fork of serde_derive.

Picture caching improvements

Glenn landed several improvements and fixes to the picture caching infrastructure:

  • Bug 1566712 – Fix quality issues with picture caching when transform has fractional offsets.
  • Bug 1572197 – Fix world clip region for preserve-3d items with picture caching.
  • Bug 1566901 – Make picture caching more robust to float issues.
  • Bug 1567472 – Fix bug in preserve-3d batching code in WebRender.

Font rendering improvements

Lee landed quite a few font related patches:

  • Bug 1569950 – Only partially clear WR glyph caches if it is not necessary to fully clear.
  • Bug 1569174 – Disable embedded bitmaps if ClearType rendering mode is forced.
  • Bug 1568841 – Force GDI parameters for GDI render mode.
  • Bug 1568858 – Always stretch box shadows except for Cairo.
  • Bug 1568841 – Don’t use enhanced contrast on GDI fonts.
  • Bug 1553818 – Use GDI ClearType contrast for GDI font gamma.
  • Bug 1565158 – Allow forcing DWrite symmetric rendering mode.
  • Bug 1563133 – Limit the GlyphBuffer capacity.
  • Bug 1560520 – Limit the size of WebRender’s glyph cache.
  • Bug 1566449 – Don’t reverse glyphs in GlyphBuffer for RTL.
  • Bug 1566528 – Clamp the amount of synthetic bold extra strikes.
  • Bug 1553228 – Don’t free the result of FcPatternGetString.

Various fixes and improvements

  • Gankra fixed an issue with addon popups and document splitting.
  • Sotaro prevented some unnecessary composites on out-of-viewport external image updates.
  • Nical fixed an integer overflow causing the browser to freeze.
  • Nical improved the overlay profiler by showing more relevant numbers when the timings are noisy.
  • Nical fixed corrupted rendering of progressively loaded images.
  • Nical added a fast path when none of the primitives of an image batch need anti-aliasing or repetition.

SeaMonkeyBuild3 has been “candidatized”…

Hi everyone!

Build 3 has been pushed to the regular candidates path.  I’ve coined a ‘new’ term “candidatized”, meaning it’s been sent to the candidates path on archive.mo.

That said, there’s a chance it’ll be dubbed 2.49.5 instead of rc3.

Also, that said, there are a few changes from the previous automation-based releases/candidates; that being, a lot of the files that are supposed to be there aren’t, and files that shouldn’t be there, are. The automation is still not working. The automation that’s in place is manual…  😛

Yes, we’re missing a README file.   No fear…  that’s coming next.  So if you refresh that main build3/ page, it should appear within 24 hrs provided it’s there.  [motif being “it’s there when it’s there.”]

Again, I cannot stress this enough: both IanN and frg were the mainstays of this release. So all-round applause to them!

:ewong

hacks.mozilla.orgUsing WebThings Gateway notifications as a warning system for your home

Ever wonder if that leaky pipe you fixed is holding up? With a trip to the hardware store and a Mozilla WebThings Gateway you can set up a cheap leak sensor to keep an eye on the situation, whether you’re home or away. Although you can look up detector status easily on the web-based dashboard, it would be better to not need to pay attention unless a leak actually occurs. In the WebThings Gateway 0.9 release, a number of different notification mechanisms can be set up, including emails, apps, and text messages.

Leak Sensor Demo

In this post I’ll show you how to set up gateway notifications to warn you of changes in your home that you care about. You can set each notification to one of three levels of severity–low, normal, and high–so that you can identify which are informational changes and which alerts should be addressed immediately (fire! intruder! leak!). First, we’ll choose a device to worry about. Next, we’ll decide how we want our gateway to contact us. Finally, we’ll set up a rule to tell the gateway when it should contact us.

Choosing a device

First, make sure the device you want to monitor is connected to your gateway. If you haven’t added the device yet, visit the Gateway User Guide for information about getting started.

Now it’s time to figure out which things’ properties will lead to interesting notifications. For each thing you want to investigate, click on its splat icon to get a full view of all its properties.

View of all gateway things with splat icon of leak sensor highlighted Detailed Leak Sensor view

You may also want to log properties of various analog devices over time to see what values are “normal”. For example, you can monitor the refrigerator temperature for a couple of days to help determine what qualifies as an abnormal temperature. In this graph, you can see the difference between baseline power draw (around 20 watts) and charging (up to 90 watts).

Graph of laptop charger plug power over the last day with clear differentiation between off, standby, and charging states

Charger Power Consumption Graph

In my case, I’ve selected a leak sensor so I won’t need to log data in advance. It’s pretty clear that I want to be notified when the leak property of my sensor becomes true (i.e., when a leak is detected). If instead you want to monitor a smart plug, you can look at voltage, power, or on/off state. Note that the notification rules you create will let you combine multiple inputs using “and” or “or” logic. For example, you might want to be alerted if indoor motion is detected “and” all of the family smartphone “presence” states are “inactive” (i.e., no one in your family is home, so what caused motion?). Whatever your choice, keep the logical states of your various sensors in mind while you set up your notifier.

Setting up your notifier

The 0.9 WebThings Gateway release added support for notifiers as a specific form of add-on. Thanks to the efforts of the community and a bit of our own work, your gateway can already send you notifications over email, SMS, Telegram, or specialized push notification apps with new add-ons released every week. You can find several notification add-on options by clicking “+” on the Settings > Add-ons page.

Main menu with settings highlighted Settings page with add-ons section highlighted Initial list of installed add-ons without email add-on List of installable add-ons with email sender highlighted List of installed add-ons with link to README of email add-on highlighted

The easiest-to-use notifiers are email and SMS since there are fewer moving parts, but feel free to choose whichever approach you prefer. Follow the configuration instructions in your chosen notifier’s README file. You can get to the README for your notifier by clicking on the author’s name in the add-on list then scrolling down.

You’ll find a complete guide to the email notifier here: https://github.com/mozilla-iot/email-sender-adapter#email-sender-adapter.

Creating a rule

Finally, let’s teach our gateway how and when it should yell for attention. We can set this up in a simple drag-and-drop rule. First, drag your device to the left as a trigger and select the “Leak” property.

Dragging and dropping the leak block into the rule Where to click to open the leak block's select property dropdown Configuring the leak block through a dropdown

Next, drag your notification channel to the right as an effect and configure its title, body, and level as desired.

Illustration of dragging email block into rule Rule after email block is dropped into it Configuring the email part of the rule through a dropdown

Your rule is now set up and ready to go!

Fully configured if leak then send email rule

The finished rule!

You can now manually test it out. For a leak sensor you can just spill a little water on it to make sure you get a text, email, or other notification warning you about a possible scary flood. This is also a perfect time to start experimenting. Can you set up a second, louder notification for when you’re asleep? What about only notifying when you’re at home so you can deal with the leak immediately?

Advanced rule logic where if the leak sensor is active and "phone at home" is true then it sends an email

A more advanced rule

Notifications are just one small piece of the WebThings Gateway ecosystem. We’re trying to build a future where the convenience of a connected life doesn’t require giving up your security and privacy. If you have ideas about how the WebThings Gateway can better orchestrate your home, please comment on Discourse or contribute on GitHub. If your preferred notification channel is missing and you can code, we love community add-ons! Check out the source code of the email add-on for inspiration. Coming up next, we’ll be talking about how you can have a natural spoken dialogue with the WebThings Gateway without sending your voice data to the cloud.

The post Using WebThings Gateway notifications as a warning system for your home appeared first on Mozilla Hacks - the Web developer blog.

Mozilla L10NL10n Report: August Edition

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet. 

Welcome!

New localizers:

  • Mohsin of Assamese (as) is committed to rebuilding the community and has been contributing to several projects.
  • Emil of Syriac (syc) joined us through the Common Voice project.
  • Ratko and Isidora of Serbian (sr) have been prolific contributors to a wide range of products and projects since joining the community.
  • Haile of Amharic (am) joined us through the Common Voice project, and is busy localizing and recruiting more contributors so he can rebuild the community.
  • Ahsun Mahmud of Bengali (bn) focuses his interest on Firefox.

Are you a locale leader and want us to include new members in our upcoming reports? Contact us!

New community/locales added

  • Maltese (mt)
  • Romansh Vallader (rm-vallery)
  • Syriac (syc)

New content and projects

What’s new or coming up in Firefox desktop

We’re quickly approaching the deadline for Firefox 69. The last day to ship your changes in this version is August 20, less than a week away.

A lot of content targeting Firefox 70 has already landed and is available in Pontoon for translation, with more to come in the following days. Here are a few of the areas you should focus your testing on.

about:logins

This is the new password manager for Firefox. If you don’t plan to store the passwords in your browser, you should at least create a new profile to test the feature and its interactions (adding logins, editing, removing, etc.).

Enhanced Tracking Protection (ETP) and Protection Panels

This is going to be the main focus for Firefox 70:

  • New protection panel displayed when clicking the shield icon in the address bar.
  • Updated preferences.
  • New about:protections page. The content of this page will be exposed for localization in the coming days.

With ETP there will be several new terms to define for your language, like “Cross-Site Tracking Cookies” or “Social Media Trackers”. Make sure they’re translated consistently across the products and websites.

The deadline to ship localization for Firefox 70 will be October 8.

What’s new or coming up in mobile

It’s summer vacation time in mobile land, which means most projects are following the usual course of things.

Just like for Desktop, we’re quickly approaching the deadline for Firefox Android v69. The last day to ship your changes in this version is August 20.

Another thing to note is that we’ve exposed strings for Firefox iOS v19 (deadline TBD soon).

Other projects are following the usual continuous localization workflow. Stay tuned for the next report as there will be novelties then for sure!

What’s new or coming up in web projects

Firefox Accounts

A lot of strings landed earlier this month. If you need to prioritize what to localize first, look for string IDs containing `delete_account` or `sync-engines`. Expect more strings to land in the coming weeks.

Mozilla.org

The following files were added or updated since the last report.

  • New: firefox/adblocker.lang and firefox/whatsnew_69.lang (due on August 26)
  • Update: firefox/new/trailhead.lang

The navigation.lang file has been made available for localization for some time. This is a shared file, and its content is in production whether the file is fully localized or not. If it’s not fully translated yet, make sure to give this file higher priority and complete it soon.

What’s new or coming up in Foundation projects

More content from foundation.mozilla.org will be exposed for localization in de, es, fr, pl and pt-BR over the next few weeks! Content is exposed in stages, because the website is built using different technologies, which makes localization challenging. The main pages will be available in the Engagement project, and a new tag can help you find them. Other template strings will be exposed in a separate project later.

donate.mozilla.org is getting an update too! The website is being rebuilt from the ground up with a new system that will make it easier to maintain. The UI won’t change too much, so the copy will mostly remain the same. However, it won’t be possible to migrate the current translations to the new system; instead, we will heavily rely on Pontoon’s translation memory.
Once the new website is ready, the current project in Pontoon will be set to “read only” mode during a transition period and a new project will be enabled.

Please make sure to review any pending suggestions over the next few weeks, so that they get properly added to the translation memory and are ready to be reused in the new project.

What’s new or coming up in SuMo

Newly published articles:

What’s new or coming up in Pontoon

The Translate.Next work moves on. We hope to have it wrapped up by the end of this quarter (i.e., end of September). Help us test by turning on Translate.Next from the Pontoon translation editor.

Newly published localizer facing documentation

Events

  • Want to showcase an event coming up that your community is participating in? Reach out to any l10n-driver and we’ll include that (see links to emails at the bottom of this report)

Friends of the Lion

Know someone in your l10n community who’s been doing a great job and should appear here? Contact one of the l10n-drivers, and we’ll make sure they get a shout-out (see list at the bottom)!

Useful Links

Questions? Want to get involved?

Did you enjoy reading this report? Let us know how we can improve by reaching out to any one of the l10n-drivers listed above.

Mozilla VR BlogWebXR category in JS13KGames!


Today starts the 8th edition of the annual js13kGames competition and we are sponsoring its WebXR category with a bunch of prizes including Oculus Quest headsets!

Like many other game development contests, the main goal of the js13kGames competition is to make a game based on a given theme under a specific amount of time. This year’s theme is "BACK" and the time you have to work on your game is a whole month, from today to September 13th.
There is, of course, another important rule you must follow: the zip containing your game should not weigh more than 13kb. (Please follow this link for the complete set of rules). Don’t let the size restriction discourage you. Previous competitors have done amazing things in 13kb.

This year, as in the previous editions, Mozilla is sponsoring the competition, with special emphasis on the WebXR category, where, among other prizes, the best three games will get an Oculus Quest headset!


Frameworks allowed

Last year you were allowed to use A-Frame and Babylon.js in your game. This year we have been working with the organization to include three.js on that list!
Because these frameworks weigh far more than 13kb, the requirements for this category have been softened: the size of the framework builds won’t count toward the final 13kb limit. The allowed links for each framework to include in your game are the following:

If you feel you can present a WebXR game without using any third-party framework and still keep the whole game within the 13kb limit, you are free to do so, and I’m sure the judges will value that.

You may use any kind of input system: gamepad, gazer, or 3DoF or 6DoF controllers, and we will still be able to test your game on different VR devices. Please indicate in the description what the device and input requirements are for your game.
If you have a standalone headset, please make sure you try your game on Firefox Reality, because we plan to feature the best games of the competition on the Firefox Reality homepage.

Resources

Here are some useful links if you need some help or want to share your progress!

Enjoy and good luck!

Mozilla VR BlogCustom elements for the immersive web

Custom elements for the immersive web

We are happy to introduce the first set of custom elements for the immersive web we have been working on: <img-360> and <video-360>

On the Mixed Reality team, we keep working on improving the content creator experience: building new frameworks, tools and APIs, tuning performance, and so on.
Most of these projects assume that users have a basic knowledge of 3D graphics and want to go deep in fully customizing their WebXR experience (e.g., using A-Frame or three.js).
But there are still many use cases where content creators just want very simple interactions and don’t have the knowledge or time to create and maintain a custom application built on top of a WebXR framework.

With this project we aim to address these content creators’ problems by providing custom elements with simple yet polished features. One could be a simple 360 image or video viewer; another could be a tour allowing the user to jump from one image to another.

Custom elements are a standard way to create HTML elements that offer simple functionality matching the expectations of content creators without knowledge of 3D, WebXR, or even JavaScript.

How does this work?

Just include the JavaScript bundle on your page and you can start using both elements in your HTML: <img-360> and <video-360>. You just need to provide them with a 360 image or video, and the custom elements will do the rest, including detecting WebVR support. Here is a simple example that adds a 360 image and a 360 video to a page; all of the interaction controls are generated automatically.
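
A minimal page using both elements might look like the following sketch. The element names come from this post; the bundle path and the src attribute are assumptions for illustration, so check the documentation on GitHub for the exact usage:

<!-- hypothetical bundle path; see the project's documentation for the real one -->
<script src="https://example.com/mozilla-360-elements.js"></script>

<!-- A 360 photo: the viewer, controls and VR button are generated for you -->
<img-360 src="assets/beach-360.jpg"></img-360>

<!-- A 360 video: playback controls are generated automatically -->
<video-360 src="assets/dive-360.mp4"></video-360>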

You can try a demo here and find detailed information on how to use them on Github.

Next steps

Today we are releasing just these two elements, but we have many others in mind and would love your feedback. What new elements would you find useful? Please join us on GitHub to discuss them.
We are also excited to see other companies working hard on providing quality custom elements for the 3D and XR web, such as Google with their <model-viewer> component, and we hope others will follow.

Mozilla VR BlogA Summer with Particles and Emojis

A Summer with Particles and Emojis

This summer I was very lucky to join Hubs by Mozilla as a technical artist intern. Over the 12 weeks I was at Mozilla, I worked on two different projects.
My first project was about particle systems, something I have always had a great interest in. I developed the particle system feature for Spoke, the 3D editor that lets you easily create a 3D scene and publish it to Hubs.

Particle systems are a technique used across game physics, motion graphics, and other computer graphics related fields. They are usually composed of a large number of small sprites or other objects that simulate some chaotic system or natural phenomenon. Particles can make a huge impact on the visual result of an application, and in virtual and augmented reality they can greatly deepen the immersive feeling.

Particle systems can be incredibly complex, so for this version of the Spoke particle system we wanted to avoid the heavy behaviour controls found in particle systems from native game engines, keeping only the basic attributes that are needed. The Spoke particle system can be separated into two parts: the particles and the emitter. Each particle has a texture/sprite, lifetime, age, size, color, and velocity as its basic attributes. The emitter is simpler, as it only has properties for its width and height and information about the particle count (how many particles it can emit per life cycle).

By changing the particle count and the emitter size, users can easily customize a particle system for different uses, like to create falling snow in a wintry scene or add a small water splash to a fountain.
Changing the emitter size

Changing the number of particles from 100 to 200

You can also change the opacities and the colors of the particles. The actual color and opacity values are interpolated between start, middle and end colors/opacities.

And for the main visuals, you can change the sprites to any image you want, either by using a URL to an image or by choosing from your local assets.

What does a particle’s life cycle look like? Let’s take a look at this chart:
[Chart: a particle’s life cycle]
Every particle is born with a random negative initial age, which can be adjusted through the Age Randomness property. After it’s born, its age keeps growing as time goes by. When its age exceeds the total lifetime (formed by Lifetime and Lifetime Randomness), the particle dies immediately, is re-assigned a negative initial age, and starts over again. The Lifetime here is not the actual lifetime that every particle will live: in order to not have all particles disappear at the same time, the Lifetime Randomness attribute varies the actual lifetime of each particle. The higher the Lifetime Randomness, the larger the differentiation among the actual lifetimes of the whole particle system. There is another attribute called Age Randomness, which is similar to Lifetime Randomness. The difference is that Age Randomness varies the negative initial ages to give variation to the birth of the particles, while Lifetime Randomness gives variation to the end of their lives.
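
One plausible reading of those rules, sketched as code (the property names mirror the attributes above; this is not Spoke’s actual implementation):

function spawnParticle(opts) {
  return {
    // born with a random negative initial age (Age Randomness)
    age: -Math.random() * opts.ageRandomness,
    // each particle gets its own total lifetime (Lifetime Randomness)
    lifetime: opts.lifetime + Math.random() * opts.lifetimeRandomness
  };
}

function updateParticle(particle, dt, opts) {
  particle.age += dt;
  if (particle.age > particle.lifetime) {
    // the particle dies and immediately starts over
    Object.assign(particle, spawnParticle(opts));
  }
}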

Every particle also has velocity properties along the x, y and z axes. By adjusting the velocity in three dimensions, users have better control over the particles’ behaviour, for example to simulate simple phenomena like gravity or wind.
With angular velocity, you can also control the rotation of the particle system, for a more natural and dynamic result.

The velocity, color and size properties all have the option to use different interpolation functions between their start, middle and end stages.

The particle system is officially out on Spoke, so go try it out and let us know what you think!

Avatar Display Emojis

My other project was the avatar emoji display screen in Hubs. I did the design of the emoji images, the UI/UX design, and the actual implementation of the feature. It was a fairly straightforward project: I needed to figure out the style of the emoji display on the chest screen, do some graphic design at the interface level, make decisions about the interaction flow, and implement it all in Hubs.

Evolution of the display emoji design.

We ultimately decided to go with the smooth-edged emoji with some bloom effect.
Final version of the display emoji design

Icon design for the menu user interface

Interaction design using Hubs display styles

Demo:

When you enter pause mode on Hubs, the emoji box will show up, replacing the chat box, and you can change your avatar’s screen to one of the emojis offered.

I want to say thank you to the Hubs team for having me this summer. I learned a lot from all the talented people on the team, especially Robert, Jim, Brian and Greg, who helped me overcome the difficulties I came across. The encouragement and support from the team is the best thing I got this summer. Miss you guys already!

SUMO BlogCommunity Management Update

Hello SUMO community,

I have a couple of announcements today. I’d like you all to welcome our two new community managers.

First off, Kiki has officially joined the SUMO team as a community manager. Kiki has been filling in with Konstantina and Ruben on our social support activities, and we had an opportunity to bring her onto the SUMO team full time starting last week. She will be transitioning out of her responsibilities on the Community Development Team and will continue her work on the social program, as well as managing SUMO days going forward.

In addition, we have hired a new SUMO community manager to join the team. Please welcome Giulia Guizzardi to the SUMO team.

You can find her on the forums as gguizzardi. Below is a short introduction:

Hey everyone, my name is Giulia Guizzardi, and I will be working as a Support Community Manager for Mozilla. 

I am currently based in Berlin, but I was born and raised in the north-east of Italy. I studied Digital Communication in Italy and Finland, and worked for half a year in Poland.

My greatest passion is music: I love participating in festivals and concerts, along with collecting records and listening to new releases all day long. Other than that, I am often online, playing video games (Firewatch at the moment) or scrolling YouTube/Reddit.

I am really excited for this opportunity and happy to work alongside the community!

Now that we have two new community managers, we will work with Konstantina and Ruben to transition their work to Kiki and Giulia. We’re also kicking off work to create a community strategy, which we will be seeking feedback on soon. In the meantime, please help me welcome Kiki and Giulia to the team.

SeaMonkeyBuild numero deux

Hi,

After a long delay, I’ve uploaded build 2 of 2.49.5.

Checksums are there.

Next to be done will be build #3.

:ewong

Web Application SecurityWeb Authentication in Firefox for Android

Firefox for Android (Fennec) now supports the Web Authentication API as of version 68. WebAuthn blends public-key cryptography into web application logins, and is our best technical response to credential phishing. Applications leveraging WebAuthn gain new second-factor and “passwordless” biometric authentication capabilities. Now, Firefox for Android matches our support for Passwordless Logins using Windows Hello. As a result, even while mobile you can still obtain the highest level of anti-phishing account security.

Firefox for Android uses your device’s native capabilities: On certain devices, you can use built-in biometrics scanners for authentication. You can also use security keys that support Bluetooth, NFC, or can be plugged into the phone’s USB port.

The attached video shows the usage of Web Authentication with a built-in fingerprint scanner: The demo website enrolls a new security key in the account using the fingerprint, and then subsequently logs in using that fingerprint (and without requiring a password).

Adoption of Web Authentication by major websites is underway: Google, Microsoft, and Dropbox all support WebAuthn via their respective Account Security Settings’ “2-Step Verification” menu.

A few notes

For technical reasons, Firefox for Android does not support the older, backwards-compatible FIDO U2F Javascript API, which we enabled on Desktop earlier in 2019. For details as to why, see bug 1550625.

Currently Firefox Preview for Android does not support Web Authentication. As Preview matures, Web Authentication will be joining its feature set.


The post Web Authentication in Firefox for Android appeared first on Mozilla Security Blog.

Mozilla Add-ons BlogExtensions in Firefox 69

In our last post, for Firefox 68, we introduced a great number of new features. In contrast, Firefox 69 has only a few new additions. Still, we are proud to present this round of changes to extensions in Firefox.

Better Topsites

The topSites API has received a few additions to better allow developers to retrieve the top sites as Firefox knows them. There are no changes to the defaults, but we’ve added a few options for better querying. The browser.topSites.get() function has two additional options that can be specified to control what sites are returned:

  • includePinned can be set to true to include sites that the user has pinned on the Firefox new tab.
  • includeSearchShortcuts can be set to true to include search shortcuts.

Passing both options allows you to mimic the behavior you will see on the new tab page, where both pinned results and search shortcuts are available.
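
For example, a background-script call using both options might look like this sketch (the extension needs the topSites permission; the logged fields are illustrative):

// Query the top sites the way the new tab page sees them (Firefox 69+).
async function logNewTabTopSites() {
  const sites = await browser.topSites.get({
    includePinned: true,          // include sites pinned on the new tab
    includeSearchShortcuts: true  // include search shortcuts
  });
  for (const site of sites) {
    console.log(site.title, site.url);
  }
}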

User Scripts

This is technically an addition to Firefox 68, but since we didn’t mention it in the last blog post it gets an honorable mention here. In March, we announced that user scripts were coming, and now they are here. Starting with Firefox 68, you can use the userScripts API without needing to set any preferences in about:config.

The great advantage of the userScripts API is that it can run scripts with reduced privileges. Your extension can provide a mechanism to run user-provided scripts with a custom API, avoiding the need to use eval in content scripts. This makes it easier to adhere to the security and privacy standards of our add-on policies. Please see the original post on this feature for an example on how to use the API while we update the documentation.
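
As a rough sketch of what a registration call looks like (see the original post and the userScripts documentation for the authoritative shape; the extension’s manifest also needs a user_scripts key, and the match pattern and runAt value below are only examples):

// Register user-provided code from the extension's background page.
async function registerUserScript(userCode) {
  return browser.userScripts.register({
    matches: ["https://example.com/*"],  // where the script may run
    js: [{ code: userCode }],            // the user-provided source
    runAt: "document_idle"
  });
}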

Miscellaneous

  • The downloads API now correctly supports byExtensionId and byExtensionName for extension-initiated downloads.
  • Clearing site permissions no longer re-prompts the user to accept storage permissions after a restart.
  • Using alert() in a background page will no longer block the extension from running.
  • The proxy.onRequest API now also supports FTP and WebSocket requests.

A round of applause goes to our volunteer contributors Martin Matous, Michael Krasnov, Myeongjun Go, Joe Jalbert, as well as everyone else who has made these additions in Firefox 69 possible.

The post Extensions in Firefox 69 appeared first on Mozilla Add-ons Blog.

Mozilla VR BlogLessons from Hacking Glitch

Lessons from Hacking Glitch

When we first started building MrEd we imagined it would be done as a traditional web service. A potential user goes to a website, creates an account, then can build experiences on the site and save them to the server. We’ve all written software like this before and had a good idea of the requirements. However, as we started actually building MrEd we realized there were additional challenges.

First, MrEd is targeted at students, many of them young. My experience teaching kids during previous summers taught me that they often don’t have email addresses, and even if they do, there are privacy and legal issues around tracking what the students do. Also, we knew that this was an experiment which would end one day, but we didn’t want the students to lose access to this tool they had just learned.

After pondering these problems we thought Glitch might be an answer. It supports anonymous use out of the box and allows easy remixing. It also has a nice CDN built in, great for hosting models and 360 images. If we could host the editor as well as the documents, then Glitch would be the perfect platform for a self-contained tool that lives on after the experiment is done.

The downside of Glitch is that many of its advanced features are undocumented. After much research we figured out how to modify Glitch to solve many problems, so now we’d like to share our solutions with you.

Making a Glitch from a Git Repo

Glitch’s editor is great for editing a small project, but not for building large software. We knew from the start that we’d need to edit on our local machines and store the code in a GitHub repo. The question was how to get that code into Glitch in the first place. It turns out Glitch supports creating a new project from an existing git repo. This was a fantastic advantage.


We could now create a build of the editor and set up the project just how we like, keep it versioned in Git, then make a new Glitch whenever we needed to. We built a new repo called mred-base-glitch specifically for this purpose and documented the steps to use it in the readme.

Integrating React

MrEd is built in React, so the next challenge was how to get a React app into Glitch. During development we ran the app locally using a hot-reloading dev server. For final production, however, we needed static files that could be hosted anywhere. Since our app was made with create-react-app we could build a static version with npm run build. The problem is that this requires you to set the homepage property in your package.json, which is used to calculate the final URL references. This wouldn’t work for us because someone’s Glitch could be renamed to anything. The solution was to set homepage to ".", so that all URLs are relative.
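
The relevant fragment of package.json is tiny; every field below except homepage is illustrative:

{
  "name": "mred",
  "homepage": ".",
  "scripts": {
    "build": "react-scripts build"
  }
}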

Next we wanted the editor to be hidden. In Glitch, the user has a file list on the left side of the editor. While it’s fine to have assets and scripts be visible, we wanted the generated React code to be hidden. It turns out Glitch will hide any directory whose name begins with a dot, so in our base repo we put the code into public/.mred.

Finally we had the challenge of how to update the editor in an existing glitch without overwriting assets and documents the user had created.

Rather than putting everything into one git repo we made two. The first repo, mred, contains just the code to build the editor in React. The second repo, mred-base-glitch, contains the default documents and behaviors. This second repo integrates the first one as a git submodule. The compiled version of the editor also lives in the mred repo in the build directory. This way both the source and compiled versions of the editor can be versioned in git.

Whenever you want to update the editor in an existing glitch you can go to the Glitch console and run git submodule init and git submodule update to pull in just the editor changes. Then you can update the glitch UI with refresh. While this is a manual step, the students were able to do it easily with instructions.
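
The whole update sequence, as run in the Glitch console, looks roughly like this:

# first time only: register the editor submodule
git submodule init
# pull in the editor commit pinned by the base repo
git submodule update
# tell the Glitch UI to pick up the new files
refresh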

Loading documents

The editor is a static React app hosted in the user’s Glitch, but it needs somewhere to save documents created in the editor. Glitch doesn’t provide an API for programmatically loading and saving documents, but any glitch can have a NodeJS server in it, so we built a simple document server with express. The doc server scans the documents and scripts directories to produce a JSON API that the editor consumes.

For the launch page we wanted the user to see a list of their current projects before opening the editor. For this, the doc server has a route at / which returns a webpage containing the list as links. For URLs that need to be absolute, the server uses a magic variable provided by Glitch to determine the hostname: process.env.PROJECT_DOMAIN.
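
A stripped-down sketch of such a doc server follows. Only express, the / route, and process.env.PROJECT_DOMAIN come from the description above; the /docs route, file layout, and doc query parameter are illustrative:

const express = require('express');
const fs = require('fs');
const app = express();

// JSON API consumed by the editor: list the saved documents
app.get('/docs', (req, res) => {
  res.json(fs.readdirSync('documents'));
});

// Launch page: list the user's projects as links, using Glitch's
// PROJECT_DOMAIN variable to build absolute URLs
app.get('/', (req, res) => {
  const host = `https://${process.env.PROJECT_DOMAIN}.glitch.me`;
  const links = fs.readdirSync('documents')
    .map(doc => `<li><a href="${host}/.mred/?doc=${doc}">${doc}</a></li>`)
    .join('');
  res.send(`<ul>${links}</ul>`);
});

app.listen(process.env.PORT || 3000);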

The assets were a bit trickier than scripts and docs. The editor needs a list of available assets, but we can’t just scan the assets directory because assets aren’t actually stored in your Glitch. Instead they live on Glitch’s CDN using long generated URLs. However, the Glitch does have a hidden file called .glitch-assets which lists all of the assets as a JSON doc, including the mime types.

We discovered that a few of the file types students wanted to use, like GLBs and WAVs, aren’t recognized by Glitch. You can still upload these files to the CDN, but the .glitch-assets file won’t list the correct mime type, so our little doc server also calculates new mime types for these files.
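
A sketch of that fix-up, assuming .glitch-assets is newline-delimited JSON with name and type fields (our reading of the file; verify against your own glitch):

const fs = require('fs');
const path = require('path');

// mime types Glitch doesn't fill in for these extensions
const EXTRA_TYPES = { '.glb': 'model/gltf-binary', '.wav': 'audio/wav' };

function loadAssets() {
  return fs.readFileSync('.glitch-assets', 'utf8')
    .split('\n')
    .filter(Boolean)
    .map(line => JSON.parse(line))
    .map(asset => {
      const ext = path.extname(asset.name || '').toLowerCase();
      if (EXTRA_TYPES[ext]) asset.type = EXTRA_TYPES[ext];
      return asset;
    });
}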

Having a tiny document server in the Glitch gave us a lot of flexibility to fix bugs and implement missing features. It was definitely a design win.

User Authentication

Another challenge with using Glitch is user authentication. Glitch has a concept of users and will not let a user edit someone else’s glitch without permission, but this user system is not exposed as an API. Our code had no way to know whether the person interacting with the editor was the owner of that glitch. There are rumors of such a feature in the future, but for now we made do with a password file.

It turns out glitches can have a special file called .env for storing passwords and other secure environment variables. This file can be read by code running in the glitch, but it is not copied when remixing, so if someone remixes your glitch they won’t find out your password. To use this, we require students to set a password as soon as they remix the base glitch. The doc server then uses the password to authenticate communication with the editor.
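
In express terms the check can be a tiny middleware. This is a sketch; the header name and the PASSWORD variable name are assumptions:

// Reject editor requests that don't carry the password from .env
function requirePassword(req, res, next) {
  if (req.get('x-doc-password') === process.env.PASSWORD) {
    return next();
  }
  res.status(401).send('missing or wrong password');
}

// e.g. app.post('/docs/:name', requirePassword, saveDocHandler);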

Future Features

We managed to really modify Glitch to support our needs and it worked quite well. That said, there are a few features we’d like them to add in the future.

Documentation. Almost everything we did above came after lots of research in the support forums, and help from a few Glitch staffers. There is very little official documentation of how to do anything beyond basic project development. It would be nice if there was an official docs site that went beyond the FAQs.

A real authentication API. Using the .env file was a nice hack, but it would be nice if the editor itself could respond properly to the user. If the user isn’t logged in it could show a play only view of the experience. If the user is logged in but isn’t the owner of the glitch then it could show a remix button.

A way to populate assets programmatically. Everything you see in a glitch when you clone from GitHub comes from the underlying git repo except for the assets. To create a glitch with a pre-set list of assets (say for doing specific exercises in a class) requires manually uploading the files through the visual interface. There is no way (at least that we could find) to store the assets in the git repo or upload them programmatically.

Overall Glitch worked very well. We got an entire visual editor, assets, and document storage into a single conceptual chunk -- a glitch -- that can be shared and remixed by anyone. We couldn’t have done what we needed on such a short timeline without Glitch. We thank you Glitch Team!

Mozilla VR BlogHubs July Update

Hubs July Update

We’ve introduced new features that make it easier to moderate and share your Hubs experience. July was a busy month for the team, and we’re excited to share some updates! As the community around Hubs has grown, we’ve had the chance to see the different ways that groups meet in Hubs, and to explore new ways for groups to choose what type of experience they want to have. Different communities have different needs for how they meet in Hubs, and we think these features are a step towards helping people get co-present together in virtual spaces in the way that works best for them.

Room-Level Permissions
It is now possible for room owners to specify which features are granted to other users in the room. This allows the owner of the room to decide if people can add media to the room, draw with the pen, pin objects, and create cameras. If you’re using Hubs for a meeting or event with a large number of attendees, this can help keep the room organized and free from distractions.


Promoting Moderators
For groups that hold larger events in Hubs, there is now the ability to promote other users in a Hubs room to have the capabilities of the room owner. If you’ve been creating rooms using the Hubs Discord bot, you may already be familiar with rooms that have multiple owners. This feature can be especially valuable for groups that have a core set of administrators who are available in the room to help moderate and keep events running smoothly. Room owners can promote other users to moderators by opening up the user list, selecting the user from the list, then clicking ‘Promote’ on the action list. You should only promote trusted users to moderator, since they’ll have the same permissions as you do as the room owner. Users must be signed in to be promoted.

Camera Mode
Room owners can now hide the Hubs user interface by enabling camera mode, which was designed for groups that want to have a member in the room record or livestream their gathering. When in camera mode, the room owner broadcasts the view from their avatar, which replaces the Lobby camera, and non-essential UI elements are hidden. The full UI can be hidden by clicking the ‘Hide All’ button, which allows for a clear, unobstructed view of what’s going on in the room.

Video Recording
The camera tool in Hubs can now be used to record videos as well as photos. When a camera is created in the room, you can toggle the different recording options using the UI on the camera itself. Like photos, videos taken with the in-room camera are added to the room after they finish capturing. Audio for videos is recorded from the position of the avatar of the user who is recording. While recording video on a camera, users will have an indicator on the display name above their head to show that they are capturing video. The camera itself also contains a light to indicate when it is recording.


Tweet from Hubs
For users who want to share their photos, videos, and rooms through Twitter, you can now tweet from directly inside of Hubs when media is captured in a room. When you hover over a photo or video that was taken by the in-room camera, you will see a blue ‘Tweet’ button appear. The first time you share an image or video through Twitter, you will be prompted to authenticate to your Twitter account. You can review the Hubs Privacy Policy and third-party notices here, and revoke access to Hubs from your Twitter account by going to https://twitter.com/settings/applications.

Embed Hubs Rooms
You can now embed a Hubs room directly into another web page in an iFrame. When you click the 'Share' button in a Hubs room, you can copy the embed code and paste it into the HTML on another site. Keep in mind that this means anyone who visits that page will be able to join!
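
The embed code copied from the 'Share' button is authoritative, but it will look something like this sketch (the room URL and permission list are illustrative):

<iframe
  src="https://hubs.mozilla.com/abc123/my-room"
  width="800" height="450"
  allow="microphone; camera; vr; speaker">
</iframe>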


Discord Bot Notifications
If you have the Hubs Discord bot in your server and bridged to a channel, you can now set a reminder to notify you of a future event or meeting. Just type in the command !hubs notify set mm/dd/yyyy and your time zone, and the Hubs Bot will post a reminder when the time comes around.

Microphone Level Indicator
Have you ever found yourself wondering if other people in the room could hear you, or forgotten that you were muted? The microphone icon in the HUD now shows mic activity level, regardless of whether or not you have your mic muted. This is a handy little way to make sure that your microphone is picking up your audio, and a nice reminder that you’re talking while muted.

In the coming months, we will be continuing work on new features aimed at enabling communities to get together easily and effectively. We’ll also be exploring improvements to the avatar customization flow and new features for Spoke to improve the tools available to creators to build their own spaces for their Hubs rooms. To participate in the conversation about new features and join our weekly community meetups, join us on Discord using the invitation link here.

Open Policy & AdvocacyMozilla calls for transparency in compelled access case

Sometime last year, Facebook challenged a law enforcement request for access to encrypted communications through Facebook Messenger, and a federal judge denied the government’s demand. At least, that is what has been reported by the press. Troublingly, the details of this case are still not available to the public, as the opinion was issued “under seal.” We are trying to change that.

Mozilla, with Atlassian, has filed a friend of the court brief in a Ninth Circuit appeal arguing for unsealing portions of the opinion that don’t reveal sensitive or proprietary information or, alternatively, for releasing a summary of the court’s legal analysis. Our common law legal system is built on precedent, which depends on the public availability of court opinions for potential litigants and defendants to understand the direction of the law. This opinion would have been only the third since 2003 offering substantive precedent on compelled access—thus especially relevant input on an especially serious issue.

This case may have important implications for the current debate about whether and under what circumstances law enforcement can access encrypted data and encrypted communications. The opinion, if disclosed, could help all kinds of tech companies push back on overreaching law enforcement demands. We are deeply committed to building secure products and establishing transparency and control for our users, and this information is vital to enabling those ends. As thoughtful, mission-driven engineers and product designers, it’s critical for us as well as end users to understand the legal landscape around what the government can and cannot require.

The post Mozilla calls for transparency in compelled access case appeared first on Open Policy & Advocacy.

hacks.mozilla.orgNew CSS Features in Firefox 68

Firefox 68 landed earlier this month with a bunch of CSS additions and changes. In this blog post we will take a look at some of the things you can expect to find that might have been missed in earlier announcements.

CSS Scroll Snapping

The headline CSS feature this time round is CSS Scroll Snapping. I won’t spend much time on it here as you can read the blog post for more details. The update in Firefox 68 brings the Firefox implementation in line with Scroll Snap as implemented in Chrome and Safari. In addition, it removes the old properties which were part of the earlier Scroll Snap Points Specification.

The ::marker pseudo-element

The ::marker pseudo-element lets you select the marker box of a list item. This will typically contain the list bullet, or a number. If you have ever used an image as a list bullet, or wrapped the text of a list item in a span in order to have different bullet and text colors, this pseudo-element is for you!

With the ::marker pseudo-element, you can target the bullet itself. The following code turns the bullets on unordered lists hot pink, and makes the numbers on ordered list items larger and blue.

ul ::marker {
  color: hotpink;
}

ol ::marker {
  color: blue;
  font-size: 200%;
}
An ordered and unordered list with styled bullets

With ::marker we can style our list markers

See the CodePen.

There are only a few CSS properties that may be used on ::marker. These include all font properties, so you can change the font-size or font-family to be something different from the text. You can also color the bullets as shown above, and insert generated content.

Using ::marker on non-list items

A marker can only be shown on list items; however, you can turn any element into a list item by using display: list-item. In the example below I use ::marker, along with generated content and a CSS counter. This code outputs the step number before each h2 heading in my page, preceded by the word “Step”. You can see the full example on CodePen.

h2 {
  display: list-item;
  counter-increment: h2-counter;
}

h2::marker {
  content: "Step: " counter(h2-counter) ". ";
}

If you take a look at the bug for the implementation of ::marker you will discover that it is 16 years old! You might wonder why a browser has 16 year old implementation bugs and feature requests sitting around. To find out more read through the issue, where you can discover that it wasn’t clear originally if the ::marker pseudo-element would make it into the spec.

There were some Mozilla-specific pseudo-elements that achieved the result developers were looking for with something like ::marker. The pseudo-elements ::-moz-list-bullet and ::-moz-list-number allowed for the styling of bullets and numbers respectively, using a -moz- vendor prefix.

The ::marker pseudo-element is standardized in CSS Lists Level 3 and CSS Pseudo-Elements Level 4, and is currently implemented in Firefox 68 and Safari. Chrome has yet to implement ::marker. However, in most cases you should be able to use ::marker as an enhancement for those browsers which support it, allowing the markers to fall back to the same color and size as the rest of the list text where it is not available.

CSS Fixes

It makes web developers sad when we run into a feature which is supported but works differently in different browsers. These interoperability issues are often caused by the sheer age of the web platform. In fact, some things were never fully specified in terms of how they should work. Many changes to our CSS specifications are made due to these interoperability issues. Developers depend on the browsers to update their implementations to match the clarified spec.

Most browser releases contain fixes for these issues, making the web platform incrementally better as there are fewer issues for you to run into when working with CSS. The latest Firefox release is no different – we’ve got fixes for the ch unit, and list numbering shipping.

Developer Tools

In addition to changes to the implementation of CSS in Firefox, Firefox 68 brings you some great new additions to Developer Tools to help you work with CSS.

In the Rules Panel, look for the new print styles button. This button allows you to toggle to the print styles for your document, making it easier to test a print stylesheet that you are working on.

The Print Styles button in the UI highlighted

The print styles icon is top right of the Rules Panel.


Staying with the Rules Panel, Firefox 68 shows an icon next to any invalid or unsupported CSS. If you have ever spent a lot of time puzzling over why something isn’t working, only to realise you made a typo in the property name, this will really help!

A property flagged as invalid in the Rules Panel

In this example I have spelled padding as “pudding”. There is (sadly) no pudding property, so it is highlighted as an error.


The console now shows more information about CSS errors and warnings. This includes a nodelist of places the property is used. You will need to click CSS in the filter bar to turn this on.

The console highlighting a CSS error

My pudding error is highlighted in the Console and I can see I used it on the body element.


So that’s my short roundup of the features you can start to use in Firefox 68. Take a look at the Firefox 68 release notes to get a full overview of all the changes and additions that Firefox 68 brings you.

The post New CSS Features in Firefox 68 appeared first on Mozilla Hacks - the Web developer blog.

Mozilla VR BlogMrEd, an Experiment in Mixed Reality Editing

MrEd, an Experiment in Mixed Reality Editing

We are excited to tell you about our experimental Mixed Reality editor, an experiment we did in the spring to explore online editing of MR stories. What’s that? You haven’t heard of MrEd? Well, please allow us to explain.


For the past several months Blair, Anselm and I have been working on a visual editor for WebXR called the Mixed Reality Editor, or MrEd. We started with this simple premise: non-programmers should be able to create interactive stories and experiences in Mixed Reality without having to embrace the complexity of game engines and other general purpose tools. We are not the first people to tackle this challenge; from visual programming tools to simplified authoring environments, researchers and hobbyists have grappled with this problem for decades.

Looking beyond Mixed Reality, there have been notable successes in other media. In the late 1980s Apple created a groundbreaking tool for the Macintosh called Hypercard. It let people visually build applications at a time when programming the Mac required Pascal or assembly. It did this by using the concrete metaphor of a stack of cards. Anything could be turned into a button that would jump the user to another card. Within this simple framework people were able to create eBooks, simple games, art, and other interactive applications. Hypercard’s reliance on declaring possibly large numbers of “visual moments” (cards) and using simple “programming” to move between them is one of the inspirations for MrEd.

We also took inspiration from Twine, a web-based tool for building interactive hypertext novels. In Twine, each moment in the story (seen on the screen) is defined as a passage in the editor as a mix of HTML content and very simple programming expressions executed when a passage is displayed, or when the reader follows a link. Like Hypercard, the author directly builds what the user sees, annotating it with small bits of code to manage the state of the story.

No matter what the medium — text, pictures, film, or MR — people want to tell stories. Mixed Reality needs tools that let people easily tell stories by focusing on the story, not on writing a simulation. It needs content-focused tools for authors, not programmers. This is what MrEd tries to be.

Scenes Linked Together

At first glance, MrEd looks a lot like other 3D editors, such as Unity3D or Amazon Sumerian. There is a scene graph on the left, letting authors create scenes, add anchors and attach content elements under them. Select an item in the graph or in the 3D window, and a property pane appears on the right. Scripts can be attached to objects. And so on. You can position your objects in absolute space (good for VR) or relative to other objects using anchors. An anchor lets you do something like look for this poster in the real world, then position this text next to it, or look for this GPS location and put this model on it. Anchors aren’t limited to basic positioning; they can also express more semantically meaningful concepts like find the floor and put this on it (we’ll dig into this in another article).

Dig into the scene graph on the left, and differences appear. Instead of editing a single world or game level, MrEd uses the metaphor of a series of scenes (inspired by Twine’s passages and Hypercard’s cards). All scenes in the project are listed, with each scene defining what you see at any given point: shapes, 3D models, images, 2D text and sounds. You can add interactivity by attaching behaviors to objects for things like ‘click to navigate’ and ‘spin around’. The story advances by moving from scene to scene; code to keep track of story state is typically executed on these scene transitions, like Hypercard and Twine. Where most 3D editors force users to build simulations for their experiences, MrEd lets authors create stories that feel more like “3D flip-books”. Within a scene, the individual elements can be animated, move around, and react to the user (via scripts), but the story progresses by moving from scene to scene. While it is possible to create complex individual scenes that begin to feel like a Unity scene, simple stories can be told through sequences of simple scenes.

We built MrEd on Glitch.com, a free web-based code editing and hosting service. With a little hacking we were able to put an entire IDE and document server into a glitch. This means anyone can share and remix their creations with others.

One key feature of MrEd is that it is built on top of a CRDT data structure to enable editing the same project on multiple devices simultaneously. This feature is critical for Mixed Reality tools because you are often switching between devices during development; the networked CRDT underpinnings also mean that logging messages from any device appear in any open editor console viewing that project, simplifying distributed development. We will tell you more details about the CRDT and Glitch in future posts.

We ran a two-week class with a group of younger students in Atlanta using MrEd. The students were very interested in telling stories about their school, situating content in space around the buildings, and often using memes and ideas that were popular with them. We collected feedback on features, bugs and improvements, and learned a lot from how the students wanted to use our tool.

Lessons Learned

As I said, this was an experiment, and no experiment is complete without reporting on what we learned. So what did we learn? A lot! And we are going to share it with you over the next couple of blog posts.

First, we learned that the idea of building a 3D story from a sequence of simple scenes worked for novice MR authors: direct manipulation with concrete metaphors, navigation between scenes as a way of telling stories, and the ability to easily import images and media from other places. The students were able to figure it out. Even more complex AR concepts like image targets and geospatial anchors were understandable when turned into concrete objects.

MrEd’s behavior scripts are each a separate JavaScript file, and MrEd generates the property sheet from the definition of the behavior in the file, much like Unity’s behaviors. Compartmentalizing them in separate files means they are easy to update and share, and (like Unity) simple scripts are a great way to add interactivity without requiring complex coding. We leveraged JavaScript’s runtime code parsing and execution to support scripts with simple code snippets as parameters (e.g., when the user finds a clue by getting close to it, a proximity behavior can set a global state flag to true, without requiring a new script to be written), while still giving authors the option to drop down to JavaScript when necessary.

Second, we learned a lot about building such a tool. We really pushed Glitch to the limit, including using undocumented APIs, to create an IDE and doc server that is entirely remixable. We also built a custom CRDT to enable shared editing. Being able to jump back and forth between a full 2D browser with a keyboard and the XR Viewer running on an ARKit-enabled iPhone is really powerful. The CRDT implementation makes this type of real-time shared editing possible.

Why we are done

MrEd was an experiment in whether XR metaphors can map cleanly to a Hypercard-like visual tool. We are very happy to report that the answer is yes. Now that our experiment is over we are releasing it as open source, and have designed it to run in perpetuity on Glitch. While we plan to do some code updates for bug fixes and supporting the final WebXR 1.0 spec, we have no current plans to add new features.

Building a community around a new platform is difficult and takes a long time. We realized that our charter isn’t to create platforms and communities. Our charter is to help more people make Mixed Reality experiences on the web. It would be far better for us to help existing platforms add WebXR than for us to build a new community around a new tool.

Of course the source is open and freely usable on Github. And of course anyone can continue to use it on Glitch, or host their own copy. Open projects never truly end, but our work on it is complete. We will continue to do updates as the WebXR spec approaches 1.0, but there won’t be any major changes.

Next Steps

We are going to polish up the UI and fix some remaining bugs. MrEd will remain fully usable on Glitch, and hackable on GitHub. We also want to pull some of the more useful chunks into separate components, such as the editing framework and the CRDT implementation. And most importantly, we are going to document everything we learned over the next few weeks in a series of blogs.

If you are interested in integrating WebXR into your own rapid prototyping / educational programming platform, then please let us know. We are very happy to help you.

You can try MrEd live by remixing the Glitch and setting a password in the .env file. You can get the source from the main MrEd github repo, and the source for the glitch from the base glitch repo.

Mozilla VR BlogFirefox Reality for Oculus Quest

Firefox Reality for Oculus Quest

We are excited to announce that Firefox Reality is now available for the Oculus Quest!

Following our releases for other 6DoF headsets including the HTC Vive Focus Plus and Lenovo Mirage, we are delighted to bring the Firefox Reality VR web browsing experience to Oculus' newest headset.

Whether you’re watching immersive video or meeting up with friends in Mozilla Hubs, Firefox Reality takes advantage of the Oculus Quest’s boost in performance and capabilities to deliver the best VR web browsing experience. Try the new featured content on the FxR home page or build your own to see what you can do in the next generation of standalone virtual reality headsets.

Enhanced Tracking Protection Blocks Sites from Tracking You
To protect our users from the pervasive tracking and collection of personal data by ad networks and tech companies, Firefox Reality has Enhanced Tracking Protection enabled by default. We strongly believe privacy shouldn’t be relegated to optional settings. As an added bonus, these protections work in the background and actually increase the speed of the browser.

Firefox Reality is available in 10 different languages, including Japanese, Korean, Simplified Chinese and Traditional Chinese, with more on the way. You can also use your voice to search the web instead of typing, making it faster and easier to get where you want to go.

Stay tuned in the coming months as we roll out support for the nearly-final WebXR specification, multi-window browsing, bookmarks sync, additional language support and other exciting new features.

Like all Firefox browser products, Firefox Reality is available for free in the Oculus Quest store.

For more information: https://mixedreality.mozilla.org/firefox-reality/

hacks.mozilla.orgWebThings Gateway for Wireless Routers

Wireless Routers

In April we announced that the Mozilla IoT team had been working on evolving WebThings Gateway into a full software distribution for consumer wireless routers.

Today, with the 0.9 release, we’re happy to announce the availability of the first experimental builds for our first target router hardware, the Turris Omnia.


Turris Omnia wireless router. Source: turris.cz

These builds are based on the open source OpenWrt operating system. They feature a new first-time setup experience which enables you to configure the gateway as a router and Wi-Fi access point itself, rather than connecting to an existing Wi-Fi network.

Router first time setup

So far, these experimental builds only offer extremely basic router configuration and are not ready to replace your existing wireless router. This is just our first step along the path to creating a full software distribution for wireless routers.

Router network settings

We’re planning to add support for other wireless routers and router developer boards in the near future. We want to ensure that the user community can access a range of affordable developer hardware.

Raspberry Pi 4

As well as these new OpenWrt builds for routers, we will continue to support the existing Raspbian-based builds for the Raspberry Pi. In fact, the 0.9 release is also the first version of WebThings Gateway to support the new Raspberry Pi 4. You can now find a handy download link on the Raspberry Pi website.


Raspberry Pi 4 Model B. Source: raspberrypi.org

Notifier Add-ons

Another feature landing in the 0.9 release is a new type of add-on called notifier add-ons.


In previous versions of the gateway, the only way you could be notified of events was via browser push notifications. Unfortunately, this is not supported by all browsers, nor is it always the most convenient notification mechanism for users.

A workaround was possible by creating add-ons with basic “send notification” actions to implement different types of notifications. However, these required the user to add “things” to their gateway which didn’t represent actual devices, and actions had to be hard-coded in the add-on’s configuration.

To remedy this, we have introduced notifier add-ons. Essentially, a notifier creates a set of “outlets”, each of which can be used as an output for a rule. For example, you can now set up a rule to send you an SMS or an email when motion is detected in your home. Notifiers can be configured with a title, a message and a priority level. This allows users to be reached where and how they want, with a message and priority that makes sense to them.

Rule with email notification

API Changes

For developers, the 0.9 release of the WebThings Gateway and 0.12 release of the WebThings Framework libraries also bring some small changes to Thing Descriptions. This will bring us more in line with the latest W3C drafts.

One small difference to be aware of is that “name” is now called “title”. There are also some experimental new base, security and securityDefinitions properties of the Thing Descriptions exposed by the gateway, which are still under active discussion at the W3C.
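
For example, a minimal Thing Description fragment using the renamed field might look like this sketch (all fields other than title are illustrative):

{
  "title": "My Lamp",
  "@type": ["OnOffSwitch", "Light"],
  "properties": {
    "on": {
      "title": "On/Off",
      "type": "boolean"
    }
  }
}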

Give it a try!

We invite you to download the new WebThings Gateway 0.9 and continue to build your own web things with the latest WebThings Framework libraries. If you already have WebThings Gateway installed on a Raspberry Pi, it should update itself automatically.

As always, we welcome your feedback on Discourse. Please submit issues and pull requests on GitHub.

The post WebThings Gateway for Wireless Routers appeared first on Mozilla Hacks - the Web developer blog.

Mozilla Add-ons BlogUpcoming deprecations in Firefox 70

Several planned code deprecations for Firefox 70, currently available on the Nightly pre-release channel, may impact extension and theme developers. Firefox 70 will be released on October 22, 2019.

Aliased theme properties to be removed

In Firefox 65, we started deprecating the aliased theme properties accentcolor, textcolor, and headerURL. These properties will be removed in Firefox 70.

Themes listed on addons.mozilla.org (AMO) will be automatically updated to use supported properties. Most themes were updated back in April, but new themes have been created using the deprecated properties. If your theme is not listed on AMO, or if you are the developer of a dynamic theme, please update your theme’s manifest.json to use the supported properties (a sketch follows the list below).

  • For accentcolor, please use frame
  • For headerURL, please use theme_frame
  • For textcolor, please use tab_background_text
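
Put together, a minimal theme section of manifest.json using only the supported properties looks roughly like this (the colors and image path are placeholders):

{
  "theme": {
    "images": {
      "theme_frame": "images/header.png"
    },
    "colors": {
      "frame": "#333333",
      "tab_background_text": "#ffffff"
    }
  }
}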

JavaScript deprecations

In Firefox 70, the non-standard, Firefox-specific Array generic methods introduced with JavaScript 1.6 will be considered deprecated and scheduled for removal in the near future. For more information about which generics will be removed and suggested alternatives, please see the Firefox Site Compatibility blog.

The Site Compatibility working group also intends to remove the non-standard toSource() prototype methods and the uneval() function by the end of 2019.

The post Upcoming deprecations in Firefox 70 appeared first on Mozilla Add-ons Blog.

The Mozilla BlogEmpowering voters to combat election manipulation

For the last year, Mozilla has been looking for ways to empower voters in light of the shifts in election dynamics caused by the internet and online advertising. This work included our participation in the EU’s Code of Practice on Disinformation to push for change in the industry, which led to the launch of the Firefox EU Elections toolkit. The toolkit gave people information on the voting process, on how tracking and opaque online advertising influence their voting behavior, and on how they can easily protect themselves.

We also had hoped to lend our technical expertise to create an analysis dashboard that would help researchers and journalists monitor the elections. The dashboard would gather data on the political ads running on various platforms and provide a concise “behind the scenes” look at how these ads were shared and targeted.

But to achieve this we needed the platforms to follow through on their own commitment to make the data available through their Ad Archive APIs.

Here’s what happened.

Platforms didn’t supply sufficient data

On March 29, Facebook began releasing its political ad data through a publicly available API. We quickly concluded the API was inadequate.

  • Targeting information was not available.
  • Bulk data access was not offered.
  • Data wasn’t tagged properly.
  • Identical searches would produce wildly differing results.

The state of the API made it nearly impossible to extract the data needed to populate the dashboard we were hoping to create to make this information more accessible.

And although Google didn’t provide the targeting criteria advertisers use on the platform, it did provide access to the data in a format that allowed for real research and analysis.

That was not the case for Facebook.

So then what?

It took the entire month of April to figure out ways to work within, or rather around, the API to collect any information about the political ads running on the Facebook platform.

After several weeks, hundreds of hours, and thousands of keystrokes, the Mozilla team created the EU Ad Transparency Reports. The reports contained aggregated statistics on spending and impressions about political ads on Facebook, Instagram, Google, and YouTube.

While this was not the dynamic tool we had envisioned at the beginning of this journey, we hoped it would help.

But despite our best efforts to help Facebook debug their system, the API broke again from May 18 through May 26, making it impossible to use the API and generate any reports in the last days leading up to the elections.

All of this was documented through dozens of bug reports provided to Facebook, identifying ways the API needed to be fixed.

A Roadmap for Facebook

Ultimately, our contribution to this effort ended up looking very different from what we had first set out to do. Instead of a tool, we have detailed documentation of every time the API failed, every roadblock encountered, and a series of tips and tricks to help others use the API.

This documentation provides Facebook a clear roadmap to make the necessary improvements for a functioning and useful API before the next election takes place. The EU elections have passed, but the need for political messaging transparency has not.

In fact, important elections are expected to take place almost every month until the end of the year, and Facebook has recently rolled this tool out globally.

We need Facebook to be better. We need an API that actually helps – not hinders – researchers and journalists uncover who is buying ads, how these ads are being targeted, and to whom they’re being served. It’s this important work that informs the public and policymakers about the nature and consequences of misinformation.

This is too important to get wrong. That is why we plan to continue our work on this matter and continue to work with those pushing to shine a light on how online advertising impact elections.

The post Empowering voters to combat election manipulation appeared first on The Mozilla Blog.

QMOFirefox Nightly 70 Testday Results

Hello Mozillians!

As you may already know, last Friday – July 19th – we held a new Testday event, for Firefox Nightly 70.

Thank you all for helping us make Mozilla a better place: gaby2300, maria plachkova and Fernando noelonassis.

Results:

– several test cases executed for Fission. 

– 1 bug verified: 1564267

Thanks for another successful testday 🙂

We hope to see you all in our next events, all the details will be posted on QMO!

The Mozilla BlogQ&A: Igniting imaginations and putting VR in the hands of students with Kai Frazier


When you were in school, you may have gone on a trip to the museum, but you probably never stood next to an erupting volcano, watching molten lava pouring down its sides. As Virtual Reality (VR) grows, learning by going into the educational experience could be the way children will learn — using VR headsets the way we use computers.

This kind of technology holds huge potential in shaping young minds, but as with most technology, not all public schools get the same access. For those who come from underserved communities, the high cost of technology could widen an already existing gap in learning, and in future incomes.

Kai Frazier, Founder of the education startup Curated x Kai, has seen this first-hand. As a history teacher who was once a homeless teen, she’s seen how access to technology can change lives. Her experience on both ends of the spectrum is one of the reasons she founded Curated x Kai.

As a teacher trying to make a difference, she began to experiment with VR and was inspired by the responses from her students. Now, she is hard at work building a company that stands for opening up opportunities for all children.

We recently sat down with Frazier to talk about the challenges of launching a startup with no engineering experience, and the life-changing impact of VR in the classroom.


How did the idea to use VR to help address inequality come up?
I was teaching close to Washington D.C., and my school couldn’t afford a field trip to the Martin Luther King Memorial or any of the free museums. Even though it was a short distance from their school, they couldn’t go. Being a teacher, I know how those things correlate to classroom performance.

 

As a teacher, it must have been difficult for you to see kids unable to tour museums and monuments, especially when most are free in D.C. Was the situation similar when it came to accessing technology?
When I was teaching, it was really rough because I didn’t have much. We had so few laptops that we shared a laptop cart between classrooms. You’d sign up for a laptop for your class, and get the laptop cart once every other week. It got so bad that we went to a ‘bring your own device’ policy. So, if they had a cellphone, they could just use a cellphone because that’s a tiny computer.

And computers are considered essential in today’s schools. We know that most kids like to play with computers, but what about VR? How do educational VR experiences impact kids, especially those from challenging backgrounds or kids that aren’t the strongest students?
The kids that are hardest to reach had such strong responses to Virtual Reality. It awakened something in them that the teachers didn’t know was there. I have teachers in Georgia who work with emotionally-troubled children, and they say they’ve never seen them behave so well. A lot of my students were doing badly in science, and now they’re excited to watch science VR experiences.

Let’s back up a bit and talk about how it all started. How did you come up with the idea for Curated x Kai?
On Christmas day, I went to the MLK Memorial with my camera. I took a few 360-videos, and my students loved it. From there, I would take a camera with me and film in other places. I would think, ‘What could happen if I could show these kids everything from history museums, to the world, to colleges to jobs?’

And is that when you decided to launch?
While chaperoning kids on a college tour to Silicon Valley, I was exposed to Bay Area insights that showed me I wasn’t thinking big enough. That was October 2017, so by November 2017, I decided to launch my company.

When I launched, I didn’t think I was going to have a VR company. I have no technical background. My degrees are in history and secondary education. I sold my house. I sold my car. I sold everything I owned. I bootstrapped everything, and moved across the country here (to the San Francisco Bay Area).

When you got to San Francisco, you tried to raise venture capital but you had a hard time. Can you tell us more about that?
No matter what stage you’re in, VCs (Venture Capitalists) want to know you’re not going to be a large risk. There aren’t many examples of people with my background creating an ed tech VR company, so it feels extremely risky to most. The stat is that there’s a 0.2 percent chance for a black woman to get VC funding. I’m sure that if I looked differently, it would have helped us to scale a lot faster.

 

You’re working out of the Kapor Center in Oakland right now. What has that experience been like for you?
The Kapor Center is different because they are taking a chance on people who are doing good for the community. It’s been an amazing experience being incubated at their facility.

You’re also collaborating with the Emerging Technologies team here at Mozilla. Can you tell us more about that?
I’m used to a lot of false promises or lures of diversity initiatives. I was doing one program that catered to diverse content creators. When I got the contract, it said they were going to take my IP, my copyright, my patent…they would own it all. Mozilla was the first time I had a tech company treat me like a human being. They didn’t really care too much about my background or what credentials I didn’t have.

And of course, many early engineers didn’t have traditional credentials. They were just people interested in building something new. As Curated x Kai developed, you chose Hubs, which is a Mozilla project, as a platform to host the virtual tours and museum galleries. Why Hubs, when there are other, more high-resolution options on the market?
Tools like Hubs allow me to get students interested in VR, so they want to keep going. It doesn’t matter how high-res the tool is. Hubs enables me to create a pipeline so people can learn about VR in a very low-cost, low-stakes arena. It seems like many of the more expensive VR experiences haven’t been tested in schools. They often don’t work, because the WiFi can’t handle it.

That’s an interesting point because for students to get the full benefits of VR, it has to work in schools first. Besides getting kids excited to learn, if you had one hope for educational VR, what would it be?
I hope all VR experiences, educational or otherwise, will incorporate diverse voices and perspectives. There is a serious lack of inclusive design, and it shows, from the content to the hardware. The lack of inclusive design creates an unwelcoming experience, and many are robbed of the excitement of VR. Without that excitement, audiences lack the motivation to come back to the experience, which hinders growth and adaptation in new communities — including schools.

The post Q&A: Igniting imaginations and putting VR in the hands of students with Kai Frazier appeared first on The Mozilla Blog.

hacks.mozilla.orgMDN’s First Annual Web Developer & Designer Survey

Today we are launching the first edition of the MDN Developer & Designer Needs Survey. Web developers and designers, we need to hear from you! This is your opportunity to tell us about your needs and frustrations with the web. In fact, your participation will influence how browser vendors like Mozilla, Google, Microsoft, and Samsung prioritize feature development.

Goals of the MDN Developer Needs Survey

With the insights we gather from your input, we aim to:

  • Understand and prioritize the needs of web developers and designers around the world. If you create sites or services using HTML, CSS and/or JavaScript, we want to hear from you.
  • Understand how browser vendors should prioritize feature development to address your needs and challenges.
  • Set a baseline to evaluate how your needs change year over year.

The survey results will be published on MDN Web Docs (MDN) as an annual report. In addition, we’ll include a prioritized list of needs, with an analysis of priorities by geographic region.

It’s easy to take part. The MDN survey has been designed in collaboration with the major browser vendors mentioned above, and the W3C. We expect it will take approximately 20 minutes to complete. This is your chance to make yourself heard. Results and learnings will be shared publicly on MDN later this year.

Please participate

You’re invited to take the survey here. Your participation can help move the web forward, and make it a better experience for everyone. Thank you for your time and attention!

Take the MDN survey!

The post MDN’s First Annual Web Developer & Designer Survey appeared first on Mozilla Hacks - the Web developer blog.

SeaMonkeyChecksums were fixed… but… where are the binaries?

Hi All,

There seems to be a mistake.  I have generated… well, not *I* per se, but the automation (*lol…  yeah.. *right*) or semi-automation seems to have sent some of the tarballs/binaries elsewhere (or ignored them…).  This will be fixed.

Seriously..  when will my incompetence end?

My apologies.

:ewong

 

Mozilla L10NQ3 2019 objectives for Mozilla localization

Every three months, the l10n-drivers dedicate a couple of weeks to review progress made on objectives from the past three months (or quarter) and plan new objectives for the next quarter. Historically, we haven’t published much about these objectives, but that’s something I intend to change with this blog post. Transparency is a priority for this team, and as such, I plan to discuss more openly the non-confidential, ongoing work that the team undertakes each quarter.

Internally, we follow a goal-setting methodology known as OKR (Objective, Key Result). OKRs are intended to be set at each level of an organization. Team OKRs should naturally nest themselves into higher level OKRs. According to this methodology, a team identifies a vision statement (aka Objective) and then 2-5 metrics to determine whether that vision has been met (aka Key Results). Teams try to limit their Objectives to only 3-5 for a specified period of time (in our case, three months). Once these OKRs are set, the team undertakes various initiatives/projects to accomplish the metrics set in each Key Result.

Today, we’ve wrapped up OKR planning for Q3 2019 (July – September) and I’d like to share with you what our core focus will be for the next three months (in no order of priority):

By the end of Q3 we can…

Objective: Optimize the localization vendor workflow for internal teams.

As measured by…

  • Finalize onboarding and implementation for 2 existing translation vendors.
  • Develop and implement 4 content system connectors for automated retrieval and delivery of source and target language content.
  • Perform and document workflow discovery for 8 internal teams.

Objective: Measure locale manager engagement to discover sustainability threats for intervention.

As measured by…

  • 30% of total Pontoon managers self-report performance on responsibilities & expectations.
  • Define 5 manager-related sustainability threats to create automated reports in Pontoon.
  • Compare self-reported perspectives with data perspectives for 5 locales.

Objective: Be ready to rapidly develop and test new features in the Pontoon translation workbench.

As measured by…

  • Run 2 staged rollouts of Translate.Next, addressing 100% of critical regressions in each rollout.
  • Bring the list of P2/P3 bugs to 0.

Objective: Scale for the growing Fluent community.

As measured by…

  • Three Fluent implementations have a stable and consistent FluentBundle API.
  • Text elements and 30% of placeables have test coverage in the form of implementation-agnostic testing fixtures.
  • Define two tiers of Fluent support for adoption by two non-Mozilla l10n tools.

Objective: Communicate data around Skyline (Firefox milestone release) l10n status to stakeholders.

As measured by…

  • 50% of Skyline project one-page descriptions have word count estimates and content timelines.
  • 75% of Skyline marketing project vendor reporting is captured in vendor TMS.
  • Unified dashboard reports on status of Skyline product UI content localization for 2 products.

At the end of Q3, I plan to report on our progress on each of these, noting which were achieved, which were dropped, and which may carry over into Q4. If you have any questions about these, please reach out to the l10n-drivers.

SeaMonkeyChecksums ‘fixed’.

Dear All,

The checksums have been updated.

Sorry for the fubariness.   I had tested the script on one file and it worked, so I ran it for the rest of the binaries.  The flaw in that idea is that just because the ‘first’ binary’s checksum was right, it didn’t mean the others were.

The script’s been fixed.

:ewong

hacks.mozilla.orgAdd-Ons Outage Post-Mortem Result

Editor’s Note: July 12, 1:52pm pt – Updated Balrog update frequency and added some more background.

As I mentioned in my previous post, we’ve been conducting a post-mortem on the add-ons outage. Sorry this took so long to get out; we’d hoped to have this out within a week, but obviously that didn’t happen. There was just a lot more digging to do than we expected. In any case, we’re now ready to share the results. This post provides a high level overview of our findings, with more detail available in Sheila Mooney’s incident report and Matt Miller & Peter Saint-Andre’s technical report.

Root Cause Analysis

The first question that everyone asks is “how did you let this happen?” At a high level, the story seems simple: we let the certificate expire. This seems like a simple failure of planning, but upon further investigation it turns out to be more complicated: the team responsible for the system which generated the signatures knew that the certificate was expiring but thought (incorrectly) that Firefox ignored the expiration dates. Part of the reason for this misunderstanding was that in a previous incident we had disabled end-entity certificate checking, and this led to confusion about the status of intermediate certificate checking. Moreover, the Firefox QA plan didn’t incorporate testing for certificate expiration (or generalized testing of how the browser will behave at future dates) and therefore the problem wasn’t detected. This seems to have been a fundamental oversight in our test plan.

The lesson here is that: (1) we need better communication and documentation of these parts of the system and (2) this information needs to get fed back into our engineering and QA work to make sure we’re not missing things. The technical report provides more details.

Code Delivery

As I mentioned previously, once we had a fix, we decided to deliver it via the Studies system (this is one part of a system we internally call “Normandy”). The Studies system isn’t an obvious choice for this kind of deployment because it was intended for deploying experiments, not code fixes. Moreover, because Studies permission is coupled to Telemetry, this meant that some users needed to enable Telemetry in order to get the fix, leading to Mozilla temporarily over-collecting data that we didn’t actually want, which we then had to clean up.

This leads to the natural question: “isn’t there some other way you could have deployed the fix?” to which the answer is “sort of.” Our other main mechanisms for deploying new code to users are dot releases and a system called “Balrog”. Unfortunately, both of these are slower than Normandy: Balrog checks for updates every 12 hours (though there turns out to have been some confusion about whether this number was 12 or 24), whereas Normandy checks every 6. Because we had a lot of users who were affected, getting them fixed was a very high priority, which made Studies the best technical choice.

The lesson here is that we need a mechanism that allows fast updates that isn’t coupled to Telemetry and Studies. The property we want is the ability to quickly deploy updates to any user who has automatic updates enabled. This is something our engineers are already working on.

Incomplete Fixes

Over the weeks following the incident, we released a large number of fixes, including eight versions of the system add-on and six dot releases. In some cases this was necessary because older deployment targets needed a separate fix. In other cases it was a result of defects in an earlier fix, which we then had to patch up in subsequent work. Of course, defects in software cannot be completely eliminated, but the technical report found that at least in some cases a high level of urgency combined with a lack of available QA resources (or at least coordination issues around QA) led to testing that was less thorough than we would have liked.

The lesson here is that during incidents of this kind we need to make sure that we not only recruit management, engineering, and operations personnel (which we did) but also ensure that we have QA available to test the inevitable fixes.

Where to Learn More

If you want to learn more about our findings, I would invite you to read the more detailed reports we produced. And as always, if those don’t answer your questions, feel free to email me at ekr-blog@mozilla.com.

The post Add-Ons Outage Post-Mortem Result appeared first on Mozilla Hacks - the Web developer blog.

SeaMonkeyLTNS…

Hi Everyone,

It’s been a hell of a long time since an update.  So… here’s the long awaited update.

  1. I’ve been swamped at work and stumbling over everything related to SeaMonkey.  My apologies.
  2. Thankfully, both Ian and Frg have taken it upon themselves to get the builds going.  2.49.5 will be released in their honor.

That said, and it bears repeating, IanN and frg have both been hard at work generating the 2.49.5 build1 builds manually, and I owe them a debt of gratitude for that work.  While I don’t expect them to need to manually build any versions other than 2.49.5, I can’t guarantee anything.

Now on to 2.49.5 release comments.

  1. The whole build/release process is sooooo different from what everyone’s used to that I’m not sure how to proceed.
  2. That said, build1 is being uploaded to the candidates/ section on https://archive.mozilla.org/pub/seamonkey/candidates/2.49.5-candidates…   In the past, a build<n> (where n > 1) usually meant n-1 builds were failures.   That’s no longer the case, as each build<n> should be more or less treated as a ‘release candidate’ (aka rc<#> in the olden days).
  3. Due to how things are done, the release files are uploaded ‘manually’ (as in, not as part of automation), so there’ll probably be some missing items. (Case in point, README…  those will be generated in the release/).

As for the infrastructure, it is at a bare minimum of ‘ok’-ness.   That isn’t the problem or issue.  It’s the automation that’s still being worked on.   I know. My complete bad.  But it is being worked on with due haste.

Now that the current project’s update information has been updated, I’m returning to your scheduled program of preparing the tar and feathers….. um… oh wait. Look…   I can explain…

:ewong

QMOFirefox Nightly 70 Testday, July 19th

Hello Mozillians,

We are happy to let you know that Friday, July 19th, we are organizing Firefox Nightly 70 Testday. We’ll be focusing our testing on: Fission. 

Check out the detailed instructions via this etherpad.

No previous testing experience is required, so feel free to join us on #qa IRC channel where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better!

See you on Friday!

Web Application SecurityGrizzly Browser Fuzzing Framework

At Mozilla, we rely heavily on automation to increase our ability to fuzz Firefox and the components from which it is built. Our fuzzing team is constantly developing tools to help integrate new and existing capabilities into our workflow with a heavy emphasis on scaling. Today we would like to share Grizzly – a browser fuzzing framework that has enabled us to quickly and effectively deploy fuzzers at scale.

Grizzly was designed to allow fuzzer developers to focus solely on writing fuzzers and not worry about the overhead of creating tools and scripts to run them. It was created as a platform for our team to run internal and external fuzzers in a common way using shared tools. It is cross-platform and supports running multiple instances in parallel.

Grizzly is responsible for:

  • managing the browser (via Target)
    • launching
    • terminating
    • monitoring logs
    • monitoring resource usage of the browser
    • handling crashes, OOMs, hangs, etc.
  • managing the fuzzer/test case generator tool (via Adapter)
    • setup and teardown of tool
    • providing input for the tool (if necessary)
    • creating test cases
  • serving test cases
  • reporting results
    • basic crash deduplication is performed by default
    • FuzzManager support is available (with advanced crash deduplication)

Grizzly can be extended by implementing the “Target” or “Adapter” interface. Targets are used to add support for specific browsers; this is where the quirks and complexities of each browser are handled. See puppet_target.py for an example that uses FFPuppet to add support for Firefox. Adapters are used to add support for fuzzers. A basic functional example can be found here. See here for a slightly more advanced example that can be modified to support existing fuzzers.

Grizzly is primarily intended to support blackbox fuzzers. For a feedback-driven fuzzing interface, please see the libfuzzer fuzzing interface. Grizzly also has a test case reduction mode that can be used on crashes it finds.

For more information, please check out the README.md in the repository and the wiki. Feel free to ask questions on IRC in #fuzzing.

The post Grizzly Browser Fuzzing Framework appeared first on Mozilla Security Blog.

hacks.mozilla.orgTesting Picture-in-Picture for videos in Firefox 69 Beta and Developer Edition

Editor’s Note: We updated this post on July 11, 2019 to mention that the Picture-in-Picture feature is currently only enabled in Firefox 69 Beta and Developer Edition on Windows. We apologize for getting your hopes up if you’re on macOS or Linux, and we hope to have this feature enabled on those platforms once it reaches our quality standards.

Have you ever needed to scan a recipe while also watching a cooking video? Or perhaps you wanted to watch a recording of a lecture while also looking at the course slides. Or maybe you wanted to watch somebody stream themselves playing video games while you work.

We’ve recently shipped a version of Firefox for Windows on our Beta and Developer Edition release channels with an experimental feature that aims to make this easier for you to do!

Picture-in-Picture allows you to pop a video out from where it’s being played into a special kind of window that’s always on top. Then you can move that window around or resize it however you need!

There are two ways to pop out a video into a Picture-in-Picture window:

Via the context menu

If you open the context menu on a <video> element, you’ll sometimes see the media context menu that looks like this:

Showing the default context menu when opened on a video element, with the Picture-in-Picture menu item highlighted.

There’s a Picture-in-Picture menu item in that context menu that you can use to toggle the feature.

Many sites, however, make it difficult to access the context menu for <video> elements. YouTube, for example, overrides the default context menu with their own.

You can get to the default native context menu by either holding Shift while right-clicking, or double right-clicking. We feel, however, that this is not the most obvious gesture for accessing the feature, so that leads us to the other toggling mechanism – the Picture-in-Picture video toggle.

Via the new Picture-in-Picture video toggle

The Picture-in-Picture toggle appears when you hover over videos with the mouse cursor. It is a small blue rectangle that slides out when you hover over it. Clicking on the blue rectangle will open the underlying video in the Picture-in-Picture player window.

Showing the Picture-in-Picture toggle overlaying a video element on YouTube.

Note that the toggle doesn’t appear on every video you hover. We only show it for videos that include an audio track and are of sufficient size and play length.

The advantage of the toggle is that we think we can make this work for most sites out of the box, without making the site authors do anything special!

Using the Picture-in-Picture player window

The Picture-in-Picture window also gives you the ability to quickly play or pause the video. Hovering the video with your mouse will expose that control, as well as controls for closing the window, or for closing it while returning you to the tab that the video came from.

Asking for your feedback

We’re still working on hammering out keyboard accessibility, as well as some issues on how the video is displayed at extreme window sizes. We wanted to give Firefox Beta and Developer Edition users on Windows the chance to try the feature out and let us know how it feels. We’ll use the information that we gather to determine whether or not we’ve got the UI right for most users, or need to go back to the drawing board. We’re also hoping to bring this same Picture-in-Picture support to macOS and Linux in the near future.

We’re particularly interested in feedback on the video toggle — there’s a fine balance between discoverability and obtrusiveness, and we want to get a clearer sense of where the blue toggle falls for users on sites out in the wild.

So grab yourself an up-to-date copy of Firefox 69 Beta or Developer Edition for Windows, and give Picture-in-Picture a shot! If you’ve got constructive feedback to share, here’s a form you can use to submit it.

Happy testing!

The post Testing Picture-in-Picture for videos in Firefox 69 Beta and Developer Edition appeared first on Mozilla Hacks - the Web developer blog.

Mozilla Add-ons BlogChanges in Firefox 68

Firefox 68 is coming out today, and we wanted to highlight a few of the changes coming to add-ons. We’ve updated addons.mozilla.org (AMO) and the Add-ons Manager (about:addons) in Firefox to help people find high-quality, secure extensions more easily. We’re also making it easier to manage installed add-ons and report potentially harmful extensions and themes directly from the Add-ons Manager.

Recommended Extensions

In April, we previewed the Recommended Extensions program as one of the ways we plan to make add-ons safer. This program will make it easier for users to discover extensions that have been reviewed for security, functionality, and user experience.

In Firefox 68, you may begin to notice the first small batch of these recommendations in the Add-ons Manager. Recommendations will include star ratings and the number of users that currently have the extension installed. All extensions recommended in the Add-ons Manager are vetted through the Recommended Extensions program.

As the first iteration of a new design, you can expect some clean-up in upcoming releases as we refine it and incorporate feedback.

recommended extensions card in about:addons

On AMO, starting July 15, Recommended extensions will receive special badging to indicate their inclusion in the program. Additionally, the AMO homepage will be updated to only display Recommended content, and AMO search results will place more emphasis on Recommended extensions.

Note: We previously stated that modifications to AMO would occur on July 11. This has been changed to July 15.

AMO recommended extension badge

As the Recommended Extensions program continues to evolve, more extensions will be added to the curated list.

Add-ons management and abuse reporting

In alignment with design changes in Firefox, we’ve refreshed the Add-ons Manager to deliver a cleaner user experience. As a result, an ellipsis (3-dot) icon has been introduced to keep options organized and easy to find. You can find all the available controls, including the option to report an extension or theme to Mozilla, in one place.

new about:addons look

The new reporting feature allows users to provide us with a better understanding of the issue they’re experiencing. This new process can be used to report any installed extension, whether it was installed from AMO or somewhere else.

report option in about:addons; select issue type when reporting extension

Users can also report an extension or theme when they uninstall an add-on. More information about the new abuse reporting process is available here.

Permissions

It’s easy to forget about the permissions that were previously granted to an extension. While most extensions are created by trustworthy third-party developers, we recommend periodically checking what you have installed, what permissions you’ve granted, and making sure you only keep the ones you really want.

Starting in Firefox 68, you can view the permissions of installed extensions directly in the Add-ons Manager, making it easier to perform these periodic checks. Here’s a summary of all extension permissions, so you can review them for yourself when deciding which extensions to keep installed.

permissions panel in about:addons

In upcoming releases, we will be adjusting and refining changes to the Add-ons Manager to continue aligning the design with the rest of Firefox and incorporating feedback we receive. We’re also developing a Recommended Extensions Community Board for contributors to assist with extension recommendations—we’ll have more information soon.

The post Changes in Firefox 68 appeared first on Mozilla Add-ons Blog.

hacks.mozilla.orgFirefox 68: BigInts, Contrast Checks, and the QuantumBar

Firefox 68 is available today, featuring support for big integers, whole-page contrast checks, and a completely new implementation of a core Firefox feature: the URL bar.

These are just the highlights. For complete information, see:

BigInts for JavaScript

Firefox 68 now supports JavaScript’s new BigInt numeric type.

Screenshot of the DevTools Console showing 2**64 with normal numbers and with BigInts. The screenshot shows how floating point numbers lose precision as they grow larger.

Since its introduction, JavaScript has only had a single numeric type: Number. By definition, Numbers in JavaScript are floating point numbers, which means they can represent both integers (like 22 or 451) and decimal fractions (like 6.28 or 0.30000000000000004). However, this flexibility comes at a cost: 64-bit floats cannot reliably represent integers larger than 2 ** 53.

» 2 ** 53
9007199254740992

» (2 ** 53) + 1
9007199254740992  // <- Shouldn't that end in 3?

» (2 ** 53) + 2
9007199254740994

This limitation makes it difficult to work with very large numbers. For example, it’s why Twitter’s JSON API returns Tweet IDs as strings instead of literal numbers.

BigInt makes it possible to represent arbitrarily large integers.

» 2n ** 53n  // <-- the "n" means BigInt
9007199254740992n

» (2n ** 53n) + 1n
9007199254740993n  // <- It ends in 3!

» (2n ** 53n) + 2n
9007199254740994n

JavaScript does not automatically convert between BigInts and Numbers, so you can’t mix and match them in the same expression, nor can you serialize them to JSON.

» 1n + 2
TypeError: can't convert BigInt to number

» JSON.stringify(2n)
TypeError: BigInt value can't be serialized in JSON
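
If you need to mix the two in a single expression, you have to convert explicitly. A quick sketch:

» 1n + BigInt(2)
3n

» Number(1n) + 2
3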

You can, however, losslessly convert BigInt values to and from strings:

» BigInt("994633657141813248")
994633657141813248n

» String(994633657141813248n)
"994633657141813248"  // <-- The "n" goes away

The same is not true for Numbers — they can lose precision when being parsed from a string:

» Number("994633657141813248")
994633657141813200  // <-- Off by 48!
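
If you do need to round-trip BigInt values through JSON (say, those Tweet IDs), one common workaround is to serialize them as strings with a replacer function. A minimal sketch:

» JSON.stringify({ id: 994633657141813248n }, (key, value) =>
    typeof value === "bigint" ? value.toString() : value)
'{"id":"994633657141813248"}'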

MDN has much more information on BigInt.

Accessibility Checks in DevTools

Each release of Firefox brings improved DevTools, but Firefox 68 marks the debut of a brand new capability: checking for basic accessibility issues.

Screenshot of the Accessibility panel in the Firefox DevTools, showing the results of a text contrast check. The page being checked is a Wired article from 2016, "How the Web Became Unreadable." Ironically, its header fails the contrast check.

With Firefox 68, the Accessibility panel can now report any color contrast issues with text on a page. More checks are planned for the future.

We’ve also:

  • Included a button in the Inspector that enables “print media emulation,” making it easy to see what elements of a page would be visible when printed. (Try it on Wikipedia!)
  • Improved CSS warnings in the console to show more information and include a link to related nodes. (Screenshot: a CSS warning in the Console of the form “unknown property ‘foo’, declaration dropped,” with a list of nodes matching the selector of the erroneous rule.)
  • Added support for adjusting letter spacing in the Font Editor.
  • Implemented RegEx-based filtering in the DevTools Console: just enclose your query in slashes, like /(foo|bar)/.
  • Made it possible to block specific requests by right-clicking on them in the Network panel.

Firefox 68 also includes refinements to the smarter debugging features we wrote about a few weeks ago.

Web Compatibility

Keeping the Web open is hard work. Sometimes browsers disagree on how to interpret web standards. Other times, browsers implement and ship their own ideas without going through the standards process. Even worse, some developers intentionally block certain browsers from their sites, regardless of whether or not those browsers would have worked.

Screenshot of a blank webpage telling the visitor that the site is "currently not supporting your browser."

At Mozilla, we call these “Web Compatibility” problems, or “webcompat” for short.

Each release of Firefox contains fixes for webcompat issues. For example, Firefox 68 implements:

In the latter case, even with a standard line-clamp property in the works, we have to support the -webkit- version to ensure that existing sites work in Firefox.

Unfortunately, not all webcompat issues are as simple as implementing non-standard APIs from other browsers. Some problems can only be fixed by modifying how Firefox works on a specific site, or even telling Firefox to pretend to be something else in order to evade browser sniffing.

Screenshot of a webpage that blocks Firefox users, but which works perfectly with a webcompat intervention.

We deliver these targeted fixes as part of the webcompat system add-on that’s bundled with Firefox. This makes it easier to update our webcompat interventions as sites change, without needing to bake those fixes directly into Firefox itself. And as of Firefox 68, you can view (and disable) these interventions by visiting about:compat and toggling the relevant switches.

Our first preference is always to help developers ensure their sites work on all modern browsers, but we can only address the problems that we’re aware of. If you run into a web compatibility issue, please report it at webcompat.com.

CSS: Scroll Snapping and Marker Styling

Firefox 68 supports the latest syntax for CSS scroll snapping, which provides a standardized way to control the behavior of scrolling inside containers. You can find out more in Rachel Andrew’s article, CSS Scroll Snap Updated in Firefox 68.

As shown in the video above, scroll snapping allows you to start scrolling a container so that, when a certain threshold is reached, letting go will neatly finish scrolling to the next available snap point. It is easier to understand this if you try it yourself, so download Firefox 68 and try it out on some of the examples in the MDN Scroll Snapping docs.

And if you are wondering where this leaves the now old-and-deprecated Scroll Snap Points spec, read Browser compatibility and Scroll Snap.

Today’s release of Firefox also adds support for the ::marker pseudo-element. This makes it possible to style the bullets or counters that appear to the side of list items and summary elements.

Last but not least, CSS transforms now work on SVG elements like mask, marker, pattern and clipPath, which are indirectly rendered.

We have an entire article in the works diving into these and other CSS changes in Firefox 68; look for it later this month.

Browser: WebRender and QuantumBar Updates

Two months ago, Firefox 67 became the first Firefox release with WebRender enabled by default, though limited to users with NVIDIA GPUs on Windows 10. Firefox 68 expands that audience to include people with AMD GPUs on Windows 10, with more platforms on the way.

We’ve also been hard at work in other areas of Firefox’s foundation. The URL bar (affectionately known as the “AwesomeBar”) has been completely reimplemented using web technologies: HTML, CSS, and JavaScript. This new “QuantumBar” should be indistinguishable from the previous AwesomeBar, but its architecture makes it easier to maintain and extend in the future. We move one step closer to the eventual elimination of our legacy XUL/XBL toolkit with this overhaul.

DOM APIs

Firefox 68 brings several changes to existing DOM APIs, notably:

  • Access to cameras, microphones, and other media devices is no longer allowed in insecure contexts like plain HTTP.
  • You can now pass the noreferrer option to window.open() to avoid leaking referrer information upon opening a link in a new window.
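
As a quick sketch of that second item (the URL here is a placeholder):

// Opens the link in a new window without sending a Referer header.
// The noreferrer feature also implies noopener, so the new window
// cannot reach back to this one via window.opener.
window.open("https://example.com/article", "_blank", "noreferrer");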

We’ve also added a few new APIs, including support for the Visual Viewport API on Android, which returns the viewport taking into consideration things like on-screen keyboards or pinch-zooming. These may result in a smaller visible area than the overall layout viewport.
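
As a rough sketch, a page could observe these values like so (guarded, since the API may not be available everywhere):

// Log the visible viewport whenever pinch-zooming or an on-screen
// keyboard changes how much of the page is actually visible.
if (window.visualViewport) {
  window.visualViewport.addEventListener("resize", () => {
    const { width, height, scale } = window.visualViewport;
    console.log(`visible area: ${width} x ${height} at scale ${scale}`);
  });
}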

It is also now possible to use the .decode() method on HTMLImageElement to download and decode an image before adding it to the DOM. For example, this API simplifies replacing low-resolution placeholders with higher resolution images: it provides a way to know that a new image can be immediately displayed upon insertion into the page.
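
Here is a minimal sketch of that placeholder swap, where placeholder and the image URL are hypothetical stand-ins:

const img = new Image();
img.src = "photo-highres.jpg"; // hypothetical high-resolution asset

// decode() resolves once the image is ready to paint, so the swap
// below will not cause a flash of missing content.
img.decode()
  .then(() => placeholder.replaceWith(img))
  .catch((err) => console.error("Image failed to decode", err));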

More Inside

These highlights just scratch the surface. In addition to these changes in Firefox, the last month has seen us release Lockwise, a password manager that lets you take your saved credentials with you on mobile. We’ve also released a brand new Firefox Preview on Android, and more.

From all of us at your favorite nominee for Internet Villain of the Year, thank you for choosing Firefox.

The post Firefox 68: BigInts, Contrast Checks, and the QuantumBar appeared first on Mozilla Hacks - the Web developer blog.

The Mozilla BlogLatest Firefox Release Available today for iOS and Desktop

Since our last Firefox release, we’ve been working on features to make the Firefox Quantum browser work better for you. We added Enhanced Tracking Protection by default, which blocks known “third-party tracking cookies” from following your every move. With this latest Firefox release we’ve added new features so you can browse the web the way you want — unfettered and free. We’ve also made improvements for IT managers who want more flexibility when using Firefox in the workplace.

Key highlights for today’s update include:

  • Blackout shades come to Firefox Reader View: One of the most popular ways that people use our Reader View is by changing the contrast from light to dark. Initially, this only covered the text area. Now, when a user moves the contrast to dark, all the sections of the site — including the sidebars and toolbars — will be completely in dark mode.

All sections of the site will be completely in dark mode

  • Firefox Recommended Extensions and more:  We curated a list of recommended extensions that have been thoroughly reviewed for security, usability and usefulness. You can find the list on the “Get Add-ons” page in the Firefox Add-ons Manager (about:addons). Plus we are making it even easier to report any bad extensions you come across. You can do this directly through your Firefox Add-ons Manager to ensure others’ safety. This new feature is part of the larger effort to make the add-ons ecosystem safer.
  • Popular User Requested Features added to Firefox for iOS: Users are at the center of everything we do, and most of the features we’ve added to Firefox for iOS in the past have been requests straight from our community. Today, we’re adding two new features which have been asked for the most. They include:
          • Bookmark editing – We added a new ‘Recently Bookmarked’ section to bookmarks and also added support for editing all bookmarks. Now users can reorder, rename, or update the URL for bookmarks.
          • Set sites to always open with Desktop Version – We know that some sites aren’t optimized for mobile, and for users the desktop version of a site is the better and preferred experience. Today, users can set sites to always open in desktop mode, and we’re also introducing a badge to help you identify when a site is being displayed in desktop mode.

Reorder, rename, or update the URL for bookmarks

More customization for IT Pros

Since the launch of Firefox Quantum for Enterprise last year, we’ve received feedback from IT (information technology) professionals who wanted to make Firefox Quantum more flexible and easier to use so they can meet their workplace needs. Today we’re adding a number of new enterprise policies for IT leads who want to customize Firefox for their employees.

The new policies for today’s Firefox Quantum for Enterprise will help IT managers configure their company’s infrastructure in the best way to meet their own personalized needs. This includes adding a support menu so enterprise users can easily contact their internal support teams and configuring or removing the new tab page so companies can bring their intranet or other sites front and center for employees. For shared machines and protecting employees’ privacy, IT managers can turn off search suggestions. You can look here to see a complete list of policies supported by Firefox.
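
As a rough illustration, the examples above might look something like this in a policies.json file (policy names and fields are best checked against the published policy list, and the intranet URL is a placeholder):

{
  "policies": {
    "SupportMenu": {
      "Title": "IT Help Desk",
      "URL": "https://intranet.example.com/help"
    },
    "NewTabPage": false,
    "SearchSuggestEnabled": false
  }
}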

We are always looking for ways to improve Firefox Quantum for Enterprise, our Extended Support Release. For organizations that need access to specific preferences, we’ve already started a list in our GitHub repository, and we will continue to add to it based on the requests we receive. Feel free to submit the specific preferences you need at the GitHub link.

To read the complete list of new items or see what we’ve changed in today’s release, you can check out our release notes.

We hope you try out and download the latest version of Firefox Quantum for desktop, iOS and Enterprise.

The post Latest Firefox Release Available today for iOS and Desktop appeared first on The Mozilla Blog.

about:communityFirefox 68 new contributors

With the release of Firefox 68, we are pleased to welcome the 55 developers who contributed their first code change to Firefox in this release, 49 of whom were brand new volunteers! Please join us in thanking each of these diligent and enthusiastic individuals, and take a look at their contributions:

The Mozilla BlogMozilla’s Latest Research Grants: Prioritizing Research for the Internet

We are very happy to announce the results of our Mozilla Research Grants for the first half of 2019. This was an extremely competitive process, and we selected proposals which address twelve strategic priorities for the internet and for Mozilla. These include researching better support for integrating Tor in the browser, improving scientific notebooks, using speech on mobile phones in India, and exploring alternatives to advertising for funding the internet. The Mozilla Research Grants program is part of our commitment to being a world-class example of using inclusive innovation to impact culture, and reflects Mozilla’s commitment to open innovation.

We will open a new round of grants in Fall of 2019. See our Research Grant webpage for more details and to sign up to be notified when applications open.

Lead researchers, institutions, and project titles:

  • Valentin Churavy (MIT): Bringing Julia to the Browser
  • Jessica Outlaw (Concordia University of Portland): Studying the Unique Social and Spatial affordances of Hubs by Mozilla for Remote Participation in Live Events
  • Neha Kumar (Georgia Tech): Missing Data: Health on the Internet for Internet Health
  • Piotr Sapiezynski, Alan Mislove, & Aleksandra Korolova (Northeastern University & University of Southern California): Understanding the impact of ad preference controls
  • Sumandro Chattapadhyay (The Centre for Internet and Society (CIS), India): Making Voices Heard: Privacy, Inclusivity, and Accessibility of Voice Interfaces in India
  • Weihang Wang (State University of New York): Designing Access Control Interfaces for Wasmtime
  • Bernease Herman (University of Washington): Toward generalizable methods for measuring bias in crowdsourced speech datasets and validation processes
  • David Karger (MIT): Tipsy: A Decentralized Open Standard for a Microdonation-Supported Web
  • Linhai Song (Pennsylvania State University): Benchmarking Generic Functions in Rust
  • Leigh Clark (University College Dublin): Creating a trustworthy model for always-listening voice interfaces
  • Steven Wu (University of Minnesota): DP-Fathom: Private, Accurate, and Communication-Efficient
  • Nikita Borisov (University of Illinois, Urbana-Champaign): Performance and Anonymity of HTTP/2 and HTTP/3 in Tor

Many thanks to everyone who submitted a proposal and helped us publicize these grants.

The post Mozilla’s Latest Research Grants: Prioritizing Research for the Internet appeared first on The Mozilla Blog.

Mozilla Gfx Teammoz://gfx newsletter #46

Hi there! As previously announced WebRender has made it to the stable channel and a couple of million users are now using it without having opted into it manually. With this important milestone behind us, now is a good time to widen the scope of the newsletter and give credit to other projects being worked on by members of the graphics team.

The WebRender newsletter therefore becomes the gfx newsletter. This is still far from an exhaustive list of the work done by the team, just a few highlights in WebRender and graphics in general. I am hoping to keep the pace at around a post per month; we’ll see where things go from there.

What’s new in gfx

Async zoom for desktop Firefox

Botond has been working on desktop zooming:

  • The work is currently focused on the ability to use pinch gestures to zoom (scaling only, no reflow, like on mobile) on desktop platforms.
  • The initial focus is on touchscreens, with support for touchpads to follow.
  • We hope to have this ready for some early adopters to try out in the coming weeks.

WebGL power usage

Jeff Gilbert has been working on power preference options for WebGL.
WebGL has three power preference options available during canvas context creation:

  • default
  • low-power
  • high-performance

The vast majority of web content implicitly requests “default”. Since we don’t want to regress web content performance, we usually treat “default” like “high-performance”. On macOS with multiple GPUs (MacBook Pro), this means activating the power-hungry dedicated GPU for almost all WebGL contexts. While this keeps our performance high, it also means every last one-off or transient WebGL context from ads, trackers, and fingerprinters will keep the high-power GPU running until they are garbage-collected.
In bug 1562812, Jeff added a ramp-up/ramp-down behavior for “default”: For the first couple seconds after WebGL context creation, things stay on the low-power GPU. After that grace period, if the context continues to render frames for presentation, we migrate to the high-power GPU. Then, if the context stops producing frames for a couple seconds, we release our lock of the high-power GPU, to try to migrate back to the low-power GPU.
What this means is that active WebGL content should fairly quickly end up ramped-up onto the high-power GPU, but inactive and orphaned WebGL contexts won’t keep the browser locked on to the high-power GPU anymore, which means better battery life for users on these machines as they wander the web.
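
For WebGL authors, the hint is set at context creation time. A quick sketch:

// Transient or incidental content can explicitly ask for the
// battery-friendly GPU instead of relying on "default".
const banner = document.createElement("canvas");
const gl = banner.getContext("webgl", { powerPreference: "low-power" });

// Demanding content (games, heavy visualizations) can request the
// dedicated GPU up front.
const game = document.createElement("canvas");
const gl2 = game.getContext("webgl", { powerPreference: "high-performance" });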

DisplayList building optimization

Miko, Matt, Timothy and Dan have worked on improving display list build times:

  • The two main areas of improvement have been avoiding unnecessary work during display list merging, and improving the memory access patterns during display list building.
  • The improved display list merging algorithm utilizes the invalidation assumptions of the frame tree, and avoids preprocessing sub display lists that cannot have changed. (bug 1544948)
  • Some commonly used display items have drastically shrunk in size, which has reduced memory usage and allocations. For example, the size of the transform display item went down from 1024 bytes to 512 bytes. (bug 1502049, bug 1526941)
  • The display item size improvements have also tangentially helped with caching and prefetching performance. For example, the base display item state booleans were collapsed into a bit field and moved to the first cache line. (bug 1526972, bug 1540785)
  • Retained display lists were enabled for Android devices and for the parent process. (bugs 1413567 and 1413546)
  • Telemetry probes show that since the Orlando All Hands, the mean display list build time has gone down by 40%, from ~1.8ms to ~1.1ms. The 95th percentile has gone down by 30%, from ~6.2ms to ~4.4ms.
  • While these numbers might seem low, they are still a considerable proportion of the target 16ms frame budget. There is more promising follow-up work scheduled in bugs 1539597 and 1554503.

What’s new in WebRender

WebRender is a GPU-based 2D rendering engine for the web, written in Rust, currently powering Firefox‘s rendering engine as well as the research web browser Servo.

Software backend investigations

Glenn, Jeff Muizelaar and Jeff Gilbert are investigating WebRender on top of swiftshader or llvmpipe when the available GPU featureset is too small or too buggy. The hope is that these emulation layers will help us quickly migrate users that can’t have hardware acceleration to WebRender with a minimal amount of specific code (most likely some simple specialized blitting routines in a few hot spots where the emulation layer is unlikely to provide optimal speed).
It’s too early to tell at this point whether this experiment will pan out. We can see some (expected) regressions compared to the non-webrender software backend, but with some amount of optimization it’s probable that performance will get close enough and provide an acceptable user experience.
This investigation is important since it will determine how much code we have to maintain specifically for non-accelerated users, a proportion of users that will decrease over time but will probably never quite get to zero.

Pathfinder 3 investigations

Nical spent some time in the last quarter familiarizing himself with pathfinder’s internals and investigating its viability for rendering SVG paths in WebRender/Firefox. He wrote a blog post explaining in detail how pathfinder fills paths on the GPU.
The outcome of this investigation is that we are optimistic about pathfinder’s approach and think it’s a good fit for integration in Firefox. In particular the approach is flexible and robust, and will let us improve or replace parts in the longer run.
Nical is also experimenting with a different tiling algorithm, aiming at reducing the CPU overhead.

The next steps are to prototype an integration of pathfinder 3 in WebRender, start using it for simple primitives such as CSS clip paths and gradually use it with more primitives.

Picture caching improvements

Glenn continues his work on picture caching. At the moment WebRender only triggers the optimization for a single scroll root. Glenn has landed a lot of infrastructure work to be able to benefit from picture caching at a more granular level (for example one per scroll-root), and pave the way for rendering picture cached slices directly into OS compositor surfaces.

The next steps are, roughly:

  • Add a couple follow up optimizations such as exact dirty rectangles for smaller updates / multi-resolution tiles, detect tiles that are solid colors.
  • Enable caching on content + UI explicitly (as an intermediate step, to give caching on the UI).
  • Implement the right support for multiple cached slices that have transparency and subpixel text anti-aliasing.
  • Enable fine grained caching on multiple slices.
  • Expose those multiple cached slices to the OS compositor for power savings and better scrolling performance where supported.

WebRender on Android

Thanks to the continuous efforts of Jamie and Kats, WebRender is starting to be pretty solid on Android. GeckoView is required to enable WebRender, so it isn’t possible to enable it now in Firefox for Android, but we plan to make the option available in the Firefox Preview browser (which is powered by GeckoView) once it has nightly builds.

Mozilla L10NL10n report: July edition

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet. 

Welcome!

New localizers

Are you a locale leader and want us to include new members in our upcoming reports? Contact us!

New community/locales added

  • Manx was added to Pontoon and they’re starting to work on localizing Firefox.

New content and projects

What’s new or coming up in Firefox desktop

As usual, let’s start with some important dates:

  • Firefox 68 will be released on July 9. At that point, Firefox 69 will be in beta, while Nightly will have Firefox 70.
  • The deadline to ship localization updates in Firefox 69 is August 20.

Firefox 69 has been a lightweight release in terms of new strings, with most of the changes coming from Developer Tools. Expect a lot more content in Firefox 70, with particular focus on the Password Manager – integrating most of the Lockwise add-on features directly into Firefox – and work around Tracking Protection and Security, with brand new about pages.

What’s new or coming up in mobile

Since our last report, we’ve shipped the first release of Firefox Preview (Fenix) in 11 languages (including en-US). The next upcoming step will be to open up the project to more locales. If you are interested, make sure to follow the dev.l10n mailing list closely this week. And congratulations to the teams that helped make this localized first release a success!

On another note, the Firefox for iOS deadline for l10n work on v18 just passed this week, and we’re adding two new locales: Finnish and Gujarati. Congratulations to the teams!

What’s new or coming up in web projects

Keep brand and product names in English

The current policy, per the branding and legal teams, is that trademarked names should be left unchanged. Unless indicated in the comments, brand names should be left in English; they cannot be transliterated or declined. If you have any doubt, ask through the available communication channels.

Product names such as Lockwise, Monitor, Send, and Screenshot, to name a few, used alone or with Firefox, should be left unchanged. Common Voice is also a product name and should remain unchanged. Pocket is a trademark and must remain as is too. Locale managers, please inform the rest of your communities about this policy, especially new contributors. Search in Pontoon or Transvision for these names and revert them to English.

Mozilla.org

The recently added pages contain marketing language to promote all the Firefox products. Localize them and start spreading the word on social media and other platforms. The deadline is indicative.

  • New: firefox/best-browser.lang, firefox/campaign-trailhead.lang, firefox/all-unified.lang;
  • Update: firefox/accounts-2019.lang;
  • Obsolete: firefox/accounts.lang will be removed soon. Stop working on this page if it is incomplete.

What’s new or coming up in SuMo

Newly published localizer-facing documentation

The Firefox marketing guides are living documents and will be updated as needed.

  • The guide in German was written by a Mozilla staff copywriter. It sets the tone for marketing content localization, including for the mozilla.org site. With this guide, the hope is that the site will have a more unified tone from page to page, regardless of who contributed to the localization: the community or staff.
  • The guide in English was derived from the guide written in German and is meant to be a template for any communities who want to create a marketing content localization guide. This is in addition to the style guide a community has already created.
  • The guide in French will be authored by a Mozilla staff copywriter. We will update you in a future report.

Events

  • Want to showcase an event coming up that your community is participating in? Reach out to any l10n-driver and we’ll include it (see links to emails at the bottom of this report).

Questions? Want to get involved?

Did you enjoy reading this report? Let us know how we can improve by reaching out to any one of the l10n-drivers listed above.

The Mozilla BlogMozilla joins brief for protection of LGBTQ employees from discrimination

Last year, we joined the call in support of transgender equality as part of our longstanding commitment to diversity, inclusion and fostering a supportive work environment. Today, we are proud to join over 200 companies, big and small, as friends of the court, in a brief brought to the Supreme Court of the United States.

The brief says, in part:

“Amici support the principle that no one should be passed over for a job, paid less, fired, or subjected to harassment or any other form of discrimination based on their sexual orientation or gender identity. Amici’s commitment to equality is violated when any employee is treated unequally because of their sexual orientation or gender identity. When workplaces are free from discrimination against LGBT employees, everyone can do their best work, with substantial benefits for both employers and employees.”

The post Mozilla joins brief for protection of LGBTQ employees from discrimination appeared first on The Mozilla Blog.

Open Policy & AdvocacyBuilding on the UK white paper: How to better protect internet openness and individuals’ rights in the fight against online harms

In April 2019 the UK government unveiled plans for sweeping new laws aimed at tackling illegal and harmful content and activity online, described by the government as ‘the toughest internet laws in the world’. While the UK government’s proposal contains some interesting avenues of exploration for the next generation of European content regulation laws, it also includes several critical weaknesses and grey areas. We’ve just filed comments with the government that spell out the key areas of concern and provide recommendations on how to address them.

The UK government’s white paper responds to legitimate public policy concerns around how technology companies deal with illegal and harmful content online. We understand that in many respects the current European regulatory paradigm is not fit for purpose, and we support an exploration of what codified content ‘responsibility’ might look like in the UK and at EU-level, while ensuring strong and clear protections for individuals’ free expression and due process rights.

As we have noted previously, we believe that the white paper’s proposed regulatory architecture could have some potential. However, the UK government’s vision to put this model into practice contains serious flaws. Here are some of the changes we believe the UK government must make to its proposal to avoid the practical implementation pitfalls:

  • Clarity on definitions: The government must provide far more detail on what is meant by the terms ‘reasonableness’ and ‘proportionality’, if these are to serve as meaningful safeguards for companies and citizens. Moreover, the government must clearly define the relevant ‘online harms’ that are to be made subject to the duty of care, to ensure that companies can effectively target their trust and safety efforts.
  • A rights-protective governance model:  The regulator tasked with overseeing the duty of care must be truly co-regulatory in nature, with companies and civil society groups central to the process by which the Codes of Practice are developed. Moreover, the regulator’s mission must include a mandate to protect fundamental rights and internet openness, and it must not have power to issue content takedown orders.
  • A targeted scope: The duty of care should be limited to online services that store and publicly disseminate user-uploaded content. There should be clear exemptions for electronic communications services, internet service providers, and cloud services, whose operational and technical architecture are ill-suited and problematic for a duty of care approach.
  • Focus on practices over outcomes:  The regulator’s role should be to operationalise the duty of care with respect to companies’ practices – the steps they are taking to reduce ‘online harms’ on their service. The regulator should not have a role in assessing the legality or harm of individual pieces of content. Even the best content moderation systems can sometimes fail to identify illegal or harmful content, and so focusing exclusively on outcomes-based metrics to assess the duty of care is inappropriate.

We look forward to engaging more with the UK government as it continues its consultation on the Online Harms white paper, and hopefully the recommendations in this filing can help address some of the white paper’s critical shortcomings. As policymakers from Brussels to Delhi contemplate the next generation of online content regulations, the UK government has the opportunity to set a positive standard for the world.

The post Building on the UK white paper: How to better protect internet openness and individuals’ rights in the fight against online harms appeared first on Open Policy & Advocacy.

Web Application SecurityFixing Antivirus Errors

After the release of Firefox 65 in January 2019, we detected a significant increase in a certain type of TLS error that is often triggered by the interaction of antivirus software with the browser. Today, we are announcing the results of our work to eliminate most of these issues and explaining how we have done so without compromising security.

On Windows, about 60% of Firefox users run antivirus software, and most of them have HTTPS scanning features enabled by default. Moreover, Cloudflare publishes statistics showing that a significant portion of TLS browser traffic is intercepted. In order to inspect the contents of encrypted HTTPS connections to websites, the antivirus software intercepts the data before it reaches the browser. TLS is designed to prevent this through the use of certificates issued by trusted Certificate Authorities (CAs). Because of this, Firefox will display an error when TLS connections are intercepted unless the antivirus software anticipates this problem.

Firefox is different from a number of other browsers in that we maintain our own list of trusted CAs, called a root store. In the past we’ve explained how this improves Firefox security. Other browsers often choose to rely on the root store provided by the operating system (OS) (e.g. Windows). This means that antivirus software has to properly reconfigure Firefox in addition to the OS; if that fails for some reason, Firefox won’t be able to connect to any websites over HTTPS, even when other browsers on the same computer can.

The interception of TLS connections has historically been referred to as a “man-in-the-middle”, or MITM. We’ve developed a mechanism to detect when a Firefox error is caused by a MITM, and we also have a mechanism in place that often fixes these problems. The “enterprise roots” preference, when enabled, causes Firefox to import any root CAs that have been added to the OS by the user, an administrator, or a program installed on the computer. This option is available on Windows and macOS.

We considered adding a “Fix it” button to MITM error pages (see example below) that would let users easily enable the “enterprise roots” preference when the error is displayed. However, we realized that enabling this preference is something we actually want users to do, unlike an “override” button that merely lets a user bypass an error at their own risk.

Example of a MitM Error Page in Firefox

Beginning with Firefox 68, whenever a MITM error is detected, Firefox will automatically turn on the “enterprise roots” preference and retry the connection. If it fixes the problem, then the “enterprise roots” preference will remain enabled (unless the user manually sets the “security.enterprise_roots.enabled” preference to false). We’ve tested this change to ensure that it doesn’t create new problems. We are also recommending as a best practice that antivirus vendors enable this preference (by modifying prefs.js) instead of adding their root CA to the Firefox root store. We believe that these actions combined will greatly reduce the issues encountered by Firefox users.
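For illustration, this vendor recommendation amounts to a one-line change in a profile’s prefs.js file. The preference name below is the real one given above; the comment is simply a sketch of what a vendor or administrator might add:

    // Import root CAs that have been added to the OS, rather than
    // injecting a new root CA into the Firefox root store.
    user_pref("security.enterprise_roots.enabled", true);

A user who prefers the old behavior can set the same preference back to false in about:config, which Firefox will respect as described above.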

In addition, in Firefox ESR 68, the “enterprise roots” preference will be enabled by default. Because extended support releases are often used in enterprise settings where there is a need for Firefox to recognize the organization’s own internal CA, this change will streamline the process of deploying Firefox for administrators.

Finally, we’ve added an indicator that allows the user to determine when a website is relying on an imported root CA certificate. This notification is on the site information panel accessed by clicking the lock icon in the URL bar.

It might cause some concern for Firefox to automatically trust CAs that haven’t been audited or gone through Mozilla’s rigorous process. However, any user or program that has the ability to add a CA to the OS almost certainly also has the ability to add that same CA directly to the Firefox root store. Also, because we only import CAs that are not included with the OS, Mozilla maintains our ability to set and enforce the highest standards in the industry on publicly-trusted CAs that Firefox supports by default. In short, the changes we’re making meet the goal of making Firefox easier to use without sacrificing security.

The post Fixing Antivirus Errors appeared first on Mozilla Security Blog.

Mozilla UXiPad Sketching with GoodNotes 5

I’m always making notes and sketching on my iPad and people often ask me what app I’m using. So I thought I’d make a video of how I use GoodNotes 5 in my design practice.

Links:
This video on YouTube
GoodNotes
Templates: blank, storyboard, crazy 8s, iPhone X

Transcript:
Hi, I’m Michael Verdi, and I’m a product designer for Firefox. Today, I wanna talk about how I use sketching on my iPad in my design practice. So, first, sketching on an iPad, what I really like about it is that these apps on the iPad allow me to collect all my stuff in one place. So, I’ve got photos and screenshots, I’ve got handwritten notes, typed up notes, whatever, it’s all together. And I can organize it, and I can search through it and find what I need.

There’s really two basic kinds of apps that you can use for this. There’s drawing and sketching apps, and then there’s note-taking apps. Personally, I prefer the note-taking apps because they usually have better search tools and organization. The thing that I like that I’m gonna talk about today is GoodNotes 5.

I’ve got all kinds of stuff in here. I’ve got handwritten notes with photographs, I’ve got some typewritten notes, screenshots, other things that I’ve saved, photographs, again. Yeah, I really like using this. I can do storyboards in here, right, or I can draw things, copy and paste them so that I can iterate quickly, make multiple variations over and over again. I can stick in screenshots and then draw on top of them or annotate them. Right, so, let me do a quick demo of using this to draw.

So, one of the things that I’ll do is actually, maybe I’ve drawn some stuff before, and I’ll save that drawing as an image in my photo library. And then I’ll come stick it in here, and I’ll draw on top of it. So, I work on search, so here’s Firefox with no search box. So, I’m gonna draw one. Let’s use some straight lines to draw one. I’m gonna draw a big search box, but I’m doing it here in the middle because I’m gonna place it a little better in a second. And we have the selection tool, and I’m gonna make the, the selection is not selecting images, right? So, I can come over here and just grab my box, and then I can move my box around on top. Okay, so, I still have this gray line. I can’t erase that because it’s an image. So, I’m gonna come over here, and I’m gonna get some white, and I’m gonna just draw over it. Right, okay. Let’s go back and get my gray color. I can zoom in when I need to, and I’m gonna copy this, and I’m gonna paste it a bunch of times. Then I can annotate this. Right, so, there we go.

Another thing that I really like about GoodNotes is the ability to search through stuff that you’ve done. So, I’m gonna search here, and I’m gonna search for spaces. So, this was a thing that we mocked up with a storyboard. This is it right here. And it recognized my, it read my handwriting, which is really cool. So, I can find this thing in a notebook of a jillion pages. But there’s also another way to find things. So, you have this view here, this is called the outline view. These are sorta like named bookmarks. There’s also a thumbnail view, right? Here’s all the pages in this notebook. But if I go to the outlines, so, here, I did some notes about a critique format, and I can jump right to them. But let’s say this new drawing, well, where did I do it? This new drawing, I wanna be able to get to this all the time, right? So, I can come up here, and I can say Add This Page to the Outline, and now I can give it a name. And I don’t know what I’m gonna call this, so I’m just callin’ it sample for right now. And so, now it is in the outline. Oh, I guess I had already done this as a demo. But there it is. And that’s how I can get to it now. That’s super, super cool.

Okay, and then one last thing I wanna show you about this is templates. So, I actually made this template to better fit my setup here. I’ve got no status bar at the top. And then these are just PDFs. And you can import your own. And I can change the template for this page, and I’ve made a storyboard template. And I can apply that here. And now I’ve got a thing so I can draw a storyboard. Or maybe I don’t wanna do a storyboard, but what else do I have? Oh, I wanna do a crazy eights exercise. So, now I’ve got one ready for a crazy eight exercise. I love these templates, they’re super handy. I’ll include those in the post with this, a link to some of these things.

So, that’s sketching on the iPad with GoodNotes 5. Thanks for watching.


hacks.mozilla.orgGeckoView in 2019

Logo of GeckoView

Last September we wrote about using GeckoView to bring Firefox’s rendering engine to Android as a reusable library. By decoupling the Gecko engine from the Firefox application, we’ve created a newer, faster, and more maintainable way to create Android applications. This approach leverages Gecko’s excellent performance, privacy, and support for cutting-edge web standards.

With today’s release of our GeckoView-powered Firefox Preview, we’d like to share an update on what we’ve accomplished and where GeckoView is going in 2019.

Introducing Firefox Preview

Wordmark of Firefox Preview

We’re excited to announce today’s initial release of Firefox Preview (GitHub), an entire browser built from the ground up with GeckoView and Mozilla Android Components at its core. Though still an early preview, this is our first end-user product built completely with these new technologies.

Two screenshots of Firefox Preview showing the home screen and a page loaded with the main menu open

Firefox Preview is our platform for building, testing, and delivering unique features. We’ll use it to explore new concepts for how mobile browsers should look and feel. We encourage you to give it a try!

Other Projects using GeckoView

But that is not all — Mozilla is using GeckoView in many other products as well:

Firefox Focus

Logo of Firefox Focus

To date, Firefox Focus has been our most prominent consumer of GeckoView. Focus’s simplicity lends itself to experimentation. Currently we’re using Focus to run a split test between GeckoView and Android’s built-in WebView. This helps us ensure that GeckoView’s performance and stability meet or exceed the expectations set by Android’s platform libraries.

While Focus is great at what it does, it is not a general purpose browser. By design, Focus does not keep track of history or bookmarks, nor does it support APIs like WebRTC. Yet we need a place to test those features to ensure that GeckoView is sufficiently robust and capable of building fully-featured browsers. That’s where Reference Browser comes in.

Reference Browser

Logo of Reference Browser

Like Firefox Preview, Reference Browser is a full browser built with GeckoView and Mozilla Android Components, but crucially, it is not an end-user product. Its intended audience is browser developers. Indeed, Reference Browser is a proving ground where we validate that GeckoView and the Components fit together and work as expected. We gain the ability to develop our core libraries without the constraints of an in-market product.

Firefox Reality

Logo of Firefox Reality

GeckoView also powers Firefox Reality, a browser designed for standalone virtual reality headsets. In addition to leveraging Gecko’s excellent support for immersive web technologies, Firefox Reality demonstrates GeckoView’s versatility. The same library that’s at the heart of “traditional” browsers like Focus and Firefox Preview can also power experiences in an entirely different medium.

Firefox for Android

Logo of Firefox

Lastly, while Firefox for Android (“Fennec”) does not use GeckoView for normal browsing, it does use it to support Progressive Web Apps and Custom Tabs. Moreover, because GeckoView and Fennec are both based on Gecko, they jointly benefit from improvements to that common infrastructure.

GeckoView is the foundation of Mozilla’s next generation of mobile products. To better support that future, we’ve halted new feature development on Focus while we concentrate on refining GeckoView and preparing for the launch of Firefox Preview. If you’re interested in supporting Focus in the future, please help by filling out this survey.

Internals

Aside from product development, the past six months have seen many improvements to GeckoView’s internals, especially around compiler-level optimizations and support for additional CPU architectures. Highlights include:

  • Profile-Guided Optimization (PGO) on Android is now enabled, which allows the compiler to generate more efficient code by considering data gathered by actually running and observing GeckoView.
  • The IonMonkey JavaScript JIT compiler is now enabled for 64-bit ARM builds of GeckoView.
  • We’re now producing builds of GeckoView for the x86_64 architecture.

In addition to meeting Google’s upcoming requirements for listing in the Play Store, supporting 64-bit architectures further improves GeckoView’s stability (fewer out-of-memory crashes) and security.
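As a hypothetical sketch of what those extra architectures mean for an app embedding GeckoView, an Android project can produce one APK per CPU architecture with Gradle’s standard splits mechanism; the ABI names below are Android’s, and which ones you enable depends on the GeckoView builds you consume (nothing here is specific to GeckoView’s own build system):

    // build.gradle.kts (sketch): produce one APK per CPU architecture,
    // including the 64-bit ABIs required for Play Store listing.
    android {
        splits {
            abi {
                isEnable = true
                reset()
                include("armeabi-v7a", "arm64-v8a", "x86", "x86_64")
                isUniversalApk = false
            }
        }
    }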

For upcoming releases, we’re working on support for web push and “add to home screen,” among other things.

Get involved

GeckoView isn’t just for Mozilla; we want it to be useful to you.

Thanks to Emily Toop for new work on the GeckoView Documentation website. It’s easier than ever to get started, either as an app developer using GeckoView or as a GeckoView contributor. If you spot something that could be better documented, pull requests are always welcome.
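To give a flavor of the embedding API, here is a minimal Kotlin sketch of a single-view browser activity. It uses the core public GeckoView classes (GeckoRuntime, GeckoSession, GeckoView) as of this writing; treat the exact calls as illustrative and consult the documentation site above for the current API. The class name MinimalBrowserActivity is made up for this example.

    import android.app.Activity
    import android.os.Bundle
    import org.mozilla.geckoview.GeckoRuntime
    import org.mozilla.geckoview.GeckoSession
    import org.mozilla.geckoview.GeckoView

    class MinimalBrowserActivity : Activity() {
        override fun onCreate(savedInstanceState: Bundle?) {
            super.onCreate(savedInstanceState)

            // GeckoView is an ordinary Android View that renders web content.
            val view = GeckoView(this)

            // The runtime hosts the Gecko engine; a session is one browsing
            // context (roughly, one tab) attached to that runtime.
            val runtime = GeckoRuntime.getDefault(this)
            val session = GeckoSession()
            session.open(runtime)
            view.setSession(session)

            session.loadUri("https://mozilla.org")
            setContentView(view)
        }
    }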

Logos of all GeckoView-powered projects

We’d also love to help you directly. If you need assistance with anything GeckoView-related, you can find us:

Please do reach out with any questions or comments.

The post GeckoView in 2019 appeared first on Mozilla Hacks - the Web developer blog.