Paul McLanahan: Multi-Stage Dockerfiles and Python Virtualenvs

Using Docker’s multi-stage build feature and Python’s virtualenv tool, we can make smaller and more secure docker images for production.

QMO: Firefox 66 Beta 8 Testday Results

Hello Mozillians!

As you may already know, last Friday, February 15th, we held a new Testday event for Firefox 66 Beta 8.

Thank you all for helping us make Mozilla a better place: gaby2300, Priyadharshini A and Aishwarya Narasimhan.

Results:

– several test cases executed for “Storage Access API/Cookie Restrictions”.

Thanks for another successful testday! 🙂

Mozilla Addons Blog: Extensions in Firefox 66

Firefox 66 is currently in beta and, for extension developers, the changes to the WebExtensions API center primarily around improving performance, stability, and the development experience. A total of 30 issues were resolved in Firefox 66, including contributions from several volunteer community members.

Major Performance Improvements for Storage

I want to start by highlighting an important change that has a major, positive impact for Firefox users. Starting in release 66, extensions use IndexedDB as the backend for local storage instead of a JSON file. This results in a significant performance improvement for many extensions, while simultaneously reducing the amount of memory that Firefox uses.

This change is completely transparent to extension developers – you do not need to do anything to take advantage of this improvement. When users upgrade to Firefox 66, the local storage JSON file is silently migrated to IndexedDB. All extensions using the storage.local API immediately realize the benefits, especially if they store small changes to large structures, as is true for ad-blockers, the most common and popular type of extension used in Firefox.
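
For readers who haven’t used it, here is a minimal sketch of the kind of storage.local code that benefits from the new backend. The keys, values, and function name are invented for illustration, and the snippet assumes the promise-based browser.* namespace available to Firefox extensions; the IndexedDB migration requires no changes to code like this.

    // Hypothetical example: count visits per hostname in extension-local storage.
    // The keys and values are made up for illustration; the Firefox 66 backend
    // change is transparent, so this code behaves the same before and after it.
    async function bumpVisitCount(hostname) {
        const { visitCounts = {} } = await browser.storage.local.get("visitCounts");
        visitCounts[hostname] = (visitCounts[hostname] || 0) + 1;
        await browser.storage.local.set({ visitCounts });
        return visitCounts[hostname];
    }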

The video below, using Adblock Plus as an example, shows the significant performance improvements that extension users could see.

Other Improvements

The remaining bug fixes and feature enhancements won’t be as noticeable as the change to local storage, but they nevertheless raise the overall quality of the WebExtensions API and make the development experience better.  Some of the highlights include:

Thank you to everyone who contributed to the Firefox 66 release, but a special thank you to our volunteer community contributors, including: tossj, Varun Dey, and Edward Wu.

The post Extensions in Firefox 66 appeared first on Mozilla Add-ons Blog.

Mozilla VR Blog: Jingle Smash: Geometry and Textures

This is part 3 of my series on how I built Jingle Smash, a block smashing WebVR game.

I’m not a designer or artist. In previous demos and games I’ve used GLTFs, which are existing 3D models created by someone else that I downloaded into my game. However, for Jingle Smash I decided to use procedural generation, meaning I combined primitives in interesting ways using code. I also generated all of the textures with code. I don’t know how to draw pretty textures by hand in a painting tool, but 20 years of 2D coding means I can code up a texture pretty easily.

Jingle Smash has three sets of graphics: the blocks, the balls, and the background imagery. Each set uses its own graphics technique.

Block Textures

Each block uses the same texture on every side; which texture is used depends on the block type. The blocks you can knock over I called ‘presents’ and gave them red ribbon stripes over a white background. I drew this into an HTML Canvas with standard 2D canvas code, then turned it into a texture using the THREE.CanvasTexture class.

const canvas = document.createElement('canvas')
canvas.width = 128
canvas.height = 128
const c = canvas.getContext('2d')

//white background
c.fillStyle = 'white'
c.fillRect(0,0,canvas.width, canvas.height)

//lower left for the sides
c.save()
c.translate(0,canvas.height/2)
c.fillStyle = 'red'
c.fillRect(canvas.width/8*1.5, 0, canvas.width/8, canvas.height/2)
c.restore()

//upper left for the bottom and top
c.save()
c.translate(0,0)
c.fillStyle = 'red'
c.fillRect(canvas.width/8*1.5, 0, canvas.width/8, canvas.height/2)
c.fillStyle = 'red'
c.fillRect(0,canvas.height/8*1.5, canvas.width/2, canvas.height/8)
c.restore()

c.fillStyle = 'black'

const tex = new THREE.CanvasTexture(canvas)
this.textures.present1 = tex

this.materials[BLOCK_TYPES.BLOCK] = new THREE.MeshStandardMaterial({
    color: 'white',
    metalness: 0.0,
    roughness: 1.0,
    map:this.textures.present1,
})

Once the texture is made I can create a ThreeJS material with it. I tried to use PBR (physically based rendering) materials in this project. Since the presents are supposed to be made of paper I used a metalness of 0.0 and roughness of 1.0. All textures and materials are saved in global variables for reuse.

In the finished texture, the lower left part is used for the sides and the upper left for the top and bottom.

The other two box textures are similar: a square and cross pattern for the crystal boxes, and simple random noise for the walls.
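
The noise texture code isn't shown in this post, but as a rough sketch of the approach (the exact values and the this.textures.wall name here are my own guesses, not the game's actual code), it can be as simple as filling a small canvas with random gray squares:

//hypothetical sketch of a random noise texture like the one used for the walls
const noiseCanvas = document.createElement('canvas')
noiseCanvas.width = 128
noiseCanvas.height = 128
const nc = noiseCanvas.getContext('2d')
for (let x = 0; x < noiseCanvas.width; x += 4) {
    for (let y = 0; y < noiseCanvas.height; y += 4) {
        const v = 180 + Math.floor(Math.random() * 60) // random light gray value
        nc.fillStyle = `rgb(${v},${v},${v})`
        nc.fillRect(x, y, 4, 4)
    }
}
this.textures.wall = new THREE.CanvasTexture(noiseCanvas)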

Skinning the Box

By default a BoxGeometry will put the same texture on all six sides of the box. However, we want to use different portions of the texture above for different sides. This is controlled with the UV values of each face. Fortunately ThreeJS has a face abstraction to make this easy. You can loop over the faces and manipulate the UVs however you wish. I scaled and moved them around to capture just the parts of the texture I wanted.

geo.faceVertexUvs[0].forEach((f,i)=>{
    if(i === 4 || i===5 || i===6 || i===7 ) {
        f.forEach(uv=>{
            uv.x *= 0.5 //scale down
            uv.y *= 0.5 //scale down
            uv.y += 0.5 //move from lower left quadrant to upper left quadrant
        })
    } else {
        //rest of the sides. scale it in
        f.forEach(uv=>{
            uv.x *= 0.5 // scale down
            uv.y *= 0.5 // scale down
        })
    }
})

Striped Ornaments

There are two different balls you can shoot. A spherical ornament with a stem and an oblong textured one. For the textures I just generated stripes with canvas.

{
    const canvas = document.createElement('canvas')
    canvas.width = 64
    canvas.height = 16
    const c = canvas.getContext('2d')

    c.fillStyle = 'black'
    c.fillRect(0, 0, canvas.width, canvas.height)
    c.fillStyle = 'red'
    c.fillRect(0, 0, 30, canvas.height)
    c.fillStyle = 'white'
    c.fillRect(30, 0, 4, canvas.height)
    c.fillStyle = 'green'
    c.fillRect(34, 0, 30, canvas.height)

    this.textures.ornament1 = new THREE.CanvasTexture(canvas)
    this.textures.ornament1.wrapS = THREE.RepeatWrapping
    this.textures.ornament1.repeat.set(8, 1)
}

{
    const canvas = document.createElement('canvas')
    canvas.width = 128
    canvas.height = 128
    const c = canvas.getContext('2d')
    c.fillStyle = 'black'
    c.fillRect(0,0,canvas.width, canvas.height)

    c.fillStyle = 'red'
    c.fillRect(0, 0, canvas.width, canvas.height/2)
    c.fillStyle = 'white'
    c.fillRect(0, canvas.height/2, canvas.width, canvas.height/2)

    const tex = new THREE.CanvasTexture(canvas)
    tex.wrapS = THREE.RepeatWrapping
    tex.wrapT = THREE.RepeatWrapping
    tex.repeat.set(6,6)
    this.textures.ornament2 = tex
}

The code above produces these two striped textures.

What makes the textures interesting is repeating them on the ornaments. ThreeJS makes this really easy by using the wrap and repeat values, as shown in the code above.

One of the ornaments is meant to have an oblong double turnip shape, so I used a LatheGeometry. With a lathe you define a curve and ThreeJS will rotate it to produce a 3D mesh. I created the curve with the equations x = Math.sin(i * 0.195) * rad and y = i * rad / 7, where rad is the ornament radius.

let points = [];
for (let i = 0; i <= 16; i++) {
    points.push(new THREE.Vector2(Math.sin(i * 0.195) * rad, i * rad / 7));
}
const geometry = new THREE.LatheBufferGeometry(points);
geometry.center()
return new THREE.Mesh(geometry, new THREE.MeshStandardMaterial({
    color: 'white',
    metalness: 0.3,
    roughness: 0.3,
    map: this.textures.ornament1,
}))

For the other ornament I wanted a round ball with a stem on the end like a real Christmas tree ornament. To build this I combined a sphere and cylinder.

const geo = new THREE.Geometry()
geo.merge(new THREE.SphereGeometry(rad,32))
const stem = new THREE.CylinderGeometry(rad/4,rad/4,0.5,8)
stem.translate(0,rad/4,0)
geo.merge(stem)
return new THREE.Mesh(geo, new THREE.MeshStandardMaterial({
    color: 'white',
    metalness: 0.3,
    roughness: 0.3,
    map: this.textures.ornament2,
}))

Since I wanted the ornaments to appear shiny and plasticky, but not as shiny as a chrome sphere, I used metalness and roughness values of 0.3 and 0.3.

Note that I had to center the oblong ornament with geometry.center(). Even though the ornaments have different shapes I represented them both as spheres on the physics side. If you roll the oblong one on the ground it may look strange to see it roll perfectly like a ball, but it was good enough for this game. Game development is all about cutting the right corners.
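
As a rough illustration of what representing both ornaments as spheres on the physics side can look like, here is a small sketch using a Cannon.js-style API. The engine choice is the topic of the "Choosing a Physics Engine" post, and the mass, radius, and variable names here are assumptions for the example, not the game's actual values.

//hypothetical sketch: both ornament meshes share the same simple sphere body
const body = new CANNON.Body({
    mass: 1.0,                      // assumed mass, tuned for game feel
    shape: new CANNON.Sphere(rad),  // one sphere approximates either ornament
    position: new CANNON.Vec3(0, 1.5, -2),
})
world.addBody(body)

//each frame, copy the physics position and rotation back onto the ThreeJS mesh
mesh.position.copy(body.position)
mesh.quaternion.copy(body.quaternion)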

Building a Background

It might not look like it if you are in a 3 degree of freedom (3dof) headset like the Oculus Go, but the background is not a static painting. The clouds in the sky are an image but everything else was created with real geometry.

The snow covered hills are actually full spheres placed mostly below the ground plane. The trees and candy are all simple cones. The underlying stripe texture I drew in Acorn, a desktop drawing app. Other than the clouds it is the only real texture I used in the game. I probably could have done the stripe in code as well but I was running out of time. In fact both the trees and candy mountains use the exact same texture, just with a different base color.

        const tex = game.texture_loader.load('./textures/candycane.png')
        tex.wrapS = THREE.RepeatWrapping
        tex.wrapT = THREE.RepeatWrapping
        tex.repeat.set(8,8)

        const background = new THREE.Group()

        const candyCones = new THREE.Geometry()
        candyCones.merge(new THREE.ConeGeometry(1,10,16,8).translate(-22,5,0))
        candyCones.merge(new THREE.ConeGeometry(1,10,16,8).translate(22,5,0))
        candyCones.merge(new THREE.ConeGeometry(1,10,16,8).translate(7,5,-30))
        candyCones.merge(new THREE.ConeGeometry(1,10,16,8).translate(-13,5,-20))
        background.add(new THREE.Mesh(candyCones,new THREE.MeshLambertMaterial({ color:'white', map:tex,})))

        const greenCones = new THREE.Geometry()
        greenCones.merge(new THREE.ConeGeometry(1,5,16,8).translate(-15,2,-5))
        greenCones.merge(new THREE.ConeGeometry(1,5,16,8).translate(-8,2,-28))
        greenCones.merge(new THREE.ConeGeometry(1,5,16,8).translate(-8.5,0,-25))
        greenCones.merge(new THREE.ConeGeometry(1,5,16,8).translate(15,2,-5))
        greenCones.merge(new THREE.ConeGeometry(1,5,16,8).translate(14,0,-3))

        background.add(new THREE.Mesh(greenCones,new THREE.MeshLambertMaterial({color:'green', map:tex,})))

All of them were positioned by hand in code. To make this work I had to constantly adjust code then reload the scene in VR. At first I would just preview in my desktop browser, but to really feel how the scene looks you have to view it in a real 3D headset. This is one of the magical parts about VR development with the web. Iteration is so fast.

Note that even though I have many different cones I merged them all into just two geometries so they can be drawn together. It’s far better to have two draw calls instead of 10 for a static background.

Next Steps

I'm pretty happy with how the textures turned out. By sticking to just a few core colors I was able to create both consistency and variety. Furthermore, I was able to do it without any 3D modeling. Just some simple canvas code and a lot of iteration.

Next time I'll dive into the in-game level editor.

Mozilla Security Blog: Why Does Mozilla Maintain Our Own Root Certificate Store?

Mozilla maintains a database containing a set of “root” certificates that we use as “trust anchors”. This database, commonly referred to as a “root store”, allows us to determine which Certificate Authorities (CAs) can issue SSL/TLS certificates that are trusted by Firefox, and email certificates that are trusted by Thunderbird. Properly maintaining a root store is a significant undertaking – it requires constant effort to evaluate new trust anchors, monitor existing ones, and react to incidents that threaten our users. Despite the effort involved, Mozilla is committed to maintaining our own root store because doing so is vital to the security of our products and the web in general. It gives us the ability to set policies, determine which CAs meet them, and to take action when a CA fails to do so.

A major advantage to controlling our own root store is that we can do so in a way that reflects our values. We manage our CA Certificate Program in the open, and by encouraging public participation we give individuals a voice in these trust decisions. Our root inclusion process is one example. We process lots of data and perform significant due diligence, then publish our findings and hold a public discussion before accepting each new root. Managing our own root store also allows us to have a public incident reporting process that emphasizes disclosure and learning from experts in the field. Our mailing list includes participants from many CAs, CA auditors, and other root store operators and is the most widely recognized forum for open, public discussion of policy issues.

The value delivered by our root program extends far beyond Mozilla. Everyone who relies on publicly-trusted certificates benefits from our work, regardless of their choice of browser. And because our root store, which is part of the NSS cryptographic library, is open source, it has become a de-facto standard for many Linux distributions and other products that need a root store but don’t have the resources to curate their own. Providing one root store that many different products can rely on, regardless of platform, reduces compatibility problems that would result from each product having a unique set of root certificates.

Finally, operating a root store allows Mozilla to lead and influence the entire web Public Key Infrastructure (PKI) ecosystem. We created the Common Certificate Authority Database (CCADB) to help us manage our own program, and have since opened it up to other root store operators, resulting in better information and less redundant work for all involved. With full membership in the CA/Browser Forum, we collaborate with other root store operators, CAs, and auditors to create standards that continue to increase the trustworthiness of CAs and the SSL/TLS certificates they issue. Our most recent effort was aimed at improving the standards for validating IP Addresses.

The primary alternative to running our own root store is to rely on the one that is built in to most operating systems (OSs). However, relying on our own root store allows us to provide a consistent experience across OS platforms because we can guarantee that the exact same set of trust anchors is available to Firefox. In addition, OS vendors often serve customers in government and industry in addition to their end users, putting them in a position to sometimes make root store decisions that Mozilla would not consider to be in the best interest of individuals.

Sometimes we experience problems that wouldn’t have occurred if Firefox relied on the OS root store. Companies often want to add their own private trust anchors to systems that they control, and it is easier for them if they can modify the OS root store and assume that all applications will rely on it. The same is true for products that intercept traffic on a computer. For example, many antivirus programs unfortunately include a web filtering feature that intercepts HTTPS requests by adding a special trust anchor to the OS root store. This will trigger security errors in Firefox unless the vendor supports Firefox by turning on the setting we provide to address these situations.

Principle 4 of the Mozilla Manifesto states that “Individuals’ security and privacy on the internet are fundamental and must not be treated as optional.” The costs of maintaining a CA Certificate Program and root store are significant, but there are fundamental benefits for our users and the larger internet community that undoubtedly make doing it ourselves the right choice for Mozilla.

The post Why Does Mozilla Maintain Our Own Root Certificate Store? appeared first on Mozilla Security Blog.

Hacks.Mozilla.Org: Fearless Security: Thread Safety

In Part 2 of my three-part Fearless Security series, I’ll explore thread safety.

Today’s applications are multi-threaded—instead of sequentially completing tasks, a program uses threads to perform multiple tasks simultaneously. We all use concurrency and parallelism every day:

  • Web sites serve multiple simultaneous users.
  • User interfaces perform background work that doesn’t interrupt the user. (Imagine if your application froze each time you typed a character because it was spell-checking).
  • Multiple applications can run at the same time on a computer.

While this allows programs to do more, faster, it comes with a set of synchronization problems, namely deadlocks and data races. From a security standpoint, why do we care about thread safety? Memory safety bugs and thread safety bugs have the same core problem: invalid resource use. Concurrency attacks can lead to similar consequences as memory attacks, including privilege escalation, arbitrary code execution (ACE), and bypassing security checks.

Concurrency bugs, like implementation bugs, are closely related to program correctness. While memory vulnerabilities are nearly always dangerous, implementation/logic bugs don’t always indicate a security concern, unless they occur in the part of the code that deals with ensuring security contracts are upheld (e.g. allowing a security check bypass). However, while security problems stemming from logic errors often occur near the error in sequential code, concurrency bugs often happen in different functions from their corresponding vulnerability, making them difficult to trace and resolve. Another complication is the overlap between mishandling memory and concurrency flaws, which we see in data races.

Programming languages have evolved different concurrency strategies to help developers manage both the performance and security challenges of multi-threaded applications.

Problems with concurrency

It’s a common axiom that parallel programming is hard—our brains are better at sequential reasoning. Concurrent code can have unexpected and unwanted interactions between threads, including deadlocks, race conditions, and data races.

A deadlock occurs when multiple threads are each waiting on the other to take some action in order to proceed, leading to the threads becoming permanently blocked. While this is undesirable behavior and could cause a denial of service attack, it wouldn’t cause vulnerabilities like ACE.

A race condition is a situation in which the timing or ordering of tasks can affect the correctness of a program, while a data race happens when multiple threads attempt to concurrently access the same location in memory and at least one of those accesses is a write. There’s a lot of overlap between data races and race conditions, but they can also occur independently. There are no benign data races.

Potential consequences of concurrency bugs:

  1. Deadlock
  2. Information loss: another thread overwrites information
  3. Integrity loss: information from multiple threads is interlaced
  4. Loss of liveness: performance problems resulting from uneven access to shared resources

The best-known type of concurrency attack is called a TOCTOU (time of check to time of use) attack, which is a race condition between checking a condition (like a security credential) and using the results. TOCTOU attacks are examples of integrity loss.

Deadlocks and loss of liveness are considered performance problems, not security issues, while information and integrity loss are both more likely to be security-related. This paper from Red Balloon Security examines some exploitable concurrency errors. One example is a pointer corruption that allows privilege escalation or remote execution—a function that loads a shared ELF (Executable and Linkable Format) library holds a semaphore correctly the first time it’s called, but the second time it doesn’t, enabling kernel memory corruption. This attack is an example of information loss.

The trickiest part of concurrent programming is testing and debugging—concurrency bugs have poor reproducibility. Event timings, operating system decisions, network traffic, etc. can all cause different behavior each time you run a program that has a concurrency bug.

Not only can behavior change each time we run a concurrent program, but inserting print or debugging statements can also modify the behavior, causing heisenbugs (nondeterministic, hard to reproduce bugs that are common in concurrent programming) to mysteriously disappear. These operations are slow compared to others and change message interleaving and event timing accordingly.

Concurrent programming is hard. Predicting how concurrent code interacts with other concurrent code is difficult to do. When bugs appear, they’re difficult to find and fix. Instead of relying on programmers to worry about this, let’s look at ways to design programs and use languages to make it easier to write concurrent code.

First, we need to define what “threadsafe” means:

“A data type or static method is threadsafe if it behaves correctly when used from multiple threads, regardless of how those threads are executed, and without demanding additional coordination from the calling code.” MIT

How programming languages manage concurrency

In languages that don’t statically enforce thread safety, programmers must remain constantly vigilant when interacting with memory that can be shared with another thread and could change at any time. In sequential programming, we’re taught to avoid global variables in case another part of code has silently modified them. Like manual memory management, requiring programmers to safely mutate shared data is problematic.

Generally, programming languages are limited to two approaches for managing safe concurrency:

  1. Confining mutability or limiting sharing
  2. Manual thread safety (e.g. locks, semaphores)

Languages that limit threading either confine mutable variables to a single thread or require that all shared variables be immutable. Both approaches eliminate the core problem of data races—improperly mutating shared data—but this can be too limiting. To solve this, languages have introduced low-level synchronization primitives like mutexes. These can be used to build threadsafe data structures.

Python and the global interpreter lock

The reference implementation of Python, CPython, has a mutex called the Global Interpreter Lock (GIL), which allows only a single thread to access Python objects at a time. Multi-threaded Python is notorious for being inefficient because of the time spent waiting to acquire the GIL. Instead, most parallel Python programs use multiprocessing, meaning each process has its own GIL.

Java and runtime exceptions

Java is designed to support concurrent programming via a shared-memory model. Each thread has its own execution path, but is able to access any object in the program—it’s up to the programmer to synchronize accesses between threads using Java built-in primitives.

While Java has the building blocks for creating thread-safe programs, thread safety is not guaranteed by the compiler (unlike memory safety). If an unsynchronized memory access occurs (aka a data race), then Java will raise a runtime exception—however, this still relies on programmers appropriately using concurrency primitives.

C++ and the programmer’s brain

While Python avoids data races by synchronizing everything with the GIL, and Java raises runtime exceptions if it detects a data race, C++ relies on programmers to manually synchronize memory accesses. Prior to C++11, the standard library did not include concurrency primitives.

Most programming languages provide programmers with the tools to write thread-safe code, and post hoc methods exist for detecting data races and race conditions; however, this does not result in any guarantees of thread safety or data race freedom.

How does Rust manage concurrency?

Rust takes a multi-pronged approach to eliminating data races, using ownership rules and type safety to guarantee data race freedom at compile time.

The first post of this series introduced ownership—one of the core concepts of Rust. Each variable has a unique owner and can either be moved or borrowed. If a different thread needs to modify a resource, then we can transfer ownership by moving the variable to the new thread.

Moving enforces exclusion, allowing multiple threads to write to the same memory, but never at the same time. Since an owner is confined to a single thread, what happens if another thread borrows a variable?

In Rust, you can have either one mutable borrow or as many immutable borrows as you want. You can never simultaneously have a mutable borrow and an immutable borrow (or multiple mutable borrows). When we talk about memory safety, this ensures that resources are freed properly, but when we talk about thread safety, it means that only one thread can ever modify a variable at a time. Furthermore, we know that no other threads will try to reference an out of date borrow—borrowing enforces either sharing or writing, but never both.

Ownership was designed to mitigate memory vulnerabilities. It turns out that it also prevents data races.

While many programming languages have methods to enforce memory safety (like reference counting and garbage collection), they usually rely on manual synchronization or prohibitions on concurrent sharing to prevent data races. Rust’s approach addresses both kinds of safety by attempting to solve the core problem of identifying valid resource use and enforcing that validity during compilation.

Either one mutable borrow or infinitely many immutable borrows

But wait! There’s more!

The ownership rules prevent multiple threads from writing to the same memory and disallow simultaneous sharing between threads and mutability, but this doesn’t necessarily provide thread-safe data structures. Every data structure in Rust is either thread-safe or it’s not. This is communicated to the compiler using the type system.

“A well-typed program can’t go wrong.” Robin Milner, 1978

In programming languages, type systems describe valid behaviors. In other words, a well-typed program is well-defined. As long as our types are expressive enough to capture our intended meaning, then a well-typed program will behave as intended.

Rust is a type safe language—the compiler verifies that all types are consistent. For example, the following code would not compile:

    let mut x = "I am a string";
    x = 6;
    error[E0308]: mismatched types
     --> src/main.rs:6:5
      |
    6 | x = 6; //
      |     ^ expected &str, found integral variable
      |
      = note: expected type `&str`
                 found type `{integer}`

All variables in Rust have a type—often, they’re implicit. We can also define new types and describe what capabilities a type has using the trait system. Traits provide an interface abstraction in Rust. Two important built-in traits are Send and Sync, which are exposed by default by the Rust compiler for every type in a Rust program:

  • Send indicates that a struct may safely be sent between threads (required for an ownership move)
  • Sync indicates that a struct may safely be shared between threads

This example is a simplified version of the standard library code that spawns threads:

    fn spawn<Closure: Fn() + Send>(closure: Closure){ ... }

    let x = std::rc::Rc::new(6);
    spawn(|| { x; });

The spawn function takes a single argument, closure, and requires that closure has a type that implements the Send and Fn traits. When we try to spawn a thread and pass a closure value that makes use of the variable x, the compiler rejects the program for not fulfilling these requirements with the following error:

    error[E0277]: `std::rc::Rc<i32>` cannot be sent between threads safely
     --> src/main.rs:8:1
      |
    8 | spawn(move || { x; });
      | ^^^^^ `std::rc::Rc<i32>` cannot be sent between threads safely
      |
      = help: within `[closure@src/main.rs:8:7: 8:21 x:std::rc::Rc<i32>]`, the trait `std::marker::Send` is not implemented for `std::rc::Rc<i32>`
      = note: required because it appears within the type `[closure@src/main.rs:8:7: 8:21 x:std::rc::Rc<i32>]`
    note: required by `spawn`

The Send and Sync traits allow the Rust type system to reason about what data may be shared. By including this information in the type system, thread safety becomes type safety. Instead of relying on documentation, thread safety is part of the compiler’s law.

This allows programmers to be opinionated about what can be shared between threads, and the compiler will enforce those opinions.

While many programming languages provide tools for concurrent programming, preventing data races is a difficult problem. Requiring programmers to reason about complex instruction interleaving and interaction between threads leads to error prone code. While thread safety and memory safety violations share similar consequences, traditional memory safety mitigations like reference counting and garbage collection don’t prevent data races. In addition to statically guaranteeing memory safety, Rust’s ownership model prevents unsafe data modification and sharing across threads, while the type system propagates and enforces thread safety at compile time.
Pikachu finally discovers fearless concurrency with Rust

The post Fearless Security: Thread Safety appeared first on Mozilla Hacks - the Web developer blog.

The Mozilla Blog: Firefox for iOS Amps Up Private Browsing and More

Today we’re rolling out updated features for iPhone and iPad users, including a new layout for menu and settings, persistent Private Browsing tabs and new organization options within the New Tabs feature. This round of updates is the result of requests we received straight from our users, and we’re taking your feedback to make this version of Firefox for iOS work harder and smarter for you.

With this in mind, in the latest update of Firefox for iOS we overhauled both the Settings and Menu options to more closely mirror the desktop application. Now you can access bookmarks, history, Reading List and downloads in the “Library” menu item.

Private Browsing – Keep browsing like nobody’s watching

Private browsing tabs can now live across sessions, meaning, if you open a private browsing tab and then exit the app, Firefox will automatically launch in private browsing the next time you open the app. Keeping your private browsing preferences seamless is just another way we’re making it simple and easy to give you back control of the privacy of your online experience.

Private browsing tabs can now live across sessions

Organize your New Tabs (like a pro)

Today’s release also includes a few different options for New Tabs organization. You can now choose to have new tabs open with your bookmark list, in Firefox Home (with top sites and Pocket stories), with a list of recent history, a custom URL or in a blank page.

More options for New Tabs organization

We’re also making it easier to customize Firefox Home with top sites and Pocket content. All tabs can now be rearranged by dragging a tab into the tab bar or tab tray.

Customize Firefox Home with top sites and Pocket content

Whether it’s your personal data or how you organize your online experience, Firefox continues to bring more privacy and control to you.

To get the latest version of Firefox for iOS, visit the App Store.

The post Firefox for iOS Amps Up Private Browsing and More appeared first on The Mozilla Blog.

Mozilla GFX: WebRender newsletter #40

WebRender is a GPU-based 2D rendering engine for the web, written in Rust. It currently powers Mozilla’s research web browser Servo and is on its way to becoming Firefox’s rendering engine.

Notable WebRender and Gecko changes

  • Kats made improvements to the continuous integration on Mac.
  • Kvark fixed a crash.
  • Kvark added a way to dump the state of the frame builder for debugging.
  • Kvark made transform flattening operate at preserve-3d context boundaries.
  • Kvark enabled non-screen-space rasterization of plane-splits.
  • Kvark fixed seams between image tiles.
  • Glenn fixed a bug with border-style: double where the border widths are exactly 1 pixel.
  • Glenn made some improvements to pixel snapping.
  • Glenn added some debugging infrastructure for pixel snapping.
  • Glenn tidied up some code and added a few optimizations.
  • Nical fixed a rendering bug with shadows and blurs causing them to flicker in some cases.
  • Nical simplified the code that manages the lifetime of image and blob image handles on the content process.
  • Nical added a test.
  • Sotaro enabled mochitest-chrome with WebRender in the CI.
  • Sotaro improved scrolling smoothness when using direct composition.
  • Sotaro fixed a window creation failure when using WebRender with Wayland.
  • Emilio improved background-clip: text invalidation.

Blocker bugs countdown

Only 0 P2 bugs and 4 P3 bugs left (two of which have fixes up for review)!

Enabling WebRender in Firefox Nightly

The best place to report bugs related to WebRender in Firefox is the Graphics :: WebRender component in bugzilla.

Note that it is possible to log in with a github account.

Mozilla Open Policy & Advocacy Blog: Mozilla statement on the conclusion of EU copyright directive ‘trialogue’ negotiations

Yesterday the EU institutions concluded ‘trialogue’ negotiations on the EU Copyright directive, a procedural step that makes the final adoption of the directive a near certainty.

Here’s a statement from Raegan MacDonald, Mozilla’s Head of EU Public Policy –

The Copyright agreement gives the green light to new rules that will compel online services to implement blanket upload filters, with an overly complex and limited SME carve out that will be unworkable in practice.  At the same time, lawmakers have forced through a new ancillary copyright for press publishers, a regressive and disproven measure that will undermine access to knowledge and the sharing of information online.

The legal uncertainty that will be generated by these complex rules means that only the largest, most established platforms will be able to fully comply and thrive in such a restricted online environment.

With this development, the EU institutions have squandered the opportunity of a generation to bring European copyright law into the 21st century.  At a time of such concern about web centralisation and the ability of small European companies to compete in the digital marketplace, these new rules will serve to entrench the incumbents.

We recognise the efforts of many Member States and MEPs who laboured to find workable solutions that would have rectified some of the gravest shortcomings in the proposal. Unfortunately the majority of their progressive compromises were rejected.

The file is expected to be adopted officially in a final European Parliament vote in the coming weeks. We’re continuously working with our allies in the Parliament and the broader community to explore any and every opportunity to limit the potential damage of this outcome.

The post Mozilla statement on the conclusion of EU copyright directive ‘trialogue’ negotiations appeared first on Open Policy & Advocacy.

Cameron Kaiser: So long, Opportunity rover

It's time to say goodbye to another PowerPC in space, this time the Opportunity rover, also known as the Mars Exploration Rover B (or MER-1). Finally declared at end of mission today after 5,352 Mars solar days when NASA couldn't re-establish contact, it had been apparently knocked off-line by a dust storm and was unable to restart. Originally intended for a 90 Mars solar day mission, its mission became almost 60 times longer than anticipated and it traveled nearly 30 miles on the surface in total. Spirit, or MER-2, its sister unit, had previously reached end of mission in 2010.

Both Opportunity and Spirit were powered by the 20MHz BAE RAD6000, a radiation-hardened version of the original IBM POWER1 RISC Single Chip CPU and the indirect ancestor of the PowerPC 601. Many PowerPC-based spacecraft are still in operation, both with the original RAD6000 and its successor the RAD750, a radiation-hardened version of the G3.

Meanwhile, the Curiosity rover, which is running a pair of RAD750s (one main and one backup, plus two SPARC accessory CPUs), is still in operation at 2,319 Mars solar days and ticking. There is also the 2001 Mars Odyssey orbiter, which is still circling the planet with its own RAD6000 and is expected to continue operations until 2025. Curiosity's design is likely to be reused for the Mars 2020 rover, meaning possibly even more PowerPC design will be exploring the cosmos in the very near future.

Dave Townsend: Welcoming a new Firefox/Toolkit peer

Please join me in welcoming Bianca Danforth to the set of peers blessed with reviewing patches to Firefox and Toolkit. She’s been doing great work making it easy to test experiment extensions, and so it’s time for her to level up.

Mozilla Open Policy & Advocacy Blog: Mozilla Foundation fellow weighs in on flawed EU Terrorist Content regulation

As we’ve noted previously, the EU’s proposed Terrorist Content regulation would seriously undermine internet health in Europe, by forcing companies to aggressively suppress user speech with limited due process and user rights safeguards. Yet equally concerning is the fact that this proposal is likely to achieve little in terms of reducing the actual terrorism threat or the phenomenon of radicalisation in Europe. Here, Mozilla Foundation Tech Policy fellow and community security expert Stefania Koskova* unpacks why, and proposes an alternative approach for EU lawmakers.

With the proposed Terrorist Content regulation, the EU has the opportunity to set a global standard in how to effectively address what is a pressing public policy concern. To be successful, harmful and illegal content policies must carefully and meaningfully balance the objectives of national security, internet-enabled economic growth and human rights. Content policies addressing national security threats should reflect how internet content relates to ‘offline’ harm and should provide sufficient guidance on how to comprehensively and responsibly reduce it in parallel with other interventions. Unfortunately, the Commission’s proposal falls well short in this regard.

Key shortcomings:

  • Flawed definitions: In its current form there is a considerable lack of clarity and specificity in the definition of ‘terrorist content’, which creates unnecessary confusion between ‘terrorist content’ and terrorist offences. Biased application, including through the association of terrorism with certain national or religious minorities and certain ideologies, can lead to serious harm and real-world consequences. This in turn can contribute to further polarisation and radicalisation.
  • Insufficient content assessment: Within the proposal there is no standardisation of the ‘terrorist content’ assessment procedure from a risk perspective, and no standardisation of the evidentiary requirements that inform content removal decisions by government authorities or online services. Member States and hosting service providers are asked to evaluate the terrorist risk associated with specific online content, without clear or precise assessment criteria.
  • Weak harm reduction model: Without a clear understanding of the impact of ‘terrorist content’ on the radicalisation process in specific contexts and circumstances, it seems inadvisable and contrary to the goal of evidence-based policymaking to assume that removal, blocking, or filtering will reduce radicalisation and prevent terrorism. Further, potential adverse effects of removal, blocking, and filtering, such as fueling grievances of those susceptible to terrorist propaganda, are not considered.

As such, the European Commission’s draft proposal in its current form creates additional risks with only vaguely defined benefits to countering radicalisation and preventing terrorism. To ensure the most negative outcomes are avoided, the following amendments to the proposal should be made as a matter of urgency:

  • Improving definition of terrorist content: The definition of ‘terrorist content’ should be clarified such that it depends on illegality and intentionality. This is essential to protect the public interest speech of journalists, human rights defenders, and other witnesses and archivists of terrorist atrocities.
  • Disclosing ‘what counts’ as terrorism through transparency reporting and monitoring: The proposal should ensure that Member States and hosting platforms are obliged to report on how much illegal terrorist content is removed, blocked or filtered under the regulation – broken down by category of terrorism (incl. nationalist-separatist, right-wing, left-wing, etc.) and the extent to which content decision and action was linked to law enforcement investigations. With perceptions of terrorist threat in the EU diverging across countries and across the political spectrum, this can safeguard against intentional or unintentional bias in implementation.
  • Assessing security risks: In addition to being grounded in a legal assessment, content control actions taken by competent authorities and companies should be strategic – i.e. be based on an assessment of the content’s danger to public safety and the likelihood that it will contribute to the commission of terrorist acts. This risk assessment should also take into account the likely negative repercussions arising from content removal/blocking/filtering.
  • Focusing on impact: The proposal should require or ensure that all content policy measures are closely coordinated and coincide with the deployment of strategic radicalisation counter-narratives, and broader terrorism prevention and rehabilitation programmes.

The above recommendations address shortcomings in the proposal in the terrorism prevention context. Additionally, however, there remains the contested issue of 60-minute content takedowns and mandated proactive filtering, both of which are serious threats to internet health. There is an opportunity, through the parliamentary procedure, to address these concerns. Constructive feedback, including specific proposals that can significantly improve the current text, has been put forward by EU Parliament Committees, civil society and industry representatives.

The stakes are high. With this proposal, the EU can create a benchmark for how democratic societies should address harmful and illegal online content without compromising their own values. It is imperative that lawmakers take the opportunity.

*Stefania Koskova is a Mozilla Foundation Tech Policy fellow and a counter-radicalisation practitioner. Learn more about her Mozilla Foundation fellowship here.

The post Mozilla Foundation fellow weighs in on flawed EU Terrorist Content regulation appeared first on Open Policy & Advocacy.

Will Kahn-Greene: Socorro: January 2019 happenings

Summary

Socorro is the crash ingestion pipeline for Mozilla's products like Firefox. When Firefox crashes, the crash reporter collects data about the crash, generates a crash report, and submits that report to Socorro. Socorro saves the crash report, processes it, and provides an interface for aggregating, searching, and looking at crash reports.

January was a good month. This blog post summarizes activities.

Read more… (5 mins to read)

The Mozilla Blog: Facebook Answers Mozilla’s Call to Deliver Open Ad API Ahead of EU Election

After calls for increased transparency and accountability from Mozilla and partners in civil society, Facebook announced it would open its Ad Archive API next month. While the details are still limited, this is an important first step to increase transparency of political advertising and help prevent abuse during upcoming elections.

Facebook’s commitment to make the API publicly available could provide researchers, journalists and other organizations the data necessary to build tools that give people a behind the scenes look at how and why political advertisers target them. It is now important that Facebook follows through on these statements and delivers an open API that gives the public the access it deserves.

The decision by Facebook comes after months of engagement by the Mozilla Corporation through industry working groups and government initiatives and most recently, an advocacy campaign led by the Mozilla Foundation.

This week, the Mozilla Foundation was joined by a coalition of technologists, human rights defenders, academics, and journalists demanding Facebook take action and deliver on the commitments made to put users first and deliver increased transparency.

“In the short term, Facebook needs to be vigilant about promoting transparency ahead of and during the EU Parliamentary elections,” said Ashley Boyd, Mozilla’s VP of Advocacy. “Their action — or inaction — can affect elections across more than two dozen countries. In the long term, Facebook needs to sincerely assess the role its technology and policies can play in spreading disinformation and eroding privacy.”

And in January, Mozilla penned a letter to the European Commission underscoring the importance of a publicly available API. Without the data, Mozilla and other organizations are unable to deliver products designed to pull back the curtain on political advertisements.

“Industry cannot ignore its potential to either strengthen or undermine the democratic process,” said Alan Davidson, Mozilla’s VP of Global Policy, Trust and Security. “Transparency alone won’t solve misinformation problems or election hacking, but it’s a critical first step. With real transparency, we can give people more accurate information and powerful tools to make informed decisions in their lives.”

This is not the first time Mozilla has called on the industry to prioritize user transparency and choice. In the wake of the Cambridge Analytica news, the Mozilla Foundation rallied tens of thousands of internet users to hold Facebook accountable for its post-scandal promises. And Mozilla Corporation took action with a pause on advertising our products on Facebook and provided users with Facebook Container for Firefox, a product that keeps Facebook from tracking people around the web when they aren’t on the platform.

While the announcement from Facebook indicates a move towards transparency, it is critical the company follows through and delivers not only on this commitment but the other promises also made to European lawmakers and voters.

The post Facebook Answers Mozilla’s Call to Deliver Open Ad API Ahead of EU Election appeared first on The Mozilla Blog.

Rabimba: ARCore and ARKit, What is under the hood: SLAM (Part 2)

In our last blog post (part 1), we took a look at how algorithms detect keypoints in camera images. These form the basis of our world tracking and environment recognition. But for Mixed Reality, that alone is not enough. We have to be able to calculate the device's 3D position in the real world, which is often derived from the spatial distances between the device and multiple keypoints. This process is called Simultaneous Localization and Mapping (SLAM), and it is responsible for all the world tracking we see in ARCore/ARKit.

What we will cover today:

  • How ARCore and ARKit do their SLAM/Visual Inertial Odometry
  • Whether we can D.I.Y. our own SLAM with reasonable accuracy to understand the process better

Sensing the world: as a computer

When we start any augmented reality application on mobile or elsewhere, the first thing it tries to do is detect a plane. When you first start any MR app built on ARKit or ARCore, the system doesn't know anything about the surroundings. It starts processing data from the camera and pairs it up with data from other sensors.
Once it has that data, it tries to do the following two things:
  1. Build a point cloud mesh of the environment by building a map
  2. Assign a relative position of the device within that perceived environment
From our previous article, we know it's not always easy to build this map from unique feature points and to maintain it. However, it becomes much easier in certain scenarios if you have the freedom to place beacons at known locations, something we did at MozFest 2016 when Mozilla still had the Magnets project, which we used as our beacons. A similar approach is used in a few museums to provide turn-by-turn navigation to points of interest as their indoor navigation system. Augmented Reality systems, however, don't have this luxury.

A little saga about relationships

We will start with a map... about relationships. Or rather, "A Stochastic Map For Uncertain Spatial Relationships" by Smith et al.
In the real world, you have precise and correct information about the exact location of every object. In the AR world, however, that is not the case. To understand the problem, let's assume we are in an empty room and our mobile has detected a reliable unique anchor (A) (or a stationary beacon), and our position is at (B).
In a perfect situation, we know the distance between A and B, and if we want to move towards C we can infer exactly how we need to move.

Unfortunately, in the world of AR and SLAM we need to work with imprecise knowledge about the position of A and C. This results in uncertainties and the need to continually correct the locations. 

The points have a relative spatial relationship with each other, and that allows us to get a probability distribution of every possible position. Some of the common methods to deal with the uncertainty and correct positioning errors are the Kalman Filter (this is what we used at MozFest), Maximum a Posteriori Estimation, and Bundle Adjustment.
Since these estimations are not perfect, every new sensor update also has to update the estimation model.
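
To make that correction step concrete, here is a tiny, illustrative one-dimensional Kalman-style measurement update in JavaScript. Every name and number below is invented for illustration; a real SLAM backend runs this jointly over full 6-DoF poses and many landmarks, and it also includes a prediction step driven by the motion model.

    // Fuse a predicted position with a noisy measurement, weighting each by its
    // uncertainty (variance). Purely illustrative values and names.
    function kalmanUpdate(estimate, variance, measurement, measurementVariance) {
        // Kalman gain: how much to trust the new measurement vs. the current estimate
        const gain = variance / (variance + measurementVariance);
        const newEstimate = estimate + gain * (measurement - estimate);
        const newVariance = (1 - gain) * variance;
        return { estimate: newEstimate, variance: newVariance };
    }

    // Start uncertain about where anchor A is, then refine with each new reading.
    let anchorA = { estimate: 2.0, variance: 1.0 };
    for (const reading of [2.3, 1.9, 2.1]) {
        anchorA = kalmanUpdate(anchorA.estimate, anchorA.variance, reading, 0.5);
    }
    console.log(anchorA); // the estimate settles near 2.1 and the variance shrinks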

Aligning the Virtual World

To map our surroundings reliably in Augmented Reality, we need to continually update our measurement data. The assumption is that every sensory input we get contains some inaccuracy. We can take help from Lu and Milios in their paper "Globally Consistent Range Scan Alignment for Environment Mapping" to understand the issue.
Image credits: Lu, F., & Milios, E. (1997). Globally consistent range scan alignment for environment mapping
Here in figure a, we see how going from position P1...Pn accumulates small measurement errors over time until the resulting environment map is wrong. But when we align the scans in figure b, the result is considerably improved. To do that, the algorithm keeps track of all local frame data and a network of spatial relations among them.
A common problem at this point is how much data to store to keep doing the above correctly. Often, to reduce complexity, the algorithm reduces the number of keyframes it stores.

Let's build the map a.k.a SLAM

To make Mixed Reality feasible, SLAM has to handle the following challenges:
  1. Monocular Camera input
  2. Real-time
  3. Drift

Skeleton of SLAM

How do we deal with these in a Mixed Reality scene?
We start with the principles laid out by Cadena et al. in their paper "Past, Present, and Future of Simultaneous Localization and Mapping: Toward the Robust-Perception Age". From that paper, we can see what the standard architecture of SLAM looks like.
Image Credit: Cadena et al
If we deconstruct the diagram, we get the following four modules:
  1. Sensor: On mobiles this is primarily the camera, augmented by the accelerometer, the gyroscope and, depending on the device, a light sensor. Apart from Project Tango enabled phones, no Android device had a depth sensor.
  2. Front End: The feature extraction and anchor identification happen here, as we described in the previous post.
  3. Back End: Does error correction to compensate for drift, and also takes care of localizing the pose model and the overall geometric reconstruction.
  4. SLAM estimate: This is the result containing the tracked features and locations.
To better understand it, we can take a look at one of the open source implementations of SLAM.

D.I.Y SLAM: Taking a peek at ORB-SLAM

To try our hands at understanding how SLAM works, let's take a look at a recent algorithm by Montiel et al. called ORB-SLAM. We will use the code of its successor, ORB-SLAM2. The algorithm is available on GitHub under GPLv3, and I found this excellent blog post which goes into nifty details on how we can run ORB-SLAM2 on our computer. I highly encourage you to read that to avoid encountering problems during setup.
His talk is also available here and is very interesting.


ORB-SLAM uses just the camera and doesn't utilize any gyroscope or accelerometer inputs, but the result is still impressive.
  1. Detecting Features: ORB-SLAM, as the name suggests, uses ORB to find keypoints and generate binary descriptors. Internally, ORB is based on the same method of finding keypoints and generating binary descriptors that we discussed in part 1 for BRISK. In short, ORB-SLAM analyzes each picture for keypoints and then stores them, with a reference to their keyframe, in a map. These are used later to correct historical data.
  2. Keypoint > 3D landmark: The algorithm looks for new frames from the camera, and when it finds one it performs keypoint detection on it. The keypoints are then matched against the previous frame to get a spatial distance. This gives a good idea of where the same keypoints can be found again in a new frame, and it provides the initial camera pose estimation.
  3. Refine Camera Pose: The algorithm repeats step 2 by projecting the estimated initial camera pose into the next camera frame and searching for more keypoints that correspond to the ones it already knows. If it is certain it can find them, it uses the additional data to refine the pose and correct any spatial measurement errors.
Green squares = tracked keypoints, blue boxes = keyframes, red box = camera view, red points = local map points.
Image credits: ORB-SLAM video by Raúl Mur Artal


Returning home a.k.a Loop Closing

One of the goals of MR is that when you walk back to your starting point, the system should understand you have returned. The inherent inefficiency and the accumulated error make this hard to predict accurately. In SLAM this is called loop closing. ORB-SLAM handles it by defining a threshold: it tries to match the keypoints in the current frame against previously detected keyframes, and if the matching percentage against one of them exceeds the threshold, it knows you have returned.
Loop Closing performed by the ORB-SLAM algorithm.
Image credits: Mur-Artal, R., Montiel
To account for the accumulated error, the algorithm has to propagate a coordinate correction throughout the whole map with the updated knowledge once it knows the loop should be closed.
The reconstructed map before (up) and after (down) loop closure.
Image credits: Mur-Artal, R., Montiel
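
To make the idea of a matching threshold concrete, here is a toy JavaScript sketch of a loop-closure check. It is deliberately simplified: the descriptors are short integers, the matching is brute force, and the thresholds are invented, whereas ORB-SLAM uses 256-bit ORB descriptors, a bag-of-words database, and geometric verification before accepting a loop closure.

    // Hamming distance between two binary descriptors (toy 32-bit integers here).
    function hammingDistance(a, b) {
        let x = a ^ b, count = 0;
        while (x) { count += x & 1; x >>>= 1; }
        return count;
    }

    // Fraction of the current frame's descriptors that match a stored keyframe.
    function matchRatio(frameDescriptors, keyframeDescriptors, maxDistance = 8) {
        let matched = 0;
        for (const d of frameDescriptors) {
            if (keyframeDescriptors.some(k => hammingDistance(d, k) <= maxDistance)) {
                matched++;
            }
        }
        return matched / frameDescriptors.length;
    }

    // Declare a loop closure when enough descriptors match a previously seen keyframe.
    function isLoopClosure(frameDescriptors, keyframeDescriptors, threshold = 0.6) {
        return matchRatio(frameDescriptors, keyframeDescriptors) >= threshold;
    }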

SLAM today:

Google: ARCore's documentation describes its tracking method as "concurrent odometry and mapping", which is essentially SLAM + sensor inputs. Their patent also indicates they have included inertial sensors in the design.

Apple: Apple is also using Visual Inertial Odometry, technology it acquired by buying Metaio and FlyBy. I learned a lot about what they are doing by having a look at this video from WWDC18.

Additional Read: I found the paper "A comparative analysis of tightly-coupled monocular, binocular, and stereo VINS" to be a nice read on how different IMUs are used and compared. IMUs are the devices that provide all this sensory data to our devices today, and their calibration is supposed to be crazy difficult.

I hope this post along with the previous one provides a better understanding of how our world is tracked inside ARCore/ARKit.

In a few days, I will start another blog series on how to build Mixed Reality applications and use experimental as well as some stable WebXR api's to build Mixed Reality application demos.
As always, feedback is welcome.

References/Interesting Reads:

Mozilla Future Releases BlogMaking the Building of Firefox Faster for You with Clever-Commit from Ubisoft

Firefox fights for people online: for control and choice, for privacy, for safety. We do this because it is our mission to keep the web open and accessible to all. No other tech company has people’s back like we do.

Part of keeping you covered is ensuring that our Firefox browser and the other tools and services we offer are running at top performance. When we make an update or add a new feature, the experience should be as seamless and smooth as possible for the user. That’s why Mozilla just partnered with Ubisoft to start using Clever-Commit, an Artificial Intelligence coding assistant developed by Ubisoft La Forge that will make the Firefox code-writing process faster and more efficient. Thanks to Clever-Commit, Firefox users will get to use even more stable versions of Firefox and have even better browsing experiences.

We don’t spend a ton of time regaling our users with the ins-and-outs of how we build our products because the most important thing is making sure you have the best experience when you’re online with us. But building a browser is no small feat. A web browser plays audio and video, manages various network protocols, secures communications using advanced cryptographic algorithms, and handles content running in parallel across multiple processes, all to render the content that people want to see on the websites they visit.

And underneath all of this is a complex body of code that includes millions of lines written in various programming languages: JavaScript, C++, Rust. The code is regularly edited, released and updated onto Firefox users’ machines. Every Firefox release is an investment, with an average of 8,000 software edits loaded into the browser’s code by hundreds of Firefox staff and contributors for each release. It has a huge impact, touching hundreds of millions of internet users.

With a new release every 6 to 8 weeks, making sure the code we ship is as clean as possible is crucial to the performance people experience with Firefox. The Firefox engineering team will start using Clever-Commit in its code-writing, testing and release process. We will initially use the tool during the code review phase, and if conclusive, at other stages of the code-writing process, in particular during automation. We expect to save hundreds of hours of bug riskiness analysis and detection. Ultimately, the integration of Clever-Commit into the full Firefox developer workflow could help catch up to 3 to 4 out of 5 bugs before they are introduced into the code.

By combining data from the bug tracking system and the version control system (aka changes in the code base), Clever-Commit uses artificial intelligence to detect patterns of programming mistakes based on the history of the development of the software. This allows us to address bugs at a stage when fixing a bug is a lot cheaper and less time-consuming than it is upon release.

Mozilla will contribute to the development of Clever-Commit by providing programming language expertise in Rust, C++ and JavaScript, as well as expertise in C++ code analysis and analysis of bug tracking systems.

The post Making the Building of Firefox Faster for You with Clever-Commit from Ubisoft appeared first on Future Releases.

Mozilla VR BlogJingle Smash: Choosing a Physics Engine

Jingle Smash: Choosing a Physics Engine

This is part 2 of my series on how I built Jingle Smash, a block smashing WebVR game .

The key to a physics based game like Jingle Smash is of course the physics engine. In the JavaScript world there are many to choose from. My requirements were for fully 3D collision simulation, working with ThreeJS, and being fairly easy to use. This narrowed it down to CannonJS, AmmoJS, and Oimo.js. I chose to use the CannonJS engine because AmmoJS was a compiled port of a C++ library and I worried it would be harder to debug, and Oimo appeared to be abandoned (though there was a recent commit so maybe not?).

CannonJS

CannonJS is not well documented in terms of tutorials, but it does have quite a bit of demo code and I was able to figure it out. The basic usage is quite simple. You create a Body object for everything in your scene that you want to simulate and add these to a World object. On each frame you call world.step(), then read back the positions and orientations of the simulated bodies and apply them to the ThreeJS objects on screen.
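
To make that loop concrete, here is a minimal sketch of what stepping the world and copying the results back might look like. This is my own simplified (non-VR) illustration rather than the game's actual code, and it assumes blocks, renderer, scene, and camera already exist.

const fixedTimeStep = 1/60 // seconds

function animate() {
    requestAnimationFrame(animate)
    world.step(fixedTimeStep)
    // copy the simulated transforms back onto the ThreeJS meshes
    blocks.forEach(block => {
        block.obj.position.copy(block.body.position)
        block.obj.quaternion.copy(block.body.quaternion)
    })
    renderer.render(scene, camera)
}
animate()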

While working on the game I started building an editor for positioning blocks, changing their physical properties, testing the level, and resetting them. Combined with physics this means a whole lot of syncing data back and forth between the Cannon and ThreeJS sides. In the end I created a Block abstraction which holds the single source of truth and keeps the other objects updated. The blocks are managed entirely from within the BlockService.js class so that all of this stuff would be completely isolated from the game graphics and UI.

Physics Bodies

When a Block is created or modified it regenerates both the ThreeJS objects and the Cannon objects. Since ThreeJS is documented everywhere I'll only show the Cannon side.

let type = CANNON.Body.DYNAMIC
if(this.physicsType === BLOCK_TYPES.WALL) {
    type = CANNON.Body.KINEMATIC
}

this.body = new CANNON.Body({
    mass: 1,//kg
    type: type,
    position: new CANNON.Vec3(this.position.x,this.position.y,this.position.z),
    shape: new CANNON.Box(new CANNON.Vec3(this.width/2,this.height/2,this.depth/2)),
    material: wallMaterial,
})
this.body.quaternion.setFromEuler(this.rotation.x,this.rotation.y,this.rotation.z,'XYZ')
this.body.jtype = this.physicsType
this.body.userData = {}
this.body.userData.block = this
world.addBody(this.body)

Each body has a mass, type, position, quaternion, and shape.

For mass I’ve always used 1kg. This works well enough but if I ever update the game in the future I’ll make the mass configurable for each block. This would enable more variety in the levels.

The type is either dynamic or kinematic. Dynamic means the body can move and tumble in all directions. A kinematic body is one that does not move but other blocks can hit and bounce against it.

The shape is the actual shape of the body. For blocks this is a box. For the ball that you throw I used a sphere. It is also possible to create interactive meshes but I didn’t use them for this game.

An important note about Boxes: in ThreeJS the BoxGeometry takes the full width, height, and depth in the constructor. In CannonJS you use the extent from the center, which is half of the full width, height, and depth. I didn’t realize this when I started, only to discover my cubes wouldn’t fall all the way to the ground. :)
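
For example (illustrative values only), the same 1 x 2 x 3 box would be created like this in each library:

// ThreeJS wants the full dimensions
const geometry = new THREE.BoxGeometry(1, 2, 3)
// CannonJS wants the half extents measured from the center
const shape = new CANNON.Box(new CANNON.Vec3(0.5, 1, 1.5))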

The position and quaternion (orientation) properties use the same values in the same order as ThreeJS. The material refers to how that block will bounce against others. In my game I use only two materials: wall and ball. For each pair of materials you will create a contact material which defines the friction and restitution (bounciness) to use when that particular pair collides.

const wallMaterial = new CANNON.Material()
// …
const ballMaterial = new CANNON.Material()
// …
world.addContactMaterial(new CANNON.ContactMaterial(
    wallMaterial, ballMaterial,
    {
        friction: this.wallFriction,
        restitution: this.wallRestitution
    }
))

Gravity

All of these bodies are added to a World object with a hard coded gravity property set to match Earth gravity (9.8m/s^2), though individual levels may override this. The last three levels of the current game have gravity set to 0 for a different play experience.

const world = new CANNON.World();
world.gravity.set(0, -9.82, 0);

Once the physics engine is set up and simulating the objects we need to update the on screen graphics after every world step. This is done by just copying the properties out of the body and back to the ThreeJS object.

this.obj.position.copy(this.body.position)
this.obj.quaternion.copy(this.body.quaternion)

Collision Detection

There is one more thing we need: collisions. The engine handles colliding all of the boxes and making them fall over, but the goal of the game is that the player must knock over all of the crystal boxes to complete the level. This means I have to define what knock over means. At first I just checked if a block had moved from its original orientation, but this proved tricky. Sometimes a box would be very gently knocked and tip slightly, triggering a ‘knock over’ event. Other times you could smash into a block at high speed but it wouldn’t tip over because there was a wall behind it.

Instead I added a collision handler so that my code would be called whenever two objects collide. The collision event includes a method to get the velocity at the impact. This allows me to ignore any collisions that aren’t strong enough.

You can see this in player.html

function handleCollision(e) {
    if(game.blockService.ignore_collisions) return

    //ignore tiny collisions
    if(Math.abs(e.contact.getImpactVelocityAlongNormal()) < 1.0) return

    //when ball hits moving block,
    if(e.body.jtype === BLOCK_TYPES.BALL) {
        if( e.target.jtype === BLOCK_TYPES.WALL) {
            game.audioService.play('click')
        }

        if (e.target.jtype === BLOCK_TYPES.BLOCK) {
            //hit a block, just make the thunk sound
            game.audioService.play('click')
        }
    }

    //if crystal hits anything and the impact was strong enough
    if(e.body.jtype === BLOCK_TYPES.CRYSTAL || e.target.jtype === BLOCK_TYPES.CRYSTAL) {
        if(Math.abs(e.contact.getImpactVelocityAlongNormal()) >= 2.0) {
            return destroyCrystal(e.target)
        }
    }
    // console.log(`collision: body ${e.body.jtype} target ${e.target.jtype}`)
}

The collision event handler was also the perfect place to add sound effects for when objects hit each other. Since the event includes which objects were involved I can use different sounds for different objects, like the crashing glass sound for the crystal blocks.

Firing the ball is similar to creating the block bodies except that it needs an initial velocity based on how much force the player slingshotted the ball with. If you don’t specify a velocity to the Body constructor then it will use a default of 0.

fireBall(pos, dir, strength) {
    this.group.worldToLocal(pos)
    dir.normalize()
    dir.multiplyScalar(strength*30)
    const ball = this.generateBallMesh(this.ballRadius,this.ballType)
    ball.castShadow = true
    ball.position.copy(pos)
    const sphereBody = new CANNON.Body({
        mass: this.ballMass,
        shape: new CANNON.Sphere(this.ballRadius),
        position: new CANNON.Vec3(pos.x, pos.y, pos.z),
        velocity: new CANNON.Vec3(dir.x,dir.y,dir.z),
        material: ballMaterial,
    })
    sphereBody.jtype = BLOCK_TYPES.BALL
    ball.userData.body = sphereBody
    this.addBall(ball)
    return ball
}

Next Steps

Overall CannonJS worked pretty well. I would like it to be faster as it costs me about 10fps to run, but other things in the game had a bigger impact on performance. If I ever revisit this game I will try to move the physics calculations to a worker thread, as well as redo the syncing code. I’m sure there is a better way to sync objects quickly. Perhaps JS Proxies would help. I would also move the graphics & styling code outside, so that the BlockService can really focus just on physics.
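
As a rough idea of what that might look like (this is speculative on my part, not code from the game), a JS Proxy could flag a block as dirty whenever one of its properties changes, so only changed objects get re-synced each frame. syncToThree below is a hypothetical helper.

function trackChanges(block) {
    return new Proxy(block, {
        set(target, prop, value) {
            target[prop] = value
            target.dirty = true // mark for the next sync pass
            return true
        }
    })
}

// later, only sync the blocks that actually changed
// blocks.filter(b => b.dirty).forEach(b => { syncToThree(b); b.dirty = false })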

While there are some more powerful solutions coming with WASM, today I definitely recommend using CannonJS for the physics in your WebVR games. The ease of working with the API (despite being under documented) meant I could spend more time on the game and less time worrying about math.

The Mozilla BlogRetailers: All We Want for Valentine’s Day is Basic Security

Mozilla and our allies are asking four major retailers to adopt our Minimum Security Guidelines

 

Today, Mozilla, Consumers International, the Internet Society, and eight other organizations are urging Amazon, Target, Walmart, and Best Buy to stop selling insecure connected devices.

Why? As the Internet of Things expands, a troubling pattern is emerging:

[1] Company x makes a “smart” product — like connected stuffed animals — without proper privacy or security features

[2] Major retailers sell that insecure product widely

[3] The product gets hacked, and consumers are the ultimate loser

This has been the case with smart dolls, webcams, doorbells, and countless other devices. And the consequences can be life threatening: “Internet-connected locks, speakers, thermostats, lights and cameras that have been marketed as the newest conveniences are now also being used as a means for harassment, monitoring, revenge and control,” the New York Times reported last year. Compounding this: It is estimated that by 2020, 10 billion IoT products will be active.

Last year, in an effort to make connected devices on the market safer for consumers, Mozilla, the Internet Society, and Consumers International published our Minimum Security Guidelines: the five basic features we believe all connected devices should have. They include encrypted communications; automatic updates; strong password requirements; vulnerability management; and an accessible privacy policy.

Now, we’re calling on four major retailers to publicly endorse these guidelines, and also commit to vetting all connected products they sell against these guidelines. Mozilla, Consumers International, and the Internet Society have sent a sign-on letter to Amazon, Target, Walmart, and Best Buy.

The letter is also signed by 18 Million Rising, Center for Democracy and Technology, ColorOfChange, Consumer Federation of America, Common Sense Media, Hollaback, Open Media & Information Companies Initiative, and Story of Stuff.

Currently, there is no shortage of insecure products on shelves. In our annual holiday buyers guide, which ranks popular devices’ privacy and security features, about half the products failed to meet our Minimum Security Guidelines. And in the Valentine’s Day buyers guide we released last week, nine out of 18 products failed.

Why are we targeting retailers, and not the companies themselves? Mozilla can and does speak with the companies behind these devices. But by talking with retailers, we believe we can have an outsized impact. Retailers don’t want their brands associated with insecure goods. And if retailers drop a company’s product, that company will be compelled to improve its product’s privacy and security features.

We know this approach works. Last year, Mozilla called on Target and Walmart to stop selling CloudPets, an easily-hackable smart toy. Target and Walmart listened, and stopped selling the toys.

In the short-term, we can get the most insecure devices off shelves. In the long-term, we can fuel a movement for a more secure, privacy-centric Internet of Things.

Read the full letter, here or below.


Dear Target, Walmart, Best Buy and Amazon, 

The advent of new connected consumer products offers many benefits. However, as you are aware, there are also serious concerns regarding standards of privacy and security with these products. These require urgent attention if we are to maintain consumer trust in this market.

It is estimated that by 2020, 10 billion IoT products will be active. The majority of these will be in the hands of consumers. Given the enormous growth of this space, and because so many of these products are entrusted with private information and conversations, it is incredibly important that we all work together to ensure that internet-enabled devices enhance consumers’ trust.

Cloudpets illustrated the problem, however we continue to see connected devices that fail to meet the basic privacy and security thresholds. We are especially concerned about how these issues impact children, in the case of connected toys and other devices that children interact with. That’s why we’re asking you to publicly endorse these minimum security and privacy guidelines, and commit publicly to use them to vet any products your company sells to consumers. While many products can and should be expected to meet a high set of privacy and security standards, these minimum requirements are a strong start that every reputable consumer company must be expected to meet. These minimum guidelines require all IoT devices to have:

1) Encrypted communications

The product must use encryption for all of its network communications functions and capabilities. This ensures that all communications are not eavesdropped or modified in transit.

2) Security updates

The product must support automatic updates for a reasonable period after sale, and be enabled by default. This ensures that when a vulnerability is known, the vendor can make security updates available for consumers, which are verified (using some form of cryptography) and then installed seamlessly. Updates must not make the product unavailable for an extended period.

3) Strong passwords

If the product uses passwords for remote authentication, it must require that strong passwords are used, including having password strength requirements. Any non-unique default passwords must also be reset as part of the device’s initial setup. This helps protect the device from vulnerability to guessable password attacks, which could result in device compromise.

4) Vulnerability management

The vendor must have a system in place to manage vulnerabilities in the product. This must also include a point of contact for reporting vulnerabilities and a vulnerability handling process internally to fix them once reported. This ensures that vendors are actively managing vulnerabilities throughout the product’s lifecycle.

5) Privacy practices

The product must have a privacy policy that is easily accessible, written in language that is easily understood and appropriate for the person using the device or service at the point of sale. At a minimum, users should be notified about substantive changes to the policy. If data is being collected, transmitted or shared for marketing purposes, that should be clear to users and, in line with the EU’s General Data Protection Regulation (GDPR), there should be a way to opt-out of such practices. Users should also have a way to delete their data and account. Additionally, like in GDPR, this should include a policy setting standard retention periods wherever possible.

We’ve seen headline after headline about privacy and security failings in the IoT space. And it is often the same mistakes that have led to people’s private moments, conversations, and information being compromised. Given the value and trust that consumers place in your company, you have a uniquely important role in addressing this problem and helping to build a more secure, connected future. Consumers can and should be confident that, when they buy a device from you, that device will not compromise their privacy and security. Signing on to these minimum guidelines is the first step to turn the tide and build trust in this space.

Yours,

Mozilla, Internet Society, Consumer’s International, ColorOfChange, Open Media & Information Companies Initiative, Common Sense Media, Story of Stuff, Center for Democracy and Technology, Consumer Federation of America, 18 Million Rising, Hollaback

The post Retailers: All We Want for Valentine’s Day is Basic Security appeared first on The Mozilla Blog.

Hacks.Mozilla.OrgAnyone can create a virtual reality experience with this new WebVR starter kit from Mozilla and Glitch

Here at Mozilla, we are big fans of Glitch. In early 2017 we made the decision to host our A-Frame content on their platform. The decision was easy. Glitch makes it easy to explore and remix live code examples for WebVR.

We also love the people behind Glitch. They have created a culture and a community that is kind, encouraging, and champions creativity. We share their vision for a web that is creative, personal, and human. The ability to deliver immersive experiences through the browser opens a whole new avenue for creativity. It allows us to move beyond screens and keyboards. It is exciting, and new, and sometimes a bit weird (but in a good way).

Building a virtual reality experience may seem daunting, but it really isn’t. WebVR and frameworks like A-Frame make it really easy to get started. This is why we worked with Glitch to create a WebVR starter kit. It is a free, 5-part video course with interactive code examples that will teach you the fundamentals of WebVR using A-Frame. Our hope is that this starter kit will encourage anyone who has been on the fence about creating virtual reality experiences to dive in and get started.

Check out part one of the five-part series below. If you want more, I’d encourage you to check out the full starter kit here, or use the link at the bottom of this post.

 

In the Glitch viewer embedded below, you can see how to make a WebVR planetarium in just a few easy-to-follow steps. You learn interactively (and painlessly) by editing and remixing the working code in the viewer:

 


 

Ready to keep going? Click below to view the full series on Glitch.



The post Anyone can create a virtual reality experience with this new WebVR starter kit from Mozilla and Glitch appeared first on Mozilla Hacks - the Web developer blog.

This Week In RustThis Week in Rust 273

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is sysinfo, a system handler to get information and interact with processes. Thanks to GuillaumeGomez for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

236 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs

New RFCs

Upcoming Events

Online
Asia Pacific
Europe
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Once again, we have two quotes for the price of one:

I love Rust because it reduces bugs by targeting it’s biggest source… me.

ObliviousJD on Twitter

Say the same thing about seatbelts in a car. If you don’t plan to have accidents, why do you need seatbelts?

Car accidents, like mistakes in programming are a risk that has a likelihood that is non-zero. A seatbelt might be a little bit annoying when things go well, but much less so when they don’t. Rust is there to stop you in most cases when you try to accidentally shot yourself into the leg, unless you deliberately without knowing what you are doing while yelling “hold my beer” (unsafe). And contrary to popular belief even in unsafe blocks many of Rust’s safety guarantees hold, just not all.

Just like with the seatbelt, there will be always those that don’t wear one for their very subjective reasons (e.g. because of edge cases where a seatbelt could trap you in a burning car, or because it is not cool, or because they hate the feeling and think accidents only happen to people who can’t drive).

atoav on HN comparing Rust's safety guarantees with seat-belts.

Thanks to Kornel and pitdicker for the suggestion!

Please submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and Flavsditz.

Discuss on r/rust.

The Mozilla BlogOpen Letter: Facebook, Do Your Part Against Disinformation

Mozilla, Access Now, Reporters Without Borders, and 35 other organizations have published an open letter to Facebook.

Our ask: make good on your promises to provide more transparency around political advertising ahead of the 2019 EU Parliamentary Elections

 

Is Facebook making a sincere effort to be transparent about the content on its platform? Or, is the social media platform neglecting its promises?

Facebook promised European lawmakers and users it would increase the transparency of political advertising on the platform to prevent abuse during the elections. But in the very same breath, they took measures to block access to transparency tools that let users see how they are being targeted.

With the 2019 EU Parliamentary Elections on the horizon, it is vital that Facebook take action to address this problem. So today, Mozilla and 37 other organizations — including Access Now and Reporters Without Borders — are publishing an open letter to Facebook.

“We are writing you today as a group of technologists, human rights defenders, academics, journalists and Facebook users who are deeply concerned about the validity of Facebook’s promises to protect European users from targeted disinformation campaigns during the European Parliamentary elections,” the letter reads.

“Promises and press statements aren’t enough; instead, we’ll be watching for real action over the coming months and will be exploring ways to hold Facebook accountable if that action isn’t sufficient,” the letter continues.

Individuals may sign their name to the letter, as well. Sign here.

Read the full letter, here or below. The letter will also appear in the Thursday print edition of POLITICO Europe.

Lire cette lettre en français    

Diesen Brief auf Deutsch lesen

The letter urges Facebook to make good on its promise to EU lawmakers. Last year, Facebook signed the EU’s Code of Practice on disinformation and pledged to increase transparency around political advertising. But since then, Facebook has made political advertising more opaque, not more transparent. The company recently blocked access to third-party transparency tools.

Specifically, our open letter urges Facebook to:

  • Roll out a functional, open Ad Archive API that enables advanced research and development of tools that analyse political ads served to Facebook users in the EU

 

  • Ensure that all political advertisements are clearly distinguished from other content and are accompanied by key targeting criteria such as sponsor identity and amount spent on the platform in all EU countries

 

  • Cease all harassment of good faith researchers who are building tools to provide greater transparency into the advertising on Facebook’s platform.

To safeguard the integrity of the EU Parliament elections, Facebook must be part of the solution. Users and voters across the EU have the right to know who is paying to promote the political ads they encounter online; if they are being targeted; and why they are being targeted.


The full letter

Dear Facebook:

We are writing you today as a group of technologists, human rights defenders, academics, journalists and Facebook users who are deeply concerned about the validity of Facebook’s promises to protect European users from targeted disinformation campaigns during the European Parliamentary elections. You have promised European lawmakers and users that you will increase the transparency of political advertising on the platform to prevent abuse during the elections. But in the very same breath, you took measures to block access to transparency tools that let your users see how they are being targeted.

In the company’s recent Wall Street Journal op-ed, Mark Zuckerberg wrote that the most important principles around data are transparency, choice and control. By restricting access to advertising transparency tools available to Facebook users, you are undermining transparency, eliminating the choice of your users to install tools that help them analyse political ads, and wielding control over good faith researchers who try to review data on the platform. Your alternative to these third party tools provides simple keyword search functionality and does not provide the level of data access necessary for meaningful transparency.

Actions speak louder than words. That’s why you must take action to meaningfully deliver on the commitments made to the EU institutions, notably the increased transparency that you’ve promised. Promises and press statements aren’t enough; instead, we need to see real action over the coming months, and we will be exploring ways to hold Facebook accountable if that action isn’t sufficient.

Specifically, we ask that you implement the following measures by 1 April 2019 to give developers sufficient lead time to create transparency tools in advance of the elections:

  • Roll out a functional, open Ad Archive API that enables advanced research and development of tools that analyse political ads served to Facebook users in the EU

 

  • Ensure that all political advertisements are clearly distinguished from other content and are accompanied by key targeting criteria such as sponsor identity and amount spent on the platform in all EU countries

 

  • Cease harassment of good faith researchers who are building tools to provide greater transparency into the advertising on your platform

We believe that Facebook and other platforms can be positive forces that enable democracy, but this vision can only be realized through true transparency and trust. Transparency cannot just be on the terms with which the world’s largest, most powerful tech companies are most comfortable.

We look forward to the swift and complete implementation of these transparency measures that you have promised to your users.

Sincerely,

Mozilla Foundation

and also signed by:

Access Now
AlgorithmWatch
All Out
Alto Data Analytics
ARTICLE 19
Aufstehn
Bits of Freedom
Bulgarian Helsinki Committee
BUND – Friends of the Earth Germany
Campact
Campax
Center for Democracy and Technology
CIPPIC
Civil Liberties Union for Europe
Civil Rights Defenders
Declic
doteveryone
Estonian Human Rights Center
Free Press Unlimited
GONG Croatia
Greenpeace
Italian Coalition for Civil Liberties and Rights (CILD)
Mobilisation Lab
Open Data Institute
Open Knowledge International
OpenMedia
Privacy International
PROVIDUS
Reporters Without Borders
Skiftet
SumOfUs
The Fourth Group
Transparent Referendum Initiative
Uplift
Urgent Action Fund for Women’s Human Rights
WhoTargetsMe
Wikimedia UK


Note: This blog post has been updated to reflect additional letter signers.

The post Open Letter: Facebook, Do Your Part Against Disinformation appeared first on The Mozilla Blog.

Shing LyuDownload JavaScript Data as Files on the Client Side

When building websites or web apps, creating a “Download as file” link is quite useful. For example, you may want to allow users to export some data as JSON, CSV or plain text files so they can open them in external programs or load them back later. Usually this requires a web server to format the file and serve it. But you can actually export an arbitrary JavaScript variable to a file entirely on the client side. I implemented that function in one of my projects, MozApoy, and here I’ll explain how I did it.

First, we create a link in HTML

<a id="download_link" download="my_exported_file.txt" href="">Download as Text File</a>

The download attribute will be the filename for your file. It will look like this:

plain text file download

Notice that we keep the href attribute blank. Traditionally we would fill this attribute with a server-generated file path, but this time we’ll generate the link dynamically with JavaScript and assign it to href.

Then, if we want to export the content of the text variable as a text file, we can use this JavaScript code:

var text = 'Some data I want to export';
var data = new Blob([text], {type: 'text/plain'});

var url = window.URL.createObjectURL(data);

document.getElementById('download_link').href = url;

The magic happens on the third line: the window.URL.createObjectURL() API takes a Blob and returns a URL to access it. The URL lives as long as the document in the window on which it was created. Notice that you can assign the type of the data in the new Blob() constructor. If you assign the correct format, the browser can handle the file better. Other commonly seen formats include application/json and text/csv. For example, if we name the file as *.csv and give it type: 'text/csv', Firefox will recognize it as “CSV document” and suggest you open it with LibreOffice Calc.

csv file download

And in the last line we assign the url to the <a/> element’s href attribute, so when the user clicks on the link, the browser will initiate a download action (or other default action for the specific file type.)
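
Putting this together, here is a short sketch of exporting an array as a CSV file, reusing the same download_link element from above. The data is just made up for illustration, and real CSV values containing commas or quotes would need escaping.

var rows = [
    ['name', 'score'],
    ['Alice', '42'],
    ['Bob', '17']
];
var csv = rows.map(function(row) { return row.join(','); }).join('\n');
var csvData = new Blob([csv], {type: 'text/csv'});

var link = document.getElementById('download_link');
link.download = 'my_exported_file.csv';
link.href = window.URL.createObjectURL(csvData);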

Every time you call createObjectURL(), a new object URL is created, which will use up memory if you call it many times. So if you don’t need an old URL anymore, you should call the revokeObjectURL() API to free it.

var url = window.URL.createObjectURL(data);
window.URL.revokeObjectURL(url);

This is a simple trick to let your user download files without setting up any server. If you want to see it in action, you can check out this CodePen.

Mozilla Open Policy & Advocacy BlogKenya Government mandates DNA-linked national ID, without data protection law

Last month, the Kenya Parliament passed a seriously concerning amendment to the country’s national ID law, making Kenya home to the most privacy-invasive national ID system in the world. The rebranded, National Integrated Identity Management System (NIIMS) now requires all Kenyans, immigrants, and refugees to turn over their DNA, GPS coordinates of their residential address, retina scans, iris pattern, voice waves, and earlobe geometry before being issued critical identification documents. NIIMS will consolidate information contained in other government agency databases and generate a unique identification number known as Huduma Namba.

It is hard to see how this system comports with the right to privacy articulated in Article 31 of the Kenyan Constitution. It is deeply troubling that these amendments passed without public debate, and were approved even as a data protection bill which would designate DNA and biometrics as sensitive data is pending.

Before these amendments, in order to issue the National ID Card (ID), the government only required name, date and place of birth, place of residence, and postal address. The ID card is a critical document that impacts everyday life: without it, an individual cannot vote, purchase property, access higher education, obtain employment, or access credit or public health, among other fundamental rights.

Mozilla strongly believes that no digital ID system should be implemented without strong privacy and data protection legislation. The proposed Data Protection Bill of 2018, which Parliament is likely to consider next month, is a strong and thorough framework that contains provisions relating to data minimization as well as collection and purpose limitation. If NIIMS is implemented, it will be in conflict with these provisions, and more importantly in conflict with Article 31 of the Constitution, which specifically protects the right to privacy.

Proponents of NIIMS claim that the system provides a number of benefits, such as accurate delivery of government services. These arguments also seem to conflate legal and digital identity. Legal ID used to certify one’s identity through basic data about one’s personhood (such as your name and the date and place of your birth) is a commendable goal. It is reflected in United Nations Sustainable Development Goal 16.9, which aims “to provide legal identity for all, including birth registration by 2030”. However, it is important to remember this objective can be met in several ways. “Digital ID” systems, and especially those that involve sensitive biometrics or DNA, are not a necessary means of verifying identity, and in practice raise significant privacy and security concerns. The choice of whether to opt for a digital ID, let alone a biometric ID, should therefore be closely scrutinized by governments in light of these risks, rather than uncritically accepted as beneficial.

  • Security Concerns: The centralized nature of NIIMS creates massive security vulnerabilities. It could become a honeypot for malicious actors and identity thieves who can exploit other identifying information linked to stolen biometric data. The amendment is unclear on how the government will establish and institute strong security measures required for the protection of such a sensitive database. If there’s a breach, it’s not as if your DNA or retina can be reset like a password or token.
  • Surveillance Concerns:  By centralizing a tremendous amount of sensitive data in a government database, NIIMS creates an opportunity for mass surveillance by the State. Not only is the collection of biometrics incredibly invasive, but gathering this data combined with transaction logs of where ID is used could substantially reduce anonymity. This is all the more worrying considering Kenya’s history of extralegal  surveillance and intelligence sharing.
  • Ethnic Discrimination  Concerns: The collection of DNA is particularly concerning as this information can be used to identify an individual’s ethnic identity. Given Kenya’s history of  politicization of ethnic identity, collecting this data in a centralized database like NIIMS could reproduce and exacerbate patterns of discrimination.

The process was not constitutional

Kenya’s constitution requires public input before any new law can be adopted. No public discussions were conducted for this amendment. It was offered for parliamentary debate under “Miscellaneous” amendments, which exempted it from procedures and scrutiny that would have required introduction as a substantive bill and corresponding public debate. The Kenyan government must not implement this system without sufficient public debate and meaningful engagement to determine how such a system should be implemented if at all.

The proposed law does not provide people with the opportunity to opt in or out of giving their sensitive and precise data. The Constitution requires that all Kenyans be granted identification. However, if an individual were to refuse to turn over their DNA or other sensitive information to the State, as they should have the right to do, they could risk not being issued their identity or citizenship documents. Such a denial would contravene Articles 12, 13, and 14 of the Constitution.

Opting out of this system should not be used to discriminate or exclude any individual from accessing essential public services and exercising their fundamental rights.

Individuals must be in full control of their digital identities with the right to object to processing and use and withdraw consent. These aspects of control and choice are essential to empowering individuals in the deployment of their digital identities. Therefore policy and technical decisions must take into account systems that allow individuals to identify themselves rather than the system identifying them.

Mozilla urges the government of Kenya to suspend the implementation of NIIMS and we hope Kenyan members of parliament will act swiftly to pass the Data Protection Bill of 2018.

The post Kenya Government mandates DNA-linked national ID, without data protection law appeared first on Open Policy & Advocacy.

Mozilla VR BlogImmersive Media Content Creation Guide

Immersive Media Content Creation Guide

Firefox Reality is ready for your panoramic images and videos, in both 2D and 3D. In this guide you will find advice for creating and formatting your content to best display on the immersive web in Firefox Reality.

Images

The web is a great way to share immersive images, either as standalone photos or as part of an interactive tour. Most browsers can display immersive (360°) images but need a little help. Generally these images are regular JPGs or PNGs that have been taken with a 180° or 360° camera. Depending on the exact format you may need different software to display it in a browser. You can host the images themselves on your own server or use one of the many photo tour websites listed below.

Equirectangular Images

360 cameras usually take photos in equirectangular format, meaning an aspect ratio of 2 to 1. Here are some examples on Flickr.

Immersive Media Content Creation Guide

To display one of these on the web in VR you will need an image viewer library. Here are some examples:
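
If you would rather build a simple viewer yourself, here is a minimal ThreeJS sketch (my own illustration, not taken from any particular library) that maps an equirectangular image onto the inside of a sphere. 'photo.jpg' is a placeholder for your own 2:1 image.

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1100);
const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// map the panorama onto a sphere and flip it inside out so the texture faces the camera
const texture = new THREE.TextureLoader().load('photo.jpg');
const geometry = new THREE.SphereGeometry(500, 60, 40);
geometry.scale(-1, 1, 1);
scene.add(new THREE.Mesh(geometry, new THREE.MeshBasicMaterial({ map: texture })));

renderer.setAnimationLoop(() => renderer.render(scene, camera));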

Spherical Images and 3D Images

Some 360 cameras save in a spherical projection, which generally looks like one or two circles. These should be converted to equirectangular with the tools that came with your camera. 3D images from 180 cameras will usually be two images side by side or one above the other. Again, most camera makers provide tools to prepare these for the web; look at the documentation for your camera.

Immersive Media Content Creation Guide

Photo Tours

One of the best ways to use immersive images on the web is to build an interactive tour with them. There are many excellent web-based tools for building 360 tours. Here are just a few of them:

Video

360 and 3D video is much like regular video. It is generally encoded with the h264 codec and stored inside of an mp4 container. However, 360 and 3D video is very large. Generally you do not want to host it on your own web server. Instead you can host it with a video provider like YouTube or Vimeo. They each have their own instructions for how to process and upload videos.

If you chose to host the video file yourself on a standard web server then you will need to use a video viewer library built with a VR framework like AFrame or ThreeJS.

3D videos

3D video is generally just two 180 or 360 videos stuck together. This is usually called ‘over and under’ format, meaning each video frame is a square containing two equirectangular images: the top half is for the left eye and the bottom half is for the right eye.

Compression Advice

Use as high quality as you can get away with and let your video provider convert it as needed. If you are doing it yourself go for 4k in h264 with the highest bitrate your camera supports.

Devices for capturing 360 videos and images

You will get the best results from a camera built for 360,180, or 3D. Amazon has many fine products to choose from. They should all come with instructions and software for capturing and converting both photos and video.

Members of the Mozilla Mixed Reality team have personally used:

Though you will get better results from a dedicated camera, it is also possible to capture 360 images from custom smartphone camera apps such as FOV, Cardboard Camera and Facebook. See these tutorials on 360 iOS apps and Android apps for more information.

Sharing your Immersive Content

You can share your content on your own website, but if that won’t work for you then consider one of the many 360 content hosting sites like these:

Get Featured

Once you have your immersive content on the web, please let us know about it. We might be able to feature it in the Firefox Reality home page, getting your content in front of many viewers right inside VR.

QMOFirefox 66 Beta 8 Testday, February 15th

Hello Mozillians,

We are happy to let you know that Friday, February 15th, we are organizing Firefox 66 Beta 8 Testday. We’ll be focusing our testing on: Storage Access API/Cookie Restrictions. 

Check out the detailed instructions via this etherpad.

No previous testing experience is required, so feel free to join us on #qa IRC channel where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better!

See you on Friday!

Daniel Stenbergcommercial curl support!

If you want commercial support, ports of curl to other operating systems or just instant help to fix your curl related problems, we’re here to help. Get in touch now! This is the premiere. This has not been offered by me or anyone else before.

I’m not sure I need to say it, but I personally have authored almost 60% of all commits in the curl source code during my more than twenty years in the project. I started the project, I’ve designed its architecture etc. There is simply no one around with my curl experience and knowledge of curl internals. You can’t buy better curl expertise.

curl has become one of the world’s most widely used software components and is the transfer engine doing a large chunk of all non-browser Internet transfers in the world today. curl has reached this level of success entirely without anyone offering commercial services around it. Still, not every company and product made out there has a team of curl experts and in this demanding time and age we know there are times when you rather hire the right team to help you out.

We are the curl experts that can help you and your team. Contact us for all and any support questions at support@wolfssl.com.

What about the curl project?

I’m heading into this new chapter of my life and the curl project with the full knowledge that this blurs the lines between my job and my spare time even more than before. But fear not!

The curl project is free and open and will remain independent of any commercial enterprise helping out customers. I realize that offering to deal with curl problems and solve curl issues for companies and organizations, for compensation, creates new challenges and questions about where the boundaries go, if for nothing else then for me personally. I still think this is worth pursuing and I’m sure we can figure out and handle whatever minor issues this can lead to.

My friends, the community, the users and harsh critics on twitter will all help me stay true and honest. I know this. This should end up a plus for the curl project in general as well as for me personally. More focus, more work and more money involved in curl related activities should improve the project.

It is with great joy and excitement I take on this new step.

Alex GibsonCreating a dark mode theme using CSS Custom Properties

The use of variables in CSS preprocessors such as Sass and Less has been common practice in front-end development for some time now. Using variables to reference recurring values (such as color, margin and padding) helps us to write cleaner, more maintainable, and consistent CSS. Whilst preprocessors have extended CSS with some great patterns that we now often take for granted, using variables when compiling to static CSS has some limitations. This is especially true when it comes to theming.

As an example, let’s take a look at creating a dark mode theme using Sass. Here’s a simplified example for what could be a basic page header:


$color-background: #fff;
$color-text: #000;
$color-title: #999;

$color-background-dark: #000;
$color-text-dark: #fff;
$color-title-dark: #ccc;

.header {
    // default theme colours
    background-color: $color-background;
    color: $color-text;

    .header-title {
        color: $color-title;
    }

    // dark theme colours
    .t-dark & {
        background-color: $color-background-dark;
        color: $color-text-dark;

        .header-title {
            color: $color-title-dark;
        }
    }
}

The above example isn’t very complicated, but it also isn’t terribly efficient. Because Sass has to compile all our styles down to static CSS at build time, we lose all the benefits that dynamic variables could provide. We end up generating selectors for each theme style, and repeating the same properties for each colour change. Here’s what the generated CSS looks like:

.header {
    background-color: #fff;
    color: #000;
}

.header .header-title {
    color: #999;
}

.t-dark .header {
    background-color: #000;
    color: #fff;
}

.t-dark .header .header-title {
    color: #ccc;
}

This may not look like much, but for large or complex projects with many components, this pattern can end up creating a lot of extra CSS. Bundling every selector required for each theme is suboptimal in terms of performance. We could try to solve this by compiling each theme to a separate stylesheet and then dynamically loading our CSS, or by removing unused CSS post-compilation, but this all adds more complexity when it should be simple.

Another issue with the above pattern is related to maintainability. Ensuring that selectors and matching properties exist for each theme can easily be prone to human error. We could try and alleviate this by using component mixins to generate our CSS, but this still doesn’t fix the underlying problem we see with static compilation. Wouldn’t it be great if we could just update the variables at runtime and be done with it all?

Hello Custom Properties

CSS Custom Properties (or CSS variables, as they are commonly referred to) help to solve many of these problems. Unlike preprocessors which compile variables to static values in CSS, custom properties are dynamic. This makes them incredibly flexible and a perfect fit for writing efficient, practical CSS themes.

Custom properties are powerful because they follow the rules of inheritance and the cascade, just like regular CSS properties. If the value of a custom property changes, all DOM elements associated with a selector that uses that property are repainted by the browser automatically.

Here’s the same header example implemented using CSS custom properties:

:root {
    --color-background: #fff;
    --color-text: #000;
    --color-title: #999;
}

:root.t-dark {
    --color-background: #000;
    --color-text: #fff;
    --color-title: #ccc;
}

.header {
    background-color: var(--color-background);
    color: var(--color-text);

    .header-title {
        color: var(--color-title);
    }
}

Much more readable and succinct! We only ever have to write our header component CSS once, and our theme colours can be updated dynamically at runtime in the browser. Whilst this is only a basic example, it is easy to see how this scales so much better. And because custom properties are dynamic it means we can do all kinds of new things, such as change their values from within CSS media queries, or even via JavaScript.
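
For example, here is how a custom property can be read and updated from JavaScript at runtime (the property names match the examples above):

// update a theme colour on the fly
document.documentElement.style.setProperty('--color-background', '#000');

// read the current value back
const background = getComputedStyle(document.documentElement)
    .getPropertyValue('--color-background');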

Supporting dark mode in macOS Mojave

Safari recently added support for a prefers-color-scheme media query that works with macOS Mojave’s new dark mode feature. This enables web pages to opt-in to whichever mode the system preference is set to.

@media (prefers-color-scheme: dark) {
    :root {
        --color-background: #000;
        --color-text: #fff;
        --color-title: #ccc;
    }
}

You can also detect the preference in JavaScript like so:

// detect dark mode preference.
const prefersDarkMode = window.matchMedia('(prefers-color-scheme: dark)').matches;

Because I’m a big fan of dark UIs (I find them much easier on the eye, especially over extended periods of time), I couldn’t resist adding a custom theme to this blog. For browsers which don’t yet support prefers-color-scheme, I also added a theme toggle to the top right navigation.
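
The toggle itself only needs a few lines of JavaScript. The sketch below is a simplified version of the idea rather than the exact code used on this site; it assumes a .theme-toggle button exists in the navigation and switches the t-dark class used in the earlier examples.

const toggle = document.querySelector('.theme-toggle');
const darkModeQuery = window.matchMedia('(prefers-color-scheme: dark)');

function applyTheme(isDark) {
    document.documentElement.classList.toggle('t-dark', isDark);
}

// start with the system preference, then follow any changes to it
applyTheme(darkModeQuery.matches);
darkModeQuery.addListener(event => applyTheme(event.matches));

// let the user override the preference manually
toggle.addEventListener('click', () => {
    applyTheme(!document.documentElement.classList.contains('t-dark'));
});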

Detecting support for CSS custom properties

Detecting browser support for custom properties is pretty straightforward, and can be done in either CSS or JavaScript.

You can detect support for custom properties in CSS using @supports:

@supports (--color-background: #fff) {
    :root {
        --color-background: #fff;
        --color-text: #000;
        --color-title: #999;
    }
}

You can also provide fallbacks for older browsers just by using a previous declaration, which is often simpler.

.header {
    background-color: #fff;
    background-color: var(--color-background);
}

It’s worth noting here that providing a fallback does create more redundant properties in your CSS. Depending on your project, not using any fallback may be just fine. Browsers that don’t support custom properties will resort to default user agent styles, which are still perfectly accessible.

You can also detect support for custom properties in JavaScript using the following line of code (I’m using this to determine whether or not to initialise the theme selector in the navigation).

const supportsCustomProperties = window.CSS && window.CSS.supports('color', 'var(--fake-color)');

Take a closer look

If you would like to take a closer look at the code I used to implement my dark mode theme, feel free to poke around at the source code on GitHub.

The Mozilla BlogMozilla Heads to Capitol Hill to Defend Net Neutrality

Today Denelle Dixon, Mozilla COO, had the honor of testifying on behalf of Mozilla before a packed United States House of Representatives Energy & Commerce Telecommunications Subcommittee in support of our ongoing fight for net neutrality. It was clear: net neutrality principles are broadly embraced, even in partisan Washington.

Dixon in front of the United States House of Representatives Energy & Commerce Telecommunications Subcommittee

Our work to restore net neutrality is driven by our mission to build a better, healthier internet that puts users first. And we believe that net neutrality is fundamental to preserving an open internet that creates room for new businesses and new ideas to emerge and flourish, and where internet users can choose freely the companies, products, and services that put their interests first.

We are committed to restoring the protections users deserve and will continue to go wherever the fight for net neutrality takes us.

For more, check out the replay of the hearing or read Denelle’s prepared written testimony to the subcommittee.

The post Mozilla Heads to Capitol Hill to Defend Net Neutrality appeared first on The Mozilla Blog.

Hacks.Mozilla.OrgRefactoring MDN macros with async, await, and Object.freeze()

A frozen soap bubble

In March of last year, the MDN Engineering team began the experiment of publishing a monthly changelog on Mozilla Hacks. After nine months of the changelog format, we’ve decided it’s time to try something that we hope will be of interest to the web development community more broadly, and more fun for us to write. These posts may not be monthly, and they won’t contain the kind of granular detail that you would expect from a changelog. They will cover some of the more interesting engineering work we do to manage and grow the MDN Web Docs site. And if you want to know exactly what has changed and who has contributed to MDN, you can always check the repos on GitHub.

In January, we landed a major refactoring of the KumaScript codebase and that is going to be the topic of this post because the work included some techniques of interest to JavaScript programmers.

Modern JavaScript

One of the pleasures of undertaking a big refactor like this is the opportunity to modernize the codebase. JavaScript has matured so much since KumaScript was first written, and I was able to take advantage of this, using let and const, classes, arrow functions, for...of loops, the spread (…) operator, and destructuring assignment in the refactored code. Because KumaScript runs as a Node-based server, I didn’t have to worry about browser compatibility or transpilation: I was free (like a kid in a candy store!) to use all of the latest JavaScript features supported by Node 10.

KumaScript and macros

Updating to modern JavaScript was a lot of fun, but it wasn’t reason enough to justify the time spent on the refactor. To understand why my team allowed me to work on this project, you need to understand what KumaScript does and how it works. So bear with me while I explain this context, and then we’ll get back to the most interesting parts of the refactor.

First, you should know that Kuma is the Python-based wiki that powers MDN, and KumaScript is a server that renders macros in MDN documents. If you look at the raw form of an MDN document (such as the HTML <body> element) you’ll see lines like this:

It must be the second element of an {{HTMLElement("html")}} element.

The content within the double curly braces is a macro invocation. In this case, the macro is defined to render a cross-reference link to the MDN documentation for the html element. Using macros like this keeps our links and angle-bracket formatting consistent across the site and makes things simpler for writers.

MDN has been using macros like this since before the Kuma server existed. Before Kuma, we used a commercial wiki product which allowed macros to be defined in a language they called DekiScript. DekiScript was a JavaScript-based templating language with a special API for interacting with the wiki. So when we moved to the Kuma server, our documents were full of macros defined in DekiScript, and we needed to implement our own compatible version, which we called KumaScript.

Since our macros were defined using JavaScript, we couldn’t implement them directly in our Python-based Kuma server, so KumaScript became a separate service, written in Node. This was 7 years ago in early 2012, when Node itself was only on version 0.6. Fortunately, a JavaScript-based templating system known as EJS already existed at that time, so the basic tools for creating KumaScript were all in place.

But there was a catch: some of our macros needed to make HTTP requests to fetch data they needed. Consider the HTMLElement macro shown above for instance. That macro renders a link to the MDN documentation for a specified HTML tag. But, it also includes a tooltip (via the title attribute) on the link that includes a quick summary of the element:

A rendered link to documentation for an HTML element, displaying a tooltip containing a summary of the linked documentation.

That summary has to come from the document being linked to. This means that the implementation of the KumaScript macro needs to fetch the page it is linking to in order to extract some of its content. Furthermore, macros like this are written by technical writers, not software engineers, and so the decision was made (I assume by whoever designed the DekiScript macro system) that things like HTTP fetches would be done with blocking functions that returned synchronously, so that technical writers would not have to deal with nested callbacks.

This was a good design decision, but it made things tricky for KumaScript. Node does not naturally support blocking network operations, and even if it did, the KumaScript server could not just stop responding to incoming requests while it fetched documents for pending requests. The upshot was that KumaScript used the node-fibers binary extension to Node in order to define methods that blocked while network requests were pending. And in addition, KumaScript adopted the node-hirelings library to manage a pool of child processes. (It was written by the original author of KumaScript for this purpose). This enabled the KumaScript server to continue to handle incoming requests in parallel because it could farm out the possibly-blocking macro rendering calls to a pool of hireling child processes.

Async and await

This fibers+hirelings solution rendered MDN macros for 7 years, but by 2018 it had become obsolete. The original design decision that macro authors should not have to understand asynchronous programming with callbacks (or Promises) is still a good decision. But when Node 8 added support for the new async and await keywords, the fibers extension and hirelings library were no longer necessary.

You can read about async functions and await expressions on MDN, but the gist is this:

  • If you declare a function async, you are indicating that it returns a Promise. And if you return a value that is not a Promise, that value will be wrapped in a resolved Promise before it is returned.
  • The await operator makes asynchronous Promises appear to behave synchronously. It allows you to write asynchronous code that is as easy to read and reason about as synchronous code.

As an example, consider this line of code:

let response = await fetch(url);

In web browsers, the fetch() function starts an HTTP request and returns a Promise object that will resolve to a response object once the HTTP response begins to arrive from the server. Without await, you’d have to call the .then() method of the returned Promise, and pass a callback function to receive the response object. But the magic of await lets us pretend that fetch() actually blocks until the HTTP response is received. There is only one catch:

  • You can only use await within functions that are themselves declared async. Also, await doesn’t actually make anything block: the underlying operation is still fundamentally asynchronous, and even if we pretend that it is not, we can only do that within some larger asynchronous operation.

What this all means is that the design goal of protecting KumaScript macro authors from the complexity of callbacks can now be done with Promises and the await keyword. And this is the insight with which I undertook our KumaScript refactor.

As I mentioned above, each of our KumaScript macros is implemented as an EJS template. The EJS library compiles templates to JavaScript functions. And to my delight, the latest version of the library has already been updated with an option to compile templates to async functions, which means that await is now supported in EJS.
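
As an illustration only (the template source and the wiki binding below are made up for this post, not actual KumaScript code), compiling and rendering a template as an async function looks roughly like this:

const ejs = require('ejs');

// A toy "macro": a template that awaits an asynchronous API call.
const source = '<%= (await wiki.getPage(slug)).title %>';

// With the async option, ejs.compile() returns an async function,
// so rendering produces a Promise that resolves to the output string.
const render = ejs.compile(source, { async: true });

// A stand-in for the real macro API, for demonstration purposes.
const wiki = {
  getPage: async (slug) => ({ title: `Documentation for ${slug}` }),
};

render({ wiki, slug: 'Web/HTML/Element/html' })
  .then((output) => console.log(output)); // "Documentation for Web/HTML/Element/html"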

With this new library in place, the refactor was relatively simple. I had to find all the blocking functions available to our macros and convert them to use Promises instead of the node-fibers extension. Then, I was able to do a search-and-replace on our macro files to insert the await keyword before all invocations of these functions. Some of our more complicated macros define their own internal functions, and when those internal functions used await, I had to take the additional step of changing those functions to be async. I did get tripped up by one piece of syntax, however, when I converted an old line of blocking code like this:

var title = wiki.getPage(slug).title;

To this:

let title = await wiki.getPage(slug).title;

I didn’t catch the error on that line until I started seeing failures from the macro. In the old KumaScript, wiki.getPage() would block and return the requested data synchronously. In the new KumaScript, wiki.getPage() is declared async which means it returns a Promise. And the code above is trying to access a non-existent title property on that Promise object.

Mechanically inserting an await in front of the invocation does not change that fact because the await operator has lower precedence than the . property access operator. In this case, I needed to add some extra parentheses to wait for the Promise to resolve before accessing the title property:

let title = (await wiki.getPage(slug)).title;

This relatively small change in our KumaScript code means that we no longer need the fibers extension compiled into our Node binary; it means we don’t need the hirelings package any more; and it means that I was able to remove a bunch of code that handled the complicated details of communication between the main process and the hireling worker processes that were actually rendering macros.

And here’s the kicker: when rendering macros that do not make HTTP requests (or when the HTTP results are cached) I saw rendering speeds increase by a factor of 25 (not 25% faster–25 times faster!). And at the same time CPU load dropped in half. In production, the new KumaScript server is measurably faster, but not nearly 25x faster, because, of course, the time required to make asynchronous HTTP requests dominates the time required to synchronously render the template. But achieving a 25x speedup, even if only under controlled conditions, made this refactor a very satisfying experience!

Object.create() and Object.freeze()

There is one other piece of this KumaScript refactor that I want to talk about because it highlights some JavaScript techniques that deserve to be better known. As I’ve written above, KumaScript uses EJS templates. When you render an EJS template, you pass in an object that defines the bindings available to the JavaScript code in the template. Above, I described a KumaScript macro that called a function named wiki.getPage(). In order for it to do that, KumaScript has to pass an object to the EJS template rendering function that binds the name wiki to an object that includes a getPage property whose value is the relevant function.

For KumaScript, there are three layers of this global environment that we make available to EJS templates. Most fundamentally, there is the macro API, which includes wiki.getPage() and a number of related functions. All macros rendered by KumaScript share this same API. Above this API layer is an env object that gives macros access to page-specific values such as the language and title of the page within which they appear. When the Kuma server submits an MDN page to the KumaScript server for rendering, there are typically multiple macros to be rendered within the page. But all macros will see the same values for per-page variables like env.title and env.locale. Finally, each individual macro invocation on a page can include arguments, and these are exposed by binding them to variables $0, $1, etc.

So, in order to render macros, KumaScript has to prepare an object that includes bindings for a relatively complex API, a set of page-specific variables, and a set of invocation-specific arguments. When refactoring this code, I had two goals:

  • I didn’t want to have to rebuild the entire object for each macro to be rendered.
  • I wanted to ensure that macro code could not alter the environment and thereby affect the output of future macros.

I achieved the first goal by using the JavaScript prototype chain and Object.create(). Rather than defining all three layers of the environment on a single object, I first created an object that defined the fixed macro API and the per-page variables. I reused this object for all macros within a page. When it was time to render an individual macro, I used Object.create() to create a new object that inherited the API and per-page bindings, and I then added the macro argument bindings to that new object. This meant that there was much less setup work to do for each individual macro to be rendered.
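
The sketch below illustrates that layering with simplified, made-up names (the real Environment class in KumaScript is more elaborate):

// Built once per page: the macro API plus the per-page variables.
const pageEnvironment = {
  wiki: { getPage: async (slug) => ({ /* fetch and return page data */ }) },
  env: { title: 'Array.prototype.map', locale: 'en-US' },
};

function environmentForMacro(args) {
  // Each macro invocation gets a fresh object that inherits the shared
  // bindings through its prototype chain...
  const macroEnvironment = Object.create(pageEnvironment);
  // ...and only the invocation-specific arguments become own properties.
  args.forEach((argument, index) => {
    macroEnvironment['$' + index] = argument;
  });
  return macroEnvironment;
}

const first = environmentForMacro(['html']);
const second = environmentForMacro(['css', 'reference']);
// first.$0 === 'html' and second.$0 === 'css', while both objects share the
// same wiki and env bindings without copying them.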

But if I was going to reuse the object that defined the API and per-page variables, I had to be very sure that a macro could not alter the environment, because that would mean that a bug in one macro could alter the output of a subsequent macro. Using Object.create() helped a lot with this: if a macro runs a line of code like wiki = null;, that will only affect the environment object created for that one render, not the prototype object that it inherits from, and so the wiki.getPage() function will still be available to the next macro to be rendered. (I should point out that using Object.create() like this can cause some confusion when debugging because an object created this way will look like it is empty even though it has inherited properties.)

This Object.create() technique was not enough, however, because a macro that included the code wiki.getPage = null; would still be able to alter its execution environment and affect the output of subsequent macros. So, I took the extra step of calling Object.freeze() on the prototype object (and recursively on the objects it references) before I created objects that inherited from it.

Object.freeze() has been part of JavaScript since 2009, but you may not have ever used it if you are not a library author. It locks down an object, making all of its properties read-only. Additionally it “seals” the object, which means that new properties cannot be added and existing properties can not be deleted or configured to make them writable again.
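
A recursive freeze takes only a few lines. This is a generic sketch rather than the exact KumaScript code, and it assumes the object graph contains no cycles:

function deepFreeze(object) {
  // Freeze nested objects and functions first, then the object itself.
  for (const value of Object.values(object)) {
    if (value !== null &&
        (typeof value === 'object' || typeof value === 'function') &&
        !Object.isFrozen(value)) {
      deepFreeze(value);
    }
  }
  return Object.freeze(object);
}

deepFreeze(pageEnvironment); // the shared prototype from the earlier sketch

// In strict mode these now throw a TypeError; in sloppy mode they fail silently:
// pageEnvironment.wiki.getPage = null;
// pageEnvironment.env.title = 'oops';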

I’ve always found it reassuring to know that Object.freeze() is there if I need it, but I’ve rarely actually needed it. So it was exciting to have a legitimate use for this function. There was one hitch worth mentioning, however: after triumphantly using Object.freeze(), I found that my attempts to stub out macro API methods like wiki.getPage() were failing silently. By locking down the macro execution environment so tightly, I’d locked out my own ability to write tests! The solution was to set a flag when testing and then omit the Object.freeze() step when the flag was set.

If this all sounds intriguing, you can take a look at the Environment class in the KumaScript source code.

The post Refactoring MDN macros with async, await, and Object.freeze() appeared first on Mozilla Hacks - the Web developer blog.

Mozilla GFXWebRender newsletter #39

Hi there! The project keeps making very good progress (only 7 blocker bugs left at the time of writing these words, some of which have fixes in review). This means WebRender has a good chance of making it into Firefox 67 stable. I expect bugs and crash reports to spike as WebRender reaches a larger user population, which will keep us busy for a short while, and once things settle we’ll be able to go back to something we have been postponing for a while: polishing, adding new features and preparing WebRender for new platforms. Exciting!
I’d like to showcase a few projects that use WebRender in a future WebRender newsletter. If you maintain or know about one, please let us know in the comments section of this post.

Notable WebRender and Gecko changes

  • Jeff experimented with enabling WebRender for a few more configurations.
  • Kats enabled more WPT tests for windows-qr
  • Kvark fixed more perspective interpolation issues.
  • Kvark improved the way the resolution of transformed intermediate surfaces is computed and followed up with more improvements.
  • Kvark fixed some plane-splitting bugs.
  • Kvark prevented a crash with non-mappable clip rects.
  • Andrew fixed a pixel snapping issue.
  • srijs and Lee worked around yet another Mac GLSL compiler bug.
  • Lee fixed a performance regression related to animated blobs being invalidated too frequently.
  • Emilio fixed a clipping regression.
  • Glenn fixed a regression with tiled clip masks.
  • Glenn improved the performance of large blur radii by down-scaling more aggressively.
  • Glenn added more debugging infrastructure in wrench.
  • Sotaro enabled mochitest-chrome in WebRender.
  • Sotaro fixed an intermittent assertion.
  • Sotaro fixed a race condition between GPU process crashes and video playback.
  • Doug improved document splitting generalization and integration with APZ.

Blocker bugs countdown

The team keeps going through the remaining blockers (0 P2 bugs and 7 P3 bugs at the time of writing).

Enabling WebRender in Firefox Nightly

In about:config, set the pref “gfx.webrender.all” to true and restart the browser.

Reporting bugs

The best place to report bugs related to WebRender in Firefox is the Graphics :: WebRender component in bugzilla.
Note that it is possible to log in with a github account.

Matt BrubeckRust: A unique perspective

The Rust programming language is designed to ensure memory safety, using a mix of compile-time and run-time checks to stop programs from accessing invalid pointers or sharing memory across threads without proper synchronization.

The way Rust does this is usually introduced in terms of mutable and immutable borrowing and lifetimes. This makes sense, because these are mechanisms that Rust programmers must use directly. They describe what the Rust compiler checks when it compiles a program.

However, there is another way to explain Rust. This alternate story focuses on unique versus shared access to memory. I believe this version is useful for understanding why various checks exist and how they provide memory safety.

Most experienced Rust programmers are already familiar with this concept. Five years ago, Niko Matsakis even proposed changing the mut keyword to uniq to emphasize it. My goal is to make these important ideas more accessible to beginning and intermediate Rust programmers.

This is a very quick introduction that skips over many details to focus on high-level concepts. It should complement the official Rust documentation, not supplant it.

Unique access

The first key observation is: If a variable has unique access to a value, then it is safe to mutate it.

By safe, I mean memory-safe: free from invalid pointer accesses, data races, or other causes of undefined behavior. And by unique access, I mean that while this variable is alive, there are no other variables that can be used to read or write any part of the same value.

Unique access makes memory safety very simple: If there are no other pointers to the value, then you don’t need to worry about invalidating them. Similarly, if variables on other threads can't access the value, you needn’t worry about synchronization.

Unique ownership

One form of unique access is ownership. When you initialize a variable with a value, it becomes the sole owner of that value. Because the value has just one owner, the owner can safely mutate the value, destroy it, or transfer it to a new owner.

Depending on the type of the value, assigning a value to a new variable will either move it or copy it. Either way, unique ownership is preserved. For a move type, the old owner becomes inaccessible after the move, so we still have one value owned by one variable:

let x = vec![1, 2, 3];
let y = x;             // move ownership from x to y
// can’t access x after moving its value to y

For a copy type, the value is duplicated, so we end up with two values owned by two variables:

let x = 1;
let y = x; // copy the value of x into y

In this case, each variable ends up with a separate, independent value. Mutating one will not affect the other.

One value might be owned by another value, rather than directly by a variable. For example, a struct owns its fields, a Vec<T> owns the T items inside it, and a Box<T> owns the T that it points to.
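
A tiny example of that nesting (the types here are arbitrary):

struct Point { x: i32, y: i32 }

let p = Point { x: 1, y: 2 };  // p owns its x and y fields
let boxed = Box::new(p);       // the Box now owns the Point (p was moved into it)
let v = vec![boxed];           // the Vec owns the Box, which owns the Point
// dropping v at the end of its scope drops the Box and the Point along with it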

Unique borrowing

If you have unique access to a value of type T, you can borrow a unique reference to that value. A unique reference to a T has type &mut T.

Because it’s safe to mutate when you have a unique reference, unique references are also called “mutable references.”

The Rust compiler enforces this uniqueness at compile time. In any region of code where the unique reference may be used, no other reference to any part of the same value may exist, and even the owner of that value may not move or destroy it. Violating this rule triggers a compiler error.

A reference only borrows the value, and must return it to its owner. This means that the reference can be used to mutate the value, but not to move or destroy it (unless it overwrites it with a new value, for example using replace). Just like in real life, you need to give back what you’ve borrowed.

Borrowing a value is like locking it. Just like a mutex lock in a multi-threaded program, it’s usually best to hold a borrowed reference for as little time as possible. Storing a unique reference in a long-lived data structure will prevent any other use of the value for as long as that structure exists.

Unique references can't be copied

An &mut T cannot be copied or cloned, because this would result in two “unique” references to the same value. It can only be moved:

let mut a = 1;
let x = &mut a;
let y = x; // move the reference from x into y
// x is no longer accessible here

However, you can temporarily “re-borrow” from a unique reference. This gives a new unique reference to the same value, but the original reference can no longer be accessed until the new one goes out of scope or is no longer used (depending on which version of Rust you are using):

let mut a = 1;
let x = &mut a;
{
    let y = &mut *x;
    // x is "re-borrowed" and cannot be used while y is alive
    *y = 4; // y has unique access and can mutate `a`
}
// x becomes accessible again after y is dead
*x += 1; // now x has unique access again and can mutate the value
assert_eq!(*x, 5);

Re-borrowing happens implicitly when you call a function that takes a unique reference. This greatly simplifies code that passes unique references around, but can confuse programmers who are just learning about these restrictions.
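
A short sketch of that implicit re-borrowing:

fn bump(n: &mut i32) {
    *n += 1;
}

let mut a = 1;
let x = &mut a;
bump(x); // implicitly re-borrows *x just for the duration of the call
bump(x); // so x is still usable afterwards, without writing &mut *x by hand
assert_eq!(*x, 3);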

Shared access

A value is shared if there are multiple variables that are alive at the same time that can be used to access it.

While a value is shared, we have to be a lot more careful about mutating it. Writing to the value through one variable could invalidate pointers held by other variables, or cause a data race with readers or writers on other threads.

Rust ensures that you can read from a value only while no variables can write to it, and you can write to a value only while no other variables can read or write to it. In other words, you can have a unique writer, or multiple readers, but not both at once. Some Rust types enforce this at compile time and others at run time, but the principle is always the same.

Shared ownership

One way to share a value of type T is to create an Rc<T>, or “reference-counted pointer to T”. This allocates space on the heap for a T, plus some extra space for reference counting (tracking the number of pointers to the value). Then you can call Rc::clone to increment the reference count and receive another Rc<T> that points to the same value:

let x = Rc::new(1);
let y = x.clone();
// x and y hold two different Rc that point to the same memory

Because the T lives on the heap and x and y just hold pointers to it, it can outlive any particular pointer. It will be destroyed only when the last of the pointers is dropped. This is called shared ownership.

Shared borrowing

Since Rc<T> doesn't have unique access to its T, it can’t give out a unique &mut T reference (unless it checks at run time that the reference count is equal to 1, so it is not actually shared). But it can give out a shared reference to T, whose type is written &T. (This is also called an “immutable reference.”)

A shared reference is another “borrowed” type which can’t outlive its referent. The compiler ensures that a shared reference can’t be created while a unique reference exists to any part of the same value, and vice-versa. And (just like unique references) the owner isn’t allowed to drop/move/mutate the value while any shared references are alive.

If you have unique access to a value, you can produce many shared references or one unique reference to it. However, if you only have shared access to a value, you can’t produce a unique reference (at least, not without some additional checks, which I’ll discuss soon). One consequence of this is that you can convert an &mut T to an &T, but not vice-versa.

Because multiple shared references are allowed, an &T can be copied/cloned (unlike &mut T).

Thread safety

Astute readers might notice that merely cloning an Rc<T> mutates a value in memory, since it modifies the reference count. This could cause a data race if another clone of the Rc were accessed at the same time on a different thread! The compiler solves this in typical Rust fashion: By refusing to compile any program that passes an Rc to a different thread.

Rust has two built-in traits that it uses to mark types that can be accessed safely by other threads:

  • T: Send means it's safe to access a T on a single other thread, where one thread at a time has exclusive access. A value of this type can be moved to another thread by unique ownership, or borrowed on another thread by unique reference (&mut T). A more descriptive name for this trait might be UniqueThreadSafe.

  • T: Sync means it’s safe for many threads to access a T simultaneously, with each thread having shared access. Values of such types can be accessed on other threads via shared ownership or shared references (&T). A more descriptive name would be SharedThreadSafe.

Rc<T> implements neither of these traits, so an Rc<T> cannot be moved or borrowed into a variable on a different thread. It is forever trapped on the thread where it was born.

The standard library also offers an Arc<T> type, which is exactly like Rc<T> except that it implements Send, and uses atomic operations to synchronize access to its reference counts. This can make Arc<T> a little more expensive at run time, but it allows multiple threads to share a value safely.

These traits are not mutually exclusive. Many types are both Send and Sync, meaning that it’s safe to give unique access to one other thread (for example, moving the value itself or sending an &mut T reference) or shared access to many threads (for example, sending multiple Arc<T> or &T).
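
For instance, here is a minimal sketch of shared, read-only access from several threads via Arc (the data and thread count are arbitrary):

use std::sync::Arc;
use std::thread;

let data = Arc::new(vec![1, 2, 3]); // Vec<i32> is both Send and Sync
let mut handles = Vec::new();
for _ in 0..3 {
    let data = Arc::clone(&data); // bump the atomic reference count
    handles.push(thread::spawn(move || {
        // each thread has shared (read-only) access to the same Vec
        println!("sum = {}", data.iter().sum::<i32>());
    }));
}
for handle in handles {
    handle.join().unwrap();
}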

Shared mutability

So far, we’ve seen that sharing is safe when values are not mutated, and mutation is safe when values are not shared. But what if we want to share and mutate a value? The Rust standard library provides several different mechanisms for shared mutability.

The official documentation also calls this “interior mutability” because it lets you mutate a value that is “inside” of an immutable value. This terminology can be confusing: What does it mean for the exterior to be “immutable” if its interior is mutable? I prefer “shared mutability” which puts the spotlight on a different question: How can you safely mutate a value while it is shared?

What could go wrong?

What’s the big deal about shared mutation? Let’s start by listing some of the ways it could go wrong:

First, mutating a value can cause pointer invalidation. For example, pushing to a vector might cause it to reallocate its buffer. If there are other variables that contained addresses of items in the buffer, they would now point to deallocated memory. Or, mutating an enum might overwrite a value of one type with a value of a different type. A pointer to the old value will now be pointing at memory occupied by the wrong type. Either of these cases would trigger undefined behavior.

Second, it could violate aliasing assumptions. For example, the optimizing compiler assumes by default that the referent of an &T reference will not change while the reference exists. It might re-order code based on this assumption, leading to undefined behavior when the assumption is violated.

Third, if one thread mutates a value at the same time that another thread is accessing it, this causes a data race unless both threads use synchronization primitives to prevent their operations from overlapping. Data races can cause arbitrary undefined behavior (in part because data races can also violate assumptions made by the optimizer during code generation).

UnsafeCell

To fix the problem of aliasing assumptions, we need UnsafeCell<T>. The compiler knows about this type and treats it specially: It tells the optimizer that the value inside an UnsafeCell is not subject to the usual restrictions on aliasing.

Safe Rust code doesn’t use UnsafeCell directly. Instead, it’s used by libraries (including the standard library) that provide APIs for safe shared mutability. All of the shared mutable types discussed in the following sections use UnsafeCell internally.

UnsafeCell solves only one of the three problems listed above. Next, we'll see some ways to solve the other two problems: pointer invalidation and data races.

Multi-threaded shared mutability

Rust programs can safely mutate a value that’s shared across threads, as long as the basic rules of unique and shared access are enforced: Only one thread at a time may have unique access to a value, and only this thread can mutate it. When no thread has unique access, then many threads may have shared access, but the value can’t be mutated while they do.

Rust has two main types that allow thread-safe shared mutation:

  • Mutex<T> allows one thread at a time to “lock” a mutex and get unique access to its contents. If a second thread tries to lock the mutex at the same time, the second thread will block until the first thread unlocks it. Since Mutex provides access to only one thread at a time, it can be used to share any type that implements the Send (“unique thread-safe”) trait.

  • RwLock<T> is similar but has two different types of lock: A “write” lock that provides unique access, and a “read” lock that provides shared access. It will allow many threads to hold read locks at the same time, but only one thread can hold a write lock. If one thread tries to write while other threads are reading (or vice-versa), it will block until the other threads release their locks. Since RwLock provides both unique and shared access, its contents must implement both Send (“unique thread-safe”) and Sync (“shared thread-safe”).

These types prevent pointer invalidation by using run-time checks to enforce the rules of unique and shared borrowing. They prevent data races by using synchronization primitives provided by the platform’s native threading system.

In addition, various atomic types allow safe shared mutation of individual primitive values. These prevent data races by using compiler intrinsics that provide synchronized operations, and they prevent pointer invalidation by refusing to give out references to their contents; you can only read from them or write to them by value.
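
A small sketch of that by-value interface:

use std::sync::atomic::{AtomicUsize, Ordering};

let counter = AtomicUsize::new(0);
counter.fetch_add(1, Ordering::SeqCst);        // mutation through a shared reference, no lock needed
assert_eq!(counter.load(Ordering::SeqCst), 1); // reads return a plain value, not a reference into the cell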

All these types are only useful when shared by multiple threads, so they are often used in combination with Arc. Because Arc lets multiple threads share ownership of a value, it works with threads that might outlive the function that spawns them (and therefore can’t borrow references from it). However, scoped threads are guaranteed to terminate before their spawning function, so they can capture shared references like &Mutex<T> instead of Arc<Mutex<T>>.
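
Here is a minimal sketch of that common Arc<Mutex<T>> pattern, with several threads mutating one shared counter:

use std::sync::{Arc, Mutex};
use std::thread;

let counter = Arc::new(Mutex::new(0));
let mut workers = Vec::new();
for _ in 0..4 {
    let counter = Arc::clone(&counter);
    workers.push(thread::spawn(move || {
        // lock() blocks until this thread has unique access to the contents
        let mut value = counter.lock().unwrap();
        *value += 1;
    })); // the lock is released when `value` goes out of scope
}
for worker in workers {
    worker.join().unwrap();
}
assert_eq!(*counter.lock().unwrap(), 4);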

Single-threaded shared mutability

The standard library also has two types that allow safe shared mutation within a single thread. These types don’t implement the Sync trait, so the compiler won't let you share them across multiple threads. This neatly avoids data races, and also means that these types don’t need atomic operations (which are potentially expensive).

  • Cell<T> solves the problem of pointer invalidation by forbidding pointers to its contents. Like the atomic types mentioned above, you can only read from it or write to it by value. Changing the data “inside” of the Cell<T> is okay, because there are no shared pointers to that data – only to the Cell<T> itself, whose type and address do not change when you mutate its interior. (Now we see why “interior mutability” is also a useful concept.)

  • Many Rust types are useless without references, so Cell is often too restrictive. RefCell<T> allows you to borrow either unique or shared references to its contents, but it keeps count of how many borrowers are alive at a time. Like RwLock, it allows one unique reference or many shared references, but not both at once. It enforces this rule using run-time checks. (But since it’s used within a single thread, it can’t block the thread while waiting for other borrowers to finish. Instead, it panics if a program violates its borrowing rules.)

These types are often used in combination with Rc<T>, so that a value shared by multiple owners can still be mutated safely. They may also be used for mutating values behind shared references. The std::cell docs have some examples.
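
A minimal single-threaded sketch of that combination:

use std::cell::RefCell;
use std::rc::Rc;

let shared = Rc::new(RefCell::new(vec![1, 2, 3]));
let also_shared = Rc::clone(&shared);

also_shared.borrow_mut().push(4);     // run-time check passes: one unique borrow at a time
assert_eq!(shared.borrow().len(), 4); // a shared borrow, taken after the unique borrow ended

// Holding both kinds of borrow at once would panic at run time:
// let reader = shared.borrow();
// let writer = shared.borrow_mut(); // panics: already borrowed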

Summary

To summarize some key ideas:

  • Rust has two types of references: unique and shared.
  • Unique mutable access is easy.
  • Shared immutable access is easy.
  • Shared mutable access is hard.
  • This is true for both single-threaded and multi-threaded programs.

We also saw a couple of ways to classify Rust types. Here’s a table showing some of the most common types according to this classification scheme:

             Unique        Shared
Borrowed     &mut T        &T
Owned        T, Box<T>     Rc<T>, Arc<T>

I hope that thinking of these types in terms of uniqueness and sharing will help you understand how and why they work, as it helped me.

Want to know more?

As I said at the start, this is just a quick introduction and glosses over many details. The exact rules about unique and shared access in Rust are still being worked out. The Aliasing chapter of the Rustonomicon explains more, and Ralf Jung’s Stacked Borrows model is the start of a more complete and formal definition of the rules.

If you want to know more about how shared mutability can lead to memory-unsafety, read The Problem With Single-threaded Shared Mutability by Manish Goregaokar.

The Swift language has an approach to memory safety that is similar in some ways, though its exact mechanisms are different. You might be interested in its recently-introduced Exclusivity Enforcement feature, and the Ownership Manifesto that originally described its design and rationale.

Mozilla Localization (L10N)A New Year with New Goals for Mozilla Localization


We had a really ambitious and busy year in 2018! Thanks to the help of the global localization community as well as a number of cross-functional Mozilla staff, we were able to focus our efforts on improving the foundations of our localization program. These are some highlights of what we accomplished in 2018:

  • Fluent syntax stability.
  • New design for review process in Pontoon.
  • Continuous localization for Firefox desktop.
  • arewefluentyet.com
  • Formation of Mozilla Terminology Working Group for defining en-US source terms.
  • 8 community-organized workshops around the world.
  • Firefox Lite localization.
  • Research and recommendations for future international brand management.
  • Begun rewrite of Pontoon’s Translate view to React.
  • Clearly defined l10n community roles and their responsibilities.

Rather than plan out our goals for the full year in 2019, we’ve been encouraged to take it a quarter at a time. That being said, there are a number of interesting themes that will pop up in 2019 as well as the continuation of work from 2018:

Standardize & Scale

There are still areas within our tool-chain as well as our processes that make it hard to scale localization to all of Mozilla. Over the course of 2018 we saw more and more l10n requests from internal teams that required customized processes. The good news here is that the organization as a whole wants to localize more and more content (that hasn’t been true in the past)!

While we’ve seen success in standardizing the processes for localizing product user interfaces, we’ve struggled to rein in the customizations for other types of content. In 2019, we’ll focus a lot of our energy on bringing more stability and consistency to localizers by standardizing localization processes according to specific content types. Once standardized, we’ll be able to scale to meet the needs of these internal teams while keeping the amount of new content to translate in consistent volumes.

Mobilize South East Asian Locales

One of the primary focus areas for all of Mozilla this year is South East Asian markets. The Emerging Markets team in Taipei is focused on creating products for those markets that meet the needs of users there, building on the success of Screenshots Go and Firefox Lite. This year we’ll see more products coming to these markets and it will be more important than ever for us to know how to mobilize l10n communities in those regions in order to localize these exciting, new products.

New Technologies

Early this year we plan to hit a major milestone: Fluent 1.0! This is the culmination of over a decade’s worth of work and we couldn’t be more proud of this accomplishment. Fluent will continue to be implemented in Firefox as well as other Mozilla projects throughout 2019. We’re planning a roadmap for an ecosystem of tooling to support Fluent 1.0 as well as exploring how to build a thriving Fluent community.

Pontoon’s Translate view rewrite to React will be complete and we’ll be implementing features for a newly redesigned review process. Internationalizing the Pontoon Translate UI will be a priority, as well as addressing some long-requested feature updates, like terminology support and improved community and user profile metrics.

Train the Trainers

In 2018 we published clear descriptions of the responsibilities and expectations of localizers in specific community roles. These roles are mirror images of Pontoon roles, as Pontoon is the central hub for localization at Mozilla. In 2019, we plan to organize a handful of workshops in the latter half of the year to train Managers on how to be effective leaders in their communities and reliable extensions of the l10n-drivers team. We would like to record at least one of these and make the workshop training available to everyone through the localizer documentation (or some other accessible place).

We aim to report on the progress of these themes throughout the year in quarterly reports. In each report, we’ll share the outcomes of the objectives of one quarter and describe the objectives for the next quarter. In Q1 of 2019 (January – March), the l10n-drivers will:

  • Announce release of Fluent 1.0 to the world
  • Standardize vendor localization process under separate, self-service tool-chain for vendor-sourced content types.
  • Standardize the way Android products are bootstrapped and localized
  • Know how to effectively mobilize South/East Asian communities
  • Transition mozilla.org away from .lang-based l10n infrastructure.
  • Port Pontoon’s translate view to React and internationalize it.

As always, if you have questions about any of these objectives or themes for 2019, please reach out to an l10n-driver, we’d be very happy to chat.

Mike ConleyFirefox Front-End Performance Update #12

Well, here I am again – apologizing about a late update. Lots of stuff has been going on performance-wise in the Firefox code-base, and I’ll just be covering a small section of it here.

You might also notice that I changed the title of the blog series from “Firefox Performance Update” to “Firefox Front-end Performance Update”, to reflect that these posts focus on the things the Firefox Front-end Performance team is doing to keep Firefox speedy (though I’ll still add a grab-bag of other performance-related work at the end).

So what are we waiting for? What’s been going on?

Migrate consumers to the new Places Observer system (Paused by Doug Thayer)

Doug was working on this later in 2018, and successfully ported a good chunk of our bookmarks code to use the new batched Places Observer system. There’s still a long-tail of other call sites that need to be updated to the new system, but Doug has shifted focus from this to other things in the meantime.

Document Splitting (In-Progress by Doug Thayer)

With WebRender becoming an ever-closer reality to our general user population, Doug has been focusing on “Document Splitting”, which makes WebRender more efficient by splitting updates that occur in the browser UI from updates that occur in the content area.

This has been a pretty long-haul task, but Doug has been plugging away, and landed a significant chunk of the infrastructure for this. At this time, Doug is working with kats to make Document Splitting integrate nicely with Async-Pan-Zooming (APZ).

The current plan is for Document Splitting to land disabled by default, since it’s blocked by parent-process retained display lists (which still have a few bugs to shake out).

Warm-up Service (In-Progress by Doug Thayer)

Doug is investigating the practicalities of having a service run during Windows start-up to preload various files that Firefox will need when started.

Doug’s prototype shows that this can save us something like 1 second of net start-up time, at least on the reference hardware.

We’re still researching this at multiple levels, and haven’t yet determined if this is a thing that we’d eventually want to ship. Stay tuned.

Smoother Tab Animations (In-Progress by Felipe Gomes)

After much ado, simplification, and review back-and-forth, the initial set of new tab animations have landed in Nightly. You can enable them by setting browser.tabs.newanimations to true in about:config and then restarting the browser. These new animations run entirely on the compositor, instead of painting at each refresh driver tick, so they should be smoother than the current animations that we ship.

There are still some cases that need new animations, and Felipe is waiting on UX for those.

Overhauling about:performance (V1 Completed by Florian Quèze)

The new about:performance shipped late last year, and now shows both energy as well as memory usage of your tabs and add-ons.

The current iteration allows you to close the tabs that are hogging your resources. Current plans should allow users to pause JavaScript execution in busy background tabs as well.

Browser Adjustment Project (In-Progress by Gijs Kruitbosch)

Gijs has landed some patches in Nightly (which have recently uplifted to Beta, and are only enabled on early Betas), which lowers the default frame rate of Firefox from 60fps to 30fps on devices that are considered “low-end”1.

This has been on Nightly for a while, but as our Nightly population tends to skew to more powerful hardware, we expect that not many users have experienced the impact there.

At least one user has noticed the lowered frame rate on Beta, and this has highlighted that our CPU sampling code doesn’t take dynamic changes to clock speed into account.

While the lowered frame rate seemed to have a positive impact on page load time in the lab on our “low-end” reference hardware, we’re having a much harder time measuring any appreciable improvement in CI. We have scheduled an experiment to see if improvements are detectable via our Telemetry system on Beta.

We need to be prepared that this particular adjustment will either not have the desired page load improvement, or will result in a poorer quality of experience that is not worth any page load improvement. If that’s the case, we still have a few ideas to try, including:

  • Lowering the refresh driver tick, rather than the global frame rate. This would mean things like scrolling and videos would still render at 60fps, but painting the UI and web content would occur at a lower frequency.
  • Use the hardware vsync again (switching to 30fps turns hardware vsync off), but just paint every other time. This is to test whether or not software vsync results in worse page load times than hardware vsync.

Avoiding spurious about:blank loads in the parent process (Completed by Gijs Kruitbosch)

Gijs short-circuited a bunch of places where we were needlessly creating about:blank documents that we were just going to throw away (see this bug and dependencies). There is still a long tail of cases where we do this, but they’re not the common ones, and we’ve decided to apply effort to other initiatives in the meantime.

Experiments with the Process Priority Manager (In-Progress by Mike Conley)

This was originally Doug Thayer’s project, but I’ve taken it on while Doug focuses on the epic mountain that is WebRender Document Splitting.

If you recall, the goal of this project is to lower the process priority for tabs that are only sitting in the background. This means that if you have tabs in the background that are attempting to use system resources (running JavaScript for example), those tabs will have less priority at the operating system level than tabs that are in the foreground. This should make it harder for background tabs to cause foreground tabs to be starved of processing resources.

After clearing a few final blockers, we enabled the Process Priority Manager by default last week. We also filed a bug to keep background tabs at a higher priority if they’re playing audio and video, and the fix for that just landed in Nightly today.

So if you’re on Windows on Nightly, and you’re curious about this, you can observe the behaviour by opening up the Windows Task Manager, switching to the “Details” tab, and watching the “Base priority” reading on your firefox.exe processes as you switch tabs.

Cheaper tabs in titlebar (Completed by Mike Conley)

After an epic round of review (thanks, Dao!), the patches to move our tabs-in-titlebar logic out of JS and into CSS landed late last year.

Along with simplifying our code, and hammering out at least one pretty nasty layout bug, this also had the benefit of reducing the number of synchronous reflows caused when opening new windows to zero.

This project is done!

Enable the separate Activity Stream content process by default (In-Progress by Mike Conley)

There’s one known bug remaining that’s preventing us from enabling the privileged content process by default.

Thankfully, the cause is understood, and a fix is being worked on. Unfortunately, this is one of those bugs where the proper solution involves refactoring a bit of old crufty stuff, so it’s taking longer than I’d like.

Still, if all goes well, this bug should be closed out soon, and we can see about letting the privileged content process ride the trains.

Grab bag of notable performance work

This is an informal list of things that I’ve seen land in the tree lately that I believe will have a positive performance impact for our users. Have you seen something that you’d like to nominate for a future list? Submit the bug here!

Also, keep in mind that some of these landed months ago and already shipped to release. That’s what I get for taking so long to write a blog post.


  1. For now, “low-end” means a machine with 2 or fewer cores, and a clock speed of 1.8GHz or slower

The Mozilla BlogDoes Your Sex Toy Use Encryption?

This Valentine’s Day, Mozilla is assessing the privacy and security features of romantic connected devices


This Valentine’s Day, use protection.

To be more specific: use encryption and strong passwords.

As the Internet of Things expands, the most intimate devices are coming online. Sex toys and beds now connect to the internet. These devices collect, store, and often share our personal data.

Connected devices in the bedroom can amp up romance. But they can also expose the most intimate parts of our lives. Consumers have the right to know if their latest device has privacy and security features that meet their standards.

So today, Mozilla is releasing a Valentine’s Day supplement to *Privacy Not Included, our annual holiday shopping guide.

Last November, we assessed the privacy and security features of 70 popular products, from Nintendo Switch and Google Home to drones and smart coffee makers. The idea: help consumers shop for gifts by highlighting a product’s privacy features, rather than just price and performance.

Now, we’re assessing 18 more products, just in time for February 14.

We researched vibrators; smart beds and sleep trackers; connected aromatherapy machines; and more.

Our research is guided by Mozilla’s Minimum Security Standards, five basic guidelines we believe all connected devices should adhere to. Mozilla developed these standards alongside our friends at Consumers International and the Internet Society. Our Minimum Security Standards include encrypted communications; automatic security updates; strong, unique passwords; vulnerability management; and an accessible privacy policy.

Of the 18 products we reviewed for this guide, nine met our standards. Among these nine: a smart vibrator that uses encryption and features automatic security updates. A Kegel exerciser that doesn’t share user data with unexpected third parties. And a fitness tracker that allows users to easily delete stored data.

Nine products did not meet our Minimum Security Standards, or weren’t clear enough in their privacy policies or our correspondences for Mozilla to make a determination. Among these nine: a smart vibrator that can be hacked by spoofing requests. And a smart vibrator with no privacy policy at all.

Lastly: This installment once again features the Creep-O-Meter, an emoji-based tool that lets readers share how creepy (or not creepy) they believe a product’s approach to privacy and security is.

Thanks for reading. And Happy Valentine’s Day 💖


Jen Caltrider is Mozilla’s Content Strategy Lead and a co-creator of the guide.

The post Does Your Sex Toy Use Encryption? appeared first on The Mozilla Blog.

Daniel Stenbergcurl 7.64.0 – like there’s no tomorrow

I know, has there been eight weeks since the previous release already? But yes it has – I double-checked! And then as the laws of nature dictates, there has been yet another fresh curl version released out into the wild.

Numbers

the 179th release
5 changes
56 days (total: 7,628)

76 bug fixes (total: 4,913)
128 commits (total: 23,927)
0 new public libcurl functions (total: 80)
3 new curl_easy_setopt() options (total: 265)

1 new curl command line option (total: 220)
56 contributors, 29 new (total: 1,904)
32 authors, 13 new (total: 658)
  3 security fixes (total: 87)

Security fixes

This release we have no less than three different security-related fixes. I’ll describe them briefly here, but for the finer details I advise you to read the dedicated pages and documentation we’ve written for each one of them.

CVE-2018-16890 is a bug where the existing range check in the NTLM code is wrong, which allows a malicious or broken NTLM server to send a header to curl that will make it read outside a buffer and possibly crash or otherwise misbehave.

CVE-2019-3822 is related to the previous but with much worse potential effects. Another bad range check actually allows a sneaky NTLMv2 server to be able to send back crafted contents that can overflow a local stack based buffer. This is potentially in the worst case a remote code execution risk. I think this might be the worst security issue found in curl in a long time. A small comfort is that by disabling NTLM, you will avoid it until patched.

CVE-2019-3823 is a potential read out of bounds of a heap based buffer in the SMTP code. It is fairly hard to trigger and it will mostly cause a crash when it does.

Changes

  1. curl now supports Mike West’s cookie update known as draft-ietf-httpbis-cookie-alone. It basically means that cookies that are set as “secure” have to be set over HTTPS to be allowed to override a previous secure cookie. Safer cookies.
  2. The --resolve option as well as CURLOPT_RESOLVE now support specifying a wildcard as port number.
  3. libcurl can now send trailing headers in chunked uploads using the new options.
  4. curl now offers options to enable HTTP/0.9 responses. The default is still enabled, but the plan is to deprecate that and, in six months’ time, switch the default to off.
  5. curl now uses higher resolution timer accuracy on Windows.

Bug-fixes

Check out the full change log to see the whole list. Here are some of the bug fixes I consider to be most noteworthy:

  • We re-implemented the code coverage support for autotools builds due to a license problem. It turned out the previously used macro was GPLv2 licensed in an unusual way for autoconf macros.
  • We make sure --xattr never stores URLs with credentials, following the security problem reported on a related tool. Not considered a security problem since this is actually what the user asked for, but still done like this for added safety.
  • With -J, curl should not be allowed to append to the file. It could lead to curl appending to a file that already existed in the download directory.
  • --tls-max didn’t work correctly on macOS when built to use Secure Transport.
  • A couple of improvements in the libssh-powered SSH backend.
  • Adjusted the build for OpenSSL 3.0.0 (the coming future version).
  • We no longer refer to Schannel as “winssl” anywhere. winssl is dead. Long live Schannel!
  • When built with mbedTLS, ignore SIGPIPE accordingly!
  • Test cases were adjusted and verified to work fine up until February 2037.
  • We fixed several parsing errors in the URL parser, mostly related to IPv6 addresses. Regressions introduced in 7.62.0.

Next

The next release cycle will be one week shorter and we expect to ship next release on March 27 – just immediately after curl turns 22 years old. There are already several changes in the pipe so we expect that to become 7.65.0.

We love your help and support! File bugs you experience or see, submit pull requests for the features or corrections you work on!

The Firefox FrontierStop texting yourself links. With Send Tabs there’s a better way.

It’s 2019 friends. We don’t have to keep emailing and texting ourselves links. It’s fussy to copy and paste on a mobile device. It’s annoying to have to switch between … Read more

The post Stop texting yourself links. With Send Tabs there’s a better way. appeared first on The Firefox Frontier.

Julien VehentInterviewing tips for junior engineers

I was recently asked by the brother of a friend who is about to graduate for tips about working in IT in the US. His situation is not entirely dissimilar to mine, being a foreigner with a permit to work in America. Below is my reply to him, that I hope will be helpful to other young engineers in similar situations.

For background, I've been working in the US since early 2011. I had a few years of experience as a security engineer in Paris when we moved. I first took a job as a systems engineer while waiting for my green card, then joined a small tech company in the email marketing space to work on systems and security, then joined Mozilla in 2013 as a security engineer. I've been at Mozilla for almost six years, now running the Firefox operations security team with a team of six people scattered across the US, Canada and the UK. I've been hiring engineers for a few years in various countries.


Are there any skills or experiences beyond programming required to be an interesting candidate for foreign employers?

You're just getting started in your career, so employers will mostly look for technical proficiency and being a pleasant person to work with. Expectations are fairly low at this stage. If you can solve technical puzzles and people enjoy talking to you, you can pass the bar at most companies.

This changes after 5-ish years of experience, when employers want to see project management, technical leadership, and maybe a hint of people management too. But for now, I wouldn't worry about it.

I would say the most important thing is to have a plan: where do you want to be 10/15/20 years from now? My aim was toward Chief Security Officer roles, an executive position that requires technical skills, strategic thinking, communication, risk and project management, etc. It takes 15 to 20 years to get to that level, so I picked jobs that progressively gave me the experience needed (and that were a lot of fun, because I'm a geek).


Were there any challenges that you faced, other than immigration, that I may need to be aware of as a foreign candidate?

Immigration is the only problem to solve. When I first applied for jobs in the US, I was on a J-1 visa doing my Master’s internship at University of Maryland. I probably applied to 150 jobs and didn’t get a single reply, most likely because no one wants to hire candidates that need a visa (a long and expensive process). I ended up going back to France for a couple years, and came back after obtaining my green card, which took the immigration question out of the way. So my advice here is to settle the immigration question in the country where you want to work before you apply for jobs, or if you need employer support to get a visa, be up front about it when talking to them. (I have several former students who now work for US companies that have hiring processes that incorporate visa applications, so it’s not unheard of, but the bar is high).

I have a Master of Science from a small French university that is completely unknown in the US, yet that never came up as an issue. A Master is a Master. Should degree equivalence come up as an issue during an interview, offer to provide a grade comparison between US GPA and your country’s grades (some paid services provide that). That should put employers at ease.

Language has also never been an issue for me, even with my strong French accent. If you’re good at the technical stuff, people won’t pay attention to your accent (at least in the US). And I imagine you’re fully fluent anyway, so having a deep-dive architecture conversation in English won’t be a problem.


And lastly, do you have any advice on how to stand out from the rest of the candidates aside from just a good resume (maybe some specific volunteer experience or unique skills that I could gain while I am still finishing my thesis?)

Resumes are mostly useless. I spend between 30 and 60 seconds on a candidate’s resume. The problem is most people pad their resumes with a lot of buzzwords and fancy-sounding projects I cannot verify, and thus cannot trust. It would seem the length of a resume is inversely proportional to the actual skills of a candidate. Recruiters use them to check for minimal requirements: has the right level of education, knows programming language X, has the right level of experience. Engineering managers will instead focus on actual technical questions to assess your skills.

At your level, keep your resume short. My rule of thumb is one page per five years of experience (you should only have a single page, I have three). This might contradict advice you’re reading elsewhere that recommends putting your entire life story in your resume, so if you’re concerned about not having enough details, make two versions: the one-page short overview (linkedin-style), and the longer version hosted on your personal site. Offer a link to the longer version in the short version, so people can check it out if they want to, but most likely won’t. Recruiters have to go through several hundred candidates for a single position, so they don’t have time, or care for, your life story. (Someone made a short version of Marissa Mayer’s resume that I think speaks volumes).

Make sure to highlight any interesting projects you worked on, technical or otherwise. Recruiters love discussing actual accomplishments. Back when I started, I had a few open source projects and articles written for technical magazines that I put on my resume. Nowadays, that would be a GitHub profile with personal (or professional, if you're lucky) projects. You don't need to rewrite the Linux kernel, but if you can publish a handful of tools you developed over the years, it'll help validate your credentials. Just don't go forking fancy projects to pad your GitHub profile; it won't fool anyone (I know, it sounds silly, but I see that all too often).

Another thing recruiters love is Hackerrank, a coding challenge website used by companies to verify the programming skills of prospective candidates. It's very likely US companies will send you some sort of coding challenge as part of the interview process (we even do it before talking to candidates nowadays). My advice is to spend a few weekends building a profile on Hackerrank and getting used to the types of puzzles they ask. This is similar to what the GAFA ask for in technical interviews ("quicksort on a whiteboard" type of questions).

At the end of the day, I expect a junior engineer to be smart and excited about technology, if not somewhat easily distracted. Those are good qualities to show during an interview and on your resume.

My last piece of advice would be to pick a path you want to follow for the next few years and be very clear about it when interviewing. You should have a goal. You have a lot of degrees, and recruiters will ask you what you're looking for. So if you want to go into the tech world, be up front about it and tell them you want to focus on engineering for the foreseeable future. In my experience, regardless of your level of education, you need to start at the bottom of the ladder and climb your way up. A solid education will help you climb a lot faster than other folks, and you could reach technical leadership in just a couple years at the right company, or move to a large corporation and use that fancy degree to climb the management path. In both cases, I would recommend getting those first couple years of programming and engineering work under your belt first. Heck, even the CTO of Microsoft started as a mere programmer!



I hope this helps folks who are getting started. And I'm always happy to answer questions from junior engineers, so don't hesitate to reach out!

Daniel StenbergMy 10th FOSDEM

I didn’t present anything during last year’s conference, so I submitted my DNS-over-HTTPS presentation proposal early on for this year’s FOSDEM. Someone suggested it was generic enough that I should ask for a main track slot rather than the DNS room, and so I did. Then time passed, and in November 2018 “HTTP/3” was officially coined as a real term; then, after the Mozilla devroom’s deadline had been extended for a week, I filed my second proposal. I might possibly even have been an hour or two after the deadline. I hoped at least one of them would be accepted.

Not only were both my proposed talks accepted, I was also approached and couldn’t decline the honor of participating in the DNS privacy panel. Ok, three slots in the same FOSDEM is a new record for me, but hey, surely that’s no problem for a grown-up…

HTTP/3

I of course hoped there would be interest in what I had to say.

I spent the time immediately before my talk with a coffee in the awesome newly opened cafeteria area, to have a moment of calm before I started. I then headed over to the U2.208 room maybe half an hour before the start time.

It was packed. Quite literally there were hundreds of persons waiting in the area outside the U2 rooms and there was this totally massive line of waiting visitors queuing to get into the Mozilla room once it would open.

The “Sorry, this room is FULL” sign is commonly seen on FOSDEM.

People don’t know who I am by my appearance so I certainly didn’t get any special treatment, waiting for my talk to start. I waited in line with the rest and when the time for my presentation started to get closer I just had to excuse myself, leave my friends behind and push through the crowd. I managed to get a “sorry, it’s full” told to me by a conference admin before one of the room organizers recognized me as the speaker of the next talk and I could walk by a very long line of humans that eventually would end up not being able to get in. The room could fit 170 souls, and every single seat was occupied when I started my presentation just a few minutes late.

This presentation could have filled a much larger room. Two years ago my HTTP/2 talk filled up the 300 seat room Mozilla had that year.

Video

Video from my HTTP/3 talk. Duration 1 hour.

The slides from my HTTP/3 presentation.

DNS over HTTPS

I tend to need a little “landing time” after having done a presentation to cool off and come back to normal senses and adrenaline levels again. I got myself a lunch and a beer and chatted with friends in the cafeteria (again). During this conversation, it struck me that I had forgotten something in my coming presentation, and I added a slide that I felt would improve it (the screenshot showing “about:networking#dns” output with DoH enabled). In what felt like no time, it was time to move again. I walked over to Janson, the giant hall that fits 1,470 persons, which I entered a few minutes ahead of my scheduled time and began setting up my machine.

I started off with a little technical glitch: the projector was correctly detected and set up as a second screen on my laptop, but it used a resolution that was too high for it. After just a short moment of panic I lowered the resolution on that screen manually and the image appeared fine. Phew! With a slightly raised pulse, I witnessed the room fill up. Almost full. I estimate over 90% of the seats were occupied.

The DNS over HTTPS talk seen from far back. Photo by Steve Holme.

This was a brand new talk with all new material and I performed it for the largest audience I think I’ve ever talked in front of.

Video

Video of my DNS over HTTPS presentation. Duration 50 minutes.

To no surprise, my talk triggered questions and objections. I spent a while in the corridor behind Janson afterward, discussing DoH details, the future of secure DNS and other subtle points of the different protocols involved. In the end I think I managed pretty well, and I had expected more arguments and more tough questions. This is after all the single topic I’ve had more abuse and name-calling for than anything else I’ve ever worked on before in my 20+ years in Internet protocols. (After all, I now often refer to myself and what I do as webshit.)

My DNS over HTTPS slides.

DNS Privacy panel

I never really intended to involve myself in DNS privacy discussions, but due to the constant misunderstandings and mischaracterizations (both on purpose and by ignorance) sometimes spread about DoH, I’ve felt a need to stand up for it a few times. I think that was a contributing factor to me getting invited to be part of the DNS privacy panel that the organizers of the DNS devroom setup.

There are several problems and challenges left to solve before we’re in a world with correctly and mostly secure DNS. DoH is one attempt to raise the bar. I was glad to have had the opportunity to really spell out my view of things before the DNS privacy panel.

Sitting next to these giants from the DNS world, Stéphane Bortzmeyer and Bert Hubert, I discussed DoT, DoH, DNS centralization, user choice, quad-DNS hosters and more. The discussion didn’t get very heated; instead, I think it showed that we’re all largely in agreement that we need more secure DNS, and that there are obstacles in the way forward that we need to work further on to overcome. Moderator Jan-Piet Mens did an excellent job, I think, handing over the word, juggling the questions and taking in questions from the audience.

Video

Video from the DNS Privacy panel. Duration 30 minutes.

Ten years, ten slots

Appearing in three scheduled slots during the same FOSDEM was a bit much, and it effectively made me not attend many other talks. They were all great fun to do though, and I appreciate people giving me the chance to share my knowledge and views with the world. As usual, everything was very nicely organized and handled. The videos of each presentation are linked above.

I met many people, old and new friends. I handed out a lot of curl stickers and I enjoyed talking to people about my recently announced new job at wolfSSL.

After ten consecutive annual visits to FOSDEM, I have appeared in ten program slots!

I fully intend to go back to FOSDEM again next year. For all the friends, the waffles, the chats, the beers, the presentations and then for the waffles again. Maybe I will even present something…

This Week In RustThis Week in Rust 272

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is log-derive, a procedural macro to log function outputs. Thanks to elichai2 for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

157 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs

New RFCs

Upcoming Events

Online
Asia Pacific
Europe
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

This time, we have two quotes for the price of one:

The borrow checker breaks you down so that it can build you back up, stronger and more resilient than you once were. It also had me do all sorts of weird things like catch flies with chopsticks and scrub counters to a polish.

– /u/bkv on /r/rust

I always think of borrowck as an angel sitting on your shoulder, advising you not to sin against the rules of ownership and borrowing, so your design will be obvious and your code simple and fast.

– llogiq on /r/rust

Thanks to Christopher Durham for the suggestion!

Please submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and Flavsditz.

Discuss on r/rust.

The Mozilla BlogPutting Users and Publishers at the Center of the Online Value Exchange

Publishers are getting a raw deal in the current online advertising ecosystem. The technology they depend on to display advertisements also ensures they lose the ability to control who gets their users’ data and who gets to monetize that data. With third-party cookies, users can be tracked from high-value publishers to sites they have never chosen to trust, where they are targeted based on their behavior from those publisher sites. This strips value from publishers and fuels rampant ad fraud.

In August, Mozilla announced a new anti-tracking strategy intended to get to the root of this problem. That strategy includes new restrictions on third-party cookies that will make it harder to track users across websites and that we plan to turn on by default for all users in a future release of Firefox. Our motive for this is simple: online tracking is unacceptable for our users and puts their privacy at risk. We know that a large portion of desktop users have installed ad blockers, showing that people are demanding more online control. But our approach also offers an opportunity to rebalance the ecosystem in a way that is in the long-term interest of publishers.

There needs to be a profitable revenue ecosystem on the web in order to create, foster and support innovation. Our third-party cookie restrictions will allow loading of advertising and other types of content (such as videos and sponsored articles), but will prevent the cookie-based tracking that users cannot meaningfully control. This strikes a better balance for publishers than ad blocking – user data is protected and publishers are still able to monetize page visits through advertisements and other content.

Our new approach will deliver both upsides and downsides for publishers, and we want to be clear about both. On the upside, by removing more sophisticated, profile-based targeting, we are also removing the technology that allows other parties to siphon off data from publishers. Ad fraud that depends on third-party cookies to track users from high-value publishers to low-value fraudster sites will no longer work. On the downside, our approach will make it harder to do targeted advertising that depends on cross-site browsing profiles, possibly resulting in an impact on the bottom line of companies that depend on behavioral advertising. Targeting that depends on the context (i.e. what the user is reading) and location will continue to be effective.

In short, behavioral targeting will become more difficult, but publishers should be able to recoup a larger portion of the value overall in the online advertising ecosystem. This means the long-term revenue impact will be on those third-parties in the advertising ecosystem that are extracting value from publishers, rather than bringing value to those publishers.

We know that our users are only one part of the equation here; we need to go after the real cause of our online advertising dysfunction by helping publishers earn more than they do from the status quo. That is why we need help from publishers to test the cookie restrictions feature and give us feedback about what they are seeing and what the potential impact will be. Reach out to us at publisher-feedback@mozilla.com. The technical documentation for these cookie restrictions can be found here. To test this feature in Firefox 65, visit “about:preferences#privacy” using the address bar. Under “Content Blocking” click “Custom”, click the checkbox next to “Cookies”, and ensure the dropdown menu is set to “Third-party trackers”.

We look forward to working with publishers to build a more sustainable model that puts them and our users first.

The post Putting Users and Publishers at the Center of the Online Value Exchange appeared first on The Mozilla Blog.

Nika LayzellFission Engineering Newsletter #1

TL;DR Fission is happening and our first "Milestone" is targeted at the end of February. Please file bugs related to fission and mark them as "Fission Milestone: ?" so we can triage them into the correct milestone.

A little more than a year ago, a serious security flaw affecting almost all modern processors was publicly disclosed. Three known variants of the issue were announced, dubbed Spectre (variants 1 and 2) and Meltdown (variant 3). Spectre abuses a CPU optimization technique known as speculative execution to exfiltrate secret data stored in the memory of other running programs via side channels. This might include cryptographic keys, passwords stored in a password manager or browser, cookies, etc. This timing attack posed a serious threat to browsers because webpages often serve JavaScript from multiple domains that run in the same process. This vulnerability would enable malicious third-party code to steal sensitive user data belonging to a site hosting that code, a serious flaw that would violate a web security cornerstone known as the same-origin policy.

Thanks to the heroic efforts of the Firefox JS and Security teams, we were able to mitigate these vulnerabilities right away. However, these mitigations may not save us in the future if another vulnerability is discovered that exploits the same underlying problem of sharing processes (and hence memory) between different domains, some of which may be malicious. Chrome spent multiple years working to isolate sites in their own processes.

We aim to build a browser which isn't just secure against known security vulnerabilities, but also has layers of built-in defense against potential future vulnerabilities. To accomplish this, we need to revamp the architecture of Firefox and support full Site Isolation. We call this next step in the evolution of Firefox’s process model "Project Fission". While Electrolysis split our browser into Content and Chrome, with Fission, we will "split the atom", splitting cross-site iframes into different processes than their parent frame.

Over the last year, we have been working to lay the groundwork for Fission, designing new infrastructure. In the coming weeks and months, we’ll need help from all Firefox teams to adapt our code to a post-Fission browser architecture.

Planning and Coordination

Fission is a massive project, spanning across many different teams, so keeping track of what everyone is doing is a pretty big task. While we have a weekly project meeting, which someone on your team may already be attending, we have started also using a Bugzilla project tracking flag to keep track of the work we have in progress.

Now that we've moved past much of the initial infrastructure ground work, we are going to keep track of work with our milestone targets. Each milestone will contain a collection of new features and improved functionality which brings us incrementally closer to our goal.

Our first milestone, "Milestone 1" (clever, I know), is currently targeted for the end of February. In Milestone 1, we plan to have the groundwork for out-of-process iframes, which encompasses some major work, including, but not limited to, the following contributions:

  • :rhunt is implementing basic out-of-process iframe rendering behind a pref. (Bug 1500257)
  • :jdai is implementing native JS Window Actor APIs to migrate FrameScripts. (Bug 1467212)
  • :farre is adding support for BrowsingContext fields to be synchronized between multiple content processes. (Bug 1523645)
  • :peterv has implemented new cross-process WindowProxy objects to correctly emulate the Window object APIs exposed to cross-origin documents. (Bug 1353867)
  • :mattn is converting the FormAutoFillListeners code to the actors infrastructure. (Bug 1474143)
  • :felipe simulated the Fission API for communicating between parent and child processes. (Bug 1493984)
  • :heycam is working on sharing UA stylesheets between processes. (Bug 1474793)
  • :kmag, :erahm and many others have reduced per-process memory overhead!
  • :jld is working on async process launches
  • :dragana, :kershaw and others are moving networking logic into a socket process. (Bug 1322426)
  • ...and so much more!

If you want an up-to-date view of Milestone 1, you can see the current Milestone 1 status on Bugzilla.

If you have a bug which may be relevant to fission, please let us know by setting the "Fission Milestone" project flag to '?'. We'll swing by and triage it into the correct milestone.

Setting Fission Milestone Project Flag

If you have any questions, feel free to reach out to one of us, and we'll get you answers, or guide you to someone who can:

  • Ron Manning <rmanning@mozilla.com> (Fission Engineering Project Manager)
  • Nika Layzell <nika@mozilla.com> (Fission Tech Lead)
  • Neha Kochar <nkochar@mozilla.com> (DOM Fission Engineering Manager)

What's Changing?

In order to make each component of Firefox successfully adapt to a post-Fission world, many of them are going to need changes of varying scale. Covering all of the changes which we're going to need would be impossible within a single newsletter. Instead, I will focus on the changes to actors, messageManagers, and document hierarchies.

Today, Firefox has process separation between the UI - run in the parent process, and web content - run in content processes. Communication between these two trees of "Browsing Contexts" is done using the TabParent and TabChild actors in C++ code, and Message Managers in JS code. These systems communicate directly between the "embedder", which in this case is the <browser> element, and the root of the embedded tree, which in this case would be the toplevel DocShell in the tab.

However, in a post-Fission world, this layer for communication is no longer sufficient. It will be possible for multiple processes to render distinct subframes, meaning that each tab has multiple connected processes.

Components will need to adapt their IPC code to work in this new world, both by updating their use of existing APIs, and by adapting to use new Actors and APIs which are being added as part of the Fission project.

Per-Window Global Actors

For many components, the full tree of Browsing Contexts is not important, rather communication is needed between the parent process and any specific document. For these cases, a new actor has been added which is exposed both in C++ code and JS code called PWindowGlobal.

Unlike other actors in gecko, such as Tab{Parent,Child}, this actor exists for all window globals, including those loaded within the parent process. This is handled using a new PInProcess manager actor, which supports sending main thread to main thread IPDL messages.

JS code running within a FrameScript may not be able to inspect every frame at once, and won't be able to handle events from out of process iframes. Instead, it will need to use our new JS Window Actor APIs, which we are targeting to land in Milestone 1. These actors are "managed" by the WindowGlobal actors, and are implemented as JS classes instantiated when requested for any particular window. They support sending async messages, and will be present for both in-process and out-of-process windows.
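
The exact shape of this API was still being implemented at the time of writing, so the following is only a rough, hypothetical sketch of what a JS Window Actor pair and its registration might look like; the actor name, module URIs, event and message names are illustrative assumptions, not the final API surface:

    // Registration, done once from parent-process chrome code.
    ChromeUtils.registerWindowActor("MyFeature", {
      parent: { moduleURI: "resource://myfeature/MyFeatureParent.jsm" },
      child: {
        moduleURI: "resource://myfeature/MyFeatureChild.jsm",
        events: { DOMContentLoaded: {} },
      },
    });

    // MyFeatureChild.jsm: instantiated per window global, in whichever process hosts it.
    class MyFeatureChild extends JSWindowActorChild {
      handleEvent(event) {
        // Forward information about this document to the parent-process side.
        this.sendAsyncMessage("MyFeature:Loaded", { url: this.document.documentURI });
      }
    }

    // MyFeatureParent.jsm: runs in the parent process, one instance per window global.
    class MyFeatureParent extends JSWindowActorParent {
      receiveMessage(message) {
        // React to data sent from the child for this specific window global.
        console.log("Document loaded in a content process:", message.data.url);
      }
    }

The key design point is that an actor pair exists per window global, so messages are always scoped to a single document rather than to a whole tab.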

C++ logic which walks the frame tree from the TabChild may stop working. Instead, C++ code may choose to use the PWindowGlobal actor to send messages in a manner similar to JS code.

BrowsingContext objects

C++ code may also maintain shared state on the BrowsingContext object. We are targeting landing the field syncing infrastructure in Milestone 1, and it will provide a place to store data which should be readable by all processes with a view of the structure.

The parent process holds a special subclass of the BrowsingContext object: CanonicalBrowsingContext. This object has extra fields which can be used in the parent to keep track of the current status of all frames in one place.

TabParent, TabChild and IFrames

The Tab{Parent,Child} actors will continue to exist, and will always bridge from the parent process to a content process. However, in addition to these actors being present for toplevel documents, they will also be present for out-of-process subtrees.

As an example, consider the following tree of nested browsing contexts:

         +-- 1 --+
         | a.com |
         +-------+
          /     \
    +-- 2 --+ +-- 4 --+
    | a.com | | b.com |
    +-------+ +-------+
        |         |
    +-- 3 --+ +-- 5 --+
    | b.com | | b.com |
    +-------+ +-------+

Under e10s, we have a single Tab{Parent,Child} pair for the entire tab, which would connect to 1, and FrameScripts would run with content being 1's global.

After Fission, there will still be a Tab{Parent,Child} actor for the root of the tree, at 1. However, there will also be two additional Tab{Parent,Child} actors: one at 3 and one at 4. Each of these nested TabParent objects is held alive in the parent process by a RemoteFrameParent actor whose corresponding RemoteFrameChild is held by the embedder's iframe.

The following is a diagram of the documents and actors which build up the actor tree, excluding the WindowGlobal actors. RF{P,C} stands for RemoteFrame{Parent,Child}, and T{P,C} stands for Tab{Parent,Child}. The RemoteFrame actors are managed by their embedding Tab actors, and use the same underlying transport.

- within a.com's process -

         +-------+
         | TC: 1 |
         +-------+
             |
         +-- 1 --+
         | a.com |
         +-------+
          /     \
    +-- 2 --+ +-------+
    | a.com | | RFC:2 |
    +-------+ +-------+
        |
    +-------+
    | RFC:1 |
    +-------+

- within b.com's process -

    +-------+    +-------+
    | TC: 2 |    | TC: 3 |
    +-------+    +-------+
        |            |
    +-- 3 --+    +-- 4 --+
    | b.com |    | b.com |
    +-------+    +-------+
                     |
                 +-- 5 --+
                 | b.com |
                 +-------+

- within the parent process -

         +-------+
         | TP: 1 |
         +-------+
          /     \    (manages)
    +-------+ +-------+
    | RFP:1 | | RFP:2 |
    +-------+ +-------+
        |         |
    +-------+ +-------+
    | TP: 2 | | TP: 3 |
    +-------+ +-------+

This Newsletter

I hope to begin keeping everyone updated on the latest developments with Fission over the coming months, but am not quite ready to commit to a weekly or bi-weekly newsletter schedule.

If you're interested in helping out with the newsletter, please reach out and let me (Nika) know!


Thanks for reading, and best of luck splitting the atom!

The Project Fission Team

Gijs KruitboschGetting Firefox artifact builds working on an arm64/aarch64 windows device

If, like me, you’re debugging a frontend issue and you think “I can just create some artifact builds on this device”, you might run into a few issues. In the main, they’re caused by various bits of the build system attempting to use 64-bit x86 binaries. arm64 can run 32-bit x86 code under emulation, but not 64-bit. Here are the issues I encountered, chronologically.

  1. The latest mozilla-build doesn’t work, because it only supplies 64-bit tools. Using an older version (2.2) does work.
    1. Note: Obviously this comes with older software. You should update pip and mercurial (using pip). I’d recommend not using the old software to connect to anything you don’t trust, no warranty, etc. etc.
  2. You can now hg clone mozilla-central. It’ll take a while. You can also use pip to install/run other useful things, like mozregression (which seems to work but chokes when trying to kill off and delete Firefox processes when done; unsure why).
  3. Running ./mach bootstrap mostly works if you pick artifact builds, but:
    1. It tries to install 64-bit x86 rustup, which doesn’t work (this bug should be fixed soon). Commenting out this line makes things work.
    2. It installs a 64-bit version of NodeJS, which also won’t work. You’ll want to remove ~/.mozbuild/node, download the 32-bit windows .zip from the NodeJS website and extract the contents at ~/.mozbuild/node to placate it.
  4. If you now try to build, configure will choke on the lack of python3. This isn’t part of the old mozilla-build package, so you want to download the latest python3 version as an “embeddable zip file” from the python website, and extract it to path/to/mozilla-build/python3. Then you will also want to make a copy of python.exe in that directory available as python3.exe, because that’s the path mozilla-build expects.
  5. Next, configure will choke on the 64-bit version of watchman.exe that mozilla-build has helpfully provided. Rename the watchman directory inside mozilla-build (or delete it if you’re feeling vengeful) to deal with this.

That’s it! Now artifact builds should work – or at least, they did for me. Some of the issues are caused by bootstrap, and thus fixable, but obviously we can’t retrospectively change an old copy of mozilla-build. I’ve filed a bug to provide mozilla-build for aarch64.

Hacks.Mozilla.OrgFirefox 66 to block automatically playing audible video and audio

Isn’t it annoying when you click on a link or open a new browser tab and audible video or audio starts playing automatically?

We know that unsolicited volume can be a great source of distraction and frustration for users of the web. So we are making changes to how Firefox handles playing media with sound. We want to make sure web developers are aware of this new autoplay blocking feature in Firefox.

Starting with the release of Firefox 66 for desktop and Firefox for Android, Firefox will block audible audio and video by default. We only allow a site to play audio or video aloud via the HTMLMediaElement API once a web page has had user interaction to initiate the audio, such as the user clicking on a “play” button.

Any playback that happens before the user has interacted with a page via a mouse click, printable key press, or touch event, is deemed to be autoplay and will be blocked if it is potentially audible.

Muted autoplay is still allowed. So script can set the “muted” attribute on HTMLMediaElement to true, and autoplay will work.

We expect to roll out audible autoplay blocking enabled by default, in Firefox 66, scheduled for general release on 19 March 2019. In Firefox for Android, this will replace the existing block autoplay implementation with the same behavior we’ll be using in Firefox on desktop.

There are some sites on which users want audible autoplay audio and video to be allowed. When Firefox for Desktop blocks autoplay audio or video, an icon appears in the URL bar. Users can click on the icon to access the site information panel, where they can change the “Autoplay sound” permission for that site from the default setting of “Block” to “Allow”. Firefox will then allow that site to autoplay audibly. This allows users to easily curate their own whitelist of sites that they trust to autoplay audibly.

Firefox expresses a blocked play() call to JavaScript by rejecting the promise returned by HTMLMediaElement.play() with a NotAllowedError. All major browsers which block autoplay express a blocked play via this mechanism. In general, the advice for web authors when calling HTMLMediaElement.play(), is to not assume that calls to play() will always succeed, and to always handle the promise returned by play() being rejected.

If you want to avoid having your audible playback blocked, you should only play media inside a click or keyboard event handler, or on mobile in a touchend event. Another strategy to consider for video is to autoplay muted, and present an “unmute” button to your users. Note that muted autoplay is also currently allowed by default in all major browsers which block autoplay media.
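
Putting that advice together, here is a minimal sketch of the recommended pattern, assuming a page with a video element and a play button (the element IDs are illustrative):

    const video = document.getElementById("myVideo");

    document.getElementById("playButton").addEventListener("click", () => {
      // play() is called from a click handler, so it counts as user-initiated playback.
      const promise = video.play();
      if (promise !== undefined) {
        promise.catch((error) => {
          // Playback was blocked (e.g. with a NotAllowedError). Fall back to muted
          // playback and surface an "unmute" control instead of assuming success.
          video.muted = true;
          video.play();
        });
      }
    });

Either way, treat a rejected play() promise as an expected outcome rather than an error, since users may have autoplay blocking enabled in situations you did not anticipate.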

We are also allowing sites to autoplay audibly if the user has previously granted them camera/microphone permission, so that sites which have explicit user permission to run WebRTC should continue to work as they do today.

At this time, we’re also working on blocking autoplay for Web Audio content, but have not yet finalized our implementation. We expect to ship with autoplay Web Audio content blocking enabled by default sometime in 2019. We’ll let you know!

The post Firefox 66 to block automatically playing audible video and audio appeared first on Mozilla Hacks - the Web developer blog.

The Servo BlogThis Week In Servo 125

In the past two weeks, we merged 80 PRs in the Servo organization’s repositories.

If Windows nightlies have crashed at startup in the past, try the latest nightly!

Planning and Status

Our roadmap is available online. Plans for 2019 will be published soon.

This week’s status updates are here.

Exciting works in progress

  • ferjm is implementing parts of the Shadow DOM API in order to support UI for media controls and complex form controls.

Notable Additions

  • Manishearth implemented initial support for getUserMedia and other WebRTC APIs.
  • jdm fixed an issue preventing some brotli-encoded content from loading correctly.
  • Hyperion101010 improved the debug output when interacting with large URLs.
  • UK992 bundled all required DLLs in the Servo package for Windows.
  • CYBAI implemented support for the new formdata DOM event.
  • gterzian made the background hang reporter more resilient to unexpected stack values.

New Contributors

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!

Tom SchusterMozilla JS Holiday Update

Bas SchoutenMQ to Changeset Evolution: A Dummy Guide

So, I haven't had time to post on here for a long time, but I realized there's a problem a bunch of us at Mozilla are facing: we've been using Mercurial Queues (MQ) for I don't know how long, but we're increasingly facing a toolchain that isn't compatible with the MQ workflow. I found patches in my queue inadvertently being converted into actual commits, and other such things. I'm no expert on version control systems, and as such Mercurial Queues provided me with an easy method of just having a bunch of patches and a file which orders them, which was easy for me to understand and work with. Seeing an increasing number of tools not supporting it, though, I decided to make the switch, and I'd like to document my experience here. Some of my suggestions may not be optimal, so please let me know if any of them are unwise. I also use Windows as my primary OS; mileage on other operating systems may vary, but hopefully not by much.

First, preparation: make sure you've got the latest version of Mercurial using ./mach bootstrap. When you get to the Mercurial configuration wizard, enable all the history editing and evolve extensions; you will need them for this to work.

Now, to go through the commands, first, the basics:

hg qnew basically just becomes hg ci. We're going to assume none of our commits are necessarily permanent, and that we're fine having hidden, dead branches in our repository.

hg qqueue is largely replaced by hg bookmark, which allows you to create a new 'bookmarked branch', list the bookmarked branches, and see which is active. An important difference is that bookmarks describe the tips of the different branches. Making new commits on top of a bookmark will move the bookmark along with those commits.

hg up [bookmark name] will activate a different bookmark, hopping to the tip of that branch.

hg qpop, once you've created a new commit, becomes hg prev. An important thing to note is that unlike with qpop, 'tip' will remain the tip of your current bookmark. Also note that unlike with qpop, you can 'prev' right past the root of your patch set and through the destination repository, so make sure you're at the right changeset! It's also important to note that this deactivates the current bookmark.

Once you've popped all the way to the tree you're working on top of, you can just use hg ci and hg bookmark again to start a new bookmarked branch (or queue, if you will).

hg qpush, when you haven't made any changes, basically becomes hg next. It will take you to the next changeset; if there are multiple branches coming off here, it will offer you a prompt to select which one you'd like to continue on.

Making changes inside a queue

Now this is where it gets a little more complicated. There are essentially two ways one could make changes to an existing queue. First, there is the most common action of changing an existing changeset in the queue, which is fairly straightforward:

  • Use hg prev to go to the changeset you wish to modify, much like in mq
  • Make your changes and use hg ci --amend much like you would hg qref; this will orphan its existing children
  • Use hg next --evolve as a qpush for your changesets; this will rebase them back on top of your change, and offer a 3-way merging tool if needed.

In short: qpop, make change, qref, qpush becomes prev, make change, ci --amend, next --evolve.
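
For example, here is a sketch of that mapping as you might type it at a shell prompt (the flags shown are the common variants; adjust to your setup):

    # Old MQ workflow:
    hg qpop             # pop patches until the one you want to change is on top
    # ... edit files ...
    hg qref             # fold the edits into the current patch
    hg qpush -a         # push the rest of the queue back on top

    # Changeset evolution equivalent:
    hg prev             # step back to the changeset you want to change
    # ... edit files ...
    hg ci --amend       # amend it, orphaning its descendants
    hg next --evolve    # rebase the descendants back on top

Note that hg next --evolve may need to be repeated once per descendant changeset in your queue.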

The second method to make changes inside a queue is to add a changeset in between two changesets already in the queue. In the past this was straightforward: you qpopped, made changes, qnewed, and just happily qpushed the rest of your queue on top of it. The new method is this:

  • Use hg prev to go to the changeset you wish to modify, much like in mq
  • Make your changes and use hg ci much like you would hg qnew; this will create a new branching point
  • Use hg rebase -b [bookmark name/revision]; this will rebase your queue back on top of your change, and offer a 3-way merging tool if needed.
  • Use hg next to go back down your 'queue'

Pushing

Basically, hg qfin is no longer needed: you go to the changeset you want to push, and you can push everything up to and including that changeset directly to, for example, central. ./mach try also seems to work as expected and submits the changeset you're currently at.

Some additional tips

I've found the hg absorb extension to be quite powerful in combination with the new system, particularly when processing review comments. Essentially, you make a bunch of changes at the tip of your patch queue, run the command, and based on where the changes are it will attempt to figure out which commits they belong to and amend those commits with the changes, without you ever having to leave the tip of your branch. This not only takes away a bunch of work, it also means you don't retouch all of the files affected by your queue, greatly reducing rebuild times.

I've found that being able to create additional branching points (or queues, if you will) off some existing work is, on occasion, a helpful addition to the abilities I had with Mercurial Queues.

Final Thoughts

In the end, I like my versioning system not to get in the way of my work, and I'm not necessarily convinced that the benefits outweigh the cost of learning a new system or the slightly more complex actions required for what to me are the more common operations. But with the extensions now available I can keep my workflow mostly the same, with the added benefit of hg absorb. I hope this guide will make the transition easy enough that in the end most of us can be satisfied with the switch.

If I missed something, am in error somewhere, or if a better guide exists out there somewhere (I wasn't able to find one or I wouldn't have written this :-)), do let me know!

Daniel StenbergI’m on team wolfSSL

Let me start by saying thank you to everyone who sent me job offers or otherwise reached out with suggestions and interesting career moves. I received more than twenty different offers, and almost every one of them was a truly good option that I could have said yes to and still pulled home a good job. What a luxury challenge to have to select something from that! Publicly announcing that I was leaving Mozilla turned out to be a great ego boost.

I took some time off to really reflect and contemplate on what I wanted from my next career step. What would the right next move be?

I love working on open source. Internet protocols, and transfers and doing libraries written in C are things considered pure fun for me. Can I get all that and yet keep working from home, not sacrifice my wage and perhaps integrate working on curl better in my day to day job?

I talked to different companies. Very interesting companies too, where I have friends and people who like me and who really wanted to get me working for them, but in the end there was one offer with a setup that stood out. One offer for which basically all check marks in my wish-list were checked.

wolfSSL

On February 5, 2019 I’m starting my new job at wolfSSL. My short and sweet period as unemployed is over and now it’s full steam ahead again! (Some members of my family have expressed that they haven’t really noticed any difference between me having a job and me not having a job as I spend all work days the same way nevertheless: in front of my computer.)

Starting now, we offer commercial curl support and various services for and around curl that companies and organizations previously really haven’t been able to get. Time I do not spend on curl related activities for paying customers I will spend on other networking libraries in the wolfSSL “portfolio”. I’m sure I will be able to keep busy.

I’ve met Larry at wolfSSL physically many times over the years and every year at FOSDEM I’ve made certain to say hello to my wolfSSL friends in their booth they’ve had there for years. They’re truly old-time friends.

wolfSSL is mostly a US-based company – I’m the only Swede on the team and the only one based in Sweden. My new colleagues all of course know just as well as you that I’m prevented from traveling to the US. All coming physical meetings with my work mates will happen in other countries.

commercial curl support!

We offer all sorts of commercial support for curl. I’ll post separately with more details around this.

Firefox UXBias and Hiring: How We Hire UX Researchers

This year, the Firefox User Research team is planning to add two new researchers to our group. The job posting went live last month, and after just a few weeks of accepting applications, we had over 900 people apply.

Current members of the Firefox User Research Team fielded dozens of messages from prospective applicants during this time, most asking for informational meetings to discuss the open role. We decided as a team to decline these requests across the board because we did not have the bandwidth for the number of meetings requested, and more importantly we have spent a significant amount of time this year working on minimizing bias in our hiring process.

We felt that meeting with candidates outside of the formal hiring process would give unfair advantage to some candidates and undermine our de-biasing work. At the same time, in alignment with Mozilla’s values and to build on Mozilla’s diversity and inclusion disclosures from earlier this year, we realized there was an opportunity to be more transparent about our hiring process for the benefit of future job applicants and teams inside and outside Mozilla thinking about how they can minimize bias in their own hiring.

Our Hiring Process Before This Year

Before this year, our hiring process consisted of a number of steps. First, a Mozilla recruiter would screen resumes for basic work requirements such as legal status to work in the regions where we were hiring and high-level relevant work experience. Applicants with resumes that passed the initial screen would then be screened by the recruiter over the phone. The purpose of the phone screen was to verify the HR requirements, the applicant’s requirements, work history, and most relevant experience.

If the applicant passed the screen with the recruiter, two members of the research team would conduct individual phone screens with the applicant to understand the applicant’s experience with different research methods and any work with distributed teams. Applicants who passed the screen with the researchers would be invited to a Mozilla office for a day of 1:1 in-person interviews with researchers and non-researchers and asked to present a research exercise prepared in advance of the office visit.

Steps to hiring a UX researcher at Mozilla, from resume screen to hiring team debrief

This hiring process served us well in several ways. It involved both researchers and roles that interact with researchers regularly, such as UX designers and product managers. Also, the mix of remote and in-person components reflected the ways we actually work at Mozilla. The process overall yielded hires — our current research team members — who have worked well together and with cross-functional teams.

However, there were also a lot of limitations to our former hiring process. Each Mozilla staff person involved determined their own questions for the phone and in-person components. We had a living document of questions team members liked to ask, but staff were free to draw on this list as little or as much as they wanted. Moreover, while each staff person had to enter notes into our applicant tracking system after a phone screen or interview with an applicant, we had no explicit expectations about how these notes were to be structured. We were also inconsistent in how we referred to notes during the hiring team debrief meetings where final decisions about applicants were typically made.

Our New Hiring Process: What We’ve Done

Our new hiring process is a work in progress. We want to share the strides we have made and also what we would still like to do. Our first step in trying to reduce bias in our hiring process was to document our current hiring process, which was not documented comprehensively anywhere, and to try and identify areas for improvement. Simultaneously, we set out to learn as much as we could about bias in hiring in general. We consulted members of Mozilla’s Diversity and Inclusion team, dug into materials from Stanford’s Clayman Institute for Gender Research, and talked with several managers in other parts of Mozilla who had undertaken de-biasing efforts for their own hiring. This “discovery” period helped us identify a number of critical steps.

First, we needed to develop a list of essential and desired criteria for job candidates. The researcher job description we had been using reflected many of the criteria we ultimately kept, but the exercise of distilling essential and desired criteria allowed current research team members to make much that was implicit, explicit.

Team members were able to ask questions about the criteria, challenge assumptions, and in the end build a shared understanding of expectations for members of our team. For example, we previously sought out candidates with 1–3 years of work experience. With that criterion, we were receiving applications from some candidates who only had experience within academia. It was through discussing how each of our criteria relates to ways we actually work at Mozilla that we determined that what was even more essential than 1–3 years of any user research experience was that much experience specifically working in industry. The task of distilling our hiring criteria was not necessarily difficult, but it took several hours of synchronous and asynchronous discussion — time we all acknowledged was well-spent because our new hiring process would be built from these agreed-upon criteria.

Next, we wrote phone screen and interview questions that aligned with the essential and desired criteria. We completed this step mostly asynchronously, with each team member contributing and reviewing questions. We also asked UX designers, content strategists, and product managers that we work with to contribute questions, also aligned with our essential and desired criteria, that they would like to ask researcher candidates.

The next big piece was to develop a rubric for grading answers to the questions we had just written. For each question, again mostly asynchronously, research team members detailed what they thought were “excellent,” “acceptable,” and “poor answers,” with the goal of producing a rubric that was self-evident enough that it could be used by hiring team members other than ourselves. Like the exercise of crafting criteria, this step required as much research team discussion time as writing time. We then took our completed draft of a rubric and determined at which phase of the hiring process each question would be asked.

Additionally, we revisited the research exercise that we have candidates complete to make its purpose and the exercise constraints more explicit. Like we did for the phone screen and interview questions, we developed a detailed rubric for the research exercise based on our essential and desirable hiring criteria.

Most recently, we have turned our new questions and rubrics into worksheets, which Mozilla staff will use to document applicants’ answers. These worksheets will also allow staff to document any additional questions they pose to applicants and the corresponding answers, as well as questions applicants ask us. Completed worksheets will be linked to our applicant tracking system and be used to structure the hiring team debrief meetings where final decisions about leading applicants will be made.

From the work we have done to our hiring process, we anticipate a number of benefits, including:

  • Less bias on the part of hiring team members about what we think of as desirable qualities in a candidate
  • Less time spent screening resumes given the established criteria
  • Less time preparing for and processing interviews given the standardized questions and rubrics
  • Flexibility to add new questions to any of the hiring process steps but more attention to how these new questions are tracked and answers documented
  • Less time on final decision making given the criteria, rubrics, and explicit expectations for documenting candidates’ answers

Next Steps

Our Mozilla recruiter and members of the research team have started going through the 900+ resumes we have received to determine which candidates will be screened by phone. We fully expect to learn a lot and make changes to our hiring process after this first attempt at putting it into practice. There are also several other resource-intensive steps we would like to take in the near future to mitigate bias further, including:

  • Making our hiring process more transparent by publishing it where it would be discoverable (for instance, some Mozilla teams are publishing hiring materials to Github)
  • Establishing greater alignment between our new process and the mechanics of our applicant tracking system to make the hiring process easier for hiring team members
  • At the resume screening phase, blinding parts of resumes that can contribute to bias such as candidate names, names of academic institutions, and graduation dates
  • Sharing the work we have done on our hiring process via blog posts and other platforms to help foster critical discussion

Teams who are interested in trying out some of the exercises we carried out to improve our hiring process are welcome to use the template we developed for our purposes. We are also interested in learning how other teams have tackled bias in hiring and welcome suggestions, in particular, for blinding when hiring people from around the world.

We are looking forward to learning from this work and welcoming new research team members who can help us advance our efforts.

Thank you to Gemma Petrie and Mozilla’s Diversity & Inclusion Team for reviewing an early draft of this post.

Also published on the Firefox UX blog



Firefox UXHow do people decide whether or not to get a browser extension?

The Firefox Add-ons Team works to make sure people have all of the information they need to decide which browser extensions are right for them. Past research conducted by Bill Selman and the Add-ons Team taught us a lot about how people discover extensions, but there was more to learn. Our primary research question was: “How do people decide whether or not to get a specific browser extension?”

We recently conducted two complementary research studies to help answer that big question:

  1. An addons.mozilla.org (AMO) survey, with just under 7,500 respondents
  2. An in-person think-aloud study with nine recruited participants, conducted in Vancouver, BC

The survey ran from July 19, 2018 to July 26, 2018 on addons.mozilla.org (AMO). The survey prompt was displayed when visitors went to the site and was localized into ten languages. The survey asked questions about why people were visiting the site, if they were looking to get a specific extension (and/or theme), and if so what information they used to decide to get it.

Screenshot of the survey message bar on addons.mozilla.org.

The think-aloud study took place at our Mozilla office in Vancouver, BC from July 30, 2018 to August 1, 2018. The study consisted of 45-minute individual sessions with nine participants, in which they answered questions about the browsers they use, and completed tasks on a Windows laptop related to acquiring a theme and an extension. To get a variety of perspectives, participants included three Firefox users and six Chrome users. Five of them were extension users, and four were not.

Mozilla office conference room in Vancouver, where the think-aloud study took place.

What we learned about decision-making

People use social proof on the extension’s product page

Ratings, reviews, and number of users proved important for making a decision to get the extension in both the survey and think-aloud study. Think-aloud participants used these metrics as a signal that an extension was good and safe. All except one think-aloud participant used this “social proof” before installing an extension. The importance of social proof was backed up by the survey responses where ratings, number of users, and reviews were among the top pieces of information used.

Screenshot of Facebook Container’s page on addons.mozilla.org with the “social proof” outlined: number of users, number of reviews, and rating.
AMO survey responses to “Think about the extension(s) you were considering getting. What information did you use to decide whether or not to get the extension?”

People use social proof outside of AMO

Think-aloud participants mentioned using outside sources to help them decide whether or not to get an extension. Outside sources included forums, advice from “high authority websites,” and recommendations from friends. The same result is seen among the survey respondents, where 40.6% of respondents used an article from the web and 16.2% relied on a recommendation from a friend or colleague. This is consistent with our previous user research, where participants used outside sources to build trust in an extension.

Screenshot of an example outside source: TechCrunch article about Facebook Container extension.
AMO survey responses to “What other information did you use to decide whether or not to get an extension?”

People use the description and extension name

Screenshot of Facebook Container’s page on addons.mozilla.org with extension name, descriptions, and screenshot highlighted.

Almost half of the survey respondents use the description to make a decision about the extension. While the description was the top piece of content used, we also see that over one-third of survey respondents evaluate the screenshots and the extension summary (the description text beneath the extension name), which shows their importance as well.

Think-aloud participants also used the extension’s description (both the summary and the longer description) to help them decide whether or not to get it.

While we did not ask about the extension name in the survey, it came up during our think-aloud studies. The name of the extension was cited as important to think-aloud participants. However, they mentioned how some names were vague and therefore didn’t assist them in their decision to get an extension.

Themes are all about the picture

In addition to extensions, AMO offers themes for Firefox. From the survey responses, the most important part of a theme’s product page is the preview image. It’s clear that the imagery far surpasses any social proof or description based on this survey result.

Screenshot of a theme on addons.mozilla.org with the preview image highlighted.
AMO survey responses to “Think about the theme(s) you were considering getting. What information did you use to decide whether or not to get the theme?”

All in all, we see that while social proof is essential, great content on the extension’s product page and in external sources (such as forums and articles) are also key to people’s decisions about whether or not to get an extension. When we’re designing anything that requires people to make an adoption decision, we need to remember the importance of social proof and great content, within and outside of our products.

In alphabetical order by first name, thanks to Amy Tsay, Ben Miroglio, Caitlin Neiman, Chris Grebs, Emanuela Damiani, Gemma Petrie, Jorge Villalobos, Kev Needham, Kumar McMillan, Meridel Walkington, Mike Conca, Peiying Mo, Philip Walmsley, Raphael Raue, Richard Bloor, Rob Rayborn, Sharon Bautista, Stuart Colville, and Tyler Downer, for their help with the user research studies and/or reviewing this blog post.

Also published on the Firefox UX blog.


How do people decide whether or not to get a browser extension? was originally published in Firefox User Experience on Medium, where people are continuing the conversation by highlighting and responding to this story.

Firefox UXReflections on a co-design workshop

Authors: Jennifer Davidson, Meridel Walkington, Emanuela Damiani, Philip Walmsley

Co-design workshops help designers learn first-hand the language of the people who use their products, in addition to their pain points, workflows, and motivations. With co-design methods [1] participants are no longer passive recipients of products. Rather, they are involved in the envisioning and re-imagination of them. Participants show us what they need and want through sketching and design exercises. The purpose of a co-design workshop is not to have a pixel-perfect design to implement, rather it’s to learn more about the people who use or will use the product, and to involve them in generating ideas about what to design.

We ran a co-design workshop at Mozilla to inform our product design, and we’d like to share our experience with you.

<figcaption>Sketching exercises during the co-design workshop were fueled by coffee and tea.</figcaption>

Before the workshop

Our UX team was tasked with improving the Firefox browser extension experience. When people create browser extensions, they use a form to submit their creations. They submit their code and all the metadata about the extension (name, description, icon, etc.). The metadata provided in the submission form is used to populate the extension’s product page on addons.mozilla.org.

<figcaption>A cropped screenshot of the third step of the submission form, which asks for metadata like name and description of the extension.</figcaption>
<figcaption>Screenshot of an extension product page on addons.mozilla.org.</figcaption>

The Mozilla Add-ons team (i.e., Mozilla staff who work on improving the extensions and themes experience) wanted to make sure that the process to submit an extension is clear and useful, yielding a quality product page that people can easily find and understand. Improving the submission flow for developers would lead to higher quality extensions for people to use.

We identified some problems by using test extensions to “eat our own dog food” (i.e., walk through the current process). Our content strategist audited the submission flow to understand how product page guidelines were presented within it. Then some team members conducted a cognitive walkthrough [2] to gain knowledge of the process and identify potential issues.

After identifying some problems, we sought to improve our submission flow for browser extensions. We decided to run a co-design workshop that would identify more problem areas and generate new ideas. The workshop took place in London on October 26, one day before MozFest, an annual week-long “celebration for, by, and about people who love the internet.” Extension and theme creators were selected from our global add-ons community to participate in the workshop. Mozilla staff members were involved, too: program managers, a community manager, an Engineering manager, and UX team members (designers, a content strategist, and a user researcher).

<figcaption>A helpful and enthusiastic sticky note on the door of our workshop room. Image: “Submission flow workshop in here!!” posted on a sticky note on a wooden door.</figcaption>

Steps we took to create and organize the co-design workshop

After the audit and cognitive walkthrough, we thought a co-design workshop might help us get to a better future. So we did the following:

  1. Pitch the idea to management and get buy-in
  2. Secure budget
  3. Invite participants
  4. Interview participants (remotely)
  5. Analyze interviews
  6. Create an agenda for the workshop. Our agenda included: ice breaker, ground rules, discussion of interview results, sketching (using this method [3]) & critique sessions, creating a video pitch for each group’s final design concept.
  7. Create workshop materials
  8. Run the workshop!
  9. Send out a feedback survey
  10. Debrief with Mozilla staff
  11. Analyze results (over three days) with Add-ons UX team
  12. Share results (and ask for feedback) of analysis with Mozilla staff and participants

Lessons learned: What went well

Interview participants beforehand

We interviewed each participant before the workshop. The participants relayed their experience about submitting extensions and their motivations for creating extensions. They told us their stories, their challenges, and their successes.

Conducting these interviews beforehand helped our team in a few ways:

  • The interviews introduced the team and facilitators, helping to build rapport before the workshop.
  • The interviews gave the facilitators context into each participant’s experience. We learned about their motivations for creating extensions and themes as well as their thoughts about the submission process. This foundation of knowledge helped to shape the co-design workshop (including where to focus for pain points), and enabled us to prepare an introductory data summary for sharing at the workshop.
  • We asked for participants’ feedback about the draft content guidelines that our content strategist created to provide developers with support, examples, and writing exercises to optimize their product page content. Those guidelines were to be incorporated into the new submission flow, so it was very helpful to get early user feedback. It also gave the participants some familiarity with this deliverable so they could help incorporate it into the submission flow during the workshop.
<figcaption>A photo of Jennifer, user researcher, presenting interview results back to the participants, near the beginning of the workshop.</figcaption>

Thoughtfully select diverse participants

The Add-ons team has an excellent community manager, Caitlin Neiman, who interfaces with the greater Add-ons community. Working with Mozilla staff, she selected a diverse group of community participants for the workshop. The participants hailed from several different countries, some were paid to create extensions and some were not, and some had attended Mozilla events before and some had not. This careful selection of participants resulted in diverse perspectives, workflows, and motivations that positively impacted the workshop.

Create Ground Rules

Design sessions can benefit from a short introductory activity of establishing ground rules to get everyone on the same page and set the tone for the day. This activity is especially helpful when participants don’t know one another.

Using a flip chart and markers, we asked the room of participants to volunteer ground rules. We captured and reviewed those as a group.

<figcaption>A photo of Emanuela, UX Designer and facilitator, scribing ground rules on a flip chart.</figcaption>

Why are ground rules important?

Designing the rules together, facilitators and participants alike, aligns the group around a set of shared values, helps surface potentially harmful group behaviors, and encourages productive, healthy interactions. Ground rules help make everyone’s experience a richer and more satisfying one.

Assign roles and create diverse working groups during the workshop

The Mozilla UX team in Taipei recently conducted a participatory workshop with older adults. In their blog post, they also highlight the importance of creating diverse working groups for the workshops [4].

In our workshop, each group was composed of:

  • multiple participants (i.e. extension and theme creators)
  • a Mozilla staff program manager, engineering manager, community manager, and/or engineer.
  • a facilitator who was either a Mozilla staff designer or program manager. As a facilitator, the designer was a neutral party in the group and could internalize participants’ mental models, workflows, and vocabulary through the experience.

We also assigned roles during group critique sessions. Each group member chose to be a dreamer (responds to ideas with a “Why not?” attitude), a realist (responds to ideas with “How?”), or a spoiler (responds to ideas by pointing out their flaws). This format is called the Walt Disney approach [5].

<figcaption>Post-its for each critique role: Realist, Spoiler, Dreamer</figcaption>

Why are critique roles important?

Everyone tends to fit into one of the Walt Disney roles naturally. Being pushed to adopt a role that may not match their natural tendency gently nudges participants out of their comfort zone. The roles help participants empathize with other perspectives.

We had other roles throughout the workshop as well, namely, a “floater” who kept everyone on track and kept the workshop running, a timekeeper, and a photographer.

Ask for feedback about the workshop results

The “co” part of “co-design” doesn’t have to end when the workshop concludes. Using what we learned during the workshop, the Add-ons UX team created personas and potential new submission flow blueprints. We sent those deliverables to the workshop participants and asked for their feedback. As UX professionals, it was useful to close the feedback loop and make sure the deliverables accurately reflected the people and workflows being represented.

Lessons Learned: What could be improved

The workshop was too long

We flew from around the world to London to do this workshop. A lot of us were experiencing jet lag. We had breaks, coffee, biscuits, and lunch. Even so, going from 9 to 4, sketching for hours and iterating multiple times was just too much for one day.

<figcaption>Jorge, a product manager, provided feedback about the workshop’s duration. Image: “Jorge is done” text written above a skull and crossbones sketch.</figcaption>

We have ideas about how to fix this. One approach is to introduce a greater variety of tasks; in the workshop we mostly did sketching over and over again. Another is to spread the workshop across two days, doing a few hours each day. A third is simply to shorten the workshop and do fewer iterations.

There were not enough Mozilla staff engineers present

The workshop was developed by a user researcher, designers, and a content strategist. We included a community manager and program managers, but we did not include engineers in the planning process (other than providing updates). One of the engineering managers said that it would have been great to have engineers present to help with ideation and hear from creators first-hand. If we were to do a design workshop again, we would be sure to have a genuinely interdisciplinary set of participants, including more Mozilla staff engineers.

And with that…

We hope that this blog post helps you create a co-design workshop that is interdisciplinary, diverse, caring of participants’ perspectives, and just the right length.

Acknowledgements

Much gratitude to our colleagues who created the workshop with us and helped us edit this blog post! Thanks to Amy Tsay, Caitlin Neiman, Jorge Villalobos, Kev Needham, Stuart Colville, Mike Conca, and Gemma Petrie.

References

[1] Sanders, Elizabeth B-N., and Pieter Jan Stappers. “Co-creation and the new landscapes of design.” Co-design 4.1 (2008): 5–18.

[2] “How to Conduct a Cognitive Walkthrough.” The Interaction Design Foundation, 2018, www.interaction-design.org/literature/article/how-to-conduct-a-cognitive-walkthrough.

[3] Gray, Dave. “6–8–5.” Gamestorming, 2 June 2015, gamestorming.com/6–8–5s/.

[4] Hsieh, Tina. “8 Tips for Hosting Your First Participatory Workshop.” Medium.com, Firefox User Experience, 20 Sept. 2018, medium.com/firefox-ux/8-tips-for-hosting-your-first-participatory-workshop-f63856d286a0.

[5] “Disney Brainstorming Method: Dreamer, Realist, and Spoiler.” Idea Sandbox, idea-sandbox.com/blog/disney-brainstorming-method-dreamer-realist-and-spoiler/.

Also published on the Firefox UX Blog.


Reflections on a co-design workshop was originally published in Firefox User Experience on Medium, where people are continuing the conversation by highlighting and responding to this story.

Mozilla Addons BlogFebruary’s featured extensions

Firefox Logo on blue background

Pick of the Month: ContextSearch

by Mike B
Select text to quickly search the phrase from an array of engines.

“Very intuitive and customizable. Well done!”

Featured: Word Count

by Trishul
Simply highlight text, right click, and select Word Count to easily do just that.

“Beautifully simple and incredibly useful for those of us who write for a living.”

If you’d like to nominate an extension for featuring, please send it to amo-featured [at] mozilla [dot] org for the board’s consideration. We welcome you to submit your own add-on!

The post February’s featured extensions appeared first on Mozilla Add-ons Blog.

Ludovic Hirlimann10 years working for Mozilla

10 years ago today, I started working for Mozilla Messaging. I was using my own computer until FOSDEM 2009, during which Davida and I ended up at a MediaMarkt in Brussels to buy my Mac (I needed a Mac in order to be able to test Linux, Windows and macOS), so I could properly work as the Thunderbird QA lead. When Mozilla phased out Thunderbird, I asked to join the IT team - my sneaky plan was to manage email, but server side. I ended up in the SRE team that got renamed MOC. I've been contributing since probably 1999 (reporting bugs because Mozilla wasn't available on my platform of choice). I've changed projects numerous times: the Suite -> Camino -> Thunderbird -> IT. I've had 5 bosses, and I've used Bugzilla probably more than is good for my sanity.

Firefox NightlyThese Weeks in Firefox: Issue 52

Highlights

  • Check out this awesome 2018 retrospective post by nchevobbe about all the nice things that made it to the Console panel last year.
  • The initial UI for the new Event Listener Breakpoints feature has landed! (pref: devtools.debugger.features.event-listeners-breakpoints)
    • The debugger shows a list of the various event handlers one can set breakpoints for.

      This should make debugging sites on the web easier in Firefox!

  • We are replacing our existing localization formats with a shiny new one: Fluent. You can track our progress on arewefluentyet.com – to date we already have 2.3k FTL messages!
    • A graph showing that we're slowly but surely increasing the number of Fluent strings in the codebase.

      Fluent incoming!

  • The default private browsing window page has been given a facelift and a new search input!
    • Showing the new search input on the default private browsing window page.

      Just in case you wanted to search for some engagement rings.

  • We’ve released an Anti-Tracking policy document – check out this blog post for more details.

Friends of the Firefox team

Introductions

  • Welcome back Prathiksha!

Resolved bugs (excluding employees)

Project Updates

Add-ons / Web Extensions

Applications

Screenshots
  • Ian is working on an export for Firefox 67 that will notify uploaders that the server is going away.
  • Blog post with more details on what’s going on with Screenshots.

Browser Architecture

Developer Tools

  • Docs:
  • Technical debt:
    • We want to be able to work faster on the codebase, remove more XUL, get rid of intermittent failures, etc. As a result, we’re setting our plan to remove the Shader Editor, Canvas Debugger and WebAudio Editor in motion: look out for an announcement on the mailing list this week. Then later in 67, there will be a removal notice in DevTools, and in 68, the panels will be removed (we’ll make it possible for interested people to get the code, talk to yulia if you want to know more).
    • We are also migrating the CSS Rules panel to React and Redux. It’s one of our oldest pieces of code and also one of our most important. Migrating it means more opportunities to reuse code from other parts of DevTools, a more approachable and predictable codebase, and faster feature development times.
  • Console:
    • In progress right now: a brand new multi-line console editor (bug, tweet)!
      • Showing off the console input taking multiple lines of input.

        Because sometimes the things you want to do take multiple lines.

  • Layout Tools:
    • We’re hard at work designing what the next layout debugging tools will be. People love our flex sizing inspection tool, and we want to do more like this. Ideas so far: debugging z-index stacking issues, debugging unwanted scrollbar issues, debugging inline layouts, etc.
    • Introducing a new “markup badge” in the inspector, to quickly find out which elements have scrollable overflows:
      • A screenshot of the DevTools inspector showing that the <body> element on a page has a "scrollable" tag on it to indicate that it can be scrolled.

        This will hopefully help you identify important elements in the DOM.

Fission

  • All teams continue to make progress on Fission
  • DOM team working on core functionality (process switching, BrowsingContext, WindowProxies, PWindowGlobal, JS IPDL, etc.)
  • Major work ongoing on rewriting SessionStore from JS to C++
  • MattN is converting the FormAutoFillListeners code to the actors infrastructure
  • mconley is working on last dependencies to let the privileged process ride the trains
  • Other work: Shared UA style sheets, shared font lists

Lint

NodeJS

  • The JS Debugger has started using Babel (for JSX), Jest (elegant JS unit tests), and Flow (type-checking for the front-end).
    • Also hoping to start using webpack this week to build the debugger workers.

Password Manager

  • Prathiksha started last week on the team, joining the rest of the developers formerly on Web Payments
  • Working on a short-term bug list (low-hanging fruit and test fixes) while we work with Lockbox to figure out the longer-term roadmap

Performance

Policy Engine

  • Added policies for min/max SSL version (1522182)
  • Working on local file blocking (1450309) and extension whitelisting (1498745)

Privacy/Security

Search and Navigation

Search
Quantum Bar

Hacks.Mozilla.OrgNew in Firefox DevTools 65

We just released Firefox 65 with a number of new developer features that make it even easier for you to create, inspect and debug the web.

Among all the features and bug fixes that made it to DevTools in this new release, we want to highlight two in particular:

  • Our brand new Flexbox Inspector
  • Smarter JavaScript inspection and debugging

We hope you’ll love using these tools just as much as we and our community loved creating them.

Understand CSS layout like never before

The Firefox DevTools team is on a mission to help you master CSS layout. We want you to go from “trying things until they work” to really understanding how your browser lays out a page.

Introducing the Flexbox Inspector

Flexbox is a powerful way to organize and distribute elements on a page, in a flexible way.

To achieve this, the layout engine of the browser does a lot of things under the hood. When everything works like a charm, you don’t have to worry about this. But when problems appear in your layout it may feel frustrating, and you may really need to understand why elements behave a certain way.

That’s exactly what the Flexbox Inspector is focused on.

Highlighting containers, lines, and items

First and foremost, the Flexbox Inspector highlights the elements that make up your flexbox layout: the container, lines and items.

Being able to see where these start and end — and how far apart they are — will go a long way to helping you understand what’s going on.

Flexbox highlighter showing containers, items and available space

Once toggled, the highlighter shows three main parts:

  • A dotted outline that highlights the flex container itself
  • Solid lines that show where the flex items are
  • A background pattern that represents the free space between items

One way to toggle the highlighter for a flexbox container is by clicking its “flex” badge in the inspector.  This is an easy way to find flex containers while you’re scanning elements in the DOM. Additionally, you can turn on the highlighter from the flex icon in the CSS rules panel, as well as from the toggle in the new Flexbox section of the layout sidebar.

Animation showing how to enable the flexbox highlighter

Understanding how flex items got their size

The beauty of Flexbox is that you can leave the browser in charge of making the right layout decisions for you. How much should an element stretch, or should an element wrap to a new line?

But when you give up control, how do you know what the browser is actually doing?

The Flexbox Inspector comes with functionality to show how the browser distributed the sizing for a given item.
Flexbox container panel showing a list of flexbox items

The layout sidebar now contains a Flex Container section that lists all the flex items, in addition to providing information about the container itself.

Clicking any of these flex items opens the Flex Item section that displays exactly how the browser calculated the item size.
Overview of Flexbox Item panel showing sizing information

The diagram at the top of the flexbox section shows a quick overview of the steps the browser took to give the item its size.

It shows your item’s base size (either its minimum content size or its flex-basis size), the amount of flexible space that was added (flex-grow) or removed (flex-shrink) from it, and any minimum or maximum defined sizes that restricted the item from becoming any shorter or longer.

If you are reading this on Firefox 65, you can take this for a spin right now!

Open the Inspector on this page, and select the div.masthead.row element. Look for the Flex Container panel in the sidebar, and click on the 2 items to see how their sizes are computed by the browser.

Animation showing how to use the Flexbox Inspector

After the bug fix, keep track of changes

Let’s suppose you have fixed a flexbox bug thanks to the Flexbox Inspector. To do so, you’ve made a few edits to various CSS rules and elements. That’s when you’re usually faced with a problem we’ve all had: “What did I actually change to make it work?”.

In Firefox 65, we’ve also introduced a new Changes panel to do just that.

New changes panel showing additions, deletions and modifications of CSS as diff.

It keeps track of all the CSS changes you’ve made within the inspector, so you can keep working as you normally would. Once you’re happy, open the Changes tab and see everything you did.

What’s next for layout tools?

We’re really excited for you to try these two new features and let us know what you think. But there’s more in store.

You’ve been telling us exactly what your biggest CSS challenges are, and we’ve been listening. We’re currently prototyping layout tools for debugging unwanted scrollbars, z-indexes that don’t work, and more tools like the Flexbox Inspector but for other types of layouts. Also, we’re going to make it even easier for you to extract your changes from the Changes panel.

Smarter JavaScript inspection & debugging

When developing JavaScript, the Console and Debugger are your windows into your code’s execution flow and state changes. Over the past releases we’ve focused on making debugging work better for modern toolchains. Firefox 65 continues this theme.

Collapsing Framework Stack Traces

If you’re working with frameworks and build tools, then you’re used to seeing really long error stack traces in the Console. The new smarter stack traces identify 3rd party code (such as frameworks) and collapse it by default. This significantly reduces the information displayed in the Console and lets you focus on your code.
Before and after version of stack traces in console.

The Collapsing feature works in the Console stack traces for errors and logs, and in the Debugger call stacks.

Reverse search your Console history

If you are tired of smashing the arrow key to find that awesome one-liner you ran one hour ago in the console, then this is for you. Reverse search is a well known command-line feature that lets you quickly browse recent commands that match the entered string.

Animation showing how to use reverse search in the console

To use it in the Console, press F9 on Windows/Linux or Ctrl+R on MacOS and start typing. You can then use Ctrl+R to move to the previous or Ctrl+S to the next result. Finally, hit return to confirm.

Invoke getters to inspect the return value

JavaScript getters are very useful for dynamic properties and heavily used in frameworks like vue.js for computed properties. But when you log an object with a getter to the Console, the reference to the method is logged, not its return value. The method does not get invoked automatically, as that could change your application’s state. Since you often actually want to see the value, you can now manually invoke getters on logged objects.

Animation showing how to invoke getters in the console

Wherever objects can be inspected, in the Console or Debugger, you’ll see >> icons next to getters. Clicking these will execute the method and print the return value.
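
As a quick, minimal sketch (the object and property names here are just made up for illustration), this is the kind of object where the feature comes in handy:

// A plain object with a getter; logging it does not run the getter.
const user = {
  firstName: 'Ada',
  lastName: 'Lovelace',
  // Computed property exposed via a getter
  get fullName() {
    return `${this.firstName} ${this.lastName}`;
  }
};

console.log(user);
// In the Console, fullName shows a >> icon next to it; clicking the icon
// invokes the getter and prints "Ada Lovelace".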

Pause on XHR/Fetch Breakpoints

Console logging is just one aspect of understanding application state. For complex issues, you need to pause execution at precisely the right moment. Fetching data is usually one of those moments, and it is now made “pausable” with the new XHR/Fetch Breakpoint in the Debugger.

XHR Breakpoints panel in the debugger
Kudos to Firefox DevTools code contributor Anshul Malik for “casually” submitting the patch for this useful feature and for his ongoing contributions.

What’s next for JavaScript debugging?

You might have noticed that we’ve been heads down over recent releases to make the JavaScript debugging experience rock solid – for breakpoints, stepping, source maps, performance, etc. Raising the quality bar and continuing to polish and refine remains the focus for the entire team.

There’s work in progress on much-requested features like Column Breakpoints, Logpoints, and Event and DOM Breakpoints. Building out the authoring experience in the Console, we are adding a multi-line editing mode (inspired by Firebug) and a more powerful autocomplete. Keep an eye out for those features in the latest release of Firefox Developer Edition.

Thank you

Countless contributors helped DevTools staff by filing bugs, writing patches and verifying them. Special thanks go to:

Also, thanks to Patrick Brosset, Nicolas Chevobbe and the whole DevTools team & friends for helping put together this article.

Contribute

As always, we would love to hear your feedback on how we can improve DevTools and the browser.

Download Firefox Developer Edition to get early access to upcoming tooling and platform.

The post New in Firefox DevTools 65 appeared first on Mozilla Hacks - the Web developer blog.

Mozilla GFXWebRender newsletter #38

Greetings! WebRender’s best and only newsletter is here. The number of blocker bugs is rapidly decreasing, thanks to the efforts of everyone involved (staff and volunteers alike). The project is in good enough shape that some people are now moving on to other projects, and we are starting to experiment with WebRender on new hardware. WebRender is now enabled by default in Nightly for a subset of AMD GPUs on Windows, and we are looking into Intel integrated GPUs as well. As usual, we start with small subsets, with the goal of gradually expanding, in order to avoid running into an overwhelming number of platform- or configuration-specific bugs at once.

Notable WebRender and Gecko changes

  • Bobby improved the test infrastructure for picture caching.
  • Jeff added restrictions to filter inputs.
  • Jeff enabled WebRender for a subset of AMD GPUs on Windows.
  • Matt fixed a filter clipping issue.
  • Matt made a few improvements to blob image performance.
  • Emilio fixed perspective scrolling.
  • Lee worked around transform animation detection disabling sub-pixel AA on some sites.
  • Lee fixed the dwrote font descriptor handling so we don’t crash anymore on missing fonts.
  • Lee, Jeff and Andrew fixed how we handle snapping with the will-change property and animated transforms.
  • Glenn improved the accuracy of sub-pixel box shadows.
  • Glenn fixed double inflation of text shadows.
  • Glenn added GPU timers for scale operations.
  • Glenn optimized drawing axis-aligned clip rectangles into clip masks.
  • Glenn used down-scaling more often to avoid large blur radii.
  • Glenn and Nical fixed uneven rendering of transformed shadows with fractional offsets.
  • Nical rewrote the tile decomposition logic to support negative tile offsets and arbitrary tiling origins.
  • Nical surveyed the available GPU debugging tools and documented the workarounds.
  • Sotaro fixed a bug with the lifetime of animations.
  • Sotaro skipped a test which is specific to how non-webrender backends work.
  • Sotaro fixed another test that was specific to the non-webrender rendering logic.
  • Sotaro fixed a bug in the iteration over image bridges when dispatching compositing notifications.
  • Doug made APZ document-splitting-aware.
  • Kvark fixed a perspective interpolation issue.

Ongoing work

The team keeps going through the remaining blockers (3 P2 bugs and 11 P3 bugs at the time of writing).

Enabling WebRender in Firefox Nightly

In about:config, set the pref “gfx.webrender.all” to true and restart the browser.

Reporting bugs

The best place to report bugs related to WebRender in Firefox is the Graphics :: WebRender component in bugzilla.
Note that it is possible to log in with a github account.

The Mozilla BlogMozilla Raises Concerns Over Facebook’s Lack of Transparency

Today Denelle Dixon, Mozilla’s Chief Operating Officer, sent a letter to the European Commission surfacing concerns about the lack of publicly available data for political advertising on the Facebook platform.

It has come to our attention that Facebook has prevented third parties from conducting analysis of the ads on their platform. This impacts our ability to deliver transparency to EU citizens ahead of the EU elections. It also prevents any developer, researcher, or organization from developing tools, critical insights, and research designed to educate and empower users to understand and therefore resist targeted disinformation campaigns.

Mozilla strongly believes that transparency cannot just be on the terms with which the world’s largest, most powerful tech companies are most comfortable. To have true transparency in this space, the Ad Archive API needs to be publicly available to everyone. This is all the more critical now that third party transparency tools have been blocked. We appreciate the work that Facebook has already done to counter the spread of disinformation, and we hope that it will fulfill its promises made under the Commission’s Code of Practice and deliver transparency to EU citizens ahead of the EU Parliamentary elections.

Mozilla’s letter to European Commission on Facebook Transparency 31 01 19

The post Mozilla Raises Concerns Over Facebook’s Lack of Transparency appeared first on The Mozilla Blog.

Mozilla B-Teamhappy bmo push day!

In this release: support for OAuth2 with JWT tokens, and a 10x performance boost to the bug search API. A welcome security enhancement from @psiinon as well: all responses get HSTS headers set.

release tag

the following changes have been pushed to bugzilla.mozilla.org:

  • [1511490] BMO’s oauth tokens should be use jwt
  • [1519782] The OrangeFactor extension should link back to Intermittent Failure View using ‘&tree=all’
  • [1523004] Sort Phabricator revisions by numeric value instead of alphabetically
  • [1523172] Advanced Search link on home page doesn’t always take me to Advanced Search
  • [1523365]…

View On WordPress

Alex GibsonUsing ES modules as a modern baseline for progressive enhancement

It’s pretty commonplace today to write JavaScript using ES2015 (or ES6), and then to transpile that code to ES5 in a build step, using something like Babel. Transpiling enables us to write code using the latest ECMAScript syntax and features, but then compiles it in a backward compatible way so that it works with older browsers. The downside to this is that transpiling can also lead to increased bundle sizes if you’re not careful, and for browsers that already support ES2015 features, you’re potentially shipping a lot of redundant code. Modern browsers need neither the transpiled code, nor the polyfills that can get bundled along with it.

Before we get into the details of this post, I’ll start by emphasizing that depending on your project and its target audience, transpiling and bundling your JavaScript may still be the exact right thing to do. Browser segmentation and page load times are key metrics to measure. However, with technologies like HTTP/2 and support for ES modules landing in all major browsers, we now have other options on the table. There is a new opportunity to build progressively enhanced websites using modern ES2015 features, but without as much of the bloat and complicated build steps we’ve become accustomed to.

Cutting the mustard

Feature detection has long been a useful technique to determine the code paths that run in our web pages. Techniques such as “cutting the mustard” have often been used to define a baseline of browser support for core functionality. Browsers that pass the test can run modern code, and older browsers are given a degraded experience. With ES modules, this kind of feature test can now evolve considerably.
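
As a reminder of what that looks like in practice, here is a minimal sketch of a traditional mustard cut (the specific feature checks and the script path are illustrative choices, not a prescription):

// Only load the enhanced experience if the browser passes a baseline test.
// Note: this snippet itself sticks to ES5 syntax so older browsers can parse it.
if ('querySelector' in document &&
    'addEventListener' in window &&
    'localStorage' in window) {
  // Modern-enough browser: inject the enhanced script.
  var script = document.createElement('script');
  script.src = '/js/enhanced.js'; // hypothetical bundle path
  document.head.appendChild(script);
}
// Older browsers simply get the core, non-enhanced experience.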

ES2015 introduced several new pieces of syntax, such as let, const, arrow functions, classes, and of course import and export for modules. Since this new syntax can’t be parsed by older browsers without throwing errors, web pages need a way to opt-in to use the new syntax where traditional feature detection falls short. Thankfully, web standards people are clever and have delivered such a mechanism!

Modules to the rescue

When the spec for loading ES modules was introduced, it added a new value for the type attribute to the <script> tag, type="module". This acts as an identifier to browsers that the script is an ES module and can be loaded as such. It’s also important to note that modules use defer by default, so do not block the HTML parser like regular scripts.

<script type="module" src="./my-module.js"></script>

<!-- Modules can also be inline -->

<script type="module">
  import thing from './my-module.js';
</script>

So how does this all tie together with progressive enhancement? Well, type="module" is a value that older browsers will not understand. When a browser encounters what it considers to be an invalid type, it won’t execute that script. Whilst it may still be downloaded, the script will not be parsed, and none of its import dependencies will be requested either. This allows us to safely use ES2015 features for modern browsers, and also improves page load performance for people on older browsers or operating systems, since they have less to download over the network. Older browsers can receive a nicely degraded experience of your choosing.

Of course, if you still wanted to transpile and bundle code for older browsers, or provide some sort of baseline JS support then you can still do that too, using the nomodule attribute. This attribute signals to a browser that supports ES modules that the script can be ignored, meaning only older browsers will download and run it.

<script nomodule type="text/javascript">
  // degraded experience
</script>

If you’re building a progressively enhanced website, then depending on your approach this fallback may not even be needed. It really depends on your target audience and what you see as a suitable baseline experience.

Performance

As I mentioned at the beginning of this article, using unbundled ES modules in production today may not (yet) be a viable option for many large or complex websites. Using ES modules may result in shipping less code compared to transpiled code, but there are still trade-offs to shipping unbundled vs bundled scripts. Whilst browser vendors are continually working hard to improve module loading performance, you should still read about the current trade-offs and carefully measure the impact that modules may have before switching. For simpler websites or personal projects, using ES modules today may be just fine. It’s up to you to decide.

Speeding up module loading

If you do decide to use ES modules, you may also want to look into preloading, so that browsers can preparse modules and their dependencies as early as they can. Of course, minifying is also recommended as well.

<head>
  <link rel="modulepreload" href="./my-module.js">
</head>

<script type="module" src="./my-module.js"></script>

Note: preloading is currently only supported in Chrome at the time of writing.

A modern baseline, or a taste of the future?

As we have seen, ES modules can provide a simple, modern baseline for building progressively enhanced websites that can still degrade gracefully on older browsers. Whilst this probably isn’t ready for large, complex websites just yet, it might just be fine for smaller sites and personal projects. To help tempt you, here’s a list of just some of the features that this technique unlocks access to:

If you would like to learn more about ES modules, Lin Clark wrote an excellent deep dive on the Mozilla Hacks blog. I highly suggest reading it.

Mozilla Open Policy & Advocacy BlogOnline content regulation in Europe: a paradigm for the future #1

Lawmakers in the European Union are today focused on regulating online content, and compelling online services to make greater efforts to reduce the illegal and harmful activity on their services. As we’ve blogged previously, many of the present EU initiatives – while well-intentioned – are falling far short of what is required in this space, and pose real threats to users rights online and the decentralised open internet. Ahead of the May 2019 elections, we’ll be taking a close look at the current state of content regulation in the EU, and advancing a vision for a more sustainable paradigm that adequately addresses lawmakers’ concerns within a rights- and ecosystem-protective framework.

Concerns about illegal and harmful content online, and the role of online services in tackling it, is a policy issue that is driving the day in jurisdictions around the world. Whether it’s in India, the United States, or the European Union itself, lawmakers are grappling with what is ultimately a really hard problem – removing ‘bad’ content at scale without impacting ‘good’ content, and in ways that work for different types of internet services and that don’t radically change the open character of the internet. Regrettably, despite the fact that many great minds in government, academia, and civil society are working on this hard problem, online content regulation remains stuck in a paradigm that undermines users’ rights and the health of the internet ecosystem, without really improving users’ internet experience.

More specifically, the policy approaches of today – epitomised in Europe by the proposed EU Terrorist Content regulation and the EU Copyright Reform directive – are characterised by three features that, together, fail to mitigate effectively the harms of bad content, while also failing to protect the good:

  • Flawed metrics: The EU’s approach to content regulation today frames ‘success’ in terms of the speed and quantity of content removal. As we will see later in this series, this quantitative framing undermines proportionality and due process, and is unfitting for an internet defined by user-uploaded content.
  • The lack of user safeguards: Under existing content control paradigms, online service providers are forced to play the role of judge and jury, and terms of service (ToS) effectively function as a law unto themselves. As regulation becomes ‘privatised’ in this way, users have little access to the redress and oversight that one is entitled to when fundamental rights are restricted.
  • The one-size-fits-all approach: The internet is characterised by a rich diversity of service providers and use-cases. Yet at the same time, today’s online content control paradigm functions as if there is only one type of online service – namely, large, multinational social media companies. Forcing all online services to march to the compliance beat of a handful of powerful and well-resourced companies has the effect of undermining competition and internet openness.

In that context, it is clear that the present model is not fit-for purpose, and there is an urgent need to rethink how we do online content regulation in Europe. At the same time, the fact that online content regulation at scale is a hard problem is not an excuse to do nothing. As we’ve highlighted before, illegal content is symptomatic of an unhealthy internet ecosystem, and addressing it is something that we care deeply about. To that end, we recently adopted an addendum to our Manifesto, in which we affirmed our commitment to an internet that promotes civil discourse, human dignity, and individual expression. The issue is also at the heart of our recently published Internet Health Report, through its dedicated section on digital inclusion.

For these reasons, we’re focused on shaping a more progressive and sustainable discourse around online content regulation in the EU. In that endeavour there’s no time like the present: 2019 will see critical developments in EU policy initiatives around illegal and harmful content online (think terrorism, copyright, disinformation), and the new European Commission is expected to review the rules around intermediary liability in Europe – the cornerstone of online enforcement and compliance today.

In the coming weeks, we’ll be using this blog to unpack the key considerations of online content regulation, and slowly build out a vision for what a better framework could look like. We hope you’ll join us on the journey.

The post Online content regulation in Europe: a paradigm for the future #1 appeared first on Open Policy & Advocacy.

Mozilla Reps CommunityReps OKRs – First half of the year 2019

Here are the OKRs for the first half of the year:

Objective 1: Reps are the bridge between their local communities (Mozilla or other local open source communities) and the Mozilla contribution opportunities

  • Key Result 1.1: 60% of Reps have a connection with another open source community (as a result of our community connection training)
  • Key Result 1.2: 70% of Reps have gathered data about the interests of their community

Objective 2: Mozilla projects reach out to Reps as gateway to community engagement

  • Key Result 2.1: 200 employees say they know about the Reps program’s purpose (Implementation hint: we aim for an All Hands Lightning Talk and to be featured in the tl;dr newsletter with an update about the program)
  • Key Result 2.2: Reps collaborate with two new functional teams

Objective 3: Reps feel more involved in the program

  • Key Result 3.1: As a result of the mobilizing activities of the Reps we are able to connect 50% of campaign outcomes to their mobilizing efforts

Objective 4: External, non-Mozilla entities identify the Reps program as a connector to the broader Mozilla community

  • Key Result 4.1: External open source communities are informed about 2019 Reps plans with two publications
  • Key Result 4.2: By updating the Reps description in all resources to reflect the current purpose of the program, more people can explain what the Reps program is for

Objective 5: Existing Reps and new applicants understand the resources we provide

  • Key Result 5.1: We have fewer questions from new Reps about understanding the community coordinator role (Implementation hint: Document resources on what we provide)

Objective 6: We understand what is missing for the Reps Program to enable personal growth

  • Key Result 6.1: 50% of Reps have reported what the Reps program helped them to do in terms of community building and personal growth
  • Key Result 6.2: We identified 3 new personal growth area opportunities we want to provide

Hacks.Mozilla.OrgFirefox 65: WebP support, Flexbox Inspector, new tooling & platform updates

Well now, there’s no better way to usher out the first month of the year than with a great new Firefox release. It’s winter for many of us, but that means more at-home time to install Firefox version 65, and check out some of the great new browser and web platform features we’ve included within. Unless you’d rather be donning your heavy coat and heading outside to grit the driveway, that is (or going to the beach, in the case of some of our Australian chums).

A good day for DevTools

Firefox 65 features several notable DevTools improvements. The highlights are as follows:

CSS Flexbox Inspector

At Mozilla, we believe that new features of the web platform are often best understood with the help of intuitive, visual tools. That’s why our DevTools team has spent the last few years getting feedback from the field, and prioritizing innovative new tooling to allow web devs and designers to inspect, edit, understand, and tinker with UI features. This drive led to the release of the CSS Grid Inspector, Font Editor, and Shape Path Editor.

Firefox 65 sees these features joined by a new friend — the CSS Flexbox Inspector — which allows you to easily visualize where your flex containers and items are sitting on the page and how much free space is available between them, what each flex item’s default and final size is, how much they are being shrunk or grown, and more.

The Firefox 65 Flexbox inspector showing several images of colored circles laid out using Flexbox

Changes panel

When you’re done tweaking your site’s interface using these tools, our new Changes panel tracks and summarizes all of the CSS modifications you’ve made during the current session, so you can work out what you did to fix a particular issue, and can copy and paste your fixes back out to your code editor.

Firefox 65 Changes panel, showing a diff of CSS added and CSS removed

Advanced color contrast ratio

We have also added an advanced color contrast ratio display. When using the Accessibility Inspector’s accessibility picker, hovering over the text content of an element displays its color contrast ratio, even if its background is complex (for example a gradient or detailed image), in which case it shows a range of color contrast values, along with a WCAG rating.

Firefox Accessibility picker, showing the color contrast ratio range of some text with a gradient behind it

JavaScript debugging improvements

Firefox 65 also features some nifty JavaScript debugging improvements:

  • When displaying stack traces (e.g. in console logs or with the JavaScript debugger), calls to framework methods are identified and collapsed by default, making it easier to home in on your code.
  • In the same fashion as native terminals, you can now use reverse search to find entries in your JavaScript console history (F9 (Windows/Linux) or Ctrl + R (macOS) and type a search term, followed by Ctrl + R/Ctrl + S to toggle through results).
  • The JavaScript console’s $0 shortcut (references the currently inspected element on the page) now has autocomplete available, so for example you could type $0.te to get a suggestion of $0.textContent to reference text content.

Find out more

CSS platform improvements

A number of CSS features have been added to Gecko in 65. The highlights are described below.

CSS environment variables

CSS environment variables are now supported, accessed via env() in stylesheets. These variables are usable in any part of a property value or descriptor, and are scoped globally to a particular document, whereas custom properties are scoped to the element(s) they are declared on. These were initially provided by the iOS browser to allow developers to place their content in a safe area of the viewport, i.e., away from the area covered by the notch.

body {
  padding:
    env(safe-area-inset-top, 20px)
    env(safe-area-inset-right, 20px)
    env(safe-area-inset-bottom, 20px)
    env(safe-area-inset-left, 20px);
}

steps() animation timing function

We’ve added the steps() CSS animation timing function, along with the related jump-* keywords. This allows you to easily create animations that jump in a series of equidistant steps, rather than a smooth animation.

As an example, we might previously have added a smooth animation to a DOM node like this:

.smooth {
  animation: move-across 2s infinite alternate linear;
}

Now we can make the animation jump in 5 equal steps, like this:

.stepped {
  animation: move-across 2s infinite alternate steps(5, jump-end);
}

Note: The steps() function was previously called frames(), but some details changed, and the CSS Working Group decided to rename it to something less confusing.

break-* properties

New break-before, break-after, and break-inside CSS properties have been added, and the now-legacy page-break-* properties have been aliased to them. These properties are part of the CSS Fragmentation spec, and set how page, column, or region breaks should behave before, after, or inside a generated box.

For example, to stop a page break occurring inside a list or paragraph:

ol, ul, p {
  break-inside: avoid;
}

JavaScript/APIs

Firefox 65 brings many updates to JavaScript/APIs.

Readable streams

Readable streams are now enabled by default, allowing developers to process data chunk by chunk as it arrives over the network, e.g. from a fetch() request.

You can find a number of ReadableStream demos on GitHub.
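
As a small, hedged example of what that unlocks (the URL below is a placeholder), you can read a fetch() response incrementally via its body stream:

// Log the size of each chunk as it arrives, instead of waiting for the full payload.
async function logChunks(url) {
  const response = await fetch(url);
  const reader = response.body.getReader();
  let received = 0;

  while (true) {
    const { done, value } = await reader.read();
    if (done) {
      console.log(`Finished: ${received} bytes total`);
      break;
    }
    received += value.length; // value is a Uint8Array chunk
    console.log(`Received a chunk of ${value.length} bytes`);
  }
}

logChunks('/some/large/resource'); // placeholder URL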

Relative time formats

The Intl.RelativeTimeFormat constructor allows you to output strings describing localized relative times, for easier human-readable time references in web apps.

A couple of examples, to sate your appetite:

let rtf1 = new Intl.RelativeTimeFormat('en', { style: 'narrow' });
console.log(rtf1.format(2, 'day')); // expected output: "in 2 days"

let rtf2 = new Intl.RelativeTimeFormat('es', { style: 'narrow' });
console.log(rtf2.format(2, 'day')); // expected output: "dentro de 2 días"

Storage Access API

The Storage Access API has been enabled by default, providing a mechanism for embedded, cross-origin content to request access to client-side storage mechanisms it would normally only have access to in a first-party context. This API features a couple of simple methods, hasStorageAccess() and requestStorageAccess(), which respectively check and request storage access. For example:

document.requestStorageAccess().then(
  () => { console.log('access granted') },
  () => { console.log('access denied') }
);
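
The companion hasStorageAccess() method mentioned above returns a promise resolving to a boolean, so embedded content can check before prompting. A minimal sketch:

// Check for existing access first, and only request it if needed.
document.hasStorageAccess().then((hasAccess) => {
  if (hasAccess) {
    console.log('storage access already granted');
  } else {
    // requestStorageAccess() generally needs to run in response to a user
    // gesture, e.g. inside a click handler.
    return document.requestStorageAccess();
  }
});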

Other honorable mentions

  • The globalThis keyword has been added, for accessing the global object in whatever context you are in. This avoids needing to use a mix of window, self, global, or this, depending on where a script is executing (e.g. a webpage, a worker, or Node.js). A short sketch of the boilerplate it replaces follows this list.
  • The FetchEvent object’s replacesClientId and resultingClientId properties are now implemented — allowing you to monitor the origin and destination of a navigation.
  • You can now set a referrer policy on scripts applied to your documents (e.g. via a referrerpolicy attribute on <script> elements)
  • Lastly, to avoid popup spam, Window.open() may now only be called once per user interaction event.
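
As promised above, here is a minimal sketch of the kind of context-sniffing boilerplate that globalThis replaces:

// Before globalThis: pick the right global object for the current context.
const getGlobal = () => {
  if (typeof self !== 'undefined') return self;     // workers and windows
  if (typeof window !== 'undefined') return window; // classic web pages
  if (typeof global !== 'undefined') return global; // Node.js
  throw new Error('Unable to locate the global object');
};

// With globalThis, the same lookup is a built-in, in every context.
console.log(globalThis === getGlobal()); // true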

Media: Support for WebP and AV1, and other improvements

At long last, Firefox 65 now supports the WebP image format. WebP offers both lossless and lossy compression modes, and typically produces files that are 25-34% smaller than equivalent JPEGs or PNGs with the same image quality. Smaller files mean faster page loads and better performance, so this is obviously a good thing.

Not all browsers support WebP. You can use the <picture> element in your HTML to offer both WebP and traditional image formats, leaving the final choice to the user’s browser. You can also detect WebP support on the server-side and serve images as appropriate, as supported browsers send an Accept: image/webp header when requesting images.
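
For the server-side route, a check might look something like this minimal Node.js sketch (the file names are placeholders, and a real implementation would also handle caching and error cases):

const http = require('http');
const fs = require('fs');

http.createServer((req, res) => {
  // Browsers that support WebP advertise it in the Accept request header.
  const supportsWebP = /image\/webp/.test(req.headers.accept || '');
  const imagePath = supportsWebP ? 'hero.webp' : 'hero.jpg'; // placeholder files
  const contentType = supportsWebP ? 'image/webp' : 'image/jpeg';

  res.writeHead(200, {
    'Content-Type': contentType,
    'Vary': 'Accept' // tell caches the response varies by the Accept header
  });
  fs.createReadStream(imagePath).pipe(res);
}).listen(8080);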

Images are great, but what about video? Mozilla, along with industry partners, has been developing the next-generation AV1 video codec, which is now supported in Firefox 65 for Windows. AV1 is nearly twice as efficient as H.264 in terms of compression, and, unlike H.264, it’s completely open and royalty-free. Support for other operating systems will be enabled in future releases.

Other additions

  • The MediaRecorder pause and resume events are finally supported in Firefox, as of version 65 (a short sketch follows this list).
  • For developers creating WebGL content, Firefox 65 supports the BPTC and RGTC texture compression extensions.
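
To illustrate the MediaRecorder item above, here is a hedged sketch of listening for the newly supported pause and resume events (it assumes the page is allowed to capture audio):

// Create a recorder from a microphone stream and log pause/resume events.
async function recordWithEventLogging() {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const recorder = new MediaRecorder(stream);

  recorder.addEventListener('pause', () => console.log('Recording paused'));
  recorder.addEventListener('resume', () => console.log('Recording resumed'));

  recorder.start();
  recorder.pause();  // now fires the "pause" event in Firefox 65
  recorder.resume(); // now fires the "resume" event
}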

Firefox Internals

We’ve also updated several aspects of Firefox itself:

  • Support for Handoff between iOS and macOS devices is now available.
  • Preferences for content blocking have been completely redesigned to give people greater and more obvious control over how Firefox protects them from third-party tracking.
  • The about:performance dashboard now reports the memory used by tabs and extensions.
  • WebSockets have been implemented over HTTP/2.
  • Lastly, for Windows administrators, Firefox is now available as an MSI package in addition to a traditional self-extracting EXE.

WebExtensions improvements

We’ve added some useful WebExtensions API features too!

  • The Tabs API now allows extensions to control which tab gets focused when the current tab is closed. You can read more about the motivation for this feature on Piro’s blog, where he discusses it in the context of his Tree Style Tab extension.

Interoperability

The web often contains conflicting, non-standard, or under-specified markup, and it’s up to us to ensure that pages which work in other browsers also work in Firefox.

To that end, Firefox 65:

  • supports even more values of the non-standard -webkit-appearance CSS property.
  • behaves the same as other browsers when encountering the user-select CSS property in nested, shadow, or content editable contexts.
  • clears the content of <iframe>s when the src attribute is removed, matching the behavior of Safari and Chrome.

Further Reading

The post Firefox 65: WebP support, Flexbox Inspector, new tooling & platform updates appeared first on Mozilla Hacks - the Web developer blog.

The Firefox FrontierControl trackers your own way with Enhanced Tracking Protection from Firefox

It’s 2019 and we’re all tired of that uneasy feeling we get when we see an ad online that seems to know too much about us. You may feel like … Read more

The post Control trackers your own way with Enhanced Tracking Protection from Firefox appeared first on The Firefox Frontier.

The Mozilla BlogToday’s Firefox Gives Users More Control over their Privacy

Privacy. While it’s the buzzword for 2019, it has always been a core part of the Mozilla mission, and continues to be a driving force in how we create features for Firefox right from the start. For example, last year at this time we had just announced Firefox Quantum with Opt-in Tracking Protection.

We’ve always made privacy for our users a priority and we saw the appetite for more privacy-focused features that protect our users’ data and put them in control. So, we knew it was a no-brainer for us to meet this need. It’s one of the reasons we broadened our approach to anti-tracking.

One of the features we outlined in our approach to anti-tracking was Enhanced Tracking Protection, otherwise known as “removing cross-site tracking”. We initially announced in October that we would roll out Enhanced Tracking Protection off-by-default. This was just one of the many steps we took to help prepare users when we turn this on by default this year. We continue to experiment and share our journey to ensure we balance these new preferences with the experiences our users want and expect. Before we roll this feature out by default, we plan to run a few more experiments and users can expect to hear more from us about it.

As a result of some of our previous testing, we’re happy to announce a new set of redesigned controls for the Content Blocking section in today’s Firefox release where users can choose their desired level of privacy protection. Here’s a video that shows you how it works:

Firefox Enhanced Tracking Protection lets you see and control how websites track you on the web

Your Choice in How to Control your Privacy

When it comes to user privacy, choice and control come first and foremost. You can view the newly redesigned Content Blocking section in two ways. One is to click the small “i” icon in the address bar and, under Content Blocking, click the gear on the right side. The other is to go to your Preferences and click Privacy & Security on the left-hand side. From there, users will see Content Blocking listed at the top. There will be three distinct choices. They include:

  • Standard: For anyone who wants to “set it and forget it,” this is currently the default setting: we block known trackers in Private Browsing Mode. In the future, this setting will also block third-party tracking cookies.

Block known trackers in Private Browsing Mode

  • Strict: For people who want a bit more protection and don’t mind if some sites break. This setting blocks known trackers by Firefox in all windows.

Block known trackers by Firefox in all windows

  • Custom: For those who want complete control to pick and choose what trackers and cookies they want to block. We talk more about tracking cookies here and about cross-site tracking on our Firefox Frontier blog post.
    • Trackers: You can choose to block in Private Windows or All Windows. You can also change your block list from two Disconnect lists: basic (recommended) or strict (blocks all known trackers).
    • Cookies: You have four blocking choices: third-party trackers; cookies from unvisited websites; all third-party cookies (may cause websites to break); and all cookies (will cause websites to break).

Pick and choose what trackers and cookies you want to block

Additional features in today’s Firefox release include:

  • AV1 Support – For Windows users, Firefox now supports the royalty-free video compression technology AV1. Mozilla has contributed to this new open standard, which keeps high-quality video affordable for everyone. It can open up business opportunities and remove barriers to entry for entrepreneurs, artists, and regular people. (A quick feature-detection sketch follows after this list.)
  • Updated Performance Management – For anyone who likes to look under the hood and find out why a specific web page is taking too long to load, check out our revamped Task Manager page by typing about:performance in the address bar. It reports memory usage for tabs and add-ons. From there you can see what the likely cause is (a tab, ads in a tab, an extension, etc.) and find a solution, whether that is refreshing or closing the tab, blocking the tab, or uninstalling the extension.
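
For web developers who want to take advantage of the new AV1 support, a page can feature-detect it before serving AV1 content. A minimal sketch (the codec string here is an illustrative example, not something specified in this post):

    // Ask the media stack whether it can decode AV1 in an MP4 container.
    const probe = document.createElement("video");
    const support = probe.canPlayType('video/mp4; codecs="av01.0.05M.08"');
    console.log(support ? "AV1 support: " + support : "AV1 not supported");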

For the complete list of what’s new or what we’ve changed, you can check out today’s release notes.

Check out and download the latest version of Firefox Quantum, available here.

The post Today’s Firefox Gives Users More Control over their Privacy appeared first on The Mozilla Blog.

David BryantMozilla Celebrates Release of Free, High-Quality Video Compression Technology AV1 in Firefox 65

Blame cord cutters. Or cell phones. Or the rise of great original content. Whatever the reason, people now have an obvious and insatiable hunger for streaming online video and that demand is only increasing.

Whether it’s their favorite Netflix shows or must-see live sports, people want to watch more video. They want it now, on all their devices — computer, laptop, tablet and mobile — and they want it to be high quality. But what you might not know is that there’s been a battle going on behind the scenes over who is allowed to use the technology needed to bring video to the people.

For the past several years companies and creators have had to pay millions of dollars in licensing fees to use the technology that helps deliver videos to consumers. This makes it difficult or even impossible for creators to innovate on new platforms that deliver high-quality video.

We’ve been working hard to change all that, and today’s release of Firefox 65 marks another important milestone in that revolution. The Alliance for Open Media (AOMedia), a consortium featuring some of the biggest names in content creation, software, hardware, video conferencing and web technologies including Amazon, Apple, ARM, Cisco, Facebook, Google, IBM, Intel, Microsoft, Netflix and NVIDIA, has developed and standardized a next-generation royalty-free video compression technology called AV1. In short, this will allow producers and consumers of content to access the best in video compression technology that was, until now, prohibitively expensive. Firefox 65 includes support for AV1 so any of that content can be freely enjoyed by all.

We think someone’s ability to participate in online video shouldn’t be dependent on the size of their checkbook.

It’s something we’re passionate about at Mozilla. Our engineers working on the Daala project spent years studying how we could create a better way to compress videos, and in the spirit of Mozilla that better way had to be open source so anyone could have access. To succeed however, we would also need all parties to ensure there would be no royalty fees. In 2015 we helped launch AOMedia to ensure that video compression technology becomes a public resource, open and accessible to all.

For this to work, it wasn’t good enough for the technology to be royalty-free. It also had to be superior to today’s royalty-encumbered alternatives and offer better quality for a large number of use cases. We worked with our partners to make sure that what we settled on creating could stand up against and surpass the existing alternatives.

AOM and AV1 were able to get to this point because this initiative isn’t just about software makers. We’ve also had hardware manufacturers on board, which means you’ll see the technology in cell phones, computers and TVs. The diversity of interests assures we have a wide enough market representation to push for this adoption and the follow through to actually implement it.

An open source and royalty free video codec is needed for video to thrive on the internet. If licensing fees become a relic of the past then the expensive barrier to entry for new content creators and streaming platforms will be eliminated. They’ll no longer have to fear the threat of patent lawsuits, and can move forward unleashed.

If this barrier to entry for online video services is removed, that’s a victory for consumers. Consumers get more choices as more start-ups will enter the marketplace with an ability to compete with the big companies who, until now, were the only ones with pockets deep enough to afford the fees to deliver high quality video online.

The AV1 format already compresses video about 30 percent better than competing formats such as HEVC and VP9, and we’re not done yet. We’ve only just scratched the surface of what is possible. The fact that this technology is free will push open the doors of innovation and supports our mission of building an Internet that is open and accessible to all.

So creators, grab your cameras, and consumers, get ready to take your binge-watching to the next level, because streaming video on the Internet is about to get a whole lot better.


Mozilla Celebrates Release of Free, High-Quality Video Compression Technology AV1 in Firefox 65 was originally published in Mozilla Tech on Medium, where people are continuing the conversation by highlighting and responding to this story.

This Week In RustThis Week in Rust 271

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is typetag, a small crate to allow for serde trait objects. Thanks to Christopher Durham for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

186 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs

New RFCs

Upcoming Events

Online
Africa
Asia Pacific
Europe
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Rust is kind of nice in that it lets you choose between type erasure and monomorphization, or between heap-allocation and stack-allocation, but the downside is that you have to choose.

– Brook Heisler on discord (login needed, sorry!)

Thanks to scottmcm for the suggestion!

Please submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and Flavsditz.

Discuss on r/rust.

Niko MatsakisSalsa: Incremental recompilation

So for the last couple of months or so, I’ve been hacking in my spare time on this library named salsa, along with a number of awesome other folks. Salsa basically extracts the incremental recompilation techniques that we built for rustc into a general-purpose framework that can be used by other programs. Salsa is developing quickly: with the publishing of v0.10.0, we saw a big step up in the overall ergonomics, and I think the current interface is starting to feel very nice.

Salsa is in use by a number of other projects. For example, matklad’s rust-analyzer, a nascent Rust IDE, is using salsa, as is the Lark1 compiler. Notably, rustc does not – it still uses its own incremental engine, which has some pros and cons compared to salsa.2

If you’d like to learn more about Salsa, you can check out the Hello World example – but, even better, you can check out two videos that I just recorded:

  • How Salsa Works, which gives a high-level introduction to the key concepts involved and shows how to use salsa;
  • Salsa In More Depth, which really digs into the incremental algorithm and explains – at a high-level – how Salsa is implemented.
    • Thanks to Jonathan Turner for helping me to make this one!

If you’re interested in salsa, please jump on to our Zulip instance at salsa.zulipchat.com. It’s a really fun project to hack on, and we’re definitely still looking for people to help out with the implementation and the design. Over the next few weeks, I expect to be outlining a “path to 1.0” with a number of features that we need to push over the finish line.

Footnotes

  1. …worthy of a post of its own, but never mind.

  2. I would like to eventually port rustc to salsa, but it’s not a direct goal.

Mozilla Security BlogDefining the tracking practices that will be blocked in Firefox

For years, web users have endured major privacy violations. Their browsing continues to be routinely and silently tracked across the web. Tracking techniques have advanced to the point where users cannot meaningfully control how their personal data is used.

At Mozilla, we believe that privacy is fundamental, and that pervasive online tracking is unacceptable. Simply put: users need more protection from tracking. In late 2018, Mozilla announced that we are changing our approach to anti-tracking, with a focus on providing tracking protection by default, for the benefit of everyone using Firefox.

In support of this effort, today we are releasing an anti-tracking policy that outlines the tracking practices that Firefox will block by default. At a high level, this new policy will curtail tracking techniques that are used to build profiles of users’ browsing activity. In the policy, we outline the types of tracking practices that users cannot meaningfully control. Firefox may apply technical restrictions to the parties found using each of these techniques.

With the release of our new policy, we’ve defined the set of tracking practices that we think users need to be protected against. As a first step in enforcing this policy, Firefox includes a feature that prevents domains classified as trackers from using cookies and other browser storage features (e.g., DOM storage) when loaded as third parties. While this feature is currently off by default, we are working towards turning it on for all of our users in a future release of Firefox.

Furthermore, the policy also covers query string tracking, browser fingerprinting, and supercookies. We intend to apply protections that block these tracking practices in Firefox in the future.

Parties not wishing to be blocked by this policy should stop tracking Firefox users across websites. To classify trackers, we rely on Disconnect’s Tracking Protection list, which is curated in alignment with this policy. If a party changes their tracking practices and updates their public documentation to reflect these changes, they should work with Disconnect to update the classification of their domains.

This initial release of the anti-tracking policy is not meant to be the final version. Instead, the policy is a living document that we will update in response to the discovery and use of new tracking techniques. We believe that all web browsers have a fundamental obligation to protect users from tracking and we hope the launch of our policy advances the conversation about what privacy protections should be the default for all web users.

Clarification (2019-01-28): Added a sentence to clarify the current status of the cookie blocking feature.

The post Defining the tracking practices that will be blocked in Firefox appeared first on Mozilla Security Blog.

Don MartiPerfect timing

(I work for Mozilla. Not speaking for Mozilla here.)

January 28, 2019:

Male impotence, substance abuse, right-wing politics, left-wing politics, sexually transmitted diseases, cancer, mental health....Intimate and highly sensitive inferences such as these are then systematically broadcast and shared with what can be thousands of third party companies, via the real-time ad auction broadcast process which powers the modern programmatic online advertising system. So essentially you’re looking at the rear-end reality of how creepy ads work.

—Natasha Lomas, on TechCrunch (https://techcrunch.com/2019/01/27/google-and-iab-ad-category-lists-show-massive-leakage-of-highly-intimate-data-gdpr-complaint-claims/)

Also January 28, 2019:

Simply put: users need more protection from tracking....In support of this effort, today we are releasing an anti-tracking policy that outlines the tracking practices that Firefox will block by default. At a high level, this new policy will curtail tracking techniques that are used to build profiles of users’ browsing activity. In the policy, we outline the types of tracking practices that users cannot meaningfully control.

—Steven Englehardt and Marshall Erwin, on the Mozilla Security Blog (https://blog.mozilla.org/security/2019/01/28/defining-the-tracking-practices-that-will-be-blocked-in-firefox/)

Ian BickingThe Over-engaged Knowledge Worker

I recently listened to a discussion of knowledge work in the browser. Along the way people imagined idealized workflows and the tools that could enable them. This result felt familiar from concept videos since forever (such as this old Mozilla concept video):

The result featured lots of jet-setting highly engaged people deep in collaboration. For instance: Joe sends his friend a mortgage refinancing proposal to get feedback.

None of my friends have ever just blasted a mortgage refinancing proposal to me for a quick review. Thank god. But I’ve gotten similar requests, we all have, and nobody wants to receive these things. Usually the request sits guiltily in my inbox, mocking me and my purported friendship. If it’s job-related I will eventually get to even the work I loathe, but there’s always a particular pile of work that haunts me. This is not engagement.

This is the reality of knowledge work that none of these conceptualizations address: it’s hard (in very specific ways), some of it we don’t want to do, and the work we don’t want to do piles up and becomes dominant simply because it remains undone.

Our real work looks different than how we idealize our work: work items are smaller, less impactful, higher-touch, and collaboration spreads work out over time, decreasing personal engagement. We also imagine situations where people are much more actively engaged as a total percentage of their interactions, while we spend a lot of time passively receiving information, or simply deciding: what, if anything, should I react to?

So then what?

So how might we approach idea generation around knowledge work without idealizing the knowledge worker?

We could still go too far into acceptance. We’d build solitaire into the browser, the ultimate knowledge worker tool. It’s what people want! People shouldn’t have to pick up their phones to be distracted, we should keep the web as the universal distraction platform it was always meant to be. Oops, I make the mistake of thinking phone-vs-web, rather we should focus on providing distraction continuity across all your platforms. Sorry, this sarcasm is becoming uncomfortable…

I’m not sure a distraction tool is wrong. Giving people a mental break when they want it, but without trying to capture those people, could be positive. The web is full of mental breaks, but they aren’t “breaks”, they are manufactured to hold onto attention long after the needed break has finished.

But if we don’t build engagement tools (because people aren’t looking to be more engaged) and we don’t build distraction tools (because a web browser is already a sufficiently good distraction tool), what do we build?

I think there’s more opportunity in accepting the mentally fatigued and distracted state of knowledge workers, and working from that instead of against it. With that in mind I’d break down the problem into a few categories:

  1. Reduce the drain of knowledge work, so that distractions are less necessary.
  2. Support positive mental relaxation.
  3. Support continuity of mental effort; make it easier to get back on track.

And I’d leave out:

  1. Efficiency: usually efficiency means speed, number of steps, integrations, and so it calls for higher engagement. We care about efficiency, but only the efficient use of mental resources.
  2. Blocking distractions: people want something out of distractions, and while we might aspire to replace distractions it’s probably unsustainable to block those distractions. Blocking is like starting an exercise plan by getting rid of all your chairs.
  3. Communication and collaboration: even if distractions don’t break your continuity, collaboration will! Collaboration is obviously an interesting space, but you can’t do anything without pulling your collaborators into yet another tool. Trying to convert other people to a new way of working is not mentally relaxing.

Here’s where I throw my hands up and admit that I don’t have solutions to these problems, just a new problem statement.

But it does point in some different directions: how do we support a continuity of intention across a long task? In the context of the browser, how do we contextualize pages and interactions inside some abstract task? How do we clarify context? If the human is forced to multitask, can the multitasking tools be grounding instead of stretching us out?

The resulting exploration is not one that constructs an enviable user. It’s a user with virtual piles of papers on their desk, with a PB&J forgotten a third of the way down, with a People Magazine placed sneakily inside an important report, with a pile of mail where every single piece is marked Urgent: Open Immediately. People aren’t always knolling… but maybe we could be.

Cameron KaiserTenFourFox FPR12 available

TenFourFox Feature Parity Release 12 final is now available for testing (downloads, hashes, release notes). There are no additional changes except for one outstanding security update and to refresh the certificate and TLD stores. As usual it will go live Monday evening Pacific time assuming no difficulties.

For "lucky" FPR13 I want to take a whack at solving issue 541, since my ability to work on Github from the G5 is seriously impaired at the moment (I have to resort to various workarounds or do tasks from the Talos II with regular Firefox). Since this has some substantial regression risk it will probably be the only JavaScript change I do for that release pending further feasibility tests on the whole enchilada. However, a couple people have asked again about AppleScript support and there is an old patch around that I think could be dusted off and made to work. That release is scheduled for March 19.

Speaking of the Talos II, I should be getting my second POWER9 system in soon, a 4-core Raptor Blackbird we'll be using as a media system. I've already got the mATX case picked out and some decent peripherals and it will probably run Fedora also, since I'm pretty accustomed to it by now. If these systems are starting to interest you but the sticker shock of a full T2 loadout is too much, the Blackbird can give you a taste of next-generation Power ISA without too much pain to your pocketbook.

Meanwhile, over on our sister Talospace blog, if you've been thinking about the Linux plunge (either with a POWER9 or on your own system) but your Mac habits die hard, here's a better way to get the Command key to work properly than faffing about with AutoKey and you can still run Mac OS X apps in virtualization or emulation.

The Mozilla BlogMozilla Fosters the Next Generation of Women in Emerging Technologies

At Mozilla, we want to empower people to create technology that reflects the diversity of the world we live in. Today we’re excited to announce the release of the Inclusive Development Space toolkit. This is a way for anyone around the world to set up their own pop-up studio to support diverse creators.

The XR Studio was a first-of-its-kind pop-up at Mozilla’s San Francisco office in the Summer of 2018. It provided a deeply needed space for women and gender non-binary people to collaborate, learn and create projects using virtual reality, augmented reality, and artificial intelligence.

The XR Studio program was founded to offer a jump-start for women creators, providing access to mentors, equipment, ideas, and a community with others like them. Including a wide range of ages, technical abilities, and backgrounds was essential to the program experience.

Inclusive spaces are needed in the tech industry. In technology maker-spaces, eighty percent of makers are men. As technologies like VR and AI become more widespread, it’s crucial that a variety of viewpoints are represented to eliminate biases from lack of diversity.

The XR Studio cohort had round-the-clock access to high quality VR, AR, and mixed reality hardware, as well as mentorship from experts in the field. The group came together weekly to share experiences and connect with leading industry experts like Unity’s Timoni West, Fast.ai’s Rachel Thomas, and VR pioneer Brenda Laurel.

We received more than 100 applications in little over two weeks and accepted 32 participants. Many who applied cited a chance to experiment with futuristic tools as the most important reason for applying to the program, with career development a close second.

“I couldn’t imagine XR Studio being with any other organization. Don’t know if it would have had as much success if it wasn’t with Mozilla. That really accentuated the program.” – Tyler Musgrave, recently named Futurist in residence at ARVR Women.

Projects spanned efforts to improve bias awareness in education, self-defense training, criminal justice system education, identifying police surveillance, and more. Participants felt the safe and supportive environment gave them a unique advantage in technology creation. “With Mozilla’s XR Studio, I am surrounded by women just as passionate and supportive about creating XR products as I am,” said Neilda Pacquing, Founder and CEO of MindGlow, Inc., a company that focuses on safety training using immersive experiences. “There’s no other place like it and I feel I’ve gone further in creating my products than I would have without it.”

So what’s next?

The Mozilla XR Studio program offered an opportunity to learn and build confidence, overcome imposter syndrome, and make amazing projects. We learned lessons about architecting an inclusive space that we plan to use to create future Mozilla spaces that will support underrepresented groups in creating with emerging technologies.

Mozilla is also sponsoring the women in VR brunch at the Sundance Film Festival this Sunday. It will be a great opportunity to learn, collaborate, and fellowship with women from around the world. If you will be in the area, please reach out and say hello.

Want to create your own inclusive development space in your community, city or company? Check out our toolkit.

The post Mozilla Fosters the Next Generation of Women in Emerging Technologies appeared first on The Mozilla Blog.

The Firefox FrontierFast vs private? Have it all with Firefox.

Two years ago there weren’t many options when it came to a fast vs private browser. If you wanted fast internet, you had to give up privacy. If you went … Read more

The post Fast vs private? Have it all with Firefox. appeared first on The Firefox Frontier.

Mozilla Future Releases BlogClarifying the Future of Firefox Screenshots

Screenshots has been a popular part of Firefox since its launch in Firefox 56 in September 2017. Last year alone it was used by more than 20 million people to take nearly 180 million screenshots! The feature grew in popularity each month as new users discovered it in Firefox.

So it’s not surprising that any hints of changes coming to how we administer this popular feature generated interest from developers, press and everyday Firefox users. We want to take this opportunity to clarify exactly what the future holds for Screenshots.

What is happening to Screenshots?

The Screenshots feature is not being removed from Firefox.

Screenshots users will still be able to crop shots, capture visible parts of pages and even capture full web pages. Users will continue to be able to download these images and copy them to their clipboard.

What is changing is that in 2019 users will no longer have the option to save screenshots to a standalone server hosted by Firefox. Previously, shots could be saved to our server, expiring after two weeks unless a user expressly chose to save them for longer.

Why are we making this change?

While some users made use of the save-to-server feature, downloading and copying shots to clipboard have become far more popular options for our users. We’ve decided to simplify the Screenshots service by focusing on these two options and sunsetting the Screenshots server in 2019.

Where did the confusion come from?

We’re an open source organization so sometimes when we’re contemplating changes that will enhance the experience of our users, information is shared while we’re still noodling the right path forward. That was the case here. In response to user feedback, we had planned to change the “Save” button on Screenshots to “Upload” to better indicate that shots would be saved to a server. When we decided that we’d no longer be offering the save-to-server option for screenshots, we shelved the button copy change.

User feedback about the button copy had nothing to do with the removal of the server. We are choosing to take the latter step simply because the copy to clipboard and download options are considerably more popular and we want to offer a simpler user experience.

OK, so when do I have to clear out the “attic”?

Starting in Firefox 67, which will be released in May, users will no longer be able to upload shots to the Screenshots server. Pre-release users will see these changes starting in February as Firefox 67 enters Nightly.

We will be alerting users who have shots saved to the server by showing messaging about how to export their saved shots starting in February as well.

Users will have until late summer to export any permanently saved shots they have on the Screenshots server. You can visit our support site for additional information on how to manage this transition.

How are you gonna make it up to me? What’s coming next?

Screenshots quickly became a popular tool in Firefox. Look for new features like keyboard shortcuts and improved shot preview UI coming soon. We’re also interested in finding new ways to let Firefox users know the feature is there, and are planning experiments to highlight Screenshots as one of many tools that make Firefox unique.

The post Clarifying the Future of Firefox Screenshots appeared first on Future Releases.

Support.Mozilla.Org[Important] Changes to the SUMO staff team

TL;DR

  • Social Community Manager changes: Konstantina and Kiki will be taking over Social Community Management. As of today, Rachel has left Mozilla as an employee.
  • L10n/KB Community Manager changes: Ruben will be taking over Community Management for KB translations. As of today, Michal has left Mozilla as an employee.
  • SUMO community call to introduce Konstantina, Kiki and Ruben on the 24th of January at 9 am PST.
  • If you have questions or concerns please join the conversation on the SUMO forums or the SUMO discourse

Today we’d like to announce some changes to the SUMO staff team. Rachel McGuigan and Michał Dziewoński will be leaving Mozilla.

Rachel and Michal have been crucial to our efforts to create and run SUMO for many years. Rachel first showed great talent with her work on FxOS support, and her drive with our social support team has been crucial to supporting Firefox releases. Michal’s drive and passion for languages have ensured that the SUMO KB has fantastic language coverage and that support for using the free, open browser that is Firefox is available to more people. We wish Rachel and Michal all the best on their next adventures and thank them for their contributions to Mozilla.

With these changes, we will be thinking about how best to organize the SUMO team. Rest assured, we will continue investing in community management and will be growing the overall size of the SUMO team throughout 2019.

In the meantime Konstantina, Kiki and Ruben will be stepping in temporarily while we seek to backfill these roles to help us ensure we still have full focus on our work and continue working on our projects with you all.

We are confident in the positive future of SUMO in Mozilla, and we remain excited about the many new products and platforms we will introduce support for.  We have an incredible opportunity in front of us to continue delivering huge impact for Mozilla in 2019 and are looking forward to making this real with all of you.

Keep rocking the helpful web!

Mozilla GFXWebRender newsletter #37

Hi! Last week I mentioned picture caching landing in Nightly, and I am happy to report that it didn’t get backed out (never something to take for granted with a change of that importance) and is here to stay.
Another hot topic that didn’t appear in the newsletter was Jeff and Matt’s long investigation of content frame time telemetry numbers. It turned into a real saga, featuring performance improvements but also a lot of adjustments to the way we take the measurements, to make sure we get apples-to-apples comparisons of Firefox running with and without WebRender. The content frame time metric is important because it correlates with users’ perception of stuttering, and we now have solid measurements showing that WebRender improves it.

Notable WebRender and Gecko changes

  • Bobby did various code cleanups and improvements.
  • Chris wrote a prototype Windows app to test resizing a child HWND in a child process and figure out how to do that without glitches.
  • Matt fixed an SVG filter clipping issue.
  • Matt enabled SVG filters to be processed on the GPU in more cases.
  • Andrew fixed a pixel snapping issue with transforms.
  • Andrew fixed a blob image crash.
  • Emilio fixed a bug with perspective transforms.
  • Glenn included root content clip rect in picture caching world bounds.
  • Glenn added support for multiple dirty rects in picture caching.
  • Glenn fixed adding extremely large primitives to picture caching tile dependencies.
  • Glenn skipped some redundant work during picture caching updates.
  • Glenn removed unused clear color mode.
  • Glenn reduced invalidation caused by world clip rects.
  • Glenn fixed an invalidation issue with picture caching when encountering a blur filter.
  • Glenn avoided interning text run primitives due to scrolled offset field.
  • Sotaro improved the performance of large animated SVGs in some cases.

Ongoing work

The team keeps going through the remaining blockers (7 P2 bugs and 20 P3 bugs at the time of writing).

Enabling WebRender in Firefox Nightly

In about:config, set the pref “gfx.webrender.all” to true and restart the browser.
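
If you prefer to set this from a profile rather than flipping it by hand, the same pref can go in a user.js file in your Firefox profile directory (this is just an alternative to about:config; the pref name is the one quoted above):

    // user.js: force-enable WebRender on Nightly, same pref as above.
    user_pref("gfx.webrender.all", true);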

Reporting bugs

The best place to report bugs related to WebRender in Firefox is the Graphics :: WebRender component in Bugzilla.
Note that it is possible to log in with a GitHub account.