QMO: Firefox 66 Beta 8 Testday Results

Hello Mozillians!

As you may already know, last Friday, February 15th, we held a new Testday event for Firefox 66 Beta 8.

Thank you all for helping us make Mozilla a better place: gaby2300, Priyadharshini A and Aishwarya Narasimhan.

Results:

– several test cases executed for “Storage Access API/Cookie Restrictions”.

Thanks for another successful testday! 🙂

SeaMonkey: All humors aside… ;(

Hi Everyone,

tl;dr: I’ve messed up majorly and need to revamp the infrastructure, meaning further delaying any hopes of releasing ANYTHING. Blame lies solely on me, as I had contracted a serious case of “Thomas the Steam Engine”-itis.

That said, things probably aren’t that bad; I’m just so deep into this hole that I’ve dug the project into, I certainly can’t see the light of day.

Call me, Mole…  Mr. Mole.

— Long Missive —

I have taken up the mantle of the person who will bring bad news. (Could be good news, depending on your point of view, I guess).

But first, a confession.

I screwed up. I admit it. In the past, our old infrastructure’s CI was manageable; I just had to fix up some code. But now things have become untenable, as the current build process is completely incompatible with the current CI code, and changing the whole backend codebase requires understanding the current build process (which has changed dramatically since Mozilla moved to using TaskCluster).
(NB: No.. Don’t get me wrong. I’m not blaming Mozilla. Just saying that *I* can’t keep up with their changes, which probably speaks volumes of my competencies and delusional thought process.) So what is needed is revamping the whole CI code to make it work.

That said, since time is of the essence, I’ve consulted with the rest of the guys and we’re moving to Jenkins. Revamping the old CI code would require hacking at already Frankenstein-like code, and that technical debt needs to be paid.

Is it the end of the world? No. I’m just particularly unhappy that we’re in this situation; but hindsight is always 20/20. So yes, this means any future releases will depend on getting the whole build process encoded into whatever way Jenkins requires.

Also note that SeaMonkey needs to completely stop relying on Mozilla’s infrastructure (*every single thing*, including this blog, bugzilla… you name it.. we need to be off it) by end of the year.

Anyway, I sincerely apologize for the mess; both to every single one of SeaMonkey’s users (both past and present) and to my fellow devs (again, both past and present). In my defense, last year I thought (with delusions of competency) that moving to Azure wouldn’t be that problematic, and while I did get a ‘running’ (though not really building) infra, everything went crazy near the end of last year when I realized the required builds and branches needed new toolchains; and building those toolchains required a lot of time and energy. In the end… a failed attempt at keeping up to date with the whole build process.

So… the project is at a standstill.

In any event, I would like to thank everyone for their support in the past and continual support and infinite patience as this project continues to climb this steep (or as Richie from “Bottom” would say, “f’ing” vertical) hill/mountain.

*sigh*

:ewong

NB: In other words… Live and Learn.

Mozilla Add-ons Blog: Extensions in Firefox 66

Firefox 66 is currently in beta and, for extension developers, the changes to the WebExtensions API center primarily around improving performance, stability, and the development experience. A total of 30 issues were resolved in Firefox 66, including contributions from several volunteer community members.

Major Performance Improvements for Storage

I want to start by highlighting an important change that has a major, positive impact for Firefox users. Starting in release 66, extensions use IndexedDB as the backend for local storage instead of a JSON file. This results in a significant performance improvement for many extensions, while simultaneously reducing the amount of memory that Firefox uses.

This change is completely transparent to extension developers – you do not need to do anything to take advantage of this improvement. When users upgrade to Firefox 66, the local storage JSON file is silently migrated to IndexedDB. All extensions using the storage.local API immediately realize the benefits, especially if they store small changes to large structures, as is true for ad blockers, the most common and popular type of extension used in Firefox.
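
As a quick illustration of the API involved (a minimal sketch, not from the original post; the blockedDomains key is hypothetical), reading and writing through storage.local looks like this:

async function saveBlockList(domains) {
    // persist a potentially large structure; in Firefox 66 and later this
    // is backed by IndexedDB instead of a JSON file
    await browser.storage.local.set({ blockedDomains: domains })
}

async function loadBlockList() {
    // read the value back, defaulting to an empty list if nothing is stored
    const { blockedDomains = [] } = await browser.storage.local.get('blockedDomains')
    return blockedDomains
}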

The video below, using Adblock Plus as an example, shows the significant performance improvements that extension users could see.

Other Improvements

The remaining bug fixes and feature enhancements won’t be as noticeable as the change to local storage, but they nevertheless raise the overall quality of the WebExtensions API and make the development experience better.  Some of the highlights include:

Thank you to everyone who contributed to the Firefox 66 release, but a special thank you to our volunteer community contributors, including: tossj, Varun Dey, and Edward Wu.

The post Extensions in Firefox 66 appeared first on Mozilla Add-ons Blog.

Mozilla VR Blog: Jingle Smash: Geometry and Textures

Jingle Smash: Geometry and Textures

This is part 3 of my series on how I built Jingle Smash, a block smashing WebVR game.

I’m not a designer or artist. In previous demos and games I’ve used GLTF models created by someone else and downloaded into my game. However, for Jingle Smash I decided to use procedural generation, meaning I combined primitives in interesting ways using code. I also generated all of the textures with code. I don’t know how to draw pretty textures by hand in a painting tool, but 20 years of 2D coding means I can code up a texture pretty easily.

Jingle Smash has three sets of graphics: the blocks, the balls, and the background imagery. Each set uses its own graphics technique.

Block Textures

Each block type uses a single texture, placed on every side of the block. The blocks that you can knock over I call ‘presents’, and I gave them red ribbon stripes over a white background. I drew this into an HTML Canvas with standard 2D canvas code, then turned it into a texture using the THREE.CanvasTexture class.

const canvas = document.createElement('canvas')
canvas.width = 128
canvas.height = 128
const c = canvas.getContext('2d')

//white background
c.fillStyle = 'white'
c.fillRect(0,0,canvas.width, canvas.height)

//lower left for the sides
c.save()
c.translate(0,canvas.height/2)
c.fillStyle = 'red'
c.fillRect(canvas.width/8*1.5, 0, canvas.width/8, canvas.height/2)
c.restore()

//upper left for the bottom and top
c.save()
c.translate(0,0)
c.fillStyle = 'red'
c.fillRect(canvas.width/8*1.5, 0, canvas.width/8, canvas.height/2)
c.fillStyle = 'red'
c.fillRect(0,canvas.height/8*1.5, canvas.width/2, canvas.height/8)
c.restore()

c.fillStyle = 'black'

const tex = new THREE.CanvasTexture(canvas)
this.textures.present1 = tex

this.materials[BLOCK_TYPES.BLOCK] = new THREE.MeshStandardMaterial({
    color: 'white',
    metalness: 0.0,
    roughness: 1.0,
    map:this.textures.present1,
})

Once the texture is made I can create a ThreeJS material with it. I tried to use PBR (physically based rendering) materials in this project. Since the presents are supposed to be made of paper I used a metalness of 0.0 and roughness of 1.0. All textures and materials are saved in global variables for reuse.

Here is the finished texture. The lower left part is used for the sides and the upper left for the top and bottom.

Jingle Smash: Geometry and Textures

The other two box textures are similar: a square and cross for the crystal boxes, and simple random noise for the walls.

Jingle Smash: Geometry and Textures
Jingle Smash: Geometry and Textures
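
For reference, here is a rough sketch of how a random-noise texture like the wall one can be generated with the same canvas approach (illustrative only, not the game’s actual code):

const canvas = document.createElement('canvas')
canvas.width = 128
canvas.height = 128
const c = canvas.getContext('2d')

//fill a 16x16 grid of 8-pixel cells with random light gray values
for(let x = 0; x < 16; x++) {
    for(let y = 0; y < 16; y++) {
        const v = 200 + Math.floor(Math.random() * 56)
        c.fillStyle = `rgb(${v},${v},${v})`
        c.fillRect(x * 8, y * 8, 8, 8)
    }
}

const noiseTex = new THREE.CanvasTexture(canvas)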

Skinning the Box

By default a BoxGeometry will put the same texture on all six sides of the box. However, we want to use different portions of the texture above for different sides. This is controlled with the UV values of each face. Fortunately ThreeJS has a face abstraction to make this easy. You can loop over the faces and manipulate the UVs however you wish. I scaled and moved them around to capture just the parts of the texture I wanted.

geo.faceVertexUvs[0].forEach((f,i)=>{
    if(i === 4 || i===5 || i===6 || i===7 ) {
        f.forEach(uv=>{
            uv.x *= 0.5 //scale down
            uv.y *= 0.5 //scale down
            uv.y += 0.5 //move from lower left quadrant to upper left quadrant
        })
    } else {
        //rest of the sides. scale it in
        f.forEach(uv=>{
            uv.x *= 0.5 // scale down
            uv.y *= 0.5 // scale down
        })
    }
})

Striped Ornaments

There are two different balls you can shoot: a spherical ornament with a stem, and an oblong textured one. For the textures I just generated stripes with canvas.

{
    const canvas = document.createElement('canvas')
    canvas.width = 64
    canvas.height = 16
    const c = canvas.getContext('2d')

    c.fillStyle = 'black'
    c.fillRect(0, 0, canvas.width, canvas.height)
    c.fillStyle = 'red'
    c.fillRect(0, 0, 30, canvas.height)
    c.fillStyle = 'white'
    c.fillRect(30, 0, 4, canvas.height)
    c.fillStyle = 'green'
    c.fillRect(34, 0, 30, canvas.height)

    this.textures.ornament1 = new THREE.CanvasTexture(canvas)
    this.textures.ornament1.wrapS = THREE.RepeatWrapping
    this.textures.ornament1.repeat.set(8, 1)
}

{
    const canvas = document.createElement('canvas')
    canvas.width = 128
    canvas.height = 128
    const c = canvas.getContext('2d')
    c.fillStyle = 'black'
    c.fillRect(0,0,canvas.width, canvas.height)

    c.fillStyle = 'red'
    c.fillRect(0, 0, canvas.width, canvas.height/2)
    c.fillStyle = 'white'
    c.fillRect(0, canvas.height/2, canvas.width, canvas.height/2)

    const tex = new THREE.CanvasTexture(canvas)
    tex.wrapS = THREE.RepeatWrapping
    tex.wrapT = THREE.RepeatWrapping
    tex.repeat.set(6,6)
    this.textures.ornament2 = tex
}

The code above produces these textures:

Jingle Smash: Geometry and Textures
Jingle Smash: Geometry and Textures

What makes the textures interesting is repeating them on the ornaments. ThreeJS makes this really easy by using the wrap and repeat values, as shown in the code above.

One of the ornaments is meant to have an oblong double turnip shape, so I used a LatheGeometry. With a lathe you define a curve and ThreeJS will rotate it to produce a 3D mesh. I created the curve with the equations x = Math.sin(i*0.195) * radius and y = i * radius / 7.

let points = [];
for (let i = 0; i <= 16; i++) {
    points.push(new THREE.Vector2(Math.sin(i * 0.195) * rad, i * rad / 7));
}
var geometry = new THREE.LatheBufferGeometry(points);
geometry.center()
return new THREE.Mesh(geometry, new THREE.MeshStandardMaterial({
    color: 'white',
    metalness: 0.3,
    roughness: 0.3,
    map: this.textures.ornament1,
}))

Jingle Smash: Geometry and Textures

For the other ornament I wanted a round ball with a stem on the end like a real Christmas tree ornament. To build this I combined a sphere and cylinder.

const geo = new THREE.Geometry()
geo.merge(new THREE.SphereGeometry(rad,32))
const stem = new THREE.CylinderGeometry(rad/4,rad/4,0.5,8)
stem.translate(0,rad/4,0)
geo.merge(stem)
return new THREE.Mesh(geo, new THREE.MeshStandardMaterial({
    color: 'white',
    metalness: 0.3,
    roughness: 0.3,
    map: this.textures.ornament2,
}))

Jingle Smash: Geometry and Textures

Since I wanted the ornaments to appear shiny and plasticky, but not as shiny as a chrome sphere, I used metalness and roughness values of 0.3 and 0.3.

Note that I had to center the oblong ornament with geometry.center(). Even though the ornaments have different shapes, I represented them both as spheres on the physics side. If you roll the oblong one on the ground it may look strange to see it roll perfectly like a ball, but it was good enough for this game. Game development is all about cutting the right corners.

Building a Background

It might not look like it if you are in a 3 degree of freedom (3dof) headset like the Oculus Go, but the background is not a static painting. The clouds in the sky are an image but everything else was created with real geometry.

Jingle Smash: Geometry and Textures

The snow-covered hills are actually full spheres placed mostly below the ground plane. The trees and candy mountains are all simple cones. The underlying stripe texture I drew in Acorn, a desktop drawing app. Other than the clouds it is the only real texture I used in the game. I probably could have done the stripe in code as well, but I was running out of time. In fact both the trees and the candy mountains use the exact same texture, just with a different base color.

const tex = game.texture_loader.load('./textures/candycane.png')
tex.wrapS = THREE.RepeatWrapping
tex.wrapT = THREE.RepeatWrapping
tex.repeat.set(8,8)

const background = new THREE.Group()

const candyCones = new THREE.Geometry()
candyCones.merge(new THREE.ConeGeometry(1,10,16,8).translate(-22,5,0))
candyCones.merge(new THREE.ConeGeometry(1,10,16,8).translate(22,5,0))
candyCones.merge(new THREE.ConeGeometry(1,10,16,8).translate(7,5,-30))
candyCones.merge(new THREE.ConeGeometry(1,10,16,8).translate(-13,5,-20))
background.add(new THREE.Mesh(candyCones,new THREE.MeshLambertMaterial({ color:'white', map:tex,})))

const greenCones = new THREE.Geometry()
greenCones.merge(new THREE.ConeGeometry(1,5,16,8).translate(-15,2,-5))
greenCones.merge(new THREE.ConeGeometry(1,5,16,8).translate(-8,2,-28))
greenCones.merge(new THREE.ConeGeometry(1,5,16,8).translate(-8.5,0,-25))
greenCones.merge(new THREE.ConeGeometry(1,5,16,8).translate(15,2,-5))
greenCones.merge(new THREE.ConeGeometry(1,5,16,8).translate(14,0,-3))

background.add(new THREE.Mesh(greenCones,new THREE.MeshLambertMaterial({color:'green', map:tex,})))
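
The snow-covered hills mentioned earlier are just large spheres sunk mostly below the ground plane. Here is a minimal sketch of that idea (the radii and positions are made up for illustration, not the game’s actual values):

const hills = new THREE.Geometry()
hills.merge(new THREE.SphereGeometry(20, 32, 32).translate(-30, -15, -40))
hills.merge(new THREE.SphereGeometry(25, 32, 32).translate(40, -18, -60))
background.add(new THREE.Mesh(hills, new THREE.MeshLambertMaterial({color: 'white'})))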

Jingle Smash: Geometry and Textures

All of them were positioned by hand in code. To make this work I had to constantly adjust code then reload the scene in VR. At first I would just preview in my desktop browser, but to really feel how the scene looks you have to view it in a real 3D headset. This is one of the magical parts about VR development with the web. Iteration is so fast.

Note that even though I have many different cones I merged them all into just two geometries so they can be drawn together. It’s far better to have two draw calls instead of 10 for a static background.

Next Steps

I'm pretty happy with how the textures turned out. By sticking to just a few core colors I was able to create both consistency and variety. Furthermore, I was able to do it without any 3D modeling, just some simple canvas code and a lot of iteration.

Next time I'll dive into the in-game level editor.

Web Application Security: Why Does Mozilla Maintain Our Own Root Certificate Store?

Mozilla maintains a database containing a set of “root” certificates that we use as “trust anchors”. This database, commonly referred to as a “root store”, allows us to determine which Certificate Authorities (CAs) can issue SSL/TLS certificates that are trusted by Firefox, and email certificates that are trusted by Thunderbird. Properly maintaining a root store is a significant undertaking – it requires constant effort to evaluate new trust anchors, monitor existing ones, and react to incidents that threaten our users. Despite the effort involved, Mozilla is committed to maintaining our own root store because doing so is vital to the security of our products and the web in general. It gives us the ability to set policies, determine which CAs meet them, and to take action when a CA fails to do so.

A major advantage to controlling our own root store is that we can do so in a way that reflects our values. We manage our CA Certificate Program in the open, and by encouraging public participation we give individuals a voice in these trust decisions. Our root inclusion process is one example. We process lots of data and perform significant due diligence, then publish our findings and hold a public discussion before accepting each new root. Managing our own root store also allows us to have a public incident reporting process that emphasizes disclosure and learning from experts in the field. Our mailing list includes participants from many CAs, CA auditors, and other root store operators and is the most widely recognized forum for open, public discussion of policy issues.

The value delivered by our root program extends far beyond Mozilla. Everyone who relies on publicly-trusted certificates benefits from our work, regardless of their choice of browser. And because our root store, which is part of the NSS cryptographic library, is open source, it has become a de-facto standard for many Linux distributions and other products that need a root store but don’t have the resources to curate their own. Providing one root store that many different products can rely on, regardless of platform, reduces compatibility problems that would result from each product having a unique set of root certificates.

Finally, operating a root store allows Mozilla to lead and influence the entire web Public Key Infrastructure (PKI) ecosystem. We created the Common Certificate Authority Database (CCADB) to help us manage our own program, and have since opened it up to other root store operators, resulting in better information and less redundant work for all involved. With full membership in the CA/Browser Forum, we collaborate with other root store operators, CAs, and auditors to create standards that continue to increase the trustworthiness of CAs and the SSL/TLS certificates they issue. Our most recent effort was aimed at improving the standards for validating IP Addresses.

The primary alternative to running our own root store is to rely on the one that is built in to most operating systems (OSs). However, relying on our own root store allows us to provide a consistent experience across OS platforms because we can guarantee that the exact same set of trust anchors is available to Firefox. In addition, OS vendors often serve customers in government and industry in addition to their end users, putting them in a position to sometimes make root store decisions that Mozilla would not consider to be in the best interest of individuals.

Sometimes we experience problems that wouldn’t have occurred if Firefox relied on the OS root store. Companies often want to add their own private trust anchors to systems that they control, and it is easier for them if they can modify the OS root store and assume that all applications will rely on it. The same is true for products that intercept traffic on a computer. For example, many antivirus programs unfortunately include a web filtering feature that intercepts HTTPS requests by adding a special trust anchor to the OS root store. This will trigger security errors in Firefox unless the vendor supports Firefox by turning on the setting we provide to address these situations.

Principle 4 of the Mozilla Manifesto states that “Individuals’ security and privacy on the internet are fundamental and must not be treated as optional.” The costs of maintaining a CA Certificate Program and root store are significant, but there are fundamental benefits for our users and the larger internet community that undoubtedly make doing it ourselves the right choice for Mozilla.

The post Why Does Mozilla Maintain Our Own Root Certificate Store? appeared first on Mozilla Security Blog.

hacks.mozilla.org: Fearless Security: Thread Safety

In Part 2 of my three-part Fearless Security series, I’ll explore thread safety.

Today’s applications are multi-threaded—instead of sequentially completing tasks, a program uses threads to perform multiple tasks simultaneously. We all use concurrency and parallelism every day:

  • Web sites serve multiple simultaneous users.
  • User interfaces perform background work that doesn’t interrupt the user. (Imagine if your application froze each time you typed a character because it was spell-checking).
  • Multiple applications can run at the same time on a computer.

While this allows programs to do more, faster, it comes with a set of synchronization problems, namely deadlocks and data races. From a security standpoint, why do we care about thread safety? Memory safety bugs and thread safety bugs share the same core problem: invalid resource use. Concurrency attacks can lead to similar consequences as memory attacks, including privilege escalation, arbitrary code execution (ACE), and bypassing security checks.

Concurrency bugs, like implementation bugs, are closely related to program correctness. While memory vulnerabilities are nearly always dangerous, implementation/logic bugs don’t always indicate a security concern, unless they occur in the part of the code that deals with ensuring security contracts are upheld (e.g. allowing a security check bypass). However, while security problems stemming from logic errors often occur near the error in sequential code, concurrency bugs often happen in different functions from their corresponding vulnerability, making them difficult to trace and resolve. Another complication is the overlap between mishandling memory and concurrency flaws, which we see in data races.

Programming languages have evolved different concurrency strategies to help developers manage both the performance and security challenges of multi-threaded applications.

Problems with concurrency

It’s a common axiom that parallel programming is hard—our brains are better at sequential reasoning. Concurrent code can have unexpected and unwanted interactions between threads, including deadlocks, race conditions, and data races.

A deadlock occurs when multiple threads are each waiting on the other to take some action in order to proceed, leading to the threads becoming permanently blocked. While this is undesirable behavior and could cause a denial of service attack, it wouldn’t cause vulnerabilities like ACE.

A race condition is a situation in which the timing or ordering of tasks can affect the correctness of a program, while a data race happens when multiple threads attempt to concurrently access the same location in memory and at least one of those accesses is a write. There’s a lot of overlap between data races and race conditions, but they can also occur independently. There are no benign data races.

Potential consequences of concurrency bugs:

  1. Deadlock
  2. Information loss: another thread overwrites information
  3. Integrity loss: information from multiple threads is interlaced
  4. Loss of liveness: performance problems resulting from uneven access to shared resources

The best-known type of concurrency attack is called a TOCTOU (time of check to time of use) attack, which is a race condition between checking a condition (like a security credential) and using the results. TOCTOU attacks are examples of integrity loss.
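
As a toy illustration of the check-then-use gap (my own sketch in Node.js, not an example taken from this article or the paper discussed below):

const fs = require('fs/promises')

async function readConfig(path) {
    //time of check: make sure the path is not a symlink
    const info = await fs.lstat(path)
    if (info.isSymbolicLink()) throw new Error('refusing to follow symlink')

    //...another process can swap the file for a symlink right here...

    //time of use: the check above may no longer hold
    return fs.readFile(path, 'utf8')
}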

Deadlocks and loss of liveness are considered performance problems, not security issues, while information and integrity loss are both more likely to be security-related. This paper from Red Balloon Security examines some exploitable concurrency errors. One example is a pointer corruption that allows privilege escalation or remote execution—a function that loads a shared ELF (Executable and Linkable Format) library holds a semaphore correctly the first time it’s called, but the second time it doesn’t, enabling kernel memory corruption. This attack is an example of information loss.

The trickiest part of concurrent programming is testing and debugging—concurrency bugs have poor reproducibility. Event timings, operating system decisions, network traffic, etc. can all cause different behavior each time you run a program that has a concurrency bug.

Not only can behavior change each time we run a concurrent program, but inserting print or debugging statements can also modify the behavior, causing heisenbugs (nondeterministic, hard to reproduce bugs that are common in concurrent programming) to mysteriously disappear. These operations are slow compared to others and change message interleaving and event timing accordingly.

Concurrent programming is hard. Predicting how concurrent code interacts with other concurrent code is difficult to do. When bugs appear, they’re difficult to find and fix. Instead of relying on programmers to worry about this, let’s look at ways to design programs and use languages to make it easier to write concurrent code.

First, we need to define what “threadsafe” means:

“A data type or static method is threadsafe if it behaves correctly when used from multiple threads, regardless of how those threads are executed, and without demanding additional coordination from the calling code.” (MIT)

How programming languages manage concurrency

In languages that don’t statically enforce thread safety, programmers must remain constantly vigilant when interacting with memory that can be shared with another thread and could change at any time. In sequential programming, we’re taught to avoid global variables in case another part of code has silently modified them. Like manual memory management, requiring programmers to safely mutate shared data is problematic.

Generally, programming languages are limited to two approaches for managing safe concurrency:

  1. Confining mutability or limiting sharing
  2. Manual thread safety (e.g. locks, semaphores)

Languages that limit threading either confine mutable variables to a single thread or require that all shared variables be immutable. Both approaches eliminate the core problem of data races—improperly mutating shared data—but this can be too limiting. To solve this, languages have introduced low-level synchronization primitives like mutexes. These can be used to build threadsafe data structures.

Python and the global interpreter lock

The reference implementation of Python, CPython, has a mutex called the Global Interpreter Lock (GIL), which only allows a single thread to access a Python object. Multi-threaded Python is notorious for being inefficient because of the time spent waiting to acquire the GIL. Instead, most parallel Python programs use multiprocessing, meaning each process has its own GIL.

Java and runtime exceptions

Java is designed to support concurrent programming via a shared-memory model. Each thread has its own execution path, but is able to access any object in the program—it’s up to the programmer to synchronize accesses between threads using Java built-in primitives.

While Java has the building blocks for creating thread-safe programs, thread safety is not guaranteed by the compiler (unlike memory safety). If an unsynchronized memory access occurs (aka a data race), then Java will raise a runtime exception—however, this still relies on programmers appropriately using concurrency primitives.

C++ and the programmer’s brain

While Python avoids data races by synchronizing everything with the GIL, and Java raises runtime exceptions if it detects a data race, C++ relies on programmers to manually synchronize memory accesses. Prior to C++11, the standard library did not include concurrency primitives.

Most programming languages provide programmers with the tools to write thread-safe code, and post hoc methods exist for detecting data races and race conditions; however, this does not result in any guarantees of thread safety or data race freedom.

How does Rust manage concurrency?

Rust takes a multi-pronged approach to eliminating data races, using ownership rules and type safety to guarantee data race freedom at compile time.

The first post of this series introduced ownership—one of the core concepts of Rust. Each variable has a unique owner and can either be moved or borrowed. If a different thread needs to modify a resource, then we can transfer ownership by moving the variable to the new thread.

Moving enforces exclusion, allowing multiple threads to write to the same memory, but never at the same time. Since an owner is confined to a single thread, what happens if another thread borrows a variable?

In Rust, you can have either one mutable borrow or as many immutable borrows as you want. You can never simultaneously have a mutable borrow and an immutable borrow (or multiple mutable borrows). When we talk about memory safety, this ensures that resources are freed properly, but when we talk about thread safety, it means that only one thread can ever modify a variable at a time. Furthermore, we know that no other threads will try to reference an out of date borrow—borrowing enforces either sharing or writing, but never both.

Ownership was designed to mitigate memory vulnerabilities. It turns out that it also prevents data races.

While many programming languages have methods to enforce memory safety (like reference counting and garbage collection), they usually rely on manual synchronization or prohibitions on concurrent sharing to prevent data races. Rust’s approach addresses both kinds of safety by attempting to solve the core problem of identifying valid resource use and enforcing that validity during compilation.

Either one mutable borrow or infinitely many immutable borrows

But wait! There’s more!

The ownership rules prevent multiple threads from writing to the same memory and disallow a variable from being simultaneously shared between threads and mutable, but this doesn’t necessarily provide thread-safe data structures. Every data structure in Rust is either thread-safe or it’s not. This is communicated to the compiler using the type system.

“A well-typed program can’t go wrong.” (Robin Milner, 1978)

In programming languages, type systems describe valid behaviors. In other words, a well-typed program is well-defined. As long as our types are expressive enough to capture our intended meaning, then a well-typed program will behave as intended.

Rust is a type safe language—the compiler verifies that all types are consistent. For example, the following code would not compile:

    let mut x = "I am a string";
    x = 6;
    error[E0308]: mismatched types
     --> src/main.rs:6:5
      |
    6 | x = 6; //
      |     ^ expected &str, found integral variable
      |
      = note: expected type `&str`
                 found type `{integer}`

All variables in Rust have a type—often, they’re implicit. We can also define new types and describe what capabilities a type has using the trait system. Traits provide an interface abstraction in Rust. Two important built-in traits are Send and Sync, which are exposed by default by the Rust compiler for every type in a Rust program:

  • Send indicates that a struct may safely be sent between threads (required for an ownership move)
  • Sync indicates that a struct may safely be shared between threads

This example is a simplified version of the standard library code that spawns threads:

    fn spawn<Closure: Fn() + Send>(closure: Closure){ ... }

    let x = std::rc::Rc::new(6);
    spawn(move || { x; });

The spawn function takes a single argument, closure, and requires that closure has a type that implements the Send and Fn traits. When we try to spawn a thread and pass a closure value that makes use of the variable x, the compiler rejects the program for not fulfilling these requirements with the following error:

    error[E0277]: `std::rc::Rc<i32>` cannot be sent between threads safely
     --> src/main.rs:8:1
      |
    8 | spawn(move || { x; });
      | ^^^^^ `std::rc::Rc<i32>` cannot be sent between threads safely
      |
      = help: within `[closure@src/main.rs:8:7: 8:21 x:std::rc::Rc<i32>]`, the trait `std::marker::Send` is not implemented for `std::rc::Rc<i32>`
      = note: required because it appears within the type `[closure@src/main.rs:8:7: 8:21 x:std::rc::Rc<i32>]`
    note: required by `spawn`

The Send and Sync traits allow the Rust type system to reason about what data may be shared. By including this information in the type system, thread safety becomes type safety. Instead of relying on documentation, thread safety is part of the compiler’s law.

This allows programmers to be opinionated about what can be shared between threads, and the compiler will enforce those opinions.

While many programming languages provide tools for concurrent programming, preventing data races is a difficult problem. Requiring programmers to reason about complex instruction interleaving and interaction between threads leads to error prone code. While thread safety and memory safety violations share similar consequences, traditional memory safety mitigations like reference counting and garbage collection don’t prevent data races. In addition to statically guaranteeing memory safety, Rust’s ownership model prevents unsafe data modification and sharing across threads, while the type system propagates and enforces thread safety at compile time.
Pikachu finally discovers fearless concurrency with Rust

The post Fearless Security: Thread Safety appeared first on Mozilla Hacks - the Web developer blog.

The Mozilla Blog: Firefox for iOS Amps Up Private Browsing and More

Today we’re rolling out updated features for iPhone and iPad users, including a new layout for menu and settings, persistent Private Browsing tabs and new organization options within the New Tabs feature. This round of updates is the result of requests we received straight from our users, and we’re taking your feedback to make this version of Firefox for iOS work harder and smarter for you.

With this in mind, in the latest update of Firefox for iOS we overhauled both the Settings and Menu options to more closely mirror the desktop application. Now you can access bookmarks, history, Reading List and downloads in the “Library” menu item.

Private Browsing – Keep browsing like nobody’s watching

Private browsing tabs can now live across sessions, meaning, if you open a private browsing tab and then exit the app, Firefox will automatically launch in private browsing the next time you open the app. Keeping your private browsing preferences seamless is just another way we’re making it simple and easy to give you back control of the privacy of your online experience.

Private browsing tabs can now live across sessions

Organize your New Tabs (like a pro)

Today’s release also includes a few different options for New Tabs organization. You can now choose to have new tabs open with your bookmark list, in Firefox Home (with top sites and Pocket stories), with a list of recent history, a custom URL or in a blank page.

More options for New Tabs organization

We’re also making it easier to customize Firefox Home with top sites and Pocket content. All tabs can now be rearranged by dragging a tab into the tab bar or tab tray.

Customize Firefox Home with top sites and Pocket content

Whether it’s your personal data or how you organize your online experience, Firefox continues to bring more privacy and control to you.

To get the latest version of Firefox for iOS, visit the App Store.

The post Firefox for iOS Amps Up Private Browsing and More appeared first on The Mozilla Blog.

Mozilla Gfx Team: WebRender newsletter #40

WebRender is a GPU-based 2D rendering engine for the web, written in Rust, currently powering Mozilla’s research web browser Servo and on its way to becoming Firefox’s rendering engine.

Notable WebRender and Gecko changes

  • Kats made improvements to the continuous integration on Mac.
  • Kvark fixed a crash.
  • Kvark added a way to dump the state of the frame builder for debugging.
  • kvark made transform flattening operate at preserve-3d context boundaries.
  • kvark enabled non-screen-space rasterization of plane-splits.
  • kvark fixed seams between image tiles.
  • Glenn fixed a bug with border-style: double where the border widths are exactly 1 pixel.
  • Glenn made some improvements to pixel snapping.
  • Glenn added some debugging infrastructure for pixel snapping.
  • Glenn tidied up some code and added a few optimizations.
  • Nical fixed a rendering bug with shadows and blurs causing them to flicker in some cases.
  • Nical simplified the code that manages the lifetime of image and blob image handles on the content process.
  • Nical added a test.
  • Sotaro enabled mochitest-chrome with WebRender in the CI.
  • Sotaro improved scrolling smoothness when using direct composition.
  • Sotaro fixed a window creation failure when using WebRender with Wayland.
  • Emilio improved background-clip: text invalidation.

Blocker bugs countdown

Only 0 P2 bugs and 4 P3 bugs left (two of which have fixes up for review)!

Enabling WebRender in Firefox Nightly

The best place to report bugs related to WebRender in Firefox is the Graphics :: WebRender component in bugzilla.

Note that it is possible to log in with a github account.

Open Policy & Advocacy: Mozilla statement on the conclusion of EU copyright directive ‘trialogue’ negotiations

Yesterday the EU institutions concluded ‘trialogue’ negotiations on the EU Copyright directive, a procedural step that makes the final adoption of the directive a near certainty.

Here’s a statement from Raegan MacDonald, Mozilla’s Head of EU Public Policy –

The Copyright agreement gives the green light to new rules that will compel online services to implement blanket upload filters, with an overly complex and limited SME carve out that will be unworkable in practice.  At the same time, lawmakers have forced through a new ancillary copyright for press publishers, a regressive and disproven measure that will undermine access to knowledge and the sharing of information online.

The legal uncertainty that will be generated by these complex rules means that only the largest, most established platforms will be able to fully comply and thrive in such a restricted online environment.

With this development, the EU institutions have squandered the opportunity of a generation to bring European copyright law into the 21st century.  At a time of such concern about web centralisation and the ability of small European companies to compete in the digital marketplace, these new rules will serve to entrench the incumbents.

We recognise the efforts of many Member States and MEPs who laboured to find workable solutions that would have rectified some of the gravest shortcomings in the proposal. Unfortunately the majority of their progressive compromises were rejected.

The file is expected to be adopted officially in a final European Parliament vote in the coming weeks. We’re continuously working with our allies in the Parliament and the broader community to explore any and every opportunity to limit the potential damage of this outcome.

The post Mozilla statement on the conclusion of EU copyright directive ‘trialogue’ negotiations appeared first on Open Policy & Advocacy.

Open Policy & Advocacy: Mozilla Foundation fellow weighs in on flawed EU Terrorist Content regulation

As we’ve noted previously, the EU’s proposed Terrorist Content regulation would seriously undermine internet health in Europe, by forcing companies to aggressively suppress user speech with limited due process and user rights safeguards. Yet equally concerning is the fact that this proposal is likely to achieve little in terms of reducing the actual terrorism threat or the phenomenon of radicalisation in Europe. Here, Mozilla Foundation Tech Policy fellow and community security expert Stefania Koskova* unpacks why, and proposes an alternative approach for EU lawmakers.

With the proposed Terrorist Content regulation, the EU has the opportunity to set a global standard in how to effectively address what is a pressing public policy concern. To be successful, harmful and illegal content policies must carefully and meaningfully balance the objectives of national security, internet-enabled economic growth and human rights. Content policies addressing national security threats should reflect how internet content relates to ‘offline’ harm and should provide sufficient guidance on how to comprehensively and responsibly reduce it in parallel with other interventions. Unfortunately, the Commission’s proposal falls well short in this regard.

Key shortcomings:

  • Flawed definitions: In its current form there is a considerable lack of clarity and specificity in the definition of ‘terrorist content’, which creates unnecessary confusion between ‘terrorist content’ and terrorist offences. Biased application, including through the association of terrorism with certain national or religious minorities and certain ideologies, can lead to serious harm and real-world consequences. This in turn can contribute to further polarisation and radicalisation.
  • Insufficient content assessment: Within the proposal there is no standardisation of the ‘terrorist content’ assessment procedure from a risk perspective, and no standardisation of the evidentiary requirements that inform content removal decisions by government authorities or online services. Member States and hosting service providers are asked to evaluate the terrorist risk associated with specific online content, without clear or precise assessment criteria.
  • Weak harm reduction model: Without a clear understanding of the impact of ‘terrorist content’ on the radicalisation process in specific contexts and circumstances, it seems inadvisable and contrary to the goal of evidence-based policymaking to assume that removal, blocking, or filtering will reduce radicalisation and prevent terrorism. Further, potential adverse effects of removal, blocking, and filtering, such as fueling grievances of those susceptible to terrorist propaganda, are not considered.

As such, the European Commission’s draft proposal in its current form creates additional risks with only vaguely defined benefits to countering radicalisation and preventing terrorism. To ensure the most negative outcomes are avoided, the following amendments to the proposal should be made as a matter of urgency:

  • Improving definition of terrorist content: The definition of ‘terrorist content’ should be clarified such that it depends on illegality and intentionality. This is essential to protect the public interest speech of journalists, human rights defenders, and other witnesses and archivists of terrorist atrocities.
  • Disclosing ‘what counts’ as terrorism through transparency reporting and monitoring: The proposal should ensure that Member States and hosting platforms are obliged to report on how much illegal terrorist content is removed, blocked or filtered under the regulation – broken down by category of terrorism (incl. nationalist-separatist, right-wing, left-wing, etc.) and the extent to which content decision and action was linked to law enforcement investigations. With perceptions of terrorist threat in the EU diverging across countries and across the political spectrum, this can safeguard against intentional or unintentional bias in implementation.
  • Assessing security risks: In addition to being grounded in a legal assessment, content control actions taken by competent authorities and companies should be strategic – i.e. based on an assessment of the content’s danger to public safety and the likelihood that it will contribute to the commission of terrorist acts. This risk assessment should also take into account the likely negative repercussions arising from content removal/blocking/filtering.
  • Focusing on impact: The proposal should require or ensure that all content policy measures are closely coordinated and coincide with the deployment of strategic radicalisation counter-narratives, and broader terrorism prevention and rehabilitation programmes.

The above recommendations address shortcomings in the proposal in the terrorism prevention context. Additionally, however, there remains the contested issue of 60-minute content takedowns and mandated proactive filtering, both of which are serious threats to internet health. There is an opportunity, through the parliamentary procedure, to address these concerns. Constructive feedback, including specific proposals that can significantly improve the current text, has been put forward by EU Parliament Committees, civil society and industry representatives.

The stakes are high. With this proposal, the EU can create a benchmark for how democratic societies should address harmful and illegal online content without compromising their own values. It is imperative that lawmakers take the opportunity.

*Stefania Koskova is a Mozilla Foundation Tech Policy fellow and a counter-radicalisation practitioner. Learn more about her Mozilla Foundation fellowship here.

The post Mozilla Foundation fellow weighs in on flawed EU Terrorist Content regulation appeared first on Open Policy & Advocacy.

The Mozilla Blog: Facebook Answers Mozilla’s Call to Deliver Open Ad API Ahead of EU Election

After calls for increased transparency and accountability from Mozilla and partners in civil society, Facebook announced it would open its Ad Archive API next month. While the details are still limited, this is an important first step to increase transparency of political advertising and help prevent abuse during upcoming elections.

Facebook’s commitment to make the API publicly available could provide researchers, journalists and other organizations the data necessary to build tools that give people a behind the scenes look at how and why political advertisers target them. It is now important that Facebook follows through on these statements and delivers an open API that gives the public the access it deserves.

The decision by Facebook comes after months of engagement by the Mozilla Corporation through industry working groups and government initiatives and most recently, an advocacy campaign led by the Mozilla Foundation.

This week, the Mozilla Foundation was joined by a coalition of technologists, human rights defenders, academics, and journalists demanding Facebook take action and deliver on the commitments made to put users first and deliver increased transparency.

“In the short term, Facebook needs to be vigilant about promoting transparency ahead of and during the EU Parliamentary elections,” said Ashley Boyd, Mozilla’s VP of Advocacy. “Their action — or inaction — can affect elections across more than two dozen countries. In the long term, Facebook needs to sincerely assess the role its technology and policies can play in spreading disinformation and eroding privacy.”

And in January, Mozilla penned a letter to the European Commission underscoring the importance of a publicly available API. Without the data, Mozilla and other organizations are unable to deliver products designed to pull back the curtain on political advertisements.

“Industry cannot ignore its potential to either strengthen or undermine the democratic process,” said Alan Davidson, Mozilla’s VP of Global Policy, Trust and Security. “Transparency alone won’t solve misinformation problems or election hacking, but it’s a critical first step. With real transparency, we can give people more accurate information and powerful tools to make informed decisions in their lives.”

This is not the first time Mozilla has called on the industry to prioritize user transparency and choice. In the wake of the Cambridge Analytica news, the Mozilla Foundation rallied tens of thousands of internet users to hold Facebook accountable for its post-scandal promises. And Mozilla Corporation took action with a pause on advertising our products on Facebook and provided users with Facebook Container for Firefox, a product that keeps Facebook from tracking people around the web when they aren’t on the platform.

While the announcement from Facebook indicates a move towards transparency, it is critical the company follows through and delivers not only on this commitment but the other promises also made to European lawmakers and voters.

The post Facebook Answers Mozilla’s Call to Deliver Open Ad API Ahead of EU Election appeared first on The Mozilla Blog.

Mozilla VR Blog: Jingle Smash: Choosing a Physics Engine

Jingle Smash: Choosing a Physics Engine

This is part 2 of my series on how I built Jingle Smash, a block smashing WebVR game.

The key to a physics-based game like Jingle Smash is of course the physics engine. In the Javascript world there are many to choose from. My requirements were full 3D collision simulation, working with ThreeJS, and being fairly easy to use. This narrowed it down to CannonJS, AmmoJS, and Oimo.js. I chose the CannonJS engine because AmmoJS is a compiled port of a C++ library, which I worried would be harder to debug, and Oimo appeared to be abandoned (though there was a recent commit, so maybe not?).

CannonJS

CannonJS is not well documented in terms of tutorials, but it does have quite a bit of demo code and I was able to figure it out. The basic usage is quite simple. You create a Body object for everything in your scene that you want to simulate and add these to a World object. On each frame you call world.step(), then read back the positions and orientations of the calculated bodies and apply them to the ThreeJS objects on screen.
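
A minimal version of that loop looks something like this (a sketch rather than the game’s actual code; blocks stands in for whatever list holds your body/mesh pairs):

const world = new CANNON.World()
const fixedTimeStep = 1/60 // seconds

function updatePhysics(deltaSeconds) {
    //advance the simulation, letting Cannon take up to 3 sub-steps
    world.step(fixedTimeStep, deltaSeconds, 3)

    //copy the simulated transforms back onto the ThreeJS meshes
    blocks.forEach(block => {
        block.obj.position.copy(block.body.position)
        block.obj.quaternion.copy(block.body.quaternion)
    })
}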

While working on the game I started building an editor for positioning blocks, changing their physical properties, testing the level, and resetting them. Combined with physics this means a whole lot of syncing data back and forth between the Cannon and ThreeJS sides. In the end I created a Block abstraction which holds the single source of truth and keeps the other objects updated. The blocks are managed entirely from within the BlockService.js class so that all of this stuff would be completely isolated from the game graphics and UI.

Physics Bodies

When a Block is created or modified it regenerates both the ThreeJS objects and the Cannon objects. Since ThreeJS is documented everywhere I'll only show the Cannon side.

let type = CANNON.Body.DYNAMIC
if(this.physicsType === BLOCK_TYPES.WALL) {
    type = CANNON.Body.KINEMATIC
}

this.body = new CANNON.Body({
    mass: 1,//kg
    type: type,
    position: new CANNON.Vec3(this.position.x,this.position.y,this.position.z),
    shape: new CANNON.Box(new CANNON.Vec3(this.width/2,this.height/2,this.depth/2)),
    material: wallMaterial,
})
this.body.quaternion.setFromEuler(this.rotation.x,this.rotation.y,this.rotation.z,'XYZ')
this.body.jtype = this.physicsType
this.body.userData = {}
this.body.userData.block = this
world.addBody(this.body)

Each body has a mass, type, position, quaternion, and shape.

For mass I’ve always used 1kg. This works well enough but if I ever update the game in the future I’ll make the mass configurable for each block. This would enable more variety in the levels.

The type is either dynamic or kinematic. Dynamic means the body can move and tumble in all directions. A kinematic body is one that does not move but other blocks can hit and bounce against it.

The shape is the actual shape of the body. For blocks this is a box. For the ball that you throw I used a sphere. It is also possible to create interactive meshes but I didn’t use them for this game.

An important note about boxes: in ThreeJS the BoxGeometry constructor takes the full width, height, and depth. In CannonJS you use the extent from the center, which is half of the full width, height, and depth. I didn’t realize this when I started, only to discover my cubes wouldn’t fall all the way to the ground. :)
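
In other words, for the same visual box the two constructors take different numbers (a quick comparison, assuming width, height and depth variables):

//ThreeJS wants the full dimensions
const geometry = new THREE.BoxGeometry(width, height, depth)

//CannonJS wants half extents, measured from the center of the box
const shape = new CANNON.Box(new CANNON.Vec3(width/2, height/2, depth/2))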

The position and quaternion (orientation) properties use the same values in the same order as ThreeJS. The material refers to how that block will bounce against others. In my game I use only two materials: wall and ball. For each pair of materials you will create a contact material which defines the friction and restitution (bounciness) to use when that particular pair collides.

const wallMaterial = new CANNON.Material()
// …
const ballMaterial = new CANNON.Material()
// …
world.addContactMaterial(new CANNON.ContactMaterial(
    wallMaterial,ballMaterial,
    {
        friction:this.wallFriction,
        restitution: this.wallRestitution
    }
))

Gravity

All of these bodies are added to a World object with a hard coded gravity property set to match Earth gravity (9.8m/s^2), though individual levels may override this. The last three levels of the current game have gravity set to 0 for a different play experience.

const world = new CANNON.World();
world.gravity.set(0, -9.82, 0);

Once the physics engine is set up and simulating the objects we need to update the on screen graphics after every world step. This is done by just copying the properties out of the body and back to the ThreeJS object.

this.obj.position.copy(this.body.position)
this.obj.quaternion.copy(this.body.quaternion)

Collision Detection

There is one more thing we need: collisions. The engine handles colliding all of the boxes and making them fall over, but the goal of the game is that the player must knock over all of the crystal boxes to complete the level. This means I have to define what knock over means. At first I just checked if a block had moved from its original orientation, but this proved tricky. Sometimes a box would be very gently knocked and tip slightly, triggering a ‘knock over’ event. Other times you could smash into a block at high speed but it wouldn’t tip over because there was a wall behind it.

Instead I added a collision handler so that my code would be called whenever two objects collide. The collision event includes a method to get the velocity at the impact. This allows me to ignore any collisions that aren’t strong enough.

You can see this in player.html

function handleCollision(e) {
    if(game.blockService.ignore_collisions) return

    //ignore tiny collisions
    if(Math.abs(e.contact.getImpactVelocityAlongNormal()) < 1.0) return

    //when ball hits moving block,
    if(e.body.jtype === BLOCK_TYPES.BALL) {
        if( e.target.jtype === BLOCK_TYPES.WALL) {
            game.audioService.play('click')
        }

        if (e.target.jtype === BLOCK_TYPES.BLOCK) {
            //hit a block, just make the thunk sound
            game.audioService.play('click')
        }
    }

    //if crystal hits anything and the impact was strong enough
    if(e.body.jtype === BLOCK_TYPES.CRYSTAL || e.target.jtype === BLOCK_TYPES.CRYSTAL) {
        if(Math.abs(e.contact.getImpactVelocityAlongNormal()) >= 2.0) {
            return destroyCrystal(e.target)
        }
    }
    // console.log(`collision: body ${e.body.jtype} target ${e.target.jtype}`)
}

The collision event handler was also the perfect place to add sound effects for when objects hit each other. Since the event includes which objects were involved I can use different sounds for different objects, like the crashing glass sound for the crystal blocks.

Firing the ball is similar to creating the block bodies except that it needs an initial velocity based on how much force the player slingshotted the ball with. If you don’t specify a velocity to the Body constructor then it will use a default of 0.

fireBall(pos, dir, strength) {
    this.group.worldToLocal(pos)
    dir.normalize()
    dir.multiplyScalar(strength*30)
    const ball = this.generateBallMesh(this.ballRadius,this.ballType)
    ball.castShadow = true
    ball.position.copy(pos)
    const sphereBody = new CANNON.Body({
        mass: this.ballMass,
        shape: new CANNON.Sphere(this.ballRadius),
        position: new CANNON.Vec3(pos.x, pos.y, pos.z),
        velocity: new CANNON.Vec3(dir.x,dir.y,dir.z),
        material: ballMaterial,
    })
    sphereBody.jtype = BLOCK_TYPES.BALL
    ball.userData.body = sphereBody
    this.addBall(ball)
    return ball
}

Next Steps

Overall CannonJS worked pretty well. I would like it to be faster as it costs me about 10fps to run, but other things in the game had a bigger impact on performance. If I ever revisit this game I will try to move the physics calculations to a worker thread, as well as redo the syncing code. I’m sure there is a better way to sync objects quickly. Perhaps JS Proxies would help. I would also move the graphics & styling code outside, so that the BlockService can really focus just on physics.

While there are some more powerful solutions coming with WASM, today I definitely recommend using CannonJS for the physics in your WebVR games. The ease of working with the API (despite it being under-documented) meant I could spend more time on the game and less time worrying about math.

The Mozilla Blog: Retailers: All We Want for Valentine’s Day is Basic Security

Mozilla and our allies are asking four major retailers to adopt our Minimum Security Guidelines

Today, Mozilla, Consumers International, the Internet Society, and eight other organizations are urging Amazon, Target, Walmart, and Best Buy to stop selling insecure connected devices.

Why? As the Internet of Things expands, a troubling pattern is emerging:

[1] Company X makes a “smart” product — like connected stuffed animals — without proper privacy or security features

[2] Major retailers sell that insecure product widely

[3] The product gets hacked, and consumers are the ultimate losers

This has been the case with smart dolls, webcams, doorbells, and countless other devices. And the consequences can be life threatening: “Internet-connected locks, speakers, thermostats, lights and cameras that have been marketed as the newest conveniences are now also being used as a means for harassment, monitoring, revenge and control,” the New York Times reported last year. Compounding this: It is estimated that by 2020, 10 billion IoT products will be active.

Last year, in an effort to make connected devices on the market safer for consumers, Mozilla, the Internet Society, and Consumers International published our Minimum Security Guidelines: the five basic features we believe all connected devices should have. They include encrypted communications; automatic updates; strong password requirements; vulnerability management; and an accessible privacy policy.

Now, we’re calling on four major retailers to publicly endorse these guidelines, and also commit to vetting all connected products they sell against these guidelines. Mozilla, Consumers International, and the Internet Society have sent a sign-on letter to Amazon, Target, Walmart, and Best Buy.

The letter is also signed by 18 Million Rising, Center for Democracy and Technology, ColorOfChange, Consumer Federation of America, Common Sense Media, Hollaback, Open Media & Information Companies Initiative, and Story of Stuff.

Currently, there is no shortage of insecure products on shelves. In our annual holiday buyers guide, which ranks popular devices’ privacy and security features, about half the products failed to meet our Minimum Security Guidelines. And in the Valentine’s Day buyers guide we released last week, nine out of 18 products failed.

Why are we targeting retailers, and not the companies themselves? Mozilla can and does speak with the companies behind these devices. But by talking with retailers, we believe we can have an outsized impact. Retailers don’t want their brands associated with insecure goods. And if retailers drop a company’s product, that company will be compelled to improve its product’s privacy and security features.

We know this approach works. Last year, Mozilla called on Target and Walmart to stop selling CloudPets, an easily-hackable smart toy. Target and Walmart listened, and stopped selling the toys.

In the short-term, we can get the most insecure devices off shelves. In the long-term, we can fuel a movement for a more secure, privacy-centric Internet of Things.

Read the full letter, here or below.


Dear Target, Walmart, Best Buy and Amazon, 

The advent of new connected consumer products offers many benefits. However, as you are aware, there are also serious concerns regarding standards of privacy and security with these products. These require urgent attention if we are to maintain consumer trust in this market.

It is estimated that by 2020, 10 billion IoT products will be active. The majority of these will be in the hands of consumers. Given the enormous growth of this space, and because so many of these products are entrusted with private information and conversations, it is incredibly important that we all work together to ensure that internet-enabled devices enhance consumers’ trust.

CloudPets illustrated the problem; however, we continue to see connected devices that fail to meet basic privacy and security thresholds. We are especially concerned about how these issues impact children, in the case of connected toys and other devices that children interact with. That’s why we’re asking you to publicly endorse these minimum security and privacy guidelines, and commit publicly to use them to vet any products your company sells to consumers. While many products can and should be expected to meet a high set of privacy and security standards, these minimum requirements are a strong start that every reputable consumer company must be expected to meet. These minimum guidelines require all IoT devices to have:

1) Encrypted communications

The product must use encryption for all of its network communications functions and capabilities. This ensures that communications cannot be eavesdropped on or modified in transit.

2) Security updates

The product must support automatic updates for a reasonable period after sale, and they must be enabled by default. This ensures that when a vulnerability is known, the vendor can make security updates available for consumers, which are verified (using some form of cryptography) and then installed seamlessly. Updates must not make the product unavailable for an extended period.

3) Strong passwords

If the product uses passwords for remote authentication, it must require that strong passwords are used, including having password strength requirements. Any non-unique default passwords must also be reset as part of the device’s initial setup. This helps protect the device from vulnerability to guessable password attacks, which could result in device compromise.

4) Vulnerability management

The vendor must have a system in place to manage vulnerabilities in the product. This must also include a point of contact for reporting vulnerabilities and a vulnerability handling process internally to fix them once reported. This ensures that vendors are actively managing vulnerabilities throughout the product’s lifecycle.

5) Privacy practices

The product must have a privacy policy that is easily accessible, written in language that is easily understood and appropriate for the person using the device or service at the point of sale. At a minimum, users should be notified about substantive changes to the policy. If data is being collected, transmitted or shared for marketing purposes, that should be clear to users and, in line with the EU’s General Data Protection Regulation (GDPR), there should be a way to opt-out of such practices. Users should also have a way to delete their data and account. Additionally, like in GDPR, this should include a policy setting standard retention periods wherever possible.

We’ve seen headline after headline about privacy and security failings in the IoT space. And it is often the same mistakes that have led to people’s private moments, conversations, and information being compromised. Given the value and trust that consumers place in your company, you have a uniquely important role in addressing this problem and helping to build a more secure, connected future. Consumers can and should be confident that, when they buy a device from you, that device will not compromise their privacy and security. Signing on to these minimum guidelines is the first step to turn the tide and build trust in this space.

Yours,

Mozilla, Internet Society, Consumers International, ColorOfChange, Open Media & Information Companies Initiative, Common Sense Media, Story of Stuff, Center for Democracy and Technology, Consumer Federation of America, 18 Million Rising, Hollaback

The post Retailers: All We Want for Valentine’s Day is Basic Security appeared first on The Mozilla Blog.

hacks.mozilla.orgAnyone can create a virtual reality experience with this new WebVR starter kit from Mozilla and Glitch

Here at Mozilla, we are big fans of Glitch. In early 2017 we made the decision to host our A-Frame content on their platform. The decision was easy. Glitch makes it easy to explore and remix live code examples for WebVR.

We also love the people behind Glitch. They have created a culture and a community that is kind, encouraging, and champions creativity. We share their vision for a web that is creative, personal, and human. The ability to deliver immersive experiences through the browser opens a whole new avenue for creativity. It allows us to move beyond screens and keyboards. It is exciting, and new, and sometimes a bit weird (but in a good way).

Building a virtual reality experience may seem daunting, but it really isn’t. WebVR and frameworks like A-Frame make it really easy to get started. This is why we worked with Glitch to create a WebVR starter kit. It is a free, 5-part video course with interactive code examples that will teach you the fundamentals of WebVR using A-Frame. Our hope is that this starter kit will encourage anyone who has been on the fence about creating virtual reality experiences to dive in and get started.

Check out part one of the five-part series below. If you want more, I’d encourage you to check out the full starter kit here, or use the link at the bottom of this post.

 

In the Glitch viewer embedded below, you can see how to make a WebVR planetarium in just a few easy-to-follow steps. You learn interactively (and painlessly) by editing and remixing the working code in the viewer:

Ready to keep going? Click below to view the full series on Glitch.



The post Anyone can create a virtual reality experience with this new WebVR starter kit from Mozilla and Glitch appeared first on Mozilla Hacks - the Web developer blog.

The Mozilla BlogOpen Letter: Facebook, Do Your Part Against Disinformation

Mozilla, Access Now, Reporters Without Borders, and 35 other organizations have published an open letter to Facebook.

Our ask: make good on your promises to provide more transparency around political advertising ahead of the 2019 EU Parliamentary Elections

 

Is Facebook making a sincere effort to be transparent about the content on its platform? Or, is the social media platform neglecting its promises?

Facebook promised European lawmakers and users it would increase the transparency of political advertising on the platform to prevent abuse during the elections. But in the very same breath, they took measures to block access to transparency tools that let users see how they are being targeted.

With the 2019 EU Parliamentary Elections on the horizon, it is vital that Facebook take action to address this problem. So today, Mozilla and 37 other organizations — including Access Now and Reporters Without Borders — are publishing an open letter to Facebook.

“We are writing you today as a group of technologists, human rights defenders, academics, journalists and Facebook users who are deeply concerned about the validity of Facebook’s promises to protect European users from targeted disinformation campaigns during the European Parliamentary elections,” the letter reads.

“Promises and press statements aren’t enough; instead, we’ll be watching for real action over the coming months and will be exploring ways to hold Facebook accountable if that action isn’t sufficient,” the letter continues.

Individuals may sign their name to the letter, as well. Sign here.

Read the full letter, here or below. The letter will also appear in the Thursday print edition of POLITICO Europe.

Lire cette lettre en français    

Diesen Brief auf Deutsch lesen

The letter urges Facebook to make good on its promise to EU lawmakers. Last year, Facebook signed the EU’s Code of Practice on disinformation and pledged to increase transparency around political advertising. But since then, Facebook has made political advertising more opaque, not more transparent. The company recently blocked access to third-party transparency tools.

Specifically, our open letter urges Facebook to:

  • Roll out a functional, open Ad Archive API that enables advanced research and development of tools that analyse political ads served to Facebook users in the EU
  • Ensure that all political advertisements are clearly distinguished from other content and are accompanied by key targeting criteria such as sponsor identity and amount spent on the platform in all EU countries
  • Cease all harassment of good faith researchers who are building tools to provide greater transparency into the advertising on Facebook’s platform.

To safeguard the integrity of the EU Parliament elections, Facebook must be part of the solution. Users and voters across the EU have the right to know who is paying to promote the political ads they encounter online; if they are being targeted; and why they are being targeted.


The full letter

Dear Facebook:

We are writing you today as a group of technologists, human rights defenders, academics, journalists and Facebook users who are deeply concerned about the validity of Facebook’s promises to protect European users from targeted disinformation campaigns during the European Parliamentary elections. You have promised European lawmakers and users that you will increase the transparency of political advertising on the platform to prevent abuse during the elections. But in the very same breath, you took measures to block access to transparency tools that let your users see how they are being targeted.

In the company’s recent Wall Street Journal op-ed, Mark Zuckerberg wrote that the most important principles around data are transparency, choice and control. By restricting access to advertising transparency tools available to Facebook users, you are undermining transparency, eliminating the choice of your users to install tools that help them analyse political ads, and wielding control over good faith researchers who try to review data on the platform. Your alternative to these third party tools provides simple keyword search functionality and does not provide the level of data access necessary for meaningful transparency.

Actions speak louder than words. That’s why you must take action to meaningfully deliver on the commitments made to the EU institutions, notably the increased transparency that you’ve promised. Promises and press statements aren’t enough; instead, we need to see real action over the coming months, and we will be exploring ways to hold Facebook accountable if that action isn’t sufficient.

Specifically, we ask that you implement the following measures by 1 April 2019 to give developers sufficient lead time to create transparency tools in advance of the elections:

  • Roll out a functional, open Ad Archive API that enables advanced research and development of tools that analyse political ads served to Facebook users in the EU
  • Ensure that all political advertisements are clearly distinguished from other content and are accompanied by key targeting criteria such as sponsor identity and amount spent on the platform in all EU countries
  • Cease harassment of good faith researchers who are building tools to provide greater transparency into the advertising on your platform

We believe that Facebook and other platforms can be positive forces that enable democracy, but this vision can only be realized through true transparency and trust. Transparency cannot just be on the terms with which the world’s largest, most powerful tech companies are most comfortable.

We look forward to the swift and complete implementation of these transparency measures that you have promised to your users.

Sincerely,

Mozilla Foundation

and also signed by:

Access Now
AlgorithmWatch
All Out
Alto Data Analytics
ARTICLE 19
Aufstehn
Bits of Freedom
Bulgarian Helsinki Committee
BUND – Friends of the Earth Germany
Campact
Campax
Center for Democracy and Technology
CIPPIC
Civil Liberties Union for Europe
Civil Rights Defenders
Declic
doteveryone
Estonian Human Rights Center
Free Press Unlimited
GONG Croatia
Greenpeace
Italian Coalition for Civil Liberties and Rights (CILD)
Mobilisation Lab
Open Data Institute
Open Knowledge International
OpenMedia
Privacy International
PROVIDUS
Reporters Without Borders
Skiftet
SumOfUs
The Fourth Group
Transparent Referendum Initiative
Uplift
Urgent Action Fund for Women’s Human Rights
WhoTargetsMe
Wikimedia UK


Note: This blog post has been updated to reflect additional letter signers.

The post Open Letter: Facebook, Do Your Part Against Disinformation appeared first on The Mozilla Blog.

Open Policy & AdvocacyKenya Government mandates DNA-linked national ID, without data protection law

Last month, the Kenya Parliament passed a seriously concerning amendment to the country’s national ID law, making Kenya home to the most privacy-invasive national ID system in the world. The rebranded National Integrated Identity Management System (NIIMS) now requires all Kenyans, immigrants, and refugees to turn over their DNA, GPS coordinates of their residential address, retina scans, iris pattern, voice waves, and earlobe geometry before being issued critical identification documents. NIIMS will consolidate information contained in other government agency databases and generate a unique identification number known as Huduma Namba.

It is hard to see how this system comports with the right to privacy articulated in Article 31 of the Kenyan Constitution. It is deeply troubling that these amendments passed without public debate, and that they were approved even while a data protection bill that would designate DNA and biometrics as sensitive data is still pending.

Before these amendments, in order to issue the National ID Card (ID), the government only required name, date and place of birth, place of residence, and postal address. The ID card is a critical document that impacts everyday life: without it, an individual cannot vote, purchase property, access higher education, obtain employment, or access credit or public health services, among other fundamental rights.

Mozilla strongly believes that no digital ID system should be implemented without strong privacy and data protection legislation. The proposed Data Protection Bill of 2018, which Parliament is likely to consider next month, is a strong and thorough framework that contains provisions relating to data minimization as well as collection and purpose limitation. If NIIMS is implemented, it will be in conflict with these provisions, and more importantly in conflict with Article 31 of the Constitution, which specifically protects the right to privacy.

Proponents of NIIMS claim that the system provides a number of benefits, such as accurate delivery of government services. These arguments also seem to conflate legal and digital identity. Legal ID, used to certify one’s identity through basic data about one’s personhood (such as name and the date and place of birth), is a commendable goal. It is United Nations Sustainable Development Goal 16.9, which aims “to provide legal identity for all, including birth registration by 2030”.  However, it is important to remember this objective can be met in several ways. “Digital ID” systems, and especially those that involve sensitive biometrics or DNA, are not a necessary means of verifying identity, and in practice raise significant privacy and security concerns. The choice of whether to opt for a digital ID, let alone a biometric ID, should therefore be closely scrutinized by governments in light of these risks, rather than uncritically accepted as beneficial.

  • Security Concerns: The centralized nature of NIIMS creates massive security vulnerabilities. It could become a honeypot for malicious actors and identity thieves who can exploit other identifying information linked to stolen biometric data. The amendment is unclear on how the government will establish and institute strong security measures required for the protection of such a sensitive database. If there’s a breach, it’s not as if your DNA or retina can be reset like a password or token.
  • Surveillance Concerns: By centralizing a tremendous amount of sensitive data in a government database, NIIMS creates an opportunity for mass surveillance by the State. Not only is the collection of biometrics incredibly invasive, but gathering this data combined with transaction logs of where ID is used could substantially reduce anonymity. This is all the more worrying considering Kenya’s history of extralegal surveillance and intelligence sharing.
  • Ethnic Discrimination Concerns: The collection of DNA is particularly concerning as this information can be used to identify an individual’s ethnic identity. Given Kenya’s history of politicization of ethnic identity, collecting this data in a centralized database like NIIMS could reproduce and exacerbate patterns of discrimination.

The process was not constitutional

Kenya’s constitution requires public input before any new law can be adopted. No public discussions were conducted for this amendment. It was offered for parliamentary debate under “Miscellaneous” amendments, which exempted it from procedures and scrutiny that would have required introduction as a substantive bill and corresponding public debate. The Kenyan government must not implement this system without sufficient public debate and meaningful engagement to determine how such a system should be implemented if at all.

The proposed law does not provide people with the opportunity to opt in or out of giving their sensitive and precise data. The Constitution requires that all Kenyans be granted identification. However, if an individual were to refuse to turn over their DNA or other sensitive information to the State, as they should have the right to do, they could risk not being issued their identity or citizenship documents. Such a denial would contravene Articles 12, 13, and 14 of the Constitution.

Opting out of this system should not be used to discriminate or exclude any individual from accessing essential public services and exercising their fundamental rights.

Individuals must be in full control of their digital identities with the right to object to processing and use and withdraw consent. These aspects of control and choice are essential to empowering individuals in the deployment of their digital identities. Therefore policy and technical decisions must take into account systems that allow individuals to identify themselves rather than the system identifying them.

Mozilla urges the government of Kenya to suspend the implementation of NIIMS and we hope Kenyan members of parliament will act swiftly to pass the Data Protection Bill of 2018.

The post Kenya Government mandates DNA-linked national ID, without data protection law appeared first on Open Policy & Advocacy.

Mozilla VR BlogImmersive Media Content Creation Guide

Firefox Reality is ready for your panoramic images and videos, in both 2D and 3D. In this guide you will find advice for creating and formatting your content to best display on the immersive web in Firefox Reality.

Images

The web is a great way to share immersive images, either as standalone photos or as part of an interactive tour. Most browsers can display immersive (360°) images but need a little help. Generally these images are regular JPGs or PNGs that have been taken with a 180° or 360° camera. Depending on the exact format you may need different software to display it in a browser. You can host the images themselves on your own server or use one of the many photo tour websites listed below.

Equirectangular Images

360 cameras usually take photos in equirectangular format, a projection with a 2-to-1 aspect ratio. Here are some examples on Flickr.

To display one of these on the web in VR you will need an image viewer library. Here are some examples:

Spherical Images and 3D Images

Some 360 cameras save as spherical projection, which generally looks like one or two circles. Generally these should be converted to equirectangular with the tools that came with your camera. 3D images from 180 cameras will generally be two images side by side or one above the other. Again most camera makers provide tools to prepare these for the web. Look at the documentation for your camera.

Photo Tours

One of the best ways to use immersive images on the web is to build an interactive tour with them. There are many excellent web-based tools for building 360 tours. Here are just a few of them:

Video

360 and 3D video is much like regular video. It is generally encoded with the h264 codec and stored inside an mp4 container. However, 360 and 3D video files are very large, so you generally do not want to host them on your own web server. Instead you can host them with a video provider like YouTube or Vimeo. They each have their own instructions for how to process and upload videos.

If you choose to host the video file yourself on a standard web server, you will need to use a video viewer library built with a VR framework like A-Frame or ThreeJS.
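
To give a rough idea (this is only a sketch, not a full player, and the file name is made up), the core of a ThreeJS-based viewer is a video texture mapped onto the inside of a large sphere:

// assumes a ThreeJS scene, camera, and renderer already exist
const video = document.createElement('video')
video.src = 'my-360-video.mp4'  // a hypothetical file on your own server
video.loop = true
video.muted = true              // muted playback can start without user interaction
video.play()

const texture = new THREE.VideoTexture(video)
const geometry = new THREE.SphereGeometry(500, 60, 40)
geometry.scale(-1, 1, 1)        // flip the sphere inside out so the video faces the camera
const material = new THREE.MeshBasicMaterial({ map: texture })
scene.add(new THREE.Mesh(geometry, material))

A real viewer library also handles playback controls, stereo layouts for 3D video, and fallbacks, which is why using an existing A-Frame or ThreeJS based component is usually the easier route.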

3D videos

3D video is generally just two 180 or 360 videos stuck together. This is usually called ‘over and under’ format, meaning each video frame is a square containing two equirectangular images: the top half is for the left eye and the bottom half is for the right eye.

Compression Advice

Use as high a quality as you can get away with and let your video provider convert it as needed. If you are doing the conversion yourself, go for 4K in h264 with the highest bitrate your camera supports.

Devices for capturing 360 videos and images

You will get the best results from a camera built for 360, 180, or 3D capture. Amazon has many fine products to choose from. They should all come with instructions and software for capturing and converting both photos and video.

Members of the Mozilla Mixed Reality team have personally used:

Though you will get better results from a dedicated camera, it is also possible to capture 360 images from custom smartphone camera apps such as FOV, Cardboard Camera and Facebook. See these tutorials on 360 iOS apps and Android apps for more information.

Sharing your Immersive Content

You can share your content on your own website, but if that won’t work for you then consider one of the many 360 content hosting sites like these:

Get Featured

Once you have your immersive content on the web, please let us know about it. We might be able to feature it in the Firefox Reality home page, getting your content in front of many viewers right inside VR.

QMOFirefox 66 Beta 8 Testday, February 15th

Hello Mozillians,

We are happy to let you know that on Friday, February 15th, we are organizing the Firefox 66 Beta 8 Testday. We’ll be focusing our testing on Storage Access API/Cookie Restrictions.

Check out the detailed instructions via this etherpad.

No previous testing experience is required, so feel free to join us on the #qa IRC channel, where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better!

See you on Friday!

The Mozilla BlogMozilla Heads to Capitol Hill to Defend Net Neutrality

Today Denelle Dixon, Mozilla COO, had the honor of testifying on behalf of Mozilla before a packed United States House of Representatives Energy & Commerce Telecommunications Subcommittee in support of our ongoing fight for net neutrality. It was clear: net neutrality principles are broadly embraced, even in partisan Washington.

Dixon in front of the United States House of Representatives Energy & Commerce Telecommunications Subcommittee

Our work to restore net neutrality is driven by our mission to build a better, healthier internet that puts users first. And we believe that net neutrality is fundamental to preserving an open internet that creates room for new businesses and new ideas to emerge and flourish, and where internet users can choose freely the companies, products, and services that put their interests first.

We are committed to restoring the protections users deserve and will continue to go wherever the fight for net neutrality takes us.

For more, check out the replay of the hearing or read Denelle’s prepared written testimony to the subcommittee.

The post Mozilla Heads to Capitol Hill to Defend Net Neutrality appeared first on The Mozilla Blog.

hacks.mozilla.orgRefactoring MDN macros with async, await, and Object.freeze()

A frozen soap bubble

In March of last year, the MDN Engineering team began the experiment of publishing a monthly changelog on Mozilla Hacks. After nine months of the changelog format, we’ve decided it’s time to try something that we hope will be of interest to the web development community more broadly, and more fun for us to write. These posts may not be monthly, and they won’t contain the kind of granular detail that you would expect from a changelog. They will cover some of the more interesting engineering work we do to manage and grow the MDN Web Docs site. And if you want to know exactly what has changed and who has contributed to MDN, you can always check the repos on GitHub.

In January, we landed a major refactoring of the KumaScript codebase and that is going to be the topic of this post because the work included some techniques of interest to JavaScript programmers.

Modern JavaScript

One of the pleasures of undertaking a big refactor like this is the opportunity to modernize the codebase. JavaScript has matured so much since KumaScript was first written, and I was able to take advantage of this, using let and const, classes, arrow functions, for...of loops, the spread (…) operator, and destructuring assignment in the refactored code. Because KumaScript runs as a Node-based server, I didn’t have to worry about browser compatibility or transpilation: I was free (like a kid in a candy store!) to use all of the latest JavaScript features supported by Node 10.

KumaScript and macros

Updating to modern JavaScript was a lot of fun, but it wasn’t reason enough to justify the time spent on the refactor. To understand why my team allowed me to work on this project, you need to understand what KumaScript does and how it works. So bear with me while I explain this context, and then we’ll get back to the most interesting parts of the refactor.

First, you should know that Kuma is the Python-based wiki that powers MDN, and KumaScript is a server that renders macros in MDN documents. If you look at the raw form of an MDN document (such as the HTML <body> element) you’ll see lines like this:

It must be the second element of an {{HTMLElement("html")}} element.

The content within the double curly braces is a macro invocation. In this case, the macro is defined to render a cross-reference link to the MDN documentation for the html element. Using macros like this keeps our links and angle-bracket formatting consistent across the site and makes things simpler for writers.

MDN has been using macros like this since before the Kuma server existed. Before Kuma, we used a commercial wiki product which allowed macros to be defined in a language they called DekiScript. DekiScript was a JavaScript-based templating language with a special API for interacting with the wiki. So when we moved to the Kuma server, our documents were full of macros defined in DekiScript, and we needed to implement our own compatible version, which we called KumaScript.

Since our macros were defined using JavaScript, we couldn’t implement them directly in our Python-based Kuma server, so KumaScript became a separate service, written in Node. This was 7 years ago in early 2012, when Node itself was only on version 0.6. Fortunately, a JavaScript-based templating system known as EJS already existed at that time, so the basic tools for creating KumaScript were all in place.

But there was a catch: some of our macros needed to make HTTP requests to fetch data they needed. Consider the HTMLElement macro shown above for instance. That macro renders a link to the MDN documentation for a specified HTML tag. But, it also includes a tooltip (via the title attribute) on the link that includes a quick summary of the element:

A rendered link to documentation for an HTML element, displaying a tooltip containing a summary of the linked documentation.

That summary has to come from the document being linked to. This means that the implementation of the KumaScript macro needs to fetch the page it is linking to in order to extract some of its content. Furthermore, macros like this are written by technical writers, not software engineers, and so the decision was made (I assume by whoever designed the DekiScript macro system) that things like HTTP fetches would be done with blocking functions that returned synchronously, so that technical writers would not have to deal with nested callbacks.

This was a good design decision, but it made things tricky for KumaScript. Node does not naturally support blocking network operations, and even if it did, the KumaScript server could not just stop responding to incoming requests while it fetched documents for pending requests. The upshot was that KumaScript used the node-fibers binary extension to Node in order to define methods that blocked while network requests were pending. And in addition, KumaScript adopted the node-hirelings library to manage a pool of child processes. (It was written by the original author of KumaScript for this purpose). This enabled the KumaScript server to continue to handle incoming requests in parallel because it could farm out the possibly-blocking macro rendering calls to a pool of hireling child processes.

Async and await

This fibers+hirelings solution rendered MDN macros for 7 years, but by 2018 it had become obsolete. The original design decision that macro authors should not have to understand asynchronous programming with callbacks (or Promises) is still a good decision. But when Node 8 added support for the new async and await keywords, the fibers extension and hirelings library were no longer necessary.

You can read about async functions and await expressions on MDN, but the gist is this:

  • If you declare a function async, you are indicating that it returns a Promise. And if you return a value that is not a Promise, that value will be wrapped in a resolved Promise before it is returned.
  • The await operator makes asynchronous Promises appear to behave synchronously. It allows you to write asynchronous code that is as easy to read and reason about as synchronous code.

As an example, consider this line of code:

let response = await fetch(url);

In web browsers, the fetch() function starts an HTTP request and returns a Promise object that will resolve to a response object once the HTTP response begins to arrive from the server. Without await, you’d have to call the .then() method of the returned Promise, and pass a callback function to receive the response object. But the magic of await lets us pretend that fetch() actually blocks until the HTTP response is received. There is only one catch:

  • You can only use await within functions that are themselves declared async. Meantime, await doesn’t actually make anything block: the underlying operation is still fundamentally asynchronous, and even if we pretend that it is not, we can only do that within some larger asynchronous operation.

What this all means is that the design goal of protecting KumaScript macro authors from the complexity of callbacks can now be done with Promises and the await keyword. And this is the insight with which I undertook our KumaScript refactor.

As I mentioned above, each of our KumaScript macros is implemented as an EJS template. The EJS library compiles templates to JavaScript functions. And to my delight, the latest version of the library has already been updated with an option to compile templates to async functions, which means that await is now supported in EJS.
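
To make this concrete, here is a simplified sketch (not the actual KumaScript wiring) of compiling a template with EJS’s async option; the wiki object below is a stand-in for the real macro API:

const ejs = require("ejs");

// a stand-in for the macro API: getPage() now returns a Promise
const wiki = {
    getPage: async (slug) => ({ title: "Summary of " + slug }),
};

// with async: true, the compiled template is an async function,
// so the template body is allowed to use await
const template = ejs.compile(
    '<a href="/docs/<%= slug %>" title="<%= (await wiki.getPage(slug)).title %>"><%= slug %></a>',
    { async: true }
);

template({ wiki, slug: "Web/HTML/Element/html" }).then((html) => console.log(html));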

With this new library in place, the refactor was relatively simple. I had to find all the blocking functions available to our macros and convert them to use Promises instead of the node-fibers extension. Then, I was able to do a search-and-replace on our macro files to insert the await keyword before all invocations of these functions. Some of our more complicated macros define their own internal functions, and when those internal functions used await, I had to take the additional step of changing those functions to be async. I did get tripped up by one piece of syntax, however, when I converted an old line of blocking code like this:

var title = wiki.getPage(slug).title;

To this:

let title = await wiki.getPage(slug).title;

I didn’t catch the error on that line until I started seeing failures from the macro. In the old KumaScript, wiki.getPage() would block and return the requested data synchronously. In the new KumaScript, wiki.getPage() is declared async which means it returns a Promise. And the code above is trying to access a non-existent title property on that Promise object.

Mechanically inserting an await in front of the invocation does not change that fact because the await operator has lower precedence than the . property access operator. In this case, I needed to add some extra parentheses to wait for the Promise to resolve before accessing the title property:

let title = (await wiki.getPage(slug)).title;

This relatively small change in our KumaScript code means that we no longer need the fibers extension compiled into our Node binary; it means we don’t need the hirelings package any more; and it means that I was able to remove a bunch of code that handled the complicated details of communication between the main process and the hireling worker processes that were actually rendering macros.

And here’s the kicker: when rendering macros that do not make HTTP requests (or when the HTTP results are cached) I saw rendering speeds increase by a factor of 25 (not 25% faster–25 times faster!). And at the same time CPU load dropped in half. In production, the new KumaScript server is measurably faster, but not nearly 25x faster, because, of course, the time required to make asynchronous HTTP requests dominates the time required to synchronously render the template. But achieving a 25x speedup, even if only under controlled conditions, made this refactor a very satisfying experience!

Object.create() and Object.freeze()

There is one other piece of this KumaScript refactor that I want to talk about because it highlights some JavaScript techniques that deserve to be better known. As I’ve written above, KumaScript uses EJS templates. When you render an EJS template, you pass in an object that defines the bindings available to the JavaScript code in the template. Above, I described a KumaScript macro that called a function named wiki.getPage(). In order for it to do that, KumaScript has to pass an object to the EJS template rendering function that binds the name wiki to an object that includes a getPage property whose value is the relevant function.

For KumaScript, there are three layers of this global environment that we make available to EJS templates. Most fundamentally, there is the macro API, which includes wiki.getPage() and a number of related functions. All macros rendered by KumaScript share this same API. Above this API layer is an env object that gives macros access to page-specific values such as the language and title of the page within which they appear. When the Kuma server submits an MDN page to the KumaScript server for rendering, there are typically multiple macros to be rendered within the page. But all macros will see the same values for per-page variables like env.title and env.locale. Finally, each individual macro invocation on a page can include arguments, and these are exposed by binding them to variables $0, $1, etc.

So, in order to render macros, KumaScript has to prepare an object that includes bindings for a relatively complex API, a set of page-specific variables, and a set of invocation-specific arguments. When refactoring this code, I had two goals:

  • I didn’t want to have to rebuild the entire object for each macro to be rendered.
  • I wanted to ensure that macro code could not alter the environment and thereby affect the output of future macros.

I achieved the first goal by using the JavaScript prototype chain and Object.create(). Rather than defining all three layers of the environment on a single object, I first created an object that defined the fixed macro API and the per-page variables. I reused this object for all macros within a page. When it was time to render an individual macro, I used Object.create() to create a new object that inherited the API and per-page bindings, and I then added the macro argument bindings to that new object. This meant that there was much less setup work to do for each individual macro to be rendered.

But if I was going to reuse the object that defined the API and per-page variables, I had to be very sure that a macro could not alter the environment, because that would mean that a bug in one macro could alter the output of a subsequent macro. Using Object.create() helped a lot with this: if a macro runs a line of code like wiki = null;, that will only affect the environment object created for that one render, not the prototype object that it inherits from, and so the wiki.getPage() function will still be available to the next macro to be rendered. (I should point out that using Object.create() like this can cause some confusion when debugging because an object created this way will look like it is empty even though it has inherited properties.)
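
A stripped-down sketch of that layering (illustrative only, not the actual Environment class) looks something like this:

// built once per page: the macro API plus the per-page variables
const perPagePrototype = {
    wiki: { getPage: async (slug) => ({ title: "..." }) },
    env: { locale: "en-US", title: "The page being rendered" },
};

// built once per macro invocation: inherits everything above and
// adds this invocation's arguments as $0, $1, ...
function environmentFor(args) {
    const environment = Object.create(perPagePrototype);
    args.forEach((value, i) => { environment["$" + i] = value; });
    return environment;
}

const macroEnvironment = environmentFor(["html"]);
macroEnvironment.wiki = null;           // only shadows wiki on this one object
console.log(perPagePrototype.wiki);     // the shared prototype is untouched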

This Object.create() technique was not enough, however, because a macro that included the code wiki.getPage = null; would still be able to alter its execution environment and affect the output of subsequent macros. So, I took the extra step of calling Object.freeze() on the prototype object (and recursively on the objects it references) before I created objects that inherited from it.

Object.freeze() has been part of JavaScript since 2009, but you may not have ever used it if you are not a library author. It locks down an object, making all of its properties read-only. Additionally it “seals” the object, which means that new properties cannot be added and existing properties can not be deleted or configured to make them writable again.

I’ve always found it reassuring to know that Object.freeze() is there if I need it, but I’ve rarely actually needed it. So it was exciting to have a legitimate use for this function. There was one hitch worth mentioning, however: after triumphantly using Object.freeze(), I found that my attempts to stub out macro API methods like wiki.getPage() were failing silently. By locking down the macro execution environment so tightly, I’d locked out my own ability to write tests! The solution was to set a flag when testing and then omit the Object.freeze() step when the flag was set.
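
For the curious, a recursive freeze along those lines can be written in a few lines. This is a sketch rather than the real KumaScript code, and the skipForTesting flag is just an illustration of the kind of escape hatch described above:

// recursively freeze an object and every plain object it references
// (functions and cyclic references are not handled in this sketch)
function deepFreeze(obj, { skipForTesting = false } = {}) {
    if (skipForTesting) return obj;     // leave it writable so tests can stub methods
    for (const value of Object.values(obj)) {
        if (value && typeof value === "object") {
            deepFreeze(value);
        }
    }
    return Object.freeze(obj);
}

const api = deepFreeze({ wiki: { getPage: async (slug) => ({ title: "..." }) } });
api.wiki.getPage = null;                // silently ignored (a TypeError in strict mode)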

If this all sounds intriguing, you can take a look at the Environment class in the KumaScript source code.

The post Refactoring MDN macros with async, await, and Object.freeze() appeared first on Mozilla Hacks - the Web developer blog.

Mozilla Gfx TeamWebRender newsletter #39

Hi there! The project keeps making very good progress (only 7 blocker bugs left at the time of writing these words, some of which have fixes in review). This means WebRender has a good chance of making it into Firefox 67 stable. I expect bugs and crash reports to spike as WebRender reaches a larger user population, which will keep us busy for a short while, and once things settle we’ll be able to go back to something we have been postponing for a while: polishing, adding new features and preparing WebRender for new platforms. Exciting!
I’d like to showcase a few projects that use WebRender in a future WebRender newsletter. If you maintain or know about one, please let us know in the comments section of this post.

Notable WebRender and Gecko changes

  • Jeff experimented with enabling WebRender for a few more configurations.
  • Kats enabled more WPT tests for windows-qr.
  • Kvark fixed more perspective interpolation issues.
  • Kvark improved the way the resolution of transformed intermediate surfaces is computed and followed up with more improvements.
  • Kvark fixed some plane-splitting bugs.
  • Kvark prevented a crash with non-mappable clip rects.
  • Andrew fixed a pixel snapping issue.
  • srijs and Lee worked around yet another Mac GLSL compiler bug.
  • Lee fixed a performance regression related to animated blobs being invalidated too frequently.
  • Emilio fixed a clipping regression.
  • Glenn fixed a regression with tiled clip masks.
  • Glenn improved the performance of large blur radii by down-scaling more aggressively.
  • Glenn added more debugging infrastructure in wrench.
  • Sotaro enabled mochitest-chrome in WebRender.
  • Sotaro fixed an intermittent assertion.
  • Sotaro fixed a race condition between GPU process crashes and video playback.
  • Doug improved document splitting generalization and integration with APZ.

Blocker bugs countdown

The team keeps going through the remaining blockers (0 P2 bugs and 7 P3 bugs at the time of writing).

Enabling WebRender in Firefox Nightly

In about:config, set the pref “gfx.webrender.all” to true and restart the browser.

Reporting bugs

The best place to report bugs related to WebRender in Firefox is the Graphics :: WebRender component in bugzilla.
Note that it is possible to log in with a github account.

Mozilla L10NA New Year with New Goals for Mozilla Localization

 

We had a really ambitious and busy year in 2018! Thanks to the help of the global localization community as well as a number of cross-functional Mozilla staff, we were able to focus our efforts on improving the foundations of our localization program. These are some highlights of what we accomplished in 2018:

  • Fluent syntax stability.
  • New design for review process in Pontoon.
  • Continuous localization for Firefox desktop.
  • arewefluentyet.com
  • Formation of Mozilla Terminology Working Group for defining en-US source terms.
  • 8 community-organized workshops around the world.
  • Firefox Lite localization.
  • Research and recommendations for future international brand management.
  • Began the rewrite of Pontoon’s Translate view to React.
  • Clearly defined l10n community roles and their responsibilities.

Rather than plan out our goals for the full year in 2019, we’ve been encouraged to take it a quarter at a time. That being said, there are a number of interesting themes that will pop up in 2019 as well as the continuation of work from 2018:

Standardize & Scale

There are still areas within our tool-chain as well as our processes that make it hard to scale localization to all of Mozilla. Over the course of 2018 we saw more and more l10n requests from internal teams that required customized processes. The good news here is that the organization as a whole wants to localize more and more content (that hasn’t been true in the past)!

While we’ve seen success in standardizing the processes for localizing product user interfaces, we’ve struggled to rein in the customizations for other types of content. In 2019, we’ll focus a lot of our energy on bringing more stability and consistency to localizers by standardizing localization processes according to specific content types. Once standardized, we’ll be able to scale to meet the needs of these internal teams while keeping the amount of new content to translate at consistent volumes.

Mobilize South East Asian Locales

One of the primary focus areas for all of Mozilla this year is South East Asian markets. The Emerging Markets team in Taipei is focused on creating products for those markets that meet the needs of users there, building on the success of Screenshots Go and Firefox Lite. This year we’ll see more products coming to these markets and it will be more important than ever for us to know how to mobilize l10n communities in those regions in order to localize these exciting, new products.

New Technologies

Early this year we plan to hit a major milestone: Fluent 1.0! This is the culmination of over a decade’s worth of work and we couldn’t be more proud of this accomplishment. Fluent will continue to be implemented in Firefox as well as other Mozilla projects throughout 2019. We’re planning a roadmap for an ecosystem of tooling to support Fluent 1.0 as well as exploring how to build a thriving Fluent community.

Pontoon’s Translate view rewrite to React will be complete and we’ll be implementing features for a newly redesigned review process. Internationalizing the Pontoon Translate UI will be a priority, as will addressing some long-requested feature updates, like terminology support and improved community and user profile metrics.

Train the Trainers

In 2018 we published clear descriptions of the responsibilities and expectations of localizers in specific community roles. These roles are mirror images of Pontoon roles, as Pontoon is the central hub for localization at Mozilla. In 2019, we plan to organize a handful of workshops in the latter half of the year to train Managers on how to be effective leaders in their communities and reliable extensions of the l10n-drivers team. We would like to record at least one of these and make the workshop training available to everyone through the localizer documentation (or some other accessible place).

We aim to report on the progress of these themes throughout the year in quarterly reports. In each report, we’ll share the outcomes of the objectives of one quarter and describe the objectives for the next quarter. In Q1 of 2019 (January – March), the l10n-drivers will:

  • Announce release of Fluent 1.0 to the world
  • Standardize vendor localization process under separate, self-service tool-chain for vendor-sourced content types.
  • Standardize the way Android products are bootstrapped and localized
  • Know how to effectively mobilize South/East Asian communities
  • Transition mozilla.org away from .lang-based l10n infrastructure.
  • Port Pontoon’s translate view to React and internationalize it.

As always, if you have questions about any of these objectives or themes for 2019, please reach out to an l10n-driver, we’d be very happy to chat.

The Mozilla BlogDoes Your Sex Toy Use Encryption?

This Valentine’s Day, Mozilla is assessing the privacy and security features of romantic connected devices

 

This Valentine’s Day, use protection.

To be more specific: use encryption and strong passwords.

As the Internet of Things expands, the most intimate devices are coming online. Sex toys and beds now connect to the internet. These devices collect, store, and often share our personal data.

Connected devices in the bedroom can amp up romance. But they also have the potential to expose the most intimate parts of our lives. Consumers have the right to know if their latest device has privacy and security features that meet their standards.

So today, Mozilla is releasing a Valentine’s Day supplement to *Privacy Not Included, our annual holiday shopping guide.

Last November, we assessed the privacy and security features of 70 popular products, from Nintendo Switch and Google Home to drones and smart coffee makers. The idea: help consumers shop for gifts by highlighting a product’s privacy features, rather than just price and performance.

Now, we’re assessing 18 more products, just in time for February 14.

We researched vibrators; smart beds and sleep trackers; connected aromatherapy machines; and more.

Our research is guided by Mozilla’s Minimum Security Standards, five basic guidelines we believe all connected devices should adhere to. Mozilla developed these standards alongside our friends at Consumers International and the Internet Society. Our Minimum Security Standards include encrypted communications; automatic security updates; strong, unique passwords; vulnerability management; and an accessible privacy policy.

Of the 18 products we reviewed for this guide, nine met our standards. Among these nine: a smart vibrator that uses encryption and features automatic security updates. A Kegel exerciser that doesn’t share user data with unexpected third parties. And a fitness tracker that allows users to easily delete stored data.

Nine products did not meet our Minimum Security Standards, or weren’t clear enough in their privacy policies or our correspondence for Mozilla to make a determination. Among these nine: a smart vibrator that can be hacked by spoofing requests. And a smart vibrator with no privacy policy at all.

Lastly: This installment once again features the Creep-O-Meter, an emoji-based tool that lets readers share how creepy (or not creepy) they believe a product’s approach to privacy and security is.

Thanks for reading. And Happy Valentine’s Day 💖


Jen Caltrider is Mozilla’s Content Strategy Lead and a co-creator of the guide.

The post Does Your Sex Toy Use Encryption? appeared first on The Mozilla Blog.

The Mozilla BlogPutting Users and Publishers at the Center of the Online Value Exchange

Publishers are getting a raw deal in the current online advertising ecosystem. The technology they depend on to display advertisements also ensures they lose the ability to control who gets their users’ data and who gets to monetize that data. With third-party cookies, users can be tracked from high-value publishers to sites they have never chosen to trust, where they are targeted based on their behavior from those publisher sites. This strips value from publishers and fuels rampant ad fraud.

In August, Mozilla announced a new anti-tracking strategy intended to get to the root of this problem. That strategy includes new restrictions on third-party cookies that will make it harder to track users across websites and that we plan to turn on by default for all users in a future release of Firefox. Our motive for this is simple: online tracking is unacceptable for our users and puts their privacy at risk. We know that a large portion of desktop users have installed ad blockers, showing that people are demanding more online control. But our approach also offers an opportunity to rebalance the ecosystem in a way that is in the long-term interest of publishers.

There needs to be a profitable revenue ecosystem on the web in order to create, foster and support innovation. Our third-party cookie restrictions will allow loading of advertising and other types of content (such as videos and sponsored articles), but will prevent the cookie-based tracking that users cannot meaningfully control. This strikes a better balance for publishers than ad blocking – user data is protected and publishers are still able to monetize page visits through advertisements and other content.

Our new approach will deliver both upsides and downsides for publishers, and we want to be clear about both. On the upside, by removing more sophisticated, profile-based targeting, we are also removing the technology that allows other parties to siphon off data from publishers. Ad fraud that depends on third-party cookies to track users from high-value publishers to low-value fraudster sites will no longer work. On the downside, our approach will make it harder to do targeted advertising that depends on cross-site browsing profiles, possibly resulting in an impact on the bottom line of companies that depend on behavioral advertising. Targeting that depends on the context (i.e. what the user is reading) and location will continue to be effective.

In short, behavioral targeting will become more difficult, but publishers should be able to recoup a larger portion of the value overall in the online advertising ecosystem. This means the long-term revenue impact will be on those third-parties in the advertising ecosystem that are extracting value from publishers, rather than bringing value to those publishers.

We know that our users are only one part of the equation here; we need to go after the real cause of our online advertising dysfunction by helping publishers earn more than they do from the status quo. That is why we need help from publishers to test the cookie restrictions feature and give us feedback about what they are seeing and what the potential impact will be. Reach out to us at publisher-feedback@mozilla.com. The technical documentation for these cookie restrictions can be found here. To test this feature in Firefox 65, visit “about:preferences#privacy” using the address bar. Under “Content Blocking” click “Custom”, click the checkbox next to “Cookies”, and ensure the dropdown menu is set to “Third-party trackers”.

We look forward to working with publishers to build a more sustainable model that puts them and our users first.

The post Putting Users and Publishers at the Center of the Online Value Exchange appeared first on The Mozilla Blog.

hacks.mozilla.orgFirefox 66 to block automatically playing audible video and audio

Isn’t it annoying when you click on a link or open a new browser tab and audible video or audio starts playing automatically?

We know that unsolicited volume can be a great source of distraction and frustration for users of the web. So we are making changes to how Firefox handles playing media with sound. We want to make sure web developers are aware of this new autoplay blocking feature in Firefox.

Starting with the release of Firefox 66 for desktop and Firefox for Android, Firefox will block audible audio and video by default. We only allow a site to play audio or video aloud via the HTMLMediaElement API once a web page has had user interaction to initiate the audio, such as the user clicking on a “play” button.

Any playback that happens before the user has interacted with a page via a mouse click, printable key press, or touch event, is deemed to be autoplay and will be blocked if it is potentially audible.

Muted autoplay is still allowed. So a script can set the “muted” attribute on an HTMLMediaElement to true, and autoplay will work.

We expect to roll out audible autoplay blocking, enabled by default, in Firefox 66, which is scheduled for general release on 19 March 2019. In Firefox for Android, this will replace the existing block autoplay implementation with the same behavior we’ll be using in Firefox on desktop.

There are some sites on which users want audible autoplay audio and video to be allowed. When Firefox for Desktop blocks autoplay audio or video, an icon appears in the URL bar. Users can click on the icon to access the site information panel, where they can change the “Autoplay sound” permission for that site from the default setting of “Block” to “Allow”. Firefox will then allow that site to autoplay audibly. This allows users to easily curate their own whitelist of sites that they trust to autoplay audibly.

Firefox expresses a blocked play() call to JavaScript by rejecting the promise returned by HTMLMediaElement.play() with a NotAllowedError. All major browsers which block autoplay express a blocked play via this mechanism. In general, the advice for web authors calling HTMLMediaElement.play() is not to assume that calls to play() will always succeed, and to always handle the case where the promise returned by play() is rejected.

If you want to avoid having your audible playback blocked, you should only play media inside a click or keyboard event handler, or on mobile in a touchend event. Another strategy to consider for video is to autoplay muted, and present an “unmute” button to your users. Note that muted autoplay is also currently allowed by default in all major browsers which block autoplay media.
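
As a rough illustration of that advice, here is a minimal sketch; the video element and the unmute button are hypothetical, but the promise handling follows the behavior described above, falling back to muted playback when an audible play() is blocked.

const video = document.querySelector("video");
const unmuteButton = document.querySelector("#unmute-button"); // hypothetical control

video.play().then(() => {
  // Playback started: the page already had user interaction,
  // or the user has allowed this site to autoplay audibly.
}).catch((error) => {
  if (error.name === "NotAllowedError") {
    // Audible autoplay was blocked. Fall back to muted playback,
    // and let the user opt in to sound with a click.
    video.muted = true;
    video.play();
    unmuteButton.addEventListener("click", () => {
      video.muted = false;
    });
  }
});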

We are also allowing sites to autoplay audibly if the user has previously granted them camera/microphone permission, so that sites which have explicit user permission to run WebRTC should continue to work as they do today.

At this time, we’re also working on blocking autoplay for Web Audio content, but have not yet finalized our implementation. We expect to ship with autoplay Web Audio content blocking enabled by default sometime in 2019. We’ll let you know!

The post Firefox 66 to block automatically playing audible video and audio appeared first on Mozilla Hacks - the Web developer blog.

SeaMonkey2.49.5? Where is it? Quick! Call Waldo!

Hi all,

(didn’t really have any quip to put in the topic..  so wrote that…  *shrug*)

We have finally started spinning 2.49.5.   This is going to be the most EPIC build process.  Why?  Oh…let me count the ways.

  1. 2.49.5 will be spun on a totally new and unproven infrastructure (yay… ;/ )
  2. 2.49.5 will require A LOT of trial and error.
  3. 2.49.5 isn’t going to be released that fast, as we depend on three systems.  I’m hoping to get another system up to take on the builds.

Here’s the status of the build.

  • The tagging process completed with a hiccup but it ‘should’ be ok.
  • Current build(s): Linux32 [failed – in the process of being fixed]
  • Win64 is not talking to the master so apparently Win64 has had a tiff with the master. (*sigh*  will have to convince those two to become friends again.)
  • OSX64 is off doing its own thing (nightly..  need to redirect its attention to more pressing matters.)

I’ll update everyone as it progresses.

:ewong

Firefox UXBias and Hiring: How We Hire UX Researchers

This year, the Firefox User Research team is planning to add two new researchers to our group. The job posting went live last month, and after just a few weeks of accepting applications, we had over 900 people apply.

Current members of the Firefox User Research Team fielded dozens of messages from prospective applicants during this time, most asking for informational meetings to discuss the open role. We decided as a team to decline these requests across the board because we did not have the bandwidth for the number of meetings requested, and more importantly we have spent a significant amount of time this year working on minimizing bias in our hiring process.

We felt that meeting with candidates outside of the formal hiring process would give unfair advantage to some candidates and undermine our de-biasing work. At the same time, in alignment with Mozilla’s values and to build on Mozilla’s diversity and inclusion disclosures from earlier this year, we realized there was an opportunity to be more transparent about our hiring process for the benefit of future job applicants and teams inside and outside Mozilla thinking about how they can minimize bias in their own hiring.

Our Hiring Process Before This Year

Before this year, our hiring process consisted of a number of steps. First, a Mozilla recruiter would screen resumes for basic work requirements such as legal status to work in the regions where we were hiring and high-level relevant work experience. Applicants with resumes that passed the initial screen would then be screened by the recruiter over the phone. The purpose of the phone screen was to verify the HR requirements, the applicant’s requirements, work history, and most relevant experience.

If the applicant passed the screen with the recruiter, two members of the research team would conduct individual phone screens with the applicant to understand the applicant’s experience with different research methods and any work with distributed teams. Applicants who passed the screen with the researchers would be invited to a Mozilla office for a day of 1:1 in-person interviews with researchers and non-researchers and asked to present a research exercise prepared in advance of the office visit.

Steps to hiring a UX researcher at Mozilla, from resume screen to hiring team debrief

This hiring process served us well in several ways. It involved both researchers and roles that interact with researchers regularly, such as UX designers and product managers. Also, the mix of remote and in-person components reflected the ways we actually work at Mozilla. The process overall yielded hires — our current research team members — who have worked well together and with cross-functional teams.

However, there were also a lot of limitations to our former hiring process. Each Mozilla staff person involved determined their own questions for the phone and in-person components. We had a living document of questions team members liked to ask, but staff were free to draw on this list as little or as much as they wanted. Moreover, while each staff person had to enter notes into our applicant tracking system after a phone screen or interview with an applicant, we had no explicit expectations about how these notes were to be structured. We were also inconsistent in how we referred to notes during the hiring team debrief meetings where final decisions about applicants were typically made.

Our New Hiring Process: What We’ve Done

Our new hiring process is a work in progress. We want to share the strides we have made and also what we would still like to do. Our first step in trying to reduce bias in our hiring process was to document our current hiring process, which was not documented comprehensively anywhere, and to try and identify areas for improvement. Simultaneously, we set out to learn as much as we could about bias in hiring in general. We consulted members of Mozilla’s Diversity and Inclusion team, dug into materials from Stanford’s Clayman Institute for Gender Research, and talked with several managers in other parts of Mozilla who had undertaken de-biasing efforts for their own hiring. This “discovery” period helped us identify a number of critical steps.

First, we needed to develop a list of essential and desired criteria for job candidates. The researcher job description we had been using reflected many of the criteria we ultimately kept, but the exercise of distilling essential and desired criteria allowed current research team members to make much that was implicit, explicit.

Team members were able to ask questions about the criteria, challenge assumptions, and in the end build a shared understanding of expectations for members of our team. For example, we previously sought out candidates with 1–3 years of work experience. With this criterion, we were receiving applications from some candidates who only had experience within academia. It was through discussing how each of our criteria relates to the ways we actually work at Mozilla that we determined that what was even more essential than 1–3 years of user research experience in general was that this experience be specifically in industry. The task of distilling our hiring criteria was not necessarily difficult, but it took several hours of synchronous and asynchronous discussion. We all acknowledged that time was well spent because our new hiring process would be built from these agreed-upon criteria.

Next, we wrote phone screen and interview questions that aligned with the essential and desired criteria. We completed this step mostly asynchronously, with each team member contributing and reviewing questions. We also asked UX designers, content strategists, and product managers that we work with to contribute questions, also aligned with our essential and desired criteria, that they would like to ask researcher candidates.

The next big piece was to develop a rubric for grading answers to the questions we had just written. For each question, again mostly asynchronously, research team members detailed what they thought were “excellent,” “acceptable,” and “poor” answers, with the goal of producing a rubric that was self-evident enough that it could be used by hiring team members other than ourselves. Like the exercise of crafting criteria, this step required as much research team discussion time as writing time. We then took our completed draft of the rubric and determined at which phase of the hiring process each question would be asked.

Additionally, we revisited the research exercise that we have candidates complete to make its purpose and the exercise constraints more explicit. Like we did for the phone screen and interview questions, we developed a detailed rubric for the research exercise based on our essential and desirable hiring criteria.

Most recently, we have turned our new questions and rubrics into worksheets, which Mozilla staff will use to document applicants’ answers. These worksheets will also allow staff to document any additional questions they pose to applicants and the corresponding answers, as well as questions applicants ask us. Completed worksheets will be linked to our applicant tracking system and be used to structure the hiring team debrief meetings where final decisions about leading applicants will be made.

From the work we have done on our hiring process, we anticipate a number of benefits, including:

  • Less bias on the part of hiring team members about what we think of as desirable qualities in a candidate
  • Less time spent screening resumes given the established criteria
  • Less time preparing for and processing interviews given the standardized questions and rubrics
  • Flexibility to add new questions to any of the hiring process steps but more attention to how these new questions are tracked and answers documented
  • Less time on final decision making given the criteria, rubrics, and explicit expectations for documenting candidates’ answers

Next Steps

Our Mozilla recruiter and members of the research team have started going through the 900+ resumes we have received to determine which candidates will be screened by phone. We fully expect to learn a lot and make changes to our hiring process after this first attempt at putting it into practice. There are also several other resource-intensive steps we would like to take in the near future to mitigate bias further, including:

  • Making our hiring process more transparent by publishing it where it would be discoverable (for instance, some Mozilla teams are publishing hiring materials to GitHub)
  • Establishing greater alignment between our new process and the mechanics of our applicant tracking system to make the hiring process easier for hiring team members
  • At the resume screening phase, blinding parts of resumes that can contribute to bias such as candidate names, names of academic institutions, and graduation dates
  • Sharing the work we have done on our hiring process via blog posts and other platforms to help foster critical discussion

Teams who are interested in trying out some of the exercises we carried out to improve our hiring process are welcome to use the template we developed for our purposes. We are also interested in learning how other teams have tackled bias in hiring and welcome suggestions, in particular, for blinding when hiring people from around the world.

We are looking forward to learning from this work and welcoming new research team members who can help us advance our efforts.

Thank you to Gemma Petrie and Mozilla’s Diversity & Inclusion Team for reviewing an early draft of this post.

Also published on the Firefox UX blog

Firefox UXHow do people decide whether or not to get a browser extension?

The Firefox Add-ons Team works to make sure people have all of the information they need to decide which browser extensions are right for them. Past research conducted by Bill Selman and the Add-ons Team taught us a lot about how people discover extensions, but there was more to learn. Our primary research question was: “How do people decide whether or not to get a specific browser extension?”

We recently conducted two complementary research studies to help answer that big question:

  1. An addons.mozilla.org (AMO) survey, with just under 7,500 respondents
  2. An in-person think-aloud study with nine recruited participants, conducted in Vancouver, BC

The survey ran from July 19, 2018 to July 26, 2018 on addons.mozilla.org (AMO). The survey prompt was displayed when visitors went to the site and was localized into ten languages. The survey asked questions about why people were visiting the site, if they were looking to get a specific extension (and/or theme), and if so what information they used to decide to get it.

Screenshot of the survey message bar on addons.mozilla.org.

The think-aloud study took place at our Mozilla office in Vancouver, BC from July 30, 2018 to August 1, 2018. The study consisted of 45-minute individual sessions with nine participants, in which they answered questions about the browsers they use, and completed tasks on a Windows laptop related to acquiring a theme and an extension. To get a variety of perspectives, participants included three Firefox users and six Chrome users. Five of them were extension users, and four were not.

Mozilla office conference room in Vancouver, where the think-aloud study took place.

What we learned about decision-making

People use social proof on the extension’s product page

Ratings, reviews, and number of users proved important for making a decision to get the extension in both the survey and think-aloud study. Think-aloud participants used these metrics as a signal that an extension was good and safe. All except one think-aloud participant used this “social proof” before installing an extension. The importance of social proof was backed up by the survey responses where ratings, number of users, and reviews were among the top pieces of information used.

Screenshot of Facebook Container’s page on addons.mozilla.org with the “social proof” outlined: number of users, number of reviews, and rating.
AMO survey responses to “Think about the extension(s) you were considering getting. What information did you use to decide whether or not to get the extension?”

People use social proof outside of AMO

Think-aloud participants mentioned using outside sources to help them decide whether or not to get an extension. Outside sources included forums, advice from “high authority websites,” and recommendations from friends. The same result is seen among the survey respondents, where 40.6% of respondents used an article from the web and 16.2% relied on a recommendation from a friend or colleague. This is consistent with our previous user research, where participants used outside sources to build trust in an extension.

Screenshot of an example outside source: TechCrunch article about Facebook Container extension.
AMO survey responses to “What other information did you use to decide whether or not to get an extension?”

People use the description and extension name

Screenshot of Facebook Container’s page on addons.mozilla.org with extension name, descriptions, and screenshot highlighted.

Almost half of the survey respondents use the description to make a decision about the extension. While the description was the top piece of content used, we also see that over one-third of survey respondents evaluate the screenshots and the extension summary (the description text beneath the extension name), which shows their importance as well.

Think-aloud participants also used the extension’s description (both the summary and the longer description) to help them decide whether or not to get it.

While we did not ask about the extension name in the survey, it came up during our think-aloud studies. The name of the extension was cited as important to think-aloud participants. However, they mentioned how some names were vague and therefore didn’t assist them in their decision to get an extension.

Themes are all about the picture

In addition to extensions, AMO offers themes for Firefox. From the survey responses, the most important part of a theme’s product page is the preview image. Based on this survey result, it’s clear that imagery far surpasses any social proof or description in importance.

Screenshot of a theme on addons.mozilla.org with the preview image highlighted.
AMO survey responses to “Think about the theme(s) you were considering getting. What information did you use to decide whether or not to get the theme?”

All in all, we see that while social proof is essential, great content on the extension’s product page and in external sources (such as forums and articles) is also key to people’s decisions about whether or not to get an extension. When we’re designing anything that requires people to make an adoption decision, we need to remember the importance of social proof and great content, within and outside of our products.

In alphabetical order by first name, thanks to Amy Tsay, Ben Miroglio, Caitlin Neiman, Chris Grebs, Emanuela Damiani, Gemma Petrie, Jorge Villalobos, Kev Needham, Kumar McMillan, Meridel Walkington, Mike Conca, Peiying Mo, Philip Walmsley, Raphael Raue, Richard Bloor, Rob Rayborn, Sharon Bautista, Stuart Colville, and Tyler Downer, for their help with the user research studies and/or reviewing this blog post.

Also published on the Firefox UX blog.

Firefox UXReflections on a co-design workshop

Authors: Jennifer Davidson, Meridel Walkington, Emanuela Damiani, Philip Walmsley

Co-design workshops help designers learn first-hand the language of the people who use their products, in addition to their pain points, workflows, and motivations. With co-design methods [1] participants are no longer passive recipients of products. Rather, they are involved in the envisioning and re-imagination of them. Participants show us what they need and want through sketching and design exercises. The purpose of a co-design workshop is not to have a pixel-perfect design to implement, rather it’s to learn more about the people who use or will use the product, and to involve them in generating ideas about what to design.

We ran a co-design workshop at Mozilla to inform our product design, and we’d like to share our experience with you.

Sketching exercises during the co-design workshop were fueled by coffee and tea.

Before the workshop

Our UX team was tasked with improving the Firefox browser extension experience. When people create browser extensions, they use a form to submit their creations. They submit their code and all the metadata about the extension (name, description, icon, etc.). The metadata provided in the submission form is used to populate the extension’s product page on addons.mozilla.org.

A cropped screenshot of the third step of the submission form, which asks for metadata like name and description of the extension.
Screenshot of an extension product page on addons.mozilla.org.

The Mozilla Add-ons team (i.e., Mozilla staff who work on improving the extensions and themes experience) wanted to make sure that the process to submit an extension is clear and useful, yielding a quality product page that people can easily find and understand. Improving the submission flow for developers would lead to higher quality extensions for people to use.

We identified some problems by using test extensions to “eat our own dog food” (i.e. walk through the current process). Our content strategist audited the submission flow experience to understand product page guidelines in the submission flow. Then some team members conducted a cognitive walkthrough [2] to gain knowledge of the process and identify potential issues.

After identifying some problems, we sought to improve our submission flow for browser extensions. We decided to run a co-design workshop that would identify more problem areas and generate new ideas. The workshop took place in London on October 26, one day before MozFest, an annual week-long “celebration for, by, and about people who love the internet.” Extension and theme creators were selected from our global add-ons community to participate in the workshop. Mozilla staff members were involved, too: program managers, a community manager, an Engineering manager, and UX team members (designers, a content strategist, and a user researcher).

A helpful and enthusiastic sticky note on the door of our workshop room. Image: “Submission flow workshop in here!!” posted on a sticky note on a wooden door.

Steps we took to create and organize the co-design workshop

After the audit and cognitive walkthrough, we thought a co-design workshop might help us get to a better future. So we did the following:

  1. Pitch the idea to management and get buy-in
  2. Secure budget
  3. Invite participants
  4. Interview participants (remotely)
  5. Analyze interviews
  6. Create an agenda for the workshop. Our agenda included: ice breaker, ground rules, discussion of interview results, sketching (using this method [3]) & critique sessions, creating a video pitch for each group’s final design concept.
  7. Create workshop materials
  8. Run the workshop!
  9. Send out a feedback survey
  10. Debrief with Mozilla staff
  11. Analyze results (over three days) with Add-ons UX team
  12. Share results (and ask for feedback) of analysis with Mozilla staff and participants

Lessons learned: What went well

Interview participants beforehand

We interviewed each participant before the workshop. The participants relayed their experience about submitting extensions and their motivations for creating extensions. They told us their stories, their challenges, and their successes.

Conducting these interviews beforehand helped our team in a few ways:

  • The interviews introduced the team and facilitators, helping to build rapport before the workshop.
  • The interviews gave the facilitators context into each participant’s experience. We learned about their motivations for creating extensions and themes as well as their thoughts about the submission process. This foundation of knowledge helped to shape the co-design workshop (including where to focus for pain points), and enabled us to prepare an introductory data summary for sharing at the workshop.
  • We asked for participants’ feedback about the draft content guidelines that our content strategist created to provide developers with support, examples, and writing exercises to optimize their product page content. Those guidelines were to be incorporated into the new submission flow, so it was very helpful to get early user feedback. It also gave the participants some familiarity with this deliverable so they could help incorporate it into the submission flow during the workshop.
A photo of Jennifer, user researcher, presenting interview results back to the participants, near the beginning of the workshop.

Thoughtfully select diverse participants

The Add-ons team has an excellent community manager, Caitlin Neiman, who interfaces with the greater Add-ons community. Working with Mozilla staff, she selected a diverse group of community participants for the workshop. The participants hailed from several different countries, some were paid to create extensions and some were not, and some had attended Mozilla events before and some had not. This careful selection of participants resulted in diverse perspectives, workflows, and motivations that positively impacted the workshop.

Create Ground Rules

Design sessions can benefit from a short introductory activity of establishing ground rules to get everyone on the same page and set the tone for the day. This activity is especially helpful when participants don’t know one another.

Using a flip chart and markers, we asked the room of participants to volunteer ground rules. We captured and reviewed those as a group.

A photo of Emanuela, UX Designer and facilitator, scribing ground rules on a flip chart.

Why are ground rules important?

Designing the rules together, facilitators and participants alike, aligns the group around a set of shared values, helps surface potentially harmful group behaviors, and encourages productive and healthy interactions. Ground rules help make everyone’s experience a richer and more satisfying one.

Assign roles and create diverse working groups during the workshop

The Mozilla UX team in Taipei recently conducted a participatory workshop with older adults. In their blog post, they also highlight the importance of creating diverse working groups for the workshops [4].

In our workshop, each group was comprised of:

  • multiple participants (i.e. extension and theme creators)
  • a Mozilla staff program manager, engineering manager, community manager, and/or engineer.
  • a facilitator who was either a Mozilla staff designer or program manager. As a facilitator, the designer was a neutral party in the group and could internalize participants’ mental models, workflows, and vocabulary through the experience.

We also assigned roles during group critique sessions. Each group member chose to be a dreamer (responds to ideas with a “Why not?” attitude), a realist (responds to ideas with “How?”), or a spoiler (responds to ideas by pointing out their flaws). This format is called the Walt Disney approach [5].

Post-its for each critique role: Realist, Spoiler, Dreamer

Why are critique roles important?

Everyone tends to fit into one of the Walt Disney roles naturally. Being asked to adopt a role that may not come naturally gently pushes participants out of their comfort zone. The roles help participants empathize with other perspectives.

We had other roles throughout the workshop as well, namely, a “floater” who kept everyone on track and kept the workshop running, a timekeeper, and a photographer.

Ask for feedback about the workshop results

The “co” part of “co-design” doesn’t have to end when the workshop concludes. Using what we learned during the workshop, the Add-ons UX team created personas and potential new submission flow blueprints. We sent those deliverables to the workshop participants and asked for their feedback. As UX professionals, it was useful to close the feedback loop and make sure the deliverables accurately reflected the people and workflows being represented.

Lessons Learned: What could be improved

The workshop was too long

We flew from around the world to London to do this workshop. A lot of us were experiencing jet lag. We had breaks, coffee, biscuits, and lunch. Even so, going from 9 to 4, sketching for hours and iterating multiple times was just too much for one day.

Jorge, a product manager, provided feedback about the workshop’s duration. Image: “Jorge is done” text written above a skull and crossbones sketch.

We have a few ideas about how to fix this. One approach is to introduce a greater variety of tasks; in this workshop we mostly did sketching over and over again. Another is to spread the workshop across two days, doing a few hours each day. A third is to shorten the workshop and do fewer iterations.

There were not enough Mozilla staff engineers present

The workshop was developed by a user researcher, designers, and a content strategist. We included a community manager and program managers, but we did not include engineers in the planning process (other than providing updates). One of the engineering managers said that it would have been great to have engineers present to help with ideation and hear from creators first-hand. If we were to do a design workshop again, we would be sure to have a genuinely interdisciplinary set of participants, including more Mozilla staff engineers.

And with that…

We hope that this blog post helps you create a co-design workshop that is interdisciplinary, diverse, caring of participants’ perspectives, and just the right length.

Acknowledgements

Much gratitude to our colleagues who created the workshop with us and helped us edit this blog post! Thanks to Amy Tsay, Caitlin Neiman, Jorge Villalobos, Kev Needham, Stuart Colville, Mike Conca, and Gemma Petrie.

References

[1] Sanders, Elizabeth B-N., and Pieter Jan Stappers. “Co-creation and the new landscapes of design.” Co-design 4.1 (2008): 5–18.

[2] “How to Conduct a Cognitive Walkthrough.” The Interaction Design Foundation, 2018, www.interaction-design.org/literature/article/how-to-conduct-a-cognitive-walkthrough.

[3] Gray, Dave. “6-8-5.” Gamestorming, 2 June 2015, gamestorming.com/6-8-5s/.

[4] Hsieh, Tina. “8 Tips for Hosting Your First Participatory Workshop.” Medium.com, Firefox User Experience, 20 Sept. 2018, medium.com/firefox-ux/8-tips-for-hosting-your-first-participatory-workshop-f63856d286a0.

[5] “Disney Brainstorming Method: Dreamer, Realist, and Spoiler.” Idea Sandbox, idea-sandbox.com/blog/disney-brainstorming-method-dreamer-realist-and-spoiler/.

Also published on the Firefox UX Blog.

Mozilla Add-ons BlogFebruary’s featured extensions

Firefox Logo on blue background

Pick of the Month: ContextSearch

by Mike B
Select text to quickly search the phrase from an array of engines.

“Very intuitive and customizable. Well done!”

Featured: Word Count

by Trishul
Simply highlight text, right click, and select Word Count to easily do just that.

“Beautifully simple and incredibly useful for those of us who write for a living.”

If you’d like to nominate an extension for featuring, please send it to amo-featured [at] mozilla [dot] org for the board’s consideration. We welcome you to submit your own add-on!

The post February’s featured extensions appeared first on Mozilla Add-ons Blog.

hacks.mozilla.orgNew in Firefox DevTools 65

We just released Firefox 65 with a number of new developer features that make it even easier for you to create, inspect and debug the web.

Among all the features and bug fixes that made it to DevTools in this new release, we want to highlight two in particular:

  • Our brand new Flexbox Inspector
  • Smarter JavaScript inspection and debugging

We hope you’ll love using these tools just as much as we and our community loved creating them.

Understand CSS layout like never before

The Firefox DevTools team is on a mission to help you master CSS layout. We want you to go from “trying things until they work” to really understanding how your browser lays out a page.

Introducing the Flexbox Inspector

Flexbox gives you a powerful, flexible way to organize and distribute elements on a page.

To achieve this, the layout engine of the browser does a lot of things under the hood. When everything works like a charm, you don’t have to worry about this. But when problems appear in your layout it may feel frustrating, and you may really need to understand why elements behave a certain way.

That’s exactly what the Flexbox Inspector is focused on.

Highlighting containers, lines, and items

First and foremost, the Flexbox Inspector highlights the elements that make up your flexbox layout: the container, lines and items.

Being able to see where these start and end — and how far apart they are — will go a long way to helping you understand what’s going on.

Flexbox highlighter showing containers, items and available space

Once toggled, the highlighter shows three main parts:

  • A dotted outline that highlights the flex container itself
  • Solid lines that show where the flex items are
  • A background pattern that represents the free space between items

One way to toggle the highlighter for a flexbox container is by clicking its “flex” badge in the inspector.  This is an easy way to find flex containers while you’re scanning elements in the DOM. Additionally, you can turn on the highlighter from the flex icon in the CSS rules panel, as well as from the toggle in the new Flexbox section of the layout sidebar.

Animation showing how to enable the flexbox highlighter

Understanding how flex items got their size

The beauty of Flexbox is that you can leave the browser in charge of making the right layout decisions for you. How much should an element stretch, or should an element wrap to a new line?

But when you give up control, how do you know what the browser is actually doing?

The Flexbox Inspector comes with functionality to show how the browser distributed the sizing for a given item.
Flexbox container panel showing a list of flexbox items

The layout sidebar now contains a Flex Container section that lists all the flex items, in addition to providing information about the container itself.

Clicking any of these flex items opens the Flex Item section that displays exactly how the browser calculated the item size.
Overview of the Flexbox Item panel showing sizing information

The diagram at the top of the flexbox section shows a quick overview of the steps the browser took to give the item its size.

It shows your item’s base size (either its minimum content size or its flex-basis size), the amount of flexible space that was added (flex-grow) or removed (flex-shrink) from it, and any minimum or maximum defined sizes that restricted the item from becoming any shorter or longer.

If you are reading this on Firefox 65, you can take this for a spin right now!

Open the Inspector on this page, and select the div.masthead.row element. Look for the Flex Container panel in the sidebar, and click on the 2 items to see how their sizes are computed by the browser.

Animation showing how to use the Flexbox Inspector

After the bug fix, keep track of changes

Let’s suppose you have fixed a flexbox bug thanks to the Flexbox Inspector. To do so, you’ve made a few edits to various CSS rules and elements. That’s when you’re usually faced with a problem we’ve all had: “What did I actually change to make it work?”.

In Firefox 65, we’ve also introduced a new Changes panel to do just that.

New changes panel showing additions, deletions and modifications of CSS as diff.

It keeps track of all the CSS changes you’ve made within the inspector, so you can keep working as you normally would. Once you’re happy, open the Changes tab and see everything you did.

What’s next for layout tools?

We’re really excited for you to try these two new features and let us know what you think. But there’s more in store.

You’ve been telling us exactly what your biggest CSS challenges are, and we’ve been listening. We’re currently prototyping layout tools for debugging unwanted scrollbars, z-indexes that don’t work, and more tools like the Flexbox Inspector but for other types of layouts. Also, we’re going to make it even easier for you to extract your changes from the Changes panel.

Smarter JavaScript inspection & debugging

When developing JavaScript, the Console and Debugger are your windows into your code’s execution flow and state changes. Over the past releases we’ve focused on making debugging work better for modern toolchains. Firefox 65 continues this theme.

Collapsing Framework Stack Traces

If you’re working with frameworks and build tools, then you’re used to seeing really long error stack traces in the Console. The new smarter stack traces identify third-party code (such as frameworks) and collapse it by default. This significantly reduces the information displayed in the Console and lets you focus on your code.
Before and after version of stack traces in console.

The Collapsing feature works in the Console stack traces for errors and logs, and in the Debugger call stacks.

Reverse search your Console history

If you are tired of smashing the arrow key to find that awesome one-liner you ran one hour ago in the console, then this is for you. Reverse search is a well known command-line feature that lets you quickly browse recent commands that match the entered string.

Animation showing how to use reverse search in the console

To use it in the Console, press F9 on Windows/Linux or Ctrl+R on macOS and start typing. You can then use Ctrl+R to move to the previous result or Ctrl+S to move to the next. Finally, hit return to confirm.

Invoke getters to inspect the return value

JavaScript getters are very useful for dynamic properties and heavily used in frameworks like vue.js for computed properties. But when you log an object with a getter to the Console, the reference to the method is logged, not its return value. The method does not get invoked automatically, as that could change your application’s state. Since you often actually want to see the value, you can now manually invoke getters on logged objects.
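
For example, consider this small, hypothetical object (not tied to any particular framework). Logging it shows the fullName getter unevaluated; clicking the >> icon next to fullName in the Console invokes it and prints the return value.

const user = {
  firstName: "Ada",
  lastName: "Lovelace",
  // The getter is not invoked automatically when the object is logged,
  // because doing so could change application state.
  get fullName() {
    return `${this.firstName} ${this.lastName}`;
  }
};
console.log(user);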

Animation showing how to invoke getters in the console

Wherever objects can be inspected, in the Console or Debugger, you’ll see >> icons next to getters. Clicking these will execute the method and print the return value.

Pause on XHR/Fetch Breakpoints

Console logging is just one aspect of understanding application state. For complex issues, you need to pause state at precisely the right moment. Fetching data is usually one of those moments, and it is now made “pausable” with the new XHR/Fetch Breakpoint in the Debugger.

XHR Breakpoints panel in the debugger
Kudos to Firefox DevTools code contributor Anshul Malik for “casually” submitting the patch for this useful feature and for his ongoing contributions.

What’s next for JavaScript debugging?

You might have noticed that we’ve been heads down over recent releases to make the JavaScript debugging experience rock solid – for breakpoints, stepping, source maps, performance, etc. Raising the quality bar and continuing to polish and refine remains the focus for the entire team.

There’s work in progress on much-requested features like Column Breakpoints, Logpoints, and Event and DOM Breakpoints. Building out the authoring experience in the Console, we are adding a multi-line editing mode (inspired by Firebug) and a more powerful autocomplete. Keep an eye out for those features in the latest release of Firefox Developer Edition.

Thank you

Countless contributors helped DevTools staff by filing bugs, writing patches and verifying them. Special thanks go to:

Also, thanks to Patrick Brosset, Nicolas Chevobbe and the whole DevTools team & friends for helping put together this article.

Contribute

As always, we would love to hear your feedback on how we can improve DevTools and the browser.

Download Firefox Developer Edition to get early access to upcoming tooling and platform.

The post New in Firefox DevTools 65 appeared first on Mozilla Hacks - the Web developer blog.

Mozilla Gfx TeamWebRender newsletter #38

Greetings! WebRender’s best and only newsletter is here. The number of blocker bugs is rapidly decreasing, thanks to the efforts of everyone involved (staff and volunteers alike). The project is in good enough shape that some people are now moving on to other projects and we are starting to experiment with WebRender on new hardware. WebRender is now enabled by default in Nightly for a subset of AMD GPUs on Windows and we are looking into Intel integrated GPUs as well. As usual, we start with small subsets with the goal of gradually expanding, in order to avoid running into an overwhelming amount of platform/configuration-specific bugs at once.

Notable WebRender and Gecko changes

  • Bobby improved the test infrastructure for picture caching.
  • Jeff added restrictions to filter inputs.
  • Jeff enabled WebRender for a subset of AMD GPUs on Windows.
  • Matt fixed a filter clipping issue.
  • Matt made a few improvements to blob image performance.
  • Emilio fixed perspective scrolling.
  • Lee worked around transform animation detection disabling sub-pixel AA on some sites.
  • Lee fixed the dwrote font descriptor handling so we don’t crash anymore on missing fonts.
  • Lee, Jeff and Andrew fixed how we handle snapping with the will-change property and animated transforms.
  • Glenn improved the accuracy of sub-pixel box shadows.
  • Glenn fixed double inflation of text shadows.
  • Glenn added GPU timers for scale operations.
  • Glenn optimized drawing axis-aligned clip rectangles into clip masks.
  • Glenn used down-scaling more often to avoid large blur radii.
  • Glenn and Nical fixed uneven rendering of transformed shadows with fractional offsets.
  • Nical rewrote the tile decomposition logic to support negative tile offsets and arbitrary tiling origins.
  • Nical surveyed the available GPU debugging tools and documented the workarounds.
  • Sotaro fixed a bug with the lifetime of animations.
  • Sotaro skipped a test which is specific to how non-webrender backends work.
  • Sotaro fixed another test that was specific to the non-webrender rendering logic.
  • Sotaro fixed a bug in the iteration over image bridges when dispatching compositing notifications.
  • Doug made APZ document-splitting-aware.
  • Kvark fixed a perspective interpolation issue.

Ongoing work

The team keeps going through the remaining blockers (3 P2 bugs and 11 P3 bugs at the time of writing).

Enabling WebRender in Firefox Nightly

In about:config, set the pref “gfx.webrender.all” to true and restart the browser.

Reporting bugs

The best place to report bugs related to WebRender in Firefox is the Graphics :: WebRender component in bugzilla.
Note that it is possible to log in with a github account.

The Mozilla BlogMozilla Raises Concerns Over Facebook’s Lack of Transparency

Today Denelle Dixon, Mozilla’s Chief Operating Officer, sent a letter to the European Commission surfacing concerns about the lack of publicly available data for political advertising on the Facebook platform.

It has come to our attention that Facebook has prevented third parties from conducting analysis of the ads on their platform. This impacts our ability to deliver transparency to EU citizens ahead of the EU elections. It also prevents any developer, researcher, or organization from developing tools, critical insights, and research designed to educate and empower users to understand and therefore resist targeted disinformation campaigns.

Mozilla strongly believes that transparency cannot just be on the terms with which the world’s largest, most powerful tech companies are most comfortable. To have true transparency in this space, the Ad Archive API needs to be publicly available to everyone. This is all the more critical now that third party transparency tools have been blocked. We appreciate the work that Facebook has already done to counter the spread of disinformation, and we hope that it will fulfill its promises made under the Commission’s Code of Practice and deliver transparency to EU citizens ahead of the EU Parliamentary elections.

Mozilla’s letter to European Commission on Facebook Transparency 31 01 19

The post Mozilla Raises Concerns Over Facebook’s Lack of Transparency appeared first on The Mozilla Blog.

Open Policy & AdvocacyOnline content regulation in Europe: a paradigm for the future #1

Lawmakers in the European Union are today focused on regulating online content, and compelling online services to make greater efforts to reduce the illegal and harmful activity on their services. As we’ve blogged previously, many of the present EU initiatives – while well-intentioned – are falling far short of what is required in this space, and pose real threats to users’ rights online and the decentralised open internet. Ahead of the May 2019 elections, we’ll be taking a close look at the current state of content regulation in the EU, and advancing a vision for a more sustainable paradigm that adequately addresses lawmakers’ concerns within a rights- and ecosystem-protective framework.

Concerns about illegal and harmful content online, and the role of online services in tackling it, are driving the policy agenda in jurisdictions around the world. Whether it’s in India, the United States, or the European Union itself, lawmakers are grappling with what is ultimately a really hard problem – removing ‘bad’ content at scale without impacting ‘good’ content, and in ways that work for different types of internet services and that don’t radically change the open character of the internet. Regrettably, despite the fact that many great minds in government, academia, and civil society are working on this hard problem, online content regulation remains stuck in a paradigm that undermines users’ rights and the health of the internet ecosystem, without really improving users’ internet experience.

More specifically, the policy approaches of today – epitomised in Europe by the proposed EU Terrorist Content regulation and the EU Copyright Reform directive – are characterised by three features that, together, fail to mitigate effectively the harms of bad content, while also failing to protect the good:

  • Flawed metrics: The EU’s approach to content regulation today frames ‘success’ in terms of the speed and quantity of content removal. As we will see later in this series, this quantitative framing undermines proportionality and due process, and is unfitting for an internet defined by user-uploaded content.
  • The lack of user safeguards: Under existing content control paradigms, online service providers are forced to play the role of judge and jury, and terms of service (ToS) effectively function as a law unto themselves. As regulation becomes ‘privatised’ in this way, users have little access to the redress and oversight that one is entitled to when fundamental rights are restricted.
  • The one-size-fits-all approach: The internet is characterised by a rich diversity of service providers and use-cases. Yet at the same time, today’s online content control paradigm functions as if there is only one type of online service – namely, large, multinational social media companies. Forcing all online services to march to the compliance beat of a handful of powerful and well-resourced companies has the effect of undermining competition and internet openness.

In that context, it is clear that the present model is not fit-for purpose, and there is an urgent need to rethink how we do online content regulation in Europe. At the same time, the fact that online content regulation at scale is a hard problem is not an excuse to do nothing. As we’ve highlighted before, illegal content is symptomatic of an unhealthy internet ecosystem, and addressing it is something that we care deeply about. To that end, we recently adopted an addendum to our Manifesto, in which we affirmed our commitment to an internet that promotes civil discourse, human dignity, and individual expression. The issue is also at the heart of our recently published Internet Health Report, through its dedicated section on digital inclusion.

For these reasons, we’re focused on shaping a more progressive and sustainable discourse around online content regulation in the EU. In that endeavour there’s no time like the present: 2019 will see critical developments in EU policy initiatives around illegal and harmful content online (think terrorism, copyright, disinformation), and the new European Commission is expected to review the rules around intermediary liability in Europe – the cornerstone of online enforcement and compliance today.

In the coming weeks, we’ll be using this blog to unpack the key considerations of online content regulation, and slowly build out a vision for what a better framework could look like. We hope you’ll join us on the journey.

The post Online content regulation in Europe: a paradigm for the future #1 appeared first on Open Policy & Advocacy.

hacks.mozilla.orgFirefox 65: WebP support, Flexbox Inspector, new tooling & platform updates

Well now, there’s no better way to usher out the first month of the year than with a great new Firefox release. It’s winter for many of us, but that means more at-home time to install Firefox version 65, and check out some of the great new browser and web platform features we’ve included within. Unless you’d rather be donning your heavy coat and heading outside to grit the driveway, that is (or going to the beach, in the case of some of our Australian chums).

A good day for DevTools

Firefox 65 features several notable DevTools improvements. The highlights are as follows:

CSS Flexbox Inspector

At Mozilla, we believe that new features of the web platform are often best understood with the help of intuitive, visual tools. That’s why our DevTools team has spent the last few years getting feedback from the field, and prioritizing innovative new tooling to allow web devs and designers to inspect, edit, understand, and tinker with UI features. This drive led to the release of the CSS Grid Inspector, Font Editor, and Shape Path Editor.

Firefox 65 sees these features joined by a new friend — the CSS Flexbox Inspector — which allows you to easily visualize where your flex containers and items are sitting on the page and how much free space is available between them, what each flex item’s default and final size is, how much they are being shrunk or grown, and more.

The Firefox 65 Flexbox inspector showing several images of colored circles laid out using Flexbox

Changes panel

When you’re done tweaking your site’s interface using these tools, our new Changes panel tracks and summarizes all of the CSS modifications you’ve made during the current session, so you can work out what you did to fix a particular issue, and can copy and paste your fixes back out to your code editor.

Firefox 65 Changes panel, showing a diff of CSS added and CSS removed

Advanced color contrast ratio

We have also added an advanced color contrast ratio display. When using the Accessibility Inspector’s accessibility picker, hovering over the text content of an element displays its color contrast ratio, even if its background is complex (for example a gradient or detailed image), in which case it shows a range of color contrast values, along with a WCAG rating.

Firefox Accessibility picker, showing the color contrast ratio range of some text with a gradient behind it

JavaScript debugging improvements

Firefox 65 also features some nifty JavaScript debugging improvements:

  • When displaying stack traces (e.g. in console logs or with the JavaScript debugger), calls to framework methods are identified and collapsed by default, making it easier to home in on your code.
  • In the same fashion as native terminals, you can now use reverse search to find entries in your JavaScript console history (F9 (Windows/Linux) or Ctrl + R (macOS) and type a search term, followed by Ctrl + R/Ctrl + S to toggle through results).
  • The JavaScript console’s $0 shortcut (references the currently inspected element on the page) now has autocomplete available, so for example you could type $0.te to get a suggestion of $0.textContent to reference text content.

Find out more

CSS platform improvements

A number of CSS features have been added to Gecko in 65. The highlights are described below.

CSS environment variables

CSS environment variables are now supported, accessed via env() in stylesheets. These variables are usable in any part of a property value or descriptor, and are scoped globally to a particular document, whereas custom properties are scoped to the element(s) they are declared on. These were initially provided by the iOS browser to allow developers to place their content in a safe area of the viewport, i.e., away from the area covered by the notch.

body {
  padding:
    env(safe-area-inset-top, 20px)
    env(safe-area-inset-right, 20px)
    env(safe-area-inset-bottom, 20px)
    env(safe-area-inset-left, 20px);
}

steps() animation timing function

We’ve added the steps() CSS animation timing function, along with the related jump-* keywords. This allows you to easily create animations that jump in a series of equidistant steps, rather than a smooth animation.

As an example, we might previously have added a smooth animation to a DOM node like this:

.smooth {
  animation: move-across 2s infinite alternate linear;
}

Now we can make the animation jump in 5 equal steps, like this:

.stepped {
  animation: move-across 2s infinite alternate steps(5, jump-end);
}

Note: The steps() function was previously called frames(), but some details changed, and the CSS Working Group decided to rename it to something less confusing.

break-* properties

New break-before, break-after, and break-inside CSS properties have been added, and the now-legacy page-break-* properties have been aliased to them. These properties are part of the CSS Fragmentation spec, and set how page, column, or region breaks should behave before, after, or inside a generated box.

For example, to stop a page break occurring inside a list or paragraph:

ol, ul, p {
  break-inside: avoid;
}

JavaScript/APIs

Firefox 65 brings many updates to JavaScript/APIs.

Readable streams

Readable streams are now enabled by default, allowing developers to process data chunk by chunk as it arrives over the network, e.g. from a fetch() request.

You can find a number of ReadableStream demos on GitHub.
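
As a minimal sketch (the URL is a placeholder), a fetch() response body can be consumed chunk by chunk like this:

fetch("/large-file.bin").then(async (response) => {
  const reader = response.body.getReader();
  let received = 0;
  while (true) {
    const { done, value } = await reader.read();
    if (done) {
      console.log(`Done: received ${received} bytes in total`);
      break;
    }
    received += value.length; // value is a Uint8Array chunk
    console.log(`Received a chunk of ${value.length} bytes`);
  }
});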

Relative time formats

The Intl.RelativeTimeFormat constructor allows you to output strings describing localized relative times, for easier human-readable time references in web apps.

A couple of examples, to sate your appetite:

let rtf1 = new Intl.RelativeTimeFormat('en', { style: 'narrow' });
console.log(rtf1.format(2, 'day')); // expected output: "in 2 days"

let rtf2 = new Intl.RelativeTimeFormat('es', { style: 'narrow' });
console.log(rtf2.format(2, 'day')); // expected output: "dentro de 2 días"

Storage Access API

The Storage Access API has been enabled by default, providing a mechanism for embedded, cross-origin content to request access to client-side storage mechanisms it would normally only have access to in a first-party context. This API features a couple of simple methods, hasStorageAccess() and requestStorageAccess(), which respectively check and request storage access. For example:

document.requestStorageAccess().then(
  () => { console.log('access granted') },
  () => { console.log('access denied') }
);
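
A document can also check whether it already has access before asking; a minimal sketch (requestStorageAccess() must be triggered from a user gesture, such as a click handler):

document.hasStorageAccess().then(hasAccess => {
  if (!hasAccess) {
    // only ask for access when we don't already have it
    return document.requestStorageAccess();
  }
});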

Other honorable mentions

  • The globalThis keyword has been added, for accessing the global object in whatever context you are in. This avoids needing to use a mix of window, self, global, or this, depending on where a script is executing (e.g. a webpage, a worker, or Node.js); see the sketch after this list.
  • The FetchEvent object’s replacesClientId and resultingClientId properties are now implemented — allowing you to monitor the origin and destination of a navigation.
  • You can now set a referrer policy on scripts applied to your documents (e.g. via a referrerpolicy attribute on <script> elements).
  • Lastly, to avoid popup spam, Window.open() may now only be called once per user interaction event.
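
As a rough illustration of the globalThis addition mentioned above, the same code runs unchanged in a page, a worker, or Node.js (myAppConfig is a made-up name):

// works in a window, a worker, or Node.js without caring which one
globalThis.myAppConfig = { debug: true };
console.log(globalThis.myAppConfig.debug); // true in every context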

Media: Support for WebP and AV1, and other improvements

At long last, Firefox 65 now supports the WebP image format. WebP offers both lossless and lossy compression modes, and typically produces files that are 25-34% smaller than equivalent JPEGs or PNGs with the same image quality. Smaller files mean faster page loads and better performance, so this is obviously a good thing.

Not all browsers support WebP. You can use the <picture> element in your HTML to offer both WebP and traditional image formats, leaving the final choice to the user’s browser. You can also detect WebP support on the server-side and serve images as appropriate, as supported browsers send an Accept: image/webp header when requesting images.
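
As a rough sketch of the server-side approach (the file names and port are illustrative, not from the original post), a Node.js handler could branch on that header:

const http = require('http');

http.createServer((req, res) => {
  // browsers that support WebP advertise it in the Accept request header
  const acceptsWebP = (req.headers.accept || '').includes('image/webp');
  const file = acceptsWebP ? 'photo.webp' : 'photo.jpg';
  res.setHeader('Vary', 'Accept'); // tell caches the response depends on Accept
  res.end('Would serve ' + file);
}).listen(8080);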

Images are great, but what about video? Mozilla, along with industry partners, has been developing the next-generation AV1 video codec, which is now supported in Firefox 65 for Windows. AV1 is nearly twice as efficient as H.264 in terms of compression, and, unlike H.264, it’s completely open and royalty-free. Support for other operating systems will be enabled in future releases.

Other additions

  • The MediaRecorder pause and resume events are finally supported in Firefox, as of version 65.
  • For developers creating WebGL content, Firefox 65 supports the BPTC and RGTC texture compression extensions.

Firefox Internals

We’ve also updated several aspects of Firefox itself:

  • Support for Handoff between iOS and macOS devices is now available.
  • Preferences for content blocking have been completely redesigned to give people greater and more obvious control over how Firefox protects them from third-party tracking.
  • The about:performance dashboard now reports the memory used by tabs and extensions.
  • WebSockets have been implemented over HTTP/2.
  • Lastly, for Windows administrators, Firefox is now available as an MSI package in addition to a traditional self-extracting EXE.

WebExtensions improvements

We’ve added some useful WebExtensions API features too!

  • The Tabs API now allows extensions to control which tab gets focused when the current tab is closed. You can read more about the motivation for this feature on Piro’s blog, where he discusses it in the context of his Tree Style Tab extension.

Interoperability

The web often contains conflicting, non-standard, or under-specified markup, and it’s up to us to ensure that pages which work in other browsers also work in Firefox.

To that end, Firefox 65:

  • supports even more values of the non-standard -webkit-appearance CSS property.
  • behaves the same as other browsers when encountering the user-select CSS property in nested, shadow, or content editable contexts.
  • clears the content of <iframe>s when the src attribute is removed, matching the behavior of Safari and Chrome.

The post Firefox 65: WebP support, Flexbox Inspector, new tooling & platform updates appeared first on Mozilla Hacks - the Web developer blog.

The Mozilla Blog: Today’s Firefox Gives Users More Control over their Privacy

Privacy. While it’s the buzzword for 2019, it has always been a core part of the Mozilla mission, and continues to be a driving force in how we create features for Firefox right from the start. For example, last year at this time we had just announced Firefox Quantum with Opt-in Tracking Protection.

We’ve always made privacy for our users a priority and we saw the appetite for more privacy-focused features that protect our users’ data and put them in control. So, we knew it was a no-brainer for us to meet this need. It’s one of the reasons we broadened our approach to anti-tracking.

One of the features we outlined in our approach to anti-tracking was Enhanced Tracking Protection, otherwise known as “removing cross-site tracking”. We initially announced in October that we would roll out Enhanced Tracking Protection off-by-default. This was just one of the many steps we took to help prepare users when we turn this on by default this year. We continue to experiment and share our journey to ensure we balance these new preferences with the experiences our users want and expect. Before we roll this feature out by default, we plan to run a few more experiments and users can expect to hear more from us about it.

As a result of some of our previous testing, we’re happy to announce a new set of redesigned controls for the Content Blocking section in today’s Firefox release where users can choose their desired level of privacy protection. Here’s a video that shows you how it works:

Firefox Enhanced Tracking Protection lets you see and control how websites track you on the web

Your Choice in How to Control your Privacy

When it comes to user privacy, choice and control are first and foremost. You can reach the newly redesigned Content Blocking section in two ways. The first is to click on the small “i” icon in the address bar and, under Content Blocking, click on the gear on the right side. The other is to go to your Preferences and click Privacy & Security on the left-hand side; you will see Content Blocking listed at the top, with three distinct choices:

  • Standard: For anyone who wants to “set it and forget it,” this is currently the default setting, where we block known trackers in Private Browsing Mode. In the future, this setting will also block third-party tracking cookies.

Block known trackers in Private Browsing Mode

  • Strict: For people who want a bit more protection and don’t mind if some sites break. This setting blocks known trackers by Firefox in all windows.

Block known trackers by Firefox in all windows

  • Custom: For those who want complete control to pick and choose what trackers and cookies they want to block. We talk more about tracking cookies here and about cross-site tracking on our Firefox Frontier blog post.
    • Trackers: You can choose to block in Private Windows or All Windows. You can also change your block list from two Disconnect lists: basic (recommended) or strict (blocks all known trackers).
    • Cookies: You have four blocking choices – third-party trackers; cookies from unvisited websites; all third-party cookies (may cause websites to break); and all cookies (will cause websites to break).

Pick and choose what trackers and cookies you want to block

Additional features in today’s Firefox release include:

  • AV1 Support – For Windows users, Firefox now supports AV1, the royalty-free video compression technology. Mozilla has contributed to this new open standard, which keeps high-quality video affordable for everyone. It can open up business opportunities, and remove barriers to entry for entrepreneurs, artists, and regular people.
  • Updated Performance Management – For anyone who likes to look under the hood and find out why a specific web page is taking too long to load, you can check our revamped Task Manager page by typing about:performance in the address bar. It reports memory usage for tabs and add-ons. From there you can see what (a tab, ads in tabs, an extension, etc.) could be the cause, and address it by refreshing or closing the tab, blocking the tab, or uninstalling the extension.

For the complete list of what’s new or what we’ve changed, you can check out today’s release notes.

Check out and download the latest version of Firefox Quantum, available here.

The post Today’s Firefox Gives Users More Control over their Privacy appeared first on The Mozilla Blog.

Web Application Security: Defining the tracking practices that will be blocked in Firefox

For years, web users have endured major privacy violations. Their browsing continues to be routinely and silently tracked across the web. Tracking techniques have advanced to the point where users cannot meaningfully control how their personal data is used.

At Mozilla, we believe that privacy is fundamental, and that pervasive online tracking is unacceptable. Simply put: users need more protection from tracking. In late 2018, Mozilla announced that we are changing our approach to anti-tracking, with a focus on providing tracking protection by default, for the benefit of everyone using Firefox.

In support of this effort, today we are releasing an anti-tracking policy that outlines the tracking practices that Firefox will block by default. At a high level, this new policy will curtail tracking techniques that are used to build profiles of users’ browsing activity. In the policy, we outline the types of tracking practices that users cannot meaningfully control. Firefox may apply technical restrictions to the parties found using each of these techniques.

With the release of our new policy, we’ve defined the set of tracking practices that we think users need to be protected against. As a first step in enforcing this policy, Firefox includes a feature that prevents domains classified as trackers from using cookies and other browser storage features (e.g., DOM storage) when loaded as third parties. While this feature is currently off by default, we are working towards turning it on for all of our users in a future release of Firefox.

Furthermore, the policy also covers query string tracking, browser fingerprinting, and supercookies. We intend to apply protections that block these tracking practices in Firefox in the future.

Parties not wishing to be blocked by this policy should stop tracking Firefox users across websites. To classify trackers, we rely on Disconnect’s Tracking Protection list, which is curated in alignment with this policy. If a party changes their tracking practices and updates their public documentation to reflect these changes, they should work with Disconnect to update the classification of their domains.

This initial release of the anti-tracking policy is not meant to be the final version. Instead, the policy is a living document that we will update in response to the discovery and use of new tracking techniques. We believe that all web browsers have a fundamental obligation to protect users from tracking and we hope the launch of our policy advances the conversation about what privacy protections should be the default for all web users.

Clarification (2019-01-28): Added a sentence to clarify the current status of the cookie blocking feature.

The post Defining the tracking practices that will be blocked in Firefox appeared first on Mozilla Security Blog.

The Mozilla Blog: Mozilla Fosters the Next Generation of Women in Emerging Technologies

At Mozilla, we want to empower people to create technology that reflects the diversity of the world we live in. Today we’re excited to announce the release of the Inclusive Development Space toolkit. This is a way for anyone around the world to set up their own pop-up studio to support diverse creators.

The XR Studio was a first-of-its-kind pop-up at Mozilla’s San Francisco office in the Summer of 2018. It provided a deeply needed space for women and gender non-binary people to collaborate, learn and create projects using virtual reality, augmented reality, and artificial intelligence.

The XR Studio program was founded to offer a jump-start for women creators, providing access to mentors, equipment, ideas, and a community with others like them. Including a wide range of ages, technical abilities, and backgrounds was essential to the program experience.

Inclusive spaces are needed in the tech industry. In technology maker-spaces, eighty percent of makers are men. As technologies like VR and AI become more widespread, it’s crucial that a variety of viewpoints are represented to eliminate biases from lack of diversity.

The XR Studio cohort had round-the-clock access to high quality VR, AR, and mixed reality hardware, as well as mentorship from experts in the field. The group came together weekly to share experiences and connect with leading industry experts like Unity’s Timoni West, Fast.ai’s Rachel Thomas, and VR pioneer Brenda Laurel.

We received more than 100 applications in a little over two weeks and accepted 32 participants. Many who applied cited a chance to experiment with futuristic tools as the most important reason for applying to the program, with career development a close second.

“I couldn’t imagine XR Studio being with any other organization. Don’t know if it would have had as much success if it wasn’t with Mozilla. That really accentuated the program.” – Tyler Musgrave, recently named Futurist in residence at ARVR Women.

Projects spanned from efforts to improve bias awareness in education, self defense training, criminal justice system education, identifying police surveillance and more. Participants felt the safe and supportive environment gave them a unique advantage in technology creation. “With Mozilla’s XR Studio, I am surrounded by women just as passionate and supportive about creating XR products as I am,” said Neilda Pacquing, Founder and CEO MindGlow, Inc., a company that focuses on safety training using immersive experiences. “There’s no other place like it and I feel I’ve gone further in creating my products than I would have without it.”

So what’s next?

The Mozilla XR Studio program offered an opportunity to learn and build confidence, overcome imposter syndrome, and make amazing projects. We learned lessons about architecting an inclusive space that we plan to use to create future Mozilla spaces that will support underrepresented groups in creating with emerging technologies.

Mozilla is also sponsoring the women in VR brunch at the Sundance Film Festival this Sunday. It will be a great opportunity to learn, collaborate, and fellowship with women from around the world. If you will be in the area, please reach out and say hello.

Want to create your own inclusive development space in your community, city or company? Check out our toolkit.

The post Mozilla Fosters the Next Generation of Women in Emerging Technologies appeared first on The Mozilla Blog.

SUMO Blog: [Important] Changes to the SUMO staff team

TL;DR

  • Social Community Manager changes: Konstantina and Kiki will be taking over Social Community Management. As of today, Rachel has left Mozilla as an employee.
  • L10n/KB Community Manager changes: Ruben will be taking over Community Management for KB translations. As of today, Michal has left Mozilla as an employee.
  • SUMO community call to introduce Konstantina, Kiki and Ruben on the 24th of January at 9 am PST.
  • If you have questions or concerns, please join the conversation on the SUMO forums or the SUMO Discourse.

Today we’d like to announce some changes to the SUMO staff team. Rachel McGuigan and Michał Dziewoński will be leaving Mozilla.

Rachel and Michal have been crucial to our efforts to create and run SUMO for many years. Rachel first showed great talent with her work on FxOS support, and her drive with our social support team has been crucial to the support of Firefox releases. Michal’s drive and passion for languages have ensured that the SUMO KB has fantastic language coverage and that support for the free, open browser that is Firefox is available to more people. We wish Rachel and Michal all the best on their next adventure and thank them for their contributions to Mozilla.

With these changes, we will be thinking about how best to organize the SUMO team. Rest assured, we will continue investing in community management and will be growing the overall size of the SUMO team throughout 2019.

In the meantime, Konstantina, Kiki and Ruben will be stepping in temporarily while we seek to backfill these roles, helping us ensure we keep full focus on our work and continue working on our projects with you all.

We are confident in the positive future of SUMO in Mozilla, and we remain excited about the many new products and platforms we will introduce support for.  We have an incredible opportunity in front of us to continue delivering huge impact for Mozilla in 2019 and are looking forward to making this real with all of you.

Keep rocking the helpful web!

Mozilla Gfx Team: WebRender newsletter #37

Hi! Last week I mentioned picture caching landing in nightly, and I am happy to report that it didn’t get backed out (never something to take for granted with a change of that importance) and it’s here to stay.
Another rather hot topic, though one that hasn’t appeared in the newsletter, was Jeff and Matt’s long investigation of content frame time telemetry numbers. It turned into a real saga, featuring performance improvements but also a lot of adjustments to the way we do the measurements to make sure that we get apples-to-apples comparisons of Firefox running with and without WebRender. The content frame time metric is important because it correlates with user perception of stuttering, and we now have solid measurements backing that WebRender improves this metric.

Notable WebRender and Gecko changes

  • Bobby did various code cleanups and improvements.
  • Chris wrote a prototype Windows app to test resizing a child HWND in a child process and figure out how to do that without glitches.
  • Matt fixed an SVG filter clipping issue.
  • Matt enabled SVG filters to be processed on the GPU in more cases.
  • Andrew fixed a pixel snapping issue with transforms.
  • Andrew fixed a blob image crash.
  • Emilio fixed a bug with perspective transforms.
  • Glenn included root content clip rect in picture caching world bounds.
  • Glenn added support for multiple dirty rects in picture caching.
  • Glenn fixed adding extremely large primitives to picture caching tile dependencies.
  • Glenn skipped some redundant work during picture caching updates.
  • Glenn removed unused clear color mode.
  • Glenn reduced invalidation caused by world clip rects.
  • Glenn fixed an invalidation issue with picture caching when encountering a blur filter.
  • Glenn avoided interning text run primitives due to scrolled offset field.
  • Sotaro improved the performance of large animated SVGs in some cases.

Ongoing work

The team keeps going through the remaining blockers (7 P2 bugs and 20 P3 bugs at the time of writing).

Enabling WebRender in Firefox Nightly

In about:config, set the pref “gfx.webrender.all” to true and restart the browser.

Reporting bugs

The best place to report bugs related to WebRender in Firefox is the Graphics :: WebRender component in bugzilla.
Note that it is possible to log in with a github account.

hacks.mozilla.org: Cameras, Sensors & What’s Next for Mozilla’s Things Gateway

Today the Mozilla IoT team is happy to announce the 0.7 release of the Things Gateway. This latest release brings experimental support for IP cameras, as well as support for a wider range of sensors. We’ve also got some exciting news on where the project is heading next.

Camera Support

With 0.7, you can now view video streams and get snapshots from IP cameras that follow the ONVIF standard, such as the Foscam R2.

To enable ONVIF support, install the ONVIF add-on via Settings > Add-ons in the gateway’s web interface.

Set up your camera as per the manufacturer’s instructions, including a username and password if it’s required. (Always remember to change from the default if there is one!) Then, you can click the “Configure” button on the ONVIF add-on (see above) to enter your login details in the form shown below:

Once the adapter is configured you should be able to add your device in the usual way, by clicking on the + button on the Things screen. When your camera appears you can give it a name before saving it:

When you click on the video camera you will see icons for an image snapshot and/or video stream:

Click on the icons and the image or video stream will pop up on the screen. When viewing an image property, you can click the reload button in the bottom left to reload the latest snapshot:

Video camera support is still experimental at this point as we look to optimise video performance, refine the UI and support a wider range of hardware. If running on a Raspberry Pi, you can expect to see a noticeable delay on video streams as the gateway transcodes video into a web-friendly format. We’d appreciate your help testing with different cameras and giving us feedback to help improve this feature.

Sensors

Things Gateway 0.7 also comes with support for a wider range of sensors.

We have added support for temperature sensors (e.g. Eve Degree, Eve Room and the SmartThings Multipurpose sensor).

And we have added support for leak sensors (e.g. the SmartThings Water Leak Sensor and the Fibaro Flood Sensor).

This means you can also now create new types of rules in the rules engine, for example to turn on a fan when temperature reaches a certain level, or be notified if a leak is detected.

Thing Description Changes

For developers, this release brings some changes to the Thing Description format used to advertise the properties, actions, and events web things support.

Rather than providing a single URL in an href member, each Property, Action and Event object can now provide an array of links with an href, rel and mediaType for each Link object. This is particularly useful for the new Camera and VideoCamera capabilities, which can provide links to an image resource or video stream. Below is an example of a Thing Description for a video camera that supports both new capabilities.

{
 "@context": "https://iot.mozilla.org/schemas/",
 "@type": ["Camera", "VideoCamera"],
 "name": "Web Camera",
 "description": "My web camera",
 "properties": {
   "video": {
     "@type": "VideoProperty",
     "title": "Stream",
     "links": [{
       "href": "rtsp://example.com/things/camera/properties/video.mp4",
       "mediaType": "video/mp4"
     }]
   },
   "image": {
     "@type": "ImageProperty",
     "title": "Snapshot",
     "links": [{
       "href": "http://example.com/things/camera/properties/image.jpg",
       "mediaType": "image/jpg"
     }]
   }
 }
}
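
Given a description like the one above, a client could pick the right link out of the array rather than assuming a single href. A rough sketch, assuming thingDescription holds the parsed JSON from above:

function findLink(property, mediaType) {
  // each property now exposes an array of Link objects
  return (property.links || []).find(link => link.mediaType === mediaType);
}

const stream = findLink(thingDescription.properties.video, 'video/mp4');
console.log(stream.href); // "rtsp://example.com/things/camera/properties/video.mp4"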

You may also notice that label has been renamed to title to be more in line with the latest W3C draft of the Thing Description specification.

We make an effort to retain backwards compatibility where possible, but please expect more changes like this as we rapidly evolve the Thing Description specification.

What’s Next

We’ve been delighted with the response we’ve seen to Project Things from hacker and maker communities in 2018. Thank you so much for all the contributions you’ve made in reporting bugs, implementing new features and building your own adapter add-ons and web things. Also thanks to you, a Project Things tutorial on Mozilla Hacks was our most read blog post of 2018!

Taking things (pun intended) to the next level in 2019, a big focus for our team will be to evolve the current Things Gateway application into a software distribution for wireless routers. By integrating all the smart home features we have built directly into your wireless router, we believe we can provide even more value in the areas of family internet safety and home network health.

In 2019, you can expect to see more effort go into the OpenWrt port of the Things Gateway to create our very own software distribution for “smart routers” which integrate smart home capabilities. We’ll start with new features for configuring your gateway as a wireless access point and all of the other features you’d expect from a wireless router. We anticipate many more new features to emerge as we develop this distribution, and explore all the value that a Mozilla trusted personal agent for your whole home network could provide.

We will keep generating Raspberry Pi builds of our ongoing quarterly releases for the foreseeable future, because that’s what most of our current users are using and that plucky little developer board is still close to our hearts. But look out for support for new hardware platforms coming soon.

For now, you can download the new 0.7 release from our website. If you have a Things Gateway already set up on a Raspberry Pi it should update itself automatically.

Happy hacking!

The post Cameras, Sensors & What’s Next for Mozilla’s Things Gateway appeared first on Mozilla Hacks - the Web developer blog.

about:community: Firefox 65 new contributors

With the release of Firefox 65, we are pleased to welcome the 32 developers who contributed their first code change to Firefox in this release, 27 of whom were brand new volunteers! Please join us in thanking each of these diligent and enthusiastic individuals, and take a look at their contributions.

Mozilla VR Blog: How I made Jingle Smash

How I made Jingle Smash

This is part 1 of my series on how I built Jingle Smash, a block smashing WebVR game

When advocating a new technology I always try to use it the way real-world developers will, and for WebVR (the VR-only precursor to WebXR), building a game is currently one of the best ways to do that. So for the winter holidays I built Jingle Smash, a classic block-tumbling game. If you haven't played it yet, put on your headset and give it a try. Now, here's an overview of how I built it.

ThreeJS

Jingle Smash is written in ThreeJS using WebVR and some common boilerplate that I use in all of my demos. I chose to use ThreeJS directly instead of A-Frame because I knew I would be adding custom textures, custom geometry, and a custom control scheme. While it is possible to do this with A-Frame, I'd be writing so much code at the ThreeJS level that it was easier to cut out the middle man.

Physics

Jingle Smash is an Angry Birds style game where you lob an object at blocks to knock them over and destroy targets. Once you have destroyed the required targets you get to the next level. Seems simple enough. And for a 2D side-view game like Angry Birds it is. I remember enough of my particle physics from school to write a simple 2D physics simulator, but 3D collisions are way beyond me. I needed a physics engine.

After evaluating the options I settled on Cannon.js because it's 100% JavaScript and has no dependencies on the UI. It simply calculates the positions of objects in space and puts your code in charge of stepping through time. This made it very easy to integrate with ThreeJS. It even has an example.
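
The integration boils down to stepping the physics world each frame and copying body transforms onto the ThreeJS meshes. A rough sketch of the idea (not the actual Jingle Smash code; mesh is assumed to be an existing ThreeJS mesh):

    const world = new CANNON.World()
    world.gravity.set(0, -9.8, 0)

    // one physics body per ThreeJS mesh
    const body = new CANNON.Body({ mass: 1, shape: new CANNON.Sphere(0.5) })
    world.addBody(body)

    const fixedTimeStep = 1 / 60
    function updatePhysics(deltaTime) {
        world.step(fixedTimeStep, deltaTime, 3) // advance the simulation
        mesh.position.copy(body.position)       // copy physics state onto the mesh
        mesh.quaternion.copy(body.quaternion)
    }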

Graphics

In previous games I have used 3D models created by an artist. For Jingle Smash I created everything in code. The background, blocks, and ornaments all use either standard or generated geometry. All of the textures except for the sky background are also generated on the fly using a 2D HTML canvas, then converted into textures.

I went with a purely generated approach because it let me easily mess with UV values to create different effects and use exactly the colors I wanted. In a future blog I'll dive deep into how they work. Here is a quick example of generating an ornament texture:

    const canvas = document.createElement('canvas')
    canvas.width = 64
    canvas.height = 16
    const c = canvas.getContext('2d')

    c.fillStyle = 'black'
    c.fillRect(0, 0, canvas.width, canvas.height)
    c.fillStyle = 'red'
    c.fillRect(0, 0, 30, canvas.height)
    c.fillStyle = 'white'
    c.fillRect(30, 0, 4, canvas.height)
    c.fillStyle = 'green'
    c.fillRect(34, 0, 30, canvas.height)

    this.textures.ornament1 = new THREE.CanvasTexture(canvas)
    this.textures.ornament1.wrapS = THREE.RepeatWrapping
    this.textures.ornament1.repeat.set(8, 1)

Level Editor

Most block games are 2D. The player has a view of the entire game board. Once you enter 3D, however, the blocks obscure the ones behind them. This means level design is completely different. The only way to see what a level looks like is to actually jump into VR and see it. That meant I really needed a way to edit the level from within VR, just as the player would see it.

To make this work I built a simple (and ugly) level editor inside of VR. This required building a small 2D UI toolkit for the editor controls. Thanks to using HTML canvas this turned out not to be too difficult.

Next Steps

I'm pretty happy with how Jingle Smash turned out. Lots of people played it at the Mozilla All-hands and said they had fun. I did some performance optimization and was able to get the game up to about 50fps, but there is still more work to do (which I'll cover soon in another post).

Jingle Smash proved that we can make fun games that run in WebVR, and that load very quickly (on a good connection the entire game should load in less than 2 seconds). You can see the full (but messy) code of Jingle Smash in my WebXR Experiments repo.

While you wait for future updates on Jingle Smash, you might want to watch my new YouTube series on How to make VR with the Web.

hacks.mozilla.org: Fearless Security: Memory Safety

Fearless Security

Last year, Mozilla shipped Quantum CSS in Firefox, which was the culmination of 8 years of investment in Rust, a memory-safe systems programming language, and over a year of rewriting a major browser component in Rust. Until now, all major browser engines have been written in C++, mostly for performance reasons. However, with great performance comes great (memory) responsibility: C++ programmers have to manually manage memory, which opens a Pandora’s box of vulnerabilities. Rust not only prevents these kinds of errors, but the techniques it uses to do so also prevent data races, allowing programmers to reason more effectively about parallel code.

With great performance comes great memory responsibility

In the coming weeks, this three-part series will examine memory safety and thread safety, and close with a case study of the potential security benefits gained from rewriting Firefox’s CSS engine in Rust.

What Is Memory Safety

When we talk about building secure applications, we often focus on memory safety. Informally, this means that in all possible executions of a program, there is no access to invalid memory. Violations include:

  • use after free
  • null pointer dereference
  • using uninitialized memory
  • double free
  • buffer overflow

For a more formal definition, see Michael Hicks’ What is memory safety post and The Meaning of Memory Safety, a paper that formalizes memory safety.

Memory violations like these can cause programs to crash unexpectedly and can be exploited to alter intended behavior. Potential consequences of a memory-related bug include information leakage, arbitrary code execution, and remote code execution.

Managing Memory

Memory management is crucial to both the performance and the security of applications. This section will discuss the basic memory model. One key concept is pointers. A pointer is a variable that stores a memory address. If we visit that memory address, there will be some data there, so we say that the pointer is a reference to (or points to) that data. Just like a home address shows people where to find you, a memory address shows a program where to find data.

Everything in a program is located at a particular memory address, including code instructions. Pointer misuse can cause serious security vulnerabilities, including information leakage and arbitrary code execution.

Allocation/free

When we create a variable, the program needs to allocate enough space in memory to store the data for that variable. Since the memory owned by each process is finite, we also need some way of reclaiming resources (or freeing them). When memory is freed, it becomes available to store new data, but the old data can still exist until it is overwritten.

Buffers

A buffer is a contiguous area of memory that stores multiple instances of the same data type. For example, the phrase “My cat is Batman” would be stored in a 16-byte buffer. Buffers are defined by a starting memory address and a length; because the data stored in memory next to a buffer could be unrelated, it’s important to ensure we don’t read or write past the buffer boundaries.

Control Flow

Programs are composed of subroutines, which are executed in a particular order. At the end of a subroutine, the computer jumps to a stored pointer (called the return address) that points to the next part of the code to be executed. When we jump to the return address, one of three things happens:

  1. The process continues as expected (the return address was not corrupted).
  2. The process crashes (the return address was altered to point at non-executable memory).
  3. The process continues, but not as expected (the return address was altered and control flow changed).

How languages achieve memory safety

We often think of programming languages on a spectrum. On one end, languages like C/C++ are efficient, but require manual memory management; on the other, interpreted languages use automatic memory management (like reference counting or garbage collection [GC]), but pay the price in performance. Even languages with highly optimized garbage collectors can’t match the performance of non-GC’d languages.

Manually

Some languages (like C) require programmers to manually manage memory by specifying when to allocate resources, how much to allocate, and when to free the resources. This gives the programmer very fine-grained control over how their implementation uses resources, enabling fast and efficient code. However, this approach is prone to mistakes, particularly in complex codebases.

Mistakes that are easy to make include:

  • forgetting that resources have been freed and trying to use them
  • not allocating enough space to store data
  • reading past the boundary of a buffer

Shake hands with danger!
A safety video candidate for manual memory management

Smart pointers

A smart pointer is a pointer with additional information to help prevent memory mismanagement. These can be used for automated memory management and bounds checking. Unlike raw pointers, a smart pointer is able to self-destruct, instead of waiting for the programmer to manually destroy it.

There’s no single smart pointer type—a smart pointer is any type that wraps a raw pointer in some practical abstraction. Some smart pointers use reference counting to count how many variables are using the data owned by a variable, while others implement a scoping policy to constrain a pointer lifetime to a particular scope.

In reference counting, the object’s resources are reclaimed when the last reference to the object is destroyed. Basic reference counting implementations can suffer from performance and space overhead, and can be difficult to use in multi-threaded environments. Situations where objects refer to each other (cyclical references) can prohibit either object’s reference count from ever reaching zero, which requires more sophisticated methods.

Garbage Collection

Some languages (like Java, Go, Python) are garbage collected. A part of the runtime environment, named the garbage collector (GC), traces variables to determine what resources are reachable in a graph that represents references between objects. Once an object is no longer reachable, its resources are not needed and the GC reclaims the underlying memory to reuse in the future. All allocations and deallocations occur without explicit programmer instruction.

While a GC ensures that memory is always used validly, it doesn’t reclaim memory in the most efficient way. The last time an object is used could occur much earlier than when it is freed by the GC. Garbage collection has a performance overhead that can be prohibitive for performance critical applications; it requires up to 5x as much memory to avoid a runtime performance penalty.

Ownership

To achieve both performance and memory safety, Rust uses a concept called ownership. More formally, the ownership model is an example of an affine type system. All Rust code follows certain ownership rules that allow the compiler to manage memory without incurring runtime costs:

  1. Each value has a variable, called the owner.
  2. There can only be one owner at a time.
  3. When the owner goes out of scope, the value will be dropped.

Values can be moved or borrowed between variables. These rules are enforced by a part of the compiler called the borrow checker.

When a variable goes out of scope, Rust frees that memory. In the following example, when s1 and s2 go out of scope, they would both try to free the same memory, resulting in a double free error. To prevent this, when a value is moved out of a variable, the previous owner becomes invalid. If the programmer then attempts to use the invalid variable, the compiler will reject the code. This can be avoided by creating a deep copy of the data or by using references.

Example 1: Moving ownership

let s1 = String::from("hello");
let s2 = s1;

//won't compile because s1 is now invalid
println!("{}, world!", s1);

Another set of rules verified by the borrow checker pertains to variable lifetimes. Rust prohibits the use of uninitialized variables and dangling pointers, which can cause a program to reference unintended data. If the code in the example below compiled, r would reference memory that is deallocated when x goes out of scope—a dangling pointer. The compiler tracks scopes to ensure that all borrows are valid, occasionally requiring the programmer to explicitly annotate variable lifetimes.

Example 2: A dangling pointer

let r;
{
  let x = 5;
  r = &x;
}
println!("r: {}", r);

The ownership model provides a strong foundation for ensuring that memory is accessed appropriately, preventing undefined behavior.

Memory Vulnerabilities

The main consequences of memory vulnerabilities include:

  1. Crash: accessing invalid memory can make applications terminate unexpectedly
  2. Information leakage: inadvertently exposing non-public data, including sensitive information like passwords
  3. Arbitrary code execution (ACE): allows an attacker to execute arbitrary commands on a target machine; when this is possible over a network, we call it a remote code execution (RCE)

Another type of problem that can appear is memory leakage, which occurs when memory is allocated, but not released after the program is finished using it. It’s possible to use up all available memory this way. Without any remaining memory, legitimate resource requests will be blocked, causing a denial of service. This is a memory-related problem, but one that can’t be addressed by programming languages.

The best case scenario with most memory errors is that an application will crash harmlessly—this isn’t a good best case. However, the worst case scenario is that an attacker can gain control of the program through the vulnerability (which could lead to further attacks).

Misusing Free (use-after-free, double free)

This subclass of vulnerabilities occurs when some resource has been freed, but its memory position is still referenced. It’s a powerful exploitation method that can lead to out of bounds access, information leakage, code execution and more.

Garbage-collected and reference-counted languages prevent the use of invalid pointers by only destroying unreachable objects (which can have a performance penalty), while manually managed languages are particularly susceptible to invalid pointer use (particularly in complex codebases). Rust’s borrow checker doesn’t allow object destruction as long as references to the object exist, which means bugs like these are prevented at compile time.

Uninitialized variables

If a variable is used prior to initialization, the data it contains could be anything—including random garbage or previously discarded data, resulting in information leakage (these are sometimes called wild pointers). Often, memory managed languages use a default initialization routine that is run after allocation to prevent these problems.

Like C, most variables in Rust are uninitialized until assignment—unlike C, you can’t read them prior to initialization. The following code will fail to compile:

Example 3: Using an uninitialized variable

fn main() {
    let x: i32;
    println!("{}", x);
}

Null pointers

When an application dereferences a pointer that turns out to be null, usually this means that it simply accesses garbage that will cause a crash. In some cases, these vulnerabilities can lead to arbitrary code execution 1 2 3. Rust has two types of pointers, references and raw pointers. References are safe to access, while raw pointers could be problematic.

Rust prevents null pointer dereferencing two ways:

  1. Avoiding nullable pointers
  2. Avoiding raw pointer dereferencing

Rust avoids nullable pointers by replacing them with a special Option type. In order to manipulate the possibly-null value inside of an Option, the language requires the programmer to explicitly handle the null case or the program will not compile.

When we can’t avoid nullable pointers (for example, when interacting with non-Rust code), what can we do? Try to isolate the damage. Any dereferencing of raw pointers must occur inside an unsafe block. This keyword relaxes Rust’s guarantees to allow some operations that could cause undefined behavior (like dereferencing a raw pointer).

Everything the borrow checker touches...what about that shadowy place? That's an unsafe block. You must never go there Simba.

Buffer overflow

While the other vulnerabilities discussed here are prevented by methods that restrict access to undefined memory, a buffer overflow accesses legally allocated memory; the problem is that it does so inappropriately, outside the bounds intended for the buffer. Like a use-after-free bug, an out-of-bounds access can also be problematic because it may read freed memory that hasn’t been reallocated yet, and hence still contains sensitive information that isn’t supposed to exist anymore.

A buffer overflow simply means an out-of-bounds access. Due to how buffers are stored in memory, they often lead to information leakage, which could include sensitive data such as passwords. More severe instances can allow ACE/RCE vulnerabilities by overwriting the instruction pointer.

Example 4: Buffer overflow (C code)

#include <stdio.h>

int main() {
  int buf[] = {0, 1, 2, 3, 4};
  
  // print out of bounds
  printf("Out of bounds: %d\n", buf[10]);
  
  // write out of bounds
  buf[10] = 10;
  printf("Out of bounds: %d\n", buf[10]);
  
  return 0;
}

The simplest defense against a buffer overflow is to always require a bounds check when accessing elements, but this adds a runtime performance penalty.

How does Rust handle this? The built-in buffer types in Rust’s standard library require a bounds check for any random access, but also provide iterator APIs that can reduce the impact of these bounds checks over multiple sequential accesses. These choices ensure that out-of-bounds reads and writes are impossible for these types. Rust promotes patterns that lead to bounds checks only occurring in those places where a programmer would almost certainly have to manually place them in C/C++.

Memory safety is only half the battle

Memory safety violations open programs to security vulnerabilities like unintentional data leakage and remote code execution. There are various ways to ensure memory safety, including smart pointers and garbage collection. You can even formally prove memory safety. While some languages have accepted slower performance as a tradeoff for memory safety, Rust’s ownership system achieves memory safety while minimizing the performance costs.

Unfortunately, memory errors are only part of the story when we talk about writing secure code. The next post in this series will discuss concurrency attacks and thread safety.

Exploiting Memory: In-depth resources

  • Heap memory and exploitation
  • Smashing the stack for fun and profit
  • Analogies of Information Security
  • Intro to use after free vulnerabilities

The post Fearless Security: Memory Safety appeared first on Mozilla Hacks - the Web developer blog.

hacks.mozilla.org: How to make VR with the web, a new video series

Virtual reality (VR) seems complicated, but with a few JavaScript libraries and tools, and the power of WebGL, you can make very nice VR scenes that can be viewed and shared in a headset like an Oculus Go or HTC Vive, in a desktop web browser, or on your smartphone. Let me show you how:

In this new YouTube series, How to make a virtual reality project in your browser with three.js and WebVR, I’ll take you through building an interactive birthday card in seven short tutorials, complete with code and examples to get you started. The whole series clocks in at under 60 minutes. We begin by getting a basic cube on the screen, add some nice 3D models, set up lights and navigation, then finally add music.

All you need are basic JavaScript skills and an internet connection.

Here’s the whole series. Come join me:

1: Learn how to build virtual reality scenes on the web with WebVR and JavaScript

2: Set up your WebVR workflow and code to build a virtual reality birthday card

3: Using a WebVR editor (Spoke) to create a fun 3D birthday card

4: How to create realistic lighting in a virtual reality scene

5: How to move around in virtual reality using teleportation to navigate your scene

6: Adding text and text effects to your WebVR scene with a few lines of code

7: How to add finishing touches like sound and sky to your WebVR scene

  

To learn how to make more cool stuff with web technologies, subscribe to Mozilla Hacks on YouTube. And if you want to get more involved in learning to create mixed reality experiences for the web, you can follow @MozillaReality on Twitter for news, articles, and updates.

The post How to make VR with the web, a new video series appeared first on Mozilla Hacks - the Web developer blog.

The Mozilla Blog: The Coral Project is Moving to Vox Media

Since 2015, the Mozilla Foundation has incubated The Coral Project to support journalism and improve online dialog around the world through privacy-centered, open source software. Originally founded as a two-year collaboration between Mozilla, The New York Times and the Washington Post, it became entirely a Mozilla project in 2017.

Over the past 3.5 years, The Coral Project has developed two software tools, a series of guides and best practices, and grown a community of journalism technologists around the world advancing privacy and better online conversation.

Coral’s first tool, Ask, has been used by journalists in several countries, including the Spotlight team at the Boston Globe, whose series on racism used Ask on seven different occasions, and was a finalist for the Pulitzer Prize in Local Reporting.

The Coral Project’s main tool, the Talk platform, now powers the comments for nearly 50 newsrooms in 11 countries, including The Wall Street Journal, the Washington Post, The Intercept, and the Globe and Mail. The Coral Project has also collaborated with academics and technologists, running events and working with researchers to reduce online harassment and raise the quality of conversation on the decentralized web.

After 3.5 years at Mozilla, the time is right for Coral software to move further into the journalism space, and grow with the support of an organization grounded in that industry. And so, in January, the entire Coral Project team will join Vox Media, a leading media company with deep ties in online community engagement.

Under Vox Media’s stewardship, The Coral Project will receive the backing of a large company with an unrivaled collection of journalists as well as experience in the area of Software as a Service. This combination will help specifically to grow the adoption of Coral’s commenting platform Talk, while continuing as an open source project that respects user privacy.

The Coral Project has built a community of journalists and technologists who care deeply about improving the quality of online conversation. Mozilla will continue to support and highlight the work of this community as champions of a healthy, humane internet that is accessible to all.

We are excited for the new phase of The Coral Project at Vox Media, and hope you will join us in celebrating its success so far, and in supporting our shared vision for a better internet.

The post The Coral Project is Moving to Vox Media appeared first on The Mozilla Blog.

Open Policy & Advocacy: Brussels Mozilla Mornings – Disinformation and online advertising: an unhealthy relationship?

On the morning of 19 February, Mozilla will host the second of our Mozilla Mornings series – regular breakfast meetings where we bring together policy experts, policymakers and practitioners for insight and discussion on the latest EU digital policy developments. This session will be devoted to disinformation and online advertising.

Our expert panel will seek to unpack the relation between the two and explore policy solutions to ensure a healthy online advertising ecosystem.

Speakers

Marietje Schaake, ALDE MEP
Clara Hanot, EU Disinfo Lab
Raegan MacDonald, Mozilla

Moderated by Brian Maguire, EURACTIV

Logistical information

19 February 2019
08:30-10:00
Silversquare Europe, Square de Meeûs 35

Register your attendance here

The post Brussels Mozilla Mornings – Disinformation and online advertising: an unhealthy relationship? appeared first on Open Policy & Advocacy.

The Mozilla Blog: Welcome Roxi Wen, our incoming Chief Financial Officer

I am excited to announce that Roxi Wen is joining Mozilla Corporation as our Chief Financial Officer (CFO) next month.

As a wholly-owned subsidiary of the non-profit Mozilla Foundation, the Mozilla Corporation, with over 1,000 full-time employees worldwide, creates products, advances public policy and explores new technology that give people more control over their lives online, and shapes the future of the global internet platform for the public good.

As our CFO, Roxi will become a key member of our senior executive team, with responsibility for leading financial operations and strategy as we scale our mission impact with new and existing products, technology and business models to better serve our users and advance our agenda for a healthier internet.

“I’m thrilled to join Mozilla at such a pivotal moment for the technology sector,” said Roxi Wen. “With consumers demanding more and better from the companies that supply the technology they rely upon, Mozilla is well-positioned to become their go-to choice and I am eager to lend my financial know-how to this effort.”

Roxi comes to Mozilla from Elo Touch Solutions, where she was CFO of the $400 million global manufacturer of touch screen computing systems, backed by private equity firm The Gores Group. She brings to Mozilla experience across varying sectors, from technology to healthcare to banking, having held senior-level positions at GE Energy, Medtronic and Royal Bank of Canada.

Roxi is a CFA charterholder, and earned a Bachelor of Economics from Xiamen University, China, and an MBA in Finance and Strategy from the Carlson School of Management at the University of Minnesota. When she assumes her role in mid-February, Roxi will be based in our Mountain View, California headquarters.

Please join me in welcoming Roxi to Mozilla.

The post Welcome Roxi Wen, our incoming Chief Financial Officer appeared first on The Mozilla Blog.

Mozilla Add-ons Blog: Friend of Add-ons: Shivam Singhal

Please meet our newest Friend of Add-ons, Shivam Singhal! Shivam became involved with the add-ons community in April 2017. Currently, he is an extension developer, Mozilla Rep, and code contributor to addons.mozilla.org (AMO). He also helps mentor good-first-bugs on AMO.

“My skill set grew while contributing to Mozilla,” Shivam says of his experiences over the last two years. “Being the part of a big community, I have learned how to work remotely with a cross-cultural team and how to mentor newbies. I have met some super awesome people like [AMO engineers] William Durand and Rebecca Mullin. The AMO team is super helpful to newcomers and works actively to help them.”

This year, he’s looking forward to submitting patches to the WebExtensions API and Add-ons Manager in Firefox, and mentoring more new code contributors. Shivam has advice for anyone who is interested in contributing to Mozilla’s add-ons projects. “If you are shy or not feeling comfortable commenting on an issue, you can fill out the add-ons contributor survey and someone will help you get started. That’s what I did. You can also check https://whatcanidoformozilla.org for other ways to get involved.”

In his free time, Shivam enjoys watching stand-up comedy and sci-fi web series, exploring food at cafes, and going through pull requests on the AMO frontend repository.

Thanks for all of your contributions, Shivam! Your enthusiasm for the add-ons ecosystem is contagious, and it’s been a pleasure watching you grow.

To learn more about how to get involved with the add-ons community, check out our Contribute wiki.

The post Friend of Add-ons: Shivam Singhal appeared first on Mozilla Add-ons Blog.

hacks.mozilla.org: MDN Changelog – Looking back at 2018

December is when Mozilla meets as a company for our biannual All-Hands, and we reflect on the past year and plan for the future. Here are some of the highlights of 2018.

The browser-compat-data (BCD) project required a sustained effort to convert MDN’s documentation to structured data. The conversion was 39% complete at the start of 2018, and ended the year at 98% complete. Florian Scholz coordinated a large community of staff and volunteers, breaking up the work into human-sized chunks that could be done in parallel. The community converted, verified, and refreshed the data, and converted thousands of MDN pages to use the new data sources. Volunteers also built tools and integrations on top of the data.

The interactive-examples project had a great year as well. Will Bamberg coordinated the work, including some all-staff efforts to write new examples. Schalk Neethling improved the platform as it grew to handle CSS, JavaScript, and HTML examples.

In 2018, MDN developers moved from MozMEAO to Developer Outreach, joining the content staff in Emerging Technologies. The organizational change in March was followed by a nine-month effort to move the servers to the new ET account. Ryan Johnson, Ed Lim, and Dave Parfitt completed the smoothest server transition in MDN’s history.

The strength of MDN is our documentation of fundamental web technologies. Under the leadership of Chris Mills, this content was maintained, improved, and expanded in 2018. It’s a lot of work to keep an institution running and growing, and there are few opportunities to properly celebrate that work. Thanks to Daniel Beck, Eric Shepherd, Estelle Weyl, Irene Smith, Janet Swisher, Rachel Andrew, and our community of partners and volunteers for keeping MDN awesome in 2018.

Kadir Topal led the rapid development of the payments project. We’re grateful to all the MDN readers who are supporting the maintenance and growth of MDN.

There’s a lot more that happened in 2018:

  • January – Added a language preference dialog, and added rate limiting.
  • February – Prepared to move developers to Emerging Technologies.
  • March – Ran a Hack on MDN event for BCD, and tried Brotli.
  • April – Moved MDN to a CDN, and started switching to SVG.
  • May – Moved to ZenHub.
  • June – Shipped Django 1.11.
  • July – Decommissioned zones, and tried new CDN experiments.
  • August – Started performance improvements, added section links, removed memcache from Kuma, and upgraded to ElasticSearch 5.
  • September – Ran a Hack on MDN event for accessibility, and deleted 15% of macros.
  • October – Completed the server migration, and shipped some performance improvements.
  • November – Completed the migration to SVG, and updated the compatibility table header rows.

Shipped tweaks and fixes

There were 124 PRs merged in December, including 27 pull requests from 26 new, first-time contributors. These include some important changes and fixes.

Planned for January

David Flanagan took a look at KumaScript, MDN’s macro rendering engine, and is proposing several changes to modernize it, including using await and Jest. These changes are performing well in the development environment, and we plan to get the new code in production in January.

The post MDN Changelog – Looking back at 2018 appeared first on Mozilla Hacks - the Web developer blog.

Mozilla UXReflections on a co-design workshop

Co-design workshops help designers learn first-hand the language of the people who use their products, in addition to their pain points, workflows, and motivations. With co-design methods [1] participants are no longer passive recipients of products. Rather, they are involved in the envisioning and re-imagination of them. Participants show us what they need and want through sketching and design exercises. The purpose of a co-design workshop is not to have a pixel-perfect design to implement, rather it’s to learn more about the people who use or will use the product, and to involve them in generating ideas about what to design.

We ran a co-design workshop at Mozilla to inform our product design, and we’d like to share our experience with you.

Image shows hands, paper, pencils, cups of coffee and tea.

Sketching exercises during the co-design workshop were fueled by coffee and tea.

Before the workshop

Our UX team was tasked with improving the Firefox browser extension experience. When people create browser extensions, they use a form to submit their creations. They submit their code and all the metadata about the extension (name, description, icon, etc.). The metadata provided in the submission form is used to populate the extension’s product page on addons.mozilla.org.

Screenshot of the Add-on Developer Hub submission form for a new add-on. It includes fields for name and summary, URL, description, and checkboxes for attributes that describe the add-on.

A cropped screenshot of the third step of the submission form, which asks for metadata like name and description of the extension.

 

Screenshot of the Facebook Container product page on addons.mozilla.org. Source of image: https://addons.mozilla.org/en-US/firefox/addon/facebook-container/?src=search.

Screenshot of an extension product page on addons.mozilla.org.

 

The Mozilla Add-ons team (i.e., Mozilla staff who work on improving the extensions and themes experience) wanted to make sure that the process to submit an extension is clear and useful, yielding a quality product page that people can easily find and understand. Improving the submission flow for developers would lead to higher quality extensions for people to use.

We identified some problems by using test extensions to “eat our own dog food” (i.e. walk through the current process). Our content strategist audited the submission flow experience to understand product page guidelines in the submission flow. Then some team members conducted a cognitive walkthrough [2] to gain knowledge of the process and identify potential issues.

After identifying some problems, we sought to improve our submission flow for browser extensions. We decided to run a co-design workshop that would identify more problem areas and generate new ideas. The workshop took place in London on October 26, one day before MozFest, an annual week-long “celebration for, by, and about people who love the internet.” Extension and theme creators were selected from our global add-ons community to participate in the workshop. Mozilla staff members were involved, too: program managers, a community manager, an Engineering manager, and UX team members (designers, a content strategist, and a user researcher).

Image: “Submission flow workshop in here!!” posted on a sticky note on a wooden door.

A helpful and enthusiastic sticky note on the door of our workshop room.

 

Steps we took to create and organize the co-design workshop

After the audit and cognitive walkthrough, we thought a co-design workshop might help us get to a better future. So we did the following:

  1. Pitch the idea to management and get buy-in
  2. Secure budget
  3. Invite participants
  4. Interview participants (remotely)
  5. Analyze interviews
  6. Create an agenda for the workshop. Our agenda included: ice breaker, ground rules, discussion of interview results, sketching (using this method [3]) & critique sessions, creating a video pitch for each group’s final design concept.
  7. Create workshop materials
  8. Run the workshop!
  9. Send out a feedback survey
  10. Debrief with Mozilla staff
  11. Analyze results (over three days) with Add-ons UX team
  12. Share results (and ask for feedback) of analysis with Mozilla staff and participants

Lessons learned: What went well

Interview participants beforehand

We interviewed each participant before the workshop. The participants relayed their experience about submitting extensions and their motivations for creating extensions. They told us their stories, their challenges, and their successes.

Conducting these interviews beforehand helped our team in a few ways:

  • The interviews introduced the team and facilitators, helping to build rapport before the workshop.
  • The interviews gave the facilitators context into each participant’s experience. We learned about their motivations for creating extensions and themes as well as their thoughts about the submission process. This foundation of knowledge helped to shape the co-design workshop (including where to focus for pain points), and enabled us to prepare an introductory data summary for sharing at the workshop.
  • We asked for participants’ feedback about the draft content guidelines that our content strategist created to provide developers with support, examples, and writing exercises to optimize their product page content. Those guidelines were to be incorporated into the new submission flow, so it was very helpful to get early user feedback. It also gave the participants some familiarity with this deliverable so they could help incorporate it into the submission flow during the workshop.
Photo of Jennifer gesturing with her hands, in front of a large presentation TV screen that has research results on it.

A photo of Jennifer, user researcher, presenting interview results back to the participants, near the beginning of the workshop.

 

Thoughtfully select diverse participants

The Add-ons team has an excellent community manager, Caitlin Neiman, who interfaces with the greater Add-ons community. Working with Mozilla staff, she selected a diverse group of community participants for the workshop. The participants hailed from several different countries, some were paid to create extensions and some were not, and some had attended Mozilla events before and some had not. This careful selection of participants resulted in diverse perspectives, workflows, and motivations that positively impacted the workshop.

Create Ground Rules

Design sessions can benefit from a short introductory activity of establishing ground rules to get everyone on the same page and set the tone for the day. This activity is especially helpful when participants don’t know one another.

Using a flip chart and markers, we asked the room of participants to volunteer ground rules. We captured and reviewed those as a group.

A photo of Emanuela, UX Designer and facilitator, scribing ground rules on a flip chart.

 

Why are ground rules important?

Designing the rules together, with facilitators and participants, serves as a way to align the group around a set of shared values, head off potentially harmful group behaviors, and encourage productive, healthy interactions. Ground rules help make everyone’s experience a richer and more satisfying one.

Assign roles and create diverse working groups during the workshop

The Mozilla UX team in Taipei recently conducted a participatory workshop with older adults. In their blog post, they also highlight the importance of creating diverse working groups for the workshops [4].

In our workshop, each group was composed of:

  • multiple participants (i.e. extension and theme creators)
  • a Mozilla staff program manager, engineering manager, community manager, and/or engineer.
  • a facilitator who was either a Mozilla staff designer or program manager. As a facilitator, the designer was a neutral party in the group and could internalize participants’ mental models, workflows, and vocabulary through the experience.

We also assigned roles during group critique sessions. Each group member chose to be a dreamer (responds to ideas with a “Why not?” attitude), a realist (responds to ideas with “How?”), or a spoiler (responds to ideas by pointing out their flaws). This format is called the Walt Disney approach [5].

A photo of a person holding 3 sticky notes in front of their body. The sticky notes read "realist" "spoiler" and "dreamer" from left to right.

Sticky notes for each critique role: Realist, Spoiler, Dreamer

 

Why are critique roles important?

Everyone tends to fit into one of the Walt Disney roles naturally. Being asked to adopt a role that doesn’t come naturally nudges participants gently out of their comfort zone. The roles help participants empathize with other perspectives.

We had other roles throughout the workshop as well: a “floater” who kept everyone on track and the workshop running, a timekeeper, and a photographer.

Ask for feedback about the workshop results

The “co” part of “co-design” doesn’t have to end when the workshop concludes. Using what we learned during the workshop, the Add-ons UX team created personas and potential new submission flow blueprints. We sent those deliverables to the workshop participants and asked for their feedback. As UX professionals, we found it useful to close the feedback loop and make sure the deliverables accurately reflected the people and workflows being represented.

Lessons Learned: What could be improved

The workshop was too long

We flew from around the world to London to do this workshop. A lot of us were experiencing jet lag. We had breaks, coffee, biscuits, and lunch. Even so, going from 9 to 4, sketching for hours and iterating multiple times was just too much for one day.

Image: “Jorge is done” text written above a skull and crossbones sketch.

Jorge, a product manager, provided feedback about the workshop’s duration.

 

We have a few ideas about how to fix this. One approach is to introduce a greater variety of tasks (in this workshop we mostly did sketching, over and over again). Another is to extend the workshop across two days, doing a few hours each day. A third is to shorten the workshop and do fewer iterations.

There were not enough Mozilla staff engineers present

The workshop was developed by a user researcher, designers, and a content strategist. We included a community manager and program managers, but we did not include engineers in the planning process (other than providing updates). One of the engineering managers said that it would have been great to have engineers present to help with ideation and hear from creators first-hand. If we were to do a design workshop again, we would be sure to have a genuinely interdisciplinary set of participants, including more Mozilla staff engineers.

And with that…

We hope that this blog post helps you create a co-design workshop that is interdisciplinary, diverse, attentive to participants’ perspectives, and just the right length.

Authors

Jennifer Davidson, Meridel Walkington, Emanuela Damiani, Philip Walmsley

Acknowledgements

Much gratitude to our colleagues who created the workshop with us and helped us edit this blog post! Thanks to Amy Tsay, Caitlin Neiman, Jorge Villalobos, Kev Needham, Stuart Colville, Mike Conca, and Gemma Petrie.

References

[1] Sanders, Elizabeth B-N., and Pieter Jan Stappers. “Co-creation and the new landscapes of design.” Co-design 4.1 (2008): 5–18.

[2] “How to Conduct a Cognitive Walkthrough.” The Interaction Design Foundation, 2018, www.interaction-design.org/literature/article/how-to-conduct-a-cognitive-walkthrough.

[3] Gray, Dave. “6-8-5.” Gamestorming, 2 June 2015, gamestorming.com/6-8-5s/.

[4] Hsieh, Tina. “8 Tips for Hosting Your First Participatory Workshop.” Medium.com, Firefox User Experience, 20 Sept. 2018, medium.com/firefox-ux/8-tips-for-hosting-your-first-participatory-workshop-f63856d286a0.

[5] “Disney Brainstorming Method: Dreamer, Realist, and Spoiler.” Idea Sandbox, idea-sandbox.com/blog/disney-brainstorming-method-dreamer-realist-and-spoiler/.

Originally published on medium.com.

Mozilla Gfx TeamWebRender newsletter #36

Hi everyone! This week’s highlight is Glenn’s picture caching work which almost landed about a week ago and landed again a few hours ago. Fingers crossed! If you don’t know what picture caching means and are interested, you can read about it in the introduction of this newsletter’s season 01 episode 28.
On a more general note, the team continues focusing on the remaining list of blocker bugs which grows and shrinks depending on when you look, but the overall trend is looking good.

Without further ado:

Notable WebRender and Gecko changes

  • Bobby fixed unbounded interner growth.
  • Bobby overhauled the memory reporter.
  • Bobby added a primitive highlighting debug tool.
  • Bobby reduced code duplication around interners.
  • Matt and Jeff continued investigating telemetry data.
  • Jeff removed the minimum blob image size, yielding nice improvements on some talos benchmarks (18% raptor-motionmark-animometer-firefox linux64-qr opt and 7% raptor-motionmark-animometer-firefox windows10-64-qr opt).
  • kvark fixed a crash.
  • kvark reduced the number of vector allocations.
  • kvark improved the chasing debugging tool.
  • kvark fixed two issues with reference frame and scrolling.
  • Andrew fixed an issue with SVGs that embed raster images not rendering correctly.
  • Andrew fixed a mismatch between the size used during decoding images and the one we pass to WebRender.
  • Andrew fixed a crash caused by an interaction between blob images and shared surfaces.
  • Andrew avoided scene building caused by partially decoded images when possible.
  • Emilio made the build system take care of generating the ffi bindings automatically.
  • Emilio fixed some clipping issues.
  • Glenn optimized how picture caching handles world clips.
  • Glenn fixed picture caching tiles being discarded incorrectly.
  • Glenn split primitive preparation into a separate culling pass.
  • Glenn fixed some invalidation issues.
  • Glenn improved display list correlation.
  • Glenn re-landed picture caching.
  • Doug improved the way we deal with document splitting to allow more than two documents.

Ongoing work

The team keeps going through the remaining blockers (14 P2 bugs and 29 P3 bugs at the time of writing).

Enabling WebRender in Firefox Nightly

In about:config, set the pref “gfx.webrender.all” to true and restart the browser.
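
If you prefer keeping prefs in a file, the same flag can go in a user.js in your Firefox profile directory. This is a standard Firefox mechanism shown here only as a convenience; the about:config route above is the one from the newsletter.

```js
// user.js in your Firefox profile directory, applied at startup.
user_pref("gfx.webrender.all", true);
```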

Reporting bugs

The best place to report bugs related to WebRender in Firefox is the Graphics :: WebRender component in bugzilla.
Note that it is possible to log in with a github account.

Mozilla L10NL10n report: January edition

Welcome!

New localizers

Are you a locale leader and want us to include new members in our upcoming reports? Contact us!

New community/locales added

New content and projects

What’s new or coming up in Firefox desktop

The localization cycle for Firefox 66 in Nightly is approaching its end, and Tuesday (Jan 15) was the last day to get changes into Firefox 65 before it moves to release (Jan 29). These are the key dates for the next cycle:

  • January 28: Nightly will be bumped to version 67.
  • February 26: deadline to ship updates to Beta (Firefox 66).

As of January, localization of the Pocket add-on has moved back into the Firefox main project. That’s a positive change for localization, since it gives us a clearer schedule for updates, while before they were complex and sparse. All existing translations from the stand-alone process were imported into Mercurial repositories (and Pontoon).

In terms of prioritization, there are a couple of features to keep an eye on:

  • Profile per installation: with Firefox 67, Firefox will begin using a dedicated profile for each Firefox version (including Nightly, Beta, Developer Edition, and ESR). This will make Firefox more stable when switching between versions on the same computer and will also allow you to run different Firefox installations at the same time. This introduces a set of dialogs and web pages to warn the user about the change, and explain how to sync data between profiles. Unlike other features, this targets all versions, but Nightly users in particular, since they are more likely to have multiple profiles according to Telemetry data. That’s a good reason to prioritize these strings.
  • Security error pages: nothing is more frustrating than being unable to reach a website because of certificate issues. There are a lot of experiments happening around these pages and the associated user experience, both in Beta and Release, so it’s important to prioritize translations for these strings (they’re typically in netError.dtd).

What’s new or coming up in Test Pilot

As explained in this blog post, Test Pilot is reaching its end of life. The website localization has been updated in Pontoon to include messages around this change, while other experiments (Send, Monitor) will continue to exist as stand-alone projects. Screenshots is also going to see changes in the upcoming days, mostly on the server side of the project.

What’s new or coming up in mobile

Just like for Firefox desktop, the last day to get in localizations for Fennec 65 was Tuesday, Jan 15. Please see the desktop section above for more details.

The Firefox iOS v15 localization deadline was Friday, January 11. The app should be released to everyone by Jan 29th, after a phased roll-out. This time around we’ve added seven new locales: Angika, Burmese, Corsican, Javanese, Nepali, Norwegian Bokmål and Sundanese. This means that we’re currently shipping 87 locales out of the 88 that are being localized – twice as many as when we first shipped the app. Congrats to all the voluntary localizers involved in this effort over the years!

And stay tuned for an update on the upcoming v16 l10n timeline soon.

We’re also still working with the Lockbox Android team to get the project plugged into Pontoon, and you can expect to see something come up in the next couple of weeks.

The Firefox Reality project is also going to be available and open for localization very soon. We’re working out the specifics right now, and the timeline will be shared once everything is ironed out.

What’s new or coming up in web projects

Mozilla.org has a few updates.

  • Navigation bar: The new navigation.lang file contains strings for the redesigned navigation bar. When the language completion rate reaches 80%+, the new layout will be switched on. Try to get your locale completed by the time it is switched over.
  • The Content Blocking Tour with updated UIs will go live on 29 Jan. Catch up on all the updates by completing the firefox/tracking-protection-tour.lang file before then.

What’s new or coming up in Foundation projects

Mozilla’s big end-of-year push for donations has passed, and thanks in no small part to your efforts, the Foundation’s finances are in much better shape this year, letting it pick up the fight where it left off before the break. Thank you all for your help!

In these first days of 2019, the fundraising team is taking advantage of the quiet period to modernize the donation receipts, sending donors a better email and migrating the receipts to the same infrastructure used to send the Mozilla & Firefox newsletters. Content for the new receipts should be exposed in the Fundraising project by the end of the month for the 10-15 locales with the most donations in 2018.

The Advocacy team is still working on the misinformation campaign in Europe, with a first survey coming up that will be sent to people subscribed to the Mozilla newsletter to gauge their current attitudes toward misinformation. Next steps include launching a campaign about political ads ahead of the EU elections and then promoting anti-disinformation tools. Let’s do this!

What’s new or coming up in Support

What’s new or coming up in Pontoon

We re-launched the ability to delete translations. First you need to reject a translation, and then click on the trash can icon, which only appears next to rejected translations. The delete functionality had earlier been replaced by the reject functionality, but over time it became obvious there are use cases where both features need to co-exist. See bug 1397377 for more details about why we first removed and then restored this feature.

Events

  • Want to showcase an event coming up that your community is participating in? Reach out to any l10n-driver and we’ll include that (see links to emails at the bottom of this report)

Friends of the Lion

Image by Elio Qoshi

  • Sofwath came to us right after the new year holiday break through the Common Voice project. As the locale manager of Dhivehi, the official language of the Maldives, he gathered all the necessary information to onboard several new contributors. Together, they almost completed the website localization in a matter of days. They are already looking into publicly available government sources for sentence collection. Kudos to the entire community!

 

Know someone in your l10n community who’s been doing a great job and should appear here? Contact one of the l10n-drivers and we’ll make sure they get a shout-out (see list at the bottom)!

Useful Links

Questions? Want to get involved?

Did you enjoy reading this report? Let us know how we can improve by reaching out to any one of the l10n-drivers listed above.

hacks.mozilla.orgAugmented Reality and the Browser — An App Experiment

We all want to build the next (or perhaps the first) great Augmented Reality app. But there be dragons! The space is new and not well defined. There aren’t any AR apps that people use every day to serve as starting points or examples. Your new ideas have to compete against an already very high quality bar of traditional 2d apps. And building a new app can be expensive, especially for native app environments. This makes AR apps still somewhat uncharted territory, requiring a higher initial investment of time, talent and treasure.

But this also creates a sense of opportunity; a chance to participate early before the space is fully saturated.

From our point of view the questions are: What kinds of tools do artists, developers, designers, entrepreneurs and creatives of all flavors need to be able to easily make augmented reality experiences? What kinds of apps can people build with tools we provide?

For example: Can I watch Trevor Noah on the Daily Show this evening, and then release an app tomorrow that is a riff on a joke he made the previous night? A measure of success is being able to speak in rich media quickly and easily, to be a timely part of a global conversation.

With Blair MacIntyre‘s help I wrote an experiment to play-test a variety of ideas exploring these questions. In this comprehensive post-mortem I’ll review the app we made, what we learned and where we’re going next.

Finding “good” use cases

To answer some of the above questions, we started out surveying AR and VR developers, asking them their thoughts and observations. We had some rules of thumb. What we looked for were AR use cases that people value, that are meaningful enough, useful enough, make enough of a difference, that they might possibly become a part of people’s lives.

Existing AR apps also provided inspiration. One simple AR app I like for example is AirMeasure, which is part of a family of similar apps such as the Augmented Reality Measuring Tape. I use it once or twice a month and while not often, it’s incredibly handy. It’s an app with real utility and has 6500 reviews on the App Store  – so there’s clearly some appetite already.

image of airmeasure, an augmented reality measuring tape

Sean White, Mozilla’s Chief R&D Officer, has a very specific definition for an MVP (minimum viable product). He asks: What would 100 people use every day?

When I hear this, I hear something like: What kind of experience is complete, compelling, and useful enough, that even in an earliest incarnation it captures a core essential quality that makes it actually useful for 100 real world people, with real world concerns, to use daily even with current limitations? Shipping can be hard, and finding those first users harder.

Browser-based AR

New Pixel phones, iPhones and other emerging devices such as the Magic Leap already support Augmented Reality. They can report where the ground is and where the walls are, and answer other kinds of environment-sensing questions critical for AR. They support pass-through vision and 3d tracking and registration. Emerging standards, notably WebXR, will soon expose these powers to the browser in a standards-based way, much like the way other hardware features are built and made available in the browser.

Native app development toolchains are excellent but there is friction. It can be challenging to jump through the hoops required to release a product across several different app stores or platforms. Costs that are reasonable for a AAA title may not be reasonable for a smaller project. If you want to knock out an app tonight for a client tomorrow, or post an app as a response to an article in the press or a current event— it can take too long.

With AR support coming to the browser there’s an option now to focus on telling the story rather than worrying about the technology, costs and distribution. Browsers historically offer lower barriers to entry, and instant deployment to millions of users, unrestricted distribution and a sharing culture. Being able to distribute an app at the click of a link, with no install, lowers the activation costs and enables virality. This complements other development approaches, and can be used for rapid prototyping of ideas as well.
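
To make “AR in the browser” concrete, here is a minimal sketch of what starting an AR session looks like with the WebXR Device API as it later stabilized. This is not the code from the experiment described below (which relied on Mozilla’s non-standard WebXR Viewer), and it assumes WebXR type definitions such as @types/webxr are available.

```ts
// Minimal sketch: request an immersive AR session with hit testing and render
// on each frame. Illustrative only; not the experiment's actual code.
async function startAR(canvas: HTMLCanvasElement): Promise<void> {
  if (!navigator.xr || !(await navigator.xr.isSessionSupported("immersive-ar"))) {
    throw new Error("immersive-ar is not supported on this device/browser");
  }

  const gl = canvas.getContext("webgl", { xrCompatible: true })!;
  const session = await navigator.xr.requestSession("immersive-ar", {
    requiredFeatures: ["hit-test", "local-floor"],
  });
  session.updateRenderState({ baseLayer: new XRWebGLLayer(session, gl) });

  const refSpace = await session.requestReferenceSpace("local-floor");

  session.requestAnimationFrame(function onFrame(_time, frame) {
    const pose = frame.getViewerPose(refSpace);
    if (pose) {
      // Draw virtual objects from pose.views here; the camera pass-through is
      // composited by the browser itself.
    }
    session.requestAnimationFrame(onFrame);
  });
}
```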

ARPersist – the idea

In our experiment we explored what it would be like to decorate the world with virtual post-it notes. These notes can be posted from within the app, and they stick around between play sessions. Players can in fact see each other, and can see each other moving the notes in real time. The notes are geographically pinned and persist forever.

Using our experiment, a company could decorate their office with hints about how the printers work, or show navigation breadcrumbs to route a bewildered new employee to a meeting. Alternatively, a vacationing couple could walk into an AirBNB, open an “ARBNB” app (pardon the pun) and view post-it notes illuminating where the extra blankets are or how to use the washer.

We had these kinds of aspirational use case goals for our experiment:

  • Office interior navigation: Imagine an office decorated with virtual hints and possibly also with navigation support. Often a visitor or corporate employee shows up in an unfamiliar place — such as a regional Mozilla office or a conference hotel or even a hospital – and they want to be able to navigate that space quickly. Meeting rooms are on different floors — often with quirky names that are unrelated to location.  A specific hospital bed with a convalescing friend or relative could be right next door or up three flights and across a walkway. I’m sure we’ve all struggled to find bathrooms, or the cafeteria, or that meeting room. And even when we’ve found what we want – how does it work, who is there, what is important? Take the simple example of a printer. How many of us have stood in front of a printer for too long trying to figure out how to make a single photocopy?
  • Interactive information for house guests: Being a guest in a person’s home can be a lovely experience. AirBNB does a great job of fostering trust between strangers. But is there a way to communicate all the small details of a new space? How to use the Nest sensor, how to use the fancy dishwasher? Where is the spatula? Where are extra blankets? An AirBNB or shared rental could be decorated with virtual hints. An owner walks around the space and posts up virtual post-it notes attached to some of the items, indicating how appliances work. A machine-assisted approach also is possible – where the owner walks the space with the camera active, opens every drawer and lets the machine learning algorithm label and memorize everything. Or, imagine a real-time variation where your phone tells you where the cat is, or where your keys are. There’s a collaborative possibility as well here, a shared journal, where guests could leave hints for each other — although this does open up some other concerns which are tricky to navigate – and hard to address.
  • Public retail and venue navigation: These ideas could also work in a shopping scenario to direct you to the shampoo, or in a scenario where you want to pinpoint friends in a sports coliseum or concert hall or other visually noisy venue.

ARPersist – the app

Taking these ideas we wrote a standalone app for the iPhone 6S or higher — which you can try at arpersist.glitch.me and play with the source code at github.com/anselm/arpersist.

Here’s a short video of the app running, which you might have seen some days ago in my tweet:

And more detail on how to use the app if you want to try it yourself:

Here’s an image of looking at the space through the iPhone display:

An AR hornet in the living room as seen through the iphone

And an image of two players – each player can see the other player’s phone in 3d space and a heart placed on top of that in 3d:

You’ll need the WebXR Viewer for iOS, which you can get on the iTunes store. (WebXR standards are still maturing so this doesn’t yet run directly in most browsers.)

This work is open source, it’s intended to be re-used and intended to be played with, but also — because it works against non-standard browser extensions — it cannot be treated as something that somebody could build a commercial product with (yet).

The videos embedded above offer a good description: Basically, you open ARPersist, (using the WebXR viewer linked above on an iPhone 6s or higher), by going to the URL (arpersist.glitch.me). This drops you into a pass-through vision display. You’ll see a screen with four buttons on the right. The “seashell” button at the bottom takes you to a page where you can load and save maps. You’ll want to “create an anchor” and optionally “save your map”. At this point, from the main page, you can use the top icon to add new features to the world. Objects you place are going to stick to the nearest floor or wall. If you join somebody else’s map, or are at a nearby geographical location, you can see other players as well in real time.

This app features downloadable 3d models from Sketchfab. These are the assets I’m using:

  1. Flying Hornet by Ashley Aslett
  2. Low Poly Crow by fernandogilmiranda
  3. Love Low Poly by Suwulo

What went well

Coming out of that initial phase of development I’ve had many surprising realizations, and even a few eureka moments. Here’s what went well, which I describe as essential attributes of the AR experience:

  • Webbyness. Doing AR in a web app is very, very satisfying. This is good news because (in my opinion) mobile web apps more typically reflect how developers will create content in the future. Of course there are still questions, such as payment models and the difficulty of encrypting or obfuscating art assets if those assets are valuable. For example, a developer can buy a 3d model off the web and trivially incorporate that model into a web app, but it’s not yet clear how to do this without violating licensing terms around re-distribution, or how to compensate creators per use.
  • Hinting. This was a new insight. It turns out semantic hints are critical, both for intelligently decorating your virtual space with objects and for filtering noise. By hints I mean being able to say that the intent of a virtual object is that it should be shown on the floor, or attached to a wall, or on top of the watercooler. There’s a difference between simply placing something in space and understanding why it belongs in that position. Also, what quickly turns up is an idea of priorities. Some virtual objects are just not as important as others. This can depend on the user’s context. There are different layers of filtering, but ultimately you have some collection of virtual objects you want to render, and those objects need to argue amongst themselves which should be shown where (or not at all) if they collide. The issue isn’t the contention resolution strategy — it’s that the objects themselves need to provide rich metadata so that any strategies can exist. I went as far as classifying some of the kinds of hints that would be useful. When you make a new object there are some toggle fields you can set to help with expressing your intention around placement and priority. (A rough sketch of what such hint metadata could look like follows this list.)
  • Server/Client models. In serving AR objects to the client a natural client server pattern emerges. This model begins to reflect a traditional RSS pattern — with many servers and many clients. There’s a chance here to try and avoid some of the risky concentrations of power and censorship that we see already with existing social networks. This is not a new problem, but an old problem that is made more urgent. AR is in your face — and preventing centralization feels more important.
  • Login/Signup. Traditional web apps have a central sign-in concept. They manage your identity for you, and you use a password to sign into their service. However, today it’s easy enough to push that back to the user.

    This gets a bit geeky — but the main principle is that if you use modern public key cryptography to self-sign your own documents, then a central service is not needed to validate your identity. Here I implemented a public/private keypair system similar to Metamask. The strategy is that the user provides a long phrase and then I use Ian Coleman’s Mnemonic Code Converter bip39 to turn that into a public/private keypair. (In this case, I am using bitcoin key-signing algorithms.)

    In my example implementation, a given keypair can be associated with a given collection of objects, and it helps prune a core responsibility away from any centralized social network. Users self-sign everything they create. (A minimal sketch of this self-signing flow also follows this list.)

  • 6DoF control. It can be hard to write good controls for translating, rotating and scaling augmented reality objects through a phone. But towards the end of the build I realized that the phone itself is a 6dof controller. It can be a way to reach, grab, move and rotate — and vastly reduce the labor of building user interfaces. Ultimately I ended up throwing out a lot of complicated code for moving, scaling and rotating objects and replaced it simply with a single power, to drag and rotate objects using the phone itself. Stretching came for free — if you tap with two fingers instead of one finger then your finger distance is used as the stretch factor.
  • Multiplayer. It is pretty neat having multiple players in the same room in this app. Each of the participants can manipulate shared objects, and each participant can be seen as a floating heart in the room — right on top of where their phone is in the real world. It’s quite satisfying. There wasn’t a lot of shared compositional editing (because the app is so simple) but if the apps were more powerful this could be quite compelling.
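
As referenced in the Hinting bullet above, here is a rough sketch of what placement-hint metadata could look like. Every name in it is hypothetical; it illustrates the idea rather than ARPersist’s actual schema.

```ts
// Hypothetical hint metadata for a virtual object: where it wants to live and
// how important it is when several objects compete for the same spot.
type Anchor = "floor" | "wall" | "ceiling" | "free";

interface VirtualObject {
  id: string;
  model: string;    // URL of a glTF asset
  anchor: Anchor;   // placement intent
  priority: number; // higher wins on collisions
  label?: string;   // e.g. "extra blankets are in here"
}

// One possible contention strategy: keep only the highest-priority objects.
function resolveCollisions(candidates: VirtualObject[], limit: number): VirtualObject[] {
  return [...candidates].sort((a, b) => b.priority - a.priority).slice(0, limit);
}
```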
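
And as referenced in the Login/Signup bullet, here is a minimal sketch of the self-signing idea, assuming the npm packages bip39 and tweetnacl. The post describes deriving a bitcoin (secp256k1) keypair from a BIP39 phrase; ed25519 is substituted here purely to keep the example short, so treat this as an illustration of the flow, not the app’s implementation.

```ts
// Sketch: derive a deterministic keypair from a user-supplied phrase and use it
// to self-sign content objects. Assumes the `bip39` and `tweetnacl` packages.
import * as bip39 from "bip39";
import nacl from "tweetnacl";

function keypairFromPhrase(phrase: string) {
  // PBKDF2-stretch the phrase into a 64-byte seed, then use 32 bytes of it
  // as the ed25519 signing seed.
  const seed = bip39.mnemonicToSeedSync(phrase).subarray(0, 32);
  return nacl.sign.keyPair.fromSeed(new Uint8Array(seed));
}

function signObject(obj: unknown, secretKey: Uint8Array): Uint8Array {
  // Real code would canonicalize the JSON before signing.
  const bytes = new TextEncoder().encode(JSON.stringify(obj));
  return nacl.sign.detached(bytes, secretKey);
}

function verifyObject(obj: unknown, signature: Uint8Array, publicKey: Uint8Array): boolean {
  const bytes = new TextEncoder().encode(JSON.stringify(obj));
  return nacl.sign.detached.verify(bytes, signature, publicKey);
}
```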

Challenges that remain

We also identified many challenges. Here are some of the ones we faced:

  • Hardware. There’s a fairly strong signal that Magic Leap or Hololens will be better platforms for this experience. Phones just are not a very satisfying way to manipulate objects in Augmented Reality. A logical next step for this work is to port it to the Magic Leap or the Hololens or both or other similar emerging hardware.
  • Relocalization. One serious, almost blocker problem had to do with poor relocalization. Between successive runs I couldn’t reestablish where the phone was. Relocalization, my device’s ability to accurately learn its position and orientation in real world space, was unpredictable. Sometimes it would work many times in a row when I would run the app. Sometimes I couldn’t establish relocalization once in an entire day. It appears that optimal relocalization is demanding, and requires very bright sunlight, stable lighting conditions and jumbled sharp edge geometry. Relocalization on passive optics is too hard and it disrupts the feeling of continuity — being able to quit the app and restart it, or enabling multiple people to share the same experience from their own devices. I played with a work-around, which was to let users manually relocalize — but I think this still needs more exploration.

    This is ultimately a hardware problem. Apple/Google have done an unbelievable job with pure software but the hardware is not designed for the job. Probably the best short-term answer is to use a QRCode. A longer term answer is to just wait a year for better hardware. Apparently next-gen iPhones will have active depth sensors and this may be an entirely solved problem in a year or two. (The challenge is that we want to play with the future before it arrives — so we do need some kind of temporary solution for now.)

  • Griefing. Although my test audience was too small to have any griefers — it was pretty self-evident that any canonical layer of reality would instantly be filled with graphical images that could be offensive or not safe for work (NSFW). We have to find a way to allow for curation of layers. Spam and griefing are important to prevent but we don’t want to censor self-expression. The answer here was to not have any single virtual space but to let people self select who they follow. I could see roles emerging for making it easy to curate and distribute leadership roles for curation of shared virtual spaces — similar to Wikipedia.
  • Empty spaces. AR is a lonely world when there is nobody else around. Without other people nearby it’s just not a lot of fun to decorate space with virtual objects at all. So much of this feels social. A thought here is that it may be better, and possible, to create portals that wire together multiple AR spaces — even if those spaces are not actually in the same place — in order to bring people together to have a shared consensus. This begins to sound more like VR in some ways but could be a hybrid of AR and VR together. You could be at your house, and your friend at their house, and you could join your rooms together virtually, and then see each other’s post-it notes or public virtual objects in each other’s spaces (attached to the nearest walls or floors based on the hints associated with those objects).
  • Security/Privacy. Entire posts could be written on this topic alone. The key issue is that sharing a map to a server, that somebody else can then download, means leaking private details of your own home or space to other parties. Some of this simply means notifying the user intelligently — but this is still an open question and deserves thought.
  • Media Proxy. We’re fairly used to being able to cut and paste links into slack or into other kinds of forums, but the equivalent doesn’t quite yet exist in VR/AR, although the media sharing feature in Hubs, Mozilla’s virtual reality chat system and social environment, is a first step. It would be handy to paste not only 3d models but also PDFs, videos and the like. There is a highly competitive anti-sharing war going on between rich media content providers and entities that want to allow and empower sharing of content. Take the example of iframely, a service that aims to simplify and optimize rich media sharing between platforms and devices.

Next steps

Here’s where I feel this work will go next:

  • Packaging. Although the app works “technically” it isn’t that user friendly. There are many UI assumptions. When capturing a space one has to let the device capture enough data before saving a map. There’s no real interface for deleting old maps. The debugging screen, which provides hints about the system state, is fairly incomprehensible to a novice. Basically the whole acquisition and tracking phase should “just work” and right now it requires a fair level of expertise. The right way to exercise a more cohesive “package” is to push this experience forward as an actual app for a specific use case. The AirBNB decoration use case seems like the right one.
  • HMD (Head-mounted display) support. Magic Leap or Hololens or possibly even Northstar support. The right place for this experience is in real AR glasses. This is now doable and it’s worth doing. Granted every developer will also be writing the same app, but this will be from a browser perspective, and there is value in a browser-based persistence solution.
  • Embellishments. There are several small features that would be quick easy wins. It would be nice to show contrails of where people moved through space for example. As well it would be nice to let people type in or input their own text into post-it notes (right now you can place gltf objects off the net or images). And it would be nice to have richer proxy support for other media types as mentioned. I’d like to clarify some licensing issues for content as well in this case. Improving manual relocalization (or using a QRCode) could help as well.
  • Navigation. I didn’t do the in-app route-finding and navigation; it’s one more piece that could help tell the story. I felt it wasn’t as critical as basic placement — but it helps argue the use cases.
  • Filtering. We had aspirations around social networking — filtering by peers that we just didn’t get to test out. This would be important in the future.

Several architecture observations

This research wasn’t just focused on user experience but also explored internal architecture. As a general rule I believe that the architecture behind an MVP should reflect a mature partitioning of jobs that the full-blown app will deliver. In nascent form, the MVP has to architecturally reflect a larger code base. The current implementation of this app consists of these parts (which I think reflect important parts of a more mature system):

  • Cloud Content Server. A server must exist which hosts arbitrary data objects from arbitrary participants. We needed some kind of hosting that people can publish content to. In a more mature universe there could be many servers. Servers could just be WordPress, and content could just be GeoRSS. Right now however I have a single server — but at the same time that server doesn’t have much responsibility. It is just a shared database. There is a third party ARCloud initiative which speaks to this as well.
  • Content Filter. Filtering content is an absurdly critical MVP requirement. We must be able to show that users can control what they see. I imagine this filter as a perfect agent, a kind of copy of yourself that has the time to carefully inspect every single data object and ponder if it is worth sharing with you or not. The content filter is a proxy for you, your will. It has perfect serendipity, perfect understanding and perfect knowledge of all things. The reality of course falls short of this — but that’s my mental model of the job here. The filter can exist on device or in the cloud.
  • Renderer. The client-side rendering layer deals with painting stuff on your field of view. It deals with contention resolution between objects competing for your attention. It handles presentation semantics — that some objects want to be shown in certain places  —  as well as ideas around fundamental UX paradigms for how people will interact with AR. Basically it invents an AR desktop  — a fundamental AR interface  — for mediating human interaction. Again of course, we can’t do all this, but that’s my mental model of the job here.
  • Identity Management. This is unsolved for the net at large and is destroying communication on the net. It’s arguably one of the most serious problems in the world today because if we can’t communicate, and know that other parties are real, then we don’t have a civilization. It is a critical problem for AR as well because you cannot have spam and garbage content in your face. The approach I mentioned above is to have users self-sign their utterances. On top of this would be conventional services to build up follow lists of people (or what I call emitters) and then arbitration between those emitters using a strategy to score emitters based on the quality of what they say, somewhat like a weighted contextual network graph.
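
A tiny sketch of the emitter-scoring idea in that last bullet, with entirely hypothetical names: the client keeps a weighted follow list and only renders objects whose (self-signed) author clears a trust threshold.

```ts
// Hypothetical client-side content filter: score the emitters you follow and
// hide objects from emitters below a trust threshold.
interface Emitter {
  publicKey: string;
  weight: number; // 0..1, how much this user trusts the emitter
}

interface SignedObject {
  author: string;   // emitter's public key (the object is self-signed by them)
  payload: unknown;
}

function filterFeed(objects: SignedObject[], follows: Emitter[], threshold = 0.5): SignedObject[] {
  const trust = new Map(follows.map((f) => [f.publicKey, f.weight] as [string, number]));
  return objects.filter((o) => (trust.get(o.author) ?? 0) >= threshold);
}
```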

An architectural observation regarding geolocation of all objects

One other technical point deserves a bit more elaboration. Before we started we had to answer the question of “how do we represent or store the location of virtual objects?”. Perhaps this isn’t a great conversation starter at the pub on a Saturday night, but it’s important nevertheless.

We take so many things for granted in the real world – signs, streetlights, buildings. We expect them to stick around even when you look away. But programming is like universe building: you have to do everything by hand.

The approach we took may seem obvious: to define object position with GPS coordinates. We give every object a latitude, longitude and elevation (as well as orientation).

But the gotcha is that phones today don’t have precise geolocation. We had to write a wrapper of our own. When users start our app we build up (or load) an augmented reality map of the area. That map can be saved back to a server with a precise geolocation. Once there is a map of a room, then everything in that map is also very precisely geo-located. This means everything you place or do in our app is in fact specified in earth global coordinates.

Blair points out that although modern smartphones (or devices) today don’t have very accurate GPS, this is likely to change soon. We expect that in the next year or two GPS will become hyper-precise – augmented by 3d depth maps of the landscape – making our wrapper optional.
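
A rough sketch of the kind of wrapper described above: once the AR map’s origin has a known latitude, longitude and elevation, a local offset in meters can be converted to global coordinates with the usual equirectangular approximation (about 111,320 m per degree of latitude). Names and structure here are illustrative, not the experiment’s code.

```ts
// Convert a position expressed in the locally-tracked AR map (meters east,
// north, up from the map origin) into global coordinates. The equirectangular
// approximation is accurate enough for room-scale offsets.
interface GeoPose {
  latitude: number;  // degrees
  longitude: number; // degrees
  elevation: number; // meters
}

const METERS_PER_DEGREE_LAT = 111_320;

function localToGlobal(origin: GeoPose, east: number, north: number, up: number): GeoPose {
  const latRad = (origin.latitude * Math.PI) / 180;
  return {
    latitude: origin.latitude + north / METERS_PER_DEGREE_LAT,
    longitude: origin.longitude + east / (METERS_PER_DEGREE_LAT * Math.cos(latRad)),
    elevation: origin.elevation + up,
  };
}
```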

Conclusions

Our exploration has been taking place in conversation and code. Personally I enjoy this praxis — spending some time talking, and then implementing a working proof of concept. Nothing clarifies thinking like actually trying to build an example.

At the 10,000 foot view, at the idealistic end of the spectrum, it is becoming obvious that we all have different ideas of what AR is or will be. The AR view I crave is one of many different information objects from many different providers — personal reminders, city traffic overlays, weather bots, friend location notifiers, contrails of my previous trajectories through space, etc. It feels like a creative medium. I see users wanting to author objects, where different objects have different priorities, where different objects are “alive” — that they have their own will, mobility and their own interactions with each other. In this way an AR view echoes a natural view of the default world — with all kinds of entities competing for our attention.

Stepping back even further — at a 100,000 foot view — there are several fundamental communication patterns that humans use creatively. We use visual media (signage) and we use audio (speaking, voice chat). We have high-resolution, high-fidelity expressive capabilities that include our body language, our hand gestures, and especially a hugely rich facial expressiveness. We also have text-based media — and many other kinds of media. It feels like when anybody builds a communication medium that easily allows humans to channel some of their high-bandwidth needs over that pipeline, that medium can become very popular. Skype, messaging, wikis, even music — all of these things meet fundamental expressive human drives; they are channels for output and expressiveness.

In that light a question that’s emerging for me is “Is sharing 3D objects in space a fundamental communication medium?”. If so then the question becomes more “What are reasons to NOT build a minimal capability to express the persistent 3d placement of objects in space?”. Clearly work needs to make money and be sustainable for people who make the work. Are we tapping into something fundamental enough, valuable enough, even in early incarnations, that people will spend money (or energy) on it? I posit that if we help express fundamental human communication  patterns — we all succeed.

What’s surprising is the power of persistence. When the experience works well I have the mental illusion that my room indeed has these virtual images and objects in it. Our minds seem deeply fooled by the illusion of persistence. Similar to using the Magic Leap there’s a sense of “magic” — the sense that there’s another world — that you can see if you squint just right. Even after you put down the device that feeling lingers. Augmented Reality is starting to feel real.

The post Augmented Reality and the Browser — An App Experiment appeared first on Mozilla Hacks - the Web developer blog.

The Mozilla BlogEvolving Firefox’s Culture of Experimentation: A Thank You from the Test Pilot Program

For the last three years Firefox has invested heavily in innovation, and our users have been an essential part of this journey. Through the Test Pilot Program, Firefox users have been able to help us test and evaluate a variety of potential Firefox features. Building on the success of this program, we’re proud to announce today that we’re evolving our approach to experimentation even further.

Lessons Learned from Test Pilot

Test Pilot was designed to harness the energy of our most passionate users. We gave them early prototypes and product explorations that weren’t ready for wide release. In return, they gave us feedback and patience as these projects evolved into the highly polished features within our products today. Through this program we have been able to iterate quickly, try daring new things, and build products that our users have been excited to embrace.

Graduated Features

Since the beginning of the Test Pilot program, we’ve built or helped build a number of popular Firefox features. Activity Stream, which now features prominently on the Firefox homepage, was in the first round of Test Pilot experiments. Activity Stream brought new life to an otherwise barren page and made it easier to recall and discover new content on the web. The Test Pilot team continued to draw the attention of the press and users alike with experiments like Containers that paved the way for our highly successful Facebook Container. Send made private, encrypted file sharing as easy as clicking a button. Lockbox helped you take your Firefox passwords to iOS devices (and soon to Android). Page Shot started as a simple way to capture and share screenshots in Firefox; we shipped that feature, now known as Screenshots. We have since added our new approach to anti-tracking, which first gained traction as a Test Pilot experiment.

So what’s next?

Test Pilot performed better than we could have ever imagined. As a result of this program we’re now in a stronger position where we are using the knowledge that we gained from small groups, evangelizing the benefits of rapid iteration, taking bold (but safe) risks, and putting the user front and center.

We’re applying these valuable lessons not only to continued product innovation, but also to how we test and ideate across the Firefox organization. So today, we are announcing that we will be moving to a new structure that will demonstrate our ability to innovate in exciting ways, and as a result we are closing the Test Pilot program as we’ve known it.

More user input, more testing

Migrating to a new model doesn’t mean we’re doing fewer experiments. In fact, we’ll be doing even more! The innovation processes that led to products like Firefox Monitor are no longer the responsibility of a handful of individuals but rather the entire organization. Everyone is responsible for maintaining the Culture of Experimentation Firefox has developed through this process. These techniques and tools have become a part of our very DNA and identity. That is something to celebrate. As such, we won’t be uninstalling any experiments you’re using today. In fact, many of the Test Pilot experiments and features will find their way to Addons.Mozilla.Org, while others like Send and Lockbox will continue to take in more input from you as they evolve into stand-alone products.

We couldn’t do it without you

We want to thank Firefox users for their input and support of product features and functionality testing through the Test Pilot Program. We look forward to continuing to work closely with our users who are the reason we build Firefox in the first place. In the coming months look out for news on how you can get involved in the next stage of our experimentation.

In the meantime, the Firefox team will continue to focus on the next release and what we’ll be developing in the coming year, while other Mozillians chug away at developing equally exciting and user-centric product solutions and services. You can get a sneak peek at some of these innovations at Mozilla Labs, which touches everything from voice capability to IoT to AR/VR.

And so we say goodbye and thank you to Test Pilot for helping us usher in a bright future of innovation at Mozilla.

The post Evolving Firefox’s Culture of Experimentation: A Thank You from the Test Pilot Program appeared first on The Mozilla Blog.

hacks.mozilla.orgDesigning the Flexbox Inspector

Screenshot showing the Flex highlighter, Flex Container pane, and Flex Item pane

The new Flexbox Inspector, created by Firefox DevTools, helps developers understand the sizing, positioning, and nesting of Flexbox elements. You can try it out now in Firefox DevEdition or join us for its official launch in Firefox 65 on January 29th.

The UX challenges of this tool have been both frustrating and a lot of fun for our team. Built on the basic concepts of the CSS Grid Inspector, we sought to expand on the possibilities of what a design tool could be. I’m excited to share a behind-the-scenes look at the UX patterns and processes that drove our design forward.

Research and ideation

CSS Flexbox is an increasingly popular layout model that helps in building robust dynamic page layouts. However, it has a big learning curve—at the beginning of this project, our team wasn’t sure if we understood Flexbox ourselves, and we didn’t know what the main challenges were. So, we gathered data to help us design the basic feature set.

Our earliest research on design-focused tools included interviews with developer/designer friends and community members who told us they wanted to understand Flexbox better.

We also ran a survey to rank the Flexbox features folks most wanted to see. Min/max width and height constraints received the highest score. The ranking of shrink/grow features was also higher than we expected. This greatly influenced our plans, as we had originally assumed these more complicated features could wait for a version 2.0. It was clear, however, that these were the details developers needed most.

Flexbox survey results

Most of the early design work took the form of spirited brainstorming sessions in video chat, text chat, and email. We also consulted the experts: Daniel Holbert, our Gecko engine developer who implemented the Flexbox spec for Firefox; Dave Geddes, CSS educator and creator of the Flexbox Zombies course; and Jen Simmons, web standards champion and designer of the Grid Inspector.

The discussions with friendly and passionate colleagues were among the best parts of working on this project. We were able to deep-dive into the meaty questions, the nitty-gritty details, and the far-flung ideas about what could be possible. As a designer, it is amazing to work with developers and product managers who care so much about the design process and have so many great UX ideas.

Visualizing a new layout model

After our info-gathering, we worked to build our own mental models of Flexbox.

While trying to learn Flexbox myself, I drew diagrams that show its different features.

Early Flexbox diagram

My colleague Gabriel created a working prototype of a Flexbox highlighter that greatly influenced our first launch version of the overlay. It’s a monochrome design similar to our Grid Inspector overlay, with a customizable highlight color to make it clearly visible on any website.

We use a dotted outline for the container, solid outlines for items, and diagonal shading between the items to represent the free space created by justify-content and margins.

NYTimes header with Flexbox overlay

Youtube header with Flexbox overlay

We got more adventurous with the Flexbox pane inside DevTools. The flex item diagram (or “minimap” as we love to call it) shows a visualization of basis, shrink/grow, min/max clamping, and the final size—each attribute appearing only if it’s relevant to the layout engine’s sizing decisions.

Flex item diagram

Many other design ideas, such as these flex container diagrams, didn’t make it into the final MVP, but they helped us think through the options and may get incorporated later.

Early container diagram design

Color-coded secrets of the rendering engine

With help from our Gecko engineers, we were able to display a chart with step-by-step descriptions of how a flex item’s size is determined. Basic color-coding between the diagram and chart helps to connect the two UIs.

Flex item sizing steps
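
To make the chart’s steps concrete, here is a deliberately simplified, single-item model of that sizing logic as a small Rust sketch. It is not the inspector’s code and not the full CSS spec algorithm (which freezes items and re-distributes space across a whole flex line); the function name and parameters are hypothetical, but the order of operations mirrors what the chart describes: start from the basis, apply grow or shrink, then clamp to min/max.

```rust
// A rough, single-item model of flex sizing (not the CSS spec algorithm
// and not the inspector's implementation): start from the flex basis,
// hand out positive or negative free space according to the grow/shrink
// factors, then clamp the result to the min/max constraints.
fn resolve_flex_item_size(
    basis: f32,      // resolved flex-basis (or the content size)
    grow: f32,       // flex-grow factor
    shrink: f32,     // flex-shrink factor
    min_size: f32,   // e.g. min-width on the main axis
    max_size: f32,   // e.g. max-width on the main axis
    free_space: f32, // container main size minus the sum of the bases
) -> f32 {
    let unclamped = if free_space >= 0.0 {
        // Positive free space is distributed in proportion to flex-grow.
        basis + free_space * grow
    } else {
        // Negative free space is absorbed in proportion to flex-shrink.
        basis + free_space * shrink
    };

    // min/max clamping is applied last, so it can override grow/shrink.
    unclamped.clamp(min_size, max_size)
}

fn main() {
    // An item with basis 400px, shrink 1, and min-width 250px in a
    // container that is 200px too small: it tries to shrink to 200px
    // but min-width clamps it to 250px.
    let size = resolve_flex_item_size(400.0, 0.0, 1.0, 250.0, f32::INFINITY, -200.0);
    println!("resolved main size: {}px", size);
}
```

In the real engine these steps interact across all the items in a line, which is exactly the kind of invisible decision-making the minimap and the sizing chart try to surface.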

Markup badges and other entry points

Flex badges in the markup view serve as indicators of flex containers as well as shortcuts for turning on the in-page overlay. Early data shows that this is the most common way to turn on the overlay; the toggle switch in the Layout panel and the button next to the display:flex declaration in Rules are two other commonly used methods. Having multiple entry points accommodates different workflows, which may focus on any one of the three Inspector panels.

Flex badges in the markup view

Surfacing a brand new tool

Building new tools can be risky because they ask developers to change their everyday workflows. One of my big fears was that we’d spend countless hours on a new feature only to hide it away somewhere inside the complicated megaplex that is Firefox Developer Tools. This could result in people never finding it or not bothering to navigate to it.

To invite usage, we automatically show Flexbox info in the Layout panel whenever a developer selects a flex container or item inside the markup view. The Layout panel will usually be visible by default in the third Inspector column, which we added in Firefox 62. From there, the developer can choose to dig deeper into flex visualizations and relationships.

Showing the Flexbox info automatically when selecting a Flex element

Mobile-inspired navigation & structure

One new thing we’re trying is a page-style navigation in which the developer goes “forward a page” to traverse down the tree (to child elements), or “back a page” to go up the tree (to parent elements). We’re also making use of a select menu for jumping between sibling flex items. Inspired by mobile interfaces, the Firefox hamburger menu, and other page-style UIs, it’s a big experimental departure from the simpler navigation normally used in DevTools.

Page-like navigation

One of the trickier parts of the structure was coming up with a cohesive design for flex containers, items, and nested container-items. My colleague Patrick figured out that we should have two types of flex panes inside the Layout panel, showing whichever is relevant: an Item pane or a Container pane. Both panes show up when the element is both a container and an item.

Layout panel showing flex container and item info

Tighter connection with in-page context

When hovering over element names inside the Flexbox panes, we highlight the element in the page, strengthening the connection between the code and the output without including extra ‘inspect’ icons or other steps. I plan to introduce more of this type of intuitive hover behavior into other parts of DevTools.

Hovering over a flex item name which triggers a highlight in the page

Testing and development

After lots of iteration, I created a high-fidelity prototype to share with our community channels. We received lots of helpful comments that fed back into the design.

Different screens in the Flexbox Inspector prototype

We had our first foray into formal user testing, which was helpful in revealing the confusing parts of our tool. We plan to continue improving our user research process for all new projects.

User testing video

UserTesting asks participants to record their screens and think aloud as they try out software

Later this month, developers from our team will be writing a more technical deep-dive about the Flexbox Inspector. Meanwhile, here are some fun tidbits from the dev process: Lots and lots of issues were created in Bugzilla to organize every implementation task of the project. Silly test pages, like this one, created by my colleague Mike, were made to test out every Flexbox situation. Our team regularly used the tool on various sites in Firefox Nightly to dog-food it and find bugs.

What’s next

2018 was a big year for Firefox DevTools and the new Design Tools initiative. There were hard-earned lessons and times of doubt, but in the end, we came together as a team and we shipped!

We have more work to do in improving our UX processes, stepping up our research capabilities, and understanding the results of our decisions. We have more tools to build—better debugging tools for all types of CSS layouts and smoother workflows for CSS development. There’s a lot more we can do to improve the Flexbox Inspector, but it’s time for us to put it out into the world and see if we can validate what we’ve already built.

Now we need your help. It’s critical that the Flexbox Inspector gets feedback from real-world usage. Give it a spin in DevEdition, and let us know via Twitter or Discourse about any bugs, ideas, or big wins.

____

Thanks to Martin Balfanz, Daniel Holbert, Patrick Brosset, and Jordan Witte for reviewing drafts of this article.

The post Designing the Flexbox Inspector appeared first on Mozilla Hacks - the Web developer blog.

Mozilla Gfx Team: WebRender newsletter #35

Bonsoir! Another week, another newsletter. I stealthily published WebRender on crates.io this week. This doesn’t mean anything in terms of API stability and whatnot, but it makes it easier for people to use WebRender in their own Rust projects. Many people asked for it, so there it is. Everyone is welcome to use it, find bugs, report them, and even submit fixes and improvements!

In other news, we are initiating a notable workflow change: WebRender patches will land directly in Firefox’s mozilla-central repository, and a bot will automatically mirror them on GitHub. This change mostly affects the gfx team. What it means for us is that testing WebRender changes becomes a lot easier, since we no longer have to manually import every single work-in-progress commit to test it against Firefox’s CI. Also, Kats won’t have to spend a considerable amount of his time porting WebRender changes to mozilla-central anymore.
We know that interacting with mozilla-central can be intimidating for external contributors, so we’ll still accept pull requests on the GitHub repository. Instead of merging them from there, however, someone on the gfx team will import them into mozilla-central manually (which we already had to do for non-trivial patches in order to run them against CI before merging). So for anyone who doesn’t work on WebRender every day, this workflow change is pretty much cosmetic. You are still welcome to keep following and interacting with the GitHub repository.

Notable WebRender and Gecko changes

  • Jeff fixed a recent regression that was causing blob images to be painted twice.
  • Kats did the work to make the repository transition possible without losing any of the tools and testing we have in WebRender. He also set up the repository synchronization.
  • Kvark completed the clipping API saga.
  • Matt added some new telemetry for paint times, that take vsync into account.
  • Matt fixed a bug with a telemetry probe that was mixing content and UI paint times.
  • Andrew fixed an image flickering issue.
  • Andrew fixed a bug with image decode size and pixel snapping.
  • Lee fixed a crash in DWrite font rasterization.
  • Lee fixed a bug related to transforms and clips.
  • Emilio fixed a bug with clip path and nested clips.
  • Glenn fixed the caching of fixed-position clips.
  • Glenn improved the cached tile eviction heuristics (2).
  • Glenn fixed an intermittent test failure.
  • Glenn fixed caching with opacity bindings that are values.
  • Glenn avoided caching tiles that always change.
  • Glenn fixed a cache eviction issue.
  • Glenn added a debugging overlay for picture caching.
  • Nical reduced the overdraw when rendering dashed corners, which was causing freezes in extreme cases.
  • Nical made it possible to run wrench/scripts/headless.py (which lets us run CI under os-mesa) inside gdb, cgdb, rust-gdb and rr, with both release and debug builds (see Debugging WebRender on the wiki for more info about how to set this up).
  • Nical fixed a blob image key leak.
  • Sotaro fixed the timing of async animation deletion which addressed bug 1497852 and bug 1505363.
  • Sotaro fixed a cache invalidation issue when the number of blob rasterization requests hits the per-transaction limit.
  • Doug cleaned up WebRenderLayerManager’s state management.
  • Doug fixed a lot of issues in WebRender when using multiple documents at the same time.

Ongoing work

The team keeps going through the remaining blockers (19 P2 bugs and 34 P3 bugs at the time of writing).

Enabling WebRender in Firefox Nightly

In about:config, set the pref “gfx.webrender.all” to true and restart the browser.

Reporting bugs

The best place to report bugs related to WebRender in Firefox is the Graphics :: WebRender component in bugzilla.
Note that it is possible to log in with a github account.

The Mozilla Blog: Eric Rescorla Wins the Levchin Prize at the 2019 Real-World Crypto Conference

The Levchin Prize awards two entrepreneurs every year for significant contributions to solving global, real-world cryptography issues that make the internet safer at scale. This year, we’re proud to announce that our very own Firefox CTO, Eric Rescorla, was awarded one of these prizes for his involvement in spearheading the latest version of Transport Layer Security (TLS). TLS 1.3, which was completed in August, incorporates significant improvements in both security and speed and already secures 10% of sites.

Eric has contributed extensively to many of the core security protocols used on the internet, including TLS, DTLS, WebRTC, ACME, and the in-development IETF QUIC protocol. Most recently, he was editor of TLS 1.3, which already secures 10% of websites despite having been finished for less than six months. He also co-founded Let’s Encrypt, a free and automated certificate authority that now issues more than a million certificates a day, in order to remove barriers to online encryption, and helped HTTPS grow from around 30% of the web to around 75%. Previously, he served on the California Secretary of State’s Top To Bottom Review, where he was part of a team that found severe vulnerabilities in multiple electronic voting devices.

The 2019 winners were selected by the Real-World Cryptography conference steering committee, which includes professors from Stanford University, University of Edinburgh, Microsoft Research, Royal Holloway University of London, Cornell Tech, University of Florida, University of Bristol, and NEC Research.

This prize was announced on January 9th at the 2019 Real-World Crypto Conference in San Jose, California. The conference brings together cryptography researchers and developers from around the world who are implementing cryptography on the internet, in the cloud, and on embedded devices. The conference is organized by the International Association of Cryptologic Research (IACR) to strengthen and advance the conversation between these two communities.

For more information about the Levchin Prize visit www.levchinprize.com.

The post Eric Rescorla Wins the Levchin Prize at the 2019 Real-World Crypto Conference appeared first on The Mozilla Blog.

Open Policy & Advocacy: Our Letter to Congress About Facebook Data Sharing

Last week Mozilla sent a letter to the House Energy and Commerce Committee concerning its investigation into Facebook’s privacy practices. We believe Facebook’s representations to the Committee, and its more recent statements, concerning Mozilla are inaccurate and wanted to set the record straight about any past and current work with Facebook. You can read the full letter here.

The post Our Letter to Congress About Facebook Data Sharing appeared first on Open Policy & Advocacy.

The Mozilla Blog: Mozilla Announces Deal to Bring Firefox Reality to HTC VIVE Devices

Last year, Mozilla set out to build a best-in-class browser that was made specifically for immersive browsing. The result was Firefox Reality, a browser designed from the ground up to work on virtual reality headsets. To kick off 2019, we are happy to announce that we are partnering with HTC VIVE to power immersive web experiences across Vive’s portfolio of devices.

What does this mean? It means that Vive users will enjoy all of the benefits of Firefox Reality (such as its speed, power, and privacy features) every time they open the Vive internet browser. We are also excited to bring our feed of immersive web experiences to every Vive user. There are so many amazing creators out there, and we are continually impressed by what they are building.

“This year, Vive has set out to bring everyday computing tasks into VR for the first time,” said Michael Almeraris, Vice President, HTC Vive. “Through our exciting and innovative collaboration with Mozilla, we’re closing the gap in XR computing, empowering Vive users to get more content in their headset, while enabling developers to quickly create content for consumers.”

Virtual reality is one example of how web browsing is evolving beyond our desktop and mobile screens. Here at Mozilla, we are working hard to ensure these new platforms can deliver browsing experiences that provide users with the level of privacy, ease-of-use, and control that they have come to expect from Firefox.

In the few months since we released Firefox Reality, we have already released several new features and improvements based on the feedback we’ve received from our users and content creators. In 2019, you will see us continue to prove our commitment to this product and our users with every update we provide.

Stay tuned to our mixed reality blog and Twitter account for more details. In the meantime, you can check out all of the announcements from HTC Vive here.

If you have an all-in-one VR device running Vive Wave, you can search for “Firefox Reality” in the Viveport store to try it out right now.

The post Mozilla Announces Deal to Bring Firefox Reality to HTC VIVE Devices appeared first on The Mozilla Blog.

Mozilla VR Blog: Navigation Study for 3DoF Devices

Over the past few months I’ve been building VR demos and writing tutorial blogs. Navigation on a device with only three degrees of freedom (3DoF) is tricky, so I decided to do a survey of many native apps and games for the Oculus Go to see how each of them handled it. Below are my results.

For this study I looked only at navigation, meaning how the user moves around in the space, either by directly moving or by jumping to semantically different spaces (ex: click a door to go to the next room). I don't cover other interactions like how buttons or sliders work. Just navigation.

TL;DR

Don’t touch the camera. The camera is part of the user’s head. Don’t try to move it. All apps that move the camera induce some form of motion sickness. Instead, use one of a few different forms of teleportation, always under user control.

The ideal control for me was teleportation to semantically meaningful locations, not just 'forward ten steps'. Furthermore, when presenting the user with a full 360 environment it is helpful to have a way to recenter the view, such as by using left/right buttons on the controller. Without a recentering option the user has to physically turn around, which is cumbersome unless they are in a swivel chair.
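
As a minimal illustration of that recentering advice (a hypothetical Rust sketch, not code from any of the apps reviewed below), an instant snap turn can be as simple as adding a fixed yaw increment per button press, with no animation in between:

```rust
// Hypothetical snap-turn/recentering sketch: each left or right press
// rotates the view by a fixed increment instantly, avoiding the smooth
// camera motion that tends to induce motion sickness.
const SNAP_INCREMENT_DEGREES: f32 = 45.0;

/// `clicks` is +1 for a right press and -1 for a left press.
fn snap_turn(current_yaw_degrees: f32, clicks: i32) -> f32 {
    (current_yaw_degrees + clicks as f32 * SNAP_INCREMENT_DEGREES).rem_euclid(360.0)
}

fn main() {
    let mut yaw = 0.0_f32;
    yaw = snap_turn(yaw, 1);  // one right press: 45 degrees
    yaw = snap_turn(yaw, -2); // two left presses: wraps around to 315 degrees
    println!("camera yaw: {} degrees", yaw);
}
```

The point is only that the rotation is applied in one step; whether the increment is 45 or 30 degrees matters less than never easing the camera between orientations.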

To help complete the illusion I suggest subtle sound effects for movement, selection, and recentering. Just make sure they aren't very noticeable.

Epic Roller Coaster

This is a roller coaster simulator, except it lets you do things that a real roller coaster can’t, such as jumping between tracks and being chased by dinosaurs. To start, you have pointer interaction across three panels: left, center, right. Everything has hover/rollover effects with sound. During the actual roller coaster ride you are literally a camera on rails. Press the trigger to start, and then the camera moves at a constant speed. All you can do is look around. Speed and angle changes made me dizzy and I had to take it off after about five minutes, but my 7-year-old loves Epic Roller Coaster.

Space Time

A PBS app that teaches you about black holes, the speed of light, and other physics concepts. You use pointer interaction to click buttons, then watch non-interactive 3D scenes, though they are in full 360 3D rather than plain movies.

Within Videos

A collection of many 360 and 3D movies. Pointer interaction to pick videos, some scrolling with touch gestures. Then non-interactive videos except for the video controls.

Master Work Journeys

Explore famous monuments and locations like Mount Rushmore. You can navigate around 360 videos by clicking on hotspots with the pointer. Some trigger photos or audio. Others are teleportation spots. There is no free navigation or free teleportation, only to the hotspots. You can adjust the camera with left and right swipes, though.

Thumper

An intense driving and tilting music game. It uses pointer control for menus. In the game you run at a constant speed. The track itself turns but you are always stable in the middle. Particle effects stream at you, reinforcing the illusion of the tube you are in.

Bait

A fishing simulator. You use pointer clicks for navigation in the menus. The main interaction is a fishing pole. Hold and then release the button at the right time while flicking the pole forward to cast, then click to reel it back in.

Dinosaurs

Basically like the roller coaster game, but you learn about various dinosaurs by riding a constant-speed rail car to different scenes. It felt better to me than Epic Roller Coaster because the velocity is constant, rather than changing.

Breaking Boundaries in Science

Text overlays with audio and a 360 background image. You can navigate through full 3D space by jumping to hard-coded teleport spots. You can click certain spots to get to these points, hear audio, and lightly interact with artifacts. If you look up at an angle you see a flip-180 button to change the view. This avoids the problem of having to be in a swivel chair to navigate around. You cannot adjust the camera with left/right swipes.

WonderGlade

In every scene you float over a static mini-landscape, sort of like you are above a game board. You cannot adjust the angle or camera, just move your head to see things. Everything is laid out around you for easy viewing from the fixed camera point. Individual mini games may use different mechanics, but they all use the same camera. Essentially the camera and world never move. You can navigate your player character around the board by clicking on spots, similar to an RTS like StarCraft.

Starchart

Menus are a static camera view with mouse interaction. Once inside a star field you are at the center and can look in any direction of the virtual night sky. Swipe left/right to move the camera 45 degrees; the change happens instantly rather than as a smooth move, though there are sound effects.
Click on a star or other object in the sky to get more info. The info appears attached to your controller. Rotate your thumb on the touch area to get different info about the object. The info includes a model of the object, either a full 3D model of a star or planet, or a 2D image of a galaxy, etc.

Lila’s Tail

Mouse menu interaction. In-game, the level is a maze mapped onto a cylinder surrounding you. You click to guide Lila through the maze; sometimes she must slide from one side across the center to the other side. You guide her and the spotlight with your gaze. You activate things by connecting dots with the pointer. There is no way I can see to adjust the camera. This is a bit annoying in the levels that require you to navigate a full 360 degrees. I really wish it had recentering.

Overworld Underlord

A mini RTS / tower defense game. The camera is at a fixed position above the board. The boards are designed to wrap around the camera, so you turn left or right to see the whole board. Control your units by clicking on them and then clicking a destination.

Claro

A puzzle game where you lightly move things around to complete a sunlight-reflecting mechanism. The camera is fixed and the board is always an object in front of you. You rotate the board with left/right swipes on the touchpad. You move the sun by holding the trigger and moving the pointer around. Menus use mouse cursor navigation. The board is always in front of you, but extra info like the level you are on and your score is to the left or right of you. Interestingly, these are positioned far enough to the sides that you won't see the extra info while solving a puzzle. Very nice. You are surrounded by a pretty skybox that changes colors as the sun moves.

Weaver

Weaver is a 360 photo viewer that uses a gaze cursor to navigate. Within the photos you cannot move, just rotate your head. If you look down, a floating menu appears to go to the next photo or the main menu.

Ocean Rift

This is a nice underwater simulation of a coral reef. Use the pointer for menus and navigate around undersea with the controller. The camera moves fairly slowly but does have some acceleration, which made me a little sick. There is no camera rotation or recentering; just turn your head.

Fancy Beats

Rhythm game. Lights on the game board make it look like you are moving forward with your bowling ball, or that the board is moving backward. Either way it’s at a constant speed. Use touchpad interactions to control your ball to the beat.

Endspace

In Endspace you fly a space fighter into battle. There is a static cockpit around you, and the game uses head direction to move the camera around. The controller is used to aim the weapon. I could only handle this for about 60 seconds before I started to feel sick. Basically everything is moving around you constantly in all directions, so I immediately started to feel floaty.

Lands End

You navigate by jumping to teleportation spots using a gaze cursor. When you teleport, the camera moves to the new spot at a constant velocity. Because the movement is slow and the horizon stays level I didn’t get too queasy, but it’s still not as comfortable as instant teleportation. On the other hand, instant teleportation might make it hard to know where you just moved to. Losing spatial context would be bad in this game. You can rotate your view using left and right swipes.

Jurassic World Blue

This is a high-resolution 360 movie following a dinosaur from the newest Jurassic Park movie. The camera is generally fixed, though it sometimes moves very slowly along a line to take you toward the action. I never experienced any dizziness from the movement.

Dark Corner

Spooky short films in full 360. In the one I watched, The Office, the camera did not move at all, though things did sometimes come from angles away from where they knew the viewer would be looking. This is a very clever way to do jump scares without controlling the camera.

Maze VR Ultimate Pathfinding

You wander around a maze trying to find the exit. I found the control scheme awkward. You walk forward in whatever direction you gaze in for more than a moment, or when you press a button on the controller. The direction is always controlled by your gaze, however. The movement speed ramps from stationary to full speed over a second or so. I would have preferred not to have the ramp-up time. Also, you can’t click left or right on the controller trackpad to shift the view. I’m guessing this was originally built for Cardboard or similar devices.

Dead Shot

A zombie shooter. The Oculus Store has several of these. Dead Shot has both a comfort mode and a regular mode. In regular mode the camera does move, but at a slow and steady pace that didn’t give me any sickness. In comfort mode the camera never moves. Instead, it teleports to the new location, including a little eye-blink animation for the transition. Nicely done! To make sure you don’t get lost, it only teleports to nearby locations you can see.

Pet Lab

A creature creation game. While there are several rooms, all interaction happens from fixed positions where you look around you. You travel to the various rooms by pointing and clicking on the door that you want to go to.

Dead and Buried

Shoot ghosts in an old west town. You don’t move at all in this game. You always shoot from fixed camera locations, similar to a carnival game.

Witchblood

This is actually a side scroller with a set that looks like little dollhouses that you view from the side. I’d say it was cute except that there are monsters everywhere. In any case, you don’t move the camera at all except to look from one end of a level to the other.

Affected : The Manor

A game where you walk around a haunted house. The control scheme is interesting. You use the trigger on the controller to move forward; however, the direction is controlled by your head view. The direction of the controller is used for your flashlight. I think it would be better if it were reversed: use the controller direction for movement and your head for the flashlight. Perhaps there’s a point later in the game where their decision matters. I did notice that the speed is constant; you are either moving or not. I didn’t experience any discomfort.

Tomb Raider: Lara’s Escape

This is a little puzzle adventure game that takes you to the movie’s trailer. For being essentially advertising, it was surprisingly good. You navigate by pointing at and clicking on glowing lights that are trigger points. These then move you toward that spot. The movement is at a constant speed, but there is a slight slowdown when you reach the trigger point instead of an immediate stop. I felt a slight tinge of sickness, but not much. In other parts of the game you climb by pointing and clicking on hand holds on a wall. I like how they used the same mechanic in different ways.

Dreadhalls

A literal dungeon crawler where you walk through dark halls looking for clues to find the way out. This game uses the trigger to collect things and the forward button on the touchpad to move forward. It uses the direction of the controller for movement rather than head direction. This means you can move sideways. It also means you can smoothly move around twisty halls if you are sitting in a swivel chair. I like it more than the way Affected does it.

World of Wonders

This game lets you visit various ancient wonders and wander around among their citizens. You navigate by teleporting to wherever you point. You can swipe left or right on the touchpad to rotate your view, though I found it a bit twitchy. Judging from the in-game tutorial, World of Wonders was originally designed for the Gear VR, so perhaps it’s not calibrated for the Oculus Go.

Short-distance teleporting is fine when you are walking around a scene, but to get between scenes you click on the sky to bring up a map, which then lets you jump to the other scenes. Within a scene you can also click on various items to learn more about them.

One interesting interaction is that sometimes characters in the scenes will talk to you and ask you questions. You can respond yes or no by nodding or shaking your head. I don’t think I’ve ever seen that in a game before. Interestingly, nods and shakes are not universal; different cultures use these gestures differently.

Rise of the Fallen

A fighting game where you slash at enemies. It doesn’t appear that you move at all, just that enemies attack you and you attack back with melee weapons.

Vendetta Online VR

Spaceship piloting game. This seems to be primarily a multiplayer game, but I did the single-player training levels to learn how to navigate. All action takes place in the cockpit of a spaceship. You navigate by targeting where you want to go and tilting your head. Once you have picked a target you press turbo to go there quickly. Oddly, the star field is fixed while the cockpit floats around you. I think this means that if I wanted to go backwards I’d have to completely rotate myself around. Better have a swivel chair!

Smash Hit

A game where you smash glass things. The camera is on rails, moving quickly straight forward. I was slightly dizzy at first because of the speed but quickly got used to it. You press the trigger to fire in the direction your head is pointing. It doesn’t use the controller orientation; I’m guessing this is a game originally designed for Cardboard. The smashing of objects is satisfying, and there are interesting challenges as further levels have more and more stuff to smash. There is no actual navigation because you are on rails.

Apollo 11 VR

A simulation of the moon landing with additional information about the Apollo missions. Mostly this consists of watching video clips or cinematics, in which the camera is moved around a scene, such as going up the elevator of the Saturn V rocket. In a few places you can control something, such as docking the spaceship to the LEM. The cinematics are good, especially for a device as graphically limited as the Go, but I did get a twinge of dizziness whenever the camera accelerated or decelerated. Largely, you are in a fixed position with zero interaction.

QMO: Firefox 65 Beta 10 Testday, January 11th

Hello Mozillians,

We are happy to let you know that on Friday, January 11th, we are organizing the Firefox 65 Beta 10 Testday. We’ll be focusing our testing on Firefox Monitor, Content Blocking, and the Find Toolbar.

Check out the detailed instructions via this etherpad.

No previous testing experience is required, so feel free to join us on the #qa IRC channel, where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better!

See you on Friday!

The Mozilla Blog: MOSS 2018 Year in Review

Mozilla was born out of, and remains a part of, the open-source and free software movement. Through the Mozilla Open Source Support (MOSS) program, we recognize, celebrate, and support open source projects that contribute to our work and to the health of the internet.

2018 was a year of change and growth for the MOSS program. We worked to streamline the application process, undertook efforts to increase the diversity and inclusion of the program, and processed a record number of MOSS applications. The results? In total, MOSS provided over $970,000 in funding to over 40 open-source projects over the course of 2018. For the first time since the beginning of the program, we also received the majority of our applications from outside of the United States.

2018 highlights

While all MOSS projects advance the values of the Mozilla Manifesto, we’ve selected a few that stood out to us this year:

    • SecureDrop — $250,000 USD
      • SecureDrop is an open-source whistleblower submission system that media organizations can install to securely accept documents from anonymous sources. It was originally built by the late Aaron Swartz and is used by newsrooms all over the world, including those at The Guardian and the Associated Press. In 2018, MOSS gave its second award to SecureDrop; to date, the MOSS program has supported SecureDrop with $500,000 USD in funding.
    • The Tor Project — $150,000 USD
      • Tor is free software and an open network that helps defend against traffic analysis, a form of network surveillance that threatens personal freedom and privacy, confidential business activities and relationships, and state security. In 2018, MOSS gave its second award to help modularize key aspects of the Tor codebase; to date, the MOSS program has supported this work with $300,000 USD in funding.
    • The Processing Foundation — $69,700 USD
      • The Processing Foundation maintains p5.js, an open-source JavaScript framework that makes creating visual media with code on the web accessible to anyone, especially those without traditional computer science backgrounds. p5.js enables users to quickly prototype interactive applications, data visualizations, and narrative experiences, and share them easily on the web.
    • Dat Project — $34,000 USD
      • Dat is a nonprofit-backed data sharing protocol for applications of the future. With software built for researchers and data management, Dat empowers people with decentralized data tools. MOSS provided $34,000 USD in funding to Dat for community-building, documentation, and tooling.

Seed Awards

With an eye toward broadening participation in the MOSS program and reaching new audiences, the MOSS team decided to try something new at this year’s Mozilla Festival in London: we invited Festival attendees who work on open-source projects to join us for an event we called “MOSS Speed Dating.” For the event, we established a special MOSS committee, comprised of existing committee members, Mozilla staff, and leaders in the open-source world. Attendees were invited to “pitch” their project to three different committee members for 10 minutes each. Following the event, the committee met to discuss which projects best exemplified the qualities we look for in all MOSS projects (openness, impact, alignment with the Mozilla mission) and provided each of the most promising projects with a $5,000 seed grant to help support future development. While many of these projects are less mature than the projects we’d support with a larger, traditional MOSS award, we hope that these seed awards will assist them in growing their codebases and communities.

The 14 projects that the committee selected were:

Looking forward to 2019

In 2019, we hope to double down on our efforts to widen the applicant pool for MOSS and support a record number of projects from a diverse set of maintainers around the globe. Do you know of an open-source project in need of support whose work advances Mozilla’s mission? Please encourage them to apply for a MOSS award!

The post MOSS 2018 Year in Review appeared first on The Mozilla Blog.

Mozilla Gfx Team: WebRender newsletter #34

Happy new year! I’ll introduce WebRender’s 34th newsletter with a rather technical overview of a neat trick we call primitive segmentation. In previous posts I wrote about how we deal with batching and how we use the depth buffer both as a culling mechanism and as a way to save memory bandwidth. As a result, pixels rendered in the opaque pass are much cheaper than pixels rendered in the blend pass. This works great with rectangular opaque primitives that are axis-aligned, since they don’t need anti-aliasing. Anti-aliasing, however, requires us to do some blending to smooth the edges, and rounded corners have some transparent pixels. We could tessellate a mesh that covers exactly the rounded primitive, but we’d still need blending for the anti-aliasing of the border. What a shame: rounded corners are so common on the web, and they are often quite big.

Well, we don’t really need to render whole primitives in one go. For a transformed primitive we can always extract the opaque part of the primitive and render the anti-aliased edges separately. Likewise, we can break rounded rectangles up into smaller opaque rectangles and the rectangles that contain the corners. We call this primitive segmentation, and it helps at several levels: opaque segments can move to the opaque pass, which means we get good memory bandwidth savings and better batching, since batching complexity is mostly affected by the amount of work to perform during the blend pass. This also opens the door to interesting optimizations. For example, we can break a primitive into segments depending not only on the shape of the primitive itself, but also on the shape of the masks that are applied to it. This lets us create large rounded-rectangle masks where only the rounded parts occupy significant amounts of space in the mask. More generally, there are a lot of complicated elements that can be reduced to simpler or more compact segments by applying the same family of tricks and rendered as nine-patches or some more elaborate patchwork of segments (for example, the box-shadow of a rectangle).
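
To make the idea concrete, here is a small Rust sketch (hypothetical types and function, not WebRender’s actual code) of segmenting an axis-aligned rounded rectangle: the four corner squares are the only parts that need blending, while the edge strips and the interior can go to the opaque pass. It ignores the anti-aliased-edge case for transformed primitives and assumes a single uniform corner radius.

```rust
// Hypothetical sketch of rounded-rectangle segmentation (not WebRender's
// real data structures): split the primitive into four corner segments
// that need blending and three fully opaque rectangles that can go to
// the cheaper opaque pass.

#[derive(Debug, Clone, Copy)]
struct Rect {
    x: f32,
    y: f32,
    w: f32,
    h: f32,
}

#[derive(Debug)]
struct Segment {
    rect: Rect,
    needs_blending: bool,
}

fn segment_rounded_rect(prim: Rect, radius: f32) -> Vec<Segment> {
    let r = radius.min(prim.w / 2.0).min(prim.h / 2.0);
    let mut segments = Vec::new();

    // The four corner squares contain transparent pixels, so they must
    // be rendered in the blend pass.
    let corners = [
        (prim.x, prim.y),                           // top-left
        (prim.x + prim.w - r, prim.y),              // top-right
        (prim.x, prim.y + prim.h - r),              // bottom-left
        (prim.x + prim.w - r, prim.y + prim.h - r), // bottom-right
    ];
    for &(cx, cy) in corners.iter() {
        segments.push(Segment {
            rect: Rect { x: cx, y: cy, w: r, h: r },
            needs_blending: true,
        });
    }

    // The strips between the corners and the middle band are fully
    // opaque, so they benefit from depth-buffer culling and cheaper
    // batching in the opaque pass.
    for rect in [
        Rect { x: prim.x + r, y: prim.y, w: prim.w - 2.0 * r, h: r },              // top strip
        Rect { x: prim.x + r, y: prim.y + prim.h - r, w: prim.w - 2.0 * r, h: r }, // bottom strip
        Rect { x: prim.x, y: prim.y + r, w: prim.w, h: prim.h - 2.0 * r },         // middle band
    ] {
        segments.push(Segment { rect, needs_blending: false });
    }

    segments
}
```

In the real pipeline the corner segments would also carry clip and mask information; the point here is only how much of a big rounded primitive can escape the blend pass entirely.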

Segmented primitives

The way we represent this on the GPU is to pack all of the primitive descriptions in a large float texture. For each primitive we first pack the per-primitive data followed by the per-segment data. We dispatch instanced draw calls where each instance corresponds to a segment’s quad. The vertex shader finds all of the information it needs from the primitive offset and segment id of the quad it is working on.
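
Here is a rough Rust sketch of that packing scheme (hypothetical names; the real code is more involved): per-primitive data goes into one big flat float buffer, immediately followed by the per-segment data, and each instanced quad only carries the primitive’s offset into that buffer plus its segment index.

```rust
// Hypothetical sketch of packing primitive and segment data for the GPU
// (not WebRender's actual types). The float buffer is what would be
// uploaded as the large float texture; the instance records are what
// each instanced quad receives.

struct GpuBuffer {
    data: Vec<f32>,
}

/// One record per instanced quad.
#[derive(Debug, Clone, Copy)]
struct SegmentInstance {
    prim_offset: u32,   // where this primitive's data starts in the buffer
    segment_index: u32, // which of the primitive's segments this quad is
}

impl GpuBuffer {
    fn new() -> Self {
        GpuBuffer { data: Vec::new() }
    }

    /// Packs one primitive followed by its segments and returns the
    /// instance records to feed to the instanced draw call.
    fn push_primitive(
        &mut self,
        prim_data: &[f32],     // e.g. local rect, color, clip address
        segments: &[[f32; 4]], // e.g. one local rect per segment
    ) -> Vec<SegmentInstance> {
        let prim_offset = self.data.len() as u32;

        // Per-primitive data first...
        self.data.extend_from_slice(prim_data);

        // ...followed by the per-segment data.
        for seg in segments {
            self.data.extend_from_slice(seg);
        }

        (0..segments.len() as u32)
            .map(|segment_index| SegmentInstance { prim_offset, segment_index })
            .collect()
    }
}
```

With a fixed layout like this, the vertex shader can reconstruct the address of its segment from the primitive offset, the size of the per-primitive block, and the segment index times the per-segment stride, which is all the information each instance needs.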

The idea of breaking complicated primitives up into simpler segments isn’t new or groundbreaking, but I think it is worth mentioning in the context of WebRender because of how well it integrates with the rest of our rendering architecture.

Notable WebRender and Gecko changes

  • Jeff fixed some issues with blob image recoordination.
  • Dan improved the primitive interning mechanism in WebRender.
  • Kats fixed a bug with position:sticky.
  • Kats fixed a memory leak.
  • Kats improved the CI.
  • Kvark fixed a crash caused by empty regions in the texture cache allocator.
  • Kvark fixed a division by zero in a shader.
  • Matt improved the frame scheduling logic.
  • Matt fixed a hit-testing issue with opacity:0 divs.
  • Matt fixed a blob image validation issue.
  • Matt improved the performance of text DrawTargets.
  • Matt prevented opacity:0 animation from generating lots of CPU work.
  • Matt fixed a pixel snapping issue.
  • Matt reduced the number of YUV shader permutations.
  • Lee fixed a bug in the FreeType font backend that caused all sub-pixel AA text to be shifted by a pixel.
  • Lee implemented font variation on Linux.
  • Emilio fixed a clipping issue allowing web content to draw over the tab bar.
  • Emilio fixed a border rendering corruption.
  • Glenn added support for picture caching when the content rect changes between display lists.
  • Glenn fixed some picture caching bugs (2, 3, 4, 5).
  • Glenn removed redundant clustering information.
  • Glenn fixed a clipping bug.
  • Sotaro and Bobby lazily initialized D3D devices.
  • Sotaro fixed a crash on Wayland.
  • Bobby improved memory usage.
  • Bobby improved some of the debugging facilities.
  • Bobby shrunk the size of some handles using NonZero.
  • Bobby improved the shader hashing speed to help startup.
  • Glenn fixed a picture caching bug with multiple scroll roots.
  • Glenn improved the performance of picture caching.
  • Glenn followed up with more picture caching improvements.

Ongoing work

The team is going through the remaining release blockers.

Enabling WebRender in Firefox Nightly

In about:config, set the pref “gfx.webrender.all” to true and restart the browser.

Reporting bugs

The best place to report bugs related to WebRender in Firefox is the Graphics :: WebRender component in bugzilla.

Note that it is possible to log in with a github account.

Open Policy & Advocacy: Kenya Considers Protection of Privacy and Personal Data

Mozilla applauds the government of Kenya for publishing the Data Protection Bill, 2018. This highly anticipated bill gives effect to Article 31 of the Constitution of Kenya, which protects the right to privacy, and, if passed, will be Kenya’s first data protection law.

Most notably, the bill includes:

  • An empowered and well resourced data protection commission with a high degree of independence from the government.
  • Strong obligations placed on data controllers and processors requiring them to abide by principles of meaningful user consent, collection limitation, purpose limitation, data minimization, and data security.
  • Robust protections for data subjects with the rights to rectification, erasure of inaccurate data, objection to processing of their data, as well as the right to access and to be informed of the use of their data, providing users with control over their personal data and online experiences.

This bill comes at a pivotal time. Kenya is a rapidly digitizing nation, with 46.6 million mobile subscribers and a penetration rate of 97.8%. Over 99% of Kenya’s internet subscribers access the internet via mobile phones. Several government services are now available only online, compelling citizens to provide personal data to access services like registration of births. Furthermore, the Registration of Persons Act requires demographic and biometric data to be contained in an electronic national identity card, which is crucial for everyday life. All these services have accelerated the collection and analysis of personal data, but the lack of a comprehensive data protection law exposes Kenyan citizens to the risk of misuse of their data.

This proposed law is therefore a welcome opportunity for the government to develop a model data protection framework that upholds individual privacy and safeguards the data of generations of Kenyans including those who are yet to come online. Kenya’s draft data protection legislation is clearly inspired by the EU’s General Data Protection Regulation, and Kenya is striving to be the first country to receive an “adequacy” determination from the European Commission — a certification that a country has strong privacy laws, and which allows Europeans’ data to be processed in that country and for companies in that jurisdiction to more easily enter European markets. This bill is also an important step toward fulfilling the African Union Convention on Cyber Security and Personal Data Protection, which calls for member states to adopt legal frameworks for data privacy and cybersecurity.

Mozilla’s comments on the Kenyan data protection bill can be found here. Our work on Kenya’s data protection bill builds on our strong commitment to user privacy and security, as can be seen both in the open source code of our products and in our policies. We believe that strong data protection laws are critical to protecting user rights and the private user data that companies and governments are entrusted with. We have been actively engaged in advocating for strong data protection laws in India, Brazil, the EU, and the US, and are enthusiastic to engage in this timely and historic debate in Kenya.

We believe that a strong data protection law must protect the rights of individuals with meaningful consent at its core. It must have strong obligations placed on data controllers and processors reflecting the significant responsibilities associated with collecting, storing, using, analyzing, and processing user data; and provide for effective enforcement by an empowered, independent, and well-resourced Data Protection Authority. We’re pleased to see all of these values included in the Kenyan data protection bill.

The bill was developed in open public consultations, another crucial pillar of the Kenyan constitution, which provides the public with the opportunity to take part in government and parliamentary decision-making processes. The consultations received wide-ranging comments from governments, the private sector, academia, civil society, and individuals. The result is a bill that Kenya should be proud of.

We commend the government for this thoughtful and thorough framework and urge Kenyan members of parliament to pass this critical data protection legislation and reconcile it with other statutes whose provisions threaten the good intentions of this bill. Without a data protection law, Kenyans’ private data is currently at risk.

With this legislation, Kenya is emerging as a leader in the digital economy and we hope this will serve as a positive example to the many other African governments that are currently considering data protection frameworks.

The post Kenya Considers Protection of Privacy and Personal Data appeared first on Open Policy & Advocacy.

Mozilla Add-ons Blog: January’s featured extensions

Pick of the Month: Auto Tab Discard

by Richard Neomy
Reduce memory usage by automatically hibernating inactive tabs.

“Wow! This add-on works like a charm. My browsing experience has improved greatly.”

Featured: Malwarebytes Browser Extension

by Malwarebytes Inc.
Enhance the safety and speed of your browsing experience by blocking malicious websites like fake tech support scams and hidden cryptocurrency miners.

“Malwarebytes is the best I have used to stop ‘Microsoft alerts’ and ‘Windows warnings’.”

If you’d like to nominate an extension for featuring, please send it to amo-featured [at] mozilla [dot] org for the board’s consideration. We welcome you to submit your own add-on!

The post January’s featured extensions appeared first on Mozilla Add-ons Blog.

Open Policy & Advocacy: India attempts to turn online companies into censors and undermines security – Mozilla responds

Last week, the Indian government proposed sweeping changes to the legal protections for “intermediaries”, which affect every internet company today. Intermediary liability protections have been fundamental to the growth of the internet as an open and secure medium of communication and commerce. Whether Section 79 of the Information Technology Act in India (under which these new rules are proposed), the EU’s E-Commerce Directive, or Section 230 of the US’ Communications Decency Act, these legal provisions ensure that companies generally have no obligation to actively censor, and only limited liability for their users’ illegal activities and postings until they know about them. In India, the landmark Shreya Singhal judgment clarified in 2015 that companies would only be expected to remove content when directed to do so by a court order.

The new rules proposed by the Ministry of Electronics and Information Technology (MEITY) turn this logic on its head. They propose that all “intermediaries”, ranging from social media and e-commerce platforms to internet service providers, be required to proactively remove “unlawful” user content, or else face liability for content on their platform. They would also deal a sharp blow to end-to-end encryption technologies, which are used to secure most popular messaging, banking, and e-commerce apps today, by requiring services to make information about the creators or senders of content available to government agencies for surveillance purposes.

The government has justified this move based on “instances of misuse of social media by criminals and anti-national elements”, citing lynching incidents spurred on by misinformation campaigns. We recognize that harmful content online – from hate speech and misinformation to terrorist content – undermines the overall health of the internet and stifles its empowering potential. However, the regulation of speech online necessarily calls into play numerous fundamental rights and freedoms guaranteed by the Indian constitution (freedom of speech, right to privacy, due process, etc), as well as crucial technical considerations (‘does the architecture of the internet render this type of measure possible or not’, etc). This is a delicate and critical balance, and not one that should be approached with such maladroit policy proposals.

Our five main concerns are summarised here, and we will build on these for our filing to MEITY:

  1. The proactive obligation on services to remove “unlawful” content will inevitably lead to over-censorship and chill free expression.
  2. Automated and machine-learning solutions should not be encouraged as a silver bullet to fight against harmful content on the internet.
  3. One-size-fits-all obligations for all types of online services and all types of unlawful content is arbitrary and disproportionately harms smaller players.
  4. Requiring services to decrypt encrypted data weakens overall security and contradicts the principles of data minimisation endorsed in MEITY’s draft data protection bill.
  5. Disproportionate operational obligations, like mandatorily incorporating in India, are likely to spur market exit and deter market entry for SMEs.

We do need to find ways to hold social media platforms to higher standards of responsibility, and we acknowledge that building rights-protective frameworks for tackling illegal content on the internet is a challenging task. However, whittling down intermediary liability protections and undermining end-to-end encryption are blunt and disproportionate tools that fail to strike the right balance. We stress that any regulatory intervention on this complex issue must be preceded by a wide-ranging and participatory dialogue. We look forward to continued constructive engagement with MEITY and other stakeholders on this issue.

The post India attempts to turn online companies into censors and undermines security – Mozilla responds appeared first on Open Policy & Advocacy.

The Mozilla Thunderbird Blog: Thunderbird in 2019

From the Thunderbird team, we wish you a Happy New Year! Welcome to 2019. In this blog post we’ll look at what we accomplished in 2018 and look ahead to what we’re going to be working on this year.

Looking Back on 2018

More Eggs in the Nest

Our team grew considerably in 2018, to eight staff working full-time on Thunderbird. At the beginning of this year we are going to be adding as many as six new members to our team. Most of these people, with the exception of this author (Ryan Sipes, Community Manager), are engineers who will be focused on making Thunderbird more stable, faster, and easier to use (more on this below).

The primary reason we’ve been able to do this is an increase in donors to the project. We hope that anyone reading this will consider giving to Thunderbird as well. Donations from individual contributors are our primary source of funding, and we greatly appreciate all our supporters who made this year so successful!

Thunderbird 60

We released the latest ESR, Thunderbird 60, which saw many improvements in security, stability, and the app’s interface. Beyond big upgrades to core Thunderbird, Thunderbird’s calendar saw many improvements as well.

For the team this was also a big learning opportunity. We heard from users who upgraded and loved the improvements, and we heard from users who encountered issues with legacy add-ons or other changes that hurt their workflow.

We listened, and will continue to listen. We’re going to build upon what made Thunderbird 60 a success and work to address the concerns of those users who experienced issues with the update. Hiring more staff (as mentioned above) will go a long way toward having the manpower needed to build even better releases going forward.

A Growing Community

Early in the year, a couple of members of the Thunderbird team visited FOSDEM. From then on we worked hard to assure our users and contributors that Thunderbird was spreading its wings and flying high again.

That work was rewarded when folks came to help us out. The folks at Ura Design worked with us on a few initiatives, including a style guide and user testing. They’ve also joined us in creating a new UX team, which we very much expect to grow with a dedicated UX designer/developer on staff in the new year. If you are interested in contributing or following along, you can join the UX team mailing list here.

We heard from many users who were excited about the new energy that’s been injected into Thunderbird. I received many emails detailing what our userbase loved about Thunderbird 60 and what they’d like to see in future releases. Some even said they’d like to get involved, so we made a page with information on how to do that.

We still have some areas to improve on this year, with one of them being onboarding core contributors. Thunderbird is a big, complex project that isn’t easy to jump into. So, as we closed out the year I opened a bug where we can detail what documentation needs to be created or updated for new members of the community – to ensure they can dive into the project.

Plans for 2019

So here we are in 2019. Looking into the future, this year looks bright for the Thunderbird project. As I pointed out earlier in this post, we start the new year by hiring new staff for the Thunderbird team, which will put us at as many as 14 full-time members. This opens up a world of possibilities for what we are able to accomplish, and I will detail some of those goals now.

Making Thunderbird Fly Faster

Our hires are already addressing technical debt and doing a fair bit of plumbing when it comes to Thunderbird’s codebase. Our new hires will also be addressing UI slowness and general performance issues across the application.

This is an area where I think we will see some of the best improvements in Thunderbird for 2019, as we look into methods for testing and measuring slowness – and then put our engineers on architecting solutions to these pain points. Beyond that, we will be looking into leveraging new, faster technologies in rewriting parts of Thunderbird as well as working toward a multi-process Thunderbird.

A More Beautiful (and Usable) Thunderbird

We have received considerable feedback asking for UX/UI improvements and, as teased above, we will work on this in 2019. With the addition of new developers we will see some focus on improving the experience for our users across the board in Thunderbird.

For instance, one area of usability that we are planning to address in 2019 is integration improvements in various areas. One of those is better Gmail support; as Gmail is one of the biggest email providers, it makes sense to focus some resources on this area. We are looking at addressing Gmail label support and ensuring that other features specific to the Gmail experience translate well into Thunderbird.

We are looking at improving notifications in Thunderbird by better integrating with each operating system’s built-in notification system. With this work, Thunderbird will feel more “native” on each desktop and managing notifications from the app will be easier.

The UX/UI around encryption and settings will get an overhaul in the coming year. Whether or not all this work makes it into the next release is an open question, but as we grow our team this will be a focus. It is our hope to make encrypting email and keeping your communication private easier in upcoming releases; we’ve even hired an engineer who will focus primarily on security and privacy. Beyond that, Thunderbird can do a lot, so we’ll be looking into improving the experience around settings so that it is easier to find and manage what you’re looking for.

So Much More

There are still a few things to work out for a 2019 roadmap. But if you’d like to see a technical overview of our plans, take a look at this post on the Thunderbird mailing list.

Support Thunderbird

If you are excited about the direction that Thunderbird is headed and would like to support the project, please consider becoming a donor. We have a newsletter that donors receive with news and updates about the project (and awesome Thunderbird art). You can also make a recurring monthly gift to Thunderbird, which is much appreciated. It’s the folks that have given their time or donated that have made 2018 a success, and it’s your support that makes the future look bright for Thunderbird.

 

SeaMonkey: Happy New Year!

We, the SeaMonkey dev team, would like to wish everyone a very Happy, Healthy, Safe, and Prosperous New Year!

We do not know what’s in store for this small project, but we do hope to continue to work on it. It’s not going to be easy, and it certainly isn’t going to be an overnight turnaround. We wholeheartedly appreciate everyone’s patience, and we also appreciate the past support of those who’ve changed to a different browser.

Most of all, we’d like to take this opportunity to thank all those countless past developers who’ve moved on from this project. Their participation, contributions, and effort have helped us make this project better. We certainly miss their participation, but wish them the best of luck in whatever they choose to do.

:ewong

on behalf of the SeaMonkey Project.

 

hacks.mozilla.org: Mozilla Hacks’ 10 most-read posts of 2018

Must be the season of the list—when we let the numbers reveal what they can about reader interests and attention over the past 360-some days of Mozilla Hacks.

Our top ten posts ranged across a variety of categories – including JavaScript and WebAssembly, CSS, the Web of Things, and Firefox Quantum. What else does the list tell us? People like code cartoons!

I should mention that the post on Mozilla Hacks that got the most traffic in 2018 was written in 2015. It’s called Iterators and the for-of loop, and was the second of seventeen articles in an amazing, evergreen series, ES6 In Depth, crafted and written in large part by Jason Orendorff, a JavaScript engineer.

Today’s list is focused on the year we’re about to put behind us, and only covers the posts written in calendar year 2018.

  1. Ben Francis kicked off Mozilla’s Project Things with this post about the potential and flexibility of WoT: How to build your own private smart home with a Raspberry Pi and Mozilla’s Things Gateway. It’s the opener of a multi-part hands-on series on the Web of Things, from Ben and team.
  2. Lin Clark delivered A cartoon intro to DNS over HTTPS in true-blue code cartoon style.
  3. In April, she gave a brilliant exposition of ES modules in ES modules: A cartoon deep-dive.
  4. WebAssembly has been a consistently hot topic on Hacks this year: Calls between JavaScript and WebAssembly are finally fast 🎉.
  5. Don’t underestimate the importance of WebAssembly for making the web viable and performant. As 2018 opened, Lin Clark illustrated its role in the browser: Making WebAssembly even faster: Firefox’s new streaming and tiering compiler.
  6. Research engineer Michael Bebenita shared a Sneak Peek at WebAssembly Studio, his interactive visualization of WebAssembly.
  7. Developer Advocate Josh Marinacci, who’s focused on sharing WebVR and Mozilla Mixed Reality with web developers, wrote a practical post about CSS Grid for UI Layouts—on how to improve your app layouts to respond and adapt to user interactions and changing conditions, and always have your panels scroll properly.
  8. As the year began to wind down, we got a closer look at how the best is yet to come for WebAssembly in WebAssembly’s post-MVP future: A cartoon skill tree from Lin Clark, Till Schneidereit, and Luke Wagner.
  9. Potch delivered his Hacks swan song as November drew to a close. The Power of Web Components was years in the making and well worth the wait.
  10. Mozilla Design Advocate and Layout Land creator Jen Simmons walked us through the ins and outs of resilient CSS in this seven-part video series you won’t want to miss: How to Write CSS That Works in Every Browser, Even the Old Ones.

Thanks for reading and sharing Mozilla Hacks in 2018. Here’s to 2019. There’s so much to do.

It’s always a good year to be learning. Want to keep up with Hacks? Follow @mozhacks on Twitter or subscribe to our always informative and unobtrusive weekly Mozilla Developer Newsletter below.

The post Mozilla Hacks’ 10 most-read posts of 2018 appeared first on Mozilla Hacks - the Web developer blog.

Mozilla UXA bumpy road to Firefox ScreenshotGo in a foreign market

Back in January 2018, we were assigned to lead a project called New Product Exploration to help Mozilla grow its impact on the shopping experience in Southeast Asia, and the first stop was Indonesia, a country I had not visited and barely knew!

To quickly dive into the market and design a great product within six weeks, we adopted Lean UX thinking and continually fine-tuned the design process*, which proved successful on this journey. We also practiced the strategic approach Playing to Win** to fit the emerging market while staying aligned with our business model and goals. In short, we pushed forward and made our discoveries through the following design process:

1. Explore & learn: discover and prioritize user needs around shopping
2. Assumptions to validate: create and validate preliminary assumptions
3. Design & build prototype: develop the prototype through collaborative design
4. Test & iterate: run field research with users on the prototype, then iterate

Fine-tuned structure according to Lean UX process

Explore & learn

Though Indonesia is just a five-hour flight from Taiwan, I had little knowledge of its language, culture, values, etc. Fortunately, Ruby Hsu, our user researcher who had done extensive research and interviews in Indonesia, brought us solid observations and research findings as our starting point. Next, the team did extensive desk research to understand local shopping behaviors. With the research findings, we mapped the shopping journey to explore the opportunities and pain points.

A user journey is the sequence of events a user goes through, and the things they interact with, to reach their goal

 

According to the shopping journey, we synthesized five general directions for further exploration:
– Price comparison
– Save to Wish list
– Save Money Helper
– Reviews
– Cash Flow Management

For each track, we validated assumptions with quantitative questionnaires via JakPat, a mobile survey service in Indonesia. Around 200 participants, all online shoppers, were recruited for each survey. The surveys gave us fundamental knowledge, from participants’ daily lives to their specific shopping behaviors, across genders, ages, monthly spending, and so on. Surprisingly, the most significant pattern was that

screenshots served as the dominant tool for fulfilling most needs, like keeping wish lists, promotions, shopping history, and cash flows, which was really beyond our expectations.

Too much to do, but too little time. With so many different things going on, knowing how to prioritize effectively can be a real challenge. To help each member from the different disciplines become familiar with this knowledge, we held a workshop to develop a problem statement and a persona representing what we had learned from our research.

A persona is a representation of a type of target user

 

 

At the end of the workshop, the brainstorming participants helped the team identify and assess risks and values, which we used to place each direction in the Risk/Value Matrix. Considering the limited time and resources, “Save to wish list” came out with the lowest risk but the highest value, which departed from the usual Lean UX logic.

The Risk/Value Matrix prioritizes the potential ideas; Lean UX holds that the higher the risk and the greater the value, the higher the priority to test first
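To make that prioritization rule concrete, here is a tiny, hypothetical sketch; the risk and value scores below are invented for illustration and are not the team's actual ratings.

# Hypothetical illustration of the Lean UX risk/value rule: higher risk
# combined with higher value means test it sooner. Scores are invented.
directions = {
    "Price comparison":     {"risk": 4, "value": 3},
    "Save to Wish list":    {"risk": 1, "value": 5},
    "Save Money Helper":    {"risk": 3, "value": 2},
    "Reviews":              {"risk": 2, "value": 3},
    "Cash Flow Management": {"risk": 5, "value": 4},
}

# Rank by risk x value, descending: the riskiest high-value bets are tested first.
ranked = sorted(directions.items(),
                key=lambda item: item[1]["risk"] * item[1]["value"],
                reverse=True)

for name, score in ranked:
    print(f'{name:22} risk={score["risk"]} value={score["value"]} '
          f'priority={score["risk"] * score["value"]}')

In practice, with only six weeks available, the team went with the lowest-risk, highest-value direction instead: "Save to wish list".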

 

The team believed that creating a cross-channel wish list tool for online shoppers could be valuable, since it could help them track original product information and discover relevant items tailored to their taste.

Assumptions to validate

We had determined the direction; it was time to march forward. We invited representatives from all functional teams to create hypotheses and assumptions. The workshop consisted of four parts: business outcomes, users, user outcomes, and features. Finally, we mapped the material from those four aspects to create the feature hypotheses:

We believe this [business outcome] will be achieved if [these users] successfully achieve [this user outcome] with [this feature].

Cross-disciplinary workshop

Considering the persona, we prioritized and chose three hypotheses via a risk prioritization matrix. To speed up validation, we settled on the three most essential ideas, “universal wish list,” “organized wish list,” and “social wish list,” and developed surveys with storyboards to verify them.

Left to right: Universal wish list, Organized wish list, and Social wish list

The results revealed high demand for the first two ideas, while the last one required further validation of the potential need. Beyond our expectations once again, how Indonesians used screenshots was still the highlight of the survey results.

The screenshot was the existing, dominant tool they were used to for capturing everything, well beyond shopping needs.

Design & build prototype

With all the assumptions validated, we decided to develop an app to help Indonesians make good use of screenshots as a quick, universal information manager across various online sources. Furthermore, they could get back to those online sources through their collected screenshots.

At that point, we had to make our ideas tangible for testing. After a collaborative workshop with engineers on the early design, and continuous feasibility checks, I used Figma, a collaborative design tool, to quickly develop the fundamental information architecture and interaction details. By cooperating around the evolving UX wireframe, everyone could contribute their expertise simultaneously.

Collaborative UX spec in Figma

While Fang Shih, the UI designer, was busy turning the look and feel into the visual spec, Mark Liang, our prototyper, was coding the infrastructure and a high-fidelity prototype with Framer. Last but not least, Ricky Yu, the user researcher, took care of the research plan, recruiting, and the testing schedule in Indonesia.

Test & Iterate

With everything prepared, we flew to Indonesia to meet real users and listen to their inner thoughts. The research trip consisted of three sections. Among the eight recruited participants, the first four were interviewed to validate the unmet needs, with extra focus on their screenshot behaviors and mental models. We then took one day to iterate on the design and tested the remaining participants for concept feedback, such as the information architecture and usability.

Overview of testing

Each participant performed various tasks, such as demoing their screenshots and prioritizing top features, so that we could understand their feelings, thoughts, and behaviors. The entire research trip confirmed our observation about what screenshots mean to Indonesians:

“Almost everything I screenshot,” said one of our participants.

Shaping the product

As we analyzed the answers from the interviews, screenshot behaviors and pain points gradually emerged at each step of the screenshot process: triggering, storing, and retrieving.

Why did Indonesians like to screenshot?

Apps provided a better experience than websites, and even better prices in some e-commerce apps. Screenshots were a quick, universal tool for grabbing information across those apps. Besides, with unstable, slow, and limited data plans, users were more inclined to capture their online life in an offline screenshot rather than a hyperlink, which might take forever to load.

What pain points did Indonesians have in storing and retrieving?

The convenience of screenshots led to countless images in the gallery, which made it hard for users to locate the screenshot they needed. Even when a screenshot was found, the static image did not provide any digital data for relevant actions on the smartphone; for instance, users had to memorize the info on the screenshot and then search for the related content again.

In conclusion, for Indonesians, screenshots could be defined as a universal offline tool for capturing information across various apps for further exploration online. However, they were looking for an app that would let them readily find the needed screenshot among numerous images and make good use of screenshots to explore related knowledge and content. These validated findings shaped the blueprint of Firefox ScreenshotGo, an Android app that helps Indonesians easily capture and manage screenshots and explore more relevant information.

As for how we measured the market size and launched the product, allow me to cover the details in another post.

Firefox ScreenshotGo is only available in the Indonesian Google Play store, but you can still install it by following the instructions.

Why did Mozilla build Firefox ScreenshotGo?

It is a great question! Here I would like to briefly talk about how we adopted Playing to Win to arrive at the answer. The strategic narrative focused on fulfilling Mozilla’s mission. Users screenshotted their online life across app silos that offered limited or manipulated information. Mozilla aimed to encourage users to go back to the open web and freely search for more linked content, using those screenshots as bookmarks.

Strategic narrative

*The Lean UX process is a cycle of four actions: “Research & learning”, “Outcomes, assumptions, hypotheses”, “Design it” and “Create an MVP.”
**Playing to Win provides a step-by-step framework to develop a strategy.


QMOFirefox 65 Beta 6 Testday Results

Hello Mozillians!

As you may already know, last Friday December 21st – we held a new Testday event, for Firefox 65 Beta 6.

Thank you all for helping us make Mozilla a better place: priyadharshini A.

From the Bangladesh team: Sayed Ibn Masud, Osman Noyon, Alamin Shikder, Farhan Sadik Galib, Tanjia Akter Kona, Hossain Al Ikram, Basirul Fahad, Md. Majedul Islam, Sajedul Islam, Maruf Rahman and Forhad Hossain.
From the India team: Mohammed Adam and Adam24, Mohamed Bawas, Aishwarya Narasimhan@Aishwarya, Showkath begum.J and priyadharshini A.

Results:

– several test cases executed for the <notificationbox> & <notification> changes and Update Directory;
– bugs verified: 1501161, 1509277, 1511751, 1504268, 1501992, 1315509, 1510734, 1511954, 1509711, 1509889, 1511074, 1510734, 1506114, 1505801, 1450973, 1509889, 1511954, 1315509, 1501992, 1512047, 1237076;
– bugs confirmed: 1515995, 1515906;
– bug filed: 1516124;

Thanks for another successful testday! 🙂

Open Policy & AdvocacyPrivacy in practice: Mozilla talks “lean data” in India

How can businesses best implement privacy principles? On November 26th, Mozilla hosted its first “Privacy Matters” event in New Delhi, bringing together representatives from some of India’s leading and upcoming online businesses. The session was aimed at driving a practical conversation around how companies can better protect user data, and the multiple incentives to do so.

This conversation is timely. The European GDPR came into force this May and had ripple effects on many Indian companies. India itself is well on its way to having its first comprehensive data protection law. We’ve been vocal in our support for a strong law; see here and here for our submissions to the Indian government. Conducted with Mika Shah, Lead Product and Data Counsel at Mozilla Headquarters in Mountain View, the meeting saw participation from thirteen companies in India, ranging from SMEs to large conglomerates, including Zomato, Ibibo, Dunzo, Practo and Zeotap. There was a mix of representatives across the engineering, C-level, and legal/policy teams of these companies. The discussions were divided into three segments as per Mozilla’s Lean Data framework, covering the key topics “Engage Users”, “Stay Lean”, and “Build-in Security”.

Engage Users

The first segment of the discussion focused on how companies can better engage different audiences on issues of privacy. This ranges from making privacy policies more accessible and explaining data collection through “just-in-time” notifications to users, to better engaging investors and boards on privacy concerns in order to gain their support for implementing reforms. Many companies argued that providing more choices to the Indian user base throws up unique challenges, and that users can often be uninterested or careless when making choices about their personal data. This only reinforces the importance of user education, and companies agreed they could do more to effectively communicate about data collection, use, and sharing.

Stay lean

The second section was on the importance of staying “lean” with personal data rather than collecting, storing, and sharing indiscriminately. Most companies agreed that collecting and storing less personal data mitigates the risk of potential privacy leaks, breaches, and vulnerability to broad law enforcement requests. Staying lean does come with its own challenges, given that deleting data trails often comes at a high cost, or may be technically challenging when data has changed hands across vendors. It was agreed that there is a need for more innovative techniques to help pseudonymize or anonymize such datasets to reduce the risk of identification of end-users while maintaining the value of service. Despite these challenges, responsible companies should do their best to adhere to the principle of deleting data within their control, when no longer required.
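The post doesn’t prescribe a particular technique, but as one hedged illustration of what pseudonymizing a dataset can mean in practice, a keyed hash such as HMAC-SHA256, with the secret stored separately from the data, can replace a direct identifier with a stable pseudonym so that records can still be joined and analyzed. The field names below are invented for the example.

# Sketch of one common pseudonymization approach: replace a direct identifier
# with a stable keyed hash (HMAC-SHA256). The secret is held apart from the
# dataset, and the field names here are invented for illustration.
import hmac
import hashlib

SECRET_KEY = b"store-me-outside-the-dataset"  # placeholder secret

def pseudonymize(identifier: str) -> str:
    """Return a stable pseudonym for an identifier, keyed by SECRET_KEY."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"email": "user@example.com", "order_total": 499}
safe_record = {"user_pseudonym": pseudonymize(record["email"]),
               "order_total": record["order_total"]}
print(safe_record)

Unlike a plain hash, the pseudonyms cannot be recreated by hashing guessed identifiers unless the key also leaks, and deleting the key later effectively anonymizes the stored pseudonyms.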

Build-in security

The final segment covered key security features that could be built into these services. For many startups, the emphasis on security practices, especially relating to employee data access controls, has increased as they have grown in size. Participants in the event also spoke to concerns around the security practices of their vendors; these corporate partners often resist scrutiny of their security and/or are unwilling to negotiate terms, making it hard for companies to meet their obligations to their users and under the law.

Following the event, all of the participants confirmed that they’re intending to make changes to their privacy practices. It’s great to see such enthusiasm and commitment to protecting user privacy and championing these issues within their respective companies. We look forward to hosting further iterations of this event in India. For more information about the Lean Data Practices, see: https://www.leandatapractices.com/

 

The post Privacy in practice: Mozilla talks “lean data” in India appeared first on Open Policy & Advocacy.

Mozilla Add-ons BlogExtensions in Firefox 65

In lieu of the normal, detailed review of WebExtensions API coming out in Firefox 65, I’d like to simply say thank you to everyone for choosing Firefox. Now, more than ever, the web needs people who consciously decide to support an open, private, and safe online ecosystem.

Two weeks ago, nearly every Mozilla employee gathered in Orlando, Florida for the semi-annual all-hands meeting.  It was an opportunity to connect with remote teammates, reflect on the past year and begin sharing ideas for the upcoming year. One of the highlights was the plenary talk by Mitchell Baker, Chairwoman of the Mozilla Foundation. If you have not seen it, it is well worth 15 minutes of your time.

Mitchell talks about Firefox continually adapting to a changing internet, shifting its engagement model over time to remain relevant while staying true to its original mission. Near the end, she notes that it is time, once again, for Mozilla and Firefox to evolve, to shift from being merely a gateway to the internet to being an advocate for users on the internet.

Extensions will need to be part of this movement. We started when Firefox migrated to the WebExtensions API (only a short year ago), ensuring that extensions operated with explicit user permissions within a well-defined sandbox. In 2018, we made a concerted effort to not just add new APIs, but to also highlight when an extension was using those APIs to control parts of the browser. In 2019, expect to see us sharpen our focus on user privacy, user security, and user agency.

Thank you again for choosing Firefox, you have our deepest gratitude and appreciation. As a famous Mozillian once said, keep on rockin’ the free web.

-Mike Conca

Highlights of new features and fixes in Firefox 65:

A huge thank you to the community contributors in this release, including: Ben Armstrong, Oriol Brufau, Tim Nguyen, Ryan Hendrickson, Sean Burke, Yuki “Piro” Hiroshi, Diego Pino, Jan Henning, Arshad Kazmi, Nicklas Boman.

 

The post Extensions in Firefox 65 appeared first on Mozilla Add-ons Blog.