Mitchell Baker: New Mozilla Foundation Board Members: Mohamed Nanabhay and Nicole Wong

Today, I’m thrilled to announce that Mohamed Nanabhay and Nicole Wong have joined the Mozilla Foundation Board of Directors.

Over the last few years, we’ve been working to expand the boards for both the Mozilla Foundation and the Mozilla Corporation. Our goals for the Foundation board roles were to grow Mozilla’s capacity to move our mission forward; to expand the number and diversity of people on our boards; and to add specific skills in areas related to movement building and organizational excellence. Adding Mohamed and Nicole represents a significant move forward on these goals.

We met Mohamed about seven years ago through former board member and then-Creative Commons CEO Joi Ito. Mohamed was at Al Jazeera at the time and hosted one of Mozilla’s first Open News fellows. Mohamed currently serves as the Deputy CEO of the Media Development Investment Fund (MDIF), which invests in independent media around the world, providing the news, information and debate that people need to build free, thriving societies.

Nicole is an attorney specializing in Internet, media and intellectual property law. She served as President Obama’s deputy chief technology officer (CTO) and has also worked as vice president and deputy general counsel at Google, where she arbitrated issues of censorship. Nicole has already been active in helping Mozilla set up a new fellows program gathering people who have worked in government on progressive tech policy. That program launches in June.

Talented and dedicated people are the key to building an Internet as a global public resource that is open and accessible to all. Nicole and Mohamed bring expertise, dedication and new perspectives to Mozilla. I am honored and proud to have them as our newest Board members.

Please join me in welcoming Mohamed and Nicole to the Board. You can read more about why Mohamed chose to join the Board here, and why Nicole joined us here.

Mitchell

The Mozilla Blog: Why I’m joining the Mozilla Board by Mohamed Nanabhay

Mozilla has been at the forefront of shaping internet culture and fighting to keep the Internet open. Being able to join the Board and be of service to that mission is an honor as the open internet played such an important role in my life and my work.

My generation came online to the shrill beeps of our modems connecting to this network that represented endless possibilities. Those dial-up beeps provided the soundtrack to some of our deepest friendships and communities, crossing borders and breaking down barriers. I remember contributing to the SpreadFirefox.com campaign in 2004 to put an advert in the New York Times. That campaign epitomized the best of what we could achieve on the network – thousands coming together to promote an open source project, itself built by thousands more, that would go on to touch millions of people.

Mohamed Nanabhay, new Mozilla Foundation board member (photo credit: Joi Ito)

As the next billion come online, there are real questions about what sort of Internet they are coming online to. We know that it is most often through a mobile device (phone or tablet) and that often the first contact is through Facebook (including WhatsApp). Navigating the usage (what role does the browser play when most people are using apps?) and the social implications (will an even greater number of people confuse Facebook with the Internet?) is deeply important in the near term.

At Al Jazeera I was deeply focused on using social technologies to not only distribute news but also discover and amplify the voices of people most impacted by power. With Creative Commons I worked to launch a repository of broadcast quality video footage under the most permissive license. At Global Voices, bridge building and providing a nuanced understanding through a local lens is key to what we do. And at the Media Development Investment Fund (MDIF), we are deeply committed to funding the highest quality journalism in countries where there is a threat to press freedom.

This work has all really been about building bridges, connecting people, and amplifying voices. While our kids may never know the thrill of hearing a modem connection, I hope that we can work to ensure that the Internet remains open so they can use it to learn, build, and grow in the same way we did.

Mohamed Nanabhay is the Deputy CEO of the Media Development Investment Fund (MDIF), which invests in independent media around the world providing the news, information and debate that people need to build free, thriving societies. He is also the board chair of GlobalVoices.org and the former head of Al Jazeera English. Mohamed was appointed to the Mozilla Foundation board in April 2017.

The Mozilla Blog: Why I’m joining the Mozilla Board by Nicole Wong

It’s an honor for me to join the Mozilla Board. I’m so inspired by the Foundation’s mission and by the incredibly talented people that lead it. And, I’m looking forward to contributing to Mozilla’s plans to build out a leadership network focused on protecting the open Internet.

Though I’m still too new to the organization to be able to diagnose Mozilla’s biggest challenges, I think this is a really exciting and crucial time for Mozilla to develop products that really put users first. Today’s Internet users have complex needs, so I’ll be excited to see how the Mozilla community works to identify and solve them.

Nicole Wong, New Mozilla Foundation Board Member


Obviously, this is also a very challenging time to protect the Internet from the national and global trends toward authoritarianism, censorship and surveillance. Mozilla is in a great position to address some of those challenges.

During my career, I’ve had the privilege of working in both the private and public sector, but the consistent theme is focusing on the intersection of emerging technologies, law, and public policy. I have tried to build cultures, policies, and practices that are forward-leaning in the development and defense of a healthy internet. I’m looking forward to doing the same at the Foundation, helping build out the leadership network and focusing on emerging tech policy leaders.

Nicole is an attorney specializing in Internet, media and intellectual property law. She served as President Obama’s deputy chief technology officer (CTO) and has also worked as vice president and deputy general counsel at Google, where she arbitrated issues of censorship. She was appointed to the Mozilla Foundation board in April 2017.

Support.Mozilla.Org: Platform update & Q&A

Hey there, SUMO Nation!

As you may have noticed, we are (for the time being) back to the previous engine powering the support site at support.mozilla.org.

You can follow the latest updates and participate in the discussion about this here.

We are definitely present and following this discussion, noting your concerns and questions. We can provide you with answers and reassurance, even if we do not have ready-made solutions to some of the issues you are highlighting.

Since some of you may not be frequently visiting the forums, we would also like to make sure you can find the answers to some of the more burning questions asked across our community here, on our blog.

Q: Why is Mozilla no longer interested in using Kitsune, its own support platform?

The software engineers and project managers developing Kitsune were shifted to work on critical development needs in the Firefox browser. Kitsune also had only a handful of contributors to the code base. After calculating the time and money required to maintain our own platform, which were considerable and might have entailed a major overhaul, Mozilla decided that using a third-party solution was a better investment for the long-term future of Mozilla’s support needs. (To be honest, it was one of the hardest decisions we have made.) We also considered that Lithium has significant ongoing software development, which we think will lead to faster feature improvements than we might have achieved internally with Kitsune.

Q: Why is the new support platform still not providing all the functionality available in Kitsune?

Kitsune had been customized and hand-crafted from scratch by Mozillians and for Mozillians over a period of eight years.

No other platform in the market can offer the same level of compatibility with Mozilla’s mission and contribution methods without a serious investment of time and development power.

We have been working with Lithium for an extended period of time on matching the core functionality of Kitsune. This is a complex, lengthy, and ongoing process.

Due to technical differences in development and deployment of features between both platforms, complete feature parity may not be possible. We promise that over time we will push aggressively to close the feature gap and even to develop useful new features that were not present in Kitsune. We understand that many in the community feel that Kitsune is a better option and there are many things we love about Kitsune. We are hopeful that Lithium will grow on you and ultimately surpass Kitsune.

Q: How will you ensure that Mozilla’s image is not negatively influenced by issues with the support site now and in the future?

We will do our very best to provide the best support site and workflows we can within the budget we are allocated for support technology and tools. We are extremely serious about maintaining Mozilla’s good image and working with our community and users to ensure that Mozilla is viewed positively. We realize that changes in software and workflows may not work equally well for everyone, but we will do our best to help. We always have and always will appreciate the contributions from you, our community – and the fact that users choose to browse with Firefox.

Q: What can the community members do to help any of the above now and in the future?

First of all, please continue to contribute your time and hard work answering user questions. It’s the most valuable contribution you can make and one we greatly appreciate. Thank you.

Second, your ideas on how to improve the Mozilla support platform are something we always listen closely to, as you are in the system as much as we are. These can be new features or improvements to existing features (or adding back in older features), including improvements to the Lithium platform. We can’t promise that we will be able to include all requests in our roadmap but the community does drive our priorities and inform our decisions.

Please add these requests to the Platform meeting notes or file feature requests through Bugzilla (and make sure they are assigned to us). Please note that we already have several feature improvements lined up for development and deployment by Lithium. We will do what we can to keep the information flowing back and forth in a clear and organized manner.

As always, thank you for your continuous presence and support of Mozilla’s mission. We can’t make it happen without you.

All the best to you all!

The SUMO Team on behalf of Mozilla

Will Kahn-Greene: Using Localstack for a fake AWS S3 for local development

Summary

Over the last year, I rewrote the Socorro collector which is the edge of the Mozilla crash ingestion pipeline. Antenna (the new collector) receives crashes from Breakpad clients as HTTP POSTs, converts the POST payload into JSON, and then saves a bunch of stuff to AWS S3. One of the problems with this is that it's a pain in the ass to do development on my laptop without being connected to the Internet and using AWS S3.

This post covers the various things I did to have a locally running fake AWS S3 service.

Read more… (4 mins to read)

Gervase Markham: Don’t Pin To A Single CA

If you do certificate pinning, either via HPKP, or in your mobile app, or your IoT device, or your desktop software, or anywhere… do not pin solely to a single certificate, whether it’s a leaf certificate, intermediate or root certificate, and do not pin solely to certificates from a single CA. This is the height of self-imposed Single Point of Failure foolishness, and has the potential to bite you in the ass. If your CA goes away or becomes untrusted and it causes you problems, no-one will be sympathetic.
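As a concrete illustration, an HPKP policy that follows this advice carries at least two pin-sha256 values: one for the key you are serving today and one for a backup key, ideally one you could have certified by a different CA if the first becomes unusable. The pin values below are placeholders, not real hashes, and the header is shown wrapped here for readability (it is sent as a single header line):

Public-Key-Pins: pin-sha256="<hash of the key currently in use>";
                 pin-sha256="<hash of an offline backup key, usable with a different CA>";
                 max-age=2592000; includeSubDomains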

This Has Been A Public Service Announcement.

Ehsan Akhgari: Quantum Flow Engineering Newsletter #7

It’s time for another quick update about the recent Quantum Flow work.
I want to start by shedding some light on the performance of synchronous IPC caused by JavaScript.  Here is a breakdown report for data as of today similar to the previous ones.  Let’s look at the top 10 worst offenders:
  • All of the ones with names starting with Addons: or Content: are e10s compatibility shims that we have been using to make non-e10s compatible add-ons work with Firefox.  They essentially make synchronous XPCOM APIs work using sync IPC across process boundaries.  This is by far the majority of the issue here, and it skews all sorts of performance measurements that we do on Nightly.  We’re soon going to make changes to Nightly to disable running non-e10s compatible extensions so that we get better performance data.
  • The second one is AdblockPlus:Message, which presumably comes from one of the Adblock Plus extension variants.  What’s curious about this one is the super high count of messages, even though the median time isn’t that high.  And that the submission percentage is 100%!!
  • #9 is contextmenu.
Looking through more of these messages, there are a few more that stem from our code.  It’s a bit hard to spot everything because it is all mixed together.  Once traditional-style extensions are no longer loaded, a lot of these issues will go away, but for now they also make the data difficult to evaluate.  This is also solid practical data that can be used as input to API design in the future, on why some functionality such as sync IPC is very dangerous to expose at the API level in the first place!  Moving to a more modern extension model is really good for performance from this perspective.
The next topic I wanted to discuss was the issue of the usage of timers in our code.  Timers are terrible for responsiveness and are almost never what you want.  We sometimes use them to lazify some work (do this part of the initialization in 10 seconds!) or to do some periodic background task (refresh this cache every second) but they cause a lot of problems due to the fact that they can execute at unpredictable times.  From the perspective of improving the responsiveness of Firefox, if your goal is to keep the main thread as free as possible when the user’s input events are about to be handled, the last thing you want is to start running one of these timers right before the user clicks or types or something like that.  Gecko now supports the requestIdleCallback API which allows the caller to request a timer that only gets dispatched when the application is idle.  Where possible, you should consider re-examining the timers in your areas of the code and switching to idle dispatch where appropriate.  Note that this API is currently only available to contexts where the Window object is available.  Bug 1358476 is adding the functionality to nsThread and hopefully we can expose it to JSMs afterwards as well.
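
To make that concrete, here is a minimal sketch of converting a lazy-initialization timer to idle dispatch; initSpellCheckDictionaries is a made-up placeholder for whatever deferrable work the timer was doing.

// Hypothetical deferrable work that used to be scheduled with a plain timer.
function initSpellCheckDictionaries() {
  // ... expensive but non-urgent setup ...
}

// Before: fires at an arbitrary moment, possibly right before user input.
setTimeout(() => initSpellCheckDictionaries(), 10000);

// After: runs only when the main thread is idle, with a cap so the work
// still happens eventually even if the session never goes idle.
window.requestIdleCallback(deadline => {
  // deadline.timeRemaining() says how much idle time is left;
  // deadline.didTimeout is true if the 30 second cap forced the call.
  initSpellCheckDictionaries();
}, { timeout: 30000 });
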
On to the list of the acknowledgements for the week.  As usual, I hope I’m not forgetting anyone’s names here!

Air Mozilla: April Privacy Lab – The Future of Privacy and Artificial Intelligence (AI)

April Privacy Lab – The Future of Privacy and Artificial Intelligence (AI). Peter Eckersley, the Chief Computer Scientist for the Electronic Frontier Foundation (EFF), will discuss the new EFF initiative that he is leading on the policy,...

Air Mozilla: Localization Community Bi-Monthly Call, 27 Apr 2017

Localization Community Bi-Monthly Call. These calls will be held in the Localization Vidyo room every second (14:00 UTC) and fourth (20:00 UTC) Thursday of the month and will be...

Air Mozilla: Reps Weekly Meeting Apr. 27, 2017

Reps Weekly Meeting Apr. 27, 2017. This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

The Rust Programming Language Blog: Announcing Rust 1.17

The Rust team is happy to announce the latest version of Rust, 1.17.0. Rust is a systems programming language focused on safety, speed, and concurrency.

If you have a previous version of Rust installed, getting Rust 1.17 is as easy as:

$ rustup update stable

If you don’t have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.17.0 on GitHub.

What’s in 1.17.0 stable

The story of Rust 1.17.0 is mostly one of small, quality of life improvements. For example, the 'static lifetime is now assumed in statics and consts. When writing a const or static like this:

const NAME: &'static str = "Ferris";
static NAME: &'static str = "Ferris";

Rust 1.17 will allow you to elide the 'static, since that’s the only lifetime that makes sense:

const NAME: &str = "Ferris";
static NAME: &str = "Ferris";

In some situations, this can remove lots of boilerplate:

// old
const NAMES: &'static [&'static str; 2] = &["Ferris", "Bors"];

// new
const NAMES: &[&str; 2] = &["Ferris", "Bors"];

Another similar improvement is “field init shorthand.” Similar to ECMAScript 6, which calls this “Object Literal Property Value Shorthand”, duplication can be removed when declaring structs, like this:

// definitions
struct Point {
    x: i32,
    y: i32,
}

let x = 5;
let y = 6;

// old
let p = Point {
    x: x,
    y: y,
};

// new
let p = Point {
    x,
    y,
};

That is, the x, y form will assume that its values are set to a variable with the same name in its scope.

For another small quality of life improvement, it’s common for new Rustaceans to try to use + to add two &strs together. This doesn’t work; you can only add String + &str. As such, a new error message was added to help users who make this mistake:

// code
"foo" + "bar"

// old
error[E0369]: binary operation `+` cannot be applied to type `&'static str`
 --> <anon>:2:5
  |
2 |     "foo" + "bar"
  |     ^^^^^
  |
note: an implementation of `std::ops::Add` might be missing for `&'static str`
 --> <anon>:2:5
  |
2 |     "foo" + "bar"
  |     ^^^^^

// new
error[E0369]: binary operation `+` cannot be applied to type `&'static str`
 --> <anon>:2:5
  |
2 |     "foo" + "bar"
  |     ^^^^^
  |
  = note: `+` can't be used to concatenate two `&str` strings
help: to_owned() can be used to create an owned `String` from a string
reference. String concatenation appends the string on the right to the string on
the left and may require reallocation. This requires ownership of the string on
the left.
  |     "foo".to_owned() + "bar"

When using Cargo’s build scripts, you must set the location of the script in your Cargo.toml. However, the vast majority of people wrote build = "build.rs", using a build.rs file in the root of their project. This convention is now encoded into Cargo, and will be assumed if build.rs exists. We’ve been warning about this change for the past few releases, and you can use build = false to opt out.

This release marks the removal of the old Makefile based build system. The new system, announced in Rust 1.15, is written in Rust and primarily uses Cargo to drive the build. It is now mature enough to be the only build system.

As part of that change, packages from crates.io can now be used within Rust’s build system. The first one to be added was mdBook, and it’s now being used to render our various book-like documentation:

In addition, see those links to their respective repositories; they’ve been moved out of tree. Also, we’ve added a fourth book, still in-tree: The Unstable Book. This provides an overview of unstable features by name, contains links to their tracking issues, and may contain initial documentation. If there’s a feature you want to see stabilized, please get involved on its tracking issue!

A few releases ago, rustup stopped installing documentation by default. We made this change to save some bandwidth and because not all users want a copy of the documentation locally. However, this created a pitfall: some users did not realize that this changed, and would only notice once they were no longer connected to the internet. In addition, some users did want to have a local copy of the docs, regardless of their connectivity. As such, we’ve reverted the change, and documentation is being installed by default again.

Finally, while this release is full of improvements, there is one small step back we want to regretfully inform you about. On Windows, Visual Studio 2017 has been released, and Microsoft has changed the structure of how the software is installed. Rust cannot automatically detect this location, and while we were working on the necessary changes, they did not make it in time for this release. Until then, Visual Studio 2015 still works fine, or you can run vcvars.bat on the command line. We hope to make this work in a seamless fashion soon.

See the detailed release notes for more.

Library stabilizations

19 new bits of API were stabilized this release:

In other changes, Cell<T> used to require that T: Copy for many of its methods, but this has been relaxed significantly.

Box<T> now implements over a dozen new conversions with From.

SocketAddr and IpAddr have some new conversions as well. Previously, you may have written code like this:

"127.0.0.1:3000".parse().unwrap()

Now, you can write

SocketAddr::from(([127, 0, 0, 1], 3000))
// or even
([127, 0, 0, 1], 3000).into()

This removes some unnecessary run-time parsing, and is roughly as readable, depending on your preferences.

Backtraces now have nicer formatting, eliding some things by default. For example, the full backtrace:

thread 'main' panicked at 'explicit panic', foo.rs:2
stack backtrace:
   1:     0x55c39a23372c - std::sys::imp::backtrace::tracing::imp::write::hf33ae72d0baa11ed
                        at /buildslave/rust-buildbot/slave/stable-dist-rustc-linux/build/src/libstd/sys/unix/backtrace/tracing/gcc_s.rs:42
   2:     0x55c39a23571e - std::panicking::default_hook::{{closure}}::h59672b733cc6a455
                        at /buildslave/rust-buildbot/slave/stable-dist-rustc-linux/build/src/libstd/panicking.rs:351
   3:     0x55c39a235324 - std::panicking::default_hook::h1670459d2f3f8843
                        at /buildslave/rust-buildbot/slave/stable-dist-rustc-linux/build/src/libstd/panicking.rs:367
   4:     0x55c39a235afb - std::panicking::rust_panic_with_hook::hcf0ddb069e7beee7
                        at /buildslave/rust-buildbot/slave/stable-dist-rustc-linux/build/src/libstd/panicking.rs:555
   5:     0x55c39a22e866 - std::panicking::begin_panic::heb433e9aa28a7408
   6:     0x55c39a22e9bf - foo::main::hd216d4a160fcce19
   7:     0x55c39a23d44a - __rust_maybe_catch_panic
                        at /buildslave/rust-buildbot/slave/stable-dist-rustc-linux/build/src/libpanic_unwind/lib.rs:98
   8:     0x55c39a236006 - std::rt::lang_start::hd7c880a37a646e81
                        at /buildslave/rust-buildbot/slave/stable-dist-rustc-linux/build/src/libstd/panicking.rs:436
                        at /buildslave/rust-buildbot/slave/stable-dist-rustc-linux/build/src/libstd/panic.rs:361
                        at /buildslave/rust-buildbot/slave/stable-dist-rustc-linux/build/src/libstd/rt.rs:57
   9:     0x55c39a22e9e9 - main
  10:     0x7f5e5ed3382f - __libc_start_main
  11:     0x55c39a22e6b8 - _start
  12:                0x0 - <unknown>

is now instead

thread 'main' panicked at 'explicit panic', foo.rs:2
stack backtrace:
   0: std::sys::imp::backtrace::tracing::imp::unwind_backtrace
             at /checkout/src/libstd/sys/unix/backtrace/tracing/gcc_s.rs:49
   1: std::sys_common::backtrace::_print
             at /checkout/src/libstd/sys_common/backtrace.rs:71
   2: std::panicking::default_hook::{{closure}}
             at /checkout/src/libstd/sys_common/backtrace.rs:60
             at /checkout/src/libstd/panicking.rs:355
   3: std::panicking::default_hook
             at /checkout/src/libstd/panicking.rs:371
   4: std::panicking::rust_panic_with_hook
             at /checkout/src/libstd/panicking.rs:549
   5: std::panicking::begin_panic
   6: foo::main
   7: __rust_maybe_catch_panic
             at /checkout/src/libpanic_unwind/lib.rs:98
   8: std::rt::lang_start
             at /checkout/src/libstd/panicking.rs:433
             at /checkout/src/libstd/panic.rs:361
             at /checkout/src/libstd/rt.rs:57
   9: main
  10: __libc_start_main
  11: _start

That’s the default; you can set the environment variable RUST_BACKTRACE=full to get the full backtrace. We may be able to do more cleanup in the future; see this bug for more.

See the detailed release notes for more.

Cargo features

Other than the previously mentioned build.rs changes, Cargo has a few new improvements. cargo check --all and cargo run --package are two previously missing flags that are now supported.

You can now opt in to ignoring SSL revocation checks. The default is still to check, of course.

A new field in Cargo.toml, required-features, lets you specify specific features that must be set for a target to be built. Here’s an example: let’s say that we are writing a crate that interacts with databases, and that we support multiple databases. We might have this in our Cargo.toml:

[features]
# ...
postgres = []
sqlite = []
tools = []

The tools feature allows us to include extra tooling, and the postgres and sqlite features control which databases we want to support.

Previously, cargo build would attempt to build all targets, which is normally what you want. But what if we had a src/bin/postgres-tool.rs, that would only really be relevant if the postgres and tools features were enabled? Previously, we would have to write something like this:

#[cfg(not(all(feature = "postgres", feature = "tools")))]
fn main() {
    println!("This tool requires the `postgres` and `tools` features to be enabled.");
}

#[cfg(all(feature = "postgres", feature = "tools"))]
fn main() {
    // real code
}

This is a lot of boilerplate to work around cargo build’s behavior. It’s even more unfortunate with examples/, which are supposed to show off how to use your library, but these shenanigans are only relevant within the package, not if you were to try to use the example on your own.

With the new required-features key, we can add this:

[[bin]]
# ...
required-features = ["postgres", "tools"]

Now, cargo build will only build our postgres-tool if we have the two features set, and so we can write a normal fn main without all the cfg nonsense getting in the way.

See the detailed release notes for more.

Contributors to 1.17.0

Many people came together to create Rust 1.17. We couldn’t have done it without all of you. Thanks!

Mozilla Open Policy & Advocacy Blog: Mozilla is Ready to Fight: FCC Chairman Announces Plans to Reverse U.S. Net Neutrality Protections

In a speech at the Newseum today, FCC Chairman Ajit Pai shared some details about his plan to repeal and replace U.S. net neutrality protections enacted in 2015. These rules were adopted after a more than decade-long battle to protect net neutrality, and after a massive amount of input from US citizens. Pai’s approach would leave internet users and innovators with no protections.

FCC Chairman Pai seeks to shift the source of the authority for the Net Neutrality rules away from “Title II” (where it now sits) and back to a weaker “Title I” classification for Internet Service Providers because it is “more consistent with the facts and the law.” We disagree – and we aren’t the only ones. So did the D.C. Circuit Court on three occasions, along with the late Justice Scalia, in the same 2005 Supreme Court case Pai cited. In that case Justice Scalia described what Pai has now chosen as his path, the classification of ISPs under Title I, as “an implausible reading of the statute.”

Unfortunately, Pai’s assertions today are just as implausible.

This move is saddening, maddening and unacceptable, but we’re not surprised. This proposal is nothing more than a repetition of the same old ideas discussed by opponents of net neutrality over the past few years.

Net neutrality is under threat and we all need to work towards an “open internet that does not discriminate on content and protects free speech and consumer privacy.” Mozilla has rallied for this fight in the past, and as we have said before, we are ready to protect net neutrality – and the creativity, innovation, and economic growth it unlocks – again, and always. Today was the first clearly articulated threat – we now need to begin mobilizing against these actions. Stay tuned for ways that you can help us win the fight again.

Air Mozilla: April Speaker Series: American Spies: Modern Surveillance and What We Can Do. Speaker: Jennifer Granick

April Speaker Series: American Spies: Modern Surveillance and What We Can Do. Speaker: Jennifer Granick. Intelligence agencies in the U.S. (aka the American Spies) are exceedingly aggressive, pushing and sometimes bursting through the technological, legal and political boundaries of lawful...

Firefox Nightly: These Weeks in Firefox: Issue 15

A big thank you goes out to Johann Hofmann who put these headlines together while I was away on vacation!

Highlights

  • The Form Autofill feature is being enabled on Nightly this week (for @autocomplete on <input>). Stay tuned!
  • Firefox Screenshots is in Beta 2, preffed off by default.  We’ll enable it very soon for everyone, or you can jump the gun by toggling extensions.screenshots.system-disabled.  If you run into anything fishy, please let #screenshots know

Friends of the Firefox team

(Give a shoutout/thanks to people for helping fix and test bugs. Introductions)

Project Updates

Add-ons

Activity Stream

  • Test Pilot Activity Stream
    • Activity Stream support for Pocket has landed in Test Pilot version.  Experiment launches May 1st (thanks csadilek!)
    • You can try it now by using the Activity Stream Dev channel
    • Updated to the latest eslint-plugin-mozilla, which now supports mozilla-central external repositories, but we need to disable no-useless-parameters as we support versions older than Firefox 55 (thanks Standard8!)
  • Activity Stream system add-on
    • Search feed and UI landed
    • TelemetrySender landed
    • Top Sites feed landed

Firefox Core Engineering

  • Flash
    • Telemetry experiment ran on Nightly 55 from April 14 – April 23.
    • Shield Study defaulting to click-to-play will start on Release 53 in the next week and run for six weeks.
    • Default and (slight) additional UI land in 55.
  • Crash
    • Crash pings contain raw stacks (opt-out) as of Beta 54.
    • Crash pings exist for main, content, and GPU processes as of Beta 54.
    • Crash pings are sent via pingSender (i.e. right away) as of Beta 54.
    • Only one bug did not get uplifted to 54 — a new data point added to the crash ping (a form of client crash id) as of 55.
    • Working on identifying top crashers (by signature) currently. Intending to land while 55 is on Nightly.
  • Install/Update
    • Continue with phase 1 of the Update Agent, which will continue/complete the download of an update even if a session ends.
    • Looking into trying to encourage users on FF4.0 – FF35.0 to update past 35 prior to September 2017 (when their update server, aus3, expires).
    • Fun with Nahimic: it can prevent updates (and has). Follow bug 1356637 for updates.

Form Autofill

Mobile

Photon

Performance
  • Lots of sync reflow bugs filed, thanks! We now have a big backlog to triage.
  • Several fixes landed for sync reflows, especially around interactions with the tab bar (thanks Dão!) and the awesomebar.
  • Starting to profile startup, and there’s a lot of room for improvement there (loading JS modules lazily from nsBrowserGlue, loading the blocklist from JSON instead of parsing a big XML file).
  • A few tips:
    • Avoid calling .focus() several times in a row; each focus call currently flushes layout.
    • Avoid using setTimeout(…, 0); Services.tm.dispatchToMainThread(…) has less overhead (see the sketch after this list).
    • Avoid using Preferences.jsm (especially during startup) if it’s only to have support for default values.
    • Avoid importing NetUtil.jsm only to use newURI, use Services.io.newURI directly instead.
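
To illustrate the setTimeout tip above: a minimal sketch, assuming a browser-window chrome scope where setTimeout and the Services global are already available; updateBadge is a made-up placeholder for some non-urgent work.

// Hypothetical non-urgent work we want to run soon, off the current call stack.
function updateBadge() { /* ... */ }

// Heavier: goes through the timer machinery just to defer by one tick.
setTimeout(() => updateBadge(), 0);

// Lighter: put the callback straight onto the main thread's event queue.
Services.tm.dispatchToMainThread(() => updateBadge());
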
Structure
  • Work starting on page action menu
  • Ongoing work on the hamburger and overflow panel
  • Ongoing work on having more than one level of nesting within panel subviews (the slide-to-the-side thing in panels) and updating their styling
    • All the previous stuff is / will be behind a pref. We aim to flip that pref on Nightly in the near future!
  • We swapped the sidebar to the right… and then swapped it back again. Expect more updates to sidebars in the future (with the side of the window stuff still under investigation).
Animation
  • Animations themselves are in progress. We intend to use SVG spritesheet animations for animating icon states
Visuals
Onboarding
  • Have walked through questions about the UX and visual specs with verdi from UX in today’s onboarding team meeting.
Preferences

Platform Audibles

  • Pending the results of the experiment, Flash will be marked as click-to-activate by default starting soon in Nightly. Pending the results of the SHIELD study, this will ride to the 55 release.
  • We’ve got initial page navigation numbers comparing Chrome and Firefox
    • In general we’re competitive with Chrome (±20%), but a few cases show us far worse, in particular back navigation: filed bug 1359400
  • A bug that caused windows to become ghost windows if touch events were sent is causing large CC pauses in Nightly and Beta. Fix in tomorrow’s Nightly.
  • Initial data shows that mean-time-between-failure (MTBF) for input jank:
  • 70+% of nightly users last week saw GC pauses >0.5s
  • ASK: if you see slow things, please install/use the gecko profiler and file bugs!

Privacy/Security

  • jkt wrote a blog post about the new “Always Open In This Container” feature in containers.
  • freddyb is writing a series of ESLint rules to catch common security problems in Firefox code. First victim: Eval and implied eval.
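
For readers unfamiliar with the term, “implied eval” covers APIs that compile a string into code just like eval does. A rough sketch of the patterns such rules would flag; all function and variable names here are made up:

let userValue = "42";
function handleInput(n) { console.log(n); }
function refreshPanel() { console.log("refreshing"); }

// Direct eval: runs an arbitrary string as code.
eval("handleInput(" + userValue + ")");

// Implied eval: passing a string instead of a function has the same effect.
setTimeout("refreshPanel()", 1000);
setInterval("refreshPanel()", 1000);

// The lint-friendly version passes a real function instead.
setTimeout(() => refreshPanel(), 1000);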

Project Mortar (PDFium)

  • Three milestones are set for better estimating our release schedule
    • Milestone 1 (target on Q2): feature landing. We are still trying to land our significant bits into mozilla-central, which are:
      • bugs 558184 and 1344942: JSPlugin and plugin binary process creation and loading
      • bug 1345330: pull in Chromium source code (PDFium + Pepper API layer) into the tree and build with Firefox
      • bug 1269760: PDF printing (to paper). The most challenging part is converting PDF to the EMF printing format on Windows, because we rely on PDFium to do the conversion. This means that the sandbox for plugin binary processes should allow PDFium to create device contexts and even access files. We are discussing this with the Sandbox team
    • Milestone 2: release polish. (We haven’t figured out the release target but the conversation is ongoing)
      • focus on performance and stability (meta bug 1286791), telemetry and test automation for future proofing
    • Milestone 3: script support. This will NOT be in the first release; still investigating its product value

Search

Test Pilot

  • Min Vid is celebrating its largest release yet with the addition of a play and history queue.  Add media you want to watch to your upcoming queue, or replay something you missed by clicking the history tab.  Min Vid currently works on YouTube, SoundCloud, Vimeo, and direct links to audio or video.
  • Pulse has added occasional (less than once a day) prompting for feedback to help avoid biased data.  If you want to help Firefox improve performance on your favorite sites, this is your chance.  The data from this experiment goes directly to the Firefox Product team to help prioritize improvements.
  • Snooze Tabs has gone worldwide, now supporting 23 locales.  In addition to using Snooze Tabs in your favorite language, you’ll also find an Undo button when deleting a snoozed tab.

Here are the raw meeting notes that were used to derive this list.

Want to help us build Firefox? Get started here!

Here’s a tool to find some mentored, good first bugs to hack on.

Hacks.Mozilla.Org: Fathom: a framework for understanding web pages

It’s time we went beyond a browser that just renders pages. On the modern web, trying to accomplish a simple task can get you buffeted by pop-overs, squinting at content crammed into a tiny column, and trying to suss out the behavior of yet another site’s custom widgets. To restore a balance of power and reclaim user efficiency, we need a smarter browser.

Imagine if Firefox understood pages like a human does:

  • Arduous sign-on could be a thing of the past. The browser could recognize a Log In link, follow it in the background, and log you in, all without losing your place. The links could disappear from the page and be moved into a standard browser UI.
  • Products could be recognized as such and manipulated as cohesive chunks. You could drag them to a shopping cart, complete with pictures and prices, for cross-site comparison shopping. You could enjoy easily scannable columns rather than a circus of tabs.
  • Inefficient and inconsistent UI could be ironed out at last. We could have browser-provided hotkeys for dismissing popovers, navigating to the next logical page, standardizing the look of interface elements, or recognizing and flattening out needlessly paginated slideshows.
  • On small screens or windows, superfluous navigation or header sections could be hidden, even on pages that don’t use responsive design. We could intelligently figure out what to print, even in the absence of print stylesheets.

These possible futures all assume the browser can identify meaningful parts of the page. Over the decades, there have been many attempts to make this easier. But microformats, semantic tags, RDF, and link/rel header elements have failed to take over the world, due both to sites’ incentive to remain unscrapeable and to the extra work they represent. As a result, modern search engines and browsers’ reader modes have taken an alternative tack: they extract meaning by embracing the mess, bulling straight through unsemantic markup with a toolbelt full of heuristics.

But a problem remains: these projects are single-purpose and expensive to produce. Readability, the basis of Safari and Firefox’s reader modes, is 1,800 lines of JavaScript and was recently shut down. Chrome’s DOM Distiller is 23,000 lines of Java. These imperative approaches get bogged down in the mechanics of DOM traversal and state accumulation, obscuring the operative parts of the understanders and making them arduous to write and difficult to comprehend. They are further entangled with the ad hoc fuzzy scoring systems and the site-specific heuristics they need to include. The economics are against them from the start, and consequently few of them are created, especially outside large organizations.

But what if understanders were cheap to write? What if Readability could be implemented in just 4 simple rules?

const rules = ruleset(
    rule(dom('p,div,li,code,blockquote,pre,h1,h2,h3,h4,h5,h6'),
         props(scoreByLength).type('paragraphish')),
    rule(type('paragraphish'),
         score(fnode => (1 - linkDensity(fnode,
                                         fnode.noteFor('paragraphish')
                                              .inlineLength))
                        * 1.5)),
    rule(dom('p'),
         score(4.5).type('paragraphish')),
    rule(type('paragraphish')
            .bestCluster({splittingDistance: 3,
                          differentDepthCost: 6.5,
                          differentTagCost: 2,
                          sameTagCost: 0.5,
                          strideCost: 0}),
         out('content').allThrough(domSort))
);

That scores within 7% of Readability’s output on a selection of its own test cases, measured by Levenshtein distance.[1] The framework enabling this is Fathom, and it drives the cost of writing understanders through the floor.

Fathom is a mini-language for writing semantic extractors. The sets of rules that make up its programs are embedded in JavaScript, so you can use it client- or server-side as privacy dictates. And Fathom handles all your bookkeeping so you can concentrate on your heuristics:

  • Tree-walking goes away. Fathom is a data-flow language like Prolog, so data conveniently “turns up” when there are applicable rules that haven’t yet seen it.
  • Flow control goes away. Fathom determines execution order based on dependencies, running only what it needs to answer your query and caching intermediate results.
  • The temptation to write plugin systems goes away. Fathom rules are unordered, so additional ones can be added as easily as adding a new element to a JavaScript array. This makes Fathom programs (or rulesets) inherently pluggable. They commingle like streams of water, having only to agree on type names, making them ripe for collaborative experimentation or special-casing without making a mess.
  • The need to keep parallel data structures to the DOM goes away. Fathom provides proxy DOM nodes you can scribble on, along with a black-and-white system of types and a shades-of-grey system of scores to categorize nodes and guide decisions.
  • The need to come up with the optimal balance of weights for your heuristics goes away, thanks to an optimization harness based on simulated annealing. All those fiddly numerical constants in the code above were figured out by siccing the machine on a selection of input and correct output and walking away.
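
For a sense of how such a ruleset might be run, here is a sketch based on the Fathom documentation of the time; treat the fathom-web package name and the exact against()/get() calls as assumptions rather than a guaranteed API:

const {ruleset, rule, dom, type, props, score, out} = require('fathom-web');

// `rules` is the ruleset defined earlier in this post.
const facts = rules.against(document);      // bind the ruleset to a DOM
const contentNodes = facts.get('content');  // pull out the 'content' output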

The best part is that Fathom rulesets are data. They look like JavaScript function calls, but the calls are just making annotations in a sort of syntax tree, making the whole thing easily machine-manipulable. Today, that gets us automatic tuning of score constants. Tomorrow, it could get us automatic generation of rules themselves!

Fathom is young but feisty. It’s already in production powering Firefox’s Activity Stream, where it picks out page descriptions, main images, and such. In 70 lines, it replaced a well-known commercial metadata-parsing service.

What we need now is imagination. Scoop up all those ideas you threw away because they required too much understanding by the browser. We can do that now. It’s cheap.

Have an idea? Great! Check out the full documentation to get started, grab the npm package, submit patches, and join us in the #fathom channel on irc.mozilla.org and on the mailing list as you build. Let’s make a browser that is, in bold new ways, the user’s agent!


[1] The caveats of the example are quite manageable. It’s slower than Readability, because clustering is O(n² log n). But there is also much low-hanging fruit left unpicked: we do nothing in the above to take advantage of CSS classes or semantic tags like <article>, both rich sources of signal, and we don’t try to pare down the clustering candidates with thresholds. Finally, some of the 7% difference actually represents improvements over Readability’s output.

Mozilla Open Policy & Advocacy Blog: Mozilla at #rp17: What’s up with Internet Health?

Spring in Berlin means re:publica. This year, we invite you to join us at an interactive exhibition, in talks, and hopefully in many personal conversations to discover what we call Internet Health.

From Monday to Wednesday (May 8-10), “the Station” will be expecting roughly 10,000 attendees from more than 100 countries. People will be traveling to Berlin to learn, discuss, and experience new things on topics such as digital rights, politics and society, e-health, science fiction, journalism, education, all sorts of new technologies and anything else related to the digital society.

The schedule includes over 400 sessions taking place on 19 different stages and workshop rooms. The full list of the speakers is available here. All of the eight main stages will be live-streamed, recorded and translated.

Mozilla @ re:publica 17

Mozilla’s Executive Director, Mark Surman, will hold a keynote on whether the Internet of Things we are building is ethical. He will discuss the importance of asking not just “What’s possible?” but also “What’s responsible?”

Raegan MacDonald, Senior EU Policy Manager, will present a pitch on the EU copyright reform and the looming threat posed by mandatory upload filters, and I will participate in a conversation on digital inclusion, in which we will talk about projects on enhancing digital literacy, equality and individual empowerment.

 

Where else to find us

We invite you to embark on a new and fun adventure: Join us on Monday, May 8 in the Labore:tory to explore our Internet Health Clinic. We will create an engaging installation, designed to foster structured dialogue about five issues that we believe are crucial for the health of the Internet as an ecosystem: privacy and security; digital inclusion; web literacy; openness; and decentralization. We are particularly thrilled to partner with the Global Innovation Gathering to welcome an international group of experts, who will take us on a journey across continents. The speakers will share their own personal stories and inspire us with their work on the ground, around the world. At the clinic we are inviting feedback and are looking forward to discovering and exploring new ideas, research, potential collaborations, and possibilities to assess, on an annual basis, the impact of our actions on a healthy Internet.

In addition, we are excited to see the winners of our recent Equal Rating Innovation Challenge on stage as well.  If you want to learn more about their projects, this is where you need to go.

A number of Mozillians from our Advocacy, Emerging Tech, Firefox, Open Innovation, Open IoT, and Policy teams, as well as some of our Open Web Fellows, will also be in Berlin. So join us at re:publica and of course on Monday in the Labore:tory in the Internet Health Clinic!

See you in Berlin!

Sessions Overview

Day 1: Monday, May 8 (all times are CET)

12:45-1:15pm, Stage 2

Mark Surman (keynote): Are we living inside an ethical (and kind) machine?: https://re-publica.com/en/17/session/are-we-living-inside-ethical-and-kind-machine

2:00-5:00pm, Labore:tory

Mozilla’s Internet Health Clinic will feature 15 global experts during “visiting hours”.

Day 2: Tuesday, May 9

4:15 to 4:45pm, Stage 9

The Winners of Mozilla’s Equal Rating Innovation Challenge: Access all areas – Independent internet infrastructures in Brazil, India and South Africa

Day 3: Wednesday, May 10

10:00-11:00am, Stage 4

Cathleen Berger (panel): Digital Equality and how the open web can contribute to a more equal world

11:15am-12:15pm, Stage 8

Raegan MacDonald (panel): Stop the censorship machines! Can we prevent mandatory upload filters in the EU?

Michael Kelly: content-UITour.js

Recently I found myself trying to comprehend an unfamiliar piece of code. In this case, it was content-UITour.js, a file that handles the interaction between unprivileged webpages and UITour.jsm.

UITour allows webpages to highlight buttons in the toolbar, open menu panels, and perform other tasks involved in giving Firefox users a tour of the user interface. The event-based API allows us to iterate quickly on the onboarding experience for Firefox by controlling it via easily-updated webpages. Only a small set of Mozilla-owned domains are allowed access to the UITour API.

Top-level View

My first step when trying to grok unfamiliar JavaScript is to check out everything at the top-level of the file. If we take content-UITour.js and remove some comments, imports, and constants, we get:

var UITourListener = {
  handleEvent(event) {
    /* ... */
  },

  /* ... */
};

addEventListener("mozUITour", UITourListener, false, true);

Webpages that want to use UITour emit synthetic events with the name "mozUITour". In the snippet above, UITourListener is the object that receives these events. Normally, event listeners are functions, but they can also be EventListeners, which are simply objects with a handleEvent function.
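
For context, the page side of this conversation looks roughly like the snippet below. The detail payload (an action name plus a data object) follows the shape used by Mozilla's UITour helper library, but treat the exact action and field names as assumptions:

// Roughly what a whitelisted page (or the UITour helper script it loads) does
// to ask the browser to perform a UITour operation:
let uiTourEvent = new CustomEvent("mozUITour", {
  bubbles: true,
  detail: {
    action: "showMenu",          // which UITour operation to run
    data: { name: "appMenu" },   // arguments for that operation
  },
});
document.dispatchEvent(uiTourEvent);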

According to Mossop's comment, content-UITour.js is loaded in browser.js. A search for firefox loadFrameScript brings up two useful pages:

  • nsIFrameScriptLoader, which describes how loadFrameScript takes our JavaScript file and loads it into a remote frame. If you don't innately know what a remote frame is, then you should read...

  • Message manager overview, which gives a great overview of frame scripts and how they relate to multi-process Firefox. In particular, browser.js seems to be asking for a browser message manager.

It looks like content-UITour.js is loaded for each tab with a webpage open, but it can do some more privileged stuff than a normal webpage. Also, the global object seems to be window, referring to the browser window containing the webpage, since events from the webpage are bubbling up to it. Neat!

Events from Webpages

So what about handleEvent?

handleEvent(event) {
  if (!Services.prefs.getBoolPref("browser.uitour.enabled")) {
    return;
  }
  if (!this.ensureTrustedOrigin()) {
    return;
  }
  addMessageListener("UITour:SendPageCallback", this);
  addMessageListener("UITour:SendPageNotification", this);
  sendAsyncMessage("UITour:onPageEvent", {
    detail: event.detail,
    type: event.type,
    pageVisibilityState: content.document.visibilityState,
  });
},

If UITour itself is disabled, or if the origin of the webpage we're registered on isn't trustworthy, events are thrown away. Otherwise, we register UITourListener as a message listener, and send a message of our own.

I remember seeing addMessageListener and sendAsyncMessage on the browser message manager documentation; they look like a fairly standard event system. But where are these events coming from, and where are they going to?

In lieu of any better leads, our best bet is to search DXR for "UITour:onPageEvent", which leads to nsBrowserGlue.js. Luckily for us, I've actually heard of this file before: it's a grab-bag for things that need to happen to set up Firefox that don't fit anywhere else. For our purposes, it's enough to know that stuff in here gets run once when the browser starts.

The lines in question:

// Listen for UITour messages.
// Do it here instead of the UITour module itself so that the UITour module is lazy loaded
// when the first message is received.
var globalMM = Cc["@mozilla.org/globalmessagemanager;1"].getService(Ci.nsIMessageListenerManager);
globalMM.addMessageListener("UITour:onPageEvent", function(aMessage) {
  UITour.onPageEvent(aMessage, aMessage.data);
});

Oh, I remember reading about the global message manager! It covers every frame. This seems to be where all the events coming up from individual frames get gathered and passed to UITour. That UITour variable is coming from a clever lazy-import block at the top:

[
/* ... */
["UITour", "resource:///modules/UITour.jsm"],
/* ... */
].forEach(([name, resource]) => XPCOMUtils.defineLazyModuleGetter(this, name, resource));

In other words, UITour refers to the module in UITour.jsm, but it isn't loaded until we receive our first event, which helps make Firefox startup snappier.

For our purposes, we're not terribly interested in what UITour does with these messages, as long as we know how they're getting there. We are, however, interested in the messages that we're listening for: "UITour:SendPageCallback" and "UITour:SendPageNotification". Another DXR search tells me that those are in UITour.jsm. A skim of the results shows that these messages are used for things like notifying the webpage when an operation is finished, or returning information that was requested by the webpage.


To summarize:

  • handleEvent in the content process triggers behavior from UITour.jsm in the chrome process by sending and receiving messages sent through the message manager system.

  • handleEvent checks that the origin of a webpage is trustworthy before doing anything.

  • The UITour module in the chrome process is not initialized until a webpage emits an event for it.

The rest of the content-UITour.js is split between origin verification and sending events back down to the webpage.

Verifying Webpage URLs

Next, let's take a look at ensureTrustedOrigin:

ensureTrustedOrigin() {
  if (content.top != content)
    return false;

  let uri = content.document.documentURIObject;

  if (uri.schemeIs("chrome"))
    return true;

  if (!this.isSafeScheme(uri))
    return false;

  let permission = Services.perms.testPermission(uri, UITOUR_PERMISSION);
  if (permission == Services.perms.ALLOW_ACTION)
    return true;

  return this.isTestingOrigin(uri);
},

MDN tells us that content is the Window object for the primary content window; in other words, the webpage. top, on the other hand, is the topmost window in the window hierarchy (relevant for webpages that get loaded in iframes). Thus, the first check is to make sure we're not in some sort of frame. Without this, a webpage could control when UITour executes things by loading a whitelisted origin in an iframe.

documentURIObject lets us check the origin of the loaded webpage. chrome:// URIs get passed immediately, since they're already privileged. The next three checks are more interesting:

isSafeScheme

isSafeScheme(aURI) {
  let allowedSchemes = new Set(["https", "about"]);
  if (!Services.prefs.getBoolPref("browser.uitour.requireSecure"))
    allowedSchemes.add("http");

  if (!allowedSchemes.has(aURI.scheme))
    return false;

  return true;
},

This function checks the URI scheme to see if it's considered "safe" enough to use UITour functions. By default, https:// and about: pages are allowed. http:// pages are also allowed if the browser.uitour.requireSecure preference is false (it defaults to true).

Permissions

The next check is against the permissions system. The Services.jsm documentation says that Services.perms refers to an instance of the nsIPermissionManager interface. The check itself is easy to understand, but what's missing is how these permissions get added in the first place. A fresh Firefox profile has some sites already whitelisted for UITour, but where does that whitelist come from?

This is where DXR really shines. If we look at nsIPermissionManager.idl and click the name of the interface, a dropdown appears with several options. The "Find subclasses" option performs a search for "derived:nsIPermissionManager", which leads to the header file for nsPermissionManager.

We're looking for where the default permission values come from, so an in-page search for the word "default" eventually lands on a function named ImportDefaults. Clicking that name and selecting "Jump to definition" lands us inside nsPermissionManager.cpp, and the very first line of the function is:

nsCString defaultsURL = mozilla::Preferences::GetCString(kDefaultsUrlPrefName);

An in-page search for kDefaultsUrlPrefName leads to:

// Default permissions are read from a URL - this is the preference we read
// to find that URL. If not set, don't use any default permissions.
static const char kDefaultsUrlPrefName[] = "permissions.manager.defaultsUrl";

On my Firefox profile, the "permissions.manager.defaultsUrl" preference is set to resource://app/defaults/permissions:

# This file has default permissions for the permission manager.
# The file-format is strict:
# * matchtype \t type \t permission \t host
# * "origin" should be used for matchtype, "host" is supported for legacy reasons
# * type is a string that identifies the type of permission (e.g. "cookie")
# * permission is an integer between 1 and 15
# See nsPermissionManager.cpp for more...

# UITour
origin    uitour    1    https://www.mozilla.org
origin    uitour    1    https://self-repair.mozilla.org
origin    uitour    1    https://support.mozilla.org
origin    uitour    1    https://addons.mozilla.org
origin    uitour    1    https://discovery.addons.mozilla.org
origin    uitour    1    about:home

# ...

Found it! A quick DXR search reveals that this file is in /browser/app/permissions in the tree. I'm not entirely sure where that defaults bit in the URL is coming from, but whatever.

With this, we can confirm that the permissions check is where most valid uses of UITour are passed, and that this permissions file is where the whitelist of allowed domains lives.

isTestingOrigin

The last check in ensureTrustedOrigin falls back to isTestingOrigin:

isTestingOrigin(aURI) {
  if (Services.prefs.getPrefType(PREF_TEST_WHITELIST) != Services.prefs.PREF_STRING) {
    return false;
  }

  // Add any testing origins (comma-seperated) to the whitelist for the session.
  for (let origin of Services.prefs.getCharPref(PREF_TEST_WHITELIST).split(",")) {
    try {
      let testingURI = Services.io.newURI(origin);
      if (aURI.prePath == testingURI.prePath) {
        return true;
      }
    } catch (ex) {
      Cu.reportError(ex);
    }
  }
  return false;
},

Remember those boring constants we ignored earlier? Here's one of them in action! Specifically, it's PREF_TEST_WHITELIST, which is set to "browser.uitour.testingOrigins".

This function appears to parse the preference as a comma-separated list of URIs. It fails early if the preference isn't a string, then splits the string and loops over each entry, converting them to URI objects.

The nsIURI documentation notes that prePath is everything in the URI before the path, including the protocol, hostname, port, etc. Using prePath, the function iterates over each URI in the preference and checks it against the URI of the webpage. If it matches, then the page is considered safe!

(And if anything fails when parsing URIs, errors are reported to the console using reportError and discarded.)

As the preference name implies, this is useful for developers who want to test a webpage that uses UITour without having to set up their local development environment to fake being one of the whitelisted origins.
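
For instance, a developer could put something like this in their profile's user.js (or set the same values through about:config); the origins and port here are placeholders I made up for illustration, not anything Firefox ships:

// Hypothetical testing origins -- not default values.
user_pref("browser.uitour.testingOrigins",
          "http://localhost:8000,https://uitour-dev.example.com");
// Only needed when the test page is served over plain http://
// (see isSafeScheme above).
user_pref("browser.uitour.requireSecure", false);

With those preferences in place, isTestingOrigin compares the page's prePath against each listed origin and lets the UITour events through.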

Sending Messages Back to the Webpage

The other remaining logic in content-UITour.js handles messages sent back to the content process from UITour.jsm:

receiveMessage(aMessage) {
  switch (aMessage.name) {
    case "UITour:SendPageCallback":
      this.sendPageEvent("Response", aMessage.data);
      break;
    case "UITour:SendPageNotification":
      this.sendPageEvent("Notification", aMessage.data);
      break;
    }
},

You may remember the Message manager overview, which links to documentation for several functions, including addMessageListener. We passed in UITourListener as the listener, which the documentation says should implement the nsIMessageListener interface. Thus, UITourListener.receiveMessage is called whenever messages are received from UITour.jsm.

The function itself is simple; it defers to sendPageEvent with slightly different parameters depending on the incoming message.

sendPageEvent(type, detail) {
  if (!this.ensureTrustedOrigin()) {
    return;
  }

  let doc = content.document;
  let eventName = "mozUITour" + type;
  let event = new doc.defaultView.CustomEvent(eventName, {
    bubbles: true,
    detail: Cu.cloneInto(detail, doc.defaultView)
  });
  doc.dispatchEvent(event);
}

sendPageEvent starts off with another trusted origin check, to avoid sending results from UITour to untrusted webpages. Next, it creates a custom event to dispatch onto the document element of the webpage. Webpages register an event listener on the root document element to receive data returned from UITour.

defaultView returns the window object for the document in question.

Describing cloneInto could take up an entire post on its own. In short, cloneInto is being used here to copy the object from UITour in the chrome process (a privileged context) for use in the webpage (an unprivileged context). Without this, the webpage would not be able to access the detail value at all.
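
As a rough sketch of the receiving side (my own illustration, not code from the tree), a page on an allowed origin can listen for these events on its document; the event names follow from the "mozUITour" + type concatenation in sendPageEvent:

document.addEventListener("mozUITourResponse", event => {
  // event.detail is the object that Cu.cloneInto() copied into the page.
  console.log("UITour response:", event.detail);
});

document.addEventListener("mozUITourNotification", event => {
  console.log("UITour notification:", event.detail);
});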

And That's It!

It takes effort, but I've found that deep-dives like this are a great way both to understand a single piece of code and to learn from the style of the code's author(s). Hopefully y'all will find this useful as well!


  1. While this isn't a security issue on its own, it gives some level of control to an attacker, which generally should be avoided where possible.

The Mozilla BlogMozilla Continues to Oppose the U.S. Administration’s Executive Order on Travel

Mozilla and more than 150 other tech companies continue to oppose the U.S. administration’s revised Executive Order on travel as it winds its way through the U.S. Court system.

This order seeks to temporarily prohibit the U.S. Government from issuing new visas to travelers from six predominantly Muslim countries and to suspend the U.S. refugee program. Soon after it was issued, two federal judges in Hawaii and Maryland held the revised order to be discriminatory and unconstitutional. So far, their decisions have prevented the order from being enforced, but the administration has appealed to higher courts asking for a reversal.

Last week, we filed two amicus briefs in the Fourth and Ninth Circuits against the Executive Order and in support of the district court decisions.

We are against this Executive Order, for the same reasons we opposed the original one. People worldwide build, protect, and advance the internet, regardless of their nationality.

Travel is often necessary to the robust exchange of information and ideas within and across companies, universities, industry and civil society. Undermining immigration law harms the international cooperation needed to develop and maintain an open internet.

We urge the Courts of Appeals to uphold the district court decisions and recognize the harmful impact of the travel ban.

The post Mozilla Continues to Oppose the U.S. Administration’s Executive Order on Travel appeared first on The Mozilla Blog.

Air MozillaMartes Mozilleros, 25 Apr 2017

Martes Mozilleros: a bi-weekly meeting to talk (in Spanish) about the state of Mozilla, the community and its projects.

Firefox NightlyGuest post: India uses Firefox Nightly: Kick off on May 1, 2017

This is a guest post by Biraj Karmakar, who has been actively promoting Mozilla and Mozilla software in India for over 7 years. Biraj is organizing a series of workshops throughout the country to convince technical people (Mozillians or not) who may be interested in getting involved in Mozilla to use Firefox Nightly.

 

 

In my last blog post, I announced that Mozilla India is going to organize a special campaign on Firefox Nightly usage in India. RSVP here.

Everything is set. Gearing up for the campaign.


By the way, we recently organized a community call about this campaign. You can watch it to learn more about how to organize events and the technical details.

  • How to get involved:
    • Online Activities
      • Telling and inviting friends!
      • Create the event in social media!
      • Writing about it on Facebook & Twitter.
      • Posting updates on social media when the event is running.
      • Running an online event, like a webinar, for this campaign. Please check the event flow.
      • Writing blog posts about Firefox Nightly features, technical details and events.
    • Offline Activities
      • Introduction to Mozilla
      • Introduction to Firefox Nightly Release cycle details
      • Why we need Firefox Nightly users
      • Showing various stats regarding Firefox
      • Installing Nightly on participants' PCs
      • WebCompat on Firefox Nightly
      • How they can contribute to Nightly (QA and promotion)
      • Swag Distribution
  • Duration of Campaign: 2 months
  • Total number of offline events: 15
  • Hashtag: #INUsesFxNightly
  • Duration of each event: 3-5 hours

Swag is ready! 

 


Swag for offline events

For requesting swag, please read here.

Also, we have a budget for these events, which you can request. Learn more here.

Beyond that, if you want to know more about the activity format, event flow, resources and more, please read the wiki.

If you have a specific query, please send an email to Biraj Karmakar [brnet00 AT gmail DOT com]. Don't forget to join our Telegram group for real-time chat.

Daniel PocockFSFE Fellowship Representative, OSCAL'17 and other upcoming events

The Free Software Foundation of Europe has just completed the process of electing a new fellowship representative to the General Assembly (GA) and I was surprised to find that out of seven very deserving candidates, members of the fellowship have selected me to represent them on the GA.

I'd like to thank all those who voted, the other candidates and Erik Albers for his efforts to administer this annual process.

Please consider becoming an FSFE fellow or donor

The FSFE runs on the support of both volunteers and financial donors, including organizations and individual members of the fellowship program. The fellowship program is not about money alone; it is an opportunity to become more aware of and involved in the debate about technology's impact on society, for better or worse. Developers, users and any other supporters of the organization's mission are welcome to join; here is the form. You don't need to be a fellow or pay any money to be an active part of the free software community, and FSFE events generally don't exclude non-members. Nonetheless, becoming a fellow gives you a stronger voice in processes such as this annual election.

Attending OSCAL'17, Tirana

During the election period, I promised to keep on doing the things I already do: volunteering, public speaking, mentoring, blogging and developing innovative new code. During May I hope to attend several events, including OSCAL'17 in Tirana, Albania on 13-14 May. I'll be running a workshop there on the Debian Hams blend and Software Defined Radio. Please come along and encourage other people you know in the region to consider attending.

What is your view on the Fellowship and FSFE structure?

Several candidates made comments about the Fellowship program and the way individual members and volunteers are involved in FSFE governance. This is not a new topic. Debate about this topic is very welcome and I would be particularly interested to hear any concerns or ideas for improvement that people may contribute. One of the best places to share these ideas would be through the FSFE's discussion list.

In any case, the fellowship representative can not single-handedly overhaul the organization. I hope to be a constructive part of the team and that whenever my term comes to an end, the organization and the free software community in general will be stronger and happier in some way.

Mozilla Open Innovation TeamIntroducing FilterBubbler

Brainfood and Mozilla’s Open Innovation Team Kick Off Text Classification Open Source Experiment

Mozilla’s Open Innovation team is beginning a new effort to understand more about motivations and rewards for open source collaboration. Our goal is to expand the number of people for whom open source collaboration is a rewarding activity.

An interesting question is: While the server side benefits from opportunities to work collaboratively, can we explore them further on the client side, beyond browser features and their add-on ecosystems? User interest in “filter bubbles” gives us an opportunity to find out. The new FilterBubbler project provides a platform that helps users experiment with and explore what kind of text they’re seeing on the web. FilterBubbler lets you collaboratively “tag” pages with descriptive labels and then analyze any page you visit to see how similar it is to pages you have already classified.

You could classify content by age or reading-level rating, category like “current events” or “fishing”, or even how much you trust the source like “trustworthy” or “urban legend”. The system doesn’t have any bias and it doesn’t limit the number of tags you apply. Once you build up a set of classifications you can visit any page and the system will show you which classification has the closest statistical match. Just as a web site maintainer develops a general view of the technologies and communities of practice required to make a web site, we will use filter bubble building and sharing to help build client-side understanding.

The project aims to reach users who are motivated to understand and maybe change their information environment: people who want to transform their own “bubble” space and participate in collaborative work, but who do not have add-on development skills.

Can the browser help users develop better understanding and control of their media environments? Can we emulate the path to contribution that server-side web development has? Please visit the project and help us find out. FilterBubbler can serve as a jumping off point for all kinds of specific applications that can be built on top of its techniques. Ratings systems, content suggestion, fact checking and many other areas of interest can all use the classifiers and corpora that the FilterBubbler users will be able to generate. We’ll measure our success by looking at user participation in filter bubble data sharing, and by how our work gets adapted and built on by other software projects.

Please find more information on the project, ways to engage and contact points on http://www.filterbubbler.org.


Introducing FilterBubbler was originally published in Mozilla Open Innovation on Medium, where people are continuing the conversation by highlighting and responding to this story.

Christian HeilmannTalking about building the next interfaces with Machine Learning and AI at hackingui

Yesterday I was proud to be an invited speaker at the HackingUI masterclass where I presented about what Machine Learning and Artificial Intelligence means for us as developers and designers. I will be giving a similar talk tomorrow in Poland in my Code Europe talk.

Speaking at the masterclass

The Masterclass is using Crowdcast to allow for discussions between the moderators and the presenter, for the presenter to show his slides/demos and for people to chat and submit questions. You can see the whole one hour 45 minutes session by signing up to Hacking UI.

Master Class #4: The Soul in The Machine – Developing for Humans

It was exciting to give this presentation, and the audience's questions were interesting, which meant that in addition to the topics covered in the talk I also managed to discuss the ethics of learning machines, how having more diverse teams can battle the issue of job loss caused by automation, and how AI can help combat bullying and antisocial behaviour online.

The materials I covered in the talk:

All in all there is a lot for us to be excited about, and I hope I managed to make some people understand that the machine revolution is already happening and that our job is to make it benefit humankind, not work against it.

This Week In RustThis Week in Rust 179

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate of the week is pq, a crate to generically decode protobuf messages. Thanks to sevagh for the suggestion.

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

98 pull requests were merged in the last week.

New Contributors

  • Dylan Maccora
  • Maxwell Paul Brickner
  • Nicolas Bigaouette

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Style RFCs

Style RFCs are part of the process for deciding on style guidelines for the Rust community and defaults for Rustfmt. The process is similar to the RFC process, but we try to reach rough consensus on issues (including a final comment period) before progressing to PRs. Just like the RFC process, all users are welcome to comment and submit RFCs. If you want to help decide what Rust code should look like, come get involved!

We're making good progress and the style is coming together. If you want to see the style in practice, check out our example or use the Integer32 Playground and select 'Proposed RFC' from the 'Format' menu. Be aware that implementation is work in progress.

Issues in final comment period:

Good first issues:

We're happy to mentor these, please reach out to us in #rust-style if you'd like to get involved

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

There are many ways in which Rust is like a version of C/C++ that mutated when Haskell was injected into its veins.

Lokathor on reddit.

Thanks to Johan Sigfrids and liquidivy for the suggestion.

Submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and brson.

Myk MelezHeadless Firefox

Over in Headless SlimerJS with Firefox, fellow Mozillian Brendan Dahl writes about the work he's been doing to support running Firefox headlessly. A headless mode for Firefox makes it easier to test websites with the browser, especially in continuous integration, to ensure Firefox remains compatible with the Web. It also enables a variety of other interesting use cases.

Brendan started with Linux, the most popular platform for CI services like Travis, and focused first on SlimerJS, a popular tool for testing websites with Firefox (and scripting the browser more generally) that uses Firefox to run a different XUL application (rather than running Firefox itself). Now he’s working on support for full headless Firefox as well as Windows and Mac.

Check out his blog post for more details and to tell him how you’d use the feature!

David BurnsHarassment of Open Source Maintainers or Contributors

On Friday I had the unfortunate pleasure of bearing the brunt of an unhappy Selenium user. Their issue? My team said that a release of GeckoDriver would happen when we are confident in the code. They said that was not professional. They started by telling me that they contribute to Mozilla and this is not acceptable for them as a customer.

Below is a break down of why I took exception to this:

  • My team was being extremely professional. Software, by its very nature, has bugs, but we try to minimize the number of bugs we ship. To do this we don't set release dates, we set certain objectives. My team is relatively small compared to the user group it needs to serve, so we need to triage bugs and fix code for groups both inside and outside of Mozilla. Saying we can only release when it is ready is the best we can do.
  • Please don't ever tell open source maintainers you are their customer unless you are paying for support and have a contract with SLAs. So that there is no issue with the definition of customer, I suggest you look at Merriam-Webster's definition. It says "one that purchases a commodity or service". Mozilla, just like Google, Microsoft, and Apple, is working on WebDriver to help web developers. There is no monetary benefit from doing this. The same goes for the Selenium project. The work and products are given freely.
  • And finally, and this goes for any F/OSS project even if it comes from large corporations like Google or Facebook, never make demands. Ask how you can help instead. If you disagree with the direction of the project, fork it. Make your own project. They have given everything away for free. Take it and make it better, whatever "better" means for you.

Now, even after explaining this, the harassment continued. It has led to that user being blocked on social media by me and my team, as well as being blocked on GitHub. I really dislike blocking people, because I know that when they approach us they are frustrated, but taking that frustration out on my team doesn't help anyone. If you continue after being warned, you will be blocked. This is not a threat, it is a promise.

Next time you feel frustrated with open source ask the maintainers if you can donate time/money/resources to make their lives easier. Don't be the moron that people will instantly block.

Firefox NightlyRelease Notes for Nightly

Every day, multiple changesets are merged or backed out on mozilla-central, and every day we compile a new version of Firefox Nightly based on these changes so as to provide builds that our core community can use, test and report feedback on.

This is why we historically don't issue release notes for Nightly; it is hard to maintain release notes for software that gets a new release every day. However, knowing what happens, what's new, and what should be tested has always been a recurring request from our community over the years.

So as to help with this legitimate request, we set up a twitter account that regularly informs about significant new features, and we also have the great “These weeks in Firefox” posts by Mike Conley every two weeks. These new communication channels certainly did improve things for our community over the last year.

We are now going a step further and we just started maintaining real release notes for Nightly at this address: Release Notes for Firefox Nightly

But what does it mean to have release notes for a product released every day?

It means that, in the context of Project Dawn, we have started monitoring all the commits landing on mozilla-central so as to make sure changes that would merit a mention in the final Firefox release notes are properly documented. This is something that we used to do with the Aurora channel; we are just doing it for Nightly instead, and we do that several times a week.

Having release notes for Nightly of course means that those are updated continuously and that we only document features that have not been merged yet to Beta. We also do not intend to document unstable features or features currently hidden behind a preference flag in about:config.

The focus today is Firefox Desktop, but we will also produce release notes for Firefox Nightly for Android at a later stage, once we have polished the process for Desktop.

Anthony RicaudOn the utility of filing bugs

During my five years working at Mozilla, I've been known to ask people to file bugs when they encountered an issue. Most of the time, the answer was that they didn't have time to do so and that it was useless. I think it is actually very valuable. You get to learn from the experience: how to file actionable bugs, how to get deeper knowledge of a specification, and maybe a workaround for the problem.

A recent example

Three weeks ago, at work, we launched a new design for the website header. We got some reports that the logo was missing in Firefox on some pages. After investigation, we discovered that Firefox (and also Edge) had a different behaviour with SVG’s <use xlink:href> on pages with a <base> element. We fixed it right away by using an absolute URL for our logo. But we also filed bugs against Gecko and Edge. As part of filing those bugs, I found the change in the SVG specification clarifying how it should be handled. Microsoft fixed the issue in less than two weeks. Mozilla fixed it in less than three weeks.

In October this year1, all browsers should behave the same way in regard to that issue, and a four-year-old workaround will be obsolete. We will be able to remove the code that we had to introduce. Less code, yeah!

I hope this will convince you that filing bugs has an impact. You can learn more on how to file actionable bugs. If you’d like an easier venue to file bugs when browsers are incompatible, the WebCompat project is a nice place to start.


  1. Firefox 55 should be released on August 8 and the next Edge should be released in September (maybe even earlier, I’m not clear on Edge’s release schedule) 

The Servo BlogThis Week In Servo 99

In the last week, we landed 127 PRs in the Servo organization’s repositories.

By popular request, we added a ZIP archive link to the Servo nightlies for Windows users.

Planning and Status

Our overall roadmap is available online, including the overall plans for 2017. Q2 plans will appear soon; please check it out and provide feedback!

This week’s status updates are here.

Notable Additions

  • hiikezoe corrected the animation behaviour of pseudo-elements in Stylo.
  • UK992 added some auto cleanup mechanisms for TravisCI.
  • Manishearth implemented system font support in Stylo.
  • glennw added groove and ridged border support to WebRender.
  • bholley converted simple CSS selectors and combinators to use inline storage for improved performance.
  • MortimerGoro implemented the missing GetShaderPrecisionFormat WebGL API.
  • sbwtw corrected the behaviour of CSS’ calc API in certain cases.
  • metajack removed the DOMRectList API.
  • BorisChious extended CSS transition support to shorthand properties.
  • nox improved the parsing of the background-size CSS property.
  • avadacatavra added support for creating Rust-based extensions of the C++ JSPrincipals API for SpiderMonkey.
  • kvark avoided a panic in WebRender encountered when using it through Firefox.
  • paulrouget clamped mouse scrolling to a single dimension at a time.
  • Gankro added IPC overhead profiling to WebRender.
  • stshine improved the inline size calculation for inline block layout.
  • mrobinson fixed several problems with laying out absolute positioned blocks.
  • canaltinova implemented support for the -moz-transform CSS property for Stylo.
  • MortimerGoro modernized the infrastructure surrounding Android builds.

New Contributors

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!

Wladimir PalantHow bad is a buffer overflow in an Emscripten-compiled application?

Emscripten allows compiling C++ code to JavaScript. It is an interesting approach allowing porting large applications (games) and libraries (crypto) to the web relatively easily. It also promises better performance and memory usage for some scenarios (something we are currently looking into for Adblock Plus core). These beneficial effects largely stem from the fact that the “memory” Emscripten-compiled applications work with is a large uniform typed array. The side-effect is that buffer overflows, use-after-free bugs and similar memory corruption mistakes are introduced to JavaScript that was previously safe from them. But are these really security-relevant?

The worst-case scenario is obviously a memory corruption bug that can be misused in order to execute arbitrary code. At first glance, this doesn't seem to be possible here — even with Emscripten the code is still running inside the JavaScript sandbox and cannot escape. In particular, it can only corrupt data but not change any code, because code is kept separately from the array serving as "memory" to the application. Then again, native applications usually cannot modify code either, due to protection mechanisms of modern processors. So memory corruption bugs are typically abused by manipulating function pointers such as those found on the stack.

Now Emscripten isn’t working with return pointers on the stack. I could identify one obvious place where function pointers are found: virtual method tables. Consider the following interface for example:

class Database {
  virtual User* LookupUser(char* userName) = 0;
  virtual bool DropTable(char* tableName) = 0;
  ...
};

Note how both methods are declared with the virtual keyword. In C++ this means that the methods should not be resolved at compile time but rather looked up when the application is running. Typically, that’s because there isn’t a single Database class but rather multiple possible implementations for the Database interface, and it isn’t known in advance which one will be used (polymorphism). In practice this means that each subclass of the Database interface will have a virtual method table with pointers to its implementations of LookupUser and DropTable methods. And that’s the memory area an attacker would try to modify. If the virtual method table can be changed in such a way that the pointer to LookupUser is pointing to DropTable instead, in the next step the attacker might make the application try to look up user "users" and the application will inadvertently remove the entire table.

There are some limitations here coming from the fact that function pointers in Emscripten aren’t actual pointers (remember, code isn’t stored in memory so you cannot point to it). Instead, they are indexes in the function table that contains all functions with the same signature. Emscripten will only resolve the function pointer against a fixed function table, so the attacker can only replace a function pointer by a pointer to another function with the same signature. Note that the signature of the two methods above is identical as far as Emscripten is concerned: both have an int-like return value (as opposed to void, float or double), both have an int-like value as the first parameter (the implicit this pointer) and another int-like value as the second parameter (a string pointer). Given that most types end up as an int-like values, you cannot really rely on this limitation to protect your application.
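
To make the general idea concrete, here is a hedged sketch (mine, with made-up names, and a plain function pointer instead of a real virtual method table) of why a function pointer sitting after an unchecked buffer is dangerous even when it can only be swapped for another function with the same signature:

#include <stdio.h>
#include <string.h>

typedef int (*handler_t)(const char *arg);

static int lookup_user(const char *name) { printf("looking up %s\n", name); return 0; }
static int drop_table(const char *name)  { printf("dropping %s!\n", name);  return 1; }

struct request {
  char userName[16];
  handler_t handler;  /* sits right after the buffer in memory */
};

int main(void) {
  struct request r = { "", lookup_user };
  (void)drop_table;  /* referenced only to keep the sketch warning-free */
  /* BUG: strcpy performs no bounds check; input longer than the buffer
     would spill into r.handler and could redirect the call below to any
     other function with a matching signature (such as drop_table). The
     input used here is short on purpose so the sketch runs safely. */
  strcpy(r.userName, "alice");
  return r.handler(r.userName);
}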

But the data corruption alone can already cause significant security issues. Consider for example the following memory layout:

char incomingMessage[256];
bool isAdmin = false;

If the application fails to check the size of incoming messages properly, the data will overflow into the following isAdmin field and the application might allow operations that aren’t safe. It is even possible that in some scenarios confidential data will leak, e.g. with this memory layout:

char response[256];
char sessionToken[32];

If you are working with zero-terminated strings, you should be really sure that the response field will always contain the terminating zero character. For example, if you are using some moral equivalent of the _snprintf function in Microsoft Visual C++ you should always check the function return value in order to verify that the buffer is large enough, because this function will not write the terminating zero when confronted with too much data. If the application fails to check for this scenario, an attacker might trick it into producing an overly large response, meaning that the secret sessionToken field will be sent along with the response due to missing terminator character.
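
Here is a small compilable sketch of that defensive check (my own illustration, not code from the post); on non-MSVC compilers it simply maps _snprintf to C99 snprintf so the example still builds:

#include <stdio.h>

#ifndef _MSC_VER
#define _snprintf snprintf  /* C99 snprintf always terminates; MSVC's does not */
#endif

static void build_response(char *response, size_t size, const char *input)
{
  int written = _snprintf(response, size, "Hello, %s", input);
  if (written < 0 || (size_t)written >= size) {
    /* Truncated (or failed): terminate explicitly so code that later
       relies on the trailing zero cannot run past the buffer and leak
       whatever follows it (e.g. a sessionToken field). */
    response[size - 1] = '\0';
  }
}

int main(void)
{
  char response[256];
  build_response(response, sizeof(response), "world");
  printf("%s\n", response);
  return 0;
}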

These are the problematic scenarios I could think of; there might be more. Now, all this might be irrelevant for your typical online game: if cheaters are your only concern, then you likely have bigger worries — cheaters have much easier ways to mess with code that runs on their end. A website on the other hand, which might be handling data from a third-party site (typically received via URL or window.postMessage()), had better be more careful. And browser extensions are clearly endangered if they are processing website data via Emscripten-compiled code.

Niko MatsakisUnification in Chalk, part 2

In my previous post, I talked over the basics of how unification works and showed how that “mathematical version” winds up being expressed in chalk. I want to go a bit further now and extend that base system to cover associated types. They turn out to be a pretty non-trivial extension.

What is an associated type?

If you’re not a Rust programmer, you may not be familiar with the term “associated type” (although many languages have equivalents). The basic idea is that traits can have type members associated with them. I find the most intuitive example to be the Iterator trait, which has an associated type Item. This type corresponds to the kind of elements that are produced by the iterator:

trait Iterator {
    type Item;
    
    fn next(&mut self) -> Option<Self::Item>;
}

As you can see in the next() method, to reference an associated type, you use a kind of path – that is, when you write Self::Item, it means “the kind of Item that the iterator type Self produces”. I often refer to this as an associated type projection, since one is “projecting out”1 the type Item.

Let’s look at an impl to make this more concrete. Consider the type std::vec::IntoIter<T>, which is one of the iterators associated with a vector (specifically, the iterator you get when you invoke vec.into_iter()). In that case, the elements yielded up by the iterator are of type T, so we have an impl like:

impl<T> Iterator for IntoIter<T> {
    type Item = T;
    fn next(&mut self) -> Option<T> { ... }
}

This means that if we have the type IntoIter<i32>::Item, that is equivalent to the type i32. We usually call this process of converting an associated trait projection (IntoIter<i32>::Item) into the type found in the impl normalizing the type.
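
As a tiny standalone illustration (mine, not from the original post), the fully qualified annotation below only type-checks because the compiler normalizes <IntoIter<i32> as Iterator>::Item to i32:

fn main() {
    let v: Vec<i32> = vec![1, 2, 3];
    let mut iter = v.into_iter();
    // The type written here is a projection; rustc normalizes it to i32.
    let first: <std::vec::IntoIter<i32> as Iterator>::Item = iter.next().unwrap();
    // Because of that normalization, `first` can be used as a plain i32.
    let doubled: i32 = first * 2;
    println!("{}", doubled);
}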

In fact, this IntoIter<i32>::Item is a kind of shorthand; in particular, it didn’t explicitly state what trait the type Item is defined in (it’s always possible that IntoIter<i32> implements more than one trait that define an associated type called Item). To make things fully explicit, then, one can use a fully qualified path like this:

<IntoIter<i32> as Iterator>::Item
 ^^^^^^^^^^^^^    ^^^^^^^^   ^^^^
 |                |          |
 |                |          Associated type name
 |                Trait
 Self type

I’ll use these fully qualified paths from here on out to avoid confusion.

Integrating associated types into our type system

In this post, we will extend our notion of types to include associated type projections:

T = ?X               // type variables
  | N<T1, ..., Tn>   // "applicative" types
  | P                // "projection" types   (new in this post)
P = <T as Trait>::X

Projection types are quite different from the existing “applicative” types that we saw before. The reason is that they introduce a kind of “alias” into the equality relationship. With just applicative types, we could always make progress at each step: that is, no matter what two types were being equated, we could always break the problem down into simpler subproblems (or else error out). For example, if we had Vec<?T> = Vec<i32>, we knew that this could only be true if ?T == i32.

With associated type projections, this is not always true. Sometimes we just can’t make progress. Imagine, for example, this scenario:

<?X as Iterator>::Item = i32

Here we know that ?X is some kind of iterator that yields up i32 elements: but we have no way of knowing which iterator it is, there are many possibilities. Similarly, imagine this:

<?X as Iterator>::Item = <T as Iterator>::Item

Here we know that ?X and T are both iterators that yield up the same sort of items. But this doesn’t tell us anything about the relationship between ?X and T.

Normalization constraints

To handle associated types, the basic idea is that we will introduce normalization constraints, in addition to just having equality constraints. A normalization constraint is written like this:

<IntoIter<i32> as Iterator>::Item ==> ?X   

This constraint says that the associated type projection <IntoIter<i32> as Iterator>::Item, when normalized, should be equal to ?X (a type variable). As we will see in more detail in a bit, we’re going to then go and solve those normalizations, which would eventually allow us to conclude that ?X = i32.

(We could use the Rust syntax IntoIter<i32>: Iterator<Item=?X> for this sort of constraint as well, but I’ve found it to be more confusing overall.)

Processing a normalization constraint is very similar to processing a standard trait constraint. In fact, in chalk, they are literally the same code. If you recall from my first Chalk post, we can lower impls into a series of clauses that express the trait that is being implemented along with the values of its associated types. In this case, if we look at the impl of Iterator for the IntoIter type:

impl<T> Iterator for IntoIter<T> {
    type Item = T;
    fn next(&mut self) -> Option<T> { ... }
}

We can translate this impl into a series of clauses sort of like this (here, I’ll use the notation I was using in my first post):

// Define that `IntoIter<T>` implements `Iterator`,
// if `T` is `Sized` (the sized requirement is
// implicit in Rust syntax.)
Iterator(IntoIter<T>) :- Sized(T).

// Define that the `Item` for `IntoIter<T>`
// is `T` itself (but only if `IntoIter<T>`
// implements `Iterator`).
IteratorItem(IntoIter<T>, T) :- Iterator(IntoIter<T>).

So, to solve the normalization constraint <IntoIter<i32> as Iterator>::Item ==> ?X, we translate that into the goal IteratorItem(IntoIter<i32>, ?X), and we try to prove that goal by searching the applicable clauses. I sort of sketched out the procedure in my first blog post, but I’ll present it in a bit more detail here. The first step is to “instantiate” the clause by replacing the variables (T, in this case) with fresh type variables. This gives us a clause like:

IteratorItem(IntoIter<?T>, ?T) :- Iterator(IntoIter<?T>).

Then we can unify the arguments of the clause with our goals, leading to two unification equalities, and combine that with the conditions of the clause itself, leading to three things we must prove:

IntoIter<?T> = IntoIter<i32>
?T = ?X
Iterator(IntoIter<?T>)

Now we can recursively try to prove those things. To prove the equalities, we apply the unification procedure we’ve been looking at. Processing the first equation, we can simplify because we have two uses of IntoIter on both sides, so the type arguments must be equal:

?T = i32 // changed this
?T = ?X
Iterator(IntoIter<?T>)

From there, we can deduce the value of ?T and do some substitutions:

i32 = ?X
Iterator(IntoIter<i32>)

We can now unify ?X with i32, leaving us with:

Iterator(IntoIter<i32>)

We can apply the clause Iterator(IntoIter<T>) :- Sized(T) using the same procedure now, giving us two fresh goals:

IntoIter<i32> = IntoIter<?T>
Sized<?T>

The first unification will yield (eventually):

Sized<i32>

And we can prove this because this is a built-in rule for Rust (that is, that i32 is sized).

Unification as just another goal to prove

As you can see in the walk through in the previous section, in a lot of ways, unification is “just another goal to prove”. That is, the basic way that chalk functions is that it has a goal it is trying to prove and, at each step, it tries to simplify that goal into subgoals. Often this takes place by consulting the clauses that we derived from impls (or that are builtin), but in the case of equality goals, the subgoals are constructed by the builtin unification algorithm.

In the previous post, I gave various pointers into the implementation showing how the unification code looks “for real”. I want to extend that explanation now to cover associated types.

The way I presented things in the previous section, unification flattens its subgoals into the master list of goals. But in fact, for efficiency, the unification procedure will typically eagerly process its own subgoals. So e.g. when we transform IntoIter<i32> = IntoIter<?T>, we actually just invoke the code to equate their arguments immediately.

The one exception to this is normalization goals. In that case, we push the goals into a separate list that is returned to the caller. The reason for this is that, sometimes, we can’t make progress on one of those goals immediately (e.g., if it has unresolved type variables, a situation we’ve not discussed in detail yet). The caller can throw it onto a list of pending goals and come back to it later.

Here are the various cases of interest that we’ve covered so far

Fallback for projection

Thus far we showed how projection proceeds in the “successful” case, where we manage to normalize a projection type into a simpler type (in this case, <IntoIter<i32> as Iterator>::Item into i32). But sometimes, when we are working with generics, we can’t normalize the projection any further. For example, consider this simple function, which extracts the first item from a non-empty iterator (it panics if the iterator is empty):

fn first<I: Iterator>(iter: I) -> I::Item {
    iter.next().expect("iterator should not be empty")
}

What’s interesting here is that we don’t know what I::Item is. So imagine we are given a normalization constraint like this one:

<I as Iterator>::Item ==> ?X

What type should we use for ?X here? What chalk opts to do in cases like this is to construct a sort of special “applicative” type representing the associated item projection. I will write it as <Iterator::Item><I> for now, but there is no real Rust syntax for this. It basically represents “a projection that we could not normalize further”. You could consider it as a separate item in the grammar for types, except that it’s not really semantically different from a projection; it’s just a way for us to guide the chalk solver.

The way I think of it, there are two rules for proving that a projection type is equal. The first one is that we can prove it via normalization, as we’ve already seen:

IteratorItem(T, X)
-------------------------
<T as Iterator>::Item = X

The second is that we can prove it just by having all the inputs be equal:

T = U
---------------------------------------------
<T as Iterator>::Item = <U as Iterator>::Item

We’d prefer to use the normalization route, because it is more flexible (i.e., it’s sufficient for T and U to be equal, but not necessary). But if we can definitively show that the normalization route is impossible (i.e., we have no clauses that we can use to normalize), then we opt for this more restrictive route. The special “applicative” type is a way for chalk to record (internally) that for this projection, it opted for the more restrictive route, because the first one was impossible.

(In general, we’re starting to touch on Chalk’s proof search strategy, which is rather different from Prolog, but beyond the scope of this particular blog post.)

Some examples of the fallback in action

In the first() function we saw before, we will wind up computing the result type of next() as <I as Iterator>::Item. This will be returned, so at some point we will want to prove that this type is equal to the return type of the function (actually, we want to prove subtyping, but for this particular type those are the same thing, so I’ll gloss over that for now). This corresponds to a goal like the following (here I am using the notation I discussed in my first post for universal quantification etc):

forall<I> {
    if (Iterator(I)) {
        <I as Iterator>::Item = <I as Iterator>::Item
    }
}

Per the rules we gave earlier, we will process this constraint by introducing a fresh type variable and normalizing both sides to the same thing:

forall<I> {
    if (Iterator(I)) {
        exists<?T> {
            <I as Iterator>::Item ==> ?T,
            <I as Iterator>::Item ==> ?T,
        }
    }
}

In this case, both constraints will wind up resulting in ?T being the special applicative type <Iterator::Item><I>, so everything works out successfully.

Let’s briefly look at an illegal function and see what happens here. In this case, we have two iterator types (I and J) and we’ve used the wrong one in the return type:

fn first<I: Iterator, J: Iterator>(iter_i: I, iter_j: J) -> J::Item {
    iter_i.next().expect("iterator should not be empty")
}

This will result in a goal like:

forall<I, J> {
    if (Iterator(I), Iterator(J)) {
        <I as Iterator>::Item = <J as Iterator>::Item
    }
}

Which will again be normalized and transformed as follows:

forall<I, J> {
    if (Iterator(I), Iterator(J)) {
        exists<?T> {
            <I as Iterator>::Item ==> ?T,
            <J as Iterator>::Item ==> ?T,
        }
    }
}

Here, the difference is that normalizing <I as Iterator>::Item results in <Iterator::Item><I>, but normalizing <J as Iterator>::Item results in <Iterator::Item><J>. Since both of those are equated with ?T, we will ultimately wind up with a unification problem like:

forall<I, J> {
    if (Iterator(I), Iterator(J)) {
        <Iterator::Item><I> = <Iterator::Item><J>
    }
}

Following our usual rules, we can handle the equality of two applicative types by equating their arguments, so after that we get forall<I, J> I = J – and this clearly cannot be proven. So we get an error.

Termination, after a fashion

One final note, on termination. We do not, in general, guarantee termination of the unification process once associated types are involved. Rust’s trait matching is Turing complete, after all. However, we do wish to ensure that our own unification algorithms don’t introduce problems of their own!

The non-projection parts of unification have a pretty clear argument for termination: each time we remove a constraint, we replace it with (at most) simpler constraints that were all embedded in the original constraint. So types keep getting smaller, and since they are not infinite, we must stop sometime.

This argument is not sufficient for projections. After all, we replace a constraint like <T as Iterator>::Item = U with an equivalent normalization constraint, where all the types are the same:

<T as Iterator>::Item ==> U

The argument for termination then is that normalization, if it terminates, will unify U with an applicative type. Moreover, we only instantiate type variables with normalized types. Now, these applicative types might be the special applicative types that Chalk uses internally (e.g., <IteratorItem><T>), but it’s an applicative type nonetheless. When that applicative type is processed later, it will therefore be broken down into smaller pieces (per the prior argument). That’s the rough idea, anyway.

Contrast with rustc

I tend to call the normalization scheme that chalk uses lazy normalization. This is because we don’t normalize until we are actually equating a projection with some other type. In contrast, rustc uses an eager strategy, where we normalize types as soon as we “instantiate” them (e.g., when we took a clause and replaced its type parameters with fresh type variables).

The eager strategy has a number of downsides, not the least of which that it is very easy to forget to normalize something when you were supposed to (and sometimes you wind up with a mix of normalized and unnormalized things).

In rustc, we only have one way to represent projections (i.e., we don’t distinguish the “projection” and “applicative” version of <Iterator::Item><T>). The distinction between an unnormalized <T as Iterator>::Item and one that we failed to normalize further is made simply by knowing (in the code) whether we’ve tried to normalize the type in question or not – the unification routines, in particular, always assume that a projection type implies that normalization wouldn’t succeed.

A note on terminology

I’m not especially happy with the “projection” and “applicative” terminology I’ve been using. It’s what Chalk uses, but it’s kind of nonsense – for example, both <T as Iterator>::Item and Vec<T> are “applications” of a type function, from a certain perspective. I’m not sure what a better choice would be, though. Perhaps just “unnormalized” and “normalized” (with types like Vec<T> always being immediately considered normalized). Suggestions welcome.

Conclusion

I’ve sketched out how associated type normalization works in chalk and how it compares to rustc. I’d like to change rustc over to this strategy, and plan to open up an issue soon describing a strategy. I’ll post a link to it in the internals comment thread once I do.

There are other interesting directions we could go with associated type equality. For example, I was pursuing for some time a strategy based on congruence closure, and even implemented (in ena) an extended version of the algorithm described here. However, I’ve not been able to figure out how to combine congruence closure with things like implication goals – it seems to get quite complicated. I understand that there are papers tackling this topic (e.g., Selsam and de Moura), but I haven’t yet had time to read them.

Comments?

I’ll be monitoring the internals thread for comments and discussion. =)

Footnotes

  1. Projection is a very common bit of jargon in PL circles, though it typically refers to accessing a field, not a type. As far as I can tell, no mainstream programmer uses it. Ah well, I’m not aware of a good replacement.

Daniel StenbergFewer mallocs in curl

Today I landed yet another small change to libcurl internals that further reduces the number of small mallocs we do. This time the generic linked list functions got converted to become malloc-less (the way linked list functions should behave, really).

Instrument mallocs

I started out my quest a few weeks ago by instrumenting our memory allocations. This is easy since curl has had its own memory debug and logging system for many years. Using a debug build of curl, I ran this script in my build dir:

#!/bin/sh
export CURL_MEMDEBUG=$HOME/tmp/curlmem.log
./src/curl http://localhost
./tests/memanalyze.pl -v $HOME/tmp/curlmem.log

For curl 7.53.1, this counted about 115 memory allocations. Is that many or a few?

The memory log is very basic. To give you an idea what it looks like, here’s an example snippet:

MEM getinfo.c:70 free((nil))
MEM getinfo.c:73 free((nil))
MEM url.c:294 free((nil))
MEM url.c:297 strdup(0x559e7150d616) (24) = 0x559e73760f98
MEM url.c:294 free((nil))
MEM url.c:297 strdup(0x559e7150d62e) (22) = 0x559e73760fc8
MEM multi.c:302 calloc(1,480) = 0x559e73760ff8
MEM hash.c:75 malloc(224) = 0x559e737611f8
MEM hash.c:75 malloc(29152) = 0x559e737a2bc8
MEM hash.c:75 malloc(3104) = 0x559e737a9dc8

Check the log

I then studied the log closer and I realized that there were many small memory allocations done from the same code lines. We clearly had some rather silly code patterns where we would allocate a struct and then add that struct to a linked list or a hash and that code would then subsequently add yet another small struct and similar – and then often do that in a loop.  (I say we here to avoid blaming anyone, but of course I myself am to blame for most of this…)

Those two allocations would always happen in pairs and they would be freed at the same time. I decided to address those. Doing very small (less than say 32 bytes) allocations is also wasteful just due to the very large amount of data in proportion that will be used just to keep track of that tiny little memory area (within the malloc system). Not to mention fragmentation of the heap.

So, fixing the hash code and the linked list code to not use mallocs was an immediate and easy way to remove over 20% of the mallocs for a plain and simple ‘curl http://localhost’ transfer.

At this point I sorted all allocations based on size and checked all the smallest ones. One that stood out was one we made in curl_multi_wait(), a function that is called over and over in a typical curl transfer main loop. I converted it over to use the stack for most typical use cases. Avoiding mallocs in very repeatedly called functions is a good thing.
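
The general pattern looks roughly like the sketch below; this is a hedged illustration with made-up names and sizes, not curl’s actual curl_multi_wait() code: keep a small array on the stack for the common case and only fall back to malloc() when more room is needed.

#include <stdlib.h>
#include <string.h>

struct pollspec { int fd; short events; };

static int wait_on_fds(const struct pollspec *extra, size_t count)
{
  struct pollspec stackbuf[16];      /* big enough for the typical transfer */
  struct pollspec *specs = stackbuf;

  if (count > sizeof(stackbuf) / sizeof(stackbuf[0])) {
    specs = malloc(count * sizeof(*specs));   /* only for the rare, large case */
    if (!specs)
      return -1;
  }

  memcpy(specs, extra, count * sizeof(*specs));
  /* ... poll/select on the descriptors here ... */

  if (specs != stackbuf)
    free(specs);
  return 0;
}

int main(void)
{
  struct pollspec one = { 0, 1 };
  return wait_on_fds(&one, 1);
}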

Recount

Today, the script from above shows that the same “curl localhost” command is down to 80 allocations from the 115 curl 7.53.1 used. Without sacrificing anything really. An easy 26% improvement. Not bad at all!

But okay, since I modified curl_multi_wait() I wanted to also see how it actually improves things for a slightly more advanced transfer. I took the multi-double.c example code, added the call to initiate the memory logging, made it use curl_multi_wait() and had it download these two URLs in parallel:

http://www.example.com/
http://localhost/512M

The second one is just 512 megabytes of zeroes and the first is a roughly 600-byte public HTML page. Here’s the count-malloc.c code.

First, I brought out 7.53.1 and built the example against that and had the memanalyze script check it:

Mallocs: 33901
Reallocs: 5
Callocs: 24
Strdups: 31
Wcsdups: 0
Frees: 33956
Allocations: 33961
Maximum allocated: 160385

Okay, so it used 160KB of memory in total and it did over 33,900 allocations. But okay, it downloaded over 512 megabytes of data, so it makes one malloc per 15KB of data. Good or bad?

Back to git master, the version we call 7.54.1-DEV right now – since we’re not quite sure which version number it’ll become when we release the next release. It can become 7.54.1 or 7.55.0, it has not been determined yet. But I digress, I ran the same modified multi-double.c example again, ran memanalyze on the memory log again and it now reported…

Mallocs: 69
Reallocs: 5
Callocs: 24
Strdups: 31
Wcsdups: 0
Frees: 124
Allocations: 129
Maximum allocated: 153247

I had to look twice. Did I do something wrong? I better run it again just to double-check. The results are the same no matter how many times I run it…

33,961 vs 129

curl_multi_wait() is called a lot of times in a typical transfer, and it had at least one of the memory allocations we normally did during a transfer so removing that single tiny allocation had a pretty dramatic impact on the counter. A normal transfer also moves things in and out of linked lists and hashes a bit, but they too are mostly malloc-less now. Simply put: the remaining allocations are not done in the transfer loop so they’re way less important.

The old curl did 263 times the number of allocations the current does for this example. Or the other way around: the new one does 0.37% the number of allocations the old one did…

As an added bonus, the new one also allocates less memory in total as it decreased that amount by 7KB (4.3%).

Are mallocs important?

In this day and age, with many gigabytes of RAM and all, do a few mallocs in a transfer really make a notable difference for mere mortals? What is the impact of 33,832 extra mallocs done for 512MB of data?

To measure what impact these changes have, I decided to compare HTTP transfers from localhost and see if we can detect any speed difference. localhost is fine for this test since there’s no network speed limit; the faster curl is, the faster the download will be. The server side will be equally fast/slow since I’ll use the same setup for both tests.

I built curl 7.53.1 and curl 7.54.1-DEV identically and ran this command line:

curl http://localhost/80GB -o /dev/null

80 gigabytes downloaded as fast as possible written into the void.

The exact numbers I got for this may not be totally interesting, as they will depend on the CPU in the machine, which HTTP server serves the file, the optimization level when I build curl, etc. But the relative numbers should still be highly relevant. The old code vs the new.

7.54.1-DEV repeatedly performed 30% faster! The 2200MB/sec in my build of the earlier release increased to over 2900 MB/sec with the current version.

The point here is of course not that it can easily transfer HTTP at over 20 Gigabit/sec using a single core on my machine – since there are very few users who actually do such speedy transfers with curl. The point is rather that curl now uses less CPU per byte transferred, which leaves more CPU over to the rest of the system to perform whatever it needs to do. Or to save battery if the device is a portable one.

On the cost of malloc: The 512MB test I did resulted in 33832 more allocations using the old code. The old code transferred HTTP at a rate of about 2200MB/sec. That equals 145,827 mallocs/second – that are now removed! A 600 MB/sec improvement means that curl managed to transfer 4300 bytes extra for each malloc it didn’t do, each second.

Was removing these mallocs hard?

Not at all, it was all straightforward. It is however interesting that there’s still room for changes like this in a project this old. I’ve had this idea for some years and I’m glad I finally took the time to make it happen. Thanks to our test suite I could do this level of “drastic” internal change with a fairly high degree of confidence that I wouldn’t introduce any terrible regressions. Thanks to our APIs being good at hiding internals, this change could be done completely without changing anything for old or new applications.

(Yeah I haven’t shipped the entire change in a release yet so there’s of course a risk that I’ll have to regret my “this was easy” statement…)

Caveats on the numbers

There have been 213 commits in the curl git repo from 7.53.1 till today. There’s a chance one or more other commits than just the pure alloc changes have made a performance impact, even if I can’t think of any.

More?

Are there more “low hanging fruits” to pick here in the similar vein?

Perhaps. We don’t do a lot of performance measurements or comparisons so who knows, we might do more silly things that we could stop doing and do even better. One thing I’ve always wanted to do, but never got around to, was to add daily “monitoring” of memory/mallocs used and how fast curl performs in order to better track when we unknowingly regress in these areas.

Addendum, April 23rd

(Follow-up on some comments on this article that I’ve read on hacker news, Reddit and elsewhere.)

Someone asked and I ran the 80GB download again with ‘time’. Three times each with the old and the new code, and the “middle” run of them showed these timings:

Old code:

real    0m36.705s
user    0m20.176s
sys     0m16.072s

New code:

real    0m29.032s
user    0m12.196s
sys     0m12.820s

The server that hosts this 80GB file is a standard Apache 2.4.25, and the 80GB file is stored on an SSD. The CPU in my machine is a Core i7-3770K at 3.50GHz.

Someone also mentioned alloca() as a solution for one of the patches, but alloca() is not portable enough to work as the sole solution, meaning we would have to add ugly #ifdefs if we wanted to use alloca() there.

About:CommunityRevitalize participation by understanding our communities

As part of the bigger Open Innovation strategy project on how openness can better drive Mozilla products and technologies, during the next few months we will be conducting research about our communities and contributors.

We want to take a detailed, data-driven look into our communities and contributors: who we are, what we’re doing, what our motivations are and how we’re connected.

Who: Understanding the people in our communities

  • How many contributors are there in the Mozilla community?
  • Who are we? (how diverse is our community?)
  • Where are we? (geography, groups, projects)

What: Understanding what people are doing

  • What are we doing? (contributing with)
  • What are our skillsets?
  • How much time are we able to devote to the project?
  • The tools we use.
  • Why do people contribute? (motivations)
  • What blocks people from contributing?
  • What other projects do we contribute to?
  • What other organisations are we connected to?
  • How much do people want to get involved?

Why: Understanding why people contribute

  • What are people’s motivations?
  • What are the important factors in contributing for Mozilla? (ethical, moral, technological, etc.)
  • Is there anything Mozilla can do that will lead volunteers to contribute more?
  • For people who have left the project: why do they no longer contribute?

How & Where: Understanding the shape of our communities and our people’s networks

  • What are the different groups and communities?
  • Who’s inside each group (regional and functional).
  • What is the overlap between people in groups?
  • Which groups have the most overlap, which have the least? (not just a static view, but also over time)
  • How are contributors connected to each other? (related to the “where”)
  • How are our contributors connected to other projects, to Mozilla, etc.?

In order to answer all these questions, we have divided the work into three major areas.

Contributors and Contributions Data Analysis

Analyzing past quantitative data about contributions and contributors (from sources like Bugzilla, GitHub, mailing lists, and others) to identify patterns and draw conclusions about contributors, contributions and communities.

Communities and Contributors survey

Designing and administering a qualitative survey to as many active contributors as possible (also trying to survey people who have stopped contributing to Mozilla) to get a full view of our volunteers (demographics), motivations, which communities people identify with, and their experience with Mozilla. We’ll use this to identify patterns in motivations.

Insights

We’ll bring together the conclusions and data from both of the above components to articulate a set of insights and recommendations that can be a useful input to the Open Innovation Strategy project.

In particular, one aim that we have is to cross reference individuals from the Mozillians Survey and Data Analysis to better understand — on aggregate — how things like motivations and identity relate to contribution.

Our commitments

In all of this work we are handling data with the care you would expect from Mozilla, in line with our privacy policy and in close consultation with Mozilla’s legal and trust teams.

Additionally, we realize that we at Mozilla often ask for people’s time to provide feedback and you may have recently seen other surveys. Also, we have run research projects of this sort in the past without following up with a clear plan of action. This project is different. It’s more extensive than anything we’ve done, it is connected to a much larger project to shape Mozilla’s strategy with respect to open practices, and we will be publishing the results and data.

We would like to know your feedback and input about this project, its scope and its implementation:

  • Are we missing any areas or topics we should gather information about for our communities?
  • Which part do you feel is most relevant?
  • Where do you think communities can engage to provide more value to the work we are going to do?
  • Any other ideas we are not thinking about?

Please let us know in this discourse topic.

Thanks everyone!

Firefox UXRatings and reviews on add-ons.mozilla.org

Hello!

My name is Philip Walmsley, and I am a Senior Visual Designer on the Firefox UX team. I am also one of the people tasked with making addons.mozilla.org (or, “AMO”) a great place to list and find Firefox extensions and themes.

There are a lot of changes happening in the Firefox and Add-ons ecosystem this year (Quantum, Photon, Web Extensions, etc.), and one of them is a visual and functional redesign of AMO. This has been a long time coming! The internet has progressed in leaps and bounds since our little site was launched many years ago, and it’s time to give it some love. We’ve currently got a top-to-bottom redesign in the works, with the goal of making add-ons more accessible to more users.

I’m here to talk with you about one part of the add-ons experience: ratings and reviews. We have found a few issues with our existing approach:

  • The 5-star rating system is flawed. Star ratings are arbitrary on a user-by-user basis, and this leads to a muddling of what users really think about an add-on.
  • Some users just want to leave a rating and not write a review. Sometimes this is referred to as “blank page syndrome,” sometimes a user is just in a time-crunch, sometimes a user might have accessibility issues. Forcing users to do both leads to glib, unhelpful, and vague reviews.
  • On that note, what if there was a better way to get reviews from users who may not speak your native tongue? What if instead of writing a review, a user had the option to select tags or qualities describing their experience with an add-on? This would greatly benefit devs (‘80% of the global community think my extension is “Easy to use”!’) and other users (‘80% of the global community believe this extension is “Easy to use”!’).
  • We don’t do a very good job of triaging users’ actual issues: a user might love an extension but have an (unbeknownst to them) easily-solved technical problem. Instead of leaving a negative 1-star review for this extension that keeps acting weird, can we guide that user to the developer or Mozilla support?
  • We also don’t do a great job of facilitating developer/user communication within AMO. Wouldn’t it be great if you could rectify a user’s issue from within the reviews section on your extension page, changing a negative rating to a positive one?

So, as you can see, we’ve got quite a few issues here. So let’s simplify and tackle these one-by-one: Experience, Tags, Triage.

Experience

The star rating has its place. It is very useful in systems where the rating you leave is relevant to you and you alone. Your music library, for example: you know why you rate one song two stars and another at four. It is a very personal but very arbitrary way of rating something. Unfortunately, this rating system doesn’t scale well when more than one person is reviewing the same thing: If I love something but rate it two stars because it lacks a particular feature, what does that mean to other users or the overall aggregated rating? It drags down the review of a great add-on, and as other users scan reviews and see 2-stars, they might leave and try to find something else. Not great.

What if instead of stars, we used emotions?

Some of you might have seen these in airports or restrooms. It is a straightforward and fast way for a group of people to indicate “Yep, this restroom is sparkling and well-stocked, great experience.” Or “Someone needs to get in here with a mop, PRONTO.” It changes throughout the day, and an attendant can address issues as they arise. Or, through regular maintenance, they can achieve a happy face rating all day.

What if we applied this method to add-ons? What if the first thing we asked a user once they had used an extension for a day or so was: “How are you enjoying this extension?” and presented them with three faces: Grinning, Meh, and Sad. At a very high level, this gives users and developers a clear, overall impression of how people feel about using this add-on (“90% grinning face for this extension? People must like it, let’s give it a try.”).

So! A user has contributed some useful rating data, which is awesome. At this point, they can leave the flow and continue on their merry way, or we can prompt them to quickly leave a few more bits of even MORE useful review data…

Tags

Writing a review is hard. Let me rephrase that: Writing a good review is hard. It’s easy to fire off something saying “This add-on is just ok.” It’s hard to write a review explaining in detail why the add-on is “just ok.” Some (read: most) users don’t want to write a detailed review, for many reasons: time, interest, accessibility, etc. What if we provided a way for these users to give feedback in a quick and straightforward way? What if, instead of staring down a blank text field, we displayed a series of tags or descriptors based on the emotion rating the user just gave?

For example, I just clicked a smiling face to review an extension I’m enjoying. Right after that, a grid of tags with associated icons pops up. Words like “fast”, “stable”, “easy to use”, “well-designed”, “fun”, etc. I liked the speed of this extension, so I click “fast” and “stable” and submit my options. And success: I have submitted two more pieces of data that are useful to devs and users. Developers can find out what users like about their add-on, and users can see what other users are thinking before committing to downloading. We can pop up different tags based on the emotion selected: if a user taps Meh or Sad, we can pop up tags to find out why the user selected that initially. The result is actionable review data that can be translated across all languages spoken by our users! Pretty cool.

Triage

Finally, we reach triage. Once a user submits tag review data, we can present them with a few more options. If a user is happy with this extension and wants to contribute even more, we can present them with an opportunity to write a review, or share it with friends, or contact the developer personally to give them kudos. If a user selected Meh, we could suggest reading some developer-provided documentation, contacting support, or writing a review. If the user selected Sad, we’d point them to developer or Mozilla support, extension documentation, filing a bug/issue, or writing a review. That way we can make sure a user gets the help they need, and we can avoid unnecessary poor reviews. All of these options will also be available on the add-on page, so a user always has access to these different actions. If a user leaves a review expressing frustration with an add-on, devs will be able to reply to the review in-line, so other users can see issues being addressed. Once a dev has responded, we will ask the user if this has solved their problem and if they’d like to update their review.

We’ve covered a lot here! Keep in mind that this is still in the early proposal stage and things will change. And that’s good; we want to change this for the better. Is there anything we’ve missed? Other ideas? What’s good about our current rating and review flow? What’s bad? We’d love constructive feedback from AMO users, extension developers, and theme artists.

Please visit this Discourse post to continue the discussion, and thanks for reading!

Philip (@pwalm)
Senior Visual Designer, Firefox UX


Ratings and reviews on add-ons.mozilla.org was originally published in Firefox User Experience on Medium, where people are continuing the conversation by highlighting and responding to this story.

Air MozillaWebdev Beer and Tell: April 2017

Webdev Beer and Tell: April 2017 Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on in...

Mozilla Open Policy & Advocacy BlogDutch court ruling puts net neutrality in question

On Thursday, April 20th a Rotterdam Court ruled that T-Mobile’s zero rated service “Data Free Music” is legal. The court declared that the Dutch net neutrality law, which prohibits zero rating, is not in accordance with the EU net neutrality law that Brussels lawmakers passed last year.

Zero rating is bad for the long term health of the internet. By disrupting the level playing field and allowing discrimination, zero rating poses a threat to users, competition, and opportunity online.

The Netherlands has been a model to the world in protecting net neutrality. It’s alarming to see these vital protections for users, competition, and opportunity online struck down.

The power and potential of the Internet is greatest when users can access the full diversity of the open Internet, not just some parts of it. We urge the Authority for Consumers & Markets (ACM) to appeal this decision swiftly, and we hope that higher courts will restore the Internet’s level playing field.

The post Dutch court ruling puts net neutrality in question appeared first on Open Policy & Advocacy.

Doug BelshawCan digital literacy be deconstructed into learnable units?

Earlier this week, Sally Pewhairangi got in touch to ask if I’d be willing to answer four questions about digital literacy, grouped around the above question. She’ll be collating answers from a number of people in due course but, in the spirit of working openly, I’m answering her questions here.

Deconstructed

 1. What are the biggest mistakes novices make when becoming digitally literate?

The three things I stress time and time again in my keynotes, writing, and workshops on this subject are:

  1. Digital literacies are plural
  2. Digital literacies are context-dependent
  3. Digital literacies are socially-negotiated

As such, there is no stance from which you could call someone ‘digitally literate’, because (as Allan Martin has pointed out), it is a condition, not a threshold. There is no test you could devise to say whether someone was ‘digitally literate’, except maybe at a very particular snapshot in time, for a very defined purpose, in a certain context.

That being said, and to answer the question, I think the main mistake that we make is to equate surface-level, procedural skills with depth of thought and understanding. I’m certain this is where the myth of the ‘digital native’ came from. Use does not automatically lead to expertise and understanding.

 2. What mistakes are common at a pro level?

By ‘pro level’, I’m assuming that this means someone who is seen as having the requisite digital knowledge, skills, and behaviours to thrive in their given field. As such, and because this is so context-dependent, it’s difficult to generalise.

Nevertheless, what I observe in myself and others is an assumption that I/we/they have somehow ‘made it’ in terms of digital literacies. It’s an ongoing process of development, not something whereby you can sit back and rest on your laurels. I’m constantly surprised by digital practices and the effects that technologies have on society.

As with the stock market, past performance isn’t a reliable guide to future success, so just because something looks ‘stupid’, ‘unimportant’, or otherwise outside my/your/our frame of reference doesn’t mean that it’s not worth investigating.

I’d also comment on how important play is to the development of digital literacies. Learning something because you have to, or because someone has set you a target, is different from doing so of your own accord. Self-directed learning is messy and, from the point of view of an instructor, ‘inefficient’. However, to my mind, it’s the most effective type of learning there is. In general, there should be more remixing and experimentation in life, and less deference and conformity.

 3. Can you learn the building blocks of digital literacy without access to the web? Where would you start? What would be the biggest misuse of time?

To be ‘literate’ means to be part of a community of literate peers. To quote my own thesis:

Given the ubiquitous and mandated use of technology in almost every occupation, students are left with a problem. They ‘seek to enter new communities… but do not yet have the knowledge necessary to act as “knowledgeable peers” in the community conversation’ (Taylor & Ward, 1998, p.18). Educators seeking to perpetuate Traditional (Print) Literacy often exploit the difference between students ‘tool literacy’ on the one-hand (their technical ability) and their understanding of, and proficiency in ‘literacies of representation’ (making use of these abilities for a purpose). Students are stereotyped at having great technical ability but lacking the skills to put these into practice. Given the ‘duty of care’ educational institutions have, reference is therefore made to ‘e-safety’, ‘e-learning’ and ‘e-portfolios’ - slippery terms that sound important and which serve to reinforce a traditional teacher-led model of education. As Bruffee points out, “pooling the resources that a group of peers brings with them to the task may make accessible the normal discourse of the new community they together hope to enter.“ (Taylor & Ward, 1998, p.18). The barrier, in this case, is the traditional school classroom and the view that Traditional Literacy is a necessary and sufficient conditional requirement for entry into such communities.

It’s almost unthinkable to have a digital device that isn’t networked and connected to other devices. As such, I would say that this is a necessary part of digital literacies. Connecting to other people using devices is just the way the world works these days, and to claim to be digitally up-to-date without these digital knowledge/skills/behaviours would seem out of touch.

As with almost any arena of development, improving takes deliberate practice - something I’ve written about elsewhere. You have to immerse yourself in the thing you want to get better at, whether that’s improving your piano playing, sinking 3-pointers in basketball, or learning how to tweet effectively.

The biggest misuse of time? Learning things that used to be important but which are now anachronisms. Some teachers/mentors/instructors seem to think that those learning digital literacies require a long, boring history lesson on how things used to be. While this may be of some value, there’s enough to learn about the ways things are now - the power structures, the different forms of discourse, important nuances. And I say this as a former History teacher.

 4. What are your favourite instructional books or resources on digital literacies? If people were to teach themselves what would you suggest they use?

I’d recommend the following for a general audience:

There are plenty of books for those looking to develop digital literacies in an academic context. I’d look out for anything by Colin Lankshear and/or Michele Knobel. I’ve written a book called The Essential Elements of Digital Literacies which people seem to have found useful.

Reading about digital literacies is a bit like dancing about architecture, however. There’s no substitute for keeping up-to-date by following people who are making sense of the latest developments. For that, the following is a short, incomplete, and partial list:

I’ve linked to the Twitter accounts of the above individuals, as I find that particular medium extremely good for encouraging the kind of global, immersive, networked digital literacies that I think are important. However, I may be wrong and out of touch, as Snapchat confuses me.

Finally, because of the context-dependency of digital literacies, it’s important to note that discourse in this arena differs depending on which geographical area you’re talking about. In my experience, and I touched on this in my thesis, what ‘counts’ as digital literacies depends on whether you’re situated in Manchester, Mumbai, or Melbourne.


Questions? Comments? I’m @dajbelshaw on Twitter, or you can email me: hello@dynamicskillset.com

Image CC0 Florian Klauer

Ehsan AkhgariQuantum Flow Engineering Newsletter #6

I would like to share some updates about some of the ongoing performance related work.
We have started looking at the native stack traces that are submitted through telemetry from the Background Hang Reports that take more than 8 seconds.  (We have been hoping to reduce this threshold to 256ms for a while now, but the road has been bumpy — it should land really soon now!)  Michael Layzell put together a telemetry analysis job that creates a symbolicated version of this data here: https://people-mozilla.org/~mlayzell/bhr/.  For example, this is the latest generated report.  The grouping of this data is unfortunate, since the data is collected based on the profiler pseudo-stack labels, which are captured after 128ms, and then the native stack (if the hang continues for 8 seconds) gets captured after that, so the pseudo-stack and the native stack may or may not correspond, and this grouping also doesn’t help with going through the list of native stacks and triaging them more effectively.  Work is under way to create a nice dashboard out of this data, but in the meantime this is an area where we could really use all of the help that we can get.  If you have some time, it would be really nice if you could take a look at this data and see if you can make sense of some of these call stacks and find some useful bug reports out of them.  If you do end up filing bugs, these are super important bugs to work on, so please make sure you add “[qf]” to the status whiteboard so that we can track the bug.
Another item worthy of highlight is Mike Conley’s Oh No! Reflow! add-on.  Don’t let the simple web page behind this link deceive you, this add-on is really awesome!  It generates a beep every time a long-running reflow happens in the browser UI (which, of course, you get to turn off when you don’t need to hunt for bugs!), and it logs the sync reflows that happened alongside the JS call stack of the code that triggered them, and it also gives you a single link that allows you to quickly file a bug with all of the right info in it, pre-filled!  In fact you can see the list of already filed bugs through this add-on!
Another issue that I want to bring up is the [qf:p1] bugs.  As you have noticed, there are a lot of them.  🙂  It is possible that some of these bugs aren’t important to work on, for example because they only affect edge-case conditions that affect a super small subset of users, which wasn’t obvious when the bug was triaged.  In some other cases it may turn out that fixing the bug requires massive amounts of work that is unreasonable to do in the amount of time we have, or that the right people for it are doing more important work and can’t be interrupted, and so on.  Whatever the issue is, whether the bug was mis-triaged or can’t be fixed, please make sure to raise it on the bug!  In general the earlier these issues are uncovered the better, because everyone can focus their time on more important work.  I wanted to make sure that this wasn’t lost in all of the rush around our communication for Quantum Flow; my apologies if this hasn’t been clear before.
On to the acknowledgement section, I hope I’m not forgetting to mention anyone’s name here!

Cameron KaiserThe аррӏе bites back

I've received a number of inquiries about whether TenFourFox will follow the same (essentially wontfix) approach of Firefox for dealing with those international domain names that happen to be whole-script homographs. The matter was forced recently by one enterprising sort who created just this sort of double using Cyrillic characters for https://www.аррӏе.com/, which, depending on your font and your system setup, may look identical to https://www.apple.com/ (the site is a proof of concept only).

The circulating advice is to force all IDNs to be displayed in punycode by setting network.IDN_show_punycode to true. This is probably acceptable for most of our users (the vast majority of TenFourFox users operate with a Latin character set), but I agree with Gerv's concern in that Bugzilla entry that doing so disadvantages all other writing systems that are not Latin, so I don't feel this should be the default. That said, I also find the current situation unacceptable and doing nothing, or worse relying on DNS registrars who so far don't really care about anything but getting your money, similarly so. While the number of domains that could be spoofed in this fashion is probably small, it is certainly greater than one, and don't forget that they let the proof-of-concept author register his spoof!

Meanwhile, I'm not sure what the solution right now should be other than "not nothing." Virtually any approach, including the one Google Chrome has decided to take, will disadvantage non-Latin scripts (and the Chrome approach has its own deficiencies and is not IMHO a complete solution to the problem, nor was it designed to be). It would be optimal to adopt whatever solution Firefox eventually decides upon for consistency if they do so, but this is not an issue I'd like to sit on indefinitely. If you use a Latin character set as your default language, and/or you don't care if all domains will appear in either ASCII or punycode, then go ahead and set that pref above; if you don't, or consider this inappropriate, stay tuned. I'm thinking about this in issue 384.

By the way, TenFourFox "FPR0" has been successfully uploaded to Github. Build instructions to follow and the first FPR1 beta should be out in about two to three weeks. I'm also cogitating over a blog post discussing not only us but other Gecko forks (SeaMonkey, Pale Moon, etc.) which for a variety of reasons don't want to follow Mozilla into the unclear misty haze of a post-XUL world. To a first approximation our reasons are generally technical and theirs are primarily philosophical, but we both end up doing some of the same work and we should talk about that as an ecosystem. More later.

Air MozillaWorldBots Meetup 4/20/17

WorldBots Meetup 4/20/17 WorldBots Meetup 2017-04-20 19:00 - 21:00 We're throwing the first World Bot Meetup! International experts from all over the world will talk about the culture,...

Air MozillaReps Weekly Meeting Apr. 20, 2017

Reps Weekly Meeting Apr. 20, 2017 This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

QMOFirefox 54 Beta 3 Testday, April 28th

Hello Mozillians,

We are happy to let you know that Friday, April 28th, we are organizing Firefox 54 Beta 3 Testday. We’ll be focusing our testing on the following new features: Net Monitor MVP and Download Panel UX Redesign.

Check out the detailed instructions via this etherpad.

No previous testing experience is required, so feel free to join us on #qa IRC channel where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better!

See you on Friday!

Mozilla Localization (L10N)Localizing Firefox in Barcelona

We were thrilled to start the year’s localization (l10n) community workshops in Barcelona at the end of March 2017! Thanks to the help of the ever dedicated Alba and Benny the workshop was fun, productive, and filled with amazing Catalonian food.

This workshop aimed to gather together core and active localizers from twenty-one l10n communities scattered throughout the southern parts of Western and Eastern Europe. Unlike the 2016 l10n hackathons, this was the first time we brought these twenty-one communities together to share experiences, ideas, and hack on Mozilla l10n projects together.

The workshop was held at Betahaus, a local co-working space located in the Vila de Gràcia district of Barcelona. The space was great for both large group presentations and small group breakouts. We had room to move around, brainstorm on whiteboards, and play our favorite icebreaker game, spectrograms.

All of the l10n-drivers were present for this workshop (another first) and many gave presentations on their main projects. Localizers got a look into new developments with L20n, Pontoon, and Pootle. We also had a glimpse into cross-channel localization for Firefox and how localizers can prepare for it to come in June.

Following tradition, l10n communities came to the workshop with specific goals to accomplish while there. While together, these communities were able to complete around 75% of their goals. These goals largely centered on localization quality and testing, but also included translating strings for Mozilla products and web sites, and planning how to recruit new localizers.

We couldn’t imagine being in Barcelona without taking part in a cultural activity as a group. Alba was kind enough to guide the whole group through the city on Saturday night and show us some of the most prominent sites, like the Sagrada Família (which happened to be the most popular site among the l10n communities).

On Sunday, the l10n communities and drivers gathered around four different tables to discuss four different topics in 30-minute chunks of time. Every 30 minutes, Mozillians moved to a different table to discuss the topic assigned to that table. These topics included localization quality, style guides, recruiting new localizers, and mentoring new localizers. It was a great opportunity for both veteran and new localizers to come together and share their experience with each topic and ideas on how to take new approaches to each. Sure, it was a bit chaotic, but everyone was flexible and willing to participate, which made it a good experience nevertheless.

For more info about the workshop (including the official Spotify playlist of the workshop), visit the event’s wiki page here. ¡Hasta luego!

Mozilla Reps CommunityReps Program Objectives – Q2 2017

As we did in the past few quarters, we have decided on the Reps Program Objectives for this quarter. Again we have worked with the Community Development Team to align our goals with the broader scope of their goals, which are highly relevant for the Reps program; the Reps’ goals are tightly coupled with them. In the following graphic you can see how all these goals play together.

Objective 1 – RepsNext is successfully completed paving the way for our next improvement program

  • KR 1 – The Coaching plan is implemented and we are able to scale
  • KR 2 – Budget requests submitted after June 1st go through the trained Resource Reps
  • KR 3 – Reps can get initial resources to improve their Leadership skills
  • KR 4 – Core community sentiment NPS >11.5 (Konstantina as in Q1)
  • KR 5 – Mobilizer sentiment NPS >15 (Konstantina as in Q1)
  • KR 6 – We have a GitHub issue to plan for the future of Reps with an exclusive focus on functional contributions
  • KR 7 – The Facebook experiment is analyzed and being continued if successful
  • KR 8 – 2 communication improvements are identified
  • KR 9 – It takes a maximum of 2 weeks for new applicants to have their first task assigned

Objective 2 – MozActivate focuses mobilizers on impactful areas

  • KR 1 – General feedback form is used by 100% of MozActivate activities
  • KR 2 – We have implemented metrics and measurements for the existing MozActivate and to-be-launched activities as well as for the website itself
  • KR 3 – 70 Reps have organized one or more MozActivate activity
  • KR 4 – Activate is actively engaging 70 new technical contributors
  • KR 5 – 2 new activities are launched

Objective 3 – The Reps program demonstrates operational excellence in the Mozilla Project

  • KR 1 – Goals for Q3 have been set
  • KR 2 – We were involved and gave feedback about the Community Development Team OKRs for Q3 as well as the broader Open Innovation ones
  • KR 3 – The budget allocation for Q3 is finalized and communicated to all Reps
  • KR 4 – We have on average maximum one open action item from last week before every Council Meeting that is not tracked on GitHub and next steps/blockers are identified
  • KR 5 – We have planned 2 brainstorm sessions for the next improvement program
  • KR 6 – We have given feedback for Open Innovation’s “Strategy” project and are a valuable source for future consultation for strategy related questions

We will work closely with the Community Development Team to achieve our goals. You can follow the progress of these tasks in the Reps Issue Tracker. We also have a new Dashboard to track the status of each objective.

Which of the above objectives are you most interested in? What key result would you like to hear more about? What do you find intriguing? Which thoughts cross your mind upon reading this? Where would you like to help out? Let’s keep the conversation going! Join the discussion on Discourse.

 

The Mozilla BlogThis April, Mozilla is Standing Up for Science

Mozilla supports the March for Science. And we’re leading projects to make scientific research more open and accessible, from extraterrestrial hackathons to in-depth fellowships

 

We believe openness is a core component not just of a healthy Internet, but also a healthy society. Much like open practices can unlock innovation in the realm of technology, open practices can also invigorate fields like civics, journalism — and science.

In laboratories and at academic institutions, open source code, data and methodology foster collaboration between researchers; spark scientific progress; increase transparency and encourage reproducibility; and better serve the public interest.

Open data has been shown to speed up the study process and vaccine development for viruses, like Zika, at global scale. And open practices have allowed scientific societies from around the globe to pool their expertise and explore environments beyond Earth.

This April, Mozilla is elevating its commitment to open science. Mozilla Science Lab, alongside a broader network of scientists, developers and activists, is leading a series of programs and events to support open practices in science.

Our work aligns with the April 22 March for Science, a series of nonpartisan gatherings around the world that celebrate science in the public interest. We’re proud to say Teon Brooks, PhD — neuroscientist, open science advocate and Mozilla Science Fellow — is serving as a March for Science Partnership Outreach Co-Lead.

From science fellowships to NASA-fueled hackathons, here’s what’s happening at Mozilla this April:

Signage for Science Marchers

We want to equip March for Science participants — from the neuroscientist to the megalosaurus-obsessed third grader — with signs that spotlight their passion and reverence for science. So Mozilla is asking you for your most clever, impassioned science-march slogans. With them, our designers will craft handy posters you can download, print and heft high.

Learn more here.

Seeking Open Science Fellows

This month, Mozilla began accepting applications for Mozilla Fellowships for Science. For the third consecutive year, we are providing paid fellowships to scientists around the world who are passionate about collaborative, iterative and open research practices.

Mozilla Science Fellows spend 10 months as community catalysts at their institutions, and receive training and support from Mozilla to hone their skills around open source, data sharing, open science policy and licensing. Fellows also craft code, curriculum and other learning resources.

Fellowship alums hail from institutions like Stanford University and University of Cambridge, and have developed open source tools to teach and study issues like bioinformatics, climate science and neuroscience.

Apply for a fellowship here. And read what open science means to Mozillian Abigail Cabunoc Mayes: My Grandmother, My Work, and My Open Science Story

Calling for Open Data

In the United States, federal taxes help fund billions of dollars in scientific research each year. But the results of that research are frequently housed behind pricey paywalls, or within complex, confounding systems.

Citizens should have access to the research they help fund. Further, open access can spark even more innovation — it allows entrepreneurs, researchers and consumers to leverage and expand upon research. Just one example: Thanks to publicly-funded research made openly available, farmers in Colorado have access to weather data to predict irrigation costs and market cycles for crops.

Add your name to the petition: https://iheartopendata.org.

Calling for Open Citations

Earlier this month, Mozilla announced support for the Initiative for Open Citations (I4OC), a project to make citations in scientific research open and freely accessible. I4OC is a collaboration between Wikimedia, Bill & Melinda Gates Foundation, a slate of scholarly publishers and several other organizations.

Presently, citations in many scholarly publications are inaccessible, subject to restrictive and confusing licenses. Further, citation data is often not machine readable — meaning we can’t use computer programs to parse the data.

I4OC envisions a global, public web of citation data — one that empowers teaching, learning, innovation and progress.

Learn more about I4OC.

Extraterrestrial Hackathon (in Brooklyn)

Each year, the Space Apps hackathon allows scientists, coders and makers around the world to leverage NASA’s open data sets. In 2016, 5,000 people across six continents contributed. Participants built apps to measure air quality, to remotely explore gelid glaciers and to monitor astronauts’ vitals.

For the 2017 Space Apps Hackathon — slated for April 28-30 — participants will use NASA data to study Earth’s hydrosphere and ecological systems. Mozilla Science is hosting a Brooklyn-based Space Apps event, which will include a data bootcamp.

Learn more at http://spaceappsbrooklyn.com/

The post This April, Mozilla is Standing Up for Science appeared first on The Mozilla Blog.

Mozilla VR BlogWebVR Google Daydream support lands in Servo

Want to try this now? Download this three.js Rollercoaster Demo (Android APK)!

We are happy to announce that Google Daydream VR headset and Gamepad support are landing in Servo. The current implementation is WebVR 1.1 spec-compliant and supports asynchronous reprojection to achieve low-latency rendering.

If you are eager to explore, you can download an experimental three.js Rollercoaster Demo (Android APK) compatible with Daydream-ready Android phones. Put on the headset, switch on your controller, and run the app from Daydream Home or from a direct launch.

We have contributed to many parts of the Servo browser codebase in order to allow polished WebVR experiences on Android. It’s nice that our WebVR support goals have allowed us to push forward some improvements that are also useful for other areas of the Android version of Servo.

VR Application life cycle

Daydream VR applications have to gracefully handle several VR Entry flows such as transitions between the foreground and background, showing and hiding the Daydream pairing screen, and adding the GvrLayout Android View on top of the view hierarchy. To manage the different scenarios we worked on proper implementations of native EGL context lost and restore, animation loop pause/resume, immersive full-screen mode, and support for surface-size and orientation changes.

Servo uses a NativeActivity, in combination with android-rs-glue and glutin, as an entry point for the application. We realized that NativeActivity ignores the Android view hierarchy because it’s designed to take over the surface from the window to directly draw to it. The Daydream SDK requires a GvrLayout view in the Activity’s view hierarchy in order to show the VR Scene, so things didn’t work out.

Research into this issue shows that most people decide to get rid of NativeActivity or bypass this limitation using hacky PopupWindow modal views. The PopupWindow hack may work for simple views like a Google AdMob banner but causes complications with a complex VR view. We found a more elegant solution by releasing the seized window and injecting a custom SurfaceView with its render callbacks redirected to the abstract implementation in NativeActivity.

This approach works great, and we can reuse the existing code for native rendering. We do, however, intend to remove NativeActivity in the future. We’d like to create a WebView API-based Servo component that will allow developers to embed their content from Android standalone apps or using WebView-based engine ecosystems such as Cordova. This will involve modifications to various Servo layers coupled with NativeActivity callbacks.

Build System

Thanks to the amazing job of both the Rustlang and Servo teams, the browser can be compiled with very few steps, even on Windows now. This is true for Android too, but the packaging step was still using Ant combined with Python scripts. We replaced it with a new Gradle-based packaging system, which offers some nice benefits:

  • A scalable dependency system that allows including Gradle/AAR-based dependencies such as the GoogleVR SDK.
  • Relative paths for all project libraries and assets instead of multiple copies of the same files.
  • Product flavors for different versions of Servo (e.g. Default, VR Browser, WebView)
  • Android Studio and GPU debugger support.

The new Gradle integration paves the way for packaging Servo APKs with the Android AArch64 architecture. This is important to get optimal performance on VR-ready phone CPUs. Most of the Rust package crates that Servo uses can be compiled for AArch64 using the aarch64-linux-android Rust compilation target. We still, however, need to fix some compilation issues with some C/C++ dependencies that use cmake, autotools or pure Makefiles.

Other necessary improvements to support WebVR

There’s a plethora of rough edges we have to polish as we make progress with the WebVR implementation. This is a very useful exercise that improves Servo’s Android support as a compelling platform for delivering not only WebVR content, but graphics-intensive experiences in general. To reach this milestone, we had to improve several areas of the Android port.

Daydream support on Rust WebVR

These notable Android improvements, combined with the existing cross-platform WebVR architecture, provide a solid base for Daydream integration into Servo. We started by integrating Daydream support in the browser dependency-free rust-webvr library.

The Google VR NDK for Android provides a C/C++ API for both Daydream and Cardboard headsets. As our codebase is written in Rust, we used rust-bindgen to generate the required bindings. We also published the gvr-sys crate, so from now on anyone can easily use the GVR SDK in Rust for other use cases.

The GoogleVRService class offers the entry point to access GVR SDK and handles life-cycle operations such as initialization, shutdown, and VR Device discovery. The integration with the headset is implemented in GoogleVRDisplay. Daydream lacks positional tracking, but by using the neck model provided in the SDK, we expose a basic position vector simulating how the human head naturally rotates relative to the base of the neck.

A Java GvrLayout view is required in order get a handle to the gvr_context, apply lens distortion, and enable asynchronous-reprojection-based rendering. This adds some complexity to the implementation because it involves adding both the Java Native Interface (JNI) and Java code to the modular rust-webvr library. We created a Gradle module to handle the GvrLayout-related tasks and a helper JNIUtils class to communicate between Rust and Java.

One of the complexities of this interoperation is that the JNI FindClass function fails to find our custom Java classes. This happens because when attaching native Rust threads to a JavaVM, the JNI AttachCurrentThread call is unaware of the current Java application context and it uses the system ClassLoader instead of the one associated with the application. We fixed the issue by retrieving the ClassLoader from the NativeActivity’s jobject instance and performing loadClass calls directly on it. I’m waiting for variadic templates to land in Rustlang to extend and move these JNI utils into their own crate, providing an API similar to the one I implemented for the C++11 SafeJNI library.

In order to present the WebGL canvas into the headset we tried to use a shared texture_id as we did in the OpenVR implementation. Unfortunately, the GVR SDK allows attaching only external textures that originate from the Android MediaCodec or Camera streams. We opted for a BlitFramebuffer-based solution, instead of rendering a quad, to avoid implementing the required OpenGL state-change safeguards or context switching:
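
In rough form, the blit looks like this (a sketch using gleam-style GL bindings; the function and parameter names are illustrative rather than Servo’s exact code):

// Copy the framebuffer backing the WebGL canvas into the framebuffer that
// GVR hands us for the current frame. A blit needs no shader, no vertex
// state and no extra context, unlike drawing a textured quad would.
fn blit_canvas_to_gvr(gl: &gleam::gl::Gl,
                      canvas_fbo: u32,
                      gvr_fbo: u32,
                      width: i32,
                      height: i32) {
    gl.bind_framebuffer(gleam::gl::READ_FRAMEBUFFER, canvas_fbo);
    gl.bind_framebuffer(gleam::gl::DRAW_FRAMEBUFFER, gvr_fbo);
    gl.blit_framebuffer(0, 0, width, height,  // source rectangle
                        0, 0, width, height,  // destination rectangle
                        gleam::gl::COLOR_BUFFER_BIT,
                        gleam::gl::LINEAR);
}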

Once the Daydream integration was tested using the pure Rust room-scale demo, we integrated it pretty quickly into Servo. It fit perfectly into the existing WebVR architecture. WebVR tests ran well, except that VRDisplay.requestPresent() failed on some random launches. This was caused by a potential deadlock during the very specific frame in which requestAnimationFrame is moved from window to VRDisplay. Fortunately, this was fixed with this PR.

In order to reduce battery usage, when a JavaScript thread starts presenting to the Daydream headset, the swap_buffers call on the NativeActivity’s EGLContext is avoided. The optimized VR render path draws only into the texture framebuffer attached to the WebGL canvas. This texture is sent to the GvrLayout presentation view when VRDisplay.submitFrame() is called, and lens distortion is then applied.

Gamepad Integration

Gamepad support is a necessity for complete WebVR experiences. Similarly to the VRDisplay implementation, integrations with the vendor-specific gamepad SDKs are implemented in rust-webvr, based on the following traits and structs:
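
A simplified sketch of the shape of that interface (the names and fields below are illustrative, not the exact rust-webvr API):

// Per-frame controller state, shared between the WebVR thread and the DOM.
pub struct VRGamepadState {
    pub gamepad_id: u32,
    pub connected: bool,
    pub axes: Vec<f64>,
    pub buttons: Vec<VRGamepadButton>,
}

pub struct VRGamepadButton {
    pub pressed: bool,
    pub touched: bool,
}

// Implemented once per vendor SDK, e.g. for the Daydream controller.
pub trait VRGamepad {
    fn id(&self) -> u32;
    fn name(&self) -> String;
    // Poll the underlying SDK and return the latest state.
    fn state(&self) -> VRGamepadState;
}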

These traits are used in both the WebVR Thread and DOM Objects in the Gamepad API implementation in Servo.

Vendor-specific SDKs don’t allow using the VR gamepads independently, so navigator.vr.getDisplays() must be called in order to spin up VR runtimes and make VR gamepads discoverable later in subsequent navigator.getGamepads() calls.

The recommended way to get valid gamepad state on all browsers is calling navigator.getGamepads() within every frame in your requestAnimationFrame callback. We created a custom GamepadList container class with two main purposes:

  • Provide a fast and Garbage Collection-friendly container to share the gamepad list between Rust and JavaScript, without creating or updating JS arrays every frame.

  • Implement an indexed getter method which will be used to hide gamepads according to privacy rules. The Gamepad spec permits the browser to return inactive gamepads (e.g., [null, <object Gamepad>]) when gamepads are available but in a different, hidden tab.

The latest gamepad state is polled immediately in response to the navigator.getGamepads() API call. This is a different approach than the one implemented in Firefox, where the gamepads are vsync-aligned and have their data already polled when requestAnimationFrame is fired. Both options are equally valid, though being able to immediately query for gamepads enables a bit more flexibility:

  • Gamepad state can be sampled multiple times per frame, which can be very useful for motion-capture or drawing WebVR applications.
  • Vsync-aligned polling can be simulated by just calling navigator.getGamepads at the start of the frame. Remember from the Servo WebVR architecture that requestAnimationFrame is fired in parallel, which allows some JavaScript code to execute ahead of time during the VR headset’s vsync window, until VRDisplay#getFrameData is called.

Conclusion

We are very excited to see how far we’ve evolved the WebVR implementation on Servo. Now that Servo has a solid architecture on both desktop and mobile, our next steps will be to grow and tune up the WebGL implementation in order to create a first-class WebVR browser runtime. The Gear VR backend is coming too ;) Stay tuned!

Wladimir PalantIs undetectable ad blocking possible?

This announcement by Princeton University is making its rounds in the media right now. What the media seems to be most interested in is their promise of ad blocking that websites cannot possibly detect, because the website can only access a fake copy of the page structures where all ads appear to be visible. The browser, on the other hand, would work with the real page structures where ads are hidden. This isn’t something the Princeton researchers have implemented yet, but they could have, right?

First of all, please note how I am saying “hidden” rather than “blocked” here — in order to fake the presence of ads on the page you have to allow the ads to download. This means that this approach won’t protect you against any privacy or security threats. But it might potentially protect your eyes and your brain without letting the websites detect ad blocker usage.

Can we know whether this approach is doable in practice? Is a blue pill for the website really possible? The Princeton researchers don’t seem to be aware of it but it has been tried before, probably on a number of occasions even. One such occasion was the history leak via the :visited CSS pseudo-class — this pseudo-class is normally used to make links the user visited before look differently from the ones they didn’t. The problem was, websites could detect such different-looking links and know which websites the user visited — there were proof-of-concept websites automatically querying a large number of links in order to extract user’s browsing history.

One of the proposals back then was having getComputedStyle() JavaScript API return wrong values to the website, so that visited and unvisited links wouldn’t be distinguishable. And if you look into the discussion in the Firefox bug, even implementing this part turned out very complicated. But it doesn’t stop here, same kind of information would leak via a large number of other APIs. In fact, it has been demonstrated that this kind of attack could be performed without any JavaScript at all, by making visited links produce a server request and evaluating these requests on the server side.

Hiding all these side-effects was deemed impossible from the very start, and the discussion instead focused on the minimal set of functionality to remove in order to prevent this kind of attack. There was a proposal allowing only same-origin links to be marked as visited. However, the final solution was to limit the CSS properties allowed in a :visited pseudo-class to those merely changing colors and nothing else. Also, the conclusion was that APIs like canvas.drawWindow(), which allowed websites to inspect the display of the page directly, would always have to stay off limits for web content. The whole process from recognizing an issue to the fix being rolled out took 8 (eight!) years. And mind you, this was an issue being addressed at the source — directly in the browser core, not from an extension.

Given this historical experience, it is naive to assume that an extension could present a fake page structure to a website without being detectable due to obvious inconsistencies. If at all, such a solution would have to be implemented deep in the browser core. I don’t think that anybody would be willing to limit functionality of the web platform for this scenario, but the solution search above was also constrained by performance considerations. If performance implications are ignored a blue pill for websites becomes doable. In fact, a fake page structure isn’t necessary and only makes things more complicated. What would be really needed is a separate layout calculation.

Here is how it would work:

  • Some built-in ad hiding mechanism would be able to mark page elements as “not for display.”
  • When displaying the page, the browser would treat such page elements as if they had a “visibility:hidden” style applied — all requests and behaviors triggered by such page elements should still happen but they shouldn’t display.
  • Whenever the page uses APIs that require access to positions (offsetTop, getBoundingClientRect etc), the browser uses a second page layout where the “not for display” flag is ignored. JavaScript APIs then produce their results based on that layout rather than the real one.
  • That second layout is necessarily calculated at the same time as the “real” one, because calculating it on demand would lead to delays that the website could detect. E.g. if the page is already visible, yet the first offsetTop access takes unusually long the website can guess that the browser just calculated a fake layout for it.

Altogether this means that the cost of the layout calculation will be doubled for every page, both in terms of CPU cycles and memory — only because at some point the web page might try to detect ad blocking. Add to this the significant complexity of the solution and the considerable maintenance cost (the approach might have to be adjusted as new APIs are added to the web platform). So I would be very surprised if any browser vendor were interested in implementing it. And let’s not forget that all this is only about ad hiding.

And that’s where we are with undetectable ad blocking: possible in theory but completely impractical.

Hacks.Mozilla.OrgFirefox 53: Quantum Compositor, Compact Themes, CSS Masks, and More

Firefox 53, available today, includes the following key new features and enhancements.

Quantum Compositor Process on Windows

One of the first pieces of Project Quantum, the Compositor Process, has arrived on Windows. Compositors are responsible for flattening all of the various elements on a webpage into a single image to be drawn on the screen. Firefox can now run its compositor in a completely separate process from the main Firefox program, which means that Firefox will keep running even if the compositor crashes—it can simply restart it.

For more details on how this aspect of Project Quantum reduces crash rates for Firefox users, check out Anthony Hughes’ blog post.

Light and Dark Compact Themes

The “compact” themes that debuted with Firefox Developer Edition are now a standard feature of Firefox. Users can find light and dark variants of this space-saving, square-tabbed theme listed under the “Themes” menu in Customize mode.

Screenshot of the new compact themes in Firefox

New WebExtension Features

WebExtensions are browser add-ons that are designed to work safely and efficiently in Firefox, Chrome, Opera, and Edge, while also supporting powerful features unique to Firefox.

In Firefox 53, WebExtensions gained compatibility with several pre-existing Chrome APIs:

  • The browsingData API lets add-ons clear the browser’s cache, cookies, history, downloads, etc. For example, Firefox’s Forget Button could now be implemented as a WebExtension (see the sketch after this list).
  • The identity API allows add-ons to request OAuth2 tokens with the consent of the user, making it easier to sign into services within an add-on.
  • The storage.sync API allows add-ons to save user preferences to Firefox Sync, where they can be shared and synchronized between devices.
  • The webRequest.onBeforeRequest API can now access the request body, in addition to headers.
  • The contextMenus API now supports adding menus to browser actions and page actions.
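
As a rough illustration (my own sketch, assuming a background script with the "browsingData" and "storage" permissions declared in manifest.json, and the promise-based browser.* WebExtension globals), here is how two of these APIs look in use:

// Assume the promise-based WebExtension API global (e.g. via webextension-polyfill typings).
declare const browser: any;

// Clear the last hour of history, cache, and cookies, similar to a "Forget" action.
async function forgetLastHour(): Promise<void> {
  const oneHourAgo = Date.now() - 60 * 60 * 1000;
  await browser.browsingData.remove(
    { since: oneHourAgo },
    { history: true, cache: true, cookies: true }
  );
}

// Persist a preference to storage.sync so it follows the user across devices.
async function saveTheme(theme: string): Promise<void> {
  await browser.storage.sync.set({ theme });
}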

Firefox 53 also supports the following unique APIs:

New CSS Features: Positioned Masks and Flow-Root

Firefox 53 supports positioned CSS Masks, which allow authors to partially or fully hide visual elements within a webpage. Masks work by overlaying images or other graphics (like linear gradients) that define which regions of an element should be visible, translucent, or transparent.

Masks can be configured to use either luminance or alpha values for occlusion. When the mode is set to luminance, white pixels in the mask correspond to fully visible pixels in the underlying element, while black pixels in the mask render that area completely transparent. The alpha mode simply uses the mask’s own opacity: transparent pixels in the mask cause transparent pixels in the element.

Many masking properties function similarly to the equivalent background-* properties. For example, mask-repeat works just like background-repeat. To learn more about the available properties, see the documentation on MDN.
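
As a hand-wavy example of my own (not from the release notes), the snippet below applies an alpha-based fade using these properties, with a feature check so browsers without positioned mask support are left untouched:

// Fade out the right edge of an element using a gradient as an alpha mask.
function fadeOutRightEdge(el: HTMLElement): void {
  if (!("CSS" in window) || !CSS.supports("mask-image", "linear-gradient(black, transparent)")) {
    return; // no unprefixed mask support; skip the effect
  }
  el.style.setProperty("mask-image", "linear-gradient(to right, black 70%, transparent)");
  el.style.setProperty("mask-mode", "alpha");       // use the gradient's alpha channel
  el.style.setProperty("mask-repeat", "no-repeat"); // behaves like background-repeat
}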

The specification also defines methods for clipping based on shapes and vector paths. Firefox 53 has partial support for clipping, and complete support is expected in Firefox 54.

Lastly, Firefox also supports the new display: flow-root value, which achieves similar results to clearfix, but using a standard CSS value instead of pseudo-elements or other hacks.
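
For instance (again a sketch of my own, assuming a "clearfix" class already exists in the page's stylesheet as a fallback), float containment could be applied like this:

// Prefer display: flow-root for containing floats; otherwise fall back to clearfix.
function containFloats(el: HTMLElement): void {
  if ("CSS" in window && CSS.supports("display", "flow-root")) {
    el.style.display = "flow-root";
  } else {
    el.classList.add("clearfix"); // hypothetical fallback class
  }
}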

A Better Default Media Experience

Alongside many other UI refinements in Firefox 53, the default <video> and <audio> controls got a new, modern look:

Screenshot of the default HTML5 video controls in Firefox 53

Additionally, Firefox 53 includes brand new anti-annoyance technology: By default, HTML5 media will not autoplay until its tab is first activated. Try it by right-clicking on this link and choosing “Open in New Tab.” Notice that the video doesn’t start until you change to that tab.

Edit: Autoplay blocking is scheduled for Firefox 54, not 53. Oops. (Bug 1308154)

64-bit Everywhere

Windows users can now select between 32-bit and 64-bit Firefox during installation:

Screenshot of the Firefox installer on Windows offering a choice of 32-bit or 64-bit

We’ve also removed support for 32-bit Firefox on macOS, and for processors older than Pentium 4 and Opteron on Linux.

More Info

To find out more about Firefox 53, check out the general Release Notes as well as Firefox 53 for Developers on MDN.

The Mozilla BlogFirefox faster and more stable with the first big bytes of Project Quantum, simpler with compact themes and permissions redesign

Today’s release of Firefox includes the first significant piece of Project Quantum, as well as various visible and under-the-hood improvements.

The Quantum Compositor speeds up Firefox and prevents graphics crashes on Windows

In case you missed our Project Quantum announcement, we’re building a next-generation browser engine that takes full advantage of modern hardware. Today we’re shipping one of the first important pieces of this effort – what we’ve referred to as the “Quantum Compositor”.

Some technical details – we’ve now extracted a core part of our browser engine (the graphics compositor) to run in a process separate from the main Firefox process. The compositor determines what you see on your screen by flattening into one image all the layers of graphics that the browser computes, kind of like how Photoshop combines layers. Because the Quantum Compositor runs on the GPU instead of the CPU, it’s super fast. And, because of occasional bugs in underlying device drivers, the graphics compositor can sometimes crash. By running the Quantum Compositor in a separate process, if it crashes, it won’t bring down all of Firefox, or even your current tab.

In testing, the Quantum Compositor reduced browser crashes by about 10%. You can learn more about our findings here. The Quantum Compositor will be enabled for about 70% of Firefox users – those on Windows 10, 8, and 7 with the Platform Update, on computers with graphics cards from Intel, NVidia, or AMD.

And if you’re wondering about the Mac – graphics compositing is already so stable on MacOS that a separate process for the compositor is not necessary.

Save screen real estate – and your eyes – with compact themes and tabs

It’s a browser’s job to get you where you want to go, and then get out of the way.

That’s why today’s release of Firefox for desktop ships with two new themes: Compact Light and Compact Dark. Compact Light shrinks the size of the browser’s user interface (the ‘chrome’) while maintaining Firefox’s default light color scheme. The Compact Dark theme inverts colors so it won’t strain your eyes, especially if you’re browsing in the dark. To turn on one of these themes, click the menu button and choose Add-ons. Then select the Appearance panel, and the theme you’d like to activate.

Firefox for Android also ships with a new setting for compact tabs. When you switch tabs, this new setting displays your tabs in two columns, instead of one, so it’s easier to switch tabs when you have several open. To activate compact tabs, go to Settings > General.

Easily control a website’s permission to access device sensors or send you notifications

In order to fully function, many websites must first get your permission to access your hardware or alert you of information. For example, video conferencing apps need to use your camera and microphone, and maps request your location so you don’t have to type it in. Similarly, news sites and social networks often ask to send you notifications of breaking stories or messages.

Today’s Firefox desktop release introduces a redesigned interface for granting and subsequently managing a website’s permissions. Now, when you visit a website that wants to access sensitive hardware or send you a notification, you’ll be prompted with a dialog box that explicitly highlights the permissions that site is requesting. If later on you would like to change a site’s permissions, just click the ‘i’ icon in the Awesome Bar.

You can learn more about the improvements to Firefox’s permissions in this post.

Lots more new

Check out the Firefox 53 release notes for a full list of what’s new, but here are a few more noteworthy items:

  • Firefox for Android is now localized in Arabic, Hebrew, Persian, and Urdu
  • Reader Mode now displays estimated reading times on both Android and desktop
  • Send tabs between desktop and mobile Firefox by right-clicking the tab
  • Firefox now uses TLS 1.3 to secure HTTPS connections

Web developers should check out the Hacks blog for more information about what’s in today’s release.

We hope you enjoy today’s release, and that you’re excited for the even bigger Quantum leaps still ahead.

The post Firefox faster and more stable with the first big bytes of Project Quantum, simpler with compact themes and permissions redesign appeared first on The Mozilla Blog.

Air MozillaWeekly SUMO Community Meeting Apr. 19, 2017

Weekly SUMO Community Meeting Apr. 19, 2017 This is the SUMO weekly call for 4/19/17. Please note: there is a known audio issue for the second half of the video.

Nathan Froydon customer service; or, how to treat bug reports

From United: Broken Culture, by Jean-Louis Gassée, writing on his time as the head of Apple France:

Over time, a customer service theorem emerged. When a customer brings a complaint, there are two tokens on the table: It’s Nothing and It’s Awful. Both tokens are always played, so whoever chooses first forces the other to grab the token that’s left. For example: Customer claims something’s wrong. I try to play down the damage: It’s Probably Nothing…are you sure you know what you’re doing? Customer, enraged at my lack of judgment and empathy, ups the ante: How are you boors still in business??

But if I take the other token first and commiserate with Customer’s complaint: This Is Awful! How could we have done something like this? Dear Customer is left with no choice, compelled to say Oh, it isn’t so bad…certainly not the end of the world.

It’s simple, it works…even in marriages, I’m told.

There’s no downside to taking the It’s Awful position. If, on further and calm investigation, the customer is revealed to be seriously wrong, you can always move to the playbook’s Upon Further Review page.

Daniel Stenbergcurl bug bounty

The curl project is driven by volunteers, with no financing at all except for a few sponsors who pay for the server hosting and for contributors to work on features and bug fixes during work hours. curl and libcurl are used widely by companies and commercial software, so a fair amount of work is done by people during paid work hours.

This said, we don’t have any money in the project. Nada. Zilch. We can’t pay bug bounties or hire people to do specific things for us. We can only ask people or companies to volunteer things or services for us.

This is not a complaint – far from it. It works really well and we have a good stream of contributions, bug reports and more. We are fortunate enough to make widely used software, which gives our project a certain impact in the world.

Bug bounty!

HackerOne coordinates a bug bounty program for flaws that affect “the Internet”, and based on previously paid out bounties, serious flaws in libcurl match that description and can be deemed worthy of bounties. For example, 3000 USD was paid for libcurl: URL request injection (the curl advisory for that flaw) and 1000 USD was paid for libcurl duphandle read out of bounds (the corresponding curl advisory).

I think more flaws in libcurl could’ve met the criteria, but I suspect I’m not the only one who hasn’t been aware of this possibility for bounties.

I was glad to find out that this bounty program pays out money for libcurl issues and I hope it will motivate people to take an extra look into the inner workings of libcurl and help us improve.

What qualifies?

The bounty program is run and administered completely outside the control or insight of the curl project itself, and I must underscore that while libcurl issues can qualify, the emphasis is on fixing vulnerabilities in Internet software that have a potentially big impact.

To qualify for this bounty, vulnerabilities must meet the following criteria:

  • Be implementation agnostic: the vulnerability is present in implementations from multiple vendors or a vendor with dominant market share. Do not send vulnerabilities that only impact a single website, product, or project.
  • Be open source: finding manifests itself in at least one popular open source project.

In addition, vulnerabilities should meet most of the following criteria:

  • Be widespread: vulnerability manifests itself across a wide range of products, or impacts a large number of end users.
  • Have critical impact: vulnerability has extreme negative consequences for the general public.
  • Be novel: vulnerability is new or unusual in an interesting way.

If your libcurl security flaw matches this, go ahead and submit your request for a bounty. If you’re at a company using libcurl at scale, consider joining that program as a bounty sponsor!

Mozilla Localization (L10N)Localizing Nightly by Default

One of our goals for 2017 is to implement a continuous localization system at Mozilla for Firefox and other projects. The idea is to expose new strings to localizers earlier and more frequently, and to ship updates to users as soon as they’re ready. I’m excited to say that we’ve arrived at one of the key milestones toward a continuous localization system: transitioning localization from Aurora to Nightly.

How can you help?

Starting April 19th, the focus for localization is going to be on Nightly.

If you are a localizer, you should install Nightly in your own language and test your localization.

If you are a member of a local community, you should start spreading the message about the importance of using Nightly to help improve localized versions of Firefox and share feedback with localizers.

If you are new to localization, and you want to help with translation tasks, check out our tools (Pontoon and Pootle), and get in touch with the contributors already working on your language.

The amount of information might be overwhelming at times; if you ever get lost, you can find help on IRC in the #l10n channel, on our mailing list, and even via Twitter @mozilla_l10n.

Firefox release channels

Mozilla has three (previously four) release channels for Firefox, each with its own dedicated purpose. There’s Nightly (built from the mozilla-central repository), Beta (mozilla-beta), and Release (mozilla-release).

  • Nightly: development of Firefox (and now localization)
  • Aurora: testing & localization (no longer available)
  • Beta: stable testing of Firefox
  • Release: global distribution of Firefox to general audience

A version of Firefox will “ride the trains” from Nightly to Beta and finally to Release, moving down the channel stream every 6-8 weeks.

With Aurora, localizers were given one cycle to localize new, unchanging content for Firefox. In fact, once moved to Aurora, code would be considered “string frozen”, and only exceptional changes to strings would be allowed to land. Any good update from localizers during that time was signed off and rode the trains for 6-12 weeks before end-users received it.

We spent the last two years asking localizers about their contribution frequency preferences. We learned that, while some preferred this 6 week cycle to translate their strings, the majority preferred to have new content to translate more frequently. We came away from this with the understanding that the thing localizers want most when it comes to their contribution frequency is freedom: freedom to localize new Firefox content whenever they choose. They also wanted the freedom to send those updated translations to end-users as early as possible, without waiting 6-12 weeks. To accommodate this desire for freedom, Axel set out to develop a plan for a continuous localization system that exposes new content to localizers early and often, as well as delivers new l10n updates to users more quickly.

Nightly localization

The first continuous localization milestone consisted of removing the sign-off obligation from localizers’ TODO lists. The second milestone consists of transitioning localization from the old Aurora channel to the Nightly channel. This transition aims to set the stage for cross-channel localization (one repository per locale with Nightly, Beta, and Release strings together) as well as satisfy the first desired freedom: to localize new Firefox content whenever localizers choose.

This is how it works:

  1. A developer lands new strings in mozilla-central for Nightly.
  2. Localization drivers (l10n-drivers) review those new strings and offer feedback to the dev where needed.
  3. Every 2-3 days, localization drivers update a special clone of mozilla-central used by localization tools.
  4. Pootle & Pontoon detect when new strings have been added to this special repository and pull them into their translation environments automatically.
  5. When a new l10n update is made, Pootle & Pontoon push the change into the locale’s Nightly repository.
  6. Localization drivers review all new updates into l10n Nightly repositories and sign off on all good updates.
  7. Good updates are flagged for shipping to Release users when the version of Firefox “rides the trains” to Release.

Localizing on Nightly offers localizers a few benefits:

  1. Localizers are exposed to new strings earlier for l10n, making it easier for developers to make corrections to en-US strings when localizers report errors.
  2. Localizers have the freedom to localize whenever new strings land (every 2-3 days) or to define their own cadence (every 2 weeks, 4 weeks, 8 weeks, etc.).
  3. Without Aurora, new localization updates get to end-users in Release faster.

The next continuous localization milestone is to implement cross-channel localization. Cross-channel will satisfy the second desired freedom: delivering translation updates to end-users faster. It will also drastically simplify the localization process, allowing localizers to land fixes once and ship them in all versions of Firefox. If you’d like to follow the work related to cross-channel, you can find it here on GitHub. We expect cross-channel to be ready before June 2017.

Alex GibsonMy fourth year working at Mozilla

Mozilla staff photo from All-Hands event in Hawaii, December 2016

This week marks my 4th Mozillaversary! As usual, I try to put together a short post to recap some of the things that happened during the past year. It feels like the things I have to talk about this time around are slightly more process-heavy than previous years’ efforts, but gladly there’s some good work in there too. Here goes!

Our team grew

Our functional team grew over the past year which is really great to see. We now manage the development and infrastructure for both www.mozilla.org and MDN. The idea is that having both teams more closely aligned will lead to increased sharing of knowledge and skills, as well as standardization on common tools, libraries, infra, deployment and testing. It’s great to have some more talented people on the team, hooray!

Are we agile yet?

While most of my day-to-day work is still spent tending to the needs of www.mozilla.org, a lot has changed in the last year with regard to how our development team manages work processes. The larger marketing organization at Mozilla has switched to a new agile sprint model, with dedicated durable teams for each focus area. While I think this is a good move for the marketing org as a whole, it has also been a struggle for many teams to adjust (the mozorg team included). While two week sprints can work well for product focused teams, a website such as mozorg can be quite a different beast; with multiple stakeholders, moving parts, technical debt, and often rapidly shifting priorities. It is also an open source project, with real contributors. We’re still experimenting with trying to make this new process fit the needs of our project, but I do wonder if we’ll slowly creep back to Kanban (our previous methodology) during the course of the next year. Let’s wait and see ;)

Contributions and other stats

Here are the usual stats from the past year:

  • I made over 166 commits to bedrock this past year (down from 269 commits last year).
  • I have now filed over 424 bugs on Bugzilla, been assigned over 474 bugs and made over 3967 comments.
  • I cycled over 1657 miles on my lunch breaks (one of my personal goals this past year was to become more healthy!).

Now, the number of commits to bedrock isn’t always a good representation of the level of work that occurred during the year. I did work on some large, far-reaching bugs which took a lot of time and effort. But it does make me wonder if our new sprint process is actually less productive overall. Are all those smaller bugs being left unattended for longer? Would we still have been hitting our high-level goals doing Kanban? It’s hard to quantify, but there’s some food for thought here.

Firefox Download Pages

The main Firefox download page is one of the highest-traffic pages on mozorg, so it’s naturally something we pay close attention to when making changes. This year we experimented on the page a lot. It got redesigned no less than three times, and was continually tweaked over the course of multiple A/B tests. Lots of scrutiny goes into every change, especially in relation to page weight, loading time, and the impact that can have on download conversions. Ultimately what used to be a relatively plain looking page turned into something quite beautiful.

Redesigned Firefox download page

We also experimented with things like making the sun rise over the horizon, but sadly this proved to be a bit too much of a distraction for some visitors. Nevertheless, kudos to our design team for the beautiful visuals. It was quite fun to work on :)

Firefox Stub Attribution

Another notable feature I spent time on was adding support to bedrock for tracking campaign referral data, and passing that along to the Firefox Stub Installer for profiling in Telemetry. The idea is that the Firefox Retention Team can look at data in Telemetry and try to attribute specific changes in retention (how long users actively use the product) to downloads triggered by specific referral sources or media campaigns. This work required coordination with multiple engineering teams within Mozilla, and took considerable time to test and gradually roll out. We’re still crunching the data and hope it can provide some useful insights going forward.

SHA-1 Bouncer Support

Firefox 52 marked the end of SHA-1 certificate support on the Web. In order to continue serving downloads to users, we had to switch Bouncer to SHA-2 only, and then set up a SHA-1 mirror to continue supporting users on Windows XP/Vista. This required modifying our download button logic in bedrock (something I was once a bit scared of doing) to provide SHA-1 specific links that get shown only to the users who need it. Once XP/Vista are officially no longer supported by Firefox ESR we can remove this logic.

Mozilla Global Navigation

As part of Mozilla’s new branding rollout, I also got to build the first prototype of the new global navigation for mozorg. We’re still iterating and refining how it works and performs, but the aim is that one day it can be used across many Mozilla web properties. I’m hopeful it may help to solve some of the information architecture issues we’ve faced on mozorg in recent years.

All-hands and travel

Photo of me in the crater of a volcano!

Mozilla’s All-Hands events are always pretty amazing. This time they happened in London and Hawaii. While London wasn’t really high on the excitement levels, it was nice to get to welcome all my colleagues to the UK. Hawaii was naturally the real highlight for me, especially because I got to go visit a real, live volcano! In between all that I also got to pay my second visit to the Mozilla Toronto office, almost exactly 4 years since my last visit (which was my very first week working for Mozilla!).

Armen ZambranoDocker image to generate allthethings.json

I've created a lot of hackery in the past (mozci) based on Release Engineering's allthethings.json file, as well as improving the code to generate it reliably. This file contains metadata about the Buildbot setup and the relationship between builders (a build triggers these tests).

Now, I have never spent time ensuring that the setup to generate the file is reproducible. As I've moved through laptops over time, I've needed to modify the script that generates the file to fit my new machine's setup.

Today I'm happy to announce that I've published a Docker image at Docker hub to help you generate this file anytime you want.

You can find the documentation and code in here.

Please give it a try and let me know if you find any issues!
docker pull armenzg/releng_buildbot_docker
docker run --name allthethings --rm -i -t armenzg/releng_buildbot_docker bash
# This will generate an allthethings.json file; it will take a few minutes
/braindump/community/generate_allthethings_json.sh
# On another tab (once the script is done)
docker cp allthethings:/root/.mozilla/releng/repos/buildbot-configs/allthethings.json .


Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Gervase MarkhamMOSS End-of-Award Report: Mio

We are starting to ask MOSS project awardees to write an end-of-award report detailing what happened. Here’s one written a few months ago by the Mio project (Carl Lerche).

Bruce Van DykMigrating From LastPass to KeePass

I've recently been trying out KeePass 2 as an alternative to LastPass. In this post I'm going to go briefly into why I made the switch, and detail how you can do so with a fairly minimal amount of pain. If you're just interested in how to migrate, you can skip straight to the how section.

Why

The two major reasons I'm trying something else are security and performance. That said: I think any password manager is much better than the alternative of manually managing passwords. I also think LastPass is pretty good; I've used it historically because I like it. As for why I'm trying something else, these reasons will apply to pretty much any browser-extension-based password manager.

Security

Password managers running in the browser have an attack surface which includes JavaScript and the DOM. This doesn't mean these managers are busted, but it makes the job of securing them that much harder. For example, LastPass has recently had some issues with its browser extension brought to light: see here and here. That said, they have swiftly dealt with the vulnerabilities raised, which is a great thing.

LastPass is in good company here: Project Zero has turned up issues with other password manager browser extensions such as 1Password and Dashlane. These issues too have been fixed, but they can exist in the first place because of the design of these extension-based managers.

KeePass doesn't integrate into browsers (though it has plugins that do so). In switching, I'm hoping to guard myself against vulnerabilities such as those above. I'm going to lose out on things like autofill, but at this stage that's a trade-off I'm at least willing to try out.

Performance

I've found the LastPass extension to be a bit of a performance hog. In Firefox I would often run into jank when using IRCCloud (a web-based IRC client) with the LastPass add-on installed. There's a Bugzilla bug on it here. Aside from specific cases like this, LastPass also adds an inherent overhead which I'm not sure I'm cool with.

Obviously these programs need to use some resources to run. However, extension-based managers can end up doing quite a lot, some of which I didn't expect. For example, some of these extensions will parse the DOM to try and find places to insert icons (click me to fill passwords) or autofill; if you're dealing with large DOMs this can take seconds. That may not sound like a lot, but it gets old fast when you get multi-second lockups regularly.

Other Nice Things

  • Free: Password managers don't cost a ton, and most have a free version with limitations, but KeePass being free is nice.
  • Open source.
  • You control your own password database. This is a bit of a double edged sword, as you're now responsible for the safety of said database, but it does mean the data is in your hands.

How

Migrating to KeePass 2 is made pretty straightforward by the ability to export and import your password database.

Exporting from LastPass

We're gonna start off by exporting our LastPass passwords to a comma separated value (.csv) file. To do this, navigate to LastPass -> More Options -> Advanced -> Export -> LastPass CSV File. Save this file somewhere safe, and make sure no evil hackers get their hands on it, as it contains clear-text passwords.

Importing to KeePass

Once you have KeePass installed, you can import the csv file from above via KeePass -> File Menu -> Import. You will then see a prompt; select "LastPass CSV" and choose the file you exported above. Voila, your passwords are now imported. Now is also a good point to delete your .csv file from earlier, so your passwords aren't lying around.

Using KeePass

By this stage you should be all set. You'll find KeePass is a bit of a different beast than your extension-based managers. The following sections detail useful bits and bobs I've found helpful after switching to KeePass.

Hot Keys

KeePass has a number of hotkeys which I've found useful since switching:

Settings

KeePass has a lot of configuration you can tweak under Tools -> Options. Timeouts can be set in Tools -> Options -> Security, so that after you haven't used KeePass and/or your system for some time KeePass will require your password again. Also worth a look are the interface settings under Tools -> Options -> Interface. There's a lot of customization available here to cater to your personal preference.

Using Syncthing to Sync Password Databases

Syncthing Logo

I use Syncthing to sync my password database between computers. Syncthing is a nifty open source utility for syncing data between devices. Key points:

  • No centralised storage. If you're attracted to KeePass because it allows you control over your password database, Syncthing also provides this benefit compared to other cloud storage.
  • Data is sent encrypted.
  • Free (like beer and speech)!

On my Windows machines I run SyncTrayzor, and on Linux I use the web interface that comes with the baseline Syncthing. If you're looking for a way to sync your password DB, I'd certainly recommend giving it a look!

This Week In RustThis Week in Rust 178

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

Sadly, for lack of nominations we have no Crate of this Week.

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

100 pull requests were merged in the last week.

New Contributors

  • Aaron Hill
  • alexey zabelin
  • nate
  • Nathaniel Ringo
  • Scott McMurray
  • Suchith J N

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Style RFCs

Style RFCs are part of the process for deciding on style guidelines for the Rust community and defaults for Rustfmt. The process is similar to the RFC process, but we try to reach rough consensus on issues (including a final comment period) before progressing to PRs. Just like the RFC process, all users are welcome to comment and submit RFCs. If you want to help decide what Rust code should look like, come get involved!

We're making good progress and the style is coming together. If you want to see the style in practice, check out our example or use the Integer32 Playground and select 'Proposed RFC' from the 'Format' menu. Be aware that implementation is work in progress.

Issues in final comment period:

Good first issues:

We're happy to mentor these; please reach out to us in #rust-style if you'd like to get involved.

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

No jobs listed for this week.

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Rust doesn't end unsafety, it just builds a strong, high-visibility fence around it, with warning signs on the one gate to get inside. As opposed to C's approach, which was to have a sign on the periphery reading "lol good luck".

Quxxy on reddit.

Thanks to msiemens for the suggestion.

Submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and brson.

About:CommunityFirefox 53 new contributors

With the release of Firefox 53, we are pleased to welcome the 63 developers who contributed their first code change to Firefox in this release, 58 of whom were brand new volunteers! Please join us in thanking each of these diligent and enthusiastic individuals, and take a look at their contributions:

Air MozillaMozilla and Stanford Law Panel on Intellectual Property Law and the First Amendment

Mozilla and Stanford Law Panel on Intellectual Property Law and the First Amendment Join us for a Mozilla and Stanford Program in Law, Science & Technology hosted panel series about the intersection between intellectual property law and the...

Mozilla Addons BlogAdd-ons Update – 2017/04

Here’s the state of the add-ons world this month.

The Road to Firefox 57 (recently updated) explains what developers should look forward to in regards to add-on compatibility for the rest of the year. Please give it a read if you haven’t already.

The Review Queues

In the past month, 1,209 listed add-on submissions were reviewed:

  • 984 (81%) were reviewed in fewer than 5 days.
  • 31 (3%) were reviewed between 5 and 10 days.
  • 194 (16%) were reviewed after more than 10 days.

There are 821 listed add-ons awaiting review.

If you’re an add-on developer and are looking for contribution opportunities, please consider joining us. Add-on reviewers are critical for our success, and can earn cool gear for their work. Visit our wiki page for more information.

Compatibility

The blog post for 53 is up and the bulk validation was run. Here’s the post for Firefox 54 and the bulk validation is pending.

Multiprocess Firefox is enabled for some users, and will be deployed for most users very soon. Make sure you’ve tested your add-on and either use WebExtensions or set the multiprocess compatible flag in your add-on manifest.

As always, we recommend that you test your add-ons on Beta to make sure that they continue to work correctly. You may also want to review the post about upcoming changes to the Developer Edition channel.

End users can install the Add-on Compatibility Reporter to identify and report any add-ons that aren’t working anymore.

Recognition

We would like to thank the following people for their recent contributions to the add-ons world:

  • bkzhang
  • Aayush Sanghavi
  • saintsebastian
  • Thomas Wisniewski
  • Michael Kohler
  • Martin Giger
  • Andre Garzia
  • jxpx777
  • wildsky

You can read more about their work in our recognition page.

The post Add-ons Update – 2017/04 appeared first on Mozilla Add-ons Blog.

Jennie Rose HalperinHello world!

Welcome to WordPress. This is your first post. Edit or delete it, then start writing!

Hacks.Mozilla.OrgSimplifying Firefox Release Channels and Improving Developer Edition’s Stability

Streamlining our release process and quickly getting stable new features to users and developers is a priority for Firefox. When we took a close, critical look at our release channels, it became clear that Aurora was not meeting our expectations as a first stabilization channel.

Starting on April 18, the Firefox Aurora channel will stop updating, and over the course of the next several months, the Aurora build will be removed from the train release cycle. Developer Edition will be based on the Beta build. Developer Edition users will maintain their Developer Edition themes, tools, and preferences, will keep their existing profile, and should not experience any disruption.

This change benefits developers in several ways:

  • Clearer choices in pre-release channels: Nightly for experimental features and Developer Edition/Beta for stability.
  • Higher quality and more stable environment for Developer Edition users.
  • Faster release cycles for platform features. (Benefits everyone!)

Here’s the timeline: On April 18, code for Firefox 54 will move from Aurora to Beta as usual, while Firefox 55 will remain on Nightly for a second cycle in a row (a total of 14 weeks). On the next merge day, June 12, Firefox 55 will move directly from Nightly to Beta. Between April and June, Firefox Aurora on Desktop (54) will continue to receive updates for critical security issues and the Aurora and Developer Edition populations will be migrated to the Beta update channel. On Android, Aurora users will be migrated to Nightly.

Aurora was originally created in 2011 to provide more user feedback after Firefox shifted to the rapid release cycle with version 5. Today, in 2017, we have more modern processes underlying our train model, and believe we can deliver feature-rich, stable products without the additional 6-8 week Aurora phase.

A staged rollout mechanism, similar to what we do today with Release, will be used for the first weeks of Beta. Our engineering and release workflow will continue to have additional checks and balances rolled out to ensure we ship a high quality release. A new feature will merge from Nightly to Beta only when it’s deemed ready, based on preestablished criteria determined by our engineering, product and product integrity teams. If features are not ready, they won’t migrate from Nightly to Beta.

New tools and processes will include:

  • Static analyzers integrated as part of the workflow, in order to detect issues during the review phase. They will be able to identify potential defects while minimizing technical debt.
  • Code coverage results will be used to analyze the quality of the test-suite and the risk introduced by the change.
  • The ability to identify potential risks carried by changes before they even land by correlating various data sources (VCS, Bugzilla, etc.) in order to identify functions where a modification is more likely to induce a regression.
  • Monitoring crash rates, QE’s sign offs, telemetry data and new regressions to determine overall Nightly quality and feature readiness to merge to Beta.

For a deeper dive into transition details, please see the Mozilla Release Management blog for in-depth answers to the most common questions about this change.

Cameron Kaiser45.9.0 available

TenFourFox 45.9.0 is now available for testing (downloads, hashes, release notes), a bit behind due to Mozilla delaying this release until the Wednesday and my temporary inability to get connected at our extended stay apartment. The only changes in this release from the beta are some additional tweaks to JavaScript and additional expansion of the font block list. Please test; this build will go live Tuesday "sometime."

The next step is then to overlay the NSPR from 52 onto 45.9, overlay our final stack of changesets, and upload that as the start of FPR1 and our Github repository. We can then finally retire the changesets and let them ride off into the sunset. Watch for that in a couple weeks along with new build instructions.

Mozilla Release Management TeamDawn project or the end of Aurora

As described in the post on the Hacks blog, we are changing the release mechanism of Firefox.

What

In order to address the complexity and cycle length issues, the release management team, in coordination with Firefox product management and engineering, is going to remove the Aurora stabilization phase from the cycle.

When

On April 18th, Firefox 55 will remain on Nightly. This means Firefox 55 will remain on Nightly for two full cycles. On June 13th, Firefox 55 will migrate directly from Nightly to Beta.

Why

As originally intended, Aurora was to be the first stabilization channel having a user base 10x the size of Nightly so as to provide additional user feedback. This original intent never materialized.

The release cycle time has required that we subvert the model regularly over the years by uplifting new features to meet market requirements.

How

The stabilization cycle from Nightly to Release will be shortened by 6-8 weeks.

A staged rollout mechanism, similar to what we do today with Release, will be used for the first weeks of Beta.

Our engineering and release workflow will continue to have additional checks and balances rolled out to ensure we ship a high quality release.

We will focus on finding and fixing regressions during the Nightly cycle and alleviate time pressure to ship to reduce the 400-600 patches currently uplifted to Aurora.

A new feature will merge from Nightly to Beta only when it's deemed ready, based on pre-established criteria determined by engineering, product, and product integrity.

Tooling such as static analysis, linters, and code coverage will be integrated into the development process.

Dawn planning

FAQ

What will happen to the Aurora population on Desktop?

The Aurora population will be migrated to the Beta update channel in April 2017. We plan to keep them on a separate “pre-beta” update channel as compared to the rest of the Beta population. We will use this pre-beta audience to test and improve the stability and quality of initial Beta builds until we are ready to push to 100% of beta population. Because we presented Aurora as a stable product in the past, the beta channel is the closest in terms of stability and quality.

From the next merge (April 18th), users running 54 Aurora will remain on the Aurora channel but updates will be turned off. In case of critical security issues, we might push new updates to these aurora channel users. Aurora channel users will be migrated to Beta channel in April ‘17. For this to happen, we need to make sure that the Developer Edition features are working the same way on the Beta update channel (theme, profile, etc).

What will happen to the Aurora population on Android?

Because Google Play doesn't allow migrating a population from one application to another, the Fennec population on Aurora will be migrated to the Nightly application. For now, we are planning to reuse the current Google Play Aurora application and replace it with Nightly to preserve the current population.

Why are we taking different approaches with the Desktop and Android Aurora populations?

Aurora channel on Desktop has been around for a long time and has a substantial end-user base that Beta channel will benefit from.

Fennec Aurora on Google Play is a recent addition and we believe merging this audience with Nightly makes more sense. It also simplifies implementation.

I am running Developer Edition, what will happen to me?

Developer Edition, currently based off Aurora, will be updated to get builds from the Beta branch. There is nothing Developer Edition users need to do, they will update automatically to the Beta build keeping the Developer Edition themes, tools, and preferences as well as the existing profile.

Will I still be able to test add-ons with Developer Edition?

You can continue to test unsigned add-ons on Nightly builds or load WebExtensions temporarily in Beta and Release builds.

We are also continuing to provide unbranded builds of the beta and release branches which are able to run unsigned add-ons, including bootstrapped ones, for development and experimentation. These versions will not be verified by QE, but will receive updates, which is an improvement over the unbranded builds we currently provide for add-on development.

The majority of Developer Edition users won't experience any disruption. However those developers who rely on unsigned add-ons will need to use Nightly builds until we have finalized the unsigned add-on builds specifically for those developers.

How will you mitigate the quality risk from cutting 6-8 weeks of stabilization from the cycle?

Instead of pushing to 100% of the beta population at once, we will use a staged rollout mechanism to push to a subset of the beta population. For the first phase, we will be pushing to the former Aurora population. As a second phase, we will be targeting specific populations (operating system, graphics card, etc.).

In parallel, QE will also do preliminary Nightly sign-offs to detect new potential issues early. Release management will be much more aggressive in terms of feature deactivation.

Last but not least, the Aurora cycle was used to finalize some features. Instead, feature stabilization will be performed during the Nightly cycle.

What are we doing to improve Nightly quality?

A few initiatives will help improve the overall quality of Nightly.

Nightly merge criteria

New end-user facing features landing in Nightly builds should meet Beta-readiness criteria before they can be pushed to Beta channel.

Static analyzers

In order to detect issues at the review phase, static analyzers will be integrated as part of the workflow. They will be able to identify potential defects and also limit technical debt.

Code coverage

Code coverage results are going to be used to analyze the quality of the test suite and the risk introduced by a change.

Risk assessment

By correlating various data sources (VCS, Bugzilla, etc.), we believe we can identify the potential risks carried by changes before they even land. The idea is to identify the functions where a modification is more likely to induce a regression.

How often will Beta builds be updated?

We will continue to push two Beta builds for Desktop and one Fennec build each week of the Beta cycle.

Will Developer Edition continue to have a separate profile?

Yes. The Developer Edition separate profile feature is a requirement for transition. If for whatever reason this feature cannot be completed by the end of the year we will need to return to creating rebuilds of Developer Edition as previously done to ensure those users are not cast away.

What will happen to the Aurora branch after Firefox 54 moves to Beta?

Updates on aurora channel will be disabled on April 18th. The desktop and Android aurora populations will be migrated as described above.

What criteria will be used to assess feature readiness to move to Beta?

We will be monitoring crash rates, QE's sign offs, telemetry data and new regressions to determine overall Nightly quality and feature readiness to merge to Beta.

How and who will determine whether a feature is ready to move to Beta?

End-user facing features will be reviewed for beta-readiness before they are pushed to Beta channel. Following is a list of criteria that will be used to evaluate feature readiness to merge to Beta:

  • No significant stability Issues
  • Missing Test Plans
  • Insufficient Testing
  • Feature is not Code Complete
  • Too Many Open Bugs

More detailed criteria defined in this document.

Are there any changes to Release or ESR channel?

No changes are planned for Release or ESR channel users.

Does this change how frequently we push mainline builds to Release channel?

No, but changes added in Nightly can make it into a Release build about 6-8 weeks sooner than they do now.

What will happen for l10n process when we remove Aurora?

Focus for localization will move from mozilla-aurora to mozilla-central. Localization tools (Pootle and Pontoon) will read en-US strings from a special mozilla-central clone: l10n-drivers will review patches with strings landing in the official mozilla-central repository, provide feedback to devs if necessary, and land updates every 2-3 days in this special repository. Localized content will be pushed to l10n-central repositories.

There are no changes for developers working on Firefox: Nightly and mozilla-central remain open to string changes, including the extra six weeks that Firefox 55 will spend in Nightly, while Beta is still considered string frozen, and requests to uplift changes affecting strings are evaluated case by case.

Users interested in helping with localization should download Nightly in their language.

What will happen for l10n process by the end of year?

For Firefox and Firefox for Android we will shift to a model with a single repository for all channels for each locale. This change will be reflected in localization tools, allowing localizers to make a change to a string and see that update applied across all channels at once.

How does Dawn impact engineering planning for landing features?

The biggest shift is that features will have to be completed before merge day. Developers will not be able to finalize feature development during the next branch cycle (as Aurora is used currently). See also “How and who will determine whether a feature is ready to move to Beta?”.

How will bug fixes and features not tracked by project management be impacted by Dawn?

Landing bug fixes in Nightly repository continues as before. Development on features that are not directly end-user visible and not tracked by EPMs, release management continues as before.

If Nightly quality and stability is negatively impacted by these untracked features or bug fixes, we will discuss potential mitigation options such as: back outs, stabilizing quality issues before continuing new feature development work, delaying Merge date, imposing code freeze in Nightly until blocking issues are resolved, etc.

What will happen to the diagnostic assert?

MOZ_DIAGNOSTIC_ASSERT will be enabled during the first part of the beta cycle. It will be automatically disabled when EARLY_BETA_OR_EARLIER is no longer defined.

The Servo BlogThis Week In Servo 98

In the last week, we landed 127 PRs in the Servo organization’s repositories.

We started publishing Windows nightly builds on download.servo.org. Please test them out and file issues about things that don’t work right!

Planning and Status

Our overall roadmap is available online, including the overall plans for 2017. Q2 plans will appear soon; please check it out and provide feedback!

This week’s status updates are here.

Notable Additions

  • jdm fixed an assertion failure when loading multiple stylesheets from the same <link> element.
  • mckaymatt made line numbers correct in errors reported for inline stylesheets.
  • canaltinova implemented support for the shape-outside CSS property in Stylo.
  • waffles removed much of the code duplication for CSS parsing and serialization of basic shapes.
  • nox preserved out of bounds values when parsing calc() expressions.
  • Manishearth implemented MathML presentation hints for Stylo.
  • bholley improved performance of the style system by caching runtime preferences instead of querying them.
  • ferjm added an option to unminify JS and store it to disk for easier web compatibility investigations.
  • tiktakk converted a recursive algorithm to an iterative one for complex selectors.
  • emilio fixed some bugs that occurred when parsing media queries.
  • Manishearth implemented queries for font metrics during restyling.
  • jryans added support for @page rules to Stylo.
  • UK992 allowed Servo to build with MSVC 2017.
  • MortimerGoro implemented the Gamepad API.
  • jdm corrected an assertion failure when using text-overflow: ellipsis.
  • tomhoule refactored the style system types to preserve more specified values.
  • jonathandturner worked around the mysterious missing key events on Windows.
  • charlesvdv improved the handling of non-ascii characters in text inputs.
  • clementmiao added common keyboard shortcuts for text inputs.
  • manuel-woelker implemented support for Level 4 RGB and HSL CSS syntax.

New Contributors

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!

Karl Dubost[worklog] Edition 063. Spring is here

webcompat life

  • Some issues take a lot longer to analyze and understand than it seems at the start.

webcompat issues

webcompat.com dev

Otsukare!

Ehsan AkhgariQuantum Flow Engineering Newsletter #5

Another week full of performance related updates quickly went by, I’d like to share a few of them.
We’re almost mid-April, about 3 weeks after I shared my first update on our progress battling our sync IPC issues.  I have prepared a second Sync IPC Report for 2017-04-13.  For those who looked at the previous report, this is in the same spreadsheet, and the data is next to the previous report, for easy comparison.  We have made a lot of great progress fixing some of the really bad synchronous IPC issues in the recent few weeks, and even though telemetry data is laggy, we are starting to see this reflect in the data coming in through telemetry!  Here is a human readable summary of where we are now:
  • PCookieService::Msg_GetCookieString is still at the top of the list, now taking a whopping 45% piece of the pie chart!  I don’t think there is any reason to believe that this has gotten particularly worse, it’s just that we’re starting to get better at not doing synchronous IPC, so this is standing out even more now!  But its days are numbered.  🙂
  • PContent::Msg_RpcMessage and PBrowser::Msg_RpcMessage at 19%.  We still need to get better data about the sync IPC triggered from JS, that shows up in this data under one of these buckets.
  • PJavaScript::Msg_Get at 5% (CPOW overhead) could be caused by add-ons that aren’t e10s compatible.
  • PAPZCTreeManager::Msg_ReceiveMouseInputEvent.  This one (and a few other smaller APZ related ones) tends to have really low mean values, but super high count values which is why they tend to show high on this list, but they aren’t necessarily too terrible compared to the rest of our sync IPC issues.
  • PVRManager::Msg_GetSensorState also relatively low mean values but could be slightly worse.
  • PJavaScript::Msg_CallOrConstruct, more CPOW overhead.
  • PContent::Msg_SyncMessage, more JS triggered sync IPC.
A few items further down on the list are either being worked on or recently fixed as well.  I expect this to keep improving over the next few weeks.  It is really great to see this progress, thanks to everyone who has worked on fixing these issues, helping with the diagnoses, code reviews, etc.
We have also been working hard at triaging performance related bug reports.  In order to keep an eye over the bug-to-bug status of project you can use the Bugzilla queries on the wiki.  As of this moment, we have triaged 160 bugs as [qf:p1] (which means, these performance related bugs are the ones we believe should be fixed now for the Firefox 57 release).  Of these bugs, 92 bugs are unassigned right now.  If you see a bug on this list in your area of expertise which you think you can help with, please consider picking it up.  We really appreciate your help.  Please remember that not every bug on this list is complicated to fix, and there’s everything from major architectural changes to simple one-liner fixes up for grabs.  🙂
Another really nice effort that is starting to unfold and I’m super excited about is the new Photon performance project, which is a focused effort on the front-end performance.  This includes everything from engineering the new UI with things like animations running on the compositor in mind from the get-go, being laser focused on guaranteeing good performance on key UI interactions such as tab opening and closing, and lots of focused measurements and fixes to the browser front-end.
The performance story of this week is about how measurement tools can distort our vision.  This one isn’t much of a story; it’s more of a lesson that I seem to be learning over and over again these days.  You may have heard of the measurement problem, which basically amounts to the fact that you always change what you measure.  Markus and I were recently talking about the cost of style flushes for browser.xul that I had seen in my profiles, and how they could sometimes be expensive, and we noticed that this may be due to the profiler overhead we incur in order to show information about the cause of the restyle in the profile UI.  He has since fixed the issue.  I think the reason I didn’t catch this in my own profiling is that I have gotten so used to seeing expensive reflows and restyles that I sometimes accept them as a fact of life and don’t look under the hood closely enough.  Lesson learned!
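For a toy illustration of the effect (this is not how the Gecko Profiler records restyle causes, just the general principle that per-event bookkeeping has a price), consider timing a hot loop with and without instrumentation:

```python
# Toy illustration of the measurement problem: adding per-iteration bookkeeping
# to a hot loop changes what you are measuring.  This is NOT how the Gecko
# Profiler works; it only demonstrates that instrumentation overhead is real.
import time

def hot_loop(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

def hot_loop_instrumented(n, log):
    total = 0
    for i in range(n):
        start = time.perf_counter()   # per-iteration bookkeeping, analogous to
        total += i * i                # recording the cause of every restyle
        log.append(time.perf_counter() - start)
    return total

def timed(fn, *args):
    start = time.perf_counter()
    fn(*args)
    return time.perf_counter() - start

plain = timed(hot_loop, 1_000_000)
instrumented = timed(hot_loop_instrumented, 1_000_000, [])
print(f"plain: {plain:.3f}s  instrumented: {instrumented:.3f}s "
      f"(~{instrumented / plain:.1f}x slower)")
```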
We have a bug tracking these types of issues, so if you know of something similar, please create a dependency.  If you profile Firefox regularly using the Gecko Profiler, adding yourself to the CC list of that bug may not be a bad idea.
Now it’s time to acknowledge those who have helped make Firefox faster in the past week.  I will probably forget a few people here; apologies for any unintended omissions!
Until next week, happy hacking!

Mozilla Open Policy & Advocacy Blog: Should Patent Law Be a First Amendment Issue?

On Monday April 17th, Mozilla and Stanford Law are presenting a panel about intellectual property law and the First Amendment.

We’ll talk about how IP law and the First Amendment intersect in IP disputes, eligibility tests, and the balance of interests between patent holders and users.

Judge Mayer’s concurring opinion last year in Intellectual Ventures I LLC v. Symantec Corp. has put the debate over the First Amendment and the boundaries of patent protection back in the spotlight.

Our all-star panel will discuss both sides of the debate.

Panelists

  • Dan Burk, Professor of Law at UC Irvine School of Law.
  • Sandra Park, Senior Staff Attorney for the ACLU Women’s Rights Project.
  • Robert Sachs, a partner at Fenwick & West LLP, a leading intellectual property law firm.
  • Wendy Seltzer, Strategy Lead and Policy Counsel for the World Wide Web Consortium.

Elvin Lee, Product and Commercial Counsel at Mozilla, will moderate the event.

We’ll also hear opening remarks from Professor Mark A. Lemley, who serves as the Director of the Stanford Program in Law, Science and Technology.

Topics and questions we’ll cover

  • Does patent law create conflicts with the First Amendment?
  • Do the subject-matter eligibility tests created by the Supreme Court (e.g., Alice) mitigate or impact any potential First Amendment issues?
  • How does the First Amendment’s intersection with patent law compare to other IP and regulatory contexts?
  • What are the different competing interests for IP owners and creators?
  • Registration of ‘offensive’ marks is currently being reviewed in light of the First Amendment. Are there any parallels to the grant of patent protection by the USPTO, or subsequent enforcement?

Watch

AirMozilla and Mozilla’s Facebook page will carry the livestream for this event. We hope you’ll tune in.

The post Should Patent Law Be a First Amendment Issue? appeared first on Open Policy & Advocacy.

Mozilla Addons Blog: Apply to Join the AMO Feature Board

Help people discover add-ons that make this browser do glorious things.

Do you have an eye for awesome add-ons? Can you distinguish a decent ad blocker from a stellar one? Interested in making a huge impact for millions of Firefox users? If so, please consider applying to join AMO’s Feature Board.

The board is composed of a small group of community contributors who help select each month’s new featured add-ons. Every board serves for six months, and then a new group of community curators takes over. Now the time has come to assemble a new group of talented contributors.

Anyone from the add-ons community is welcome to apply: power users, theme designers, developers, and evangelists. Priority will be given to applicants who have not served on the board before, followed by those from previous boards, and finally those from the outgoing board.

This page provides more information on the duties of a board member. To be considered, please email us at amo-featured [at] mozilla [dot] org and tell us how you’re involved with AMO and why you think you’d make a strong content curator. The deadline for applications is Friday, April 28, 2017 at 23:59 PDT. The new board will be announced shortly thereafter.

We look forward to hearing from you!

The post Apply to Join the AMO Feature Board appeared first on Mozilla Add-ons Blog.