Botond Ballo: Trip Report: C++ Standards Meeting in Belfast, November 2019

Summary / TL;DR

Project | What’s in it? | Status
C++20 | See below | On track
Library Fundamentals TS v3 | See below | Under development
Concepts | Constrained templates | In C++20
Parallelism TS v2 | Task blocks, library vector types and algorithms, and more | Published!
Executors | Abstraction for where/how code runs in a concurrent context | Targeting C++23
Concurrency TS v2 | See below | Under active development
Networking TS | Sockets library based on Boost.ASIO | Published! Not in C++20.
Ranges | Range-based algorithms and views | In C++20
Coroutines | Resumable functions (generators, tasks, etc.) | In C++20
Modules | A component system to supersede the textual header file inclusion model | In C++20
Numerics TS | Various numerical facilities | Under active development
C++ Ecosystem TR | Guidance for build systems and other tools for dealing with Modules | Under active development
Contracts | Preconditions, postconditions, and assertions | Under active development
Pattern matching | A match-like facility for C++ | Under active development
Reflection TS | Static code reflection mechanisms | Publication imminent
Reflection v2 | A value-based constexpr formulation of the Reflection TS facilities | Under active development
Metaclasses | Next-generation reflection facilities | Early development

A few links in this blog post may not resolve until the committee’s post-meeting mailing is published (expected within a few days of November 25, 2019). If you encounter such a link, please check back in a few days.

Introduction

Last week I attended a meeting of the ISO C++ Standards Committee (also known as WG21) in Belfast, Northern Ireland. This was the third and last committee meeting in 2019; you can find my reports on preceding meetings here (July 2019, Cologne) and here (February 2019, Kona), and previous ones linked from those. These reports, particularly the Cologne one, provide useful context for this post.

At the last meeting, the committee approved and published the C++20 Committee Draft (CD), a feature-complete draft of the C++20 standard which includes wording for all of the new features we plan to ship in C++20. The CD was then sent out to national standards bodies for a formal ISO ballot, where they have the opportunity to file technical comments on it, called “NB (national body) comments”.

We have 10-15 national standards bodies actively participating in C++ standardization, and together they have filed several hundred comments on the CD. This meeting in Belfast was the first of two ballot resolution meetings, where the committee processes the NB comments and approves any changes to the C++20 working draft needed to address them. At the end of the next meeting, a revised draft will be published as a Draft International Standard (DIS), which will likely be the final draft of C++20.

NB comments typically ask for bug and consistency fixes related to new features added to C++20. Some of them ask for fixes to longer-standing bugs and consistency issues, and some for editorial changes such as fixes to illustrative examples. Importantly, they cannot ask for new features to be added (or at least, such comments are summarily rejected, though the boundary between bug fix and feature can sometimes be blurry).

Occasionally, NB comments ask for a newly added feature to be pulled from the working draft due to it not being ready. In this case, there were comments requesting that Modules and Coroutines (among other things) be postponed to C++23 so they can be better-baked. I’m pleased to report that no major features were pulled from C++20 at this meeting. In cases where there were specific technical issues with a feature, we worked hard to address them. In cases of general “this is not baked yet” comments, we did discuss each one (at length in some cases), but ultimately decided that waiting another 3 years was unlikely to be a net win for the community.

Altogether, over half of the NB comments have been addressed at this meeting, putting us on track to finish addressing all of them by the end of the next meeting, as per our standardization schedule.

While C++20 NB comments were prioritized above all else, some subgroups did have time to process C++23 proposals as well. No proposals were merged into the C++23 working draft at this time (in fact, a “C++23 working draft” doesn’t exist yet; it will be forked from C++20 after the C++20 DIS is published at the end of the next meeting).

Procedural Updates

A few updates to the committee’s structure and how it operates:

  • As the Networking TS prepares to be merged into C++23, it has been attracting more attention, and the committee has been receiving more networking-related proposals (notable among them, one requesting that networking facilities be secure by default), so the Networking Study Group (SG4) has been re-activated so that a dedicated group can give these proposals the attention they deserve.
  • An ABI Review Group (ARG) was formed, comprised of implementors with ABI-related expertise on various different platforms, to advise the committee about the ABI impacts of proposed changes. The role of this group is not to set policy (such as to what extent we are willing to break ABI compatibility), but rather to make objective assessments of ABI impact on various platforms, which other groups can then factor into their decision-making.
  • Not something new, just a reminder: the committee now tracks its proposals in GitHub. If you’re interested in the status of a proposal, you can find its issue on GitHub by searching for its title or paper number, and see its status — such as which subgroups it has been reviewed by and what the outcomes of the reviews were — there.
  • At this meeting, GitHub was also used to track NB comments, one issue per comment, and you can also see their status and resolution (if any) there.

Notes on this blog post

This blog post will be a bit different from previous ones. I was asked to chair the Evolution Working Group Incubator (EWG-I) at this meeting, which meant that (1) I was not in the Evolution Working Group (EWG) for most of the week, and thus cannot report on EWG proceedings in as much detail as before; and (2) the meeting and the surrounding time has been busier for me than usual, leaving less time for blog post writing.

As a result, in this blog post, I’ll mostly stick to summarizing what happened in EWG-I, and then briefly mention a few highlights from other groups. For a more comprehensive list of what features are in C++20, what NB comment resolutions resulted in notable changes to C++20 at this meeting, and which papers each subgroup looked at, I will refer you to the excellent collaborative Reddit trip report that fellow committee members have prepared.

Evolution Working Group Incubator (EWG-I)

EWG-I (pronounced “oogie” by popular consensus) is a relatively new subgroup, formed about a year ago, whose purpose is to give feedback on and polish proposals that include core language changes — particularly ones that are not in the purview of any of the domain-specific subgroups, such as SG2 (Modules), SG7 (Reflection), etc. — before they proceed to EWG for design review.

EWG-I met for two and a half days at this meeting, and reviewed 17 proposals. All of this was post-C++20 material.

I’ll go through the proposals that were reviewed, categorized by the review’s outcome.

Forwarded to EWG

The following proposals were considered ready to progress to EWG in their current state:

  • Narrowing contextual conversions to bool. This proposal relaxes a recently added restriction which requires an explicit conversion from integer types to bool in certain contexts. The motivation for the restriction came from noexcept(): it was very easy to accidentally declare a function as noexcept(f()) (which means “the function is noexcept if f() returns a nonzero value”) instead of noexcept(noexcept(f())) (which means “the function is noexcept if f() doesn’t throw”), and this part of the restriction was kept. However, the proposal argued there was no need for the restriction to also cover static_assert and if constexpr.
  • Structured bindings can introduce a pack. This allows a structured binding declaration to introduce a pack, e.g. auto [...x] = f();, where f() is a function that returns a tuple or other decomposable object, and x is a newly introduced pack of bindings, one for each component of the object; the pack can then be expanded as x... just like a function parameter pack (a short sketch follows this list).
  • Reserving attribute names for future use. This reserves attribute names in the global attribute namespace, as well as the attribute namespace std (or std followed by a number) for future standardization.
  • Accessing object representations. This fixes a defect introduced in C++20 that makes it undefined behaviour to access the bytes making up an object (its “object representation”) by reinterpret_casting its address to char*.
  • move = relocates. This introduces “relocating move constructors”, which are move constructors declared using = relocates in place of = default. This generates the same implementation as for a defaulted move constructor, but the programmer additionally guarantees to the compiler that it can safely optimize a move followed by destruction of the old object into a memcpy of the bytes into the new location, followed by a memcpy of the bytes of a default-constructed instance of the type into the old location. This essentially allows compilers to optimize moves of many types (such as std::shared_ptr), as well as of arrays / vectors of such types, into memcpys. Currently, only types which have an explicit = relocates move constructor declaration are considered relocatable in this way, but the proposal is compatible with future directions where the relocatability of a type is inferred from that of its members (such as in this related proposal).
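To illustrate the structured-bindings item above, here is a small usage sketch; the [...xs] spelling is the proposal’s syntax, not valid C++20, so treat this as illustrative only:

```cpp
#include <iostream>
#include <tuple>

std::tuple<int, double, const char*> f() { return {1, 2.5, "three"}; }

template <typename... Ts>
void print_all(const Ts&... ts) {
    ((std::cout << ts << ' '), ...);  // C++17 fold expression
}

int main() {
    // Proposed: introduce a pack of bindings, one per element of the tuple.
    auto [...xs] = f();
    // The pack then expands just like a function parameter pack.
    print_all(xs...);
}
```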

Forwarded to EWG with modifications

For the following proposals, EWG-I suggested specific revisions, or adding discussion of certain topics, but felt that an additional round of EWG-I review would not be helpful, and the revised paper should go directly to EWG:

  • fiber_context – fibers without scheduler. This is the current formulation of “stackful coroutines”, or rather a primitive on top of which stackful coroutines and other related things like fibers can be built. It was seen by EWG-I so that we can brainstorm possible interactions with other language features. TLS came up, as discussed in more detail in the EWG section.
  • Making operator?: overloadable. See the paper for motivations, which include SIMD blend operations and expression template libraries. The biggest sticking point here is we don’t yet have a language mechanism for making the operands lazily evaluated, the way the built-in operator behaves. However, not all use cases want lazy evaluation; moreover, the logical operators (&& and ||) already have this problem. EWG-I considered several mitigations for this, but ultimately decided to prefer an unrestricted ability to overload this operator, relying on library authors to choose wisely whether or not to overload it.
  • Make declaration order layout mandated. This is largely standardizing existing practice: compilers technically have the freedom to reorder class fields with differing access specifiers, but none are known to do so, and this is blocking future evolution paths for greater control over how fields are laid out in memory. The “modification” requested here is simply to catalogue the implementations that have been surveyed for this.
  • Portable optimisation hints. This standardizes __builtin_assume() and similar facilities for giving the compiler a hint it can use for optimization purposes. EWG-I expressed a preference for an attribute-based ([[assume(expr)]]) syntax; a small sketch of that spelling follows this list.
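As a sketch of the attribute-based spelling EWG-I preferred (the [[assume(...)]] form is the proposed direction, not standard C++ at the time of writing; __builtin_assume is an existing Clang builtin with similar intent):

```cpp
int sum_with_stride(const int* data, int size, int stride) {
    // Proposed semantics: the optimizer may assume the condition is true;
    // if it is false at runtime, the behaviour is undefined.
    [[assume(stride > 0)]];
    int total = 0;
    for (int i = 0; i < size; i += stride) {
        total += data[i];
    }
    return total;
}
```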

Note that almost all of the proposals that were forwarded to EWG have been seen by EWG-I at a previous meeting, sometimes on multiple occasions, and revised since then. It’s rare for EWG-I to forward an initial draft (“R0”) of a proposal; after all, its job is to polish proposals and save time in EWG as a result.

Forwarded to another subgroup

The following proposals were forwarded to a domain-specific subgroup:

  • PFA: a generic, extendable and efficient solution for polymorphic programming. This proposed a mechanism for generalized type erasure, so that types like std::function (which is a type-erased wrapper for callable objects) can easily be built for any interface. EWG-I forwarded this to the Reflection Study Group (SG7) because the primary core language facility involves synthesizing a new type (the proxy / wrapper) based on an existing one (the interface). EWG-I also recommended expressing the interface as a regular type, rather than introducing a new facade entity to the language.

Feedback given

For the following proposals, EWG-I gave the author feedback, but did not consider it ready to forward to another subgroup. A revised proposal would come back to EWG-I.

No proposals were outright rejected at this meeting, but the nature of the feedback did vary widely, from requesting minor tweaks, to suggesting a completely different approach to solving the problem.

  • Provided operator= returns lvalue-ref on an rvalue. This attempts to rectify a long-standing inconsistency in the language, where operator= for a class type can be called on temporaries, which is not allowed for built-in types; this can lead to accidental dangling. EWG-I agreed that it would be nice to resolve this, but asked the author to assess how much code this would break, so we can reason about its feasibility.
  • Dependent static assertion. The problem this tries to solve is that static_assert(false) in a dependent context fails eagerly, rather than being delayed until instantiation. The proposal introduces a new syntax, static_assert<T>(false), where T is some template parameter that’s in scope, for the delayed behaviour. EWG-I liked the goal, but not the syntax. Other approaches were discussed as well (such as making static_assert(false) itself have the delayed behaviour, or introducing a static_fail() operator), but did not have consensus. (An illustration of the problem and the proposed spelling follows this list.)
  • Generalized pack declaration and usage. This is an ambitious proposal to make working with packs and pack-like types much easier in the language; it would allow drastically simplifying the implementations of types like tuple and variant, as well as making many compile-time programming tasks much easier. A lot of the feedback concerned whether packs should become first-class language entities (a “language tuple” of sorts, as previously proposed), or remain closer to their current role as dependent constructs that only become language entities after expansion.
  • Just-in-time compilation. Another ambitious proposal, this takes aim at use cases where static polymorphism (i.e. use of templates) is desired for performance, but the parameters (e.g. the dimensions of a matrix) are not known at compile time. Rather than being a general-purpose JIT or eval() like mechanism, the proposal aims to focus on the ability to instantiate some templates at runtime. EWG-I gave feedback related to syntax, error handling, restricting runtime parameters to non-types, and consulting the Reflection Study Group.
  • Interconvertible object representations. This proposes a facility to assert, at compile time, that one type (e.g. a struct containing two floats) has the same representation in memory as another (e.g. an array of two floats). EWG-I felt it would be more useful if the proposed annotation would actually cause the compiler to use the target layout.
  • Language support for class layout control. This aims to allow the order in which class members are laid out in memory to be customized. It was reviewed by SG7 (Reflection) as well, which expressed a preference for performing the customization in library code using reflection facilities, rather than having a set of built-in layout strategies defined by the core language. EWG-I preferred a keyword-based annotation syntax over attributes, though metaclasses might obviate the need for a dedicated syntax.
  • Epochs: a backward-compatible language evolution mechanism. This was probably the most ambitious proposal EWG-I looked at, and definitely the one that attracted the largest audience. It proposes a mechanism similar to Rust’s editions for evolving the language in ways we have not been able to so far. Central to the proposal is the ability to combine different modules which use different epochs in the same program. This generated a lot of discussion around the potential for fracturing the language into dialects, the granularity at which code opts into an epoch, and what sorts of new features should be allowed in older epochs, among other topics.
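To make the dependent static assertion item concrete, here is the usual workaround today, with the paper’s proposed spelling shown in a comment (that spelling is hypothetical, not valid C++):

```cpp
#include <type_traits>

// Workaround: a condition that formally depends on T, so the assertion is
// only evaluated when this branch is actually instantiated.
template <typename T>
struct dependent_false : std::false_type {};

template <typename T>
void serialize(const T&) {
    if constexpr (std::is_integral_v<T>) {
        // ... handle integral types ...
    } else {
        // static_assert(false, "unsupported type") would be diagnosed at
        // template definition time, before any instantiation, because the
        // condition does not depend on T.
        static_assert(dependent_false<T>::value, "unsupported type");
        // Proposed spelling: static_assert<T>(false, "unsupported type");
    }
}
```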

Thoughts on the role of EWG-I

Having spent time in both EWG and EWG-I, one difference that’s apparent is that EWG is the tougher crowd: features that make it successfully through EWG-I are often still shot down in EWG, sometimes on their first presentation. If EWG-I’s role is to act as a filter for EWG, it is effective in that role already, but there is probably potential for it to be more effective.

One dynamic that you often see play out in the committee is the interplay between “user enthusiasm” and “implementer skepticism”: users are typically enthusiastic about new features, while implementers will often try to curb enthusiasm and be more realistic about a feature’s implementation costs, interactions with other features, and source and ABI compatibility considerations. I’d say that EWG-I tends to skew more towards “user enthusiasm” than EWG does, hence the more permissive outcomes. I’d love for more implementers to spend time in EWG-I, though I do of course realize they’re in short supply and are needed in other subgroups.

Evolution Working Group

As mentioned, I didn’t spend as much time in EWG as usual, but I’ll call out a few of the notable topics that were discussed while I was there.

C++20 NB comments

As with all subgroups, EWG prioritized C++20 NB comments first.

  • The C++20 feature that probably came closest to removal at this meeting was class types as non-type template parameters (NTTPs). Several NB comments pointed out issues with their current specification and asked for either the issues to be resolved, or the feature to be pulled. Thankfully, we were able to salvage the feature. The fix involves axing the feature’s relationship with operator==, and instead having template argument equivalence be based on structural identity, essentially a recursive memberwise comparison. This allows a larger category of types to be NTTPs, including unions, pointers and references to subobjects, and, notably, floating-point types. For class types, only types with public fields are allowed at this time, but future directions for opting in types with private fields are possible. (A small example follows this list.)
  • Parenthesized initialization of aggregates also came close to being removed but was fixed instead.
  • A suggestion to patch a functionality gap in std::is_constant_evaluated() by introducing a new syntactic construct if consteval was discussed at length but rejected. The feature may come back in C++23, but there are enough open design questions that it’s too late for C++20.
  • To my mild (but pleasant) surprise, ABI isolation for member functions, a proposal which divorces a method’s linkage from whether it is physically defined inline or out of line, and which was previously discussed as something that’s too late for C++20 but which we could perhaps sneak in as a Defect Report after publication, was now approved for C++20 proper. (It did not get to a plenary vote yet, where it might be controversial.)
  • A minor consistency fix between constexpr and consteval was approved.
  • A few Concepts-related comments:
    • The ability to constrain non-templated functions was removed because their desired semantics were unclear. They could come back in C++23 with clarified semantics.
    • One remaining visual ambiguity in Concepts is that in a template parameter list, Foo Bar can be either a constrained type template parameter (if Foo names a concept) or a non-type template parameter (if Foo names a type). The compiler knows which by looking up Foo, but a reader can’t necessarily tell just by the syntax of the declaration. A comment proposed resolving this by changing the syntax to Foo auto Bar for the type parameter case (similar to the syntax for abbreviated function templates). There was no consensus for this change; a notable counter-argument is that the type parameter case is by far the more common one, and we don’t want to make the common case more verbose (and the non-type syntax can’t be changed because it’s pre-existing).
    • Another comment pointed out that Concept<X> can also mean two different things: a type constraint (which is satisfied by a type T if Concept<T, X> is true), or an expression which evaluates Concept applied to the single argument X. The comment suggested disambiguating by e.g. changing the first case to Concept<, X>, but there was no consensus for this either.
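To make the non-type template parameter resolution at the start of this list concrete, here is a small example of the kind of code C++20 permits under the new rules (the example is mine, not taken from the NB comments); equivalence of the Color arguments is decided by memberwise “structural identity”, with no operator== involved:

```cpp
struct Color {
    unsigned char r, g, b;   // all-public members: a "structural" type
};

template <Color C>
struct Pixel {
    static constexpr Color value = C;
};

// Both declarations use equivalent template arguments, so they name the same
// specialization: equivalence is a recursive memberwise comparison of r, g, b.
Pixel<Color{255, 0, 0}> red;
Pixel<Color{255, 0, 0}> also_red;

// Floating-point non-type template parameters are also allowed under the new rules.
template <double Scale>
double apply_scale(double x) { return x * Scale; }
```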

Post-C++20 material

Having gotten through all Evolutionary NB comments, EWG proceeded to review post-C++20 material. Most of this had previously gone through EWG-I (you might recognize a few that I mentioned above because they went through EWG-I this week).

  • (Approved) Reserving attribute names for future use
  • (Approved) Accessing object representations. EWG agreed with the proposal’s intent and left it to the Core Working Group to figure out the exact way to specify this intent.
  • (Further work) std::fiber_context – stackful context switching. This was discussed at some length, with at least one implementer expressing significant reservations due to the feature’s interaction with thread-local storage (TLS). Several issues related to TLS were raised, such as the fact that compilers can cache pointers to TLS across function calls, and if a function call executes a fiber switch that crosses threads (i.e. the fiber is resumed on a different OS thread), the cache becomes invalidated without the compiler having expected that; addressing this at the compiler level would be a performance regression even for code that doesn’t use fibers, because the compiler would need to assume that any out-of-line function call could potentially execute a fiber switch. A possible alternative that was suggested was to have a mechanism for a user-directed kernel context switch that would allow coordinating threads of execution (ToEs) in a co-operative way without needing a distinct kind of ToE (namely, fibers).
  • (Further work) Structured bindings can introduce a pack. EWG liked the direction, but some implementers expressed concerns about the implementation costs, pointing out that in some implementations, handling of packs is closely tied to templates, while this proposal would allow packs to exist outside of templates. The author and affected implementers will discuss the concerns offline.
  • (Further work) Automatically generate more operators. This proposal aims to build on the spaceship operator’s model of rewriting operators (e.g. rewriting a < b to a <=> b < 0), and allow other kinds of rewriting, such as rewriting a += b to a = a + b. EWG felt any such facility should be strictly opt-in (e.g. you could give your class an operator+=(...) = default to opt into this rewriting, but it wouldn’t happen by default), with the exception of rewriting a->b to (*a).b (and the less common a->*b to (*a).*b), which EWG felt could safely happen by default. (A sketch follows this list.)
  • (Further work) Named character escapes. This would add a syntax for writing Unicode characters in source code using their descriptive names. Most of the discussion concerned the impact of implementations having to ship a Unicode database containing such descriptive names. EWG liked the direction but called for further exploration to minimize such impacts.
  • (Further work) tag_invoke. This concerns making it easier to write robust customization points for library facilities. There was a suggestion of trying to model the desired operations more directly in the language, and EWG suggested exploring that further.
  • (Rejected) Homogeneous variadic function parameters. This would have allowed things like template <typename T> void foo(T...); to mean “foo is a function template that takes zero or more parameters, all of the same type T”. There were two main arguments against this. First, it would introduce a novel interpretation of template-ids (foo<int> no longer names a single specialization of foo, it names a family of specializations, and there’s no way to write a template-id that names any individual specialization). The objection that seems to have played the larger role in the proposal’s rejection, however, is that C allows things like (int...) to be an alternative way of writing (int, ...) (meaning, an int parameter followed by C-style variadic parameters), and, while this was never allowed in C++, compilers allow it as an extension, and apparently a lot of old C++ code uses it. The author went to some lengths to analyze a large dataset of open-source C++ code for occurrences of such use (of which there were vanishingly few), but EWG felt this wasn’t representative of the majority of C++ code out there, most of which is proprietary.
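For the operator-rewriting item above, C++20’s spaceship operator is the model being generalized; the defaulted <=> below is valid C++20, while the += line in the comment shows the kind of opt-in the proposal contemplates (hypothetical syntax, not valid C++ today):

```cpp
#include <compare>

struct Version {
    int major = 0;
    int minor = 0;
    // C++20: a defaulted <=> lets a < b be rewritten as (a <=> b) < 0,
    // and also gives == / != for free.
    auto operator<=>(const Version&) const = default;
};

static_assert(Version{1, 2} < Version{1, 3});  // uses the rewritten form

// Proposed extension (hypothetical opt-in syntax from the discussion):
//   Version& operator+=(const Version&) = default;  // a += b  =>  a = a + b
```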

Other Highlights

  • Ville Voutilainen’s paper proposing a high-level direction for C++23 was reviewed favourably by the committee’s Direction Group.
  • While I wasn’t able to attend the Reflection Study Group (SG7)’s meeting (it happened on a day EWG-I was in session), I hear that interesting discussions took place. In particular, a proposal concerning side effects during constant evaluation prompted SG7 to consider whether we should revise the envisioned metaprogramming model and take it even further in the direction of “compile-time programming is like regular programming”, such that if you e.g. wanted compile-time output, then rather than using a facility like the proposed constexpr_report, you could just use printf (or whatever you’d use for runtime output). The Circle programming language was pointed to as prior art in this area. SG7 did not make a decision about this paradigm shift, just encouraged exploration of it.
  • The Concurrency Study Group (SG1) came to a consensus on a design for executors. (Really, this time.)
  • The Networking Study Group (SG4) pondered whether C++ networking facilities should be secure by default. The group felt that we should standardize facilities that make use of TLS if and when they are ready, but not block networking proposals on it.
  • The deterministic exceptions proposal was not discussed at this meeting, but one of the reactions that its previous discussions have provoked is a resurgence of interest in better optimizing today’s exceptions. There was an evening session on this topic, and benchmarking and optimization efforts were discussed.
  • web_view was discussed in SG13 (I/O), and I relayed some of Mozilla’s more recent feedback; the proposal continues to enjoy support in this subgroup. The Library Evolution Incubator did not get a chance to look at it this week.

Next Meeting

The next meeting of the Committee will be in Prague, Czech Republic, the week of February 10th, 2020.

Conclusion

This was an eventful and productive meeting, as usual, with the primary accomplishment being improving the quality of C++20 by addressing national body comments. While C++20 is feature-complete and thus no new features were added at this meeting, we did solidify the status of recently added features such as Modules and class types as non-type template parameters, greatly increasing the chances of these features remaining in the draft and shipping as part of C++20.

There is a lot I didn’t cover in this post; if you’re curious about something I didn’t mention, please feel free to ask in a comment.

Other Trip Reports

In addition to the collaborative Reddit report which I linked to earlier, here are some other trip reports of the Belfast meeting that you could check out:

Karl Dubost: Best viewed with… Mozilla Dev Roadshow Asia 2019

I was invited by Sandra Persing to participate in the Mozilla Developer Roadshow 2019 in Asia. The event is going through 5 cities: Tokyo, Seoul, Taipei, Singapore and Bangkok. I committed to participating in Tokyo and Seoul. The other speakers are still on the road. As I'm writing this, they are speaking in Taipei, while I'm back home.

Let's go through the talk and then some random notes about the audience, people and cities.

The Webcompat Talk

The talk was half about webcompat and half about the tools that help developers using Firefox avoid Web compatibility issues. The part about the Firefox devtools was introduced by Daisuke. My notes here are a bit longer than what I actually said; I have the leisure of more space.

Let's talk about webcompat

Intro slide

The market dominance by one browser is not new. It has happened a couple of times already. In an article by Wired from September 2008 (time flies!), there is a nice graph of browser market share. The first dominant browser was Netscape in 1996, the child of Mosaic. It already had influence on the other players. I remember the introduction of images and tables, for example. I remember a version of Netscape where we absolutely had to close the table element; if not, the table was not displayed at all. This had consequences for the web developer job (at that time, we were webmasters). In this case it was arguably a good consequence, because it imposed cleaner, leaner code.

browser market shares

Then Internet Explorer entered the market and took it all. Because the browser was distributed with Windows, it became de facto the default engine in all work environments. Internet Explorer reached its dominance peak around 2003, then started to decline through the effort of Firefox. Firefox never reached a peak (and that's good!). Its maximum worldwide market share was probably around 20%-30% in 2009. Since then, there has been a steady decline of Firefox market share. The issue is not the loss of market share; the issue is the dominance by one player, whichever player it is. I would not be comfortable with Firefox as a dominant browser either. A balance between all browsers is healthy.

note: World market shares are interesting, but they do not represent the full picture. There can be extreme diversity between markets. That was specifically the case 10 years ago: a browser could have 80% of the market share in a specific country and 0% in another one. The issue is amplified by mobile operators. It happened in the Japanese market, which went from 0, to a very high dominance of Safari on iOS, to a shared dominance between Chrome (Android) and Safari (iOS).

The promises of a website

Fantasy website

When a website is sold to a client, we sell the package: the look and feel, the design. In the case of web applications, performance, conversion rates and user engagement pledges are added to the basket. We very rarely talk about the engine. And yet, people are more and more conscious about the quality and origin of the food they buy, the sustainability of materials used to build a house, the maintenance cost of their car.

There are a lot of omissions in what is being promised to the client. This is accentuated by the complete absence of thinking about the resilience of the information contained on the website. Things change radically when we introduce the notions of archivability, time resilience, and robustness across device diversity and obsolescence. These are interesting questions that should be part of the process of designing a website.

years ago websites were made of files; now they are made of dependencies. — Simon Pitt

A simple question such as "What does the content of this page become when the site is no longer updated and the server is no longer maintained?" is rarely asked. A lot of valuable information disappears every day on the Web, just because we didn't ask the right questions.

But I'm drifting a bit from webcompat. Let's come back on track.

The reality of a website

And here is a photo of an actual website.

mechanics workshop

This is dirty, messy, full of dependencies and forgotten bits. Sometimes different versions of the same library are used in the code, with conflicting ways of doing things. The CSS is botched, the JS is in a dire state of complexity, and the HTML and accessibility are a deeply nested soup of code where the meaning has been totally forgotten. With the rise of frameworks such as ReactJS and their components, we fare a lot worse in terms of semantics than what we did a couple of years ago with table layouts.

These big piles of code have consequences. Maintainability suffers. Web compatibility issues increase. By Web compatibility I mean the ability of a website to work correctly on any device, in any context. Not in the sense that it should look the same everywhere, but in the sense that any user should be able to perform the tasks they expect to do on it.

Examples?

  • Misconfigured user agent sniffing creating a ping-pong game (HTTP/JS redirections) between a mobile and a desktop site.
  • User agent detection in JavaScript code to deliver a specific feature, which fails when the user agent changes or the browser is fixed.
  • Detection of a feature to change the behavior of the browser. window.event was not standard, and not implemented in Firefox for a long time. Webcompat issues pushed Mozilla to implement it to solve those issues. In return, it created new webcompat issues, because some sites were using it to mean "Firefox and not IE" and then choose between keyCode and charCode, which had yet another series of unintended consequences.
  • WebKit prefixes for CSS and JS… Circa 2015, 20% of the Japanese and Chinese mobile websites were breaking in one way or another in Firefox on Android. That made it impossible to have a reasonable impact on the Japanese market or to create an alliance with an operator to distribute Firefox more widely. So some of the WebKit prefixes became part of the Web platform, because a new browser can't exist if it does not have these aliases. Forget about asking web developers to do the right thing: some sites are not maintained anymore, but users are still using them.

The list goes on.

The ultimate victim?

one person in the subway

The user who decided to use a specific browser for personal reasons is the victim. They are caught between a website not doing the right thing and a tool (the browser) not working properly. If you are not a user of the dominant browser of the market, you are in for a bumpy ride on the Web. Your browser choice becomes more about the conviction of doing the right thing than about making your life easier.

This should not be the case.

A form to save the Web

webcompat site screenshot

We, the Web compatibility team, created a form to help users report issues about websites working in one browser but not in others. Reports can be made for any browser, not only Mozilla Firefox (for which I work). The majority of our issues come from Firefox, for two reasons.

  1. Because Firefox does not have market dominance, web developers do not test in Firefox. This is even more acute on mobile.
  2. The bug reporting is integrated into the Firefox UI (Developer, Nightly releases).

When we triage the issues (an amazing team of three people: Ciprian, Oana and Sergiu), we ping the right people working for other browser companies to analyze the issues. I would love more participation from the other browsers, but that participation is too irregular to be effective.

On the Mozilla side, we have a good crew.

Diagnosis: a web plumbers team

person in a workshop

Every day, someone on the Mozilla webcompat team (Dennis, Ksenia, Thomas or myself) handles the diagnosis of incoming issues. We call this diagnosis. You remember the messy website picture? Well, we roll up our sleeves and dig right into the greasy bolts and screws. Minified code, broken code, unmaintained libraries, ill-defined CSS: we go through it and try to make sense of it, sometimes without access to the original sources.

Our goal is to determine if it's really one of these:

  • not a webcompat issue. There is a difference, but it is created by different widget rendering or by browser features specific to one vendor (font inflation, for example)
  • a bug in Gecko (core engine of Firefox) that needs to be fixed
  • a mistake of the website

Once the website has been diagnosed, we have options…

Charybdis and Scylla: Difficult or… dirty

chess player, people in phone booth, tools to package a box

The options once we know what the issue is are not perfect.

Difficult: We can try to do outreach. That means trying to find the owner of the website or the developer who created it. It's a very difficult task to discover who is in charge, and it depends a lot on the country and the type of business the site is doing. Contacting a bank site and getting it fixed is nearly impossible. Finding a Japanese developer of a big corporation's website is very hard (a lot of secrecy is going around). It's a costly process, and the results are not always great. If you are one of the owners of a broken website, please contact us.

Perhaps we should create a search engine for broken websites, so people can find their own sites.

Dirty: The other option is to fix things on the browser side. The ideal scenario is when there is really a bug in Firefox and Mozilla needs to fix it. But sometimes the webcompat team pushes Gecko engineers to introduce a dirty fix into the code, so Firefox can be compatible with what a big website is doing. This can be a site intervention that modifies the code on the fly, or it can be a more general issue which requires fixing both the browser and the technical specification.

The specification… ??? Yes. Back to the browser market share and its influence on the world. If a dominant browser implements the specification incorrectly, it doesn't matter what is right. Web developers will plunge into using the feature as it behaves in the dominant browser. This solidifies the technical behavior of the feature, puts the burden of implementing a different behavior on smaller browsers, and eventually forces the specification to be changed to match what everyone was forced to implement.

The Web is amazing!

korean palace corridor and scissors

All of that said, the Web is an amazing place. It's a space which gave us the possibility to express ourselves, to discover each other, to work together, to create beautiful things on a daily basis. Just continue doing that, but be mindful that the web is for everyone.

Notes from a dilettantish speaker

I wanted to mention a couple of things, but I think I will do that in a separate blog post with other things that have happened this week.

Otsukare!

The Firefox Frontier: Here’s why pop culture and passwords don’t mix

Were they on a break or not?! For nearly a decade, Ross and Rachel’s on-screen relationship was a point of contention for millions of viewers around the world. It’s no … Read more

The post Here’s why pop culture and passwords don’t mix appeared first on The Firefox Frontier.

Mozilla Security Blog: Adding CodeQL and clang to our Bug Bounty Program

At GitHub Universe, GitHub announced the GitHub Security Lab, an initiative to help secure open source software alongside the community and an initial set of partners including Mozilla. As part of this announcement, GitHub is providing free access to CodeQL, a security research tool which makes it easier to identify flaws in open source software. Mozilla has used these tools privately for the past two years, and has been very impressed and hopeful about how these tools will improve software security. Mozilla recognizes the need to scale security so that it works automatically, and to tighten the feedback loop in the development <-> security auditing/engineering process.

One of the ways we’re supporting this initiative at Mozilla is through renewed investment in automation and static analysis. We think the broader Mozilla community can participate, and we want to encourage it. Today, we’re announcing a new area of our bug bounty program to encourage the community to use the CodeQL tools.  We are exploring the use of CodeQL tools and will award a bounty – above and beyond our existing bounties – for static analysis work that identifies present or historical flaws in Firefox.

The highlights of the bounty are:

  • We will accept static analysis queries written in CodeQL or as clang-based checkers (clang analyzer, clang plugin using the AST API or clang-tidy).
  • Each previously unknown security vulnerability your query matches will be eligible for a bug bounty per the normal policy.
  • The query itself will also be eligible for a bounty, the amount dependent upon the quality of the submission.
  • Queries that match historical issues but do not find new vulnerabilities are eligible. This means you can look through our historical advisories to find examples of issues you can write queries for.
  • Mozilla’s and GitHub’s bug bounties are compatible, not exclusive, so if you meet the requirements of both, you are eligible to receive bounties from both. (More details below.)
  • The full details of this program are available at our bug bounty program’s homepage.

When fixing any security bug, retrospective is an important part of the remediation process which should provide answers to the following questions: Was this the only instance of this issue? Is this flaw representative of a wider systemic weakness that needs to be addressed? And most importantly: can we prevent an issue like this from ever occurring again? Variant analysis, driven manually, is usually the way to answer the first two questions. And static analysis, integrated in the development process, is one of the best ways to answer the third.

Besides our existing clang analyzer checks, we’ve made use of CodeQL over the past two years to do variant analysis. This tool allows identifying bugs both in the context of targeted, zero-false-positive queries, and more expansive results where the manual analysis starts from a more complete and less noise-filled point than simple string matching. To see examples of where we’ve successfully used CodeQL, we have a meta tracking bug that illustrates the types of bugs we’ve identified.

We hope that security researchers will try out CodeQL too, and share both their findings and their experience with us. And of course regardless of how you find a vulnerability, you’re always welcome to submit bugs using the regular bug bounty program. So if you have custom static analysis tools, fuzzers, or just the mainstay of grep and coffee – you’re always invited.

Getting Started with CodeQL

GitHub is publishing a guide covering how to use CodeQL at https://securitylab.github.com/tools/codeql

Getting Started with Clang Analyzer

We currently have a number of custom-written checks in our source tree, so the easiest way to write and run your query is to build Firefox, add ‘ac_add_options --enable-clang-plugin’ to your mozconfig, add your check, and then run ‘./mach build’ again.
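For a sense of the shape such a check can take, here is a minimal, hypothetical sketch using clang’s public AST matcher API; the function name it matches is invented for illustration, and the Mozilla-specific wiring (Checks.inc, ChecksIncludes.inc, tests) described in the next paragraph is not shown:

```cpp
#include "clang/AST/ASTContext.h"
#include "clang/AST/Expr.h"
#include "clang/ASTMatchers/ASTMatchFinder.h"
#include "clang/ASTMatchers/ASTMatchers.h"
#include "clang/Basic/Diagnostic.h"

using namespace clang;
using namespace clang::ast_matchers;

// Reports every call to the (made-up) legacy helper so it can be audited.
class LegacyCallChecker : public MatchFinder::MatchCallback {
public:
  void run(const MatchFinder::MatchResult &Result) override {
    if (const auto *Call = Result.Nodes.getNodeAs<CallExpr>("legacyCall")) {
      DiagnosticsEngine &Diags = Result.Context->getDiagnostics();
      unsigned ID = Diags.getCustomDiagID(
          DiagnosticsEngine::Warning,
          "call to LegacyUnsafeCopy; prefer the bounds-checked variant");
      Diags.Report(Call->getBeginLoc(), ID);
    }
  }
};

// Registers the matcher: any call whose callee is named LegacyUnsafeCopy.
void registerLegacyCallChecker(MatchFinder &Finder, LegacyCallChecker &Checker) {
  Finder.addMatcher(
      callExpr(callee(functionDecl(hasName("LegacyUnsafeCopy"))))
          .bind("legacyCall"),
      &Checker);
}
```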

To learn how to add your check, you can review this recent bug that added a couple of new checks – it shows how to add a new plugin to Checks.inc, ChecksIncludes.inc, and additionally how to add tests. This particular plugin also adds a couple of attributes that can be used in the codebase, which your plugin may or may not need. Note that depending on how you view the diffs, it may appear that the author modified existing files, but actually they copied an existing file, then modified the copy.

Future of CodeQL and clang within our Bug Bounty program

We retain the ability to be flexible. We’re planning to evaluate the effectiveness of the program when we reach $75,000 in rewards or after a year. After all, this is something new for us and for the bug bounty community. We—and Github—welcome your communication and feedback on the plan, especially candid feedback. If you’ve developed a query that you consider more valuable than what you think we’d reward – we would love to hear that. (If you’re keeping the query, hopefully you’re submitting the bugs to us so we can see that we are not meeting researcher expectations on reward.) And if you spent hours trying to write a query but couldn’t get over the learning curve – tell us and show us what problems you encountered!

We’re excited to see what the community can do with CodeQL and clang; and how we can work together to improve on our ability to deliver a browser that answers to no one but you.

The post Adding CodeQL and clang to our Bug Bounty Program appeared first on Mozilla Security Blog.

Mozilla Addons Blog: 2019 Add-ons Community Meetup in London

At the end of October, the Firefox add-ons team hosted a day-long meetup with a group of privacy extension developers as part of the Mozilla Festival in London, UK. With 2019 drawing to a close, this meetup provided an excellent opportunity to hear feedback from developers involved in the Recommended Extensions program and to get input about some of our plans for 2020.

Recommended Extensions

Earlier this summer we launched the Recommended Extensions program to provide Firefox users with a list of curated extensions that meet the highest standards of security, utility, and user experience. Participating developers agree to actively maintain their extensions and to have each new version undergo a code review. We invited a handful of Recommended developers to attend the meetup and gather their feedback about the program so far. We also discussed more general issues around publishing content on addons.mozilla.org (AMO), such as ways of addressing user concerns over permission prompts.

Scott DeVaney, Senior Editorial & Campaign Manager for AMO, led a session on ways developers can improve a few key experiential components of their extensions. These tips may be helpful to the developer community at large:

  • AMO listing page. Use clear, descriptive language to convey exactly what your extension does and how it benefits users. Try to avoid overly technical jargon that average users might not understand. Also, screenshots are critical. Be sure to always include updated, relevant screenshots that really capture your extension’s experience.
  • Extension startup/post-install experience. First impressions are really important. Developers are encouraged to take great care in how they introduce new users to their extension experience. Is it clear how users are supposed to engage with the content? Or are they left to figure out a bunch of things on their own with little or no guidance? Conversely, is the guidance too cumbersome (i.e. way too much text for a user to comfortably process?)
  • User interface. If your extension involves customization options or otherwise requires active user engagement, be sure your settings management is intuitive and all UI controls are obvious.

  • Monetization. It is of course entirely fine for developers to solicit donations for their work or possibly even charge for a paid service. However, monetary solicitation should be tastefully appropriate. For instance, some extensions solicit donations just after installation, which makes little sense given the extension hasn’t proven any value to the user yet. We encourage developers to think through their user experience to find the most compelling moments to ask for donations or attempt to convert users to a paid tier.

WebExtensions API and Manifest v3

One of our goals for this meetup was to learn more about how Firefox extension developers will be affected by Chrome’s proposed changes to their extensions API (commonly referred to as Manifest v3).  As mentioned in our FAQ about Manifest v3, Mozilla plans to adopt some of these changes to maintain compatibility for developers and users, but will diverge from Chrome where it makes sense.

Much of the discussion centered around the impact of changes to the `blocking webRequest` API and replacing background scripts with service workers. Attendees outlined scenarios where changes in those areas will cause breakage to their extensions, and the group spent some time exploring possible alternative approaches for Firefox to take. Overall, attendees agreed that Chrome’s proposed changes to host permission requests could give users more say over when extensions can run. We also discussed ideas on how the WebExtensions API could be improved in light of the goals Manifest v3 is pursuing.

More information about changes to the WebExtensions API for Manifest v3 compatibility will be available in early 2020. Many thanks to everyone who has contributed to this conversation over the last few months on our forums, mailing list, and blogs!

Firefox for Android

We recently announced that Firefox Preview, Mozilla’s next generation browser for Android built on GeckoView, will support extensions through the WebExtensions API. Members of the Android engineering team will build select APIs needed to initially support a small set of Recommended Extensions.

The group discussed a wishlist of features for extensions on Android, including support for page actions and browser actions, history search, and the ability to manipulate context menus. These suggestions will be considered as work on Firefox Preview moves forward.

Thank you

Many thanks to the developers who joined us for the meetup. It was truly a pleasure to meet you in person and to hear first hand about your experiences.

The add-ons team would also like to thank Mandy Chan for making us feel at home in Mozilla’s London office and all of her wonderful support during the meetup.

Hacks.Mozilla.Org: Thermostats, Locks and Extension Add-ons – WebThings Gateway 0.10

Happy Things Thursday! Today we are releasing WebThings Gateway 0.10. If you have a gateway using our Raspberry Pi builds then it should already have automatically updated itself.

This new release comes with support for thermostats and smart locks, as well as an updated add-ons system including extension add-ons, which enable developers to extend the gateway user interface. We’ve also added localisation settings so that you can choose your country, language, time zone and unit preferences. From today you’ll be able to use the gateway in American English or Italian, but we’re already receiving contributions of translations in different languages!

Thermostat and lock in Things UI

Thermostats

Version 0.10 comes with support for smart thermostats like the Zigbee Zen Thermostat, the Centralite HA 3156105 and the Z-Wave Honeywell TH8320ZW1000.

Thermostat UI

You can view the current temperature of your home remotely, set a heating or cooling target temperature and set the current heating mode. You can also create rules which react to temperature or control your heating/cooling via the rules engine. In this way, you could set the heating to come on at a particular time of day or change the colour of lights based on how warm it is, for example.

Smart Locks

Ever wonder if you’ve forgotten to lock your front door? Now you can check when you get to work, and even lock or unlock the doors remotely. With the help of the rules engine, you can also set rules to lock doors at a particular time of day or notify you when they are unlocked.

Lock UI

So far we have support for Zigbee and Z-Wave smart locks like the Yale YRD226 Deadbolt and Yale YRD110 Deadbolt.

Extension Add-ons

Version 0.10 also comes with a revamped add-ons system which includes a new type of add-on called extensions. Like a browser extension, an extension add-on can be used to augment the gateway’s user interface.

For example, an extension can add its own entry in the gateway’s main menu and display its own dedicated screen with new functionality.

Together with a new mechanism for add-on developers to extend the gateway’s REST API, this opens up a whole new world of possibilities for add-on developers to customise the gateway.

Note that the updated add-ons system comes with a new manifest format inspired by Web Extensions. Stand by for a blog post next week which will explain in more depth how to use the new add-ons system. We’ll walk you through building your own extension add-on.

Localisation Settings

Many add-ons use location-specific data like weather, sunrise/sunset and tide times, but it’s no fun to have to configure your location for each add-on. It’s now possible to choose your country, time zone and language via the gateway’s web interface.

With time zone support, time-based rules should now correctly adjust for daylight savings time in your region. Since the gateway is configured to use Greenwich Mean Time by default, your rules may show times you didn’t expect at first. To fix this, you’ll need to set your time zone appropriately and adjust your rule times. You can also set your preference of unit used to display temperature, to either degrees Celsius or Fahrenheit.

And finally, many of you have asked for the user interface to support multiple languages. We are shipping with an Italian translation in this release thanks to our resident Italian speaker Kathy. We already have French, Dutch and Polish translations in the pipeline thanks to our wonderful community. Stand by for more information on how to contribute to translations in your language!

API Changes & Standardisation

For developers, in addition to the new add-ons system, it’s now possible to communicate with all the gateway’s web things via a single WebSocket connection. Previously it was necessary to open a WebSocket per device, so this is a significant enhancement.

We’ve recently started the Web Thing Protocol Community Group at the W3C with the intention of standardising this WebSocket sub-protocol in order to further improve interoperability on the Web of Things. We welcome developers to join this group to contribute to the standardisation process.

Coming Soon

Coming up next, expect Mycroft voice controls, translations into more languages and new ways to install and use WebThings Gateway.

As always, you can head over to the forums for support. And we welcome your contributions on GitHub.

The post Thermostats, Locks and Extension Add-ons – WebThings Gateway 0.10 appeared first on Mozilla Hacks - the Web developer blog.

Firefox Nightly: These Weeks in Firefox: Issue 68

Highlights

  • The “Omniscient” Browser Toolbox will be enabled by default in the coming days
    • This is a browser developer toolbox with the ability inspect/debug multiple processes
    • As a follow-up, the Browser Content Toolbox (which debugs the content process for the current tab) will no longer be needed and will be removed.
  • The JavaScript Debugger’s new Watchpoints feature, which lets you pause on object property get/set, is on by default in Nightly and will ride to DevEdition for more QA and user feedback
    • The JavaScript debugger is paused at a line of code where a property was set, due to a watchpoint on that property.

      A watchpoint paused on this.model.attributes.completed being set.

  • The Network Monitor’s new Request Blocking feature lets you block requests based on match patterns – it is now enabled by default on Nightly and DevEdition
  • User-Initiated Picture-in-Picture for videos is now enabled by default on macOS and Linux on Nightly! Please test it out, and file bugs against this meta for us to triage.

Friends of the Firefox team

Resolved bugs (excluding employees)

Fixed more than one bug
  • Alex Vincent [:WeirdAl]
  • Ben Campbell
  • Christoph Walcher
  • Chujun Lu
  • Florens Verschelde :fvsch
  • Itiel
  • jaril
  • Mustafa
  • Tim Nguyen :ntim
New contributors (🌟 = first patch)

Project Updates

Accessibility

Add-ons / Web Extensions

  • Shane landed the first bits of the changes to the webextensions CSP (Bug 1581611, Bug 1587939, Bug 1581609, content script CSP is currently behind the extensions.content_script_csp.enabled and extensions.content_script_csp.report_only prefs), and fixed a bug related to the behavior of the “private browsing” checkbox included in the post install notification (Bug 1581852).
  • Tomislav fixed AddonManagerWebAPI::IsAPIEnabled in Fission out-of-process iframes (Bug 1591736)
  • Mark Striemer moved the about:addons page header, page options, search addons field and global warnings from XUL to HTML (Bug 1545346)
  • Andrea Marchesini is moving secure-proxy experimental APIs into mozilla-central (Bug 1592687, Bug 1592932), and added the third-party state to the request details provided by the webRequest and proxy API events (Bug 1591900)
  • Gijs made sure that Firefox reuses an existing about:preferences tab when the user is opening it from the about:addons page (Bug 1592600)
  • Matt Woodrow fixed a webRequest API regression (Bug 1590898, recently regressed by Bug 1583700)
  • Graham extended the browserAction/pageAction onClicked API event to allow extensions to receive “middle-button” mouse clicks, and extended the onClicked event details to also include mouse button and keyboard modifier states (Bug 1405031). Thanks Graham for contributing this enhancement!
  • Itiel fixed the alignment of the warning/error messages in the about:addons extension shortcuts view when RTL mode is enabled (Bug 1575472). Thanks Itiel for contributing this fix!
  • Trishul fixed a webRequest bug triggered by webRequest API events received for tabs that are already closed (Bug 1447807). Thanks Trishul for looking into it and contributing a fix!

Developer Tools

  • Console’s multi-line editor mode now has shortcuts for importing snippets and saving them back to files: Ctrl/Cmd-O and Ctrl/Cmd-S. This is based on feedback from users who were used to these shortcuts when using Scratchpad.
  • Contributor Sorin Davidoi optimized the Debugger to use fewer resources when the tab is in the background.
  • The Browser Toolbox’s --jsdebugger flag can now point to another Firefox executable that will be used for the Browser Toolbox, so you can run DevTools in an optimized build: ./mach run --jsdebugger $NIGHTLY

Fission

  • M4 came and went with 130 or so tests still not enabled. Teams will continue to fix tests while making the Fission-enabled browser stable enough for daily use, which is our M5 target.
  • Classified front end work for M5
  • Some work already finished:

New Tab Page

  • Getting Discovery Stream working for the de locale (right now it’s just en-US and en-CA).
  • Turning personalization back on for sponsored content.
  • Some minor UI and UX fixes and some experiments.

Password Manager

Performance

Performance Tools

  • The profile metadata panel now shows the settings that were used to capture the profile.
    • The Profiler metadata panel is showing what settings the profile was gathered with. It lists the sampling interval (1ms), the buffer capacity (16.0MB), the buffer duration (Unlimited), and the profiler features that were enabled.

      This helps us see at a glance how a profile was gathered, which can help explain some of the patterns within the profile.

  • There’s a new Track IPC feature (off by default) that can be enabled from the profiler’s capture panel to track async IPC.
    • The Profiler showing the Marker Chart, and indicating when IPC messages were sent and received.

      The parent process appears to be sending quite a few IPC messages here.

    • Example profile of the IPC happening when opening new tabs.

Picture-in-Picture

Search and Navigation

Search
Address Bar
  • Regression fixes:
  • Visual redesign (aka “megabar”, Firefox 73)
    • Temporarily disabled in Nightly while we work on a new revision of the design; additional feedback on the old revision was not useful. We will re-enable it once we are closer to the MVP.
    • New one-off buttons flex behavior for small windows
    • Removed some more legacy urlbar code, especially from autocomplete
  • Search Interventions experiment (Firefox 72)
    • Experimental add-on is being worked on.
  • Search Nudges experiment (Firefox 72)
    • Project revised to use the Search Interventions API.

User Journey

  • Launched a couple of “Relationship” CFRs in nightly and beta, riding the trains to release
  • What’s New feature made it to the release channel
  • Investigating user-agent attribution focusing on “Chrome switchers” to show contextual onboarding, e.g., dynamic first-run cards
  • Potentially expanding new-user onboarding cards to show for pre-Skyline profiles with remote messages to current release users

Hacks.Mozilla.OrgUpcoming notification permission changes in Firefox 72

Notifications. Can you keep count of how many websites or services prompt you daily for permission to send notifications? Can you remember the last time you were thrilled to get one?

Earlier this year we decided to reduce the amount of unsolicited notification permission prompts people receive as they move around the web using the Firefox browser. We see this as an intrinsic part of Mozilla’s commitment to putting people first when they are online.

In preparation, we ran a series of studies and experiments. We wanted to understand how to improve the user experience and reduce annoyance. In response, we’re now making some changes to the workflow for how sites ask users for permission to send them notifications. Firefox will require explicit user interaction on all notification permission prompts, starting in Firefox 72.

For the full background story, and details of our analysis and experimentation, please read Restricting Notification Permission Prompts in Firefox. Today, we want to be sure web developers are aware of the upcoming changes and share best practices for these two key scenarios:

  1. How to guide the user toward the prompt.
  2. How to acknowledge the user changing the permission.

an animation showing the browser UI where a user can click on the small permission icon that appears in the address bar.

We anticipate that some sites will be impacted by changes to the user flow. We suspect that many sites do not yet deal with the latter in their UX design. Let’s briefly walk through these two scenarios:

How to guide the user toward the prompt

Ideally, sites that want permission to notify or alert a user already guide them through this process. For example, they ask if the person would like to enable notifications for the site and offer a clickable button.

document.getElementById("notifications-button").addEventListener("click", () => {
  Notification.requestPermission().then(setupNotifications);
});

Starting with Firefox 72, the notification permission prompt is gated behind a user gesture. We will not deliver prompts on behalf of sites that do not follow the guidance above. Firefox will instantly reject the promise returned by Notification.requestPermission() and PushManager.subscribe(). However, the user will see a small notification permission icon in the address bar.
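
As a rough sketch of how a site might cope with this, reusing the setupNotifications callback from the snippet above plus a hypothetical showEnableNotificationsButton helper, you can handle both a non-granted result and the instant rejection described above by falling back to an in-page control that triggers the request from a click:

Notification.requestPermission()
  .then(permission => {
    if (permission === "granted") {
      setupNotifications();
    } else {
      // "default" or "denied": offer an explicit in-page control instead.
      showEnableNotificationsButton();
    }
  })
  .catch(() => {
    // Called without a user gesture: Firefox 72 rejects the promise instantly.
    showEnableNotificationsButton();
  });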

Note that because PushManager.subscribe() requires a ServiceWorkerRegistration, Firefox will carry user-interaction flags through promises that return ServiceWorkerRegistration objects. This enables popular examples to continue to work when called from an event handler.

Firefox shows the notification permission icon after a successful prompt. The user can select this icon to make changes to the notification permission. For instance, if they decide to grant the site the permission, or change their preference to no longer receive notifications.

How to acknowledge the user changing the permission

When the user changes the notification permission through the notification permission icon, this is exposed via the Permissions API:

navigator.permissions.query({ name: "notifications" }).then(status => {
  status.onchange = () => potentiallyUpdateNotificationPermission(status.state);
  potentiallyUpdateNotificationPermission(status.state);
});

We believe this improves the user experience and makes it more consistent, and it allows sites to align their interface with the notification permission. Please note that the code above works in earlier versions of Firefox as well. However, users are unlikely to change the notification permission dynamically in earlier Firefox releases. Why? Because there was no notification permission icon in the address bar.

Our studies show that users are more likely to engage with prompts that they’ve interacted with explicitly. We’ve seen that through pre-prompting in the user interface, websites can inform the user of the choice they are making before presenting a prompt. Otherwise, unsolicited prompts are denied in over 99% of cases.

We hope these changes will lead to a better user experience for all and better and healthier engagement with notifications.

The post Upcoming notification permission changes in Firefox 72 appeared first on Mozilla Hacks - the Web developer blog.

Mozilla Open Policy & Advocacy BlogMozilla plays role in Kenya’s adoption of crucial data protection law

The Kenyan Data Protection and Privacy Act 2019 was signed into law last week. This GDPR-like law is the first data protection law in Kenyan history, and marks a major step forward in the protection of Kenyans’ privacy. Mozilla applauds the Government of Kenya, the National Assembly, and all stakeholders who took part in the making of this historic law. It is indeed a huge milestone that sees Kenya become the latest addition to the list of countries with data protection-related laws in place, providing much-needed safeguards to its citizens in the digital era.

Strong data protection laws are critical in ensuring that user rights are protected and that companies and governments are compelled to appropriately handle the data they are entrusted with. As part of its policy work in Africa, Mozilla has been at the forefront of advocating for the new law since 2018. The latest development is most welcome, as Mozilla continues to champion the 5 policy hot-spots that are key to Africa’s digital transformation.

Mozilla is pleased to see that the Data Protection Act is consistent with international data protection standards, through its approach of placing users’ rights at the centre of the digital economy. Mozilla also applauds the creation of an empowered data protection commission with a high degree of independence from the government. The law also imposes strong obligations on data controllers and processors, requiring them to abide by principles of meaningful user consent, collection limitation, purpose limitation, data minimization, and data security.

It is commendable that the law has maintained integrity throughout the process, and many of the critical comments Mozilla submitted on the initial Data Protection Bill (2018) have been reflected in the final act. The suggestions included the requirement for robust protections of data subjects with the rights to rectification, erasure of inaccurate data, objection to processing of their data, as well as the right to access, and to be informed of the use of their data; with the aim of providing users with control over their personal data and online experiences.

Mozilla continues to be actively engaged in advocating for strong data privacy and protection in the entire African region, where fewer than 20 countries have a data protection law. Considering that the region has seen the world’s fastest growth in internet use over the past decade, the continent is poised for great opportunities around accessing the internet. However, without the requisite laws in place, many users, often accessing the internet for the first time, will be put at risk.

The post Mozilla plays role in Kenya’s adoption of crucial data protection law appeared first on Open Policy & Advocacy.

The Firefox FrontierTracking Diaries with Tiffany LaTrice Williams

In Tracking Diaries, we invited people from all walks of life to share how they spent a day online while using Firefox’s privacy protections to keep count of the trackers … Read more

The post Tracking Diaries with Tiffany LaTrice Williams appeared first on The Firefox Frontier.

Hacks.Mozilla.OrgAnnouncing the Bytecode Alliance: Building a secure by default, composable future for WebAssembly

Today we announce the formation of the Bytecode Alliance, a new industry partnership coming together to forge WebAssembly’s outside-the-browser future by collaborating on implementing standards and proposing new ones. Our founding members are Mozilla, Fastly, Intel, and Red Hat, and we’re looking forward to welcoming many more.

Three wasm runtimes (Wasmtime, Lucet and WAMR) with linked arms under a banner that says Bytecode Alliance and saying 'Come join us!'

We have a vision of a WebAssembly ecosystem that is secure by default, fixing cracks in today’s software foundations. And based on advances rapidly emerging in the WebAssembly community, we believe we can make this vision real.

We’re already putting these solutions to work on real world problems, and those are moving towards production. But as an Alliance, we’re aiming for something even bigger…

Why

As an industry, we’re putting our users at risk more and more every day. We’re building massively modular applications, where 80% of the code base comes from package registries like npm, PyPI, and crates.io.

Making use of these flourishing ecosystems isn’t bad. In fact, it’s good!

The problem is that current software architectures weren’t built to make this safe, and bad guys are taking advantage of that… at a dramatically increasing rate.

What the bad guys are exploiting here is that we’ve gotten our users to trust us. When the user starts up your application, it’s like the user’s giving your code the keys to their house. They’re saying “I trust you”.

But then you invite all of your dependencies, giving each one of them a full set of keys to the house. These dependencies are written by people who you don’t know, and have no reason to trust.

A user entrusting their keys to an app. Then that app turns around and gives copies of the keys to all the dependencies. Two of the dependencies have masks on and say 'Look at that poor, lonely bitcoin. I’ll find a nice home for it' and 'look at what a lovely file system they have'

As a community, we have a choice. The WebAssembly ecosystem could provide a solution here… at least, if we choose to design it in a way that’s secure by default. But if we don’t, WebAssembly could make the problem even worse.

As the WebAssembly ecosystem grows, we need to solve this problem. And it’s a problem that’s too big to solve alone. That’s where the Bytecode Alliance comes in.

A personified WebAssembly logo holding a shield and saying 'I stand for the user!' vs a WebAssembly logo sitting on the couch saying 'I am busy. Cant the users protect themselves?'

What

The Bytecode Alliance is a group of companies and individuals, coming together to form an industry partnership.

Together, we’re putting in solid, secure foundations that can make it safe to use untrusted code, no matter where you’re running it—whether on the cloud, natively on someone’s desktop, or even on a tiny IoT device.

With this, developers can be as productive as they are today, using open source in the same way, but without putting their users at risk.

This common, reusable set of foundations can be used on their own, or embedded in other libraries and applications.

Currently, we’re collaborating on:

Runtimes:

  • Wasmtime is a stand-alone WebAssembly runtime that can be used as a CLI tool or embedded into other systems. It’s very configurable and scalable so that it can serve as the base for many use-case specific runtimes, from small IoT devices all the way up to cloud data centers.
  • Lucet is an example of a use-case specific runtime. It’s ideal for fast CDNs and Edge Compute, using AOT compilation and other techniques to provide low latency and high concurrency. We are refactoring it to use Wasmtime at its core.
  • WebAssembly Micro Runtime (WAMR) is another use-case specific runtime. It’s ideal for small embedded devices that have extremely limited resources. It provides a small footprint and uses an interpreter to keep memory overhead low.

Runtime components:

  • Cranelift is emerging as a state-of-the-art code generator. It is designed to generate optimized machine code very quickly because it parallelizes compilation on a function-by-function level.
  • WASI common is a standalone implementation of the WebAssembly System Interface that runtimes can use.

Language tooling

  • cargo-wasi is a lightweight Cargo subcommand that compiles Rust code to target WebAssembly and the WebAssembly System Interface for outside-the-browser use.
  • wat and wasmparser parse WebAssembly. wat parses the text format, and wasmparser is an event-driven library for parsing the binary format.

And we expect this set of projects to expand as we grow the Alliance.

Our members are also leading the WASI standards effort itself, as well as the Rust to WebAssembly working group.

Who

The founding members of the Bytecode Alliance are Mozilla, Fastly, Intel, and Red Hat.

We’re starting with a lightweight governance structure. We intend to formalize this structure gradually over time. You can read more about this in our FAQ.

As we said before, this is too big to go it alone. That’s why we’re happy to welcome new members to the Alliance. If you want to join, please email us at hello@bytecodealliance.org.

Here’s why we think this is important:

WebAssembly is changing the web, but we believe WebAssembly can play an even bigger role in the software ecosystem as it continues to expand beyond browsers. This is a unique moment in time at the dawn of a new technology, where we have the opportunity to fix what’s broken and build new, secure-by-default foundations for native development that are portable and scalable. But we need to take deliberate, cross-industry action to ensure this happens in the right way. Together with our partners in the Bytecode Alliance, Mozilla is building these new secure foundations—for everything from small, embedded devices to large, computing clouds.

— Luke Wagner, Distinguished Engineer at Mozilla and co-creator of WebAssembly

Fastly is very happy to help bring the Bytecode Alliance to the community. Lucet and Cranelift have been developed together for years, and we’re excited to formalize their relationship and help them grow faster together. This is an important moment in computing history, marking our chance to redefine how software will be built across clients, origins, and the edge. The Bytecode Alliance is our way of contributing to and working with the community, to create the foundations that the future of the internet will be built on.

— Tyler McMullen, CTO at Fastly

Intel is joining the Bytecode Alliance as a founding member to help extend WebAssembly’s performance and security benefits beyond the browser to a wide range of applications and servers. Bytecode Alliance technologies can help developers extend software using a wide selection of languages, building upon the full capabilities of leading-edge compute platforms.
— Mark Skarpness; VP, Intel Architecture, Graphics, and Software; Director, Data-Centric System Stacks

Red Hat believes deeply in the role open source technologies play in helping provide the foundation for computing from the operating system to the browser to the open hybrid cloud. Wasmtime is an exciting development that helps move WebAssembly out of the browser into the server space where we are experimenting with it to change the trust model for applications, and we are happy to be involved in helping it grow into a mature, community-based project.

— Chris Wright, senior vice president and Chief Technology Officer at Red Hat

So that’s the big news! 🎉

To learn more about what we’re building together, read on.

The problem

The way that we architect software has radically changed over the past 20 years. In 2003, companies had a hard time getting developers to reuse code.

Now 80% of your average code base is built with modules downloaded from registries like JavaScript’s npm, Python’s PyPI, Rust’s crates.io, and others. Even C++ is moving towards enabling an ecosystem of composable modules.

This new way of developing applications has made us much more productive as an industry. But it has also introduced gaping wide holes in our security. And as I talked about above, the bad guys are using those holes to attack our users.

The user is entrusting us with the keys to their house and we’re giving that access away like candy… and that’s not because we’re irresponsible. It’s because there’s huge value in these packages, but no easy way to mitigate the security risks that come from using them.

More than this, it’s not just our own dependencies that come along for the ride. It’s also any modules that they depend on—the indirect dependencies.

Modules passing keys down the dependency tree

What do these developers get access to?

  1. resources on the machine—things like files and memory
  2. APIs and syscalls—tools that they can use to do things to those resources

system resources on one side, including memory, file system, and network connections. Host-provided APIs and syscalls on the other side, including open, write, getrandom, clock, and usb_make_path

This means that these modules can do a lot of damage. This could either be on purpose, as with malicious code, or completely accidentally, as with vulnerable code.

Let’s look at how these attacks work.

Malicious code

Malicious code is written by the attacker themselves.

Attackers often use social engineering to get their package into applications. They create a package that has useful features, and then sneak in some malicious code. Once the code is in the app and the user starts up the app, the code can attack the user.

Dependency tree with every module holding keys. One of them is a malicious module saying Let's take a look at that file system

This is how a hacker stole $1.5 million worth of cryptocurrency (and almost $13 million more) from users, starting in March of this year.

  • Day 0 (March 6): The attacker published a module to npm: electron-native-notify. This seemed useful—it helped Electron apps fire off native notifications in a way that worked across platforms. It didn’t have any malicious code in it, yet.
  • Day 2: To pull off the heist, the attacker had to get this module into the cryptocurrency app. The vector they chose was a dependency in an application that helps users manage their cryptocurrency, the Agama Wallet.
  • Day 17: The attacker added the malicious payload.
  • Day 41-66: The app was rebuilt, pulling in the most recent version of the dependency, and electron-native-notify with it. At this point, it started sending users’ “seeds” (username/password combos) to a server. The attacker could then use these seeds to empty users’ wallets.
  • Day 90: A user alerted npm to suspicious behavior in electron-native-notify, and npm notified the cryptocurrency platform, which moved funds from vulnerable wallets to a secure one.

Let’s look at what access this malicious code needed to pull off the attack.

It needed the seed. To do this, it got access to memory holding the seed.

Then it needed to send the seed to a server. For this it needed access to a socket, and access to an API or syscall for opening that socket.

Diagram of the system resources and syscall needed to pull this off

Malicious code attacks are on the rise as more and more attackers realize how vulnerable our systems of trust are. For example, the number of malicious modules published to npm more than doubled from 2017 to 2019. And npm’s VP of security points out that these attacks are getting more serious.

In 2019, we’re seeing more financially motivated attacks. Early attacks were focused on shenanigans—deleting files and trying to steal some credentials. Real basic attacks, kind of smash and grab style. Now end users are the target. And [the attackers] are trying hard to be patient, to be sneaky, to really have a well planned out attack.

Vulnerable code

Vulnerabilities are different. The module maintainer isn’t trying to do anything bad.

The module maintainer just has a bug in their code. But attackers can use that bug to trick their code into doing something it shouldn’t do.

Dependency tree with a vulnerable module talking on the phone to an attacker, asking what the attacker wants it to do

For an example of this, let’s look at ZipSlip.

ZipSlip is a vulnerability found in modules in many ecosystems: JavaScript, Java, .NET, Go, Ruby, C++, Python… the list goes on. And it affected thousands of projects, including ones from HP, Amazon, Apache, Pivotal, and many more.

If a module had this vulnerability, attackers could use it to replace files anywhere in the file system. For example, the attacker could use it to replace .js files in the node_modules directory. Then, when the .js file was required, the attacker’s code would run.

Attacker telling dependency to unpack a zip file that will overwrite a file in the node modules directory

So how did this work?

The attacker would create a file, and give it a filename that included ‘../’, building up a relative file path. When the vulnerable code unzipped it, it wouldn’t sanitize the filename. So it would call the write syscall with the relative path, placing the file wherever the attacker wanted it to go in the file system.
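
As a rough sketch of the missing check (illustrative code, not taken from any of the affected modules), an extractor can resolve each entry name against the target directory and refuse anything that escapes it:

const path = require("path");

function safeExtractPath(targetDir, entryName) {
  // path.resolve collapses any "../" segments in the entry name.
  const resolved = path.resolve(targetDir, entryName);
  // If the resolved path is no longer inside targetDir, the entry is
  // trying to slip out of the extraction directory, so reject it.
  if (!resolved.startsWith(path.resolve(targetDir) + path.sep)) {
    throw new Error(`Blocked ZipSlip attempt: ${entryName}`);
  }
  return resolved;
}

safeExtractPath("/tmp/uploads", "images/logo.png");                         // fine
// safeExtractPath("/tmp/uploads", "../../app/node_modules/foo/index.js"); // throws

The vulnerable modules skipped a check like that and passed the attacker-controlled relative path straight to the write syscall.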

So what access did the vulnerable code need for the attacker to pull this off?

It needed access to the directory with the sensitive files.

It also needed access to the write syscall.

Diagram of the system resource and syscall needed to pull this off

Vulnerabilities are on the rise, as well, with an 88% increase in the last two years. As Snyk reported in their 2019 State of Open Source Security Report:

In 2018, vulnerabilities for npm grew by 47%. Maven Central and PHP Packagist disclosures grew by 27% and 56% respectively.

With these risks, you would think that patching vulnerabilities would be a high priority. But as Snyk found, only 59% of packages have known fixes for disclosed vulnerabilities, because many maintainers don’t have the time or the security know-how to fix these vulnerabilities.

This leads to widespread vulnerability, as these vulnerable modules are depended on by other modules. For example, a study of npm modules found that up to 40% of packages depend on code with at least one publicly known vulnerability.

How can we protect users today?

So how can you protect your users against these threats in today’s software ecosystem?

  • You can run scanners that detect fishy coding patterns in your code and that of your dependencies. But there are many things these automated tools can’t catch.
  • You could subscribe to a monitoring service that alerts you when a vulnerability is found in one of your dependencies. But this only works for vulnerabilities that have already been found. And even once a vulnerability has been found, there’s a good chance the maintainer won’t be able to fix it quickly. For example, Snyk found in the npm ecosystem that for the top 6 packages, the median time-to-fix (measured starting at the vulnerability’s inclusion) was 2.5 years.
  • You could try and do manual code review. Whenever there’s an update in a dependency, you’d review the changed lines. But if you have hundreds of modules in your dependency tree, that could easily be 100,000 lines of code to review every few weeks.
  • You could pin your dependencies, so that malicious code can’t get in until you’ve had a chance to review. But then fixes for vulnerabilities would be stalled, leaving your app vulnerable longer.

This all makes for a kind of no-win scenario, which makes you want to throw up your hands and just hope for the best. But as an industry, we simply can’t do that. There’s too much at stake for our users.

All of these solutions try to catch malicious and vulnerable code. But what if we look at this a different way?

Part of the problem was the access that these modules had. What if we took away that access?

system resources and syscalls with red no-access signs crossing them out

We’ve faced this kind of challenge as an industry before… just at a different granularity.

When you have two programs running on a computer at the same time, how do you know that one won’t mess with the other? Do you have to trust the programs to behave?

No, because protection is built into the operating system. The tool that OSs use to protect programs from each other is the process.

When you start a program, the OS fires up a new process. Each process gets its own chunk of memory to use, and it can’t access memory in other processes.

If it wants to get data from the other process, it has to ask. Then the data is sent across the process boundary. This makes sure each program is in control of its own data in memory.

Two processes, each with its own memory. A module in one process says 'Hey, I am sending over some data'. Then it has to serialize the data, send it across a pipe that connects the two processes, and then deserialize it.

This memory isolation does make it much safer to run two programs at the same time. But this isn’t perfect security. A malicious program can still mess with certain other resources, like files in the file system.

VMs and containers were developed to fix this. They ensure that something running in one VM or container can’t access the file system of another. And with sandboxing, it’s possible to take away access to APIs and syscalls.

So could we put each package in its own little isolated unit? Its own little sandboxed process?

That would solve our problem. But it would also introduce a new problem.

All of these techniques are relatively heavyweight. If we wrap hundreds of packages into their own sandboxed process, we’d quickly run out of memory. We’d also make the function calls between the different packages much slower and more complicated.

A large dependency tree of heavy weight processes connected by pipes, vs a small tree of modules

But it turns out that new technologies are giving us new options.

As we’re building out the WebAssembly ecosystem, we can design how the pieces fit together in a way that gives you the kind of isolation that you get with processes or containers, but without the downsides.

Tomorrow’s solution: WebAssembly “nanoprocesses”

WebAssembly can provide the kind of isolation that makes it safe to run untrusted code. We can have an architecture that’s like Unix’s many small processes, or like containers and microservices.

But this isolation is much lighter weight, and the communication between them isn’t much slower than a regular function call.

This means you can use them to wrap a single WebAssembly module instance, or a small collection of module instances that want to share things like memory among themselves.

Plus, you don’t have to give up the nice programming language affordances—like function signatures and static type checking.

Two heavyweight processes connected by slow pipes next to two small nanoprocesses connected with a small slot

So how does this work? What about WebAssembly makes this possible?

First, there’s the fact that each WebAssembly module is sandboxed by default.

By default, the module doesn’t have access to APIs and system calls. If you want the module to be able to interact with anything outside of the module, you have to explicitly provide the module with the function or syscall. Then the module can call it.

WebAssembly engine passing a single syscall into a nanoprocess
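
Here is a rough illustration of what “explicitly providing” a capability looks like with the WebAssembly JavaScript API. The module name "module.wasm", its env.log_number import, and its run export are assumptions made for the example:

const importObject = {
  env: {
    // The one and only capability we hand to this module.
    log_number: n => console.log("module says:", n),
  },
};

WebAssembly.instantiateStreaming(fetch("module.wasm"), importObject)
  .then(({ instance }) => {
    // The module runs sandboxed; if it had imported anything the host did
    // not supply, instantiation would have failed with a LinkError instead.
    instance.exports.run();
  });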

Second, there’s the memory model.

Unlike a normal binary compiled directly to something like x86, a WebAssembly module doesn’t have access to all of the memory in its process. It only has access to the chunk of memory that has been assigned to it.

In theory, scripting languages would also provide this kind of isolation. Code in scripting languages can’t directly access the memory in the process. It can only access memory through the variables it has in scope.

But in most scripting language ecosystems, code makes a lot of use of a shared global object. That’s effectively the same as shared memory. So the conventions in the ecosystem make memory isolation a problem for scripting languages as well.

WebAssembly could have had this problem. In the early days, some wanted to establish a convention of passing a shared memory in to every module. But the community group opted for the more secure convention of keeping memory encapsulated by default.

This gives us memory isolation between the two modules. That means that a malicious module can’t mess with the memory of its parent module.

A nanoprocess containing a memory object that is a subset of the process's memory

But how do we share data from our module? You have to pass it in or out as the values of a function call.

There’s a problem here, though. By default, WebAssembly only has a handful of numeric types, which means you can only pass individual numbers across.

Two nanoprocesses passing numbers to each other, but can't pass more complex data in memory

Here’s where the third feature comes in—the interface type proposal which we demoed in August. With interface types, modules can communicate using more complex values—things like strings, sequences, records, variants, and nested combinations of these.

That makes it easy for two modules to exchange data, but in a way that’s secure and fast. The WebAssembly engine can do direct copies between the caller and the callee’s memories, without having to serialize and deserialize the data. And this works even if the two modules aren’t compiled from the same language.

One module in a nanoprocess asking the engine to pass a string from its memory over to the other nanoprocess

So that’s how we ensure that a malicious module can’t mess with the memory of other modules in the application.

But we don’t just need to take precautions around how memory is handled between these modules. Because if the application is actually going to be able to do anything with that data, it is going to need to call APIs or system calls. And those APIs or system calls might have access to shared resources, like the file system. And as we talked about in a previous post, the way that most operating systems handle access to the file system really falls down in providing the security we need here.

So we need APIs and system calls that actually have the concept of permissions baked into them, so that they can give different modules different permissions to different resources.

This is where the fourth feature comes in: WASI, the WebAssembly system interface.

That gives us a way to isolate these different modules from each other and give them fine-grained permissions to particular parts of the file system and other resources, as well as fine-grained permissions for different system calls.

System resources with locks and chains around them
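
One concrete way to picture this is Node.js’s experimental wasi module, which only exposes the directories you pre-open to the WebAssembly module. The sketch below assumes that API (run with node --experimental-wasi-modules; depending on your Node version, extra options may be required), and the file paths and "unzipper.wasm" module name are placeholders:

const fs = require("fs");
const { WASI } = require("wasi");

const wasi = new WASI({
  args: process.argv,
  env: {},
  preopens: {
    // Inside the module, "/uploads" maps to this one host directory and nothing else.
    "/uploads": "/tmp/uploads",
  },
});

(async () => {
  const wasm = await WebAssembly.compile(fs.readFileSync("unzipper.wasm")); // placeholder module
  const instance = await WebAssembly.instantiate(wasm, {
    wasi_snapshot_preview1: wasi.wasiImport,
  });
  // The module can read and write under /uploads, but cannot even name
  // any other path on the machine.
  wasi.start(instance);
})();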

So with this, we’ve taken control of the keys.

What’s missing?

Right now, we don’t have a way to pass these keys down through the dependency tree. We need a way for parent modules to give these keys to their dependencies. This way, they can give their dependencies exactly the keys they need and no other ones. And then, those dependencies can do the same thing for their children, all the way down through the tree.

That’s what we’ll be working on next. In technical terms, we’re planning to use a fine-grained form of per-module virtualization. This is an idea that researchers have demonstrated in research contexts, and we’re working on bringing it to WebAssembly.

A parent nanoprocess with two children. The parent is passing keys for the open syscall and a particular directory to one, and the getrandom syscall to the other

Taken all together, these features make it possible for us to have similar isolation to that of a process, but with much lower overhead. This pattern of usage is what we’re calling a WebAssembly nanoprocess.

It’s still just WebAssembly, but it follows a particular pattern. And if we build this pattern into the tools and conventions we use, we can make third-party code reuse safe in WebAssembly in a way it hasn’t been in other ecosystems to date.

Sidenote: At the moment, every nanoprocess is made up of exactly one wasm module. In the future, we’ll focus on toolchain support for creating nanoprocesses containing multiple wasm modules, allowing native-style dynamic linking while preserving the memory isolation of nanoprocesses.

With these foundations in place, developers will be able to build dependency trees of isolated packages, each with their own tiny little nanoprocess around them.

How do nanoprocesses protect users?

So how does this help us keep users safe?

For both malicious code and vulnerable code, it’s the fact that we’re following the principle of least authority that keeps us secure. Let’s walk through how this helps.

Malicious code

In order to carry out an attack, a module often needs access to resources and APIs that it wouldn’t otherwise need access to. With our approach, the module has to declare that it needs access to these things. That makes it easy to see when it’s asking to do things it shouldn’t.

For example, the wallet stealing module needed access to both a socket pointing to the server that it was sending the seed to, and the ability to open a network connection.

But this module was just supposed to take the text that was passed in to it and hand it off to the system, so that the system could display it. Why would the module need to open a network connection to a remote server to do that?

If electron-native-notify had asked for these permissions from the start, that would’ve been a serious red flag. The maintainers of EasyDEX-GUI probably wouldn’t have accepted it into their dependency tree.

A malicious module asking a direct dependency of the target app to join the dependency tree. The malicious module says 'Hey, I want to join you! All I need is access to a socket and the open syscall' and the direct dependency responds 'Wait, why? I dont even have that. I would need to ask the app for it... and I'm not gonna do that'

If the malicious maintainer had tried to slip these access requests in later, sometime after it was already in EasyDEX-GUI, then that would’ve been a breaking change. The module’s signature would have changed, and WebAssembly throws an error when the calling code doesn’t provide the imports or parameters that a module expects.

This means there’s no way to sneak in access to a new resource or system call under the radar.

A malicious module that is in the dependency tree asking for an upgrade in permissions and its parent module saying 'That doesnt make sense. Let me look at those changes'

So it’s unlikely that the maintainer could get the basic access they needed to communicate with the server. But even if this maintainer were somehow able to trick the app developer into granting them this access, they still would have an extremely hard time getting the seed.

That’s because the seed would be in another module’s memory. There are two ways for a module to get access to something from some other module’s memory. And neither of them allow the malicious module to be sneaky about it.

  1. The module that has the seed exports the seed variable, making it automatically visible to all other modules in the app.
  2. The module that has the seed passes it directly to a function in the malicious module as a parameter.

Both seem incredibly unlikely for a variable this sensitive.

A malicious dependency thinking to itself 'Dang it! How do I get the seed?' because the seed is in the memory of a different nanoprocess

There is unfortunately one less straightforward way that attackers can access another module’s memory—side-channel attacks like Spectre. OS processes attempt to provide something called time protection, and this helps protect against these attacks. Unfortunately, CPUs and OSes don’t offer finer-than-process granularity time protection features.

There’s a good chance you’ve never thought about using processes to protect your code. Why don’t most people have to care? Because hosting providers take care of it, sometimes using complex techniques.

If you are architecting your code with isolated processes now, you may still need to. Making a shift to nanoprocesses here would take careful analysis.

But there are lots of situations where timing protection isn’t needed, or where people aren’t using processes at all right now. These are good candidates for nanoprocesses.

Plus, as CPUs evolve to provide cheaper time protection features, WebAssembly nanoprocesses will be in a good position to quickly take advantage of these features, at which point you won’t need to use OS processes for this.

Vulnerable code

How does this protect against attackers exploiting vulnerabilities?

Just as with malicious modules, it’s less likely that a vulnerable module legitimately needs the combination of access rights that the attacker needs.

In our ZipSlip example, even for legitimate uses, the vulnerable module does need access to a dangerous syscall—the write syscall which allows it to write files.

But it doesn’t need access to the node_modules directory for any legitimate use. It would only have access to the directory that its parent passes to it, which would be some directory that zip files can be unarchived into.

That makes it impossible for the vulnerable module to carry out the attacker’s wishes (unless the parent knowingly passes a very sensitive directory into the vulnerable module).

A dependency with the ZipSlip vulnerability talking on the phone to an attacker saying 'Oh, I am sorry. I dont have access to that folder. My parent only gave me a handle for the uploads directory'

As noted in the caveat above, it is possible that the application developer would pass in a sensitive directory, like the root directory. But for this to work, this mistake would have to be made in the top-level package. It can’t happen in a transitive dependency. By making all of these accesses explicit, it becomes much easier to catch these kinds of sloppy mistakes in review.

Other benefits of nanoprocesses

With nanoprocesses, we can continue to build massively modular applications… we can keep the developer productivity gains while also ensuring that our users are safe.

But what else can nanoprocesses help with?

Isolation that fits anywhere

These nanoprocesses—these little container-like things—can fit in all sorts of places that regular processes and containers and VMs can’t go.

This is what makes it possible for us to wrap each package or groups of packages into these tiny little micro processes. Because they’re so small, they won’t blow up the size of your application. But dependency trees aren’t the only places where you would use these nanoprocesses.

Here are just a couple of other use cases we see for them, and which partners in the Alliance are working on:

Handling requests for tens of thousands of customers on the same machine

Fastly serves 13% of the Internet’s traffic. That means responding to millions of requests, and trying to do it with as little overhead as possible while keeping customers secure.

They’ve come up with an innovative architecture using WebAssembly nanoprocesses which makes it possible to securely host tens of thousands of simultaneously running programs in the same process. Their approach completely isolates the request from previous requests, ensuring full VM isolation. It has the added advantage of having cold start that’s orders of magnitude faster.

Software fault isolation for individual libraries in native applications

In some applications, most of your code is written in-house, and only a few libraries are written by untrusted developers outside of the organization.

In these cases, you can use a nanoprocess to create a lightweight, in-process sandbox for a single library, rather than having your full dependency tree wrapped in nanoprocesses.

We’ve started building this kind of sandboxing in Firefox to protect users from bugs in third party libraries, like font rendering engines, and image and audio decoders.

Improved software composability

For decades, software development best practices have been driving towards more and more componentized architectures, built out of small, Lego-like pieces.

This started with libraries and modules described above, running in the same process in the same language. More recently, there’s been a progression towards services, and then microservices, which give you the security benefits of isolation.

But beyond security, the isolation services provide also makes software more composable. It means that you can plug together services that are written in different languages, or different versions of the same language. That gives you a much larger set of lego blocks to play with.

But these services can’t go all the places that libraries can go because they’re too big. They are often running inside a process, which is running inside of a container, which is running on a server. This means that you often have to use a coarse-grained approach when breaking your app apart into these services.

With wasm, we can replace microservices with nanoprocesses and get the same security and language independence benefits. It gives us the composability of microservices without the weight. This means we can use a microservices-style architecture and the language interoperability that provides, but with a finer-grained approach to defining the component services.

A dependency tree of nanoprocesses, each containing a module written in a different language

Join us

So that’s the vision we have for a safer future for our users. We believe that WebAssembly doesn’t just have the opportunity but the responsibility to provide this kind of safety.

And we hope that you’ll join us!

The post Announcing the Bytecode Alliance: Building a secure by default, composable future for WebAssembly appeared first on Mozilla Hacks - the Web developer blog.

Mozilla Addons BlogExtensions in Firefox 71

Firefox 71 is a light release in terms of extension changes. I’d like to tell you about a few interesting improvements nevertheless.

Thanks to Nils Maier, there have been various improvements to the downloads API, specifically in handling download failures. In addition to previously reported failures, the browser.downloads.download API will now report an error in case of various 4xx error codes. Similarly, HTTP 204 (No Content) and HTTP 205 (Reset Content) are now treated as bad content errors. This makes the API more compatible with Chrome and gives developers a way to handle these errors in their code. With the new allowHttpErrors parameter, extensions may also ignore some http errors when downloading. This will allow them to download the contents of server error pages.
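
For illustration, a download call might use the new option like this (the URL and filename are placeholders):

browser.downloads.download({
  url: "https://example.com/report",   // placeholder URL
  filename: "report.html",
  allowHttpErrors: true                // keep the download even if the server answers with an error page
}).then(downloadId => {
  console.log(`Download started with id ${downloadId}`);
}).catch(error => {
  // Without allowHttpErrors, 4xx responses would be rejected here as download errors.
  console.error(`Download failed: ${error}`);
});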

Please also note that the lowercase isarticle filter for the tabs.onUpdated listener has been removed in Firefox 71. Developers should use the camelCase isArticle filter instead.
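
As a small sketch of the camelCase filter in use (the listener body is just an example):

browser.tabs.onUpdated.addListener(
  (tabId, changeInfo, tab) => {
    console.log(`Tab ${tabId} isArticle is now ${changeInfo.isArticle}`);
  },
  { properties: ["isArticle"] } // the lowercase "isarticle" form no longer works in Firefox 71
);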

A few more minor updates are available as well:

  • Popup windows now include the extension name instead of its moz-extension:// url when using the windows.create API.
  • Clearer messaging when encountering unexpected values in manifest.json (they are often warnings, not errors)
  • Extension-registered devtools panels now interact better with screen readers

Thank you contributors Nils and Myeongjun Go for the improvements, as well as our WebExtensions team for fixing various tests and backend related issues. If you’d like to help make this list longer for the next releases, please take a look at our wiki on how to contribute. I’m looking forward to seeing what Firefox 72 will bring!

The post Extensions in Firefox 71 appeared first on Mozilla Add-ons Blog.

The Mozilla BlogNew Bytecode Alliance Brings the Security, Ubiquity, and Interoperability of the Web to the World of Pervasive Computing

New community effort will create a new cross-platform, cross-device computing runtime based on the unique advantages of WebAssembly 

MOUNTAIN VIEW, California, November 12, 2019 — The Bytecode Alliance is a newly-formed open source community dedicated to creating new software foundations, building on standards such as WebAssembly and WebAssembly System Interface (WASI). Mozilla, Fastly, Intel, and Red Hat are founding members.

The Bytecode Alliance will, through the joint efforts of its contributing members, deliver a state-of-the-art runtime environment and associated language toolchains, where security, efficiency, and modularity can all coexist across the widest possible range of devices and architectures. Technologies contributed and collaboratively evolved through the Alliance leverage established innovation in compilers, runtimes, and tooling, and focus on fine-grained sandboxing, capabilities-based security, modularity, and standards such as WebAssembly and WASI.

Founding members are making several open source project contributions to the Bytecode Alliance, including:

  • Wasmtime, a small and efficient runtime for WebAssembly & WASI
  • Lucet, an ahead-of-time compiler and runtime for WebAssembly & WASI focused on low-latency, high-concurrency applications
  • WebAssembly Micro Runtime (WAMR), an interpreter-based WebAssembly runtime for embedded devices
  • Cranelift, a cross-platform code generator with a focus on security and performance, written in Rust

Modern software applications and services are built from global repositories of shared components and frameworks, which greatly accelerates creation of new and better multi-device experiences but understandably increases concerns about trust, data integrity, and system vulnerability. The Bytecode Alliance is committed to establishing a capable, secure platform that allows application developers and service providers to confidently run untrusted code, on any infrastructure, for any operating system or device, leveraging decades of experience doing so inside web browsers.

Partner quotes:

Mozilla:

“WebAssembly is changing the web, but we believe WebAssembly can play an even bigger role in the software ecosystem as it continues to expand beyond browsers. This is a unique moment in time at the dawn of a new technology, where we have the opportunity to fix what’s broken and build new, secure-by-default foundations for native development that are portable and scalable. But we need to take deliberate, cross-industry action to ensure this happens in the right way. Together with our partners in the Bytecode Alliance, Mozilla is building these new secure foundations—for everything from small, embedded devices to large, computing clouds,” said Luke Wagner, Distinguished Engineer at Mozilla and co-creator of WebAssembly.

Fastly:

“Fastly is very happy to help bring the Bytecode Alliance to the community,” said Tyler McMullen, CTO at Fastly. “Lucet and Cranelift have been developed together for years, and we’re excited to formalize their relationship and help them grow faster together. This is an important moment in computing history, marking our chance to redefine how software will be built across clients, origins, and the edge. The Bytecode Alliance is our way of contributing to and working with the community, to create the foundations that the future of the internet will be built on.”

Intel:

“Intel is joining the Bytecode Alliance as a founding member to help extend WebAssembly’s performance and security benefits beyond the browser to a wide range of applications and servers. Bytecode Alliance technologies can help developers extend software using a wide selection of languages, building upon the full capabilities of leading-edge compute platforms,” said Mark Skarpness, VP, Intel Architecture, Graphics, and Software; Director, Data-Centric System Stacks.

Red Hat: 

“Red Hat believes deeply in the role open source technologies play in helping provide the foundation for computing, from the operating system to the browser to the open hybrid cloud,” said Chris Wright, senior vice president and Chief Technology Officer at Red Hat. “Wasmtime is an exciting development that helps move WebAssembly out of the browser into the server space where we are experimenting with it to change the trust model for applications, and we are happy to be involved in helping it grow into a mature, community-based project.”

Useful Links:

About Mozilla

Mozilla has been a pioneer and advocate for the web for more than 20 years. We are a global organization with a mission to promote innovation and opportunity on the Web. Today, hundreds of millions of people worldwide use the popular Firefox browser to discover, experience, and connect to the Web on computers, tablets and mobile phones. Together with our vibrant, global community of developers and contributors, we create and promote open standards that ensure the internet remains a global public resource, open and accessible to all.

The post New Bytecode Alliance Brings the Security, Ubiquity, and Interoperability of the Web to the World of Pervasive Computing appeared first on The Mozilla Blog.

This Week In RustThis Week in Rust 312

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

#Rust2020

Find all #Rust2020 posts at Read Rust.

Crate of the Week

This week has multiple crates:

  • accurate, accumulator types for more accurate (or even provably correct) sum and dot product of floating-point numbers
  • transfer, a crate to transfer values between pinned instances.
  • genawaiter, a crate to allow generators on stable Rust.

Thanks to Nestor Demeure and Willi Kappler for the suggestions!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

310 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs

No RFCs are currently in final comment period.

Tracking Issues & PRs

New RFCs

Upcoming Events

Europe
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

In my experience, prayers are not a very effective concurrency primitive.

Robert Lord on his blog

Thanks to Stephan Sokolow for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nasa42, llogiq, and Flavsditz.

Discuss on r/rust.

Mozilla Addons BlogSecurity improvements in AMO upload tools

We are making some changes to the submission flow for all add-ons (both AMO- and self-hosted) to improve our ability to detect malicious activity.

These changes, which will go into effect later this month, will introduce a small delay in automatic approval for all submissions. The delay can be as short as a few minutes, but may take longer depending on the add-on file.

If you use a version of web-ext older than 3.2.1, or a custom script that connects to AMO’s upload API, this new delay in automatic approval will likely cause a timeout error. This does not mean your upload failed; the submission will still go through and be approved shortly after the timeout notification. Your experience using these tools should remain the same otherwise.

You can prevent the timeout error from being triggered by updating web-ext or your custom scripts before this change goes live. We recommend making these updates this week.

  • For web-ext: update to web-ext version 3.2.1, which has a longer default timeout for `web-ext sign`. To update your global install, use the command `npm install -g web-ext`.
  • For custom scripts that use the AMO upload API: make sure your upload scripts account for potentially longer delays before the signed file is available. We recommend allowing up to 15 minutes.
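
As an illustration of the second point, a custom script can poll with a generous overall budget before giving up. The checkSigningStatus callback below is a hypothetical helper standing in for whatever call your script already makes to AMO’s signing status endpoint:

// Wait up to 15 minutes for automatic approval, checking every 30 seconds.
async function waitForSignedFile(checkSigningStatus, { timeoutMs = 15 * 60 * 1000, intervalMs = 30 * 1000 } = {}) {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const status = await checkSigningStatus(); // hypothetical: resolves to { signed: boolean, downloadUrl?: string }
    if (status.signed) {
      return status.downloadUrl;
    }
    await new Promise(resolve => setTimeout(resolve, intervalMs));
  }
  throw new Error("Timed out waiting for the signed add-on; allow up to 15 minutes.");
}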

The post Security improvements in AMO upload tools appeared first on Mozilla Add-ons Blog.

David HumphreyHacktoberfest 2019

I've been marking student submissions in my open source course this weekend, and with only a half-dozen more to do, the procrastinator in me decided a blog post was in order.

Once again I've asked my students to participate in Hacktoberfest.  I wrote about the experience last year, and wanted to give an update on how it went this time.

I layer a few extra requirements on the students, some of which deal with things I've learned in the past.  For one, I ask them to set some personal goals for the month, and look at each pull request as a chance to progress toward achieving these goals.  The students are quite different from one another, which I want to celebrate, and this lets them go in different directions and move at different paces.

Here are some examples of the goals I heard this time around:

  • Finish all the required PRs
  • Increase confidence in myself as a developer
  • Master git/GitHub
  • Learn new languages and technologies (Rust, Python, React, etc)
  • Contribute to projects we use and enjoy on a daily basis (e.g., VSCode)
  • Contribute to some bigger projects (e.g., Mozilla)
  • Add more experience to our resume
  • Read other people's code, and get better at understanding new code
  • Work on projects used around the world
  • Work on projects used locally
  • Learn more about how big projects do testing

So how did it go?  First, the numbers:

  • 62 students completed all 4 PRs during the month (95% completion rate)
  • 246 Pull Requests were made, consisting of 647 commits to 881 files
  • 32K lines of code were added or modified

I'm always interested in the languages they choose.  I let them work on any open source project, so given this freedom, how would they use it?  The most popular languages by pull request were:

  • JavaScript/TypeScript - 50%
  • HTML/CSS - 11%
  • C/C++/C# - 11%
  • Python - 10%
  • Java - 5%

Web technology projects dominate GitHub, and it's interesting to see that this is not entirely out of sync with GitHub's own stats on language positions.  As always, the long-tail provides interesting info as well.  A lot of people worked on bugs in languages they didn't know previously, including:

Swift, PHP, Go, Rust, OCaml, PowerShell, Ruby, Elixir, Kotlin

Because I ask the students to "progress" with the complexity and involvement of their pull requests, I had fewer people working in "Hacktoberfest" style repos (projects that pop up for October, and quickly vanish).  Instead, many students found their way into larger and well known repositories and organizations, including:

Polymer, Bitcoin, Angular, Ethereum, VSCode, Microsoft Calculator, React Native for Windows, Microsoft STL, Jest, WordPress, node.js, Nasa, Mozilla, Home Assistant, Google, Instacart

The top GitHub organization by pull request volume was Microsoft.  Students worked on many Microsoft projects, which is interesting, since they didn't coordinate their efforts.  It turns out that Microsoft has a lot of open source these days.

When we were done, I asked the students to reflect on the process a bit, and answer a few questions.  Here's what I heard.

1. What are you proud of?  What did you accomplish during October?

  • Contributing to big projects (e.g., Microsoft STL, Nasa, Rust)
  • Contributing to small projects that really needed my help
  • Learning a new language (e.g., Python)
  • Having PRs merged into projects we respect
  • Translation work -- using my personal skills to help a project
  • Seeing our work get shipped in a product we use
  • Learning new tech (e.g., complex dev environments, creating browser extensions)
  • Successfully contributing to a huge code base
  • Getting involved in open source communities
  • Overcoming the intimidation of getting involved

2. What surprised you about Open Source?  How was it different than you expected?

  • People in the community were much nicer than I expected
  • I expected more documentation, it was lacking
  • The range of projects: big companies, but also individuals and small communities
  • People spent time commenting on, reviewing, and helping with our PRs
  • People responded faster than we anticipated
  • At the same time, we also found that some projects never bothered to respond
  • Surprised to learn that everything I use has some amount of open source in it
  • Surprised at how many cool projects there are, so many that I don’t know about
  • Even on small issues, lead contributors will get involved in helping (e.g., 7 reviews in a node.js fix)
  • Surprised at how unhelpful the “Hacktoberfest” label is in general
  • “Good First Issue” doesn’t mean it will be easy.  People have different standards for what this means
  • Lots of things on GitHub are inactive, be careful you don’t waste your time
  • Projects have very different standards from one to the next, in terms of process, how professional they are, etc.
  • Surprised to see some of the hacks even really big projects use
  • Surprised how willing people were to let us get involved in their projects
  • Lots of camaraderie between devs in the community

3. What advice would you give yourself for next time?

  • Start small, progress from there
  • Manage your time well, it takes way longer than you think
  • Learn how to use GitHub’s Advanced Search well
  • Make use of your peers, ask for help
  • Less time looking for a perfect issue, more time fixing a good-enough issue
  • Don’t rely on the Hacktoberfest label alone.
  • Don’t be afraid to fail.  Even if a PR doesn’t work, you’ll learn a lot in the process
  • Pick issues in projects you are interested in, since it takes so much time
  • Don’t be afraid to work on things you don’t (yet) know.  You can learn a lot more than you think.
  • Read the contributing docs, and save yourself time and mistakes
  • Run and test code locally before you push
  • Don’t be too picky with what you work on, just get involved
  • Look at previously closed PRs in a project for ideas on how to solve your own.

One thing that was new for me this time around was seeing students get involved in repos and projects that didn't use English as their primary language.  I've had lots of students do localization in projects before.  But this time, I saw quite a few students working in languages other than English in issues and pull requests.  This is something I've been expecting to see for a while, especially with GitHub's Trending page so often featuring projects not in English.  But it was the first time it happened organically with my own students.

Once again, I'm grateful to the Hacktoberfest organizers, and to the hundreds of maintainers we encountered as we made our way across GitHub during October.  When you've been doing open source a long time, and work in git/GitHub everyday, it can be hard to remember what it's like to begin.  Because I continually return to the place where people start, I know first-hand how valuable it is to be given the chance to get involved, for people to acknowledge and accept your work, and for people to see that it's possible to contribute.

Ryan Harter: Technical Leadership Paths

I found this article a few weeks ago and I really enjoyed the read. The author outlines what a role can look like for very senior ICs. It's the first in a (yet to be written) series about technical leadership and long term IC career paths. I'm excited to read more!

In particular, I am delighted to see her call out strategic work as a way for a senior IC to deliver value. I think there's a lot of opportunity for senior ICs to deliver strategic work, but in my experience organizations tend to under-value this type of work (often unintentionally).

My favorite projects to work on are high impact and difficult to execute, even if they're not deeply technical. In fact, I've found that my most impactful projects tend to only have a small technical component. Instead, the real value tends to come from spanning a few different technical areas, tackling some cultural change, or taking time to deeply understand the problem before throwing a solution at it. Framing these projects as "strategic" helps me put my thumb on the type of work I like doing.

Keavy also calls out strike teams as a valuable way for ICs to work on high impact projects without moving into management. In my last three years at Mozilla, I've been fortunate to be a part of several strike teams and upon reflection I find that these are the projects I'm most proud of.

I'm fortunate that Mozilla has a well documented growth path for senior ICs. All the same, I am learning a lot from her framing. I'm excited to read more!

The Firefox Frontier: Nine tips for better tab management

Poll time! No judgment if you’re in the high end of the range. Keeping a pile of open tabs is the sign of an optimistic, enthusiastic, curious digital citizen, and … Read more

The post Nine tips for better tab management appeared first on The Firefox Frontier.

The Rust Programming Language Blog: Announcing Rust 1.39.0

The Rust team is happy to announce a new version of Rust, 1.39.0. Rust is a programming language that is empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, getting Rust 1.39.0 is as easy as:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.39.0 on GitHub.

What's in 1.39.0 stable

The highlights of Rust 1.39.0 include async/.await, shared references to by-move bindings in match guards, and attributes on function parameters. Also, see the detailed release notes for additional information.

The .await is over, async fns are here

Previously in Rust 1.36.0, we announced that the Future trait is here. Back then, we noted that:

With this stabilization, we hope to give important crates, libraries, and the ecosystem time to prepare for async / .await, which we'll tell you more about in the future.

A promise made is a promise kept. So in Rust 1.39.0, we are pleased to announce that async / .await is stabilized! Concretely, this means that you can define async functions and blocks and .await them.

An async function, which you can introduce by writing async fn instead of fn, does nothing other than to return a Future when called. This Future is a suspended computation which you can drive to completion by .awaiting it. Besides async fn, async { ... } and async move { ... } blocks, which act like closures, can be used to define "async literals".

For more on the release of async / .await, read Niko Matsakis's blog post.
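
For readers who want a quick feel for the syntax, here is a minimal sketch; the function and variable names are invented for illustration, and an executor (not shown here) is still needed to actually run the resulting future:

async fn add_one(x: u32) -> u32 {
    x + 1
}

fn main() {
    // Calling the async fn only constructs a future; nothing runs yet.
    let fut = add_one(41);

    // An async block is an "async literal"; `move` lets it take ownership
    // of `fut`, much like a move closure would.
    let block = async move {
        let answer = fut.await;
        println!("the answer is {}", answer);
    };

    // `block` is itself a future. An executor (e.g. from the `futures` or
    // `tokio` crates) would drive it to completion; here we just drop it
    // to keep the sketch dependency-free.
    drop(block);
}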

References to by-move bindings in match guards

When pattern matching in Rust, a variable, also known as a "binding", can be bound in the following ways:

  • by-reference, either immutably or mutably. This can be achieved explicitly e.g. through ref my_var or ref mut my_var respectively. Most of the time though, the binding mode will be inferred automatically.

  • by-value -- either by-copy, when the bound variable's type implements Copy, or otherwise by-move.

Previously, Rust would forbid taking shared references to by-move bindings in the if guards of match expressions. This meant that the following code would be rejected:

fn main() {
    let array: Box<[u8; 4]> = Box::new([1, 2, 3, 4]);

    match array {
        nums
//      ---- `nums` is bound by move.
            if nums.iter().sum::<u8>() == 10
//                 ^------ `.iter()` implicitly takes a reference to `nums`.
        => {
            drop(nums);
//          ----------- `nums` was bound by move and so we have ownership.
        }
        _ => unreachable!(),
    }
}

With Rust 1.39.0, the snippet above is now accepted by the compiler. We hope that this will give a smoother and more consistent experience with match expressions overall.

Attributes on function parameters

With Rust 1.39.0, attributes are now allowed on parameters of functions, closures, and function pointers. Whereas before, you might have written:

#[cfg(windows)]
fn len(slice: &[u16]) -> usize {
    slice.len()
}
#[cfg(not(windows))] 
fn len(slice: &[u8]) -> usize {
    slice.len()
}

...you can now, more succinctly, write:

fn len(
    #[cfg(windows)] slice: &[u16], // This parameter is used on Windows.
    #[cfg(not(windows))] slice: &[u8], // Elsewhere, this one is used.
) -> usize {
    slice.len()
}

The attributes you can use in this position include:

  1. Conditional compilation: cfg and cfg_attr

  2. Controlling lints: allow, warn, deny, and forbid (see the short sketch after this list)

  3. Helper attributes used by procedural macro attributes applied to items.

    Our hope is that this will be used to provide more readable and ergonomic macro-based DSLs throughout the ecosystem.
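
As a small illustration of the lint case, here is a sketch (the function and parameter names are made up); a lint attribute such as allow can now sit directly on an individual parameter:

fn log_event(
    // Silence the unused-variable lint for this one parameter only.
    #[allow(unused_variables)] correlation_id: u64,
    message: &str,
) {
    println!("{}", message);
}

fn main() {
    log_event(42, "hello");
}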

Borrow check migration warnings are hard errors in Rust 2018

In the 1.35.0 release, we announced that NLL had come to Rust 2015 after first being released for Rust 2018 in 1.31.

As noted in the 1.35.0 release, the old borrow checker had some bugs which would allow memory unsafety. These bugs were fixed by the NLL borrow checker. As these fixes broke some stable code, we decided to gradually phase in the errors by checking if the old borrow checker would accept the program and the NLL checker would reject it. If so, the errors would instead become warnings.

With Rust 1.39.0, these warnings are now errors in Rust 2018. In the next release, Rust 1.40.0, this will also apply to Rust 2015, which will finally allow us to remove the old borrow checker, and keep the compiler clean.

If you are affected, or want to hear more, read Niko Matsakis's blog post.

More const fns in the standard library

With Rust 1.39.0, the following functions became const fn:

Additions to the standard library

In Rust 1.39.0 the following functions were stabilized:

Other changes

There are other changes in the Rust 1.39.0 release: check out what changed in Rust, Cargo, and Clippy.

Please also see the compatibility notes to check if you're affected by those changes.

Contributors to 1.39.0

Many people came together to create Rust 1.39.0. We couldn't have done it without all of you. Thanks!

The Rust Programming Language Blog: Async-await on stable Rust!

On this coming Thursday, November 7, async-await syntax hits stable Rust, as part of the 1.39.0 release. This work has been a long time in development -- the key ideas for zero-cost futures, for example, were first proposed by Aaron Turon and Alex Crichton in 2016! -- and we are very proud of the end result. We believe that Async I/O is going to be an increasingly important part of Rust's story.

While this first release of "async-await" is a momentous event, it's also only the beginning. The current support for async-await marks a kind of "Minimum Viable Product" (MVP). We expect to be polishing, improving, and extending it for some time.

Already, in the time since async-await hit beta, we've made a lot of great progress, including making some key diagnostic improvements that help to make async-await errors far more approachable. To get involved in that work, check out the Async Foundations Working Group; if nothing else, you can help us by filing bugs about polish issues or by nominating those bugs that are bothering you the most, to help direct our efforts.

Many thanks are due to the people who made async-await a reality. The implementation and design would never have happened without the leadership of cramertj and withoutboats, the implementation and polish work from the compiler side (davidtwco, tmandry, gilescope, csmoe), the core generator support that futures builds on (Zoxc), the foundational work on Future and the Pin APIs (aturon, alexcrichton, RalfJ, pythonesque), and of course the input provided by so many community members on RFC threads and discussions.

Major developments in the async ecosystem

Now that async-await is approaching stabilization, all the major Async I/O runtimes are at work adding and extending their support for the new syntax:

Async-await: a quick primer

(This section and the next are reproduced from the "Async-await hits beta!" post.)

So, what is async await? Async-await is a way to write functions that can "pause", return control to the runtime, and then pick up from where they left off. Typically those pauses are to wait for I/O, but there can be any number of uses.

You may be familiar with async-await from JavaScript or C#. Rust's version of the feature is similar, but with a few key differences.

To use async-await, you start by writing async fn instead of fn:

async fn first_function() -> u32 { .. }

Unlike a regular function, calling an async fn doesn't have any immediate effect. Instead, it returns a Future. This is a suspended computation that is waiting to be executed. To actually execute the future, use the .await operator:

async fn another_function() {
    // Create the future:
    let future = first_function();
    
    // Await the future, which will execute it (and suspend
    // this function if we encounter a need to wait for I/O): 
    let result: u32 = future.await;
    // ...
}

This example shows the first difference between Rust and other languages: we write future.await instead of await future. This syntax integrates better with Rust's ? operator for propagating errors (which, after all, are very common in I/O). You can simply write future.await? to await the result of a future and propagate errors. It also has the advantage of making method chaining painless.

Zero-cost futures

The other difference between Rust futures and futures in JS and C# is that they are based on a "poll" model, which makes them zero cost. In other languages, invoking an async function immediately creates a future and schedules it for execution: awaiting the future isn't necessary for it to execute. But this implies some overhead for each future that is created.

In contrast, in Rust, calling an async function does not do any scheduling in and of itself, which means that we can compose a complex nest of futures without incurring a per-future cost. As an end-user, though, the main thing you'll notice is that futures feel "lazy": they don't do anything until you await them.
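
To make that laziness concrete, here is a small sketch; it assumes the futures crate (0.3) is available to provide block_on, and the function names are made up:

use futures::executor::block_on;

async fn noisy() -> u32 {
    println!("the future is running");
    7
}

fn main() {
    // Nothing is printed by this call: it only builds the future.
    let fut = noisy();
    println!("future created, not yet polled");

    // Only when an executor polls the future does its body actually run.
    let n = block_on(fut);
    println!("got {}", n);
}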

If you'd like a closer look at how futures work under the hood, take a look at the executor section of the async book, or watch the excellent talk that withoutboats gave at Rust LATAM 2019 on the topic.

Summary

We believe that having async-await on stable Rust is going to be a key enabler for a lot of new and exciting developments in Rust. If you've tried Async I/O in Rust in the past and had problems -- particularly if you tried the combinator-based futures of the past -- you'll find async-await integrates much better with Rust's borrowing system. Moreover, there are now a number of great runtimes and other libraries available in the ecosystem to work with. So get out there and build stuff!

Daniel Stenberg: curl 7.67.0

It has been 56 days since curl 7.66.0 was released. Here comes 7.67.0!

This might not be a release with any significant bells or whistles that will make us recall this date in the future when looking back, but it is still another steady step along the way and thanks to the new things introduced, we still bump the minor version number. Enjoy!

As always, download curl from curl.haxx.se.

If you need excellent commercial support for whatever you do with curl, contact us at wolfSSL.

Numbers

the 186th release
3 changes
56 days (total: 7,901)

125 bug fixes (total: 5,472)
212 commits (total: 24,931)
1 new public libcurl function (total: 81)
0 new curl_easy_setopt() option (total: 269)

1 new curl command line option (total: 226)
68 contributors, 42 new (total: 2,056)
42 authors, 26 new (total: 744)
0 security fixes (total: 92)
0 USD paid in Bug Bounties

The 3 changes

Disable progress meter

Since virtually forever you’ve been able to tell curl to “shut up” with -s. The long version of that is --silent. Silent makes the curl tool disable the progress meter and all other verbose output.

Starting now, you can use --no-progress-meter, which in a more granular way only disables the progress meter and lets the other verbose outputs remain.

CURLMOPT_MAX_CONCURRENT_STREAMS

When doing HTTP/2 using curl and multiple streams over a single connection, you can now also set the number of parallel streams you’d like to use, which will be communicated to the server. The idea is that this option should eventually be usable for HTTP/3 as well, but since HTTP/3 support is still in its early days, it isn’t supported there yet.

CURLU_NO_AUTHORITY

This is a new flag that the URL parser API supports. It informs the parser that even if it doesn’t recognize the URL scheme, it should still allow the URL to not have an authority part (such as a host name).

Bug-fixes

Here are some interesting bug-fixes done for this release. Check out the changelog for the full list.

Winbuild build error

The winbuild setup to build with MSVC with nmake shipped in 7.66.0 with a flaw that made it fail. We had added the vssh directory but not adjusted these build scripts for that. The fix was of course very simple.

We have since added several winbuild builds to the CI to make sure we catch these kinds of mistakes earlier and better in the future.

FTP: optimized CWD handling

At least two landed bug-fixes make curl avoid issuing superfluous CWD commands (FTP lingo for “cd” or change directory) thereby reducing latency.

HTTP/3

Several fixes improved HTTP/3 handling. It builds better on Windows, the ngtcp2 backend now also behaves correctly on macOS, and the build instructions are clearer.

Mimics socketpair on Windows

Thanks to the new socketpair look-alike function, libcurl now provides a socket for the application to wait for even when doing name resolves in the dedicated resolver thread. This makes the Windows code catch up with the similar change that landed in 7.66.0. This makes it easier for applications to behave correctly during the short time gaps when libcurl resolves a host name and nothing else is happening.

curl with lots of URLs

With the introduction of parallel transfers in 7.66.0, we changed how curl allocated handles and setup transfers ahead of time. This made command lines that for example would use [1-1000000] ranges create a million CURL handles and thus use a lot of memory.

It did in fact break a few existing use cases where people did very large ranges with curl. Starting now, curl will just create enough curl handles ahead of time to allow the maximum amount of parallelism requested, and users should once again be able to specify ranges with many millions of iterations.

curl -d@ was slow

It was discovered that if you ask curl to post data with -d @filename, that operation was unnecessarily slow for large files; it has now been sped up significantly.

DoH fixes

Several corrections were made after some initial fuzzing of the DoH code. A benign buffer overflow, a memory leak and more.

HTTP/2 fixes

We relaxed the :authority push promise checks, fixed two cases where libcurl could “forget” a stream after it had delivered all data and dup’ed HTTP/2 handles could issue dummy PRIORITY frames!

connect with ETIMEDOUT now makes CURLE_OPERATION_TIMEDOUT

When libcurl’s connect attempt fails and errno says ETIMEDOUT it means that the underlying TCP connect attempt timed out. This will now be reflected back in the libcurl API as the timed out error code instead of the previously used CURLE_COULDNT_CONNECT.

One of the use cases for this is curl’s --retry option which now considers this situation to be a timeout and thus will consider it fine to retry…

Parsing URL with fragment and question mark

There was a regression in the URL parser that made it mistreat URLs without a query part but with a question mark in the fragment.

Marco Zehe: Nolan Lawson shares what he has learned about accessibility

Over the past year and a half, I have ventured time and again into the federated Mastodon social network. In those ventures, I have contributed bug reports to both the Mastodon client and some alternative clients on the web, iOS, and Android.

One of those clients, a single-page, progressive web app, is Pinafore by Nolan Lawson. He had set out to create a fast, light-weight, and accessible client from the ground up. When I started to use Pinafore, I immediately noticed that a lot of thought and effort had already gone into the client, and I could immediately start using it.

I then started contributing some bug reports, and over time, Nolan has improved what was already very good tremendously: more keyboard support (so that even as a screen reader user, one can use Pinafore without using virtual buffers), various light and dark themes, support for reducing animations, and much, much more.

And now, Nolan has shared what he has learned about accessibility in the process. His post is an excellent recollection of some of the challenges when dealing with an SPA, cross-platform, taking into account screen readers, keyboard users, styling stuff etc., and how to overcome those obstacles. It is an excellent read which contains suggestions and food for thought for many web developers. Enjoy the read!

This Week In Rust: This Week in Rust 311

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

#Rust2020

Find all #Rust2020 posts at Read Rust.

Crate of the Week

This week's crate is displaydoc, a procedural derive macro to implement Display by string-interpolating the doc comment.

Thanks to Willi Kappler for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

217 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs

New RFCs

Upcoming Events

Asia Pacific
Europe
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

I did manage to get this compile in the end - does anyone else find that the process of asking the question well on a public forum organizes their thoughts well enough to solve the problem?

David Mason on rust-users

Thanks to Daniel H-M for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nasa42, llogiq, and Flavsditz.

Discuss on r/rust.

The Firefox Frontier: Tracking Diaries with Matt Navarra

In Tracking Diaries, we invited people from all walks of life to share how they spent a day online while using Firefox’s privacy protections to keep count of the trackers … Read more

The post Tracking Diaries with Matt Navarra appeared first on The Firefox Frontier.

Mozilla Future Releases Blog: Restricting Notification Permission Prompts in Firefox

In April we announced our intent to reduce the number of annoying permission prompts for receiving desktop notifications that our users are seeing on a daily basis. To that effect, we ran a series of studies and experiments around restricting these prompts.

Based on these studies, we will require user interaction on all notification permission prompts, starting in Firefox 72. That is, before a site can ask for notification permission, the user will have to perform a tap, click, or press a key.

In this blog post I will give a detailed overview of the study results and further outline our plans for the future.

Experiments

As previously described, we designed a measurement using Firefox Telemetry that allows us to report details around when and how a user interacts with a permission prompt without revealing personal information. The full probe definition can be seen in our source code. It was enabled for a randomly chosen pool of study participants (0.1% of the user population) on Firefox Release, as well as for all users of Firefox Nightly. The Release study additionally differentiated between new users and existing users, to account for an inherent bias of existing users towards denying permission requests (because they usually already “have” the right permissions on sites relevant to them).

We further enabled requiring user interaction for notification permission prompts in Nightly and Beta.

Results

Most of the heavy lifting here was done by Felix Lawrence, who performed a thorough analysis of the data we collected. You can read his full report for our Firefox Release study. I will highlight some of the key takeaways:

Notification prompts are very unpopular. On Release, about 99% of notification prompts go unaccepted, with 48% being actively denied by the user. This is even worse than what we’ve seen on Nightly, and it paints a dire picture of the user experience on the web. To add from related telemetry data, during a single month of the Firefox 63 Release, a total of 1.45 Billion prompts were shown to users, of which only 23.66 Million were accepted. That is, for each prompt that is accepted, sixty are denied or ignored. In about 500 Million cases during that month, users actually spent the time to click on “Not Now”.

Users are unlikely to accept a prompt when it is shown more than once for the same site. We had previously given websites the ability to ask users for notification every time they visit a site in a new tab. The underlying assumption that users would want to take several visits to make up their minds turns out to be wrong. As Felix notes, around 85% of prompts were accepted without the user ever having previously clicked “Not Now”.

Most notification prompts don’t follow user interaction. Especially on Release, the overall number of prompts that are already compatible with this intervention is very low.

Prompts that are shown as a result of user interaction have significantly better interaction metrics. This is an important takeaway. Along with the significant decrease in overall volume, we can see a significantly better rate of first-time allow decisions (52%) after enforcing user interaction on Nightly. The same can be observed for prompts with user interaction in our Release study, where existing users will accept 24% of first-time prompts with user interaction and new users would accept a whopping 56% of first-time prompts with user interaction.

Changes

Based on the outlined results we have decided to enact two new restrictions:

  • Starting from Firefox 70, replace the default “Not Now” option with “Never”, which will effectively hide the prompt on a page forever.
  • Starting from Firefox 72, require user interaction to show notification permission prompts. Prompts without interaction will only show a small icon in the url bar.

When a permission prompt is denied by Firefox, the user still has the ability to override this automatic decision by clicking the small permission icon that appears in the address bar. This lets users still enable notifications on websites whose prompts were suppressed for not following user interaction.

Besides the clear improvements in user interaction rates that our study has shown, these restrictions were derived from a few other considerations:

  • Easy to upgrade. Requiring user interaction allows for an easy upgrade path for affected websites, while hiding annoying “on load” prompts.
  • Transparent. Unlike other heuristics (such as “did the user visit this site a lot in the past”), interaction is easy to understand for both developers and users.
  • Encourages pre-prompting. We want websites to use in-content controls to enable notifications, as long as they have an informative style and do not try to mimic native browser UI. Faking (“spoofing”) browser UI is considered a security risk and will be met with stronger enforcement in the future. A good pre-prompt follows the style of the page and adds additional context to the request. Pre-prompting, when done well, will increase the chance of users opting to receive notifications. Annoying users, as our data shows, will lead to churn.

We will release additional information and resources for web developers on our Mozilla Hacks blog.

We hope that these restrictions will lead to an overall better and less annoying user experience for all our users while retaining the functionality for those that need notifications.

The post Restricting Notification Permission Prompts in Firefox appeared first on Future Releases.

Mozilla Reps Community: Rep of the Month – October 2019

Please join us in congratulating Shina Dhingra, Rep of the Month for October 2019!

Shina is from Pune, Maharashtra, India. Her journey started with the Mozilla Pune community while she was in college in 2017, with Localization in Hindi and quality assurance bugs.

She’s been an active contributor to the community, and since then she has helped a lot of newcomers with their onboarding, helping them better understand what the Mozilla Community is all about.

She joined the Reps Program in February 2019 and since then she has actively participated and contributed to Common Voice, A-Frame, Localization, Add-ons, and other Open Source Contributions. She built her own project as a mentee under the Open Leaders Program, and will be organizing and hosting her own cohort called “Healthier AI” which she launched at MozFest this year.

Congratulations and keep rocking the open web!

To congratulate her, please head over to Discourse!

Nathan Froyd: evaluating bazel for building firefox, part 2

In our last post, we highlighted some of the advantages that Bazel would bring.  The remote execution and caching benefits Bazel brings look really attractive, but it’s difficult to tell exactly how much they would benefit Firefox.  I looked for projects that had switched to Bazel, and a brief summary of each project’s experience is written below.

The Bazel rules for nodejs highlight Dataform’s switch to Bazel, which took about 2 months.  Their build involves some combination of “NPM packages, Webpack builds, Node services, and Java pipelines”. Switching plus enabling remote caching reduced the average time for a build in CI from 30 minutes to 5 minutes; incremental builds for local development have been “reduced to seconds from minutes”.  It’s not clear whether the local development experience is hooked up to the caching infrastructure as well.

Pinterest recently wrote about their switch to Bazel for iOS.  While they call out remote caching leading to “build times [dropping] under a minute and as low as 30 seconds”, they state their “time to land code” only decreased by 27%.  I wasn’t sure how to reconcile such fast builds with (relatively) modest decreases in CI time.  Tests have gotten a lot faster, given that test results can be cached and reused if the tests in question have their transitive dependencies unchanged.

One of the most complete (relatively speaking) descriptions I found was Redfin’s switch from Maven to Bazel, for building a large amount of JavaScript modules and Java code, nearly 30,000 files in all.  Their CI builds went from 40-90 minutes to 5-6 minutes; in fairness, it must be mentioned that their Maven builds were not parallelized (for correctness reasons) whereas their Bazel builds were.  But it’s worth highlighting that they managed to do this incrementally, by generating Bazel build definitions from their Maven ones, and that the quoted build times did not enable caching.  The associated tech talk slides/video indicates builds would be roughly in the 1-2 minute range with caching, although they hadn’t deployed that yet.

None of the above accounts talked about how long the conversion took, which I found peculiar.  Both Pinterest and Redfin called out how much more reliable their builds were once they switched to Bazel; Pinterest said, “we haven’t performed a single clean build on CI in over a year.”

In some negative results, which are helpful as well, Dropbox wrote about evaluating Bazel for their Android builds.  What’s interesting here is that other parts of Dropbox are heavily invested in Bazel, so there’s a lot of in-house experience, and that Bazel was significantly faster than their current build system (assuming caching was turned on; Bazel was significantly slower for clean builds without caching).  Yet Dropbox decided to not switch to Bazel due to tooling and development experience concerns.  They did leave open the possibility of switching in the future once the ecosystem matures.

The oddly-named Bazel Fawlty describes a conversion to Bazel from Go’s native tooling, and then a switch back after a litany of problems, including slower builds (but faster tests), a poor development experience (especially on OS X), and various things not being supported in Bazel leading to the native Go tooling still being required in some cases.  This post was also noteworthy for noting the amount of porting effort required to switch: eight months plus “many PR’s accepted into the bazel go rules git repo”.  I haven’t used Go, but I’m willing to discount some of the negative experience here due to the native Go tools being so good.

Neither one of these negative experiences translates exactly to Firefox: different languages/ecosystems, different concerns, different scales.  But both of them cite the developer experience specifically, suggesting that not only is there a large investment required to actually do the switchover, but you also need to write tooling around Bazel to make it more convenient to use.

Finally, a 2018 BazelCon talk discusses two Google projects that made the switch to Bazel and specifically to use remote caching and remote execution on Google’s public-facing cloud infrastructure: Android Studio and TensorFlow.  (You may note that this is the first instance where somebody has called out supporting remote execution as part of the switch; I think that implies getting a build to the point of supporting remote execution is more complicated than just supporting remote caching, which makes a certain amount of sense.)  Android Studio increased their test presubmit coverage by 4x, presumably by being able to run more than 4x test jobs than previously due to remote execution.  In the same vein, TensorFlow decreased their build and test times by 80%, and they could use significantly less powerful machines to actually run the builds, given that large machines in the cloud were doing the actual heavy lifting.

Unfortunately, I don’t think expecting those same reductions in test time, were Firefox to switch to Bazel, is warranted.  I can’t speak to Android Studio, but TensorFlow has a number of unit tests whose test results can be cached.  In the Firefox context, these would correspond to cppunittests, which a) we don’t have that many of and b) don’t take that long to run.  The bulk of our tests depend in one way or another on kitchen-sink-style artifacts (e.g. libxul, the JS shell, omni.ja) which essentially depend on everything else.  We could get some reductions for OS-specific modifications; Windows-specific changes wouldn’t require re-running OS X tests, for instance, but my sense is that these sorts of changes are not common enough to lead to an 80% reduction in build + test time.  I suppose it’s also possible that we could teach Bazel that e.g. devtools changes don’t affect, say, non-devtools mochitests/reftests/etc. (presumably?), which would make more test results cacheable.

I want to believe that Bazel + remote caching (+ remote execution if we could get there) will bring Firefox build (and maybe even test) times down significantly, but the above accounts don’t exactly move the needle from belief to certainty.

Tantek Çelik: #Redecentralize 2019 Session: Decentralized Identity & Rethinking Reputation

Decentralized lunch

After the first open session of the day, the Redecentralize conference provided a nice informal buffet lunch for participants. Though we picked up our eats from a centralized buffet, people self-organized into their own distributed groups. There were a few folks I knew or had recently met, and many more that I had not. I sat with a few people who looked like they had just started talking and that’s when I met Kate.

I asked if she was running a session and she said yes in the next time slot, on decentralized identity and rethinking reputation. She also noted that she wanted to approach it from a human exploration perspective rather than a technical perspective, and was looking to learn from participants. I decided I’d join, looking forward to a humans-first (rather than technology plumbing first) conversation and discussion.

Discussion circle

After lunch everyone found their way to various sessions or corners of the space to work on their own projects. The space for Kate’s session was an area in the middle of a large room, without a whiteboard or projector. About a half dozen of us assembled chairs in a rough oval to get started.

As we informally chatted a few more people showed up and we broadened our circle. The space was a bit noisy with chatter drifting in from other sessions, yet we could hear each other if we leaned in a little. Kate started us off by asking our opinions of the subject matter, our experiences, and existing approaches, in contrast to letting any one company control identity and reputation.

Gaming of centralized systems

We spent quite a bit of time discussing existing online or digital reputation systems, and how portable or not these were. China was a subject of discussion, along with the social reputation system it has put in place, which is starting to be used for various purposes. Someone provided the example of people putting their phones into little shaker machines to fake an increased step count and thus boost their reputation in that way. Apparently lots of people are gaming the Chinese systems in many ways.

Portability and resets

Two major concerns were brought up about decentralized reputation systems.

  1. Reputation portability. If you build reputation in one system or service, how do you transfer that reputation to another?
  2. Reset abuse. If you develop a bad reputation in a system, what is to stop you from deleting that identity, and creating a new one to reset your reputation?

No one had good answers for either. I offered one observation for the latter, which was that as reputation systems evolve over time, the lack of reputation, i.e. someone just starting out (or a reset), is seen as having a default negative reputation, that they have to prove otherwise. For example the old Twitter “eggs”, so called due to the default icons that Twitter (at some point) assigned to new users that were a white cartoon egg on a pastel background.

Another subsequent thought, Twitter’s profile display of when someone joined has also reinforced some of this “default negative” reputation, as people are suspicious of accounts that have just recently joined Twitter and all of sudden start posting forcefully (especially about political or breaking news stories). Are they bots or state operatives pretending to be someone they’re not? Hard to tell.

Session dynamics

While Kate did a good job keeping discussions on topic, prompting with new questions when the group appeared to rathole in some area, there were a few challenging dynamics in the group.

It looked like no one was using a laptop to take notes (myself included), emergently so (no one was told not to use their laptop). While “no laptop” meetings are often praised for focus & attention, they do have several downsides.

First, no one writes anything down, so follow-up discussions are difficult, or rather, it becomes likely that past discussions will be repeated without any new information. Caught in a loop. History repeating.

Second, with only speaking and no writing or note-taking, conversations tend to become more reactive, less thoughtful, and more about the individuals & personalities than about the subject matter.

I noticed that one participant in particular was much more forceful and spoke a lot more than anyone else in the group, asserting all kinds of domain knowledge (usually without citation or reasoning). Normally I tend to question this kind of behavior, but this time I decided to listen and observe instead. On a session about reputation, how would this person’s behavior affect their dynamic reputation in this group?

Eventually Kate was able to ask questions and prompt others who were quiet to speak up, which was good to see.

Decentralized identity

We did not get into any deep discussions of any specific decentralized identity systems, and that was perhaps ok. Mostly there was discussion about the downsides of centrally controlled identity, and how each of us wanted more control over various aspects of our online identities.

For anyone who asked, I posited that a good way to start with decentralized identity was to buy and use a personal domain name for your primary online presence, setting it up to sign into sites, and build a reputation using that. Since you can pick the domain name, you can pick whatever facet(s) of your identity you wish to represent. It may not be perfectly distributed, however it does work today, and is a good way to explore a lot of the questions and challenges of decentralized identity.

The Nirvana Fallacy

Another challenge in discussing various systems, both critically and aspirationally, was the inability to really assess how “real” any examples were, how applicable they were to any of us, how usable they were, or even whether they were deployed in even an experimental way instead of just being a white paper proposal.

This was a common theme in several sessions, that of comparing the downsides of real existing systems with the aspirational features of conceived but unimplemented systems. I had just recently come across a name for this phenomenon, and like many things you learn about, was starting to see it a lot: The Nirvana Fallacy. I didn’t bring it up in this session but rather tried to keep it in mind as a way to assess various comparisons.

Distributed reputation

After lunch sessions are always a bit of a challenge. People are full or tired. I myself was already feeling a bit spent from the lightning talk and the session Kevin and I had led right after that.

All in all it was a good discussion, even though we couldn’t point to any notes or conclusions. It felt like everyone walked away having learned something from someone else, and in general people got to know each other in a semi-distributed way, starting to build reputation for future interactions.

Watching that happen in-person made me wonder if there was some way to apply a similar kind of semi-structured group discussion dynamic as a method for building reputation in the online world. Could there be some way to parse out the dynamics of individual interactions in comments or threads to reflect that back to users in the form of customized per-person-pair reputations that they could view as a recent summary or trends over the years?

Previous #Redecentralize 2019 posts

Mozilla Security Blog: Validating Delegated Credentials for TLS in Firefox

At Mozilla we are well aware of how fragile the Web Public Key Infrastructure (PKI) can be. From fraudulent Certification Authorities (CAs) to implementation errors that leak private keys, users, often unknowingly, are put in a position where their ability to establish trust on the Web is compromised. Therefore, in keeping with our mission to create a Web where individuals are empowered, independent and safe, we welcome ideas that are aimed at making the Web PKI more robust. With initiatives like our Common CA Database (CCADB), CRLite prototyping, and our involvement in the CA/Browser Forum, we’re committed to this objective, and this is why we embraced the opportunity to partner with Cloudflare to test Delegated Credentials for TLS in Firefox, which is currently undergoing standardization at the IETF.

As CAs are responsible for the creation of digital certificates, they dictate the lifetime of an issued certificate, as well as its usage parameters. Traditionally, end-entity certificates are long-lived, exhibiting lifetimes of more than one year. For server operators making use of Content Delivery Networks (CDNs) such as Cloudflare, this can be problematic because of the potential trust placed in CDNs regarding sensitive private key material. Of course, Cloudflare has architectural solutions for such key material, but these add unwanted latency to connections and present operational difficulties. To limit exposure, a short-lived certificate would be preferable for this setting. However, constant communication with an external CA to obtain short-lived certificates could result in poor performance or, even worse, lack of access to a service entirely.

The Delegated Credentials mechanism decentralizes the problem by allowing a TLS server to issue short-lived authentication credentials (with a validity period of no longer than 7 days) that are cryptographically bound to a CA-issued certificate. These short-lived credentials then serve as the authentication keys in a regular TLS 1.3 connection between a Firefox client and a CDN edge server situated in a low-trust zone (where the risk of compromise might be higher than usual and perhaps go undetected). This way, performance isn’t hindered and the compromise window is limited. For further technical details see this excellent blog post by Cloudflare on the subject.

See How The Experiment Works

We will soon test Delegated Credentials in Firefox Nightly via an experimental addon, called TLS Delegated Credentials Experiment. In this experiment, the addon will make a single request to a Cloudflare-managed host which supports Delegated Credentials. The Delegated Credentials feature is disabled in Firefox by default, but depending on the experiment conditions the addon will toggle it for the duration of this request. The connection result, including whether Delegated Credentials was enabled or not, gets reported via telemetry to allow for comparative study. Out of this we’re hoping to gain better insights into how effective and stable Delegated Credentials are in the real world, and more importantly, of any negative impact to user experience (for example, increased connection failure rates or slower TLS handshake times). The study is expected to start in mid-November and run for two weeks.

For specific details on the telemetry and how measurements will take place, see bug 1564179.

See The Results In Firefox

You can open a Firefox Nightly or Beta window and navigate to about:telemetry. From here, in the top-right is a Search box, where you can search for “delegated” to find all telemetry entries from our experiment. If Delegated Credentials have been used and telemetry is enabled, you can expect to see the count of Delegated Credentials-enabled handshakes as well as the time-to-completion of each. Additionally, if the addon has run the test, you can see the test result under the “Keyed Scalars” section.

Delegated Credentials telemetry in Nightly 72

You can also read more about telemetry, studies, and Mozilla’s privacy policy by navigating to about:preferences#privacy.

See It In Action

If you’d like to enable Delegated Credentials for your own testing or use, this can be done by:

  1. In a Firefox Nightly or Beta window, navigate to about:config.
  2. Search for the “security.tls.enable_delegated_credentials” preference – the preference list will update as you type, and “delegated” is itself enough to find the correct preference.
  3. Click the Toggle button to set the value to true.
  4. Navigate to https://dc.crypto.mozilla.org/
  5. If needed, toggling the value back to false will disable Delegated Credentials.

Note that currently, use of Delegated Credentials doesn’t appear anywhere in the Firefox UI. This will change as we evolve the implementation.

We would sincerely like to thank Christopher Patton, fellow Mozillian Wayne Thayer, and the Cloudflare team, particularly Nick Sullivan and Watson Ladd for helping us to get to this point with the Delegated Credentials feature. The Mozilla team will keep you informed on the development of this feature for use in Firefox, and we look forward to sharing our results in a future blog post.

The post Validating Delegated Credentials for TLS in Firefox appeared first on Mozilla Security Blog.

The Mozilla Blog: Asking Congress to Examine the Data Practices of Internet Service Providers

At Mozilla, we work hard to ensure our users’ browsing activity is protected when they use Firefox. That is why we launched enhanced tracking protection this year – to safeguard users from the pervasive online tracking of personal data by ad networks and companies. And over the last two years, Mozilla, in partnership with other industry stakeholders, has been working to develop, standardize, and deploy DNS over HTTPs (DoH). Our goal with DoH is to protect essentially that same browsing activity from interception, manipulation, and collection in the middle of the network.

This dedication to protecting your browsing activity is why today we’ve also asked Congress to examine the privacy and security practices of internet service providers (ISPs), particularly as they relate to the domain name services (DNS) provided to American consumers. Right now these companies have access to a stream of a user’s browsing history. This is particularly concerning in light of the rollback of the broadband privacy rules, which removed guardrails for how ISPs can use your data.  The same ISPs are now fighting to prevent the deployment of DoH.

These developments have raised serious questions. How is your browsing data being used by those who provide your internet service? Is it being shared with others? And do consumers understand and agree to these practices? We think it’s time Congress took a deeper look at ISP practices to figure out what exactly is happening with our data.

At Mozilla, we refuse to believe that you have to trade your privacy and control in order to enjoy the technology you love. Our hope is that a congressional review of these practices uncovers valuable insights, informs the public, and helps guide continuing efforts to draft smart and comprehensive consumer privacy legislation.

See our full letter to Congressional leaders here.

The post Asking Congress to Examine the Data Practices of Internet Service Providers appeared first on The Mozilla Blog.

Nick Fitzgerald: Always Bump Downwards

When writing a bump allocator, always bump downwards. That is, allocate from high addresses, down towards lower addresses by decrementing the bump pointer. Although it is perhaps less natural to think about, it is more efficient than incrementing the bump pointer and allocating from lower addresses up to higher ones.

What is Bump Allocation?

Bump allocation is a super fast method for allocating objects. We have a chunk of memory, and we maintain a “bump pointer” within that memory. Whenever we allocate an object, we do a quick test that we have enough capacity left in the chunk, and then, assuming we have enough room, we move the bump pointer over by sizeof(object) bytes and return the pointer to the space we just reserved for the object within the chunk.

That’s it!

Here is some pseudo-code showing off the algorithm:

bump(size):
    if our capacity < size:
        fail
    else
        self.ptr = move self.ptr over by size bytes
        return pointer to the freshly allocated space

The trade off with bump allocation is that we can’t deallocate individual objects in the general case. We can deallocate all of them en masse by resetting the bump pointer back to its initial location. We can deallocate in a LIFO, stack order by moving the bump pointer in reverse. But we can’t deallocate an arbitrary object in the middle of the chunk and reclaim its space for new allocations.

Finally, notice that the chunk of memory we are bump allocating within is always split in two: the side holding allocated objects and the side with free memory. The bump pointer separates the two sides. Furthermore, note that I haven’t defined which side of the bump pointer is free or allocated space, and I’ve carefully avoided saying whether the bump pointer is incremented or decremented.

Bumping Upwards

First, let’s consider what we shouldn’t do: bump upwards by initializing the bump pointer at the low end of our memory chunk and incrementing the bump pointer on each allocation.

We begin with a struct that holds the start and end addresses of our chunk of memory, as well as our current bump pointer:

#[repr(C)]
pub struct BumpUp {
    // A pointer to the first byte of our memory chunk.
    start: *mut u8,
    // A pointer to the last byte of our memory chunk.
    end: *mut u8,
    // The bump pointer. At all times, we maintain the
    // invariant that `start <= ptr <= end`.
    ptr: *mut u8,
}

Constructing our upwards bump allocator requires giving it the start and end pointers, and it will initialize its bump pointer to the start address:

impl BumpUp {
    pub unsafe fn new(
        start: *mut u8,
        end: *mut u8,
    ) -> BumpUp {
        assert!(start as usize <= end as usize);
        let ptr = start;
        BumpUp { start, end, ptr }
    }
}

To allocate an object, we will begin by grabbing the current bump pointer and saving it in a temporary variable: this is going to be the pointer to the newly allocated space. Then we increment the bump pointer by the requested size, and check that the result is still less than or equal to end. If so, then we have capacity for the allocation, and can commit the new bump pointer to self.ptr and return the temporary pointing to the freshly allocated space.

But first, there is one thing that the pseudo-code ignored, but which a real implementation cannot: alignment. We need to round up the initial bump pointer to a multiple of the requested alignment before we compute the new bump pointer by adding the requested size.[0]

Put all that together, and it looks like this:

impl BumpUp {
    pub unsafe fn alloc(
        &mut self,
        size: usize,
        align: usize,
    ) -> *mut u8 {
        debug_assert!(align > 0);
        debug_assert!(align.is_power_of_two());

        let ptr = self.ptr as usize;

        // Round the bump pointer up to the requested
        // alignment. See the footnote for details.
        let aligned = (ptr + align - 1) & !(align - 1);

        let new_ptr = aligned + size;

        let end = self.end as usize;
        if new_ptr > end {
            // Didn't have enough capacity!
            return std::ptr::null_mut();
        }

        self.ptr = new_ptr as *mut u8;
        aligned as *mut u8
    }
}

If we compile this allocation routine to x86-64 with optimizations, we get the following code, which I’ve lightly annotated:

; Incoming arguments:
;   * `rdi`: pointer to `BumpUp`
;   * `rsi`: The allocation's `size`
;   * `rdx`: The allocation's `align`
; Outgoing result:
;   * `rax`: the pointer to the allocation or null
alloc_up:
    mov rax, rdx
    mov rcx, QWORD PTR [rdi+0x10]
    add rcx, rdx
    add rcx, 0xffffffffffffffff
    neg rax
    and rax, rcx
    add rsi, rax
    cmp rsi, QWORD PTR [rdi+0x8]
    ja  .return_null
    mov QWORD PTR [rdi+0x10], rsi
    ret

.return_null:
    xor eax, eax
    ret

I’m not going to explain each individual instruction. What’s important to appreciate here is that this is just a small handful of fast instructions with only a single branch to handle the not-enough-capacity case. This is what makes bump allocation so fast — great!

But before we get too excited, there is another practicality to consider: to maintain memory safety, we must handle potential integer overflows in the allocation procedure, or else we could have bugs where we return pointers outside the bounds of our memory chunk. No good!

There are two opportunities for overflow we must take care of:

  1. If the requested allocation’s size is large enough, aligned + size can overflow.

  2. If the requested allocation’s alignment is large enough, the ptr + align - 1 sub-expression we use when rounding up to the alignment can overflow.

To handle both these cases, we will use checked addition and return a null pointer if either addition overflows. Here is the new Rust source code:

// Try and unwrap an `Option`, returning a null pointer
// if the option is `None`.
macro_rules! try_null {
    ( $e:expr ) => {
        match $e {
            None => return std::ptr::null_mut(),
            Some(e) => e,
        }
    };
}

impl BumpUp {
    pub unsafe fn alloc(
        &mut self,
        size: usize,
        align: usize,
    ) -> *mut u8 {
        debug_assert!(align > 0);
        debug_assert!(align.is_power_of_two());

        let ptr = self.ptr as usize;

        // Round the bump pointer up to the requested
        // alignment.
        let aligned =
            try_null!(ptr.checked_add(align - 1)) & !(align - 1);

        let new_ptr = try_null!(aligned.checked_add(size));

        let end = self.end as usize;
        if new_ptr > end {
            // Didn't have enough capacity!
            return std::ptr::null_mut();
        }

        self.ptr = new_ptr as *mut u8;
        aligned as *mut u8
    }
}

Now that we’re handling overflows in addition to the alignment requirements, let’s take a look at the x86-64 code that rustc and LLVM produce for the function now:

; Incoming arguments:
;   * `rdi`: pointer to `BumpUp`
;   * `rsi`: The allocation's `size`
;   * `rdx`: The allocation's `align`
; Outgoing result:
;   * `rax`: the pointer to the allocation or null
alloc_up:
    lea rax, [rdx-0x1]
    add rax, QWORD PTR [rdi+0x10]
    jb  .return_null
    neg rdx
    and rax, rdx
    add rsi, rax
    jb  .return_null
    cmp rsi, QWORD PTR [rdi+0x8]
    ja  .return_null
    mov QWORD PTR [rdi+0x10], rsi
    ret

.return_null:
    xor eax, eax
    ret

Now there are three conditional branches rather than one. The two new branches are from those two new overflow checks that we added. Less than ideal.

Can bumping downwards do better?

Bumping Downwards

Now let’s implement a bump allocator where the bump pointer is initialized at the end of the memory chunk and is decremented on allocation, so that it moves downwards towards the start of the memory chunk.

The struct is identical to the previous version:

#[repr(C)]
pub struct BumpDown {
    // A pointer to the first byte of our memory chunk.
    start: *mut u8,
    // A pointer to the last byte of our memory chunk.
    end: *mut u8,
    // The bump pointer. At all times, we have the
    // invariant that `start <= ptr <= end`.
    ptr: *mut u8,
}

Constructing a BumpDown is similar to constructing a BumpUp except we initialize the ptr to end rather than start:

impl BumpDown {
    pub unsafe fn new(start: *mut u8, end: *mut u8) -> BumpDown {
        assert!(start as usize <= end as usize);
        let ptr = end;
        BumpDown { start, end, ptr }
    }
}

When we were allocating by incrementing the bump pointer, the original bump pointer value before it was incremented pointed at the space that was about to be reserved for the allocation. When we are allocating by decrementing the bump pointer, the original bump pointer is pointing at either the end of the memory chunk, or at the last allocation we made. What we want to return is the value of the bump pointer after we decrement it down, at which time it will be pointing at our allocated space.

First we subtract the allocation size from the bump pointer. This subtraction might overflow, so we check for that and return a null pointer if that is the case, just like we did in the previous, upward-bumping function. Then, we round that down to the nearest multiple of align to ensure that the allocated space has the object’s alignment. At this point, we check if we are down past the start of our memory chunk, in which case we don’t have the capacity to fulfill this allocation, and we return null. Otherwise, we update the bump pointer to its new value and return the pointer!

impl BumpDown {
    pub unsafe fn alloc(
        &mut self,
        size: usize,
        align: usize,
    ) -> *mut u8 {
        debug_assert!(align > 0);
        debug_assert!(align.is_power_of_two());

        let ptr = self.ptr as usize;

        let new_ptr = try_null!(ptr.checked_sub(size));

        // Round down to the requested alignment.
        let new_ptr = new_ptr & !(align - 1);

        let start = self.start as usize;
        if new_ptr < start {
            // Didn't have enough capacity!
            return std::ptr::null_mut();
        }

        self.ptr = new_ptr as *mut u8;
        self.ptr
    }
}

And here is the x86-64 code generated for this downward-bumping allocation routine!

; Incoming arguments:
;   * `rdi`: pointer to `BumpDown`
;   * `rsi`: The allocation's `size`
;   * `rdx`: The allocation's `align`
; Outgoing result:
;   * `rax`: the pointer to the allocation or null
alloc_down:
    mov rax, QWORD PTR [rdi+0x10]
    sub rax, rsi
    jb  .return_null
    neg rdx
    and rax, rdx
    cmp rax, QWORD PTR [rdi]
    jb  .return_null
    mov QWORD PTR [rdi+0x10], rax
    ret

.return_null:
    xor eax, eax
    ret

Because rounding down doesn’t require an addition or subtraction operation, it doesn’t have an associated overflow check. That means one less conditional branch in the generated code, and downward bumping only has two conditional branches versus the three that upward bumping has.

Additionally, because we don’t need to save the original bump pointer value, this version uses fewer registers than the upward-bumping version. Bump allocation functions are designed to be fast paths that are inlined into callers, which means that downward bumping creates less register pressure at every call site.

Finally, this downwards-bumping version is implemented with eleven instructions, while the upwards-bumping version requires thirteen instructions. In general, fewer instructions implies a shorter run time.

Benchmarks

I recently switched the bumpalo crate from bumping upwards to bumping downwards. It has a nice, little micro-benchmark suite that is written with the excellent, statistics-driven Criterion.rs benchmarking framework. With Criterion’s built-in support for defining a baseline measurement and comparing an alternate implementation of the code against it, I compared the new, downwards-bumping implementation against the original, upwards-bumping implementation.
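
For reference, a Criterion micro-benchmark along these lines might look roughly like the sketch below. The benchmark name and allocation loop are illustrative assumptions, not the actual bumpalo benchmark suite:

use bumpalo::Bump;
use criterion::{black_box, criterion_group, criterion_main, Criterion};

fn bench_small_allocs(c: &mut Criterion) {
    c.bench_function("alloc 10_000 small objects", |b| {
        b.iter(|| {
            // A fresh arena per iteration; dropping it frees everything at once.
            let bump = Bump::new();
            for i in 0..10_000u64 {
                // `black_box` keeps the optimizer from removing the allocation.
                black_box(bump.alloc(i));
            }
        })
    });
}

criterion_group!(benches, bench_small_allocs);
criterion_main!(benches);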

The new, downwards-bumping implementation has up to 19% better allocation throughput than the original, upwards-bumping implementation! We’re down to 2.7 nanoseconds per allocation.

The plot below shows the probability of allocating 10,000 small objects taking a certain amount of time. The red curve represents the old, upwards-bumping implementation, while the blue curve shows the new, downwards-bumping implementation. The lines represent the mean time.

You can view the complete, nitty-gritty benchmark results in the pull request.

The One Downside: Losing a realloc Fast Path

bumpalo doesn’t only provide an allocation method; it also provides a realloc method to resize an existing allocation. realloc is O(n) because in the worst case it needs to allocate a whole new region of memory and copy the data from the old region to the new one. But the old, upwards-bumping implementation had a fast path for growing the last allocation: it would add the size delta to the bump pointer, leaving the allocation in place and avoiding that copy. The new, downwards-bumping implementation also has a fast path for resizing the last allocation, but even if we reuse that space, the start of the allocated region of memory has shifted, and so we can’t avoid the data copy.
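
To make the lost fast path concrete, here is a rough sketch of the kind of in-place grow an upward bumper can offer. The grow_last method and its signature are illustrative assumptions, not bumpalo’s actual realloc code:

impl BumpUp {
    /// Hypothetical fast path: grow the most recent allocation in place.
    /// Returns the (unchanged) pointer on success, or null if we cannot
    /// grow in place and the caller must fall back to alloc + copy.
    pub unsafe fn grow_last(
        &mut self,
        ptr: *mut u8,
        old_size: usize,
        new_size: usize,
    ) -> *mut u8 {
        debug_assert!(new_size >= old_size);

        // Only the most recent allocation ends exactly at the bump pointer.
        if ptr as usize + old_size != self.ptr as usize {
            return std::ptr::null_mut();
        }

        let delta = new_size - old_size;
        let new_bump = match (self.ptr as usize).checked_add(delta) {
            Some(p) if p <= self.end as usize => p,
            _ => return std::ptr::null_mut(),
        };

        self.ptr = new_bump as *mut u8;
        // The data never moves: same start address, just more room after it.
        ptr
    }
}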

The loss of that fast path leads to a 4% slowdown in our realloc benchmark that formats a string into a bump-allocated buffer, triggering a number of reallocs as the string is constructed. We felt that this was worth the trade-off for faster allocation.

Less Work with More Alignment?

It is rare for types to require more than word alignment. We could enforce a minimum alignment on the bump pointer at all times that is greater than or equal to the vast majority of our allocations’ alignment requirements. If our allocation routine is monomorphized for the type of the allocation it’s making, or it is aggressively inlined — and it definitely should be — then we should be able to completely avoid generating any code to align the bump pointer in most cases, including the conditional branch on overflow if we are rounding up for upwards bumping.

use std::mem::{align_of, size_of, MaybeUninit};

// An assumed constant for illustration; in practice you would pick
// something like the word size of the target platform.
const MIN_ALIGN: usize = 8;

impl BumpDown {
    pub unsafe fn alloc<T>(&mut self) -> *mut MaybeUninit<T> {
        let ptr = self.ptr as usize;

        // Ensure we always keep the bump pointer
        // `MIN_ALIGN`-aligned by rounding the size up. This
        // should be boiled away into a constant by the compiler
        // after monomorphization.
        let size =
            (size_of::<T>() + MIN_ALIGN - 1) & !(MIN_ALIGN - 1);

        let new_ptr = try_null!(ptr.checked_sub(size));

        // If necessary, round down to the requested alignment.
        // Again, this `if`/`else` should be boiled away by the
        // compiler after monomorphization.
        let new_ptr = if align_of::<T>() > MIN_ALIGN {
            new_ptr & !(align_of::<T>() - 1)
        } else {
            new_ptr
        };

        let start = self.start as usize;
        if new_ptr < start {
            // Didn't have enough capacity!
            return std::ptr::null_mut();
        }

        self.ptr = new_ptr as *mut u8;
        self.ptr as *mut _
    }
}

The trade off is extra memory overhead from introducing wasted space between small allocations that don’t require that extra alignment.

Conclusion

If you are writing your own bump allocator, you should bump downwards: initialize the bump pointer to the end of the chunk of memory you are allocating from within, and decrement it on each allocation so that it moves down towards the start of the memory chunk. Downwards bumping requires fewer registers, fewer instructions, and fewer conditional branches. Ultimately, that makes it faster than bumping upwards.

The one exception is if, for some reason, you frequently use realloc to grow the last allocation you made, in which case you might get more out of a fast path for growing the last allocation in place without copying any data. And if you do decide to bump upwards, then you should strongly consider enforcing a minimum alignment on the bump pointer to recover some of the performance that you’re otherwise leaving on the table.

Finally, I’d like to thank Jim Blandy, Alex Crichton, Jeena Lee, and Jason Orendorff for reading an early draft of this blog post, for discussing these ideas with me, and for being super friends :)


[0] The simple way to round n up to a multiple of align is

(n + align - 1) / align * align

Consider the numerator: n + align - 1. This is ensuring that if there is any remainder for n / align, then the result of the division sub-expression is one greater than n / align, and that otherwise we get exactly the same result as n / align due to integer division rounding off the remainder. In other words, we only round up if n is not aligned to align.

However, we know align is a power of two, and therefore anything / align is equivalent to anything >> log2(align) and anything * align is equivalent to anything << log2(align). We can therefore rewrite our expression into:

(n + align - 1) >> log2(align) << log2(align)

But shifting a value right by some number of bits b and then shifting it left by that same number of bits b is equivalent to clearing the bottom b bits of the number. We can clear the bottom b bits of a number by bit-wise and’ing the number with the bit-wise not of 2^b - 1. Plugging this into our equation and simplifying, we get:

  (n + align - 1) >> log2(align) << log2(align)
= (n + align - 1) & !(2^log2(align) - 1)
= (n + align - 1) & !(align - 1)

And now we have our final version of rounding up to a multiple of a power of two!
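
As a quick sanity check, here is the final formula in runnable Rust form, with a couple of worked values (the round_up function name is mine, purely for illustration):

fn round_up(n: usize, align: usize) -> usize {
    debug_assert!(align.is_power_of_two());
    (n + align - 1) & !(align - 1)
}

fn main() {
    assert_eq!(round_up(13, 8), 16); // not aligned: rounds up to the next multiple of 8
    assert_eq!(round_up(16, 8), 16); // already aligned: unchanged
    assert_eq!(round_up(0, 8), 0);   // zero stays zero
}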

If you find these bit twiddling hacks fun, definitely find yourself a copy of Hacker’s Delight. It’s a wonderful book!

The Rust Programming Language BlogCompleting the transition to the new borrow checker

For most of 2018, we've been issuing warnings about various bugs in the borrow checker that we plan to fix -- about two months ago, in the current Rust nightly, those warnings became hard errors. In about two weeks, when the nightly branches to become beta, those hard errors will be in the beta build, and they will eventually hit stable on December 19th, as part of Rust 1.40.0. If you're testing with Nightly, you should be all set -- but otherwise, you may want to go and check to make sure your code still builds. If not, we have advice for fixing common problems below.

Background: the non-lexical lifetime transition

When we released Rust 2018 in Rust 1.31, it included a new version of the borrow checker, one that implemented "non-lexical lifetimes". This new borrow checker did a much more precise analysis than the original, allowing us to eliminate a lot of unnecessary errors and make Rust easier to use. I think most everyone who was using Rust 2015 can attest that this shift was a big improvement.

The new borrow checker also fixed a lot of bugs

What is perhaps less well understood is that the new borrow checker implementation also fixed a lot of bugs. In other words, the new borrow checker did not just accept more programs -- it also rejected some programs that were only accepted in the first place due to memory unsafety bugs in the old borrow checker!

Until recently, those fixed bugs produced warnings, not errors

Obviously, we don't want to accept programs that could undermine Rust's safety guarantees. At the same time, as part of our commitment to stability, we try to avoid making sudden bug fixes that will affect a lot of code. Whenever possible, we prefer to "phase in" those changes gradually. We usually begin with "Future Compatibility Warnings", for example, before moving those warnings to hard errors (sometimes a small bit at a time). Since the bug fixes to the borrow checker affected a lot of crates, we knew we needed a warning period before we could make them into hard errors.

To implement this warning period, we kept two copies of the borrow checker around (this is a trick we use quite frequently, actually). The new checker ran first. If it found errors, we didn't report them directly: instead, we ran the old checker in order to see if the crate used to compile before. If so, we reported the errors as Future Compatibility Warnings, since we were changing something that used to compile into errors.

All good things must come to an end; and bad ones, too

Over time we have been slowly transitioning those future compatibility warnings into errors, a bit at a time. About two months ago, we decided that the time had come to finish the job. So, over the course of two PRs, we converted all remaining warnings to errors and then removed the old borrow checker implementation.

What this means for you

If you are testing your package with nightly, then you should be fine. In fact, even if you build on stable, we always recommend that you test your builds in CI with the nightly build, so that you can identify upcoming issues early and report them to us.

Otherwise, you may want to check your dependencies. When we decided to remove the old borrow checker, we also analyzed which crates would stop compiling. For anything that seemed to be widely used, we made sure that there were newer versions of that crate available that do compile (for the most part, this had all already happened during the warning period). But if you have those older versions in your Cargo.lock file, and you are only using stable builds, then you may find that your code no longer builds once 1.40.0 is released -- you will have to upgrade the dependency.

The most common crates that were affected are the following:

  • url version 1.7.0 -- you can upgrade to 1.7.2, though you'd be better off upgrading to 2.1.0
  • nalgebra version 0.16.13 -- you can upgrade to 0.16.14, though you'd be better off upgrading to 0.19.0
  • rusttype version 0.2.0 to 0.2.3 -- you can upgrade to 0.2.4, though you'd be better off upgrading to 0.8.1

You can find out which crates you rely upon using the cargo-tree command. If you find that you do rely (say) on url 1.7.0, you can upgrade to 1.7.2 by executing:

cargo update --package url --precise 1.7.2

Want to learn more?

If you'd like to learn more about the kinds of bugs that were fixed -- or if you are seeing errors in your code that you need to fix -- take a look at this excellent blog post by Felix Klock, which goes into great detail.

Mozilla Addons BlogUpcoming changes to extension sideloading

Sideloading is a method of installing an extension in Firefox by adding an extension file to a special location using an executable application installer. This installs the extension in all Firefox instances on a computer.

Sideloaded extensions frequently cause issues for users since they did not explicitly choose to install them and are unable to remove them from the Add-ons Manager. This mechanism has also been employed in the past to install malware into Firefox. To give users more control over their extensions, support for sideloaded extensions will be discontinued. 

November 1 update: we’ve heard some feedback expressing confusion about how this change will give more control to Firefox users. Ever since we implemented abuse reporting in Firefox 68, the top kind of report we receive by far has been extension installs that weren’t expected and couldn’t be removed, and the extensions being reported are known to be distributed through sideloading. With this change, we are enforcing more transparency in the installation process, by letting users choose whether they want to install an application companion extension or not, and letting them remove it when they want to. Developers will still be free to self-distribute extensions on the web, and users will still be able to install self-distributed extensions. Enterprise administrators will continue to be able to deploy extensions to their users via policies. Other forms of automatic extension deployment like the ones used for some Linux distributions and applications like Selenium may be impacted by these changes. We’re still investigating some technical details around these cases and will try to strike the right balance between user choice and minimal disruption.

During the release cycle for Firefox version 73, which goes into pre-release channels on December 3, 2019 and into release on February 11, 2020, Firefox will continue to read sideloaded files, but they will be copied over to the user’s individual profile and installed as regular add-ons. Sideloading will stop being supported in Firefox version 74, which will be released on March 10, 2020. The transitional stage in Firefox 73 will ensure that no installed add-ons will be lost, and end users will gain the ability to remove them if they choose to.

If you self-distribute your extension via sideloading, please update your install flows and direct your users to download your extension through a web property that you own, or through addons.mozilla.org (AMO). Please note that all extensions must meet the requirements outlined in our Add-on Policies and Developer Agreement.  If you choose to continue self-distributing your extension, make sure that new versions use an update URL to keep users up-to-date. Instructions for distributing an extension can be found in our Extension Workshop document repository.

If you have any questions, please head to our community forum.

The post Upcoming changes to extension sideloading appeared first on Mozilla Add-ons Blog.

The Mozilla BlogFacebook Is Still Failing at Ad Transparency (No Matter What They Claim)

Yesterday, Jack Dorsey made a bold statement: Twitter will cease all political advertising on the platform. “Internet political ads present entirely new challenges to civic discourse: machine learning-based optimization of messaging and micro-targeting, unchecked misleading information, and deep fakes. All at increasing velocity, sophistication, and overwhelming scale,” he tweeted.

Later that day, Sheryl Sandberg responded: Facebook doesn’t have to cease political advertising… because the platform is “focused and leading on transparency.” Sandberg cited Facebook’s ad archive efforts, which ostensibly allow researchers to study the provenance and impact of political ads.

To be clear: Facebook is still falling short on its transparency commitments. Further, even perfect transparency wouldn’t change the fact that Facebook is accepting payment to promote dangerous and untrue ads. 

Some brief history: Because of the importance of transparency in the political ad arena, Mozilla has been closely analyzing Facebook’s ad archive for over a year, and assessing its ability to provide researchers and others with meaningful information.

In February, Mozilla and 37 civil society organizations urged Facebook to provide better transparency into political advertising on their platform. Then, in March, Mozilla and leading disinformation researchers laid out exactly what an effective ad transparency archive should look like.

But when Facebook finally released its ad transparency API in March, it was woefully ineffective. It met just two of experts’ five minimum guidelines. Further, a Mozilla researcher uncovered a long list of bugs and shortcomings that rendered the API nearly useless.

The New York Times agreed: “Ad Tool Facebook Built to Fight Disinformation Doesn’t Work as Advertised,” reads a July headline. The article continues: “The social network’s new ad library is so flawed, researchers say, that it is effectively useless as a way to track political messaging.”

Since that time, Mozilla has confirmed that Facebook has made small changes in the API’s functionality — but we still judge the tool to be fundamentally flawed for its intended purpose of providing transparency and a data source for rigorous research.

Rather than deceptively promoting their failed API, Facebook must heed researchers’ advice and commit to truly transparent political advertising. If they can’t get that right, maybe they shouldn’t be running political ads at all for the time being.

The post Facebook Is Still Failing at Ad Transparency (No Matter What They Claim) appeared first on The Mozilla Blog.

Paul McLanahanThe Lounge on Dokku

Mozilla has hosted an enterprise instance of IRCCloud for several years now, and it’s been a great client to use with our IRC network. IRCCloud has deprecated their enterprise product and so Mozilla recently decommissioned our instance. I then saw several colleagues praising The Lounge as a good self-hosted alternative. I became even more interested when I saw that the project maintains a docker image distribution of their releases. I now have an instance running, I’m using irc.mozilla.org via this client, and I agree with my colleagues: it’s a decent replacement.

Anne van KesterenShadow tree encapsulation theory

A long time ago Maciej wrote down five types of encapsulation for shadow trees (i.e., node trees that are hidden in the shadows from the document tree):

  1. Encapsulation against accidental exposure — DOM nodes from the shadow tree are not leaked via pre-existing generic APIs — for example, events flowing out of a shadow tree don't expose shadow nodes as the event target.
  2. Encapsulation against deliberate access — no API is provided which lets code outside the component poke at the shadow DOM. Only internals that the component chooses to expose are exposed.
  3. Inverse encapsulation — no API is provided which lets code inside the component see content from the page embedding it (this would have the effect of something like sandboxed iframes or Caja).
  4. Isolation for security purposes — it is strongly guaranteed that there is no way for code outside the component to violate its confidentiality or integrity.
  5. Inverse isolation for security purposes — it is strongly guaranteed that there is no way for code inside the component to violate the confidentiality or integrity of the embedding page.

Types 3 through 5 do not have any kind of support and type 4 and 5 encapsulation would be hard to pull off due to Spectre. User agents typically use a weaker variant of type 4 for their internal controls, such as the video and input elements, that does not protect confidentiality. The DOM and HTML standards provide type 1 and 2 encapsulation to web developers, and type 2 mostly due to Apple and Mozilla pushing rather hard for it. It might be worth providing an updated definition for the first two as we’ve come to understand them:

  1. Open shadow trees — no standardized web platform API provides access to nodes in an open shadow tree, except APIs that have been explicitly named and designed to do so (e.g., composedPath()). Nothing should be able to observe these shadow trees other than through those designated APIs (or “monkey patching”, i.e., modifying objects). Limited form of information hiding, but no integrity or confidentiality.
  2. Closed shadow trees — very similar to open shadow trees, except that designated APIs also do not get access to nodes in a closed shadow tree.

Type 2 encapsulation gives component developers control over what remains encapsulated and what is exposed. You need to take all your users into account and expose the best possible public API for them. At the same time, it protects you from folks taking a dependency on the guts of the component, the very aspects you might want to refactor or add functionality to over time. This is much harder with type 1 encapsulation, as there will be APIs that can reach into the details of your component, and if users do so you cannot refactor it without updating all the callers.

Now, both type 1 and 2 encapsulation can be circumvented, e.g., by a script changing the attachShadow() method or mutating another builtin that the component has taken a dependency on. I.e., there is no integrity and as they run in the same origin there is no security boundary either. The limited form of information hiding is primarily a maintenance boon and a way to manage complexity. Maciej addresses this as well:

If the shadow DOM is exposed, then you have the following risks:

  1. A page using the component starts poking at the shadow DOM because it can — perhaps in a rarely used code path.
  2. The component is updated, unaware that the page is poking at its guts.
  3. Page adopts new version of component.
  4. Page breaks.
  5. Page author blames component author or rolls back to old version.

This is not good. Information hiding and hiding of implementation details are key aspects of encapsulation, and are good software engineering practices.

Dave TownsendCreating HTML content with a fixed aspect ratio without the padding trick

It seems to be a common problem, you want to display some content on the web with a certain aspect ratio but you don’t know the size you will be displaying at. How do you do this? CSS doesn’t really have the tools to do the job well currently (there are proposals). In my case I want to display a video and associated controls as large as possible inside a space that I don’t know the size of. The size of the video also varies depending on the one being displayed.

Padding height

The answer to this according to almost all the searching I’ve done is the padding-top/bottom trick. For reasons that I don’t understand, when using relative lengths (percentages) with the CSS padding-top and padding-bottom properties, the values are calculated based on the width of the element. So padding-top: 100% gives you padding equal to the width of the element. Weird. So you can fairly easily create a box with a height calculated from its width and from there display content at whatever aspect ratio you choose. But there’s an inherent problem here: you need to know the width of the box in the first place, or at least be able to constrain it based on something. In my case the aspect ratio of the video and the container are both unknown. In some cases I need to constrain the width and calculate the height, but in others I need to constrain the height and calculate the width, which is where this trick fails.

object-fit

There is one straightforward solution. The CSS object-fit property allows you to scale up content to the largest size possible for the space allocated. This is perfect for my needs, except that it only works for replaced content like videos and images. In my case I also need to overlay some controls on top and I won’t know where to position them unless they are inside a box the size of the video.

The solution?

So what I need is something where I can create a box with set sizes and then scale both width and height to the largest that fit entirely in the container. What do we have on the web that can do that … oh yes, SVG. In SVG you can define the viewport for your content and any shapes you like inside with SVG coordinates and then scale the entire SVG viewport using CSS properties. I want HTML content to scale here and luckily SVG provides the foreignObject element which lets you define a rectangle in SVG coordinates that contains non-SVG content, such as HTML! So here is what I came up with:

<!DOCTYPE html>

<html>
<head>
<style type="text/css">
html,
body,
svg,
div {
  height: 100%;
  width: 100%;
  margin: 0;
  padding: 0;
}

div {
  background: red;
}
</style>
</head>
<body>
  <svg viewBox="0 0 4 3">
    <foreignObject x="0" y="0" width="100%" height="100%">
      <div></div>
    </foreignObject>
  </svg>
</body>
</html>

This is pretty straightforward. It creates an SVG document with a viewport with a 4:3 aspect ratio, a foreignObject container that fills the viewport, and then a div that fills that. What you end up with is a div with a 4:3 aspect ratio. While this shows it working against the full page, it seems to work anywhere with constraints on either height, width, or both, such as in a flex or grid layout. Obviously changing the viewBox allows you to get any aspect ratio you like; just setting it to the size of the video gives me exactly what I want.

You can see it working over on codepen.

William LachanceUsing BigQuery JavaScript UDFs to analyze Firefox telemetry for fun & profit

For the last year, we’ve been gradually migrating our backend Telemetry systems from AWS to GCP. I’ve been helping out here and there with this effort, most recently porting a job we used to detect slow tab spinners in Firefox nightly, which produced a small dataset that feeds a small adhoc dashboard which Mike Conley maintains. This was a relatively small task as things go, but it highlighted some features and improvements which I think might be broadly interesting, so I decided to write up a small blog post about it.

Essentially all this dashboard tells you is what percentage of the Firefox nightly population saw a tab spinner over the past 6 months. And of those that did see a tab spinner, what was the severity? We’re just trying to make sure that there are no major regressions of user experience (and also that efforts to improve things bore fruit):

Pretty simple stuff, but getting the data necessary to produce this kind of dashboard used to be anything but trivial: while some common business/product questions could be answered by a quick query to clients_daily, getting engineering-specific metrics like this usually involved trawling through gigabytes of raw heka encoded blobs using an Apache Spark cluster and then extracting the relevant information out of the telemetry probe histograms (in this case, FX_TAB_SWITCH_SPINNER_VISIBLE_MS and FX_TAB_SWITCH_SPINNER_VISIBLE_LONG_MS) contained therein.

The code itself was rather complicated (take a look, if you dare) but even worse, running it could get very expensive. We had a 14 node cluster churning through this script daily, and it took on average about an hour and a half to run! I don’t have the exact cost figures on hand (and am not sure if I’d be authorized to share them if I did), but based on a back of the envelope sketch, this one single script was probably costing us somewhere on the order of $10-$40 a day (that works out to between $3650-$14600 a year).

With our move to BigQuery, things get a lot simpler! Thanks to the combined effort of my team and data operations[1], we now produce “stable” ping tables on a daily basis with all the relevant histogram data (stored as JSON blobs), queryable using relatively vanilla SQL. In this case, the data we care about is in telemetry.main (named after the main ping, appropriately enough). With the help of a small JavaScript UDF function, all of this data can easily be extracted into a table inside a single SQL query scheduled by sql.telemetry.mozilla.org.

CREATE TEMP FUNCTION
  udf_js_json_extract_highest_long_spinner (input STRING)
  RETURNS INT64
  LANGUAGE js AS """
    if (input == null) {
      return 0;
    }
    // `input` is a JSON-encoded histogram; its `values` field maps
    // bucket lower bounds (as string keys) to counts.
    var result = JSON.parse(input);
    var valuesMap = result.values;
    var highest = 0;
    // Integer-like keys iterate in ascending numeric order, so the last
    // non-empty bucket we see is the highest one. A non-empty 0 bucket is
    // mapped to 1 so the result stays non-zero.
    for (var key in valuesMap) {
      var range = parseInt(key);
      if (valuesMap[key]) {
        highest = range > 0 ? range : 1;
      }
    }
    return highest;
""";

SELECT build_id,
sum (case when highest >= 64000 then 1 else 0 end) as v_64000ms_or_higher,
sum (case when highest >= 27856 and highest < 64000 then 1 else 0 end) as v_27856ms_to_63999ms,
sum (case when highest >= 12124 and highest < 27856 then 1 else 0 end) as v_12124ms_to_27855ms,
sum (case when highest >= 5277 and highest < 12124 then 1 else 0 end) as v_5277ms_to_12123ms,
sum (case when highest >= 2297 and highest < 5277 then 1 else 0 end) as v_2297ms_to_5276ms,
sum (case when highest >= 1000 and highest < 2297 then 1 else 0 end) as v_1000ms_to_2296ms,
sum (case when highest > 0 and highest < 50 then 1 else 0 end) as v_0ms_to_49ms,
sum (case when highest >= 50 and highest < 100 then 1 else 0 end) as v_50ms_to_99ms,
sum (case when highest >= 100 and highest < 200 then 1 else 0 end) as v_100ms_to_199ms,
sum (case when highest >= 200 and highest < 400 then 1 else 0 end) as v_200ms_to_399ms,
sum (case when highest >= 400 and highest < 800 then 1 else 0 end) as v_400ms_to_799ms,
count(*) as count
from
(select build_id, client_id, max(greatest(highest_long, highest_short)) as highest
from
(SELECT
    SUBSTR(application.build_id, 0, 8) as build_id,
    client_id,
    udf_js_json_extract_highest_long_spinner(payload.histograms.FX_TAB_SWITCH_SPINNER_VISIBLE_LONG_MS) AS highest_long,
    udf_js_json_extract_highest_long_spinner(payload.histograms.FX_TAB_SWITCH_SPINNER_VISIBLE_MS) as highest_short
FROM telemetry.main
WHERE
    application.channel='nightly'
    AND normalized_os='Windows'
    AND application.build_id > FORMAT_DATE("%Y%m%d", DATE_SUB(CURRENT_DATE(), INTERVAL 2 QUARTER))
    AND DATE(submission_timestamp) >= DATE_SUB(CURRENT_DATE(), INTERVAL 2 QUARTER))
group by build_id, client_id) group by build_id;

In addition to being much simpler, this new job is also way cheaper. The last run of it scanned just over 1 TB of data, meaning it cost us just over $5. Not as cheap as I might like, but considerably less expensive than before: I’ve also scheduled it to only run once every other day, since Mike tells me he doesn’t need this data any more often than that.

[1] I understand that Jeff Klukas, Frank Bertsch, Daniel Thorn, Anthony Miyaguchi, and Wesley Dawson are the principals involved - apologies if I’m forgetting someone.

Mozilla VR BlogA Year with Spoke: Announcing the Architecture Kit

A Year with Spoke: Announcing the Architecture Kit

Spoke, our 3D editor for creating environments for Hubs, is celebrating its first birthday with a major update. Last October, we released the first version of Spoke, a compositing tool for mixing 2D and 3D content to create immersive spaces. Over the past year, we’ve made a lot of improvements and added new features to make building scenes for VR easier than ever. Today, we’re excited to share the latest feature that adds to the power of Spoke: the Architecture Kit!

We first talked about the components of the Architecture Kit back in March. With the Architecture Kit, creators now have an additional way to build custom content for their 3D scenes without using an external tool. Specifically, we wanted to make it easier to take existing components that have already been optimized for VR and make it easy to configure those pieces to create original models and scenes. The Architecture Kit contains over 400 different pieces that are designed to be used together to create buildings - the kit includes wall, floor, ceiling, and roof pieces, as well as windows, trim, stairs, and doors.

Because Hubs runs across mobile, desktop, and VR devices and is delivered through the browser, performance is a key consideration. The different components of the Architecture Kit were created in Blender so that each piece would align together to create seamless connections. By avoiding mesh overlap, which is a common challenge when building with pieces that were made separately, z-fighting between faces becomes less of a problem. Many of the Architecture Kit pieces were made single-sided, which reduces the number of faces that need to be rendered. This is incredibly useful when creating interior or exterior pieces where one side of a wall or ceiling piece will never be visible to a user.

We wanted the Architecture Kit to be configurable beyond just the meshes. Buildings exist in a variety of contexts, so different pieces of the Kit can have one or more material slots with unique textures and materials that can be applied. This allows you to customize a wall with a window trim to have, for example, a brick wall and a wood trim. You can choose from the built-in textures of the Architecture Kit pieces directly in Spoke, or download the entire kit from GitHub.

This release, which focuses on classic architectural styles and interior building tools, is just the beginning. As part of building the architecture kit, we’ve built a Blender add-on that will allow 3D artists to create their own kits. Creators will be able to specify individual collections of pieces and set an array of different materials that can be applied to the models to provide variation for different meshes. If you’re an artist interested in contributing to Spoke and Hubs by making a kit, drop us a line at hubs@mozilla.com.

In addition to the Architecture Kit, Spoke has had some other major improvements over the course of its first year. Recent changes to object placement have made it easier when laying out scene objects, and an update last week introduced the ability to edit multiple objects at one time. We added particle systems over the summer, enabling more dynamic elements to be placed in your rooms. It’s also easier to visualize different components of your scene with a render mode toggle that allows you to swap between wireframe, normals, and shadows. The Asset Panel got a makeover, multi-select and edit was added, and we fixed a huge list of bugs when we made Spoke available as a fully hosted web app.

Looking ahead to next year, we’re planning on adding features that will give creators more control over interactivity within their scenes. While the complete design and feature set for this isn’t yet fully scoped, we’ve gotten great feedback from Spoke creators that they want to add custom behaviors, so we’re beginning to think about what it will look like to experiment with scripting or programmed behavior on elements in a scene. If you’re interested in providing feedback and following along with the planning, we encourage you to join the Hubs community Discord server, and keep an eye on the Spoke channel. You can also follow development of Spoke on GitHub.

There has never been a better time to start creating 3D content for the web. With new tools and features being added to Spoke all the time, we want your feedback in and we’d love to see what you’re building! Show off your creations and tag us on Twitter @ByHubs, or join us in the Hubs community Discord and meet other users, chat with the team, and stay up to date with the latest in Mozilla Social VR.

Mozilla Open Policy & Advocacy BlogA Year in Review: Fighting Online Disinformation

A year ago, Mozilla signed the first ever Code of Practice on Disinformation, brokered in Europe as part of our commitment to an internet that elevates critical thinking, reasoned argument, shared knowledge, and verifiable facts. The Code set a wide range of commitments for all the signatories, from transparency in political advertising to the closure of fake accounts, to address the spread of disinformation online. And we were hopeful that the Code would help to drive change in the platform and advertising sectors.

Since then, we’ve taken proactive steps to help tackle this issue, and today our self assessment of this work was published by the European Commission. Our assessment covers the work we’ve been doing at Mozilla to build tools within the Firefox browser to fight misinformation, empower users with educational resources, support research on disinformation and lead advocacy efforts to push the ecosystem to live up to their own commitments within the Code of Practice.

Most recently, we’ve rolled out enhanced security features in the default settings of Firefox that safeguard our users from the pervasive tracking and collection of personal data by ad networks and tech companies by blocking all third-party cookies by default. As purveyors of disinformation feed off of information that can be revealed about an individual’s browsing behavior, we expect this protection to reduce the exposure of users to the risks of being targeted by disinformation campaigns.

We are proud of the steps we’ve taken during the last year, but it’s clear that the platforms and online advertising sectors  need to do more – e.g. protection against online tracking, meaningful transparency of political ads and support of the research community  – to fully tackle online disinformation and truly empower individuals. In fact, recent studies have highlighted the fact that disinformation is not going away – rather, it is becoming a “pervasive and ubiquitous part of everyday life”.

The Code of Practice represents a positive shift in the fight to counter misinformation, and today’s self assessments are proof of progress. However, this work has only just begun and we must make sure that the objectives of the Code are fully realized. At Mozilla, we remain committed to further this agenda to counter disinformation online.

The post A Year in Review: Fighting Online Disinformation appeared first on Open Policy & Advocacy.

Cameron KaiserSourceForge download issues (and Github issues issues)

There are two high-priority problems currently affecting TenFourFox's download and development infrastructure. Please don't open any more Tenderapp tickets on these: I am painfully aware of them and am currently trying to devise workarounds, and the more tickets get opened the more time I spend redirecting people instead of actually working on fixes.

The first one is that the hack we use to relax JavaScript syntax to get Github working (somewhat) is now causing the browser to go into an infinite error loop on Github issue reports and certain other Github pages, presumably due to changes in code on their end. Unfortunately we use Github heavily for the wiki and issues tracker, so this is a major problem. The temporary workaround is, of course, a hack to relax JavaScript syntax even further. This hack is disgusting and breaks a lot of tests but is simple and does seem to work, so if I can't come up with something better it will be in FPR17. Most regular users won't be affected by this.

However, the one that is now starting to bite people is that SourceForge has upgraded its cipher suite to require a new cipher variant TenFourFox doesn’t support. Unfortunately, all that happens is that links to download TenFourFox don’t work; there is no error message, no warning and no explanation. That is a bug in and of itself, but this leaves people dead in the water, being pinged to install an update they can’t access directly.

Naturally you can download the browser on another computer but this doesn't help users who only have a Power Mac. Fortunately, the good old TenFourFox Downloader works fine. As a temporary measure I have started offering the Downloader to all users from the TenFourFox main page, or you can get it from this Tenderapp issue. I will be restricting it to only Power Mac users regardless of browser very soon because it's not small and serves no purpose on Intel Macs (we don't support them) or other current browsers (they don't need it), but I want to test that the download gate works properly first and I'm not at my G5 right this second, so this will at least unblock people for the time being.

For regression and stability reasons I don't want to do a full NSPR/NSS update but I think I can hack this cipher in as we did with another one earlier. I will be speeding up my timetable for FPR17 so that people can test the changes (including some other site compatibility updates), but beta users are forewarned that you will need to use another computer to download the beta. Sorry about that.

The Firefox FrontierPassword dos and don’ts

So many accounts, so many passwords. That’s online life. The average person with a typical online presence is estimated to have close to 100 online accounts, and that figure is … Read more

The post Password dos and don’ts appeared first on The Firefox Frontier.

Hacks.Mozilla.OrgAuditing For Accessibility Problems With Firefox Developer Tools

Since its debut in Firefox 61, the Accessibility Inspector in the Firefox Developer Tools has evolved from a low-level tool showing the accessibility structure of a page. In Firefox 70, the Inspector has become an auditing facility to help identify and fix many common mistakes and practices that reduce site accessibility. In this post, I will offer an overview of what is available in this latest release.

Inspecting the Accessibility Tree

First, select the Accessibility Inspector from the Developer Toolbox. Turn on the accessibility engine by clicking “Turn On Accessibility Features.” You’ll see a full representation of the current foreground tab as assistive technologies see it. The left pane shows the hierarchy of accessible objects. When you select an element there, the right pane fills to show the common properties of the selected object such as name, role, states, description, and more. To learn more about how the accessibility tree informs assistive technologies, read this post by Hidde de Vries.

screenshot of accessibility tree in the Firefox developer Tools

The DOM Node property is a special case. You can double-click or press ENTER to jump straight to the element in the Page Inspector that generates a specific accessible object. Likewise, when the accessibility engine is enabled, open the context menu of any HTML element in the Page Inspector to inspect any associated accessible object. Note that not all DOM elements have an accessible object. Firefox will filter out elements that assistive technologies do not need. Thus, the accessibility tree is always a subset of the DOM tree.

In addition to the above functionality, the inspector will also display any issues that the selected object might have. We will discuss these in more detail below.

The accessibility tree refers to the full tree generated from the HTML, JavaScript, and certain bits of CSS from the web site or application. However, to find issues more easily, you can filter the left pane to only show elements with current accessibility issues.

Finding Accessibility Problems

To filter the tree, select one of the auditing filters from the “Check for issues” drop-down in the toolbar at the top of the inspector window. Firefox will then reduce the left pane to the problems in your selected category or categories. The items in the drop-down are check boxes — you can check for both text labels and focus issues. Alternatively, you can run all the checks at once if you wish. Or, return the tree to its full state by selecting None.

Once you select an item from the list of problems, the right pane fills with more detailed information. This includes an MDN link to explain more about the issue, along with suggestions for fixing the problem. You can go into the page inspector and apply changes temporarily to see immediate results in the accessibility inspector. Firefox will update Accessibility information dynamically as you make changes in the page inspector. You gain instant feedback on whether your approach will solve the problem.

Text labels

Since Firefox 69, you have the ability to filter the list of accessibility objects to only display those that are not properly labeled. In accessibility jargon, these are items that have no names. The name is the primary source of information that assistive technologies, such as screen readers, use to inform a user about what a particular element does. For example, a proper button label informs the user what action will occur when the button is activated.

The check for text labels goes through the whole document and flags all the issues it knows about. Among other things, it finds missing alt-text (alternative text) on images, missing titles on iframes or embeds, missing labels for form elements such as inputs or selects, and missing text in a heading element.

Screenshot of text label issues being displayed in the inspector

Check for Keyboard issues

Keyboard navigation and visual focus are common sources of accessibility issues for various types of users. To help debug these more easily, Firefox 70 contains a checker for many common keyboard and focus problems. This auditing tool detects elements that are actionable or have interactive semantics. It can also detect if focus styling has been applied. However, there is high variability in the way controls are styled. In some cases, this results in false positives. If possible, we would like to hear from you about these false positives, with an example that we can reproduce.

For more information about focus problems and how to fix them, don’t miss Hidde’s post on indicating focus.

Screenshot of keyboard issues displayed in the inspector

Contrast

Firefox includes a full-page color contrast checker that checks for all three types of color contrast issues identified by the Web Content Accessibility Guidelines 2.1 (WCAG 2.1):

  • Does normal text on background meet the minimum requirement of 4.5:1 contrast ratio?
  • Does the heading, or more generally, does large-scale, text on background meet the 3:1 contrast ratio requirement?
  • Will interactive elements and graphical elements meet a minimum ratio of 3:1 (added in WCAG 2.1)?

In addition, the color contrast checker provides information on the triple-A success criteria contrast ratio. You can see immediately whether your page meets this advanced standard.

Are you using a gradient background or a background with other forms of varying colors? In this case, the contrast checker (and accessibility element picker) indicates which parts of the gradient meet the minimum requirements for color contrast ratios.

Color Vision Deficiency Simulator

Firefox 70 contains a new tool that simulates seven types of color vision deficiencies, a.k.a. color blindness. It shows a close approximation of how a person with one of these conditions would see your page. In addition, it informs you if you’ve colored something that would not be viewable by a colorblind user. Have you provided an alternative? For example, someone who has Protanomaly (low red) or Protanopia (no red) color perception would be unable to view an error message colored in red.

As with all vision deficiencies, no two users have exactly the same perception. The low red, low green, low blue, no red, no green, and no blue deficiencies are genetic and affect about 8 percent of men, and 0.5 percent of women worldwide. However, contrast sensitivity loss is usually caused by other kinds of mutations to the retina. These may be age-related, caused by an injury, or via genetic predisposition.

Note: The color vision deficiency simulator is only available if you have WebRender enabled. If it isn’t enabled by default, you can toggle the gfx.webrender.all property to True in about:config.

Screenshot of a full page view with the simulator in action

Quick auditing with the accessibility picker

As a mouse user, you can also quickly audit elements on your page using the accessibility picker. Like the DOM element picker, it highlights the element you selected and displays its properties. Additionally, as you hover the mouse over elements, Firefox displays an information bar that shows the name, role, and states, as well as color contrast ratio for the element you picked.

First, click the Accessibility Picker icon. Then click on an element to show its properties. Want to quickly check multiple elements in rapid succession? Click the picker, then hold the shift key to click on multiple items one after another to view their properties.

screenshot of the accessibility picker in action

In Conclusion

Since its release back in June 2018, the Accessibility Inspector has become a valuable helper in identifying many common accessibility issues. You can rely on it to assist in designing your color palette and to ensure you always offer good contrast and color choices. Because these a11y features are built into the DevTools you already use, you do not need to search for or download external services or extensions first.

The post Auditing For Accessibility Problems With Firefox Developer Tools appeared first on Mozilla Hacks - the Web developer blog.

This Week In RustThis Week in Rust 310

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

Sadly, there was no nomination for crate of the week.

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

347 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs

New RFCs

Upcoming Events

Africa
Asia Pacific
Europe
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

…man, starting to dig through the source code of a really large open source program is so weird. It’s like wandering around a giant cathedral that’s being constantly renovated and repaired and maintained over the course of years by a giant team of invisible crafters and architects, who mostly communicate via notes and designs pinned to the walls in various places.

icefoxen on their wiki

Thanks to Ralf Jung for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nasa42, llogiq, and Flavsditz.

Discuss on r/rust.

Tantek Çelik#Redecentralize 2019 Session: IndieWeb Decentralized Standards and Methods

Kevin Marks wearing an IndieWebCamp t-shirt leading a discussion session with a projector screen next to him showing an indie event for Homebrew Website Club

Kevin Marks started the session by having me bring up the tabs that I’d shown in my lightning talk earlier, digging into the specifications, tools, and services linked therein. Participants asked questions and Kevin & I answered, demonstrating additional resources as necessary.

IndieWeb Profiles and IndieWebify

One of the first questions was about how people represent themselves on the IndieWeb in a way that is discoverable and expresses various properties.

Kevin described how the h-card standard works and is used to express a person’s name, their logo or photo, and other bits of optional information. He showed his own site kevinmarks.com and asked me to use View Source to illustrate the markup.

Next we showed indiewebify.me, which has a form to check your h-card, show what it found, and suggest properties you could add to your profile on your home page.

Checking microformats and JSON output

From the consuming code perspective, we demonstrated the microformats2 parser at microformats.io using Kevin’s site again. We went through the standard parser JSON output with clear values for the name, photo, and other properties.

Similarly, we took a look at one of my posts parsed by microformats.io as an example of parsing an h-entry and seeing the author, content, and other properties in the JSON output.
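For reference, the JSON a microformats2 parser produces has a small, predictable shape. Here is a hedged sketch of roughly what an h-card parse looks like, written with the serde_json crate and made-up property values:

use serde_json::json;

fn main() {
    // Approximate shape of a microformats2 parse result for a page with one h-card;
    // the values are invented for illustration.
    let parsed = json!({
        "items": [{
            "type": ["h-card"],
            "properties": {
                "name": ["Kevin Marks"],
                "url": ["http://www.kevinmarks.com/"],
                "photo": ["http://www.kevinmarks.com/photo.jpg"]
            }
        }],
        "rels": {},
        "rel-urls": {}
    });
    println!("{}", serde_json::to_string_pretty(&parsed).unwrap());
}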

IndieWeb Standards, W3C Micropub Recommendation & Test Suite

Next we walked through the overview of IndieWeb specifications that I’d quickly mentioned by name in my lightning talk but had not explicitly described. We explained each of these building block standards, its features, and what user functionality each provides when implemented.

In particular we spent some time on the Micropub living standard, which client software and websites use to post and update content. The living standard editor’s draft carries errata and updates on top of the official W3C Micropub Recommendation, which itself was finished using the Micropub.rocks test suite and implementation results to demonstrate that each feature was interoperably implementable by several implementations.

Lastly we noted that many more Micropub clients & servers have been interoperably developed since then using the test suite, and the importance of test suites for long-term interoperability and dependable standards in general.
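As a concrete illustration of what those clients do, a Micropub create is just an authenticated, form-encoded POST to the site’s Micropub endpoint. Here is a hedged sketch using the reqwest crate; the endpoint URL and token are hypothetical, and a real client discovers the endpoint from a rel="micropub" link and obtains the token via IndieAuth:

use reqwest::blocking::Client;

fn main() -> Result<(), reqwest::Error> {
    // Hypothetical values, for illustration only.
    let endpoint = "https://example.com/micropub";
    let token = "ACCESS_TOKEN";

    // h=entry plus a content property is the simplest possible create request.
    let params = [("h", "entry"), ("content", "Hello from a Micropub client!")];

    let response = Client::new()
        .post(endpoint)
        .bearer_auth(token)
        .form(&params)
        .send()?;

    // A successful create returns 201 with a Location header pointing at the new post.
    println!("{} {:?}", response.status(), response.headers().get("Location"));
    Ok(())
}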

IndieWeb Events & RSVPs

Kevin used his mobile phone to post an Indie RSVP post in response to the Indie Event post that I’d shown in my talk. He had me bring it up again to show that this time it had an RSVP from him.

Clicking it took us to Kevin’s Known site, which he’d used to post the RSVP from his mobile. I had to enable JavaScript for the “Filter Content” dropdown navigation menu to work (it really should work without JS, via CSS, using well-established and easily searchable techniques). Choosing RSVP showed a list of recent RSVPs, with the one he’d just posted at the top: RSVP No: But I do miss it.

We viewed the source of the RSVP post and walked through the markup, identifying the p-rsvp property that was used along with the no value. Additionally, we ran it through microformats.io to show the resulting JSON with the "p-rsvp" property and "no" value.

IndieWeb Identity, Signing-in, and IndieAuth

As had been implied so far, the IndieWeb builds upon the widespread existing practice of using personal domain names for identity. While initially we had used OpenID, early usage & implementation frustrations (from confusing markup to out-of-date PHP libraries, etc.) led us down the path of using the XFN rel=me value to authenticate via providers that allow linking back to your site. We mentioned RelMeAuth and Web Sign-in accordingly.
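Under the hood, the rel=me side of this is simple: fetch the home page and collect the links marked rel="me". A hedged sketch of that core step using the scraper crate, run against an inline HTML snippet rather than a live fetch:

use scraper::{Html, Selector};

fn main() {
    // Stand-in for a fetched home page; a real checker would download it first.
    let html = r#"<html><body>
        <a href="https://github.com/example" rel="me">GitHub</a>
        <a href="https://example.com/elsewhere">not a rel=me link</a>
    </body></html>"#;

    let document = Html::parse_document(html);
    // rel is a space-separated list, so the ~= attribute selector is used.
    let selector = Selector::parse(r#"a[rel~="me"], link[rel~="me"]"#).unwrap();

    for element in document.select(&selector) {
        if let Some(href) = element.value().attr("href") {
            println!("rel=me link: {}", href);
        }
    }
}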

We used yet another form on indiewebify.me to check the rel=me markup on KevinMarks.com and my own site tantek.com. As a demonstration, I signed out of indieweb.org and clicked Sign-in in the top right corner.

I entered my domain https://tantek.com/ and the site redirected to the IndieLogin authentication screen, where it found one confirmed provider, GitHub, and showed a green button accordingly. Clicking the green button briefly redirected to GitHub for authentication (I was already signed into GitHub) and then returned through the flow to IndieWeb.org, which now showed in the top right corner that I was logged in as tantek.com.

To set up your own domain to sign into IndieWeb.org, we showed the setup instructions for the IndieLogin service, noting that in addition to a rel=me link to an OAuth-based identity provider like GitHub, you could use a PGP public key. If you choose PGP at the confirmed provider screen, IndieLogin provides challenge text for you to sign with your private key and submit, and it verifies the result against the public key you’ve provided to confirm your identity.

Popping up a level, we noted that the IndieLogin service works by implementing the IndieAuth protocol as a provider, which IndieWeb.org uses as its default authentication provider (sites can specify their own authentication providers, naturally).

Andre (Soapdog) asked:

How do I add a new way to authenticate, like SecureScuttleButt (SSB)?

The answer is to make an IndieAuth provider that handles SSB authentication. See the IndieAuth specification for reference; before diving in, though, read Aaron Parecki's article "OAuth for the Open Web".

Social Readers and Microsub

Another asked:

How does reading work on the IndieWeb?

From long-term experience with classic feed readers (RSS readers), the IndieWeb community figured out that there was a need to modularize readers. In particular, there was a clear difference between the developer expertise and incentive models of server-side feed aggregators and those of client-side feed readers, one better served by independent development, with a standard protocol for communicating between the two.

The Microsub standard was designed from this experience and these identified needs. In the past couple of years, several Microsub clients and a few servers have been developed, listed in the section on Social Readers.

Social Readers also build upon the IndieAuth authentication standard for signing-in, and then associate your information with your domain accordingly. I demonstrated this by signing into the Aperture feed aggregator (and Microsub server) with my own domain name, and it listed my channels and feeds therein.
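Behind that channel listing sits a small authenticated HTTP API. A hedged sketch of asking a Microsub server for its channels with the reqwest crate; the endpoint and token are hypothetical, and a real reader discovers the endpoint from a rel="microsub" link and authenticates via IndieAuth:

use reqwest::blocking::Client;

fn main() -> Result<(), reqwest::Error> {
    // Hypothetical values, for illustration only.
    let endpoint = "https://aperture.example/microsub";
    let token = "ACCESS_TOKEN";

    let channels: serde_json::Value = Client::new()
        .get(endpoint)
        .query(&[("action", "channels")])
        .bearer_auth(token)
        .send()?
        .json()?;

    // Expected response shape: {"channels": [{"uid": "...", "name": "..."}, ...]}
    println!("{}", channels);
    Ok(())
}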

I demonstrated adding another feed to aggregate in my "IndieWeb" channel by entering Kevin Marks’s Known, choosing its microformats h-feed, which then resulted in 200+ new unread items!

I signed into the Monocle social reader, which showed notifications by default and a list of channels. Selecting the IndieWeb channel showed the unread items from Kevin’s site.

Does this work with static sites?

In short, yes. The IndieWeb works great with static sites.

One of the most common questions we get in the IndieWeb community is whether or not any one particular standard or technique works with static sites and static site generator setups.

During the many years of the W3C Social Web Working Group, many different approaches were presented for solving various social web use-cases. Many of them had such strong dynamic-server assumptions that they outright rejected static sites as a use-case. It was kind of shocking to be honest, as if the folks behind those particular approaches had not actually encountered the real-world diversity of web site developers and techniques that are out there.

Fortunately we were able to uphold static sites as a legitimate use-case for the majority of specifications, and thus at least all the W3C Recommendations which were the result of incubations and contributions by the IndieWeb community were designed to support static sites.

There are a couple of great reference pages on the IndieWeb wiki for static site setups:

In addition, there are IndieWeb pages for any particular static site system with particular recommendations and setup steps for adding support for various IndieWeb standards.

Kevin also pointed out that his home page kevinmarks.com is a simple static HTML page that uses the Heroku Webmention service to display comments, likes, and mentions of his home page in the footer.

What Next: Join Chat & IndieWebCamps!

As we got the 2-minute warning that our session time was almost up, we wrapped up with how to keep the conversation going. We encouraged everyone to join the online IndieWeb Chat, which is available via IRC (Freenode #indieweb), Slack, Matrix, Discourse, and of course the web.

See: chat.indieweb.org to view today’s chats, and links to join from Slack, Matrix, etc.

Lastly we announced the next two IndieWebCamps coming up!

We encouraged all the Europeans to sign-up for IndieWebCamp Berlin, while encouraging folks from the US to sign-up for San Francisco.

With that we thanked everyone for their participation, excellent questions, and discussion, and said we look forward to seeing them online and meeting up in person!

The Rust Programming Language BlogA call for blogs 2020

What will Rust development look like in 2020? That's partially up to you! Here's how it works:

  • Anyone and everyone in the Rust community writes a blog post about what they'd like Rust development to be like in 2020.
  • The core team reads all the posts, and writes up a "Roadmap RFC" to make a formal proposal.
  • The RFC is reviewed by everyone, comments are made, adjustments are made, and eventually it is accepted.
  • This RFC is a guide to either accept or postpone RFCs for 2020. If a proposal fits into the themes of what we want to accomplish, we'll take it, but if it doesn't, we'll put it off until the next year.

This process takes time, and it won't quite be complete before 2020 starts.

  • We'll review the posts December 1. That gives you a month to think about Rust in 2020 and write something up.
  • We'll aim to produce the RFC draft in the week or two after
  • Depending on how many comments the RFC gets, we may not end up accepting it until early January.

What we're looking for

We are accepting ideas about almost anything having to do with Rust: language features, tooling needs, community programs, ecosystem needs... if it's related to Rust, we want to hear about it.

One big question for this year: will there be a Rust 2021 edition? If so, 2020 would be the year to do a lot of associated work and plan the details. What would the edition's theme be?

  • Rust 2015: Stability
  • Rust 2018: Productivity
  • Rust 2021: ?

Let us know what you think!

Please share these posts with us

You can write up these posts and email them to community@rust-lang.org or tweet them with the hashtag #rust2020. If you'd prefer to not participate publicly, emailing something to community@rust-lang.org is fine as well.

Thanks for helping make Rust awesome! We are looking forward to doing amazing things in 2020.

Firefox NightlyThese Weeks in Firefox: Issue 67

Highlights

Friends of the Firefox team

Resolved bugs (excluding employees)

Fixed more than one bug

  • Dennis van der Schagt [:dennisschagt]
  • Florens Verschelde :fvsch
  • Itiel
  • Logan Smyth [:loganfsmyth]
  • Marco Vega
  • Sorin Davidoi

New contributors (🌟 = first patch)

Project Updates

Accessibility

Add-ons / Web Extensions

Developer Tools

Console
  • Console settings menu (Bug 1523868), see the next screenshot. Contributed by Armando
    • The Console developer tool shows a menupopup beneath a settings button, giving the user the ability to Persist Logs, Show Timestamps and Group Similar Messages.

      We’ve tucked additional capabilities for the Console inside of this gear menu.

  • Scratchpad is deprecated and will be removed/replaced by Editor mode in the Console panel. The only thing blocking the removal is adding load/save buttons to the Browser Console Editor Toolbar (bug 1584259). See the next screenshot.
    • The Scratchpad Developer Tool toggle in the Developer Tools Settings menu shows a warning saying that Scratchpad is deprecated.

      Pour one out for Scratchpad – thank you for your years of service!

  • The Jump to definition button (displayed next to functions) should now properly handle source maps (Bug 1433373), thanks to mattheww
Network
  • The Waterfall column (aka Timeline) now hides automatically when a sidebar is opened (bug)
Layout
  • Whitespace text nodes are now displayed with a “whitespace” badge in the markup view (bug). Thanks to thomas.jean.lynch for the patch! See the `whitespace` badges on the next screenshot.
    • The Inspector Developer Tool showing whitespace nodes with a clearly labeled "whitespace" marker.

      This should make it easier to identify when there’s whitespace in your markup that might be affecting the page.

Fission
  • Most of the team is focusing on Fission and we are working towards our first milestone that should be finished in December. The focus is on the Browser Toolbox we call Omniscient.
  • A list of preferences related to the Fission work is available through a FIS toolbar button.
    • A "FIS" menu inside of the Browser Toolbox lists a number of important Fission-related preferences and what their values are set to.

Fission

  • Autocomplete and password manager now work in Fission. Many related tests have been re-enabled. 🎉🎉🎉
  • Abdoulaye has finished fullscreen video support. 🎉🎉🎉
  • Talos tests now run to completion with Fission enabled
  • The following components are being ported to JSWindowActors by MSU capstone students:
    • PageStyle port is functioning with Fission enabled, just need to hammer out a final bug before landing
    • ShieldFrame port is being reviewed
    • PageThumbs is inching closer to completion, just one test failure remaining
    • UITour port is beginning
    • PopupBlocking port still in early phases, but the notification bar now appears
    • PictureInPicture works with JSWindowActors, but appears to be broken in oop-iframes. Currently investigating.

Mobile

Mobile Browsers
  • Firefox Account login can now be accessed directly from Firefox for Android’s Awesomescreen. Bug 1570880
Mobile Applications
  • Lockwise for Android v3.0.0 was released as part of the Skyline release packet. Features include updating and deleting of credentials, and capturing from autofilled pages and apps (including in Firefox Preview!).

Password Manager


Performance

Performance Tools

  • The checkbox to enable “No Periodic Sampling” mode has landed in mozilla-central today and will be available in the next Nightly. That mode helps by capturing only markers, reducing the profiler’s overhead.
  • We are working on some cool stuff like:

Picture-in-Picture

  • The plan continues to be a slow roll-out of Picture-in-Picture to our Windows users in the Firefox 71 release.
  • Fixed:
    • Bug 1585769 – Attempting to open some videos in the Picture-in-Picture player results in an all-white player window and a “SecurityError: The operation is insecure” error
    • Bug 1568373 – Picture in Picture should have a black background
    • Bug 1587362 – PiP button doesn’t work after putting tab back in window
  • Next:
    • Bug 1568316 – Picture-in-Picture window is missing the OS drop shadow
    • Bug 1589158 – Add in-tree documentation for Picture-in-Picture component
    • Begin working on macOS support, tentatively aiming to have that done for Firefox 72
    • Have QA begin testing macOS and Linux support for a potential Firefox 72 release

Privacy/Security

Search and Navigation

Search
  • Separate search engine for private browsing (Firefox 71)
    • Enabled by default in Nightly and early Beta (can be controlled through the temporary feature pref browser.search.separatePrivateDefault.ui.enabled).
    • When you report bugs, please make them block Bug 1411340.
    • It is now possible to enable search suggestions for private windows as well, from about:preferences#search.
    • It is possible to run a search directly in a private window from a normal window (with the private search engine, if set) from the content context menu, the urlbar, or the command line.
    • The feature discovery banner in about:privatebrowsing has been disabled for now (pending a different future design).
  • Search configuration modernization (Firefox 72)
    • No updates, team moving back to this project
Address Bar

User Journey

Mozilla Addons BlogAdd-on Policies Update: Newtab and Search

As part of our ongoing work to make add-ons safer for Firefox users, we are updating our Add-on Policies to add clarification and guidance for developers regarding data collection. The following is a summary of the changes, which will go into effect on December 2, 2019.

  • Search functionality provided or loaded by the add-on must not collect search terms or intercept searches that are going to a third-party search provider.
  • If the collection of visited URLs or user search terms is required for the add-on to work, the user must provide affirmative consent (i.e., explicit opt-in) at first run, since that data can contain personal information. For more information on how to create a data collection consent dialog, refer to our best practices.
  • Add-ons must not load or redirect to a remote new tab page. The new tab page must be contained within the add-on.

You can preview the policies and ensure your extensions abide by them to avoid any disruption. If you have questions about these updated policies or would like to provide feedback, please post to this forum thread.

The post Add-on Policies Update: Newtab and Search appeared first on Mozilla Add-ons Blog.

Nathan Froydevaluating bazel for building firefox, part 1

After the Whistler All-Hands this past summer, I started seriously looking at whether Firefox should switch to using Bazel for its build system.

The motivation behind switching build systems was twofold.  The first motivation was that build times are one of the most visible developer-facing aspects of the build system and everybody appreciates faster builds.  What’s less obvious, but equally important, is that making builds faster improves automation: less time waiting for try builds, more flexibility to adjust infrastructure spending, and less turnaround time with automated reviews on patches submitted for review.  The second motivation was that our build system is used by exactly one project (ok, two projects), so there’s a lot of onboarding cost both in terms of developers who use the build system and in terms of developers who need to develop the build system.  If we could switch to something more off-the-shelf, we could improve the onboarding experience and benefit from work that other parties do with our chosen build system.

You may have several candidates in mind that we should have evaluated instead.  We did look at other candidates (although perhaps none so deeply as Bazel), and all of them have various issues that make them unsuitable for a switch.  The reasons for rejecting other possibilities fall into two broad categories: not enough platform support (read: Windows support) and being unlikely to deliver on making builds faster and/or improving the onboarding/development experience.  I’ll cover the projects we looked at in a separate post.

With that in mind, why Bazel?

Bazel advertises itself with the tagline “{Fast, Correct} – Choose two”.  What’s sitting behind that tagline is that when building software via, say, Make, it’s very easy to write Makefiles in such a way that builds are fast, but occasionally (or not-so-occasionally) fail because somebody forgot to specify “to build thing X, you need to have built thing Y”.  The build doesn’t usually fail because thing Y is built before thing X: maybe the scheduling algorithm for parallel execution in make chooses to build Y first 99.9% of the time, and 99% of those times, building Y finishes prior to even starting to build X.

The typical solution is to become more conservative in how you build things such that you can be sure that Y is always built before X…but typically by making the dependency implicit by, say, ordering the build commands Just So, and not by actually making the dependency explicit to make itself.  Maybe specifying the explicit dependency is rather difficult, or maybe somebody just wants to make things work.  After several rounds of these kind of fixes, you wind up with Makefiles that are (probably) correct, but probably not as fast as it could be, because you’ve likely serialized build steps that could have been executed in parallel.  And untangling such systems to the point that you can properly parallelize things and that you don’t regress correctness can be…challenging.

(I’ve used make in the above example because it’s a lowest-common denominator piece of software and because having a concrete example makes differentiating between “the software that runs the build” and “the specification of the build” easier.  Saying “the build system” can refer to either one and sometimes it’s not clear from context which is in view.  But you should not assume that the problems described above are necessarily specific to make; the problems can happen no matter what software you rely on.)

Bazel advertises a way out of the quagmire of probably correct specifications for building your software.  It does this—at least so far as I understand things, and I’m sure the Internet will come to correct me if I’m wrong—by asking you to explicitly specify dependencies up front.  Build commands can then be checked for correctness by executing the commands in a “sandbox” containing only those files specified as dependencies: if you forgot to specify something that was actually needed, the build will fail because the file(s) in question aren’t present.

Having a complete picture of the dependency graph enables faster builds in three different ways.  The first is that you can maximally parallelize work across the build.  The second is that Bazel comes with built-in facilities for farming out build tasks to remote machines.  Note that all build tasks can be distributed, not just C/C++/Rust compilation as via sccache.  So even if you don’t have a particularly powerful development machine, you can still pretend that you have a large multi-core system at your disposal.  The third is that Bazel also comes with built-in facilities for aggressive caching of build artifacts.  Again, like remote execution, this caching applies across all build tasks, not just C/C++/Rust compilation.  In Firefox development terms, this is Firefox artifact builds done “correctly”: given appropriate setup, your local build would simply download whatever was appropriate for the changes in your current local tree and rebuild the rest.

Having a complete picture of the dependency graph enables a number of other nifty features.  Bazel comes with a query language for the dependency graph, enabling you to ask questions like “what jobs need to run given that these files changed?”  This sort of query would be valuable for determining what jobs to run in automation; we have a half-hearted (and hand-updated) version of this in things like files-changed in Taskcluster job specifications.  But things like “run $OS tests for $OS-only changes” or “run just the mochitest chunk that contains the changed mochitest” become easy.

It’s worth noting here that we could indeed work towards having the entire build graph available all at once in the current Firefox build system.  And we have remote execution and caching abilities via sccache, even moreso now that sccache-dist is being deployed in Mozilla offices.  We think we have a reasonable idea of what it would take to work towards Bazel-esque capabilities with our current system; the question at hand is how a switch to Bazel compares to that and whether a switch would be more worthwhile for the health of the Firefox build system over the long term.  Future posts are going to explore that question in more detail.

Firefox UXPrototyping Firefox With CSS Grid

Prototyping with HTML and CSS grid is really helpful for understanding flexibility models. I was able to understand how my design works in a way that was completely different than doing it in a static design program.

Links:

Transcript:
So I’m working on a design of the new address bar in Firefox. Our code name for it is the QuantumBar. There’s a lot of pieces to this. But one of the things that I’ve been trying to figure out is how it fits into the Firefox toolbar and how it collapses and expands, the squishiness of the toolbar, and trying to kinda rethink that by just building it in code. So I have this sort of prototype here where I’ve recreated the Firefox top toolbar here in HTML, and you can see how it like collapses and things go away and expands as I grow and shrink this here.

I’ve used CSS Grid to do a lot of this layout. And here I’ve turned on the grid lines just for this section of the toolbar. It’s one of the many grids that I have here. But I wanna point out these like flexible spaces over here, and then this part here, the actual QuantumBar piece, right? So you can see I’ve played around with some different choices about how big things can be at this giant window size here. And I was inspired by my friend, Jen Simmons, who’s been talking about Grid for a long time, and she was explaining how figuring out whether to use minmax or auto or fr units. It’s something that is, you can only really figure out by coding it up and a little trial and error here. And it allowed me to understand better how the toolbar works as you squish it and maybe come up with some better ways of making it work.

Yeah, as we squish it down, maybe here we wanna prioritize the width of this because this is where the results are gonna show up in here, and we let these flexible spaces squish a little sooner and a little faster. And that’s something that you can do with Grid and some media queries like at this point let’s have it squished this way, at this point let’s have it squished another way. Yeah, and I also wanted to see how it would work then if your toolbar was full of lots of icons and you have the other search bar and no spacers, how does that work? And can we prioritize maybe the size of the address bar a little more so that because you’ll notice on the top is the real Firefox, we can get in weird situations where this, it’s just not even usable anymore. And maybe we should stop it from getting that small and still have it usable.

Anyway it’s a thing I’ve been playing with and what I’ve found was that using HTML and CSS to mock this up had me understand it in a way that was way better than doing it in some sort of static design program.

Wladimir PalantAvast Online Security and Avast Secure Browser are spying on you

Are you one of the allegedly 400 million users of Avast antivirus products? Then I have bad news for you: you are likely being spied upon. The culprit is the Avast Online Security extension that these products urge you to install in your browser for maximum protection.

But even if you didn’t install Avast Online Security yourself, it doesn’t mean that you aren’t affected. This isn’t obvious but Avast Secure Browser has Avast Online Security installed by default. It is hidden from the extension listing and cannot be uninstalled by regular means, its functionality apparently considered an integral part of the browser. Avast products promote this browser heavily, and it will also be used automatically in “Banking Mode.” Given that Avast bought AVG a few years ago, there is also a mostly identical AVG Secure Browser with the built-in AVG Online Security extension.

Avast watching you while browsing the web

Summary of the findings

When Avast Online Security extension is active, it will request information about your visited websites from an Avast server. In the process, it will transmit data that allows reconstructing your entire web browsing history and much of your browsing behavior. The amount of data being sent goes far beyond what’s necessary for the extension to function, especially if you compare to competing solutions such as Google Safe Browsing.

Avast Privacy Policy covers this functionality and claims that it is necessary to provide the service. Storing the data is considered unproblematic due to anonymization (I disagree), and Avast doesn’t make any statements explaining just how long it holds on to it.

What is happening exactly?

Using browser’s developer tools you can look at an extension’s network traffic. If you do it with Avast Online Security, you will see a request to https://uib.ff.avast.com/v5/urlinfo whenever a new page loads in a tab:

Request performed by Avast Online Security in Chrome's developer tools

So the extension sends some binary data and in return gets information on whether the page is malicious or not. The response is then translated into the extension icon to be displayed for the page. You can clearly see the full address of the page in the binary data, including the query part and anchor. The rest of the data is somewhat harder to interpret; I’ll get to it soon.

This request isn’t merely sent when you navigate to a page, it also happens whenever you switch tabs. And there is an additional request if you are on a search page. This one will send every single link found on this page, be it a search result or an internal link of the search engine.

What data is being sent?

The binary UrlInfoRequest data structure used here can be seen in the extension source code. It is rather extensive however, with a number of fields being nested types. Also, some fields appear to be unused, and the purpose of others isn’t obvious. Finally, there are “custom values” there as well which are a completely arbitrary key/value collection. That’s why I decided to stop the extension in the debugger and have a look at the data before it is turned into binary. If you want to do it yourself, you need to find this.message() call in scripts/background.js and look at this.request after this method is called.

The interesting fields were:

  • uri: The full address of the page you are on.
  • title: Page title if available.
  • referer: Address of the page that you got here from, if any.
  • windowNum, tabNum: Identifiers of the window and tab that the page loaded into.
  • initiating_user_action, windowEvent: How exactly you got to the page, e.g. by entering the address directly, using a bookmark or clicking a link.
  • visited: Whether you visited this page before.
  • locale: Your country code, which seems to be guessed from the browser locale. This will be “US” for US English.
  • userid: A unique user identifier generated by the extension (the one visible twice in the screenshot above, starting with “d916”). For some reason this one wasn’t set for me when Avast Antivirus was installed.
  • plugin_guid: Seems to be another unique user identifier, the one starting with “ceda” in the screenshot above. Also not set for me when Avast Antivirus was installed.
  • browserType, browserVersion: Type (e.g. Chrome or Firefox) and version number of your browser.
  • os, osVersion: Your operating system and exact version number (the latter only known to the extension if Avast Antivirus is installed).

And that’s merely the fields which were set. The data structure also contains fields for your IP address and a hardware identifier but in my tests these stayed unused. It also seems that for paying Avast customers the identifier of the Avast account would be transmitted as well.

What does this data tell about you?

The data collected here goes far beyond merely exposing the sites that you visit and your search history. Tracking tab and window identifiers as well as your actions allows Avast to create a nearly precise reconstruction of your browsing behavior: how many tabs do you have open, what websites do you visit and when, how much time do you spend reading/watching the contents, what do you click there and when do you switch to another tab. All that is connected to a number of attributes allowing Avast to recognize you reliably, even a unique user identifier.

If you now think “but they still don’t know who I am” – think again. Even assuming that none of the website addresses you visited expose your identity directly, you likely have a social media account. There has been a number of publications showing that, given a browsing history, the corresponding social media account can be identified in most cases. For example, this 2017 study concludes:

Of the 374 people who confirmed the accuracy of our de-anonymization attempt, 268 (72%) were the top candidate generated by the MLE, and 303 participants (81%) were among the top 15 candidates. Consistent with our simulation results, we were able to successfully de-anonymize a substantial proportion of users who contributed their web browsing histories.

With the Avast data being far more extensive, it should allow identifying users with an even higher precision.

Isn’t this necessary for the extension to do its job?

No, the data collection is definitely unnecessary to this extent. You can see this by looking at how Google Safe Browsing works, the current approach being largely unchanged compared to how it was integrated in Firefox 2.0 back in 2006. Rather than asking a web server for each and every website, Safe Browsing downloads lists regularly so that malicious websites can be recognized locally.

No information about you or the sites you visit is communicated during list updates. […] Before blocking the site, Firefox will request a double-check to ensure that the reported site has not been removed from the list since your last update. This request does not include the address of the visited site, it only contains partial information derived from the address.

I’ve seen a bunch of similar extensions by antivirus vendors, and so far all of them provided this functionality by asking the antivirus app. Presumably, the antivirus has all the required data locally and doesn’t need to consult the web service every time. In fact, I could see Avast Online Security also consult the antivirus application for the websites you visit if this application is installed. It’s an additional request however, the request to the web service goes out regardless. Update (2019-10-29): I understand this logic better now, and the requests made to the antivirus application have a different purpose.

Wait, but Avast Antivirus isn’t always installed! And maybe the storage requirements for the full database exceed what browser extensions are allowed to store. In this case the browser extension has no choice but to ask the Avast web server about every website visited. But even then, this isn’t a new problem. For example, the Mozilla community had a discussion roughly a decade ago about whether security extensions really need to collect every website address. The decision here was: no, sending merely the host name (or even a hash of it) is sufficient. If higher precision is required, the extension could send the full address only if a potential match is detected.
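To make the host-hashing idea concrete, here is a rough sketch of what sending only partial, derived information could look like, using the url and sha2 crates. This illustrates the principle only, not the protocol any particular vendor actually uses:

use sha2::{Digest, Sha256};
use url::Url;

// Derive a short, non-reversible prefix from the host name only; the path,
// query, and anchor never leave the machine.
fn host_prefix(page_url: &str) -> Option<[u8; 4]> {
    let host = Url::parse(page_url).ok()?.host_str()?.to_owned();
    let digest = Sha256::digest(host.as_bytes());
    let mut prefix = [0u8; 4];
    prefix.copy_from_slice(&digest[..4]);
    Some(prefix)
}

fn main() {
    // Only these four bytes would be sent for the lookup.
    println!("{:02x?}", host_prefix("https://example.com/private/path?q=secret"));
}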

What about the privacy policy?

But Avast has a privacy policy. They surely explained there what they need this data for and how they handle it. There will most certainly be guarantees in there that they don’t keep any of this data, right?

Let’s have a look. The privacy policy is quite long and applies to all Avast products and websites. The relevant information doesn’t come until the middle of it:

We may collect information about the computer or device you are using, our products and services running on it, and, depending on the type of device it is, what operating systems you are using, device settings, application identifiers (AI), hardware identifiers or universally unique identifiers (UUID), software identifiers, IP Address, location data, cookie IDs, and crash data (through the use of either our own analytical tools or tolls provided by third parties, such as Crashlytics or Firebase). Device and network data is connected to the installation GUID.

We collect device and network data from all users. We collect and retain only the data we need to provide functionality, monitor product and service performance, conduct research, diagnose and repair crashes, detect bugs, and fix vulnerabilities in security or operations (in other words, fulfil our contract with you to provision the service).

Unfortunately, after reading this passage I still don’t know whether they retain this data for me. I mean, “conduct research” for example is a very wide term and who knows what data they need to do it? Let’s look further.

Our AntiVirus and Internet security products require the collection of usage data to be fully functional. Some of the usage data we collect include:

[…]

  • information about where our products and services are used, including approximate location, zip code, area code, time zone, the URL and information related to the URL of sites you visit online

[…]

We use this Clickstream Data to provide you malware detection and protection. We also use the Clickstream Data for security research into threats. We pseudonymize and anonymize the Clickstream Data and re-use it for cross-product direct marketing, cross-product development and third party trend analytics.

And that seems to be all of it. In other words, Avast will keep your data and they don’t feel like they need your approval for that. They also reserve the right to use it in pretty much any way they like, including giving it to unnamed third parties for “trend analytics.” That is, as long as the data is considered anonymized. Which it probably is, given that technically the unique user identifier is not tied to you as a person. That your identity can still be deduced from the data – well, bad luck for you.

Edit (2019-10-29): I got a hint that Avast acquired Jumpshot a bunch of years ago. And if you take a look at the Jumpshot website, they list “clickstream data from 100 million global online shoppers and 20 million global app users” as their product. So you now have a pretty good guess as to where your data is going.

Conclusions

Avast Online Security collecting personal data of their users is not an oversight and not necessary for the extension functionality either. The extension attempts to collect as much context data as possible, and it does so on purpose. The Avast privacy policy shows that Avast is aware of the privacy implications here. However, they do not provide any clear retention policy for this data. They rather appear to hold on to the data forever, feeling that they can do anything with it as long as the data is anonymized. The fact that browsing data can usually be deanonymized doesn’t instill much confidence however.

This is rather ironic given that all modern browsers have phishing and malware protection built in that does essentially the same thing but with a much smaller privacy impact. In principle, Avast Secure Browser has this feature as well, it being Chromium-based. However, all Google services have been disabled and removed from the settings page – the browser won’t let you send any data to Google, sending way more data to Avast instead.

Update (2019-10-28): Somehow I didn’t find existing articles on the topic when I searched initially. This article mentions the same issue in passing, it was published in January 2015 already. The screenshot there shows pretty much the same request, merely with less data.

IRL (podcast)“The Weird Kids at the Big Tech Party” from ZigZag

Season 4 of ZigZag is about examining the current culture of business and work, figuring out what needs to change, and experimenting with new ways to do it. Sign up for their newsletter and subscribe to the podcast for free wherever you get your podcasts.

Niko Matsakiswhy async fn in traits are hard

After reading boat’s excellent post on asynchronous destructors, I thought it might be a good idea to write some about async fn in traits. Support for async fn in traits is probably the single most common feature request that I hear about. It’s also one of the more complex topics. So I thought it’d be nice to do a blog post kind of giving the “lay of the land” on that feature – what makes it complicated? What questions remain open?

I’m not making any concrete proposals in this post, just laying out the problems. But do not lose hope! In a future post, I’ll lay out a specific roadmap for how I think we can make incremental progress towards supporting async fn in traits in a useful way. And, in the meantime, you can use the async-trait crate (but I get ahead of myself…).

The goal

In some sense, the goal is simple. We would like to enable you to write traits that include async fn. For example, imagine we have some Database trait that lets you do various operations against a database, asynchronously:

trait Database {
    async fn get_user(
        &self, 
    ) -> User;
}

Today, you should use async-trait

Today, of course, the answer is that you should use dtolnay’s excellent async-trait crate. This allows you to write almost what we wanted:

#[async_trait]
trait Database {
    async fn get_user(&self) -> User;
}

But what is really happening under the hood? As the crate’s documentation explains, this declaration is getting transformed to the following. Notice the return type.

trait Database {
    fn get_user(&self) -> Pin<Box<dyn Future<Output = User> + Send + '_>>;
}

So basically you are returning a boxed dyn Future – a future object, in other words. This desugaring is rather different from what happens with async fn in other contexts – but why is that? The rest of this post is going to explain some of the problems that async fn in traits is trying to solve, which may help explain why we have a need for the async-trait crate to begin with!
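To make that concrete, here is a minimal, hedged sketch of the crate in use end to end. The PgDatabase type and its body are made up for illustration; the point is that the same attribute goes on both the trait and the impl, so the boxed-future signatures line up:

use async_trait::async_trait;

struct User;
struct PgDatabase; // hypothetical implementor, purely for illustration

#[async_trait]
trait Database {
    async fn get_user(&self) -> User;
}

#[async_trait]
impl Database for PgDatabase {
    async fn get_user(&self) -> User {
        // ... the real asynchronous query would go here ...
        User
    }
}

fn main() {
    // Any executor works; futures::executor::block_on keeps the sketch small.
    let _user = futures::executor::block_on(PgDatabase.get_user());
}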

Async fn normally returns an impl Future

We saw that the async-trait crate converts an async fn to something that returns a dyn Future. This is contrast to the async fn desugaring that the Rust compiler uses, which produces an impl Future. For example, imagine that we have an inherent method async fn get_user() defined on some particular service type:

impl MyDatabase {
    async fn get_user(&self) -> User {
        ...
    }
}

This would get desugared to something similar to:

impl MyDatabase {
    fn get_user(&self) -> impl Future<Output = User> + '_ {
        ... 
    }
}

So why does async-trait do something different? Well, it’s because of “Complication #1”…

Complication #1: returning impl Trait in traits is not supported

Currently, we don’t support -> impl Trait return types in traits. Logically, though, we basically know what the semantics of such a construct should be: it is equivalent to a kind of associated type. That is, the trait is promising that invoking get_user will return some kind of future, but the precise type will be determined by the details of the impl (and perhaps inferred by the compiler). So, if we know logically how impl Trait in traits should behave, what stops us from implementing it? Well, let’s see…

Complication #1a. impl Trait in traits requires GATs

Let’s return to our Database example. Imagine that we permitted async fn in traits. We would therefore desugar

trait Database {
    async fn get_user(&self) -> User;
}

into something that returns an impl Future:

trait Database {
    fn get_user(&self) -> impl Future<Output = User> + '_;
}

and then we would in turn desugar that into something that uses an associated type:

trait Database {
    type GetUser<'s>: Future<Output = User> + 's;
    fn get_user(&self) -> Self::GetUser<'_>;
}

Hmm, did you notice that I wrote type GetUser<'s>, and not type GetUser? Yes, that’s right, this is not just an associated type, it’s actually a generic associated type. The reason for this is that async fn always capture all of their arguments – so whatever type we return will include the &self as part of it, and therefore it has to include the lifetime 's. So, that’s one complication, we have to figure out generic associated types.

Now, in some sense that’s not so bad. Conceptually, GATs are fairly simple. Implementation-wise, though, we’re still working on how to support them in rustc – this may require porting rustc to use chalk, though that’s not entirely clear. In any case, this work is definitely underway, but it’s going to take more time.

Unfortunately for us, GATs are only the beginning of the complications around async fn (and impl Trait) in traits!

Complication #2: send bounds (and other bounds)

Right now, when you write an async fn, the resulting future may or may not implement Send – the result depends on what state it captures. The compiler infers this automatically, basically, in typical auto trait fashion.

But if you are writing generic code, you may well need to require that the resulting future is Send. For example, imagine we are writing a finagle_database thing that, as part of its inner workings, happens to spawn off a parallel thread to get the current user. Since we’re going to be spawning a thread with the result from d.get_user(), that result is going to have to be Send, which means we’re going to want to write a function that looks something like this1:

fn finagle_database<D: Database>(d: &D)
where
    for<'s> D::GetUser<'s>: Send,
{
    ...
    spawn(d.get_user());
    ...
}

This example seems “ok”, but there are four complications:

  • First, we wrote the name GetUser, but that is something we introduced as part of “manually” desugaring async fn get_user. What name would the user actually use?
  • Second, writing for<'s> D::GetUser<'s> is kind of grody, we’re obviously going to want more compact syntax (this is really an issue around generic associated types in general).
  • Third, our example Database trait has only one async fn, but obviously there might be many more. Probably we will want to make all of them Send or none of them – so you can expect a lot more grody bounds in a real function!
  • Finally, forcing the user to specify which exact async fns have to return Send futures is a semver hazard.

Let me dig into those a bit.

Complication #2a. How to name the associated type?

So we saw that, in a trait, returning an impl Trait value is equivalent to introducing a (possibly generic) associated type. But how should we name this associated type? In my example, I introduced a GetUser associated type as the result of the get_user function. Certainly, you could imagine a rule like “take the name of the function and convert it to camel case”, but it feels a bit hokey (although I suspect that, in practice, it would work out just fine). There have been other proposals too, such as typeof expressions and the like.

Complication #2b. Grody, complex bounds, especially around GATs.

In my example, I used the strawman syntax for<'s> D::GetUser<'s>: Send. In real life, unfortunately, the bounds you need may well get more complex still. Consider the case where an async fn has generic parameters itself:

trait Foo {
    async fn bar<A, B>(a: A, b: B);
}

Here, the future that results from bar is only going to be Send if A: Send and B: Send. This suggests a bound like

where
    for<A: Send, B: Send> { S::bar<A, B>: Send }

From a conceptual point-of-view, bounds like these are no problem. Chalk can handle them just fine, for example. But I think this is pretty clearly a problem and not something that ordinary users are going to want to write on a regular basis.

Complication #2c. Listing specific associated types reveals implementation details

If we require functions to specify the exact futures that are Send, that is not only tedious, it could be a semver hazard. Consider our finagle_database function – from its where clause, we can see that it spawns out get_user into a scoped thread. But what if we wanted to modify it in the future to spawn off more database operations? That would require us to modify the where-clauses, which might in turn break our callers. Seems like a problem, and it suggests that we might want some way to say “all possible futures are send”.

Conclusion: We might want a new syntax for propagating auto traits to async fns

All of this suggests that we might want some way to propagate auto traits through to the results of async fns explicitly. For example, you could imagine supporting async bounds, so that we might write async Send instead of just Send:

pub fn finagle_database<DB>(t: DB)
where
    DB: Database + async Send,
{
}

This syntax would be some kind of “default” that expands to explicit Send bounds on both DB and all the futures potentially returned by DB.

Or perhaps we’d even want to avoid any syntax, and somehow “rejigger” how Send works when applied to traits that contain async fns? I’m not sure about how that would work.

It’s worth pointing out that this same problem can occur with impl Trait in return position2, or indeed any associated types. Therefore, we might prefer a syntax that is more general and not tied to async.

Complication #3: supporting dyn traits that have async fns

Now imagine that we had our trait Database, containing an async fn get_user. We might like to write functions that operate over dyn Database values. There are many reasons to prefer dyn Database values:

  • We don’t want to generate many copies of the same function, one per database type;
  • We want to have collections of different sorts of databases, such as a Vec<Box<dyn Database>> or something like that.

In practice, a desire to support dyn Trait comes up in a lot of examples where you would want to use async fn in traits.

Complication #3a: dyn Trait have to specify their associated type values

We’ve seen that async fn in traits effectively desugars to a (generic) associated type. And, under the current Rust rules, when you have a dyn Trait value, the type must specify the values for all associated types. If we consider our desugared Database trait, then, it would have to be written dyn Database<GetUser<'s> = XXX>. This is obviously no good, for two reasons:

  1. It would require us to write out the full type for the GetUser, which might be super complicated.
  2. And anyway, each dyn Database is going to have a distinct GetUser type. If we have to specify GetUser, then, that kind of defeats the point of using dyn Database in the first place, as the type is going to be specific to some particular service, rather than being a single type that applies to all services.

Complication #3b: no “right choice” for X in dyn Database<GetUser<'s> = X>

When we’re using dyn Database, what we actually want is a type where GetUser is not specified. In other words, we just want to write dyn Database, full stop, and we want that to be expanded to something that is perhaps “morally equivalent” to this:

dyn Database<GetUser<'s> = dyn Future<..> + 's>

In other words, all the caller really wants to know when it calls get_user is that it gets back some future which it can poll. It doesn’t want to know exactly which one.

Unfortunately, actually using dyn Future<..> as the type there is not a viable choice. We probably want a Sized type, so that the future can be stored, moved into a box, etc. We could imagine then that dyn Database defaults its “futures” to Box<dyn Future<..>> instead – well, actually, Pin<Box<dyn Future>> would be a more ergonomic choice – but there are a few concerns with that.

First, using Box seems rather arbitrary. We don’t usually make Box this “special” in other parts of the language.

Second, where would this box get allocated? The actual trait impl for our service isn’t using a box, it’s creating a future type and returning it inline. So we’d need to generate some kind of “shim impl” that applies whenever something is used as a dyn Database – this shim impl would invoke the main function, box the result, and return that.

Third, because a dyn Future type hides the underlying future (that is, indeed, its entire purpose), it also blocks the auto trait mechanism from figuring out if the result is Send. Therefore, when we make e.g. a dyn Database type, we need to specify not only the allocation mechanism we’ll use to manipulate the future (i.e., do we use Box?) but also whether the future is Send or not.

Now you see why async-trait desugars the way it does

After reviewing all these problems, we now start to see where the design of the async-trait crate comes from:

  • To avoid Complications #1 and #2, async-trait desugars async fn to return a dyn Future instead of an impl Future.
  • To avoid Complication #3, async-trait chooses for you to use a Pin<Box<dyn Future + Send>> (you can opt-out from the Send part). This is almost always the correct default.

All in all, it’s a very nice solution.

The only real drawback here is that there is some performance hit from boxing the futures – but I suspect it is negligible in almost all applications. I don’t think this would be true if we boxed the results of all async fns; there are many cases where async fns are used to create small combinators, and there the boxing costs might start to add up. But only boxing async fns that go through trait boundaries is very different. And of course it’s worth highlighting that most languages box all their futures, all of the time. =)

Summary

So to sum it all up, here are some of the observations from this article:

  • async fn desugars to a fn returning impl Trait, so if we want to support async fn in traits, we should also support fns that return impl Trait in traits.
    • It’s worth pointing out also that sometimes you have to manually desugar an async fn to a fn that returns impl Future to avoid capturing all your arguments, so the two go hand in hand.
  • Returning impl Trait in a trait is equivalent to an associated type in the trait.
    • This associated type does need to be nameable, but what name should we give this associated type?
    • Also, this associated type often has to be generic, especially for async fn.
  • Applying Send bounds to the futures that can be generated is tedious, grody, and reveals semver details. We probably need some way to make that more ergonomic.
    • This quite likely applies to the general impl Trait case too, but it may come up somewhat less frequently.
  • We do want the ability to have dyn Trait versions of traits that contain associated functions and/or impl Trait return types.
    • But currently we have no way to have a dyn Trait without fully specifying all of its associated types; in our case, those associated types have a 1-to-1 relationship with the Self type, so that defeats the whole point of dyn Trait.
    • Therefore, in the case of dyn Trait, we would want the async fns within it to return some form of dyn Future. But we would have to effectively “hardcode” two choices:
      • What form of pointer to use (e.g., Box)
      • Is the resulting future Send, Sync, etc
    • This applies to the general impl Trait case too.

The goal of this post was just to lay out the problems. I hope to write some follow-up posts digging a bit into the solutions – though for the time being, the solution is clear: use the async-trait crate.
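
For reference, here is a minimal sketch of what using the crate looks like today. The trait and type names are illustrative, not taken from any real service, and this assumes the async-trait crate as a dependency:

use async_trait::async_trait;

struct User;

#[async_trait]
trait Database {
    async fn get_user(&self) -> User;
}

struct MyDatabase;

#[async_trait]
impl Database for MyDatabase {
    async fn get_user(&self) -> User {
        User
    }
}

// Because every method now returns a boxed future, `dyn Database` just works:
async fn fetch(db: &dyn Database) -> User {
    db.get_user().await
}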

Footnotes

  1. Astute readers might note that I’m eliding a further challenge, which is that you need a scoping mechanism here to handle the lifetimes. Let’s assume we have something like Rayon’s scope or crossbeam’s scope available. 

  2. Still, consider a trait IteratorX that is like Iterator, where the adapters return impl Trait. In such a case, you probably want a way to say not only “I take a T: IteratorX + Send” but also that the IteratorX values returned by calls to map and the like are Send. Presently you would have to list out the specific associated types you want, which also winds up revealing implementation details. 

Robert O'Callahan: Pernosco Demo Video

Over the last few years we have kept our work on the Pernosco debugger mostly under wraps, but finally it's time to show the world what we've been working on! So, without further ado, here's an introductory demo video showing Pernosco debugging a real-life bug:

This demo is based on a great gdb tutorial created by Brendan Gregg. If you read his blog post, you'll get more background and be able to compare Pernosco to the gdb experience.

Pernosco makes developers more productive by providing scalable omniscient debugging — accelerating existing debugging strategies and enabling entirely new strategies and features — and by integrating that experience into cloud-based workflows. The latter includes capturing test failures occurring in CI so developers can jump into a debugging session with one click on a URL, separating failure reproduction from debugging so QA staff can record test failures and send debugger URLs to developers, and letting developers collaborate on debugging sessions.

Over the next few weeks we plan to say a lot more about Pernosco and how it benefits software developers, including a detailed breakdown of its approach and features. To see those updates, follow @_pernosco_ or me on Twitter. We're opening up now because we feel ready to serve more customers and we're keen to talk to people who think they might benefit from Pernosco; if that's you, get in touch. (Full disclosure: Pernosco uses rr, so for now we're limited to x86-64 Linux and statically compiled languages like C/C++/Rust.)

The Mozilla Blog: Longtime Mozilla board member Bob Lisbonne moves from Foundation to Corporate Board; Outgoing CEO Chris Beard Corporate Board Term Ends

Today, Mozilla Co-Founder and Chairwoman Mitchell Baker announced that Mozilla Foundation Board member Bob Lisbonne has moved to the Mozilla Corporation Board; and as part of a planned, phased transition, Mozilla Corporation’s departing CEO Chris Beard has stepped down from his role as a Mozilla Corporation board member.

“We are in debt to Chris for his myriad contributions to Mozilla,” said Mozilla Chairwoman and Co-Founder Mitchell Baker. “We’re fortunate to have Bob make this shift at a time when his expertise is so well matched for Mozilla Corporation’s current needs.”

Bob has been a member of the Mozilla Foundation Board since 2006, but his contributions to the organization began with Mozilla’s founding. Bob played an important role in converting the earlier Netscape code into open source code and was part of the team that launched the Mozilla project in 1998.

“I’m incredibly fortunate to have been involved with Mozilla for over two decades,” said Bob Lisbonne. “Creating awesome products and services that advance the Mozilla mission remains as important as ever. In this new role, I’m eager to contribute my expertise and help advance the Internet as a global public resource, open and accessible to all.”

During his tenure on the Mozilla Foundation board, Bob has been a significant creative force in building both the Foundation’s programs — in particular the programs that led to MozFest — and the strength of the board. As he moves to the Mozilla Corporation Board, Bob will join the other Mozilla Corporation Board members in selecting, onboarding, and supporting a new CEO for Mozilla Corporation. Bob’s experience across innovation, investment, strategy and execution in the startup and technology arenas is particularly well suited to Mozilla Corporation’s setting.

Bob’s technology career spans 25 years, during which he played key roles as entrepreneur, venture capitalist, and executive. He was CEO of internet startup Luminate, and a General Partner with Matrix Partners. He has served on the Boards of companies which IPO’ed and were acquired by Cisco, HP, IBM, and Yahoo, among others. For the last five years, Bob has been teaching at Stanford University’s Graduate School of Business.

With Bob’s move and Chris’ departure, the Mozilla Corporation board will include: Mitchell Baker, Karim Lakhani, Julie Hanna, and Bob Lisbonne. The remaining Mozilla Foundation board members are: Mitchell Baker, Brian Behlendorf, Ronaldo Lemos, Helen Turvey, Nicole Wong and Mohamed Nanabhay.

The Mozilla Foundation board will begin taking steps to fill the vacancy created by Bob’s move. At the same time, the Mozilla Corporation board’s efforts to expand its make-up will continue.

Founded as a community open source project in 1998, Mozilla currently consists of two organizations: the 501(c)3 Mozilla Foundation, which backs emerging leaders and mobilizes citizens to create a global movement for the health of the internet; and its wholly owned subsidiary, the Mozilla Corporation, which creates products, advances public policy and explores new technologies that give people more control over their lives online, and shapes the future of the internet platform for the public good. Each is governed by a separate board of directors. The two organizations work in concert with each other and a global community of tens of thousands of volunteers under the single banner: Mozilla.

Because of its unique structure, Mozilla stands apart from its peers in the technology and social enterprise sectors globally as one of the most impactful and successful social enterprises in the world.

The Firefox Frontier: Firefox Extension Spotlight: Enhancer for YouTube

“I wanted to offer a useful extension people can trust,” explains Maxime RF, creator of Enhancer for YouTube, a browser extension providing a broad assortment of customization options so you …

Jan-Erik Rediger: This Week in Glean: A Release

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean.)

Last week's blog post: This Week in Glean: Glean on Desktop (Project FOG) by chutten.


Back in June when Firefox Preview shipped, it also shipped with Glean, our new Telemetry library, initially targeting mobile platforms. Georg recently blogged about the design principles of Glean in Introducing Glean — Telemetry for humans.

Plans for improving mobile telemetry for Mozilla go back as far as December 2017. The first implementation of the Glean SDK was started around August 2018, all written in Kotlin (though back then it was mostly ideas in a bunch of text documents). This implementation shipped in Firefox Preview and was used up until now.

On March 18th I created an initial Rust workspace. This kicked off a rewrite of Glean in Rust as a cross-platform telemetry SDK to be used on Android, iOS, and eventually desktop platforms again.

1382 commits later[1] I tagged v19.0.0[2].

Obviously that doesn't make people use it right away, but given that all consumers of Glean right now are Mozilla products, it's up to us to get them to use it. So Alessio did just that by upgrading Android Components, a collection of Android libraries to build browsers or browser-like applications, to this new version.

This will soon roll out to nightly releases of Firefox Preview and, provided we don't hit any larger bugs, hit the release channel in about 2 weeks. Additionally, that finally unblocks the Glean team to work on new features, ironing out some sharp edges and bringing Glean to Firefox on Desktop. Oh, and of course we still need to actually release it for iOS.

Thanks

Glean in Rust is the project I've been constantly working on since March. But getting it to a release was a team effort with help from a multitude of people and teams.

Thanks to everyone on the Glean SDK team:

Thanks to Frank Bertsch for a ton of backend work as well as the larger Glean pipeline team led by Mark Reid, to ensure we can handle the incoming telemetry data and also reliably analyse it. Thanks to the Data Engineering team led by Katie Parlante. Thanks to Mihai Tabara, the Release Engineering team and the Cloud Operations team, to help us with the release on short notice. Thanks to the Application Services team for paving the way of developing mobile libraries with Rust and to the Android Components team for constant help with Android development.


[1] Not all of which are just code for the Android version. There's a lot of documentation too.

[2] This is the first released version. This is just the version number that follows after the Kotlin implementation. Version numbers are cheap.

Hacks.Mozilla.Org: From js13kGames to MozFest Arcade: A game dev Web Monetization story

This is a short story of how js13kGames, an online “code golf” competition for web game developers, tried out Web Monetization this year, and ended up at the Mozilla Festival, happening this week in London, where we’re showcasing some of our winning entries.

Decorative banner for the js13K Games MozFest Arcade

A brief history of js13kGames

The js13kGames online competition for HTML5 game developers is constantly evolving. We started in 2012, and we run every year from August 13th to September 13th. In 2017, we added a new A-Frame category.

You still had to build web games that would fit within the 13 kilobytes zipped package as before, but the new category added the A-Frame framework “for free”, so it wasn’t counted towards the size limit. The new category resulted in some really cool entries.

Fast forward twelve months to 2018 – the category changed its name to WebXR. We added Babylon.js as a second option. In 2019, the VR category was extended again, with Three.js as the third library of choice. Thanks to the Mozilla Mixed Reality team we were able to give away three Oculus Quest devices to the winning entries.

The evolution of judging mechanics

The process for judging js13kGames entries has also evolved. At the beginning, about 60 games were submitted each year. Judges could play all the games to judge them fairly. In recent years, we’ve received nearly 250 entries. It’s really hard to play all of them, especially since judges tend to be busy people. And then, how can you be sure you scored fairly?

That’s why we introduced a new voting system. The role of judges changed: they became experts focused on giving constructive feedback, rather than scoring. Expert feedback is highly valued by participants as one of the most important benefits of the competition.

At the same time, Community Awards became the official results. We upgraded the voting system with the new mechanism of “1 on 1 battles.” By comparing two games at once, you can focus and judge them fairly, and then move on to vote on another pair.

Voters compared the games based on consistent criteria: gameplay, graphics, theme, etc. This made “Community” votes valuable to developers as a feedback mechanism also. Developers could learn what their game was good at, and where they could improve. Many voting participants also wrote in constructive feedback, similar to what the experts provided. This feedback was accurate and eventually valuable for future improvements.

Web Monetization in the world of indie games

js13kGames Web Monetization with Coil

This year we introduced the Web Monetization category in partnership with Coil. The challenge to developers was to integrate Web Monetization API concepts within their js13kGames entries. Out of 245 games submitted overall, 48 entries (including WebXR ones) had implemented the Web Monetization API. It wasn’t that difficult.

Basically, you add a special monetization meta tag to index.html:

<!DOCTYPE HTML>
<html>
<head>
  <meta charset="utf-8">
  <title>Flood Escape</title>
  <meta name="monetization" content="your_payment_pointer">
  <!-- ... -->
</head>

And then you need to add code to detect if a visitor is a paid subscriber (to Coil or any other similar service available in the future):

if(document.monetization && document.monetization.state === 'started') {
  // do something
}

You can do this detection via an event too:

function startEventHandler(event){
  // do something
}
// the monetization object only exists in monetization-capable browsers
if (document.monetization) {
  document.monetization.addEventListener('monetizationstart', startEventHandler);
}

If the monetization event starts, that means the visitor has been identified as a paying subscriber. Then they can receive extra or special content: be it more coins, better weapons, shorter cooldown, extra level, or any other perk for the player.

It’s that simple to implement web monetization! No more bloated, ever-changing SDKs to place advertisements into the games. No more waiting months for reports to see if spending time on this was even worth it.

The Web Monetization API gives game developers and content creators a way to monetize their creative work, without compromising their values or the user experience. As developers, we don’t have to depend on annoying in-game ads that interrupt the player. We can get rid of tracking scripts invading player privacy. That’s why Enclave Games creations never have any ads. Instead, we’ve implemented the Web Monetization API. We now offer extra content and bonuses to subscribers.

See you at MozFest

This all leads to London for the 2019 Mozilla Festival. Working with Grant for the Web, we’ve prepared something special: MozFest Arcade.

If you’re attending Mozfest, check out our special booth with game stations, gamepads, virtual reality headsets, and more. You will be able to play Enclave Games creations and js13kGames entries that are web-monetized! You can see for yourself how it all works under the hood.

Grant for the Web is a $100M fund to boost open, fair, and inclusive standards and innovation in web monetization. It is funded and led by Coil, working in collaboration with founding collaborators Mozilla and Creative Commons. (Additional collaborators may be added in the future.) A program team, led by Loup Design & Innovation, manages the day-to-day operations of the program.

It aims to distribute grants to web creators who would like to try web monetization as their business model, to earn revenue, and offer real competition to intrusive advertising, paywalls, and closed marketplaces.

If you’re in London, please join us at Friday’s Science Fair at MozFest House. You can learn more about Web Monetization and Grant for the Web while playing cool games. Also, you can get a free Coil subscription in the process. Join us through the weekend at the Indie Games Arcade at Ravensbourne University!

Mike Hoye: The State Of Mozilla, 2019

As I’ve done in previous years, here’s The State Of Mozilla, as observed by me and presented by me to our local Linux user group.

Presentation: https://www.youtube.com/embed/RkvDnIGbv4w

And Q&A: https://www.youtube.com/embed/jHeNnSX6GcQ

Nothing tectonic in there – I dodged a few questions, because I didn’t want to undercut the work that was leading up to the release of Firefox 70, but mostly harmless stuff.

Can’t be that I’m getting stockier, though. Must be the shirt that’s unflattering. That’s it.

Mozilla Addons Blog: Firefox Preview/GeckoView Add-ons Support

Back in June, Mozilla announced Firefox Preview, an early version of the new browser for Android that is built on top of Firefox’s own mobile browser engine, GeckoView. We’ve gotten great feedback about the superior performance of GeckoView so far. Not only is it faster than ever, it also opens up many opportunities for building deeper privacy features that we have already started exploring. A lot of users were wondering what this step meant for add-ons.

We’re happy to confirm that GeckoView is currently building support for extensions through the WebExtensions API. This feature will be available in Firefox Preview, and we are looking forward to offering a great experience for both mobile users and developers.

Bringing GeckoView and Firefox Preview up to par with the APIs that were supported previously in Firefox for Android won’t happen overnight. For the remainder of 2019 and leading into 2020, we are focusing on building support for a selection of content from our Recommended Extensions program that work well on mobile and cover a variety of utilities and features.

At the moment, Firefox Preview does not yet officially support extensions. While some members of the community have discovered that some extensions inadvertently work in Firefox Preview, we do not recommend attempting to install them until they are officially supported as other issues may arise. We expect to implement support for the initial selection of extensions in the first half of 2020, and will post updates here as we make progress.

If you haven’t yet had a chance, why don’t you give Firefox Preview a try and let us know what you think?

Hacks.Mozilla.Org: The two-value syntax of the CSS Display property

If you like to read release notes, then you may have spotted in the Firefox 70 notes a line about the implementation of the two-value syntax of the display CSS property. Or maybe you saw a mention in yesterday’s Firefox 70 roundup post. Today I’ll explain what this means, and why understanding this two-value syntax is important despite only having an implementation in Firefox right now.

The display property

The display property is how we change the formatting context of an element and its children. In CSS some elements are block by default, and others are inline. This is one of the first things you learn about CSS.

The display property enables switching between these states. For example, this means that an h1, usually a block element, can be displayed inline. Or a span, initially an inline element, can be displayed as a block.

More recently we have gained CSS Grid Layout and Flexbox. To access these we also use values of the display property — display: grid and display: flex. Only when the value of display is changed do the children become flex or grid items and begin to respond to the other properties in the grid or flexbox specifications.

Two-value display – span with display: flex

What grid and flexbox demonstrate, however, is that an element has both an outer and an inner display type. When we use display: flex we create a block-level element, with flex children. The children are described as participating in a flex formatting context. You can see this if you take a span and apply display: flex to it — the span is now block-level. It behaves as block-level things do in relationship to other boxes in the layout. It’s as if you had applied display: block to the span, however we also get the changed behavior of the children. In the CodePen below you can see that the string of text and the em have become two flex items.

See the Pen “Mozilla Hacks two-value Display: span with display: flex” by rachelandrew (@rachelandrew) on CodePen.

Two-value display – span with display: grid

Grid layout behaves in the same way. If we use display: grid we create a block-level element and a grid formatting context for the children. We also have methods to create an inline-level box with flex or grid children with display: inline-flex and display: inline-grid. The next example shows a div, normally a block-level element, behaving as an inline element with grid item children.

As an inline element, the box does not take up all the space in the inline dimension, and the following string of text displays next to it. The children however are still grid items.

See the Pen “Mozilla Hacks two-value display: inline-grid” by rachelandrew (@rachelandrew) on CodePen.

Refactoring display

As the above examples show, the outer display type of an element is always block or inline, and dictates how the box behaves in the normal flow of the document. The inner display type then changes the formatting context of the children.

To better describe this behavior, the CSS Display specification has been refactored to allow for display to accept two values. The first describes whether the outer display type is block or inline, whereas the second value describes the formatting of the children. This table shows how some of these new values map to the single values – now referred to as legacy values – in the spec.

Single value    New value
block           block flow
flow-root       block flow-root
inline          inline flow
inline-block    inline flow-root
flex            block flex
inline-flex     inline flex
grid            block grid
inline-grid     inline grid

There are more values of display, including lists and tables; to see the full set of values visit the CSS Display Specification.

We can see how this would work for Flexbox. If I want to have a block-level element with flex children I use display: block flex, and if I want an inline-level element with flex children I use display: inline flex. The below example will work in Firefox 70.

See the Pen “Mozilla Hacks two-value Display: two value flex values” by rachelandrew (@rachelandrew) on CodePen.

Our trusty display: block and display: inline don’t remain untouched either: display: block becomes display: block flow – that is, a block element with children participating in normal flow. A display: inline element becomes display: inline flow.

display: inline-block and display: flow-root

This all becomes more interesting if we look at a couple of values of display – one new, one which dates back to CSS2. Inline boxes in CSS are designed to sit inside a line box, the anonymous box which wraps each line of text in a sentence. This means that they behave in certain ways: If you add padding to all of the edges of an inline box, such as in the example below where I have given the inline element a background color, the padding applies. And yet, it does not push the line boxes away in the block direction. In addition, inline boxes do not respect width or height (or inline-size and block-size).

Using display: inline-block causes the inline element to contain this padding, and to accept the width and height properties. It remains an inline thing however; it continues to sit in the flow of text.

In this next CodePen I have two span elements, one regular inline and the other inline-block, so that you can see the difference in layout that this value causes.

See the Pen “Mozilla Hacks two-value display: inline-block” by rachelandrew (@rachelandrew) on CodePen.

We can then take a look at the newer value of display, flow-root. If you give an element display: flow-root it becomes a new block formatting context, becoming the root element for a new normal flow. Essentially, this causes floats to be contained. Also, margins on child elements stay inside the container rather than collapsing with the margin of the parent.

In the next CodePen, you can compare the first example without display: flow-root and the second with display: flow-root. The image in the first example pokes out of the bottom of the box, as it has been taken out of normal flow. Floated items are taken out of flow and shorten the line boxes of the content that follows. However, the actual box does not contain the element, unless that box creates a new block formatting context.

The second example does have flow-root and you can see how the box with the grey background now contains the float, leaving a gap underneath the text. If you have ever contained floats by setting overflow to auto, then you were achieving the same thing, as overflow values other than the default visible create a new block formatting context. However, there can be some additional unwanted effects such as clipping of shadows or unexpected scrollbars. Using flow-root gives you the creation of a block formatting context (BFC) without anything else happening.

See the Pen “Mozilla Hacks two-value display: flow-root” by rachelandrew (@rachelandrew) on CodePen.

The reason to highlight display: inline-block and display: flow-root is that these two things are essentially the same. The well-known value of inline-block creates an inline flow-root, which is why the new two-value version of display: inline-block is display: inline flow-root. It does exactly the same job as the flow-root value which, in a two-value world, becomes display: block flow-root.

You can see both of these values used in this last example, using Firefox 70.

See the Pen “Mozilla Hacks two-value display: inline flow-root and block flow-root” by rachelandrew (@rachelandrew) on CodePen.

Can we use these two value properties?

With support currently available only in Firefox 70, it is too early to start using these two-value properties in production. Currently, other browsers do not support them: asking for display: block flex will be treated as invalid except in Firefox. Since you can access all of the functionality using the one-value syntax, which will remain as aliases of the new syntax, there is no reason to suddenly jump to these.

However, they are important to be aware of, in terms of what they mean for CSS. They properly explain the interaction of boxes with other boxes, in terms of whether they are block or inline, plus the behavior of the children. For understanding what display is and does, I think they make for a very useful clarification. As a result, I’ve started to teach display using these two values to help explain what is going on when you change formatting contexts.

It is always exciting to see new features being implemented; I hope that other browsers will also implement these two-value versions soon. And then, in the not too distant future, we’ll be able to write CSS in the same way as we now explain it, clearly demonstrating the relationship between boxes and the behavior of their children.

Mike Hoye: 80×25

Every now and then, my brain clamps on to obscure trivia like this. It takes so much time. “Because the paper beds of banknote presses in 1860 were 14.5 inches by 16.5 inches, a movie industry cartel set a standard for theater projectors based on silent film, and two kilobytes is two kilobytes” is as far back as I have been able to push this, but let’s get started.

In August of 1861, by order of the U.S. Congress and in order to fund the Union’s ongoing war efforts against the treasonous secessionists of the South, the American Banknote Company started printing what were then called “Demand Notes”, but soon widely known as “greenbacks”.

It’s difficult to research anything about the early days of American currency on Wikipedia these days; that space has been thoroughly colonized by the goldbug/sovcit cranks. You wouldn’t notice it from a casual examination, which is of course the plan; that festering rathole is tucked away down in the references, where articles will fold a seemingly innocuous line somewhere into the middle, tagged with an exceptionally dodgy reference. You’ll learn that “the shift from demand notes to treasury notes meant they could no longer be redeemed for gold coins[1]” – which is strictly true! – but if you chase down that footnote you wind up somewhere with a name like “Lincoln’s Treason – Fiat Currency, Maritime Law And The U.S. Treasury’s Conspiracy To Enslave America”, which I promise I am only barely exaggerating about.

It’s not entirely clear if this is a deliberate exercise in coordinated crank-wank or just years of accumulated flotsam from the usual debate-club dead-enders hanging off the starboard side of the Overton window. There’s plenty of idiots out there that aren’t quite useful enough to work the k-cups at the Heritage Institute, and I guess they’re doing something with their time, but the whole thing has a certain sinister elegance to it that the Randroid crowd can’t usually muster. I’ve got my doubts either way, and I honestly don’t care to dive deep enough into that sewer to settle them. Either way, it’s always good to be reminded that the goldbug/randroid/sovcit crank spectrum shares a common ideological klancestor.

Mercifully that is not what I’m here for. I am here because these first Demand Notes, and the Treasury Notes that came afterwards, were – on average, these were imprecise times – 7-3/8” wide by 3-1/4” tall.

I haven’t been able to precisely answer the “why” of that – I believe, but do not know, that this is because of the specific dimensions of the presses they were printed on. Despite my best efforts I haven’t been able to find the exact model and specifications of that device. I’ve asked the U.S. Congressional Research Service for some help with this, but between them and the Bureau of Engraving and Printing, we haven’t been able to pin it down. From my last correspondence with them:

Unfortunately, we don’t have any materials in the collection identifying the specific presses and their dimension for early currency production. The best we can say is that the presses used to print currency in the 1860s varied in size and model. These presses went by a number of names, including hand presses, flat-bed presses, and spider presses. They also were capable of printing sheets of paper in various sizes. However, the standard size for printing securities and banknotes appears to have been 14.5 inches by 16.5 inches. We hope this bit of information helps.

… which is unfortunate, but it does give us some clarity. A 16.5″ by 14.5″ printing sheet lets you print eight 7-3/8” by 3-1/4″ notes to size, with a fraction of an inch on either side for trimming.

The answer to that question starts to matter about twenty years later on the heels of the 1880 American Census. Mandated to be performed once a decade, the United States population had grown some 30% since the previous census, and even with enormous effort the final tabulations weren’t finished until 1888, an unacceptable delay.

One of the 1880 Census’ early employees was a man named Herman Hollerith, a recent graduate of the Columbia School of Mines who’d been invited to join the Census efforts early on by one of his professors. The Census was one of the most important social and professional networking exercises of the day, and Hollerith correctly jumped at the opportunity:

The absence of a permanent institution meant the network of individuals with professional census expertise scattered widely after each census. The invitation offered a young graduate the possibility to get acquainted with various members of the network, which was soon to be dispersed across the country.

As an aside, that invitation letter is one of the most important early documents in the history of computing for lots of reasons, including this one:

The machine in that picture was the third generation of the “Hollerith Tabulator”, notable for the replaceable plugboard that made it reprogrammable. I need to find some time to dig further into this, but that might be the first multipurpose, if not “general purpose” as we’ve come to understand it, electronic computation device. This is another piece of formative tech that emerged from this era, one that led directly to the removable panels (and ultimately the general componentization) of later computing hardware.

Well before the model 3, though, was the original 1890 Hollerith Census Tabulator that relied on punchcards much like this one.

Hollerith took the inspiration for those punchcards from the “punch photographs” used by some railways at the time to make sure that tickets belonged to the passengers holding them. You can see a description of one patent for them here dating to 1888, but Hollerith relates the story from a few years earlier:

One thing that helped me along in this matter was that some time before I was traveling in the west and I had a ticket with what I think was called a punch photograph. When the ticket was first presented to a conductor he punched out a description of the individual, as light hair, dark eyes, large nose etc. So you see I only made a punch photograph of each person.

Tangentially: this is the birth of computational biometrics. And as you can see from this extract from The Railway News (Vol. XLVIII, No. 1234 , published Aug. 27, 1887) people have been concerned about harassment because of unfair assessment by the authorities from day one:

punch-photograph

After experimenting with a variety of card sizes Hollerith decided that to save on production costs he’d use the same boxes the U.S. Treasury was using for the currency of the day: the Demand Note. Punch cards stayed about that shape, punched with devices that looked a lot like this for about 20 years until Thomas Watson Sr. (IBM’s first CEO, from whom the Watson computer gets its name) asked Clair D. Lake and J. Royden Peirce to develop a new, higher data-density card format.

Tragically, this is the part where I need to admit an unfounded assertion. I’ve got data, the pictures line up and numbers work, but I don’t have a citation. I wish I did.

Take a look at “Type Design For Typewriters: Olivetti”, written by Maria Ramos Silvia. (You can see a historical talk from her on the history of typefaces here that’s also pretty great.)

Specifically, take a look on page 46 at Mikron Piccolo, Mikron Condensed. The fonts don’t precisely line up – see the different “4”, for example, when comparing it to the typesetting of IBM’s cards – but the size and spacing do. In short: a line of 80 characters, each separated by a space, is the largest round number of digits that the tightest typesetting of the day would allow to be fit on a single 7-3/8” wide card: a 20-point condensed font.

I can’t find a direct citation for this; that’s the only disconnect here. But the spacing all fits, the numbers all work, and I’d bet real money on this: that when Watson gave Lake the task of coming up with a higher information-density punch card, Lake looked around at what they already had on the shelf – a typewriter with the highest-available character density of the day, on cards they could manage with existing and widely-available tooling – and put it all together in 1928. The fact that a square hole – a radical departure from the standard circular punch – was a patentable innovation at the time was just icing on the cake.

The result of that work is something you’ll certainly recognize, the standard IBM punchcard, though of course there’s lot more to it than that. Witness the full glory of the Card Stock Acceptance Procedure, the protocol for measuring folding endurance, air resistance, smoothness and evaluating the ash content, moisture content and pH of the paper, among many other things.

At one point sales of punchcards and related tooling constituted a completely bonkers 30% of IBM’s annual profit margin, so you can understand that IBM had a lot invested in getting that consistently, precisely correct.

At around this time John Logie Baird invented the first “mechanical television”; like punchcards, the first television cameras were hand-cranked devices that relied on something called a Nipkow disk, a mechanical tool for separating images into sequential scan lines, a technique that survives in electronic form to this day. By linearizing the image signal Baird could transmit the image’s brightness levels via a simple radio signal and in 1926 he did just that, replaying that mechanically encoded signal through a CRT and becoming the inventor of broadcast television. He would go on to pioneer colour television – originally called Telechrome, a fantastic name I’m sad we didn’t keep – but that’s a different story.

Baird’s original “Televisor” showed its images on a 7:3 aspect ratio vertically oriented cathode ray tube, intended to fit the head and shoulders of a standing person, but that wouldn’t last.

For years previously, silent films had been shot on standard 35MM stock, but the addition of a physical audio track to 35MM film stock didn’t leave enough space for the visual area. So – after years of every movie studio having its own preferred aspect ratio, which required its own cameras, projectors, film stock and tools (and and and) – in 1929 the movie industry agreed to settle on the Society of Motion Picture And Television Engineers’ proposed standard of 0.8 inches by 0.6 inches, what became known as the Academy Ratio, or as we better know it today, 4:3.

Between 1932 and 1952, when widescreen for cinemas came into vogue as a differentiator from standard television, just about all the movies made in the world were shot in that aspect ratio, and just about every cathode ray tube made came in that shape, or one that could display it reliably. In 1953 studios started switching to a wider “Cinemascope”, to aggressively differentiate themselves from television, but by then television already had a large, thoroughly entrenched install base, and 4:3 remained the standard for in-home displays – and CRT manufacturers – until widescreen digital television came to market in the 1990s.

As computers moved from teleprinters – like, physical, ink-on-paper line printers – to screens, one byproduct of that standardization was that if you wanted to build a terminal, you either used that aspect ratio or you started making your own custom CRTs, a huge barrier to market entry. You can do that if you’re IBM, and you’re deeply reluctant to if you’re anyone else. So when DEC introduced their VT52 terminal, a successor to the VT50 and earlier VT05, that’s what they shipped, and with only 1Kb of display ram (one kilobyte!) it displayed only twelve rows of widely-spaced text. Math is unforgiving, and 80×12=960; even one more row breaks the bank. The VT52 and its successor the VT100, though, doubled that capacity, giving users the opulent luxury of two entire kilobytes of display memory, laid out with a font that fit nicely on that 4:3 screen. The VT100 hit the market in August of 1978, and DEC sold more than six million of them over the product’s lifespan.

You even got an extra whole line to spare! Thanks to the magic of basic arithmetic 80×25 just sneaks under that opulent 2k limit with 48 bytes to spare.

This is another point where direct connections get blurry, because 1976 to 1984 was an incredibly fertile time in the history of computing. After a brief period where competing terminal standards effectively locked software to the hardware that it shipped on, the VT100 – being the first terminal to market fully supporting the recently codified ANSI standard control and escape sequences – quickly became the de-facto standard, and soon afterwards the de-jure, codified in ANSI-X3.64/ECMA-48. CP/M, soon to be replaced with PC-DOS and then MS-DOS, came from this era, with ANSI.SYS being the way DOS programs talked to the display from DOS 2.0 through to the beginning of Windows. Then in 1983 the Apple IIe was introduced, the first Apple computer to natively support an 80×24 text display, doubling the 40×24 default of their earlier hardware. The original XTerm, first released in 1984, was also created explicitly for VT100 compatibility.

Fascinatingly, the early versions of the ECMA-48 standard specify that this standard isn’t solely meant for displays, specifying that “examples of devices conforming to this concept are: an alpha-numeric display device, a printer or a microfilm output device.”

A microfilm output device! This exercise dates to a time when microfilm output was a design constraint! I did not anticipate that cold-war spy-novel flavor while I was dredging this out, but it’s there and it’s magnificent.

It also dates to a time that the market was shifting quickly from mainframes and minicomputers to microcomputers – or, as we call them today, “computers” – as reasonably affordable desktop machines that humans might possibly afford and that companies might own a large number of, meaning this is also where the spectre of backcompat starts haunting the industry – This moment in a talk from the Microsoft developers working on the Windows Subsystem for Linux gives you a sense of the scale of that burden even today. In fact, it wasn’t until the fifth edition of ECMA-48 was published in 1991, more than a decade after the VT100 hit the market, that the formal specification for terminal behavior even admitted the possibility (Appendix F) that a terminal could be resized at all, meaning that the existing defaults were effectively graven in stone during what was otherwise one of the most fertile and formative periods in the history of computing.

As a personal aside, my two great frustrations with doing any kind of historical CS research remain the incalculable damage that academic paywalls have done to the historical record, and the relentless insistence this industry has on justifying rather than interrogating the status quo. This is how you end up on Stack Overflow spouting unresearched nonsense about how “4 pixel wide fonts are untidy-looking”. I’ve said this before, and I’ll say it again: whatever we think about ourselves as programmers and these towering logic-engines we’ve erected, we’re a lot more superstitious than we realize, and by telling and retelling these unsourced, inaccurate just-so stories without ever doing the work of finding the real truth, we’re betraying ourselves, our history and our future. But it’s pretty goddamned difficult to convince people that they should actually look things up instead of making up nonsense when actually looking things up, even for a seemingly simple question like this one, can cost somebody on the outside edge of an academic paywall hundreds or thousands of dollars.

So, as is now the usual in these things:

  • There are technical reasons,
  • There are social reasons,
  • It’s complicated, and
  • Open access publication or GTFO.

But if you ever wondered why just about every terminal in the world is eighty characters wide and twenty-five characters tall, there you go.

Mozilla GFX: Dramatically reduced power usage in Firefox 70 on macOS with Core Animation

In Firefox 70 we changed how pixels get to the screen on macOS. This allows us to do less work per frame when only small parts of the screen change. As a result, Firefox 70 drastically reduces the power usage during browsing.

Bar chart showing power consumption in Firefox 69 versus Firefox 70 across several scenarios; in every scenario, Firefox 70 does appreciably better. Power usage, in Watts, as displayed by Intel Power Gadget; lower numbers are better. (The full before/after numbers for each scenario are listed in the Questions and Answers section at the end of this post.)

In short, Firefox 70 improves power usage by 3x or more for many use cases. The larger the Firefox window and the smaller the animation, the bigger the difference. Users have reported much longer battery life, cooler machines and less fan spinning.

I’m seeing a huge improvement over here too (2015 13″ MacBook Pro with scaled resolutions on internal display as well as external 4K display). Prior to this update I literally couldn’t use Firefox because it would spin my fans way up and slow down my whole computer. Thank you, I’m very happy to finally see Core Animation being implemented.

Charlie Siegel

After so many years, I have been able to use Firefox on my Mac – I used to test every Firefox release, and nothing had worked in the past.

Vivek Kapoor

I usually try nightly builds every few weeks but end up going back to Edge Chromium or Chrome for speed and lack of heat. This makes my 2015 mbp without a dedicated dGPU become a power sipper compared to earlier builds.

atiensivu

Read on for the technical details behind these changes.

Technical Details

Let’s take a brief look at how the Firefox compositing pipeline works. There are three major steps to getting pixels on the screen:

Step 1: Firefox draws pixels into “Gecko layers”.

Step 2: The Firefox “compositor” assembles these Gecko layers to produce the rendering of the window.

Step 3: The operating system’s window manager assembles all windows on the screen to produce the screen content.

The improvements in Firefox 70 were the result of reducing the work in steps 2 and 3: In both steps, we were doing work for the entire window, even if only a small part of the window was updating.

Why was our compositor always redrawing the entire window? The main reason was the lack of convenient APIs on macOS for partial compositing.

The Firefox compositor on macOS makes use of hardware acceleration via OpenGL. Apple’s OpenGL documentation recommends the following method of getting OpenGL content to the screen: You create an NSOpenGLContext, you attach it to an NSView (using -[NSOpenGLContext setView:]), and then you render to the context’s default framebuffer, filling the entire framebuffer with fresh content. At the end of each frame, you call -[NSOpenGLContext flushBuffer]. This updates the screen with your rendered content.

The crucial limitation here is that flushBuffer gives you no way to indicate which parts of the OpenGL context have changed. This is a limitation which does not exist on Windows: On Windows, the corresponding API has full support for partial redraws.

Every Firefox window contains one OpenGL context, which covers the entire window. Firefox 69 was using the API described above. So we were always redrawing the whole window on every change, and the window manager was always copying our entire window to the screen on every change. This turned out to be a problem despite the fact that these draws were fully hardware accelerated.

Enter Core Animation

Core Animation is the name of an Apple framework which lets you create a tree of layers (CALayer). These layers usually contain textures with some pixel content. The layer tree defines the positions, sizes, and order of the layers within the window. Starting with macOS 10.14, all windows use Core Animation by default, as a way to share their rendering with the window manager.

So, does Core Animation have an API which lets us indicate which areas inside an OpenGL context have changed? No, unfortunately it does not. However, it provides a number of other useful capabilities, which are almost as good and in some cases even better.

First and foremost, Core Animation lets us share a GPU buffer with the window manager in a way that minimizes copies: We can create an IOSurface and render to it directly using OpenGL by treating it as an offscreen framebuffer, and we can assign that IOSurface to a CALayer. Then, when the window manager composites that CALayer onto the screen surface, it will read directly from our GPU buffer with no additional copies. (IOSurface is the macOS API which provides a handle to a GPU buffer that can be shared between processes. It’s worth noting that the ability to assign an IOSurface to the CALayer contents property is not properly documented. Nevertheless, all major browsers on macOS now make use of this API.)

Secondly, Core Animation lets us display OpenGL rendering in multiple places within the window at the same time and update it in a synchronized fashion. This was not possible with the old API we were using: Without Core Animation, we would have needed to create multiple NSViews, each with their own NSOpenGLContext, and then call flushBuffer on each context on every frame. There would have been no guarantee that the rendering from the different contexts would end up on the screen at the same time. But with Core Animation, we can just group updates from multiple layers into the same CATransaction, and the screen will be updated atomically.

Having multiple layers allows us to update just parts of the window: Whenever a layer is mutated in any way, the window manager will redraw an area that includes the bounds of that layer, rather than the bounds of the entire window. And we can mark individual layers as opaque or transparent. This cuts down the window manager’s work some more for areas of the window that only contain opaque layers. With the old API, if any part of our OpenGL context’s default framebuffer was transparent, we needed to make the entire OpenGL context transparent.

Lastly, Core Animation allows us to move rendered content around in the window cheaply. This is great for efficient scrolling. (Our current compositor does not yet make use of this capability, but future work in WebRender will take advantage of it.)

The Firefox Core Animation compositor

How do we make use of those capabilities in Firefox now?

The most important change is that Firefox is now in full control of its swap chain. In the past, we were asking for a double-buffered OpenGL context, and our rendering to the default framebuffer was relying on the built-in swap chain. So on every frame, we could guess that the existing framebuffer content was probably two frames old, but we could never know for sure. Because of this, we just ignored the framebuffer content and re-rendered the entire buffer. In the new world, Firefox renders to offscreen buffers of its own creation and it knows exactly which pixels of each buffer need to be updated and which pixels still contain valid content. This allows us to reduce the work in step 2 drastically: Our compositor can now finally do partial redraws. This change on its own is responsible for most of the power savings.

In addition, each Firefox window is now “tiled” into multiple square Core Animation layers whose contents are rendered separately. This cuts down on work in step 3.

And finally, Firefox windows are additionally split into transparent and opaque parts: Transparent CALayers cover the “vibrant” portions of the window, and opaque layers cover the rest of the window. This saves some more work in step 3. It also means that the window manager does not need to redraw the vibrancy blur effect unless something in the vibrant part of the window changes.

The rendering pipeline in Firefox on macOS now looks as follows:

Step 1: Firefox draws pixels into “Gecko layers”.

Step 2: For each square CALayer tile in the window, the Firefox compositor combines the relevant Gecko layers to redraw the changed parts of that CALayer.

Step 3: The operating system’s window manager assembles all updated windows and CALayers on the screen to produce the screen content.

You can use the Quartz Debug app to visualize the improvements in step 3. Using the “Flash screen updates” setting, you can see that the window manager’s repaint area in Firefox 70 (on the right) is a lot smaller when a tab is loading in the background:

And in this screenshot with the “Show opaque regions” feature, you can see that Firefox now marks most of the window as opaque (green):

Future Work

We are planning to build onto this work to improve other browsing use cases: Scrolling and full screen video can be made even more efficient by using Core Animation in smarter ways. We are targeting WebRender for these further optimizations. This will allow us to ship WebRender on macOS without a power regression.

Acknowledgements

We implemented these changes with over 100 patches distributed among 28 bugzilla bugs. Matt Woodrow reviewed the vast majority of these patches. I would like to thank everybody involved for their hard work. Thanks to Firefox contributor Mark, who identified the severity of this problem early on, provided sound evidence, and was very helpful with testing. And thanks to all the other testers that made sure this change didn’t introduce any bugs, and to everyone who followed along on Bugzilla.

During the research phase of this project, the Chrome source code and the public Chrome development notes turned out to be an invaluable resource. Chrome developers (mostly Chris Cameron) had already done the hard work of comparing the power usage of various rendering methods on macOS. Their findings accelerated our research and allowed us to implement the most efficient approach right from the start.

Questions and Answers

  • Are there similar problems on other platforms?
    • Firefox uses partial compositing on some platforms and GPU combinations, but not on all of them. Notably, partial compositing is enabled in Firefox on Windows for non-WebRender, non-Nvidia systems on reasonably recent versions of Windows, and on all systems where hardware acceleration is off. Firefox currently does not use partial compositing on Linux or Android.
  • OpenGL on macOS is deprecated. Would Metal have posed similar problems?
    • In some ways yes, in other ways no. Fundamentally, in order to get Metal content to the screen, you have to use Core Animation: you need a CAMetalLayer. However, there are no APIs for partial updates of CAMetalLayers either, so you’d need to implement a solution with smaller layers similarly to what was done here. As for Firefox, we are planning to add a Metal back-end to WebRender in the future, and stop using OpenGL on machines that support Metal.
  • Why was this only a problem now? Did power usage get worse in Firefox 57?
    • As far as we are aware, the power problem did not start with Firefox Quantum. The OpenGL compositor has always been drawing the entire window ever since Firefox 4, which was the first version of Firefox that came with hardware acceleration. We believe this problem became more serious over time simply because screen resolutions increased. Especially the switch to retina resolutions was a big jump in the number of pixels per window.
  • What do other browsers do?
    • Chrome’s compositor tries to use Core Animation as much as it can and has a fallback path for some rare unhandled cases. And Safari’s compositor is entirely Core Animation based; Safari basically skips step 2.
  • Why does hardware accelerated rendering have such a high power cost per pixel?
    • The huge degree to which these changes affected power usage surprised us. We have come up with some explanations, but this question probably deserves its own blog post. Here’s a summary: At a low level, the compositing work in step 2 and step 3 is just copying of memory using the GPU. Integrated GPUs share their L3 cache and main memory with the CPU. So they also share the memory bandwidth. Compositing is mostly memory bandwidth limited: The destination pixels have to be read, the source texture pixels have to be read, and then the destination pixel writes have to be pushed back into memory. A screen worth of pixels takes up around 28MB at the default scaled retina resolution (1680×1050@2x). This is usually too big for the L3 cache, for example the L3 cache in my machine is 8MB big. So each screenful of one layer of compositing takes up 3 * 28MB of memory bandwidth. My machine has a memory bandwidth of ~28GB/s, so each screenful of compositing takes about 3 milliseconds. We believe that the GPU runs at full frequency while it waits for memory. So you can estimate the power usage by checking how long the GPU runs each frame.
  • How does this affect WebRender’s architecture? Wasn’t the point of WebRender to redraw the entire window every frame?
    • These findings have informed substantial changes to WebRender’s architecture. WebRender is now adding support for native layers and caching, so that unnecessary redraws can be avoided. WebRender still aims to be able to redraw the entire window at full frame rate, but it now takes advantage of caching in order to reduce power usage. Being able to paint quickly allows more flexibility and fewer performance cliffs when making layerization decisions.
  • Details about the measurements in the chart:
    • These numbers were collected from test runs on a MacBook Pro (Retina, 15-inch, Early 2013) with Intel HD Graphics 4000, on macOS 10.14.6, with the default Firefox window size of a new Firefox profile, at a resolution of 1680×1050@2x, at medium display brightness. The numbers come from the PKG measurement as displayed by the Intel Power Gadget app. Idle power usage on this machine is 3.6W, so we subtracted that 3.6W baseline from the displayed values to get a sense of Firefox’s contribution. Here are the numbers used in the chart (after the subtraction of the 3.6W idle baseline):
      • Scrolling: before: 16.4W, after: 9.4W
      • Spinning Square: before: 12.8W, after: 2.9W
      • YouTube Video: before: 28.4W, after: 16.4W
      • Google Docs Idle: before: 7.4W, after: 1.6W
      • Tagesschau Audio Player: before: 24.4W, after: 4.1W
      • Loading Animation, low complexity: before: 4.7W, after: 1.8W
      • Loading Animation, medium complexity: before: 7.7W, after: 2.1W
      • Loading Animation, high complexity: before: 19.4W, after: 1.8W
    • Details on the scenarios:
    • Some users have reported even higher impacts from these changes than what our test machine showed. There seem to be large variations in power usage from compositing on different Mac models.
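
For the curious, here is the back-of-envelope version of the memory-bandwidth arithmetic from the power-cost answer above, written out as a small JavaScript sketch. The constants are the approximate figures quoted in that answer, not precise measurements:

// Rough estimate of compositing cost per screenful, using the numbers above.
const width = 1680 * 2;          // 1680×1050 at 2x (retina) scaling
const height = 1050 * 2;
const bytesPerPixel = 4;         // 8 bits per channel

const screenBytes = width * height * bytesPerPixel;  // ~28 MB per screenful
const trafficPerComposite = 3 * screenBytes;         // read destination + read source + write destination

const memoryBandwidth = 28e9;    // ~28 GB/s on the test machine
const secondsPerComposite = trafficPerComposite / memoryBandwidth;

console.log((screenBytes / 1e6).toFixed(1) + " MB per screenful");         // ~28.2 MB
console.log((secondsPerComposite * 1e3).toFixed(1) + " ms per composite"); // ~3.0 ms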

Hacks.Mozilla.Org: Firefox 70 — a bountiful release for all

Firefox 70 is released today, and includes great new features such as secure password generation with Lockwise and the new Firefox Privacy Protection Report; you can read the full details in the Firefox 70 Release Notes.

Amazing user features and protections aside, we’ve also got plenty of cool additions for developers in this release. These include DOM mutation breakpoints and inactive CSS rule indicators in the DevTools, several new CSS text properties, two-value display syntax, and JS numeric separators. In this article, we’ll take a closer look at some of the highlights!

For all the details, check out the following:

Note that the new Mozilla Developer YouTube channel will have videos covering many of the features mentioned below. Why not subscribe, so you can get them when they come out?

HTML forms and secure passwords

To enable the generation of secure passwords (as mentioned above) we’ve updated HTML <input> elements; any <input> element of type password will have an option to generate a secure password available in the context menu, which can then be stored in Lockwise.

For example, take the following:

<input type="password">

In the Firefox UI, you’ll then be able to generate a secure password like so:

Context menu showing password generation option

In addition, any type="password" field with autocomplete="new-password" set on it will have an autocomplete UI to generate a new password in-context.

Note: It is advisable to use autocomplete="new-password" on password change and registration forms as a strong signal to password managers that a field expects a new password, not an existing one.

CSS

Let’s turn our attention to the new CSS features in Firefox 70.

New options for styling underlines!

Firefox 70 introduces three new properties related to text decoration/underline:

  • text-decoration-thickness: sets the thickness of lines added via text-decoration.
  • text-underline-offset: sets the distance between a text decoration and the text it is set on. Bear in mind that this only works on underlines.
  • text-decoration-skip-ink: sets whether underlines and overlines are drawn if they cross descenders and ascenders. The default value, auto, causes them to only be drawn where they do not cross over a glyph. To allow underlines to cross glyphs, set the value to none.

So, for example, the following code:

h1 {
  text-decoration: underline red;
  text-decoration-thickness: 3px;
  text-underline-offset: 6px;
}

will give you this kind of effect:

a heading with a thick, red, offset underline that isn't drawn over the heading's descenders

Two-keyword display values

For years, the humble display property has taken a single value, whether we are talking about simple display choices like block, inline, or none, or newer display modes like flex or grid.

However, as Rachel explains, the boxes on your page have an outer display type, which determines how the box is laid out in relation to other boxes on the page, and an inner display type, which determines how the box’s children will behave. Browsers have done this for a while, but it has only been specified recently. The new set of two-keyword values allows you to explicitly specify the outer and inner display values.

In supporting browsers (just Firefox at the time of writing), the single keyword values we know and love will map to new two-keyword values, for example:

  • display: flex; is equivalent to display: block flex;
  • display: inline-flex; is equivalent to display: inline flex;

Rachel will explain this in more detail in an upcoming blog post. For now, watch this space!
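
If you want to start using the two-keyword syntax today, one option is to feature-detect it with the standard CSS.supports() API before opting in. This is just a sketch; the class name used here is hypothetical:

// Detect support for the two-keyword display syntax.
if (CSS.supports("display", "inline flex")) {
  // Hypothetical hook: let the stylesheet opt into the new syntax.
  document.documentElement.classList.add("two-value-display");
} else {
  // Fall back to the equivalent single keyword (e.g. display: inline-flex).
  console.log("Two-keyword display values not supported here.");
}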

JavaScript

Now let’s move on to the JavaScript.

Numeric separators

Numeric separators are now supported in JavaScript — underscores can now be used as separators in large numbers, so that they are more readable. For example:

let myNumber = 1_000_000_000_000;
console.log(myNumber); // Logs 1000000000000

let myHex = 0xA0_B0_C0;
console.log(myHex); // Logs 10531008

Numeric separators are usable with any kind of numeric literal, including BigInts.
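
For example, the separators work just as well in a BigInt literal (a minimal illustration):

// Numeric separators in a BigInt literal.
const bigValue = 1_000_000_000_000n;
console.log(bigValue); // Logs 1000000000000n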

Intl improvements

We’ve improved JavaScript i18n (internationalization), starting with the implementation of the Intl.RelativeTimeFormat.formatToParts() method. This is a special version of Intl.RelativeTimeFormat.format() that returns an array of objects, each one representing a part of the value, rather than returning a string of the localized time value.

const rtf = new Intl.RelativeTimeFormat("en", { numeric: "auto" });

rtf.format(-2, "day"); // Returns "2 days ago"

rtf.formatToParts(-2, "day");
/*
Returns

[
  { type: "integer", value: "2", unit: "day" },
  { type: "literal", value: " days ago" }
]
*/

This is useful because it allows you to easily isolate the numeric value from the localized string and handle it separately.
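
As a small sketch of what that might look like, starting from the array returned above:

const rtf = new Intl.RelativeTimeFormat("en", { numeric: "auto" });
const parts = rtf.formatToParts(-2, "day");

// Grab just the numeric portion, e.g. to style it differently from the surrounding text.
const numericPart = parts.find(part => part.type === "integer");
console.log(numericPart.value); // "2"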

In addition, Intl.NumberFormat.format() and Intl.NumberFormat.formatToParts() now accept BigInt values.
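
A quick sketch of what that looks like (the exact grouping and separators depend on the locale):

const nf = new Intl.NumberFormat("en-US");

// format() and formatToParts() now accept BigInt values as well as Numbers.
console.log(nf.format(1_000_000_000_000n));        // "1,000,000,000,000"
console.log(nf.formatToParts(1_000_000_000_000n)); // [{ type: "integer", value: "1" }, { type: "group", value: "," }, ...]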

Performance improvements

JavaScript has become generally faster thanks to our new baseline interpreter! You can learn more by reading The Baseline Interpreter: a faster JS interpreter in Firefox 70.

Developer tools

There is a whole host of awesome new things going on with Firefox 70 developer tools. Let’s find out what they are!

Inactive CSS rules indicator in rules panel

Inactive CSS properties in the Rules view of the Page Inspector are now colored gray and have an information icon displayed next to them. The properties are technically valid, but won’t have any effect on the element. When you hover over the info icon, you’ll see a useful message about why the CSS is not being applied, including a hint about how to fix the problem and a “Learn more” link for more information.

For example, in this case our grid-auto-columns property is inactive because we are trying to apply it to an element that is not a grid container:

a warning message saying that grid-auto-columns isn't being applied because the element doesn't have display: grid applied

And in this case, our flex property is inactive because we are trying to apply it to an element that is not a flex item (its parent is not a flex container):

A warning message saying that the flex property is inactive because the element's parent is not a flex container

To fix this second issue, we can go into the inspector, find the element’s parent (a <div> in this case), and apply display: flex; to it:

using the rules inspector in firefox to fix the problem with display: flex

Our fix is shown in the Changes panel, and from there can be copied and put into our code base. Sorted!

the changes panel, which can be used to copy the code changes you made, and then paste them back into your codebase

Pause on DOM Mutation in Debugger

In complex dynamic web apps it is sometimes hard to tell which script changed the page and caused the issue when you run into a problem. DOM Mutation Breakpoints (aka DOM Change Breakpoints) let you pause scripts that add, remove, or change specific elements.

Try inspecting any element on your page. When you right-click (or Ctrl + click) it in the HTML inspector, you’ll see a new context menu item “Break on…”, with the following sub-items:

  • Subtree modification
  • Attribute modification
  • Node removal

Once a DOM mutation breakpoint is set, you’ll see it listed under “DOM Mutation Breakpoints” in the right-hand pane of the Debugger; this is also where you’ll see breaks reported.

the debugger interface showing paused dom mutation code after a DOM mutation breakpoint was hit

For more details, see Break on DOM mutation. If you find them useful for your work, you might find Event Listener Breakpoints and XHR Breakpoints useful too!

Color contrast information in the color picker!

In the CSS Rules view, you can click foreground colors with the color picker to determine if their contrast with the background color meets accessibility guidelines.

a color picker showing color contrast information between the foreground and background colors

Accessibility inspector: keyboard checks

The Accessibility inspector‘s Check for issues dropdown now includes keyboard accessibility checks:

The firefox accessibility inspector, showing accessibility check options; contrast, keyboard, or text labels

Selecting this option causes Firefox to go through each node in the accessibility tree and highlight all that have a keyboard accessibility issue:

the accessibility inspector showing the results of keyboard checks - several messages highlighting where and what the problems are

Hovering over or clicking each one will reveal information about what the issue is, along with a “Learn more” link for more details on how to fix it.

Try it now, on a web page near you!

Web socket inspector

In Firefox DevEdition, the Network monitor now has a new “Messages” panel, which appears when you are monitoring a web socket connection (i.e. a 101 response). This can be used to inspect web socket frames sent and received through the connection.

Read Firefox’s New WebSocket Inspector to find out more. Note that this functionality was originally supposed to be in the Firefox 70 general release, but we had a few more bugs to iron out, so expect it in Firefox 71! For now, you can use it in DevEdition, and please share your constructive feedback!

The post Firefox 70 — a bountiful release for all appeared first on Mozilla Hacks - the Web developer blog.

About:Community: Firefox 70 new contributors

With the release of Firefox 70, we are pleased to welcome the 45 developers who contributed their first code change to Firefox in this release, 32 of whom were brand new volunteers! Please join us in thanking each of these diligent and enthusiastic individuals, and take a look at their contributions:

The Mozilla Blog: Latest Firefox Brings Privacy Protections Front and Center Letting You Track the Trackers

Our push this year has been building privacy-centric features in our products that are on by default. With this move, we’re taking the guesswork out of how to give yourself more privacy online, thanks to always-on features like blocking third-party tracking cookies and cryptominers, also known as Enhanced Tracking Protection. Since July 2 we’ve blocked more than 450 billion tracking requests that attempt to follow you around the web.

450 billion trackers have been blocked with Enhanced Tracking Protection

Much of this work has been behind the scenes — practically invisible to you — making it so that whenever you use Firefox, the privacy protections are working for you in the background.

But now with growing threats to your privacy, it’s clear that you need more visibility into how you’re being tracked online so you can better combat it. That’s why today we’re introducing a new feature that offers you a free report outlining the number of third-party and social media trackers blocked automatically by the Firefox browser with Enhanced Tracking Protection.

In some ways a browser is like a car, where the engine drives you to the places you want to go and a dashboard tells you the basics like how fast you’re going or whether you need gas. Nowadays, most cars go beyond the basics, and dashboards tell you much more than ever, like when you need to brake or when a car is in your blind spot, essentially taking extra steps to protect you. Similar to a car’s dashboard, we created an easy-to-view report within Firefox that shows you the extra steps it takes to protect you when you’re online, so you can enjoy your time without worrying about who’s tracking you, potentially using your data or browsing history without your knowledge.

Here’s how Firefox’s Privacy Protections report works for you:



The Firefox Privacy Protections report includes:

    • See how many times Enhanced Tracking Protection blocks an attempt to tag you with cookies –  One of the many unseen ways that Firefox keeps you safe is to block third-party tracking cookies. It’s part of our Enhanced Tracking Protection that we launched by default in September. It prevents third-party trackers from building a profile of you based on your online activity. Now, you’ll see the number of cross-site and social media trackers, fingerprinters and cryptominers we blocked on your behalf.
    • Keep up to date on data breaches with Firefox Monitor –  Data breaches are not uncommon, so it’s more important than ever to stay on top of your email accounts and passwords. Now, you can view at a glance a summary of the number of unsafe passwords that may have been used in a breach, so that you can take action to update and change those passwords.
    • Manage your passwords and synced devices with Firefox Lockwise – Now, you can get a brief look at the number of passwords you have safely stored with Firefox Lockwise. We’ve also added a button you can click to view and update your logins. You’ll also have the ability to quickly view and manage how many devices you are syncing and sharing your passwords with.

“The industry uses dark patterns to push people to “consent” to an unimaginable amount of data collection. These interfaces are designed to push you to allow tracking your behavior as you browse the web,” said Selena Deckelmann, Senior Director of Firefox Engineering at Mozilla. “Firefox’s Data Privacy Principles are concise and clear. We respect your privacy, time, and attention. You deserve better. For Firefox, this is business as usual. And we extend this philosophy to how we protect you from others online.”

Stay up-to-date on Your Personalized Privacy Protections

There are a couple of ways to access your personalized Firefox privacy protections. First, when you visit a site and see a shield icon in the address bar, it means Firefox is blocking trackers on that site; in total, Firefox blocks 10 billion (that’s billion with a B) trackers every day, stopping thousands of companies from viewing your online activity. Now, when you click on the shield icon, then click on Show Report, you’ll see a complete overview.

Click on the shield icon, then click on Show Report, to see a complete overview

 

The number of cross-site and social media trackers, fingerprinters and cryptominers

 

A complete overview of your Privacy Protections

Another way to access the report is to visit here. The Privacy Protections section of your report is based on your online activity over the past week.

Keep your passwords safe with Lockwise’s new password generator and improved management

As a further demonstration of our commitment to your privacy and security, we’ve built visible consumer-facing products like Monitor and Lockwise, available to you when you sign up for a Firefox account. Equipped with this information, you can take full advantage of the products and services that are also part of this latest release.

Last year, Firefox Lockwise (previously Lockbox) launched as a Test Pilot experiment to safely store and take your passwords everywhere. Since then, we’ve incorporated feedback from users, for example by launching it on Android in addition to desktop and iOS. Today, we’ve added two of the most requested features for Lockwise, now available in Firefox: a password generator with improved management, plus integrated updates on breached accounts with Firefox Monitor.

Take a look at the improved Lockwise dashboard:



The newest Firefox release enables users to generate and manage passwords with Firefox Lockwise and stay informed about data breaches with Firefox Monitor, both of which are now even better integrated with the Firefox browser and its features.

  • Generate new, secure passwords – With multiple accounts like email, banks, retailers or delivery services, it can be tough to come up with creative and secure passwords rather than the typical 123456 or your favorite sports team, which are not secure at all. Now, when you create an account you’ll be auto-prompted to let Lockwise generate a safe password, which you can save directly in the Firefox browser. For current accounts, you can right click in the password field to access securely generated passwords through the fill option. All securely generated passwords are auto-saved to your Firefox Lockwise account.
  • Improved dashboard to manage your passwords – To access the new Lockwise dashboard, click on the main menu button located on the far right of your toolbar. It looks like ☰ with three parallel lines. From there, click on “Logins and Passwords” and you’ll see the new and improved Firefox Lockwise dashboard open in a new tab, which allows you to search, sort, create, update, delete and manage the passwords to all your accounts. Plus, you’ll see a notification from Firefox Monitor if the account may have been involved in a data breach.
  • Take your passwords with you everywhere – Use saved passwords in the Firefox browser on any device by downloading Firefox Lockwise for Android and iOS. With a Firefox Account, you can sync all your logins between Firefox browsers and the Firefox Lockwise apps to auto-fill and safely access your passwords across devices whenever you are on the go.

Preventing additional data leaks in today’s Firefox release

We’re always looking for ways to protect your privacy, and as you know there are multiple ways that companies can access your data. In today’s Firefox release we’re stripping path information from the HTTP referrer sent to third-party trackers, a protection initially launched in Private Browsing mode in January 2018, to prevent additional data leaks. The HTTP referrer is data sent in an HTTP connection, and can be leveraged to track you from site to site. Additionally, companies can collect and sell this data to third parties and use it to build user profiles.
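
To illustrate the effect (this is only a sketch of the trimming behavior, not Firefox’s actual implementation, and the URL is made up): a referrer that previously exposed the full page you were on is cut down to just the origin before it is sent to a known third-party tracker.

// Illustration only: trimming a referrer down to its origin.
function trimReferrerToOrigin(referrer) {
  return new URL(referrer).origin + "/";
}

// Before: the tracker could see exactly which page you were on.
const fullReferrer = "https://example.com/account/orders?id=12345";

// After: only the site itself is revealed.
console.log(trimReferrerToOrigin(fullReferrer)); // "https://example.com/"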

To see what else is new or what we’ve changed in today’s release, you can check out our release notes.

Check out and download the latest version of Firefox available here.

 

The post Latest Firefox Brings Privacy Protections Front and Center Letting You Track the Trackers appeared first on The Mozilla Blog.

The Mozilla Blog: The Illusion of choice and the need for default privacy protection

Since July 2019, Firefox’s Enhanced Tracking Protection has blocked over 450 billion third-party tracking requests from exploiting user data for profit. This shocking number reveals the sheer scale of online tracking, and it highlights why the current advertising industry push on transparency, choice and “consent” as a solution to online privacy simply won’t work. The solutions put forth by other tech companies and the ad industry provide the illusion of choice. Let’s step through the reasons why that is and why we ultimately felt it necessary to enable Enhanced Tracking Protection by default.

A few months ago, we began to enable Enhanced Tracking Protection, which protects Firefox users from cookie-based tracking by default. We did this for a few reasons:

1. People do not expect their data to be sent to, and collected by, third-party companies as they browse the web. For example, 72% of people do not expect that Facebook uses “Like” buttons to collect data about a person’s online activity on websites outside of Facebook (when the buttons are not actually clicked). Many in the ad industry will point to conversion rates for behaviorally targeted ads as evidence for consumers being okay with the privacy tradeoff, but people don’t know they are actually making such a tradeoff. And even if they were aware, we shouldn’t expect them to have the information necessary to evaluate the tradeoff. When people are asked explicitly about it, they are generally opposed. 68% of people believe that using online tracking to tailor advertisements is unethical.

2. The scale of the problem is immense. We currently see about 175 tracking domains being blocked per Firefox client per day. This has very quickly resulted in over 450B trackers being blocked in total since July. You can see the numbers accelerate at the beginning of September, after we enabled Enhanced Tracking Protection for all users.

Estimate of Tracking Requests Blocked by Firefox with Enhanced Tracking Protection

It should be clear from these numbers that users would be quickly overwhelmed if they were required to make individual choices about data sharing with this many companies.

3. The industry uses dark patterns to push people to “consent” via cookie/consent banners.

We’ve all had to click through consent banners every time we visit a new site. Let’s walk through the dark patterns in one large tech company’s consent management flow as an example, keeping in mind that this experience is not unique — you can find plenty of other examples just like this one. This particular consent management flow shows how these interfaces are often designed counterintuitively so that users likely don’t think they are agreeing to be tracked. We’ve redacted the company name in the example to focus on the content of the experience.

To start off, we’re presented with a fairly standard consent prompt, which is meant to allow the site visitor to make an informed choice about how their data can be collected and used. However, note that clicking anywhere on the page provides “consent”. It only gets worse from here…

Consent prompt on large tech company website

If the user manages to click “Manage Ad Cookies” before clicking elsewhere on the page, they are given the options to “Save Settings” or “Allow All”. According to the highlighted text, clicking either of these buttons at this point provides consent to all partners to collect user data.  Users are not given the option to “Disable All”.

Confusing consent dialog

Instead, if a user wants to manage consent they have to click the link labeled “view vendor consent”. Wording matters here! If a person is skimming through that dialog, they’ll assume that link is informational. This consent flow is constructed to make the cognitive load required to protect oneself as high as possible, while providing ample opportunity to “take the easy way out” and allow all tracking.

Finally, users who make it to the consent management section of the flow are presented with 415 individual sliders. The website provides a global toggle, but let’s assume a user actually wants to make informed choices about each partner. After all, that is the point, right?

Confusing consent mechanism

Eight of the 415 privacy policies linked from the consent management page are inaccessible. They throw certificate errors, fail to resolve, or time out.

Error loading privacy policies for 3rd party partners

The 407 privacy policies that load correctly total over 1.3 million words. That will take the average adult over 86 hours — two solid work weeks — just to read. That doesn’t even consider the time needed to reflect on that information and make an informed choice.
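
As a sanity check on that figure, here is the arithmetic, assuming an average adult reading speed of roughly 250 words per minute (the reading speed is an assumption on our part, not a number taken from the policies themselves):

// Back-of-envelope reading-time estimate for the 407 readable privacy policies.
const totalWords = 1_300_000;   // combined word count quoted above
const wordsPerMinute = 250;     // assumed average adult reading speed

const hours = totalWords / wordsPerMinute / 60;
console.log(hours.toFixed(1) + " hours");                     // ~86.7 hours
console.log((hours / 40).toFixed(1) + " 40-hour work weeks"); // ~2.2 weeks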

Proposals for “transparency and consent” as a solution to rampant web tracking should be seen for what they really are: proposals to continue business as usual.

Thankfully Firefox blocks almost all of the third-party cookies loaded on the page by default, despite the deceptive methods used to get the visitor to “consent”.

Tracking Cookies Blocked in Firefox by Default

While it is easy to focus on this particular example, this experience is far from unique. The sheer volume of tracker blocking that we see with Firefox’s Enhanced Tracking Protection (around 175 blocks per client per day) confirms that the average individual would never be able to make informed choices about whether or not individual companies can collect their data. This also highlights how tech companies need to do more if they are really serious about privacy, rather than push the burden onto their customers.

Firefox already blocks tracking by default. Today, a new version of Firefox is being released that makes it clear when tracking attempts are happening without your knowledge, and highlights how Firefox is keeping you safe.

We invite you to try and download the Firefox browser here.

 

The post The Illusion of choice and the need for default privacy protection appeared first on The Mozilla Blog.

The Firefox Frontier: Firefox privacy protections reveal who’s trying to track you

You could say that a web browser is kind of like a car. The engine drives you where you want to go, and a dashboard tells you what’s happening under … Read more

The post Firefox privacy protections reveal who’s trying to track you appeared first on The Firefox Frontier.

The Firefox Frontier: New password security features come to Firefox with Lockwise

Remembering unique, strong passwords for all your accounts and apps is a challenge, but it’s also essential for good digital security. We’re making that easier by helping you generate and … Read more

The post New password security features come to Firefox with Lockwise appeared first on The Firefox Frontier.

The Firefox Frontier: No Judgment Digital Definitions: What is a web tracker?

Let’s say you’re on an outdoor pizza oven website dreaming about someday owning one. Mmm pizza. Next you switch gears and visit a fitness site; lo and behold an ad … Read more

The post No Judgment Digital Definitions: What is a web tracker? appeared first on The Firefox Frontier.

The Firefox Frontier: No-judgment digital definitions: What are social media trackers?

Let’s be honest. We’re usually pretty particular about what we post on social media, right? When we’re on vacation, we’ll post photos on Facebook of a beautiful sunset… and crop … Read more

The post No-judgment digital definitions: What are social media trackers? appeared first on The Firefox Frontier.

This Week In Rust: This Week in Rust 309

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is grubbnet, a TCP client/server library for networked applications and games.

Thanks to Dooskington for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

353 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs

No RFCs are currently in final comment period.

Tracking Issues & PRs

New RFCs

Upcoming Events

Africa
Asia Pacific
Europe
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Rust helped me grasp concepts I should have known when writing C++

Alexander Clarke on the Microsoft Security Response Center blog.

Thanks to mmmmib for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nasa42, llogiq, and Flavsditz.

Discuss on r/rust.

The Mozilla Blog: Nate Weiner, formerly CEO of Pocket, to take expanded role at Mozilla focused on New Markets

Nate Weiner, founder and CEO of Pocket, has been promoted to SVP of a new product organization, New Markets, at Mozilla. The New Markets organization will be working to expand and scale Mozilla’s product portfolio alongside the Firefox and Emerging Technologies teams. The Pocket and Emerging Markets teams will live within the New Markets organization.

As part of this change, Pocket Chief Technology Officer and Head of Operations Matt Koidin will step into a new role taking over day-to-day leadership as VP and General Manager of Pocket, continuing to report to Nate. Acquired by Mozilla in 2017, Pocket is a platform used by millions of people worldwide to discover, capture, and spend time with the stories that fascinate and fuel them.

The post Nate Weiner, formerly CEO of Pocket, to take expanded role at Mozilla focused on New Markets appeared first on The Mozilla Blog.

Hacks.Mozilla.Org: Quickly Alter Typography with Firefox Font Editor

Fonts and typography are at the heart of design on the web. We now have powerful tools to inspect, understand, and design our typography in the browser. For instance, have you ever landed on a web page and wondered what fonts are being used? Then, have you asked yourself where those fonts come from or why a particular font isn’t loading?

The font editor in Firefox provides answers and insights. You gain the ability to make font changes directly, with a live preview. As for me, I use the editor for understanding variable fonts, how they work, and the options they expose.

Get started with a quick overview by Jen Simmons:

I’ve been using Firefox font tools since they were first released. And yet, I wasn’t aware of all the things the editor can do. Start using the built-in unit-conversion feature, which quickly exposes relative units like em and rem, as well as percentages, along with computed px values. Interested in more detail? Check out the follow-up video:

Enjoy styling type on the web!

The post Quickly Alter Typography with Firefox Font Editor appeared first on Mozilla Hacks - the Web developer blog.

Chris H-C: Four-Year Moziversary

Wowee what a year that was. And I’m pretty sure the year to come will be even more so.

We gained two new team members, Travis and Beatriz. And with Georg taking a short break, we’ve all had more to do than usual. Glean’s really been working out well, though I’ve only had the pleasure of working on it a little bit.

Instead I’ve been adding fun new features to Firefox Desktop like Origin Telemetry. I also gave a talk at a conference about Data and Responsibility. Last December’s All Hands returned us to Orlando, and June brought me to Whistler for the first time. We held a Virtual Work Week (or “vorkweek”) a couple of weeks ago when we couldn’t find a time and the budget to meet in person, and spent it planning out how we’ll bring Glean to Firefox Desktop with Project FOG. First with a Prototype (FOGotype) by end of year. And then 2020 will be the year of Glean on the Desktop.

Blogging-wise I’ve slowed down quite a lot. 12 posts so far this calendar year is much lower than previous years’ 25+. The velocity I’d kept up by keeping tabs on the Ontario Provincial Legislature and pontificating about video games I’d played died in the face of mounting work pressures. Instead of spending my off time writing non-mozilla things I spent a lot of it reading instead (as my goodreads account can attest).

But now that I’ve written this one, maybe I’ll write more here.

Resolution for the coming year? More blogging. Continued improvement. Put Glean on Firefox. That is all.